pax_global_header00006660000000000000000000000064122736747750014536gustar00rootroot0000000000000052 comment=d631ab0521de94a0d7df9fd103c6a0a3fe1f94bc cobbler-2.4.1/000077500000000000000000000000001227367477500131525ustar00rootroot00000000000000cobbler-2.4.1/.gitignore000066400000000000000000000006311227367477500151420ustar00rootroot00000000000000*.pyc *.swp *.tmp *~ *.class *~ *.class dist rpm-build build MANIFEST TAGS .project .pydevproject .coverage .metadata # docs - ignore pod output docs/*.gz docs/*.html # Build output cobbler/webui/master.py config/version # Autogenerated cobbler4j classes, should never be checked in: cobbler4j/src/main/java/org/fedorahosted/cobbler/autogen cobbler4j/user.properties cobbler4j/target /cobbler-2.4.0.tar.gz cobbler-2.4.1/AUTHORS000066400000000000000000000102071227367477500142220ustar00rootroot00000000000000Cobbler was originally created by: Michael DeHaan And is maintained by: Michael DeHaan Scott Henson James Cammarata Jorgen Maas With patches and other contributions from (alphabetically, by last name): Partha Aji Anton Arapov Niels Basjes Tim Bielawa Joseph Boyer Jr. Andrew Brown David Brown James Bowes James Cammarata Jasper Capel Flavio Castelli C. Daniel Chase Gordon Child Cristian Ciupitu Carsten Clasohm Ian Ward Comfort Rob Crittenden Michael DeHaan Scott Dodson Charles Duffy Máirín Duffy John Eckersberg Lee Faus Leonid Flaks Eoghan Gaffney Uwe Gansert Dan Guernsey Marcel Haerry Peter Halliday Niels Hasse Dave Hatton Scott Henson Garrett Holmstrom Stephan Huiser Tru Huynh Matt Hyclak Kelsey Hightower Mihai Ibanescu Shuichi Ihara Pablo Iranzo Gómez Christopher Johnston Brian Kearney Henry Kemp Ruben Kerkhof Tony Kew Don Khan Douglas Kilpatrick James Laska Vito Laurenza Hans Lellelid Adrian Likins Dominic LoBue David Lutterkort Lester M. Jorgen Maas Bryan Mason Mike McCune Mark McLoughlin Jeroen van Meeuwen Jim Meyering Sean Millichamp Perry Myers Jack Neely Javier Palacious Bill Peck Lassi Pölönen Jasper Poppe Dan Radez Ben Riggs Gavin Romig-Koch Jeremy Rosengren Adam Rosenwald Jonathan Sabo Christophe Sahut Satoru Satoh Brandon Sawyers Jeff Schroeder Scott Seago Justin Sherill Chuck Short James Shubin Anderson Silva Joe Smith Michael Stahnke Dylan Swift Al Tobey Thomas Uhde Matthias Vandegaer Ronald van den Blink Tim Verhoeven John L. Villalovos Peter Vreman Adam Wolf Simon Woolsgrove Todd Zullinger [...send patches to get your name here...] cobbler-2.4.1/CHANGELOG000066400000000000000000003316371227367477500144010ustar00rootroot00000000000000Cobbler CHANGELOG - Feb 10 2014 - 2.4.1 - Nov 15 2011 - 2.2.2 - Fixed indentation on closing tr tag (gregswift@gmail.com) - Added leader column to the non-generic tables so that all tables have the same layout. It leaves room for a checkbox and multiple selects i nthese other tables as well. (gregswift@gmail.com) - Added action class to the event log link to bring it inline with other table functions (gregswift@gmail.com) - buildiso bugfix: overriding dns nameservers via the dns kopt now works. reported by Simon Woolsgrove (jorgen.maas@gmail.com) - Fix for pxegen, where an image without a distro could cause a stack dump on cobbler sync (jimi@sngx.net) - Added initial support for specifying the on-disk format of virtual disks, currently supported for QEMU only when using koan (jimi@sngx.net) - Add fedora16, rawhide, opensuse 11.2, 11.3, 11.4 and 12.1 to codes.py This should also fix ticket #611 (jorgen.maas@gmail.com) - Use VALID_OS_VERSIONS from codes.py in the redhat importer. 
(jorgen.maas@gmail.com) - Cleanup: use utils.subprocess_call in services.py (jorgen.maas@gmail.com) - Cleanup: use utils.subprocess_call in remote.py. (jorgen.maas@gmail.com) - Cleanup: use utils.subprocess_call in scm_track.py. Also document that 'hg' is a valid option in the settings file. (jorgen.maas@gmail.com) - Dont import the sub_process module when it's not needed. (jorgen.maas@gmail.com) - Fixes to import_tree() to actually copy files to a safe place when --available-as is specified. Also some cleanup to the debian/ubuntu import module for when --available-as is specified. (jimi@sngx.net) - Modification to import processes so that rsync:// works as a path. These changes should also correct the incorrect linking issue where the link created in webdir/links/ pointed at a directory in ks_mirror without the arch specified, resulting in a broken link if --arch was specified on the command line Also removed the .old import modules for debian/ubuntu, which were replaced with the unified manage_import_debian_ubuntu.py (jimi@sngx.net) - cleanup: use codes.VALID_OS_VERSIONS in the freebsd importer (jorgen.maas@gmail.com) - cleanup: use codes.VALID_OS_VERSIONS in the debian/ubuntu importer (jorgen.maas@gmail.com) - Bugfix: add the /var/www/cobbler/pub directory to setup.py. Calling buildiso from cobbler-web now works as expected. (jorgen.maas@gmail.com) - BUGFIX: patch koan (xencreate) to correct the same issue that was broken for vmware regarding qemu_net_type (jimi@sngx.net) - BUGFIX: fixed issue with saving objects in the webgui failing when it was the first of that object type saved. (jimi@sngx.net) - Minor fix to the remote version to use the nicer extended version available (jimi@sngx.net) - Fix a bug in buildiso when duplicate kopt keys are used. Reported and tested by Simon Woolsgrove (jorgen.maas@gmail.com) - Fix for koan, where vmwcreate.py was not updated to accept the network type, causing failures. (jimi@sngx.net) - Added a %post section for the cobbler-web package, which replaces the SECRET_KEY field in the Django settings.py with a random string (jimi@sngx.net) - BUGFIX: added sign_puppet_certs_automatically to settings.py. The fact that this was missing was causing failures in the the pre/post puppet install modules. (jimi@sngx.net) - set the auto-boot option for a virtual machine (ug@suse.de) - Correction for koan using the incorrect default port for connecting to cobblerd (jimi@sngx.net) - config/settings: add "manage_tftpd: 1" (default setting) (cristian.ciupitu@yahoo.com) - Oct 5 2011 - 2.2.0 - Remove the version (shenson@redhat.com) - New upstream 2.2.0 release (shenson@redhat.com) - Add networking snippet for SuSE systems. (jorgen.maas@gmail.com) - Add a /etc/hosts snippet for SuSE systems. (jorgen.maas@gmail.com) - Add a proxy snippet for SuSE systems. (jorgen.maas@gmail.com) - Buildiso: make use of the proxy field (SuSE, Debian/Ubuntu). (jorgen.maas@gmail.com) - Rename buildiso.header to buildiso.template for consistency. Also restore the local LABEL in the template. (jorgen.maas@gmail.com) - Bugfix: uppercase macaddresses used in buildiso netdevice= keyword cause the autoyast installer to not setup the network and thus fail. (jorgen.maas@gmail.com) - Buildiso: minor cleanup diff. (jorgen.maas@gmail.com) - Buildiso: behaviour changed after feedback from the community. (jorgen.maas@gmail.com) - Build standalone ISO from the webinterface. (jorgen.maas@gmail.com) - Fix standalone ISO building for SuSE, Debian and Ubuntu. 
(jorgen.maas@gmail.com) - add proxy field to field_info.py (jorgen.maas@gmail.com) - Remove FreeBSD from the unix breed as it has it's own now. Also, add freebsd7 as it is supported until feb 2013. Minor version numbers don't make sense, also removed. (jorgen.maas@gmail.com) - Add a proxy field to profile and system objects. This is useful for environments where systems are not allowed to make direct connections to the cobbler/repo servers. (jorgen.maas@gmail.com) - Introduce a "status" field to system objects. Useful in environments where DTAP is required, the possible values for this field are: development, testing, acceptance, production (jorgen.maas@gmail.com) - Buildiso: only process profiles for selected systems. (jorgen.maas@gmail.com) - Buildiso: add batch action to build an iso for selected profiles. (jorgen.maas@gmail.com) - Buildiso: use management interface feature. (jorgen.maas@gmail.com) - Buildiso: get rid of some code duplication (ISO header). (jorgen.maas@gmail.com) - Buildiso: add interface to macaddr resolution. (jorgen.maas@gmail.com) - Buildiso: add Debian and Ubuntu support. (jorgen.maas@gmail.com) - Buildiso: select systems from the webinterface. (jorgen.maas@gmail.com) - Fix an exception when buildiso is called from the webinterface. (jorgen.maas@gmail.com) - fix power_virsh template to check dom status before executing command. (bpeck@redhat.com) - if hostname is not resolvable do not fail and use that hostname (msuchy@redhat.com) - Removed action_import module and references to it in code to prevent future confusion. (jimi@sngx.net) - Fixing redirects after a failed token validation. You should now be redirected back to the page you were viewing after having to log back in due to a forced login. (jimi@sngx.net) - Use port to access cobbler (peter.vreman@acision.com) - Stripping "g" from vgs output case-insensitive runs faster (mmello@redhat.com) - Adding ability to create new sub-directories when saving snippets. Addresses trac #634 - save new snippet fails on non existing subdir (jimi@sngx.net) - Fix traceback when executing "cobbler system reboot" with no system name specified Trac ticket #578 - missing check for name option with system reboot (jimi@sngx.net) - bind zone template writing (jcallaway@squarespace.com) - Removing the duplicate lines from importing re module (mmello@redhat.com) - Merge remote-tracking branch 'jimi1283/bridge-interface' (shenson@redhat.com) - Modification to allow DEPRECATED options to be added as options to optparse so they work as aliases (jimi@sngx.net) - Re-adding the ability to generate a random mac from the webui. Trac #543 (Generate random mac missing from 2.x webui) (jimi@sngx.net) - Merge remote-tracking branch 'jsabo/fbsdreplication' (shenson@redhat.com) - Tim Verhoeven (Tue. 
08:35) (Cobbler attachment) Subject: [PATCH] Add support to koan to select type of network device to emulate To: cobbler development list Date: Tue, 2 Aug 2011 14:35:21 +0200 (shenson@redhat.com) - Hello, (shenson@redhat.com) - scm_track: Add --all to git add options to handle deletions (tmz@pobox.com) - Moved HEADER heredoc from action_buildiso.py to /etc/cobbler/iso/buildiso.header (gbailey@terremark.com) - Enable replication for FreeBSD (jsabo@verisign.com) - Merge branch 'master' into bridge-interface (jimi@sngx.net) - Remove json settings from local_get_cobbler_xmlrpc_url() (jsabo@verisign.com) - 1) Moving --subnet field to --netmask 2) Created DEPRECATED_FIELDS structure in field_info.py to deal with moves like this * also applies to the bonding->interface_type move for bridged interface support (jimi@sngx.net) - Merge remote-tracking branch 'jimi1283/bridge-interface' (shenson@redhat.com) - Fixing up some serializer module stuff: * detecting module load errors when trying to deserialize collections * added a what() function to all the serializer modules for ID purposes * error detection for mongo stuff, including pymongo import problems as well as connection issues (jimi@sngx.net) - Cleanup of bonding stuff in all files, including webui and koan. Additional cleanup in the network config scripts, and re-added the modprobe.conf renaming code to the post install network config. (jimi@sngx.net) - Initial rework to allow bridge/bridge slave interfaces Added static route configuration to pre_install_network_config Major cleanup/reworking of post_install_network_config script (jimi@sngx.net) - Fix for bad commit of some json settings test (jimi@sngx.net) - Merge remote-tracking branch 'jsabo/fbsdimport' (shenson@redhat.com) - Adding initial support for FreeBSD media importing (jsabo@verisign.com) - Setting TIME_ZONE to None in web/settings.py causes a 500 error on a RHEL5 system with python 2.4 and django 1.1. Commenting out the config line has the same effect as setting it to None, and prevents the 500. (jimi@sngx.net) - Fixes for importing RHEL6: * path_tail() was previously moved to utils, a couple places in the import modules still used self.path_tail instead of utils.path_tail, causing a stack dump * Fixed an issue in utils.path_tail(), which was using self. still from when it was a member of the import class * When mirror name was set on import and using --available-as, it was appending a lot of junk instead of just using the specified mirror name (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Fix a quick error (shenson@redhat.com) - Set the tftpboot dir for rhel6 hosts (jsabo@verisign.com) - Fixed a typo (jorgen.maas@gmail.com) - Added an extra field in the system/interface item. The field is called "management" and should be used to identify the management interface, this could be useful information for multihomed systems. (jorgen.maas@gmail.com) - In the event log view the data/time field got wrapped which is very annoying. Fast fix for now, i'm pretty sure there are better ways to do this. (jorgen.maas@gmail.com) - Event log soring on date reverted, let's sort on id instead. Reverse over events in the template. Convert gmtime in the template to localtime. 
(jorgen.maas@gmail.com) - Sort the event log by date/time (jorgen.maas@gmail.com) - Remove some unsupported OS versions from codes.py (jorgen.maas@gmail.com) - Some changes in the generate_netboot_iso function/code: - Users had to supply all system names on the commandline which they wanted to include in the ISO boot menu. This patch changes that behaviour; all systems are included by default now. You can still provide an override with the --systems parameter, thus making this feature more consistent with what one might expect from reading the help. - While at it I tried to make the code more readable and removed some unneeded iterations. - Prevent some unneeded kernel/initrd copies. - You can now override ip/netmask/gateway/dns parameters with corresponding kernel_options. - Fixed a bug for SuSE systems where ksdevice should be netdevice. - If no ksdevice/netdevice (or equivalent) has been supplied via kernel_options try to guess the proper interface to use, but don't just use one if we can't be sure about it (e.g. for multihomed systems). (jorgen.maas@gmail.com) - Add SLES 11 to codes.py (jorgen.maas@gmail.com) - Add support for Fedora15 to codes.py (jorgen.maas@gmail.com) - Django uses the timezone information from web/settings.py Changing the hardcoded value to None forces Django to use the systems timezone instead of this hardcoded value (jorgen.maas@gmail.com) - Fix cobbler replication for non-RHEL hosts. The slicing used in the link_distro function didn't work for all distros. (jsabo@verisign.com) - Fix vmware esx importing. It was setting the links dir to the dir the iso was mounted on import (jsabo@verisign.com) - Merge remote-tracking branch 'jsabo/webuifun' (shenson@redhat.com) - Fix bug with esxi replication. It wasn't rsyncing the distro over if the parentdir already existed. (jsabo@verisign.com) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Initial commit for mongodb backend support and adding support for settings as json (jimi@sngx.net) - Web UI patches from Greg Swift applied (jsabo@verisign.com) - whitespace fix (dkilpatrick@verisign.com) - Fix to fix to py_tftp change to sync in bootloaders (dkilpatrick@verisign.com) - Fixing a bug reported by Jonathan Sabo. (dkilpatrick@verisign.com) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (dkilpatrick@verisign.com) - Revert "Jonathan Sabo (June 09) (Cobbler)" (shenson@redhat.com) - Unmount and deactivate all software raid devices after searching for ssh keys (jonathan.underwood@gmail.com) - Merge remote-tracking branch 'ugansert/master' (shenson@redhat.com) - Jonathan Sabo (June 09) (Cobbler) Subject: [PATCH] Fix issue with importing distro's on new cobbler box To: cobbler development list Date: Thu, 9 Jun 2011 16:17:20 -0400 (shenson@redhat.com) - missing manage_rsync option from config/settings (jsabo@criminal.org) - Remove left-over debugging log message (dkilpatrick@verisign.com) - SUSE requires the correct arch to find kernel+initrd on the inst-source (ug@suse.de) - added autoyast=... 
parameter to the ISO building code when breed=suse (ug@suse.de) - calculate meta data in the XML file without cheetah variables now (ug@suse.de) - render the cheetah template before passing the XML to the python XML parser (ug@suse.de) - made the pathes flexible to avoid problem on other distros than fedora/redhat (ug@suse.de) - bugfix (ug@suse.de) - Merge patch from stable (cristian.ciupitu@yahoo.com) - utils: initialize main_logger only when needed (cristian.ciupitu@yahoo.com) - During refactor, failed to move templater initialization into write_boot_files_distro. (dkilpatrick@verisign.com) - Fixed a couple of simple typos. Made the boot_files support work (added template support for the key, defined the img_path attribute for that expansion) (dkilpatrick@verisign.com) - Fixes to get to the "minimally tested" level. Fixed two syntax errors in tftpd.py, and fixed refences to api and os.path in manage_in_tftpd.py (dkilpatrick@verisign.com) - Rebasing commit, continued. (kilpatds@oppositelock.org) - Change the vmware stuff to use 'boot_files' as the space to set files that need to be available to a tftp-booting process (dkilpatrick@verisign.com) - Added 'boot_files' field for 'files that need to be put into tftpboot' (dkilpatrick@verisign.com) - Merge conflict. (kilpatds@oppositelock.org) - Add in a default for puppet_auto_setup, thanks to Camille Meulien for finding it. (shenson@redhat.com) - Add a directory remap feature to fetchable_files processing. /foo/*=/bar/ Client requests for "/foo/baz" will be turned into requests for /bar/baz. Target paths are evaluated against the root filesystem, not tftpboot. Template expansion is done on "bar/baz", so that would typically more usefully be something like /boot/*=$distro_path/boot (dkilpatrick@verisign.com) - Removed trailing whitespace causing git warnings (dkilpatrick@verisign.com) - Fix a bug where tftpd.py would throw if a client requested '/'. (dkilpatrick@verisign.com) - Allow slop in the config, not just the client. modules: don't hardcode /tftpboot (dkilpatrick@verisign.com) - Moved footer to actually float at the bottom of the page or visible section, whichever is further down. Unfortunately leaves a slightly larger margin pad on there. Will have to see if it can be made cleaner (gregswift@gmail.com) - Removed right padding on delete checkboxes (gregswift@gmail.com) - Adjusted all the self closing tags to end eith a " />" instead of not having a space separating them (gregswift@gmail.com) - Added "add" button to the filter bit (gregswift@gmail.com) - Removed "Enabled" label on checkboxes, this can be added via css as part of the theme if people want it using :after { content: " Enabled" } Padded the context-tip off the checkboxes so that it lines up with most of the other context tips instead of being burring in the middle of the form (gregswift@gmail.com) - Added bottom margin on text area so that it isn't as tight next to other form fields (gregswift@gmail.com) - Added id tags to the forms for ks templates and snippets Set some margins for those two forms, they were a bit scrunched because they didn't have a sectionbody fieldset and legend Removed inline formatting of input sizes on those two pages Set the textareas in those two pages via css (gregswift@gmail.com) - Made the tooltips get hiddent except for on hover, with a small image displayed in their place (gregswift@gmail.com) - Added a top margin to the submit/reset buttons... looks cleaner having some space. 
(gregswift@gmail.com) - Changed generic edit form to the following: - Made blocks into fieldsets again, converting the h2 to a legend. I didn't mean to change this the first time through. - Pulled up a level, removing the wrapping div, making each fieldset contain an order list, instead of each line being an ordered list, which was silly of me. - Since it went up a level, un-indented all of the internal html tags 2 spaces - changed the place holder for the network widgets to spans so that they displayed cleanly (Don't like the spans either, but its for the javascript) In the stylesheet just changed the div.sectionbody to ol.sectionbody (gregswift@gmail.com) - Fixed closing ul->div on multiselect section. Must have missed it a few commits ago. (gregswift@gmail.com) - IE uses input styling such as borders even on checkboxes... was not intended, so has been cleared for checkboxes (gregswift@gmail.com) - This is a change to the multiselect buttons view, i didn't mean to commit the style sheet along with the spelling check fixes, but since I did might as well do the whole thing and then erevert it later if people dislike it (gregswift@gmail.com) - Fixed another postition mispelling (gregswift@gmail.com) - fixed typo postition should be position (gregswift@gmail.com) - Returned the multiselect section to being div's, since its actually not a set of list items, it is a single list item. Re-arranged the multiselect so that the buttons are centered between the two sections Removed all of the line breaks form that section Made the select box headings actually labels moved the order of multiselect after sectionbody definition due to inheritence (gregswift@gmail.com) - Restored select boxes to "default" styling since they are not as cleanly css- able Made visibly selected action from Batch Actions bold, mainly so by default Batch Action is bold. Moved text-area and multi-select sizing into stylesheet. re-alphabetized some of the tag styles Made the default login's text inputs centered, since everything else on that page is (gregswift@gmail.com) - Added missing bracket from two commits ago in the stylesheet. (gregswift@gmail.com) - Re-added the tool tips for when they exist in the edit forms and set a style on them. 
Removed an extraneous line break from textareas in edit form (gregswift@gmail.com) - Fixed javascript where I had used teh wrong quotes, thus breaking the network interface widgets (gregswift@gmail.com) - Added label and span to cleanup block (gregswift@gmail.com) - Added version across all of the template loads so that the footer is populated with it (gregswift@gmail.com) - all css: - set overall default font size of 1em - added missing tags to the cleanup css block - fixed button layout -- list line buttons are smaller font to keep lines smaller -- set action input button's size - set indentation and bolding of items in batch action - redid the list formatting -- removed zebra stripes, they share the standard background now -- hover is now the background color of the old darker zebra stripe -- selected lines now background of the older light zebra stripe - added webkit border radius (gregswift@gmail.com) - generic_lists.tmpl - Removed force space on the checklists generic_lists.tmpl - Added javascript to allow for selected row highlighting (gregswift@gmail.com) - Removed inline formatting from import.tmpl Made the context tips spans (gregswift@gmail.com) - Made both filter-adder elements exist in the same li element (gregswift@gmail.com) - Added default formatting for ordered lists Added formatting for the new multiselect unordered list Changed old div definitions for the multiselect to li Added label formatting for inside sectionbody to line up all the forms. (gregswift@gmail.com) - Adjusted multiselect section to be an unordered list instead of a div (gregswift@gmail.com) - Moved the close list tag inside the for loop, otherwise we generate lots of nasty nested lists (gregswift@gmail.com) - Changed edit templates to use ol instead of ul, because it apparently helps out those using screen readers, and we should be making things accessible, yes? (gregswift@gmail.com) - Re-structured the edit templates to be unordered lists. Standardized the tooltip/contextual data as context-tip class Redid the delete setup so that its Delete->Really? Instead of Delete:Yes->Really? Same number of check boxes. Setup the delete bit so that Delete and Really are labels for the checkboxes and there isn't extraneous html input tags (gregswift@gmail.com) - Added top margin on the filter adder (gregswift@gmail.com) - Adjusted single action item buttons to be in the same list element, as it makes alignment cleaner, and more sense from a grouping standpoint Set submenubar default height to 26px Set submenubar's alignment to be as clean as I've been able to get so far. (gregswift@gmail.com) - Set background color back to original (gregswift@gmail.com) - Adjusted all buttons to hover invert from blue to-blackish, the inverse of the normal links (which go blackish to blue) but left the text color the same. i'm not sure its as pretty, but dfinately more readable. Plus the color change scheme is more consistant. Also made table buttons smaller than other buttons (gregswift@gmail.com) - Fixed width on paginate select boxes to auto, instead of over 200px (gregswift@gmail.com) - Removed margin around hr tag, waste of space, and looks closer to original now (gregswift@gmail.com) - Removed extraneous body div by putting user div inside container. (gregswift@gmail.com) - Adjuested style sheet to improve standardization of form fields, such as buttons, text input widths, and fontsizes in buttons vs drop downs. 
(gregswift@gmail.com) - Some menu re-alignment on both menubar and submenubar (gregswift@gmail.com) - Got the container and the user display into a cleaner size alignment to display on the screen. less chance of horiz scroll (gregswift@gmail.com) - Fix to get login form a bit better placed without duplicate work (gregswift@gmail.com) - pan.action not needed... .action takes care of it (gregswift@gmail.com) - Removed padding on login screen (gregswift@gmail.com) - Redid action and button classes to make them look like buttons.. still needs work. Resized pointer classes to make things a bit more level on that row (gregswift@gmail.com) - New cleanup at the top negates the need for this table entry (gregswift@gmail.com) - Removed the body height to 99%. Was doing this for sticky footer, but current path says its not needed (gregswift@gmail.com) - Added some windows and mac default fonts Made the body relative, supposed to help with the layout Set text color to slightly off black.. was told there is some odd optical reasoning behind this (gregswift@gmail.com) - Made class settings for the table rows a touch more specific in the css (gregswift@gmail.com) - Added "normalization" to clean up cross browser differences at top of style.css (gregswift@gmail.com) - Added button class to all buttons, submit, and resets (gregswift@gmail.com) - Fixed sectionheader to not be styled as actions... they are h2! (gregswift@gmail.com) - Fixed container reference from class to id (gregswift@gmail.com) - Added missing action class on the "Create new" links in generic_list.tmpl (gregswift@gmail.com) - Revert part of 344969648c1ce1e753af because RHEL5's django doesn't support that (gregswift@gmail.com) - removed underline on remaing links (gregswift@gmail.com) - Fixed the way the logo was placed on the page and removed the excess background setting. (gregswift@gmail.com) - Some cleanup to the style sheet along - removed fieldset since no more exist (not sure about this in long run.... we'll see) - cleaned up default style for ul cause it was causing override issues - got menubar and submenu bar mostly settled (gregswift@gmail.com) - Fixed submenu bar ul to be identified by id not class (gregswift@gmail.com) - Rebuilt primary css stylesheet - not complete yet (gregswift@gmail.com) - Removed logout from cobbler meft hand menu (gregswift@gmail.com) - Next step in redoing layout: - added current logged in user and logout button to a div element at top of page - fixed content div from class to id - added footer (version entry doesn't work for some reason) - links to cobbler website (gregswift@gmail.com) - in generic_list.tmpl - set the edit link to class 'action' - merged the creation of the edit action 'View kickstart' for system and profile (gregswift@gmail.com) - Replaced tool tip as div+em with a span classed as tooltip. tooltip class just adds italic. (gregswift@gmail.com) - Fixed table header alignment to left (gregswift@gmail.com) - Take the logo out of the html, making it a css element, but retain the location and basic feel of the placement. (gregswift@gmail.com) - Step one of redoing the action list, pagination and filters. - split pagination and filters to two tmpl files - pagination can be called on its own (so it can live in top and bottom theoretically) - filter will eventually include pagination so its on the bottom - new submenubar includes pagination - new submenubar does age specific actiosn as links instead of drop downs cause there is usually 1, rarely 2, never more. 
(gregswift@gmail.com) - Removed pagination from left hand column (gregswift@gmail.com) - Removed an erroneous double quote from master.tmpl (gregswift@gmail.com) - Went a bit overboard and re-adjusted whitespace in all the templates. Trying to do the code in deep blocks across templates can be a bit tedious and difficult to maintain. While the output is not perfect, at least the templates are more readable. (gregswift@gmail.com) - Removed remaining vestige of action menu shading feature (gregswift@gmail.com) - Removed header shade references completely from the lists and the code from master.tmpl (gregswift@gmail.com) - Wrapped setting.tmpl error with the error class (gregswift@gmail.com) - Changed h3 to h2 inside pages Made task_created's h4 into a h1 and standarized with the other pages (gregswift@gmail.com) - Standardized header with a hr tag before the form tags (gregswift@gmail.com) - Added base width on the multiple select boxes, primarily for when they are empty (gregswift@gmail.com) - Removed fieldset wrappers and replaced legends with h1 and h2 depending on depth (gregswift@gmail.com) - Adjusted logic for the legent to only change one word, instead of the full string (gregswift@gmail.com) - Removed empty cell from table in generic_edit.tmpl (gregswift@gmail.com) - Revert 8fed301e61f28f8eaf08e430869b5e5df6d02df0 because it was to many different changes (gregswift@gmail.com) - Removed empty cell from table in generic_edit.tmpl (gregswift@gmail.com) - Moved some cobbler admin and help menus to a separate menu in the menubar (gregswift@gmail.com) - Added HTML5 autofocus attribute to login.tmpl. Unsupported browsers just ignores this. (gregswift@gmail.com) - Re-built login.tmpl: - logo isn't a link anymore back to the same page - logo is centered with the login form - fieldset has been removed - set a css class for the body of the login page, unused for now. And the css: - removed the black border from css - centered the login button as well (gregswift@gmail.com) - Made the links and span.actions hover with the same color as used for the section headings (gregswift@gmail.com) - Removed as much in-HTML placed formatting as possible and implemented them in css. The main bit remaining is the ul.li floats in paginate.tmpl (gregswift@gmail.com) - Cleaned up single tag closing for several of the checkboxes (gregswift@gmail.com) - removed a trailing forward slash that was creating an orphaned close span tag (gregswift@gmail.com) - Relabeled cells in thead row from td tags to th (gregswift@gmail.com) - Added tr wrapper inside thead of tables for markup validation (gregswift@gmail.com) - Use :// as separator for virsh URIs (atodorov@otb.bg) - Create more condensed s390 parm files (thardeck@suse.de) - Add possibility to interrupt zPXE and to enter CMS (thardeck@suse.de) - Cleanup the way that we download content - Fixes a bug where we were only downloading grub-x86_64.efi (shenson@redhat.com) - Port this config over as well (shenson@redhat.com) - Only clear logs that exist. (bpeck@redhat.com) - Pull in new configs from the obsoletes directory. 
(shenson@redhat.com) - Removed extraneous close row tag from events.tmpl (gregswift@gmail.com) - Fixed spelling of receive in enoaccess.tmpl (gregswift@gmail.com) - Added missing close tags on a few menu unordered list items in master.tmpl (gregswift@gmail.com) - Added missing "for" correlation tag for labels in generic_edit.tmpl (gregswift@gmail.com) - Removed extraneous close divs from generic_edit.tmpl (gregswift@gmail.com) - Removing old and unused template files (gregswift@gmail.com) - Add support for Ubuntu distros. (andreserl@ubuntu.com) - Koan install tree path for Ubuntu/Debian distros. (andreserl@ubuntu.com) - Fixing hardlink bin path. (andreserl@ubuntu.com) - Do not fail when yum python module is not present. (andreserl@ubuntu.com) - Add Ubuntu/Debian support to koan utils for later use. (andreserl@ubuntu.com) - typo in autoyast xml parsing (ug@suse.de) - Minor change to validate a token before checking on a user. (jimi@sngx.net) - get install tree from install=... parameter for SUSE (ug@suse.de) - handle autoyast XML files (ug@suse.de) - fixed support for SUSE in build-iso process. Fixed a typo (ug@suse.de) - added SUSE breed to import-webui (ug@suse.de) - Merge remote-tracking branch 'lanky/master' (shenson@redhat.com) - Merge remote-tracking branch 'jimi1283/master' (shenson@redhat.com) - added support for suse-distro import (ug@suse.de) - Fix a sub_process Popen call that did not set close_fds to true. This causes issues with sync where dhcpd keeps the XMLRPC port open and prevents cobblerd from restarting (jimi@sngx.net) - Cleanup of unneccsary widgets in distro/profile. These needed to be removed as part of the multiselect change. (jimi@sngx.net) - Yet another change to multiselect editing. Multiselects are now presented as side-by-side add/delete boxes, where values can be moved back and forth and only appear in one of the two boxes. (jimi@sngx.net) - Fix for django traceback when logging into the web interface with a bad username and/or password (jimi@sngx.net) - Fix for snippet/kickstart editing via the web interface, where a 'tainted file path' error was thrown (jimi@sngx.net) - added the single missed $idata.get() item (stuart@sjsears.com) - updated post_install_network_config to use $idata.get(key, "") instead of $idata[key]. This stops rendering issues with the snippet when some keys are missing (for example after an upgrade from 2.0.X to 2.1.0, where a large number of new keys appear to have been added.) and prevents us from having to go through all system records and add default values for them. 
(stuart@sjsears.com) - Take account of puppet_auto_setup in install_post_puppet.py (jonathan.underwood@gmail.com) - Take account of puppet_auto_setup in install_pre_puppet.py (jonathan.underwood@gmail.com) - Add puppet snippets to sample.ks (jonathan.underwood@gmail.com) - Add puppet_auto_setup to settings file (jonathan.underwood@gmail.com) - Add snippets/puppet_register_if_enabled (jonathan.underwood@gmail.com) - Add snippets/puppet_install_if_enabled (jonathan.underwood@gmail.com) - Add configuration of puppet pre/post modules to settings file (jonathan.underwood@gmail.com) - Add install_post_puppet.py module (jonathan.underwood@gmail.com) - Add install_pre_puppet.py module (jonathan.underwood@gmail.com) - Apply a fix for importing red hat distros, thanks jsabo (shenson@redhat.com) - Changes to action/batch actions at top of generic list pages * move logic into views, where it belongs * simplify template code * change actions/batch actions into drop down select lists * added/modified javascript to deal with above changes (jimi@sngx.net) - Minor fixes to cobbler.conf, since the AliasMatch was conflicting with the WSGI script alias (jimi@sngx.net) - Initial commit for form-based login and authentication (jimi@sngx.net) - Convert webui to use WSGI instead of mod_python (jimi@sngx.net) - Save field data in the django user session so the webui doesn't save things unnecessarily (jimi@sngx.net) - Make use of --format in git and use the short hash. Thanks Todd Zullinger (shenson@redhat.com) - We need git. Thanks to Luc de Louw (shenson@redhat.com) - Start of the change log supplied by Michael MacDonald (shenson@redhat.com) - Fix typo in cobbler man page entry for profile (jonathan.underwood@gmail.com) - Fix cobbler man page entry for parent profile option (jonathan.underwood@gmail.com) - Set SELinux context of host ssh keys correctly after reinstallation (jonathan.underwood@gmail.com) - Fixing bug with img_path. It was being used prior to being set if you have images. (jonathan.sabo@gmail.com) - Add firstboot install trigger mode (jonathan.sabo@gmail.com) - Fix old style shell triggers by checking for None prior to adding args to arg list and fix indentation (jonathan.sabo@gmail.com) - Bugfix: restore --no-fail functionality to CLI reposync (icomfort@stanford.edu) - Add the ability to replicate the new object types (mgmtclass,file,package). (jonathan.sabo@gmail.com) - Add VMware ESX and ESXi replication. (jonathan.sabo@gmail.com) - Add batch delete option for profiles and mgmtclasses (jonathan.sabo@gmail.com) - Spelling fail (shenson@redhat.com) - Remove deploy as a valid direct action (shenson@redhat.com) - Trac Ticket #509: A fix that does not break everything else. (https://fedorahosted.org/cobbler/ticket/509) (andrew@eiknet.com) - Only chown the file if it does not already exist (shenson@redhat.com) - Modification to cobbler web interface, added a drop-down select box for management classes and some new javascript to add/remove items from the multi-select (jimi@sngx.net) - Check if the cachedir exists before we run find on it. (shenson@redhat.com) - Fix trac#574 memtest (shenson@redhat.com) - Add network config snippets for esx and esxi network configuration $SNIPPET('network_config_esxi') renders to: (jonathan.sabo@gmail.com) - Trac Ticket #510: Modified 'cobbler buildiso' to use /var/cache/cobbler/buildiso by default. Added a /etc/cobbler/settings value of 'buildisodir' to make it setable by the end user. --tempdir will still overwrite either setting on the command line. 
(andrew@eiknet.com) - Add img_path to the metadata[] so that it's rendered out in the esxi pxe templates. Add os_version checks for esxi in kickstart_done so that it uses wget or curl depending on what's known to be available. (jonathan.sabo@gmail.com) - Added --sync-all option to cobbler replicate which forces all systems, distros, profiles, repos and images to be synced without specifying each. (rrr67599@rtpuw027.corpnet2.com) - Added manage_rsync option which defaults to 0. This will make cobbler not overwrite a local rsyncd.conf unless enabled. (rrr67599@rtpuw027.corpnet2.com) - Added semicolon master template's placement of the arrow in the page heading (gregswift@gmail.com) - Quick fix from jsabo (shenson@redhat.com) - added hover line highlighting to table displays (gregswift@gmail.com) - Modification to generic_edit template so that the name field is not a text box when editing. (jimi@sngx.net) - Minor fixes for mgmt classes webui changes. - Bug when adding a new obj, since obj is None it was causing a django stack dump - Minor tweaks to javascript (jimi@sngx.net) - Fixed error in which the json files for mgmtclasses was not being deleted when a mgmtclass was removed, meaning they showed back up the next time cobblerd was restarted (jimi@sngx.net) - Fixed syntax error in clogger.py that was preventing cobblerd from starting (jimi@sngx.net) - Supports an additional initrd from kernel_options. (bpeck@redhat.com) - Remove a bogus self (shenson@redhat.com) - Re-enable debmirror. (chuck.short@canonical.com) - Extending the current Wake-on-Lan support for wider distro compatibility. Thanks to Dustin Kirkland. (chuck.short@canonical.com) - Dont hardcode /etc/rc.d/init.d redhatism. (chuck.short@canonical.com) - Newer (pxe|sys)linux's localboot value produces unreliable results when using documented options, -1 seems to provide the best supported value (chuck.short@canonical.com) - Detect the webroot to be used based on the distro. (chuck.short@canonical.com) - If the logfile path doesn't exist, don't attempt to create the log file. Mainly needed when cobbler is required to run inside the build env (cobbler4j). Thanks to Dave Walker (chuck.short@canonical.com) - Implement system power status API method and CLI command (crosa@redhat.com) - Update setup files to use proper apache configuration path (konrad.scherer@windriver.com) - Debian has www-data user for web server file access instead of apache. (konrad.scherer@windriver.com) - Update init script to work under debian. (konrad.scherer@windriver.com) - Use lsb_release module to detect debian distributions. Debian release is returned as a string because it could be sid which will never have a version number. (konrad.scherer@windriver.com) - Fix check for apache installation (konrad.scherer@windriver.com) - Handle Cheetah version with more than 3 parts (konrad.scherer@windriver.com) - Allow dlcontent to use proxy environment variables (shenson@redhat.com) - Copy memtest to $bootloc/images/. 
Fixes BZ#663307 (shenson@redhat.com) - Merge remote branch 'jimi1283/master' (shenson@redhat.com) - Turn the cheetah version numbers into integers while testing them so we don't always return true (shenson@redhat.com) - Kill some whitespace (shenson@redhat.com) - Fix for bug #587 - Un-escaped '$' in snippet silently fails to render (jimi@sngx.net) - Fix for bug #587 - Un-escaped '$' in snippet silently fails to render (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Don't use link caching in places it isn't needed (shenson@redhat.com) - Better logging on subprocess calls (shenson@redhat.com) - Fix for trac #541 - cobbler sync deletes /var/www/cobbler/pub (jimi@sngx.net) - Merged work in the import-modules branch with the debian/ubuntu modules created by Chuck Short (jimi@sngx.net) - Merge branch 'cshort' into import-modules (jimi@sngx.net) - Finished up debian/ubuntu support for imports Tweaked redhat/vmware import modules logging output Added rsync function to utils to get it out of each module - still need to fix the redhat/vmware modules to actually use this (jimi@sngx.net) - Initial commit for the Debian import module. * tested against Debian squeeze. (chuck.short@canonical.com) - Initial commit for the Ubuntu import module. * tested against Natty which imported successfully. (chuck.short@canonical.com) - tftp-hpa users for both Ubuntu Debian use /var/lib/tftpboot. (chuck.short@canonical.com) - Disable the checks that are not really valid for Ubuntu or Debian. (chuck.short@canonical.com) - Add myself to the authors file. (chuck.short@canonical.com) - Updates for debian/ubuntu support in import modules (jimi@sngx.net) - Fix a problem with cheetah >= 2.4.2 where the snippets were causing errors, particularly on F14 due to its use of cheetah 2.4.3. (shenson@redhat.com) - Initial commit of the Ubuntu import module (jimi@sngx.net) - Merge remote branch 'jimi1283/import-modules' (shenson@redhat.com) - Merge remote branch 'jimi1283/master' (shenson@redhat.com) - Extended ESX/ESXi support * Fixed release detection for both ESX and ESXi * Added support to kickstart_finder() so that the fetchable_files list gets filled out when the distro is ESXi (jimi@sngx.net) - Fixed distro_adder() in manage_import_vmware so ESXi gets imported properly (jimi@sngx.net) - Initial commit for the VMWare import module * tested against esx4 update 1, which imported successfully (jimi@sngx.net) - Minor style changes for web css * darken background slightly so the logo doesn't look washed out * make text input boxes wider (jimi@sngx.net) - Fix for the generic_edit function for the web page. The choices field for management classes was not being set for distros/profiles - only systems, causing a django stack dump (jimi@sngx.net) - modify keep_ssh_host_keys snippet to use old keys during OS installation (flaks@bnl.gov) - Merge remote branch 'jimi1283/master' (shenson@redhat.com) - Added replicate to list of DIRECT_ACTIONS, so it shows up in the --help output (jimi@sngx.net) - Merge branch 'master' into import-modules (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Some fixes to the manage_import_redhat module * stop using mirror_name for path stuff - using self.path instead * fixed rsync command to use self.path too, this should really be made a global somewhere else though (jimi@sngx.net) - Add synopsis entries to man page to enable whatis command (kirkland@ubuntu.com) - Add "ubuntu" as detected distribution. 
(clint@ubuntu.com) - Fix for redhat import module. Setting the kickstart file with a default value was causing some issues later on with the kickstart_finder() function, which assumes all new profiles don't have a kickstart file yet (jimi@sngx.net) - Fix for non x86 arches, bug and fix by David Robinson (shenson@redhat.com) - Don't die when we find deltas, just don't use them (shenson@redhat.com) - Merge remote branch 'khightower/khightower/enhanced-configuration-management' (shenson@redhat.com) - By: Bill Peck exclude initrd.addrsize as well. This affects s390 builds (shenson@redhat.com) - Fix an issue where an item was getting handed to remove_item instead of the name of the item. This would cause an exception further down in the stack when .lower() was called on the object (by the call to get_item). (shenson@redhat.com) - Add a check to make sure system is in obj_types before removing it. Also remove an old FIXME that this previously fixed (shenson@redhat.com) - Fix regression in 2.0.8 that dumped into pxe cfg files (shenson@redhat.com) - Initial commit of import module for redhat (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Added new modules for copying a distros's fetchable files to the /tftpboot/images directory - add_post_distro_tftp_copy_fetchable_files.py copies on an add/edit - sync_post_tftp_copy_fetchable_files.py copies the files for ALL distros on a full sync (jimi@sngx.net) - Removed trailing '---' from each of the PXE templates for ESXi, which causes PXE issues (jimi@sngx.net) - Make stripping of "G" from vgs output case-insensitive (heffer@fedoraproject.org) - Replace rhpl with ethtool (heffer@fedoraproject.org) - Add --force-path option to force overwrite of virt-path location (pryor@bnl.gov) - item_[profile|system] - update parents after editing (mlevedahl@gmail.com) - collection.py - rename rather than delete mirror dirs (mlevedahl@gmail.com) - Wil Cooley (shenson@redhat.com) - Merge remote branch 'kilpatds/io' (shenson@redhat.com) - Add additional qemu_driver_type parameter to start_install function (Konrad.Scherer@windriver.com) - Add valid debian names for releases (Konrad.Scherer@windriver.com) - Add debian preseed support to koan (Konrad.Scherer@windriver.com) - Add support for EFI grub booting. (dgoodwin@rm-rf.ca) - Turn the 'daemonize I/O' code back on. cobbler sync seems to still work (dkilpatrick@verisign.com) - Fix some spacing in the init script (dkilpatrick@verisign.com) - Added a copy-default attribute to koan, to control the params passed to grubby (paji@redhat.com) - Turn on the cache by default Enable a negative cache, with a shorter timeout. Use the cache for normal lookups, not much ip-after-failed. (dkilpatrick@verisign.com) - no passing full error message. Der (dkilpatrick@verisign.com) - Pull the default block size into the template, since that can need to be changed. Make tftpd.py understand -B for compatibility. Default to a smaller mtu, for vmware compatibility. (dkilpatrick@verisign.com) - in.tftpd needs to be run as root. Whoops (dkilpatrick@verisign.com) - Handle exceptions in the idle-timer handling. This could cause tftpd.py to never exit (dkilpatrick@verisign.com) - Do a better job of handling things when a logger doesn't exist. And don't try and find out what the FD is for logging purposes when I know that might throw and I won't catch it. 
(dkilpatrick@verisign.com) - Scott Henson pointed out that my earlier changes stopped a sync from also copying kernel/initrd files into the web directry. Split out the targets from the copy, and make sure that sync still copies to webdir, and then also fixed where I wasn't copying those files in the synclite case. (dkilpatrick@verisign.com) - Put back code that I removed incorrectly. (sync DHCP, DNS) (dkilpatrick@verisign.com) - Support installing FreeBSD without an IP address set in the host record. (dkilpatrick@verisign.com) - Fixed some bugs in the special-case handling code, where I was not properly handling kernel requests, because I'd merged some code that looked alike, but couldn't actually be merged. (dkilpatrick@verisign.com) - fixing koan to use cobblers version of os_release which works with RHEL 6 (jsherril@redhat.com) - Adding preliminary support for importing ESXi for PXE booting (jimi@sngx.net) - Fix cobbler check tftp typo. (dgoodwin@rm-rf.ca) - buildiso now builds iso's that include the http_port setting (in /etc/cobbler/settings) in the kickstart file url (maarten.dirkse@filterworks.com) - Add check detection for missing ksvalidator (dean.wilson@gmail.com) - Use shlex.split() to properly handle a quoted install URL (e.g. url --url="http://example.org") (jlaska@redhat.com) - Update codes.py to accept 'fedora14' as a valid --os-version (jlaska@redhat.com) - No more self (shenson@redhat.com) - Don't die if a single repo fails to sync. (shenson@redhat.com) - Refactor: depluralize madhatter branch (kelsey.hightower@gmail.com) - Updating setup.py and spec file. (kelsey.hightower@gmail.com) - New unit tests: Mgmtclasses (kelsey.hightower@gmail.com) - Updating cobbler/koan man pages with info on using the new configuration management capabilities (kelsey.hightower@gmail.com) - Cobbler web integration for new configuration management capabilities (kelsey.hightower@gmail.com) - Koan configuration management enhancements (kelsey.hightower@gmail.com) - Cobbler configuration management enhancements (kelsey.hightower@gmail.com) - New cobbler objects: mgmtclasses, packages, and files. (kelsey.hightower@gmail.com) - Merge remote branch 'jsabo/kickstart_done' (shenson@redhat.com) - Move kickstart_done and kickstart_start out of kickgen.py and into their own snippets. This also adds support for VMware ESX triggers and magic urls by checking for the "vmware" breed and then using curl when that's all thats available vs wget. VMware's installer makes wget available during the %pre section but only curl is around following install at %post time. Yay! I've also updated the sample kickstarts to use $SNIPPET('kickstart_done') and $SNIPPET('kickstart_start') (jonathan.sabo@gmail.com) - No more getting confused between otype and obj_type (shenson@redhat.com) - The clean_link_cache method was calling subprocess_call without a logger (shenson@redhat.com) - Scott Henson pointed out that my earlier changes stopped a sync from also copying kernel/initrd files into the web directry. Split out the targets from the copy, and make sure that sync still copies to webdir, and then also fixed where I wasn't copying those files in the synclite case. (dkilpatrick@verisign.com) - revert bad templates path (dkilpatrick@verisign.com) - Put back code that I removed incorrectly. (sync DHCP, DNS) (dkilpatrick@verisign.com) - Support installing FreeBSD without an IP address set in the host record. 
(dkilpatrick@verisign.com) - Fixed some bugs in the special-case handling code, where I was not properly handling kernel requests, because I'd merged some code that looked alike, but couldn't actually be merged. (dkilpatrick@verisign.com) - Two more fixes to bugs introduced by pytftpd patch set: * The generated configs did not have initrd set propertly * Some extra debugging log lines made it into remote.py (dkilpatrick@verisign.com) - Fix Trac#530 by properly handling a logger being none. Additionally, make subprocess_call and subprocess_get use common bits to reduce duplication. (shenson@redhat.com) - Fix a cobbler_web authentication leak issue. There are times when the token that cobbelr_web had did not match the user logged in. This patch ensures that the token always matches the user that is logged in. (shenson@redhat.com) - No more getting confused between otype and obj_type (shenson@redhat.com) - The clean_link_cache method was calling subprocess_call without a logger (shenson@redhat.com) - Merge remote branch 'kilpatds/master' (shenson@redhat.com) - Scott Henson pointed out that my earlier changes stopped a sync from also copying kernel/initrd files into the web directry. Split out the targets from the copy, and make sure that sync still copies to webdir, and then also fixed where I wasn't copying those files in the synclite case. (dkilpatrick@verisign.com) - revert bad templates path (dkilpatrick@verisign.com) - Put back code that I removed incorrectly. (sync DHCP, DNS) (dkilpatrick@verisign.com) - Support installing FreeBSD without an IP address set in the host record. (dkilpatrick@verisign.com) - Fixed some bugs in the special-case handling code, where I was not properly handling kernel requests, because I'd merged some code that looked alike, but couldn't actually be merged. (dkilpatrick@verisign.com) - Two more fixes to bugs introduced by pytftpd patch set: * The generated configs did not have initrd set propertly * Some extra debugging log lines made it into remote.py (dkilpatrick@verisign.com) - fast sync. A new way of copying files around using a link cache. It creates a link cache per device and uses it as an intermediary so that files that are the same are not copied multiple times. Should greatly speed up sync times. (shenson@redhat.com) - A few small fixes and a new feature for the Python tftp server * Support environments where the MAC address is know, but the IP address is not (private networks). I do this by waiting for pxelinux.0 to request a file with the mac address added to the filename, and then look up the host by MAC. * Fix my MAC lookup logic. I didn't know to look for the ARP type (01-, at least for ethernet) added by pxelinux.0 * Fix up some log lines to make more sense * Fix a bug where I didn't get handle an empty fetchable_files properly, and didn't fall back to checking for profile matches. (dkilpatrick@verisign.com) - Two fixed to bad changes in my prior patch set. Sorry about that. * Bad path in cobbler/action_sync.py. No "templates" * Bad generation of the default boot menu. The first initrd from a profile was getting into the metadata cache and hanging around, thus becoming the initrd for all labels. (dkilpatrick@verisign.com) - A smart tftp server, and a module to manage it (dkilpatr@dkilpatr.verisign.com) - Export the generated pxelinux.cfg file via the materialized system information RPC method. This enables the python tftpd server below to serve that file up without any sync being required. 
(dkilpatr@dkilpatr.verisign.com) - Move management of /tftpboot into modules. This is a setup step for a later python tftpd server that will eliminate the need for much of this work. (dkilpatr@dkilpatr.verisign.com) - Fetchable Files attribute: Provides a new attribute similar in spirit to mgmt_files, but with somewhat reversed meaning. (dkilpatr@dkilpatr.verisign.com) - fix log rotation to actually work (bpeck@redhat.com) - find_kernel and find_initrd already do the right checks for file_is_remote and return None if things are wrong. (bpeck@redhat.com) - Trac #588 Add mercurial support for scm tracking (kelsey.hightower@gmail.com) - Add a breed for scientific linux (shenson@redhat.com) - "mgmt_parameters" for item_profile has the wrong default setting when creating a sub_profile. I'm assuming that <> would be correct for a sub_profile as well. (bpeck@redhat.com) - The new setup.py placed webui_content in the wrong spot... (akesling@redhat.com) - Merge commit 'a81ca9a4c18f17f5f8d645abf03c0e525cd234e1' (jeckersb@redhat.com) - Added back in old-style version tracking... because api.py needs it. (akesling@redhat.com) - Wrap the cobbler-web description (shenson@redhat.com) - Create the tftpboot directory during install (shenson@redhat.com) - Add in /var/lib/cobbler/loaders (shenson@redhat.com) - Create the images directory so that selinux will be happy (shenson@redhat.com) - Dont install some things in the webroot and put the services script down (shenson@redhat.com) - Fix some issues with clean installs of cobbler post build cleanup (shenson@redhat.com) - rhel5 doesn't build egg-info by default. (bpeck@redhat.com) - Some systems don't reboot properly at the end of install. s390 being one of them. This post module will call power reboot if postreboot is in ks_meta for that system. (bpeck@redhat.com) - Changes to allow s390 to work. s390 has a hard limit on the number of chars it can recieve. (bpeck@redhat.com) - show netboot status via koan. This is really handy if you have a system which fails to pxe boot you can create a service in rc.local which checks the status of netboot and calls --replace-self for example. (bpeck@redhat.com) - When adding in distros/profiles from disk don't bomb out if missing kernel or ramdisk. just don't add it. (bpeck@redhat.com) - add X log to anamon tracking as well. (bpeck@redhat.com) - Added new remote method clear_logs. Clearing console and anamon logs in %pre is too late if the install never happens. (bpeck@redhat.com) - fixes /var/www/cobbler/svc/services.py to canonicalize the uri before parsing it. This fixes a regression with mod_wsgi enabled and trying to provision a rhel3 machine. (bpeck@redhat.com) - anaconda umounts /proc on us while were still running. Deal with it. (bpeck@redhat.com) - fix escape (bpeck@redhat.com) - dont lowercase power type (bpeck@redhat.com) - Bump to 2.1.0 (shenson@redhat.com) - Properly detect unknown distributions (shenson@redhat.com) - cobblerd service: Required-Start: network -> $network (cristian.ciupitu@yahoo.com) - cobblerd service: add Default-Stop to LSB header (cristian.ciupitu@yahoo.com) - No more . on the end (shenson@redhat.com) - Do not delete settings and modules.conf (shenson@redhat.com) - Remove manpage generation from the make file (shenson@redhat.com) - Update the author and author email (shenson@redhat.com) - Proper ownership on some files (shenson@redhat.com) - More rpm cleanups (shenson@redhat.com) - Don't have the #! 
because rpm complains (shenson@redhat.com) - No more selinux here, we should not be calling chcon, things will end up with the proper context in a well configured selinux environment (shenson@redhat.com) - No more chowning the log file. (shenson@redhat.com) - A new spec file to go with the new setup.py (shenson@redhat.com) - Forgot to add aux to MANIFEST.in (akesling@redhat.com) - Fixed naming scheme for web UI to make it more uniform, what was Puppet Parameters is now Management Parameters. (akesling@redhat.com) - Removed unnecessary cruft. (akesling@redhat.com) - Reconfigured setup.py to now place config files and web ui content in the right places. The paths are configurable like they were in the previous setup.py, but everything is much cleaner. (akesling@redhat.com) - Removed unnecessary templating functionality from configuration generation (and setup.py) (akesling@redhat.com) - Added more useful files to setup.py and MANIFEST.in as well as extra functionality which setup.py should contain. (akesling@redhat.com) - Massive overhaul of setup.py . Moved things around a little to clean up building/packaging/distributing. The new setup.py is still incomplete. (akesling@redhat.com) - RPM specific changes to setup.cfg. (akesling@redhat.com) - Currently working through making setup.py functional for generating rpms dynamically. setup.py is just cobbler-web at the moment... and it appears to work. The next things to do are test the current RPM and add in functionality for reducing repetitive setup.py configuration lines. (akesling@redhat.com) - Changed list-view edit link from a javascript onclick event to an actual link... so that you can now just open it in a new tab. (akesling@redhat.com) - Added tip for random MAC Address functionality to System MAC Address field. (akesling@redhat.com) - Added "Puppet Parameters" attribute to Profile and System items. The new input field is a textarea which takes proper a YAML formatted dictionary. This data is used for the Puppet External Nodes api call (found in services.py). (akesling@croissant.usersys.redhat.com) - Resume apitesting assuming against local Cobbler server. (dgoodwin@rm-rf.ca) - Replace rogue tab with whitespace. (dgoodwin@rm-rf.ca) - Open all log files in append mode. Tasks should not be special. This simplifies the handling of logging for selinux. (shenson@redhat.com) - Add rendered dir to cobbler.spec. (dgoodwin@rm-rf.ca) - Re-add mod_python dep only for cobbler-web. (dgoodwin@rm-rf.ca) - initializing variable that is not always initialized but is always accessed (jsherril@redhat.com) - Merge remote branch 'pvreman/master' (shenson@redhat.com) - add logging of triggers (peter.vreman@acision.com) - add logging of triggers (peter.vreman@acision.com) - cobbler-ext-nodes needs also to use http_port (peter.vreman@acision.com) - Adding VMware ESX specific boot options (jonathan.sabo@gmail.com) - Merge stable into master (shenson@redhat.com) - Fix cobbler_web authentication in a way that doesn't break previously working stuff (shenson@redhat.com) - Allow qemu disk type to be specified. 
Contributed by Galia Lisovskaya (shenson@redhat.com) - Merge remote branch 'jsabo/esx' (shenson@redhat.com) - Fix a bug where we were not looking for the syslinux provided menu.c32 before going after the getloaders one (shenson@redhat.com) - Fix cobbler_web authentication in a way that doesn't break previously working stuff (shenson@redhat.com) - More preparation for the release (shenson@redhat.com) - Update spec file for release (shenson@redhat.com) - Update changelog for release (shenson@redhat.com) - Bugfix: fetch extra metadata from upstream repositories more safely (icomfort@stanford.edu) - Bugfix: allow the creation of subprofiles again (icomfort@stanford.edu) - Don't warn needlessly when repo rpm_list is empty (icomfort@stanford.edu) - Bugfix: run createrepo on partial yum mirrors (icomfort@stanford.edu) - Change default mode for new directories from 0777 to 0755 (icomfort@stanford.edu) - Fix replication when prune is specified and no systems are specified. This prevents us from killing systems on a slave that keeps its own systems. To get the old behavior, just specify a systems list that won't match anything. (shenson@redhat.com) - Always authorize the CLI (shenson@redhat.com) - Bugfix: fetch extra metadata from upstream repositories more safely (icomfort@stanford.edu) - Bugfix: allow the creation of subprofiles again (icomfort@stanford.edu) - Don't warn needlessly when repo rpm_list is empty (icomfort@stanford.edu) - Bugfix: run createrepo on partial yum mirrors (icomfort@stanford.edu) - Change default mode for new directories from 0777 to 0755 (icomfort@stanford.edu) - Fix replication when prune is specified and no systems are specified. This prevents us from killing systems on a slave that keeps its own systems. To get the old behavior, just specify a systems list that won't match anything. (shenson@redhat.com) - Always authorize the CLI (shenson@redhat.com) - Merge branch 'wsgi' (dgoodwin@rm-rf.ca) - Adding VMware ESX 4 update 1 support (jonathan.sabo@gmail.com) - remove references to apt support from the man page (jeckersb@redhat.com) - wsgi: Service cleanup. (dgoodwin@rm-rf.ca) - wsgi: Revert to old error handling. (dgoodwin@rm-rf.ca) - wsgi: Switch Cobbler packaging/config from mod_python to mod_wsgi. (dgoodwin@rm-rf.ca) - wsgi: Return 404 when hitting svc URLs for missing objects. (dgoodwin@rm-rf.ca) - Merge branch 'master' into wsgi (dgoodwin@rm-rf.ca) - wsgi: First cut of port to mod_wsgi. (dgoodwin@rm-rf.ca) - Mar 17 2011 - 2.1.0 - (FEAT) enhanced config mgmt capabilities - (BUGF) add a check to make sure system is in obj_types before removing it - (BUGF) fix for keep_ssh_host_keys snippet on Fedora 13+ - (BUGF) re-enable debian/ubuntu support - (BUGF) fix compatibility problem with cheetah >= 2.4.2 - (BUGF) fix for trac #541 - cobbler sync deletes /var/www/cobbler/pub - (BUGF) fix for trac #587 - Un-escaped '$' in snippet silently fails to render - (BUGF) copy memtest to $bootloc/images/. 
Fixes BZ#663307 - (FEAT) allow dlcontent to use proxy environment variables - (FEAT) implement system power status API method and CLI command - (FEAT) add manage_rsync option which defaults to 0 - (FEAT) add --sync-all option to cobbler replicate - (BUGF) fix trac#574 memtest - (BUGF) fix for trac #509/#510 -- can't build iso with web UI - (FEAT) add VMware ESX and ESXi support - (FEAT) add FreeBSD support (includes pytftpd) - (FEAT) add batch delete option for profiles and mgmtclasses - (BUGF) restore --no-fail functionality to CLI reposync - (FEAT) add firstboot install trigger mode - Dec 8 2010 - 2.0.9 - (BUGF) Install several templates that were needed - (BUGF) Ensure kernel_path is expanded in pxe templates - (BUGF) Place qemu_driver_type on other image creators that don't use it so they work - Dec 3 2010 - 2.0.8 - (FEAT) PXE - (BUGF) Support EL6 and FC14 (and FC15) - (FEAT) Ability to force path in koan to overwrite existing files - (BUGF) Rename repos instead of deleting them - (BUGF) True daemonize - (BUGF) Debian cleanups - (FEAT) Allow qemu disk type to be specified - (BUGF) Close file descriptors so we don't leak listen ports - (BUGF) Prefer ethtool over rhpl - Nov 29 2010 - (FEAT) --force-path to overwrite virt-path file location - Apr 27 2010 - 2.0.4 - (BUGF) fetch extra metadata from upstream repositories more safely - (BUGF) allow the creation of subprofiles again - (BUGF) don't warn needlessly when repo rpm_list is empty - (BUGF) run createrepo on partial yum mirrors - (BUGF) change default mode for new directories from 0777 to 0755 - (BUGF) fix replication pruning to not prune systems when inappropriate - (BUGF) always authorize the CLI - (BUGF) fix passthru auth Trac#549 - (BUGF) fix for spacewalk bug 529277 - (BUGF) fix umask so files are not world writable - (BUGF) fix default pxe template to support default system record - (BUGF) redo the PXE menuif a profile is removed - (BUGF) properly escape snippets - (FEAT) Support Mercurial (hg) for scm tracking - Feb 15 2010 - 2.0.3.1 - (BUGF) Fix kernel command line parsing - Feb 12 2010 - 2.0.3 - (FEAT) API Test Framework - (FEAT) Cobbler4j Eclipse bindings - (BUGF) Trac544 Create noarch and src repos - (BUGF) Snippet Cleanups - (FEAT) Cleanup Kickstart Generation - (FEAT) Ability to include remote kickstart content - (FEAT) Koan ability for remote kickstart content - (BUGF) Trac547 Profile Creation in WebUI - (BUGF) Trac520 Sort HTML Dropdown Lists - (BUGF) Allow koan to use http urls for kernel/initrd - (FEAT) Cobbler4j now uses maven to build - (FEAT) Allow multiple kernel options with the same name - (BUGF) Various logging cleanups - (BUGF) Exit with proper error codes - (BUGF) Fix multiselect fields in the WUI - (BUGF) Trac550 stop leaking FDs in subprocess_call - (BUGF) Trac566 link distros on replicate - (BUGF) Trac563 Replace dos line endings with unix line endings when saving ksdata - (FEAT) Support RHEL6 kernel sizes - (BUGF) Preserve hardlinks on replicate - Full List 2b3a4f960b564691ca9a0470f677bd902f60b021..b99401d4dc6cf7f6c010eecf6c5d235316b59f73 - Nov 23 2009 - 2.0.2 - (FEAT) Added support for Cobbler4j - (FEAT) Added method for enabling autostart on qemu domains - (BUGF) web ui: move sessions directory to /var - (BUGF) Update the vlanpattern regex to cover more common virtual interface formats - (BUGF) cobbler check: fix BIND detection - (BUGF) Fix error message creating profile without a distro - (BUGF) use proper HTTP error codes - (BUGF) Create more intuitive system for displaying actions under configuration 
items. - (BUGF) Fixed hardlinking - (BUGF) Creating subprofile in WebUI no longer fails on default_ownership - (BUGF) No longer delete excluded files in reposync rsync - (BUGF) Add fedora12 and fedora13 as valid 'redhat' versions - (BUGF) Correct improper distro creation while importing i386 Fedora/RHEL. - (BUGF) Better messaging for invalid object errors - (BUGF) Added a legacy sync mechanism - (BUGF) No longer bundle libraries on distros where they are already available - Sep 18 2009 - 2.0.1 - (BUGF) Fixes for image based CLI usage and object validation - (BUGF) Fix koan check for images ... they don't need a kickstart - (BUGF) Get NFS image mounting working again - (BUGF) Show object name on cobbler-web edit pages - (BUGF) Make memtest integration look in the right file locations - (BUGF) Fix check for dhcpd installation - (BUGF) Fix code to detect dhcpd config file installation - (BUGF) Cleaned up generated comments in settings files - (BUGF) Unhide per-system post-install kernel options - (BUGF) Unhide owners fields - (BUGF) Fix "cobbler profile/system dumpvars" command - (BUGF) Remove acl_engine reference from authz_configfile - (BUGF) Add "View Kickstart" link back on profile list page - (FEAT) Very Experimental CouchDB serializer, off by default. Touch /etc/cobbler/use.couch - (BUGF) Fix pagination for webapp - (BUGF) Widen left column for webapp to better show controls - (BUGF) Workaround older mod_python scoping bug with utils.uniquify - (BUGF) Fix interface additions from web system edit page for new systems - (BUGF) Make cobbler reposync --only=N sync only that repo - (BUGF) Mark cobbler_web.conf config(noreplace) in the RPM specfile - (BUGF) Have cobbler check look for more SELinux things to do - (BUGF) Fix saving of "static" network field in webapp - (BUGF) Deepcopy interfaces to prevent copying of interface data - Sep 17 2009 - for 2.0 release - Development release start - (FEAT) add two new default flags to yum reposync (-m and -d) for grabbing comps automatically and also deleting duplicate RPMs - (BUGF) validate gateway input against python-netaddr - (FEAT) --kopts="~foo" --in-place can be used to delete a hash entry without affecting the others, not to be confused with "!foo" to supress it. - (FEAT) web app moves to Django - (FEAT) koan: Support for virt-autoboot feature (Xen only at this point, needs fixing) - (FEAT) koan: use private address range for random MACs - (FEAT) yum createrepo flags are now tweakable in settings - (FEAT) new snippet for RHN Hosted: keep RHN keys between reinstalls - (FEAT) allow per_distro snippets in addition to per_profile/per_system - (BUGF) don't add port number to http_server if it's 80. 
- (FEAT) support F11 fence-agent paths - (BUGF) koan autodetection now can gather MACs/IPs for bridged interfaces better - (FEAT) F11 and RAID support for keep_ssh_host_keys - (FEAT) keep_ssh_host_keys gets F11+RAID support - (FEAT) per-task logging - (BUGF) validateks checkes /every/ kickstart, as different variables may be present - (FEAT) CLI now speaks XMLRPC (faster) - (FEAT) task engine for webapp - (FEAT) background APIs for all actions - (FEAT) look at ksdevice when building buildiso CDs to add the proper network lines - (BUGF) don't use RPM to check os_version, so RPM won't be called in rpm %post - (BUGF) deprecated omapi/omshell features for manage_isc now removed - (FEAT) removed configuration files for binary paths, will look for in PATH - (FEAT) removed configuration settings indicating where some config files are - (FEAT) yum priorities plugin setting for distro repos now tweakable in settings - (BUGF) unfinished contrib. debian import/repo support disabled until it can be finished - (FEAT) replicate is now greatly improved and smarter - (BUGF) aclengine (unfinished feature) removed, (not the same as aclsetup) - - 1.6.9 - (FEAT/cobbler) cobbler can now pass the virtio26 os version to virtinst, so virtio drivers can be used for RHEL5 .3 for instance - (BUGF/cobbler) The URL to the WebUI documentation in the manpage was corrected - (BUGF/cobbler) add the --nameserver option for static interfaces when defined (snippet fix) - 1.6.7 - (BUGF/koan) Workaround for a bug in ybin, use --verbose for happy return codes - (BUGF/cobbler) patch to redhat_register for Satellite 1.6.6 (for more accurate listing, see release16 branch) - (BUGF) don't use -m option for drac unless power_id is set - (BUGF) don't run createrepo on reposync'd repos, it is redundant - (BUGF) fix error message when wrong virt ram value is applied - (FEAT) koan now partially works in source code form on EL 2 - (FEAT) discover repomd filenames (for F11 and later) - (BUGF) Satellite also has a python module named "server" so we have to remove ours for Python imports. This code was not used anyway. - (BUGF) fix typo in network config snippet - ... - 1.6.5 - (BUGF) don't use json on python x<=2.3 since we know it's simplejson x<2.0, which has unicode issues - (FEAT) support F-11 imports - (BUGF) don't add koan virtual interfaces for vlan tagged interfaces - (BUGF) pair PAE and non-PAE content in imports better - (BUGF) don't try to scp /etc/cobbler/*.ks on replicate, this is an old path - (BUGF) set proper umask on .mtime files - (BUGF) fix koan error with unspecified arch using --kexec - (BUGF) don't use --onboot=on in network config for kickstart for RHEL 3. - (FEAT) koan can be built on EL-2 - Fri May 8 2009 - 1.6.4 - (BUGF) empty nameserver search paths now don't screw up resolv.conf - (BUGF) unicode fix for --template-files feature - (BUGF) mod python input fix for repo/profile associations in webapp - (BUGF) change "hostname" to "option hostname" in dhcp.template - (BUGF) fixes for unicode problem with Cheetah, repos, and json - (BUGF) unicode fix for --template-files feature - (BUGF) another unicode fix - (BUGF) bonding interface setup for RHEL4 now works - (BUGF) generate DHCP info even if netboot is disabled - (BUGF) Change to network_config snippet to ignore bond masters to prevent anaconda failures when MAC addresses are not specified - (BUGF) fix to CLI rename deleting subobjects via cobblerd (important!) 
- (BUGF) fix blender cache problem with using dnsnames instead of record names (ISC DHCP management) - (BUGF) quote host name in stock ISC template - XXX - 1.6.3 - (BUGF) import with datestamp 0 when .discinfo is not found, do not crash - (BUGF) various fixes to "cobbler report" when using non-standard formats - (BUGF) rename with same name does not delete - (BUGF) Fix for traceback when generating DHCP config and the bonding_master interface does not exist. - (BUGF) remove trailing newline from tokens - (BUGF) fix unicode handling of some fields under EL4 - (BUGF) fix webui search error (fixed better on devel) - (BUGF) fix duplicate power reference - (BUGF) fix error message that makes it appear dnsmasq had a problem restarting, when it did not - (BUGF) spec requires python-urlgrabber - (BUGF) change settings default from 'ksdevice=eth0' to 'bootif' - (BUGF) keep DHCP restart quiet - (BUGF) better hostname setting with DHCP management on - (BUGF) fix bug in repo breed auto-detection code - (BUGF) close file descriptors by using subprocess, not os.system - (BUGF) make cobbler package Arch for now, require syslinux where it makes sense only. To be changed in later release to download it from intertubes. - Mon Mar 30 2009 - 1.6.2 - (BUGF) Fix for cache cleanup problem in cobblerd - Fri Mar 27 2009 - 1.6.1 - (FEAT) Improved anaconda monitoring code - (FEAT) download comps.xml always (even if yum does not have the option) - (FEAT) performance upgrades for cobblerd service to avoid reloads - (FEAT) more code for s390 imports (WIP) - (BUGF) import works better with rawhide - (FEAT) snippet to preserve SSH host keys across reinstalls - (FEAT) email trigger to notify when builds finish - (FEAT) triggers now are written in Python, old system still exists. - (FEAT) s390x zpxe support - (BUGF) retry power 5 times before giving up - (BUGF) fix for RHEL3 ppc imports - (FEAT) use nameserver variable in network config in Anaconda when set - (BUGF) sleep 5 seconds between power cycling systems - (FEAT) support for /usr/bin/cobbler-register - (FEAT) web UI search feature - (FEAT) very simple "cobbler hardlink" command to optimize space in /var/www/cobbler - (FEAT) new "change" trigger that runs on all adds/edits/removes/syncs - (FEAT) an SCM trigger that knows about /var/lib/cobbler kickstarts, snippets, and config directories, and audits them for changes - (BUGF) update ctime on object copies - (BUGF) fix cobbler check code that makes sure default password is not in use - (BUGF) remove duplicate text box for management classes from server edit page - (BUGF) fix stderr output on Ctrl+C - (BUGF) potential performance savings WRT duplicate python objects - (BUGF) don't deserialize during sync, it makes it take twice as long - (FEAT) Ubuntu support for import (added/improved) - (BUGF) make DHCP managemnet work with bonding - (BUGF) teach serializer_catalog to fly with JSON - (BUGF) don't use yumdownloader when rpm_list is [] - (BUGF) fix default kickstart location - (BUGF) properly "flatten" environment values for webapp - (BUGF) fix for MSIE rendering in web app - (BUGF) stop yumdownloader including system yum repos - (DOCS) update webui project page link - (BUGF) remove caching logic from remote.py - (FEAT) added --exclude-dns option to buildiso - (BUGF) fix string comparisons to tolerate unicode - (BUGF) fix dealing with name servers and name servers search, esp in web app - Tue Mar 3 2009 - 1.4.3 - (BUGF) fix OMAPI support's (note: deprecated) usage of subprocess - (BUGF) don't traceback on invalid cmd 
("cobbler distro list --name=foo") - (BUGF) fix import usage with --kickstart option (no more traceback) - (BUGF) fix removal of images with child system objects - (BUGF) make --rpmlist on repo use same parsing routes as the rest of cobbler - (BUGF) default value for server override should be <> not - (BUGF) ensure if virt bridge is set to "" it will apply the value from settings - (BUGF) cobbler check should say path to config file is /etc/cobbler, not /var/lib - (FEAT) enable cobblerweb username/pass logins when authing with spacewalk - (BUGF) allow --kopts to take parameters that are shell quoted. - (BUGF) allow kernel options to start with '-' - (BUGF) Use shlex for parsing --kopts to allow a wider variety of kernel options - (BUGF) prevent potential traceback when --template-file data isn't a hash - (BUGF) anaconda doesn't honor nameserver always, set manually - (BUGF) various post install network snippet fixes - (BUGF) fix OMAPI support's (note: deprecated) usage of subprocess - (SPEC) fix build for rawhide now that yaboot is included - (BUGF) allow src and noarch arches for repo objects - (BUGF) move to PyYAML for YAML implementation since it is better maintained - (BUGF) fixed web-interface editing of ownership - (BUGF) fixed web-interface editing of management classes - (BUGF) don't run full sync in import, sync makes the install system unavailable for very short periods of time - (FEAT) XMLRPC functions for searching objects, just like the Python API has - (BUGF) make "find repo" CLI command search repos, not systems - XXX - 1.4.2 - (BUGF) fix WTI power templates - (FEAT) add WTI power type - (BUGF) remove Python 2.6/3000 warnings - (BUGF) fix typo in network template (DNS enumeration) - (BUGF) make buildiso work for systems that are using bonding - (BUGF) blending fix for --template-files - (BUGF) cobbler check ignores comments in TFTP config - (BUGF) fix image type field editing in the webapp - (BUGF) allow deletion of eth0 if other interfaces are present - (BUGF) fix server-override setting in CLI for profile objects - (BUGF) systems and profiles are now sorted for "cobbler buildiso" - (BUGF) ensure that directories exist when installing a template file - (BUGF) fix for typo in network config snippet - (BUGF) fix for func integration script (should overwrite config, not append) - (BUGF) append nameservers and gateway, if set, for buildiso - Fri Jan 09 2009 - 1.4.1 - (BUGF) Cobbler check looks for right httpd on SUSE - (BUGF) post_install_network_config snippet had bash errors - (BUGF) don't run restorecon programatically - (FEAT) have cobbler check mention when semanage rules should be applied - (BUGF) fix an obscure xmlrpclib corner case where a string that looks like an int can't be served because xmlrpclib thinks it's an int. Applies to XMLRPC consumers only, not CobblerWeb or the cobbler CLI. - (BUGF) fix external kickstart URLs (not templated) and per-system overrides. - (BUGF) fix 'cobbler check' code for SuSE services - (FEAT) batch editing on the system page. 
- Fri Dec 19 2008 - 1.4 - (----) Stable release of 1.3 development branch - Fri Dec 19 2008 - 1.3 - (FEAT) ACLs to extend authz (see Wiki) - (FEAT) puppet integration with --mgmt-classes and external nodes URL - (FEAT) added puppet external nodes script, cobbler-ext-nodes see https://fedorahosted.org/cobbler/wiki/UsingCobblerWithConfigManagementSystem - (FEAT) ability to use --enable-menu=0/1 to hide profiles from the PXE menu, and config setting to change default value for --enable-menu - (FEAT) added livecd based physical machine cloner script to "contrib" - (FEAT) enable import for debian ISOs and mirrors (1 distro at a time for now) - (FEAT) auto-create rescue profile objects - (FEAT) included network_config snippet and added --static=0/1 to system objects - (FEAT) cobbler report gains additional options for Wiki formatting, csv, and showing only certain fields - (FEAT) changed default kernel options to include ksdevice=bootif (not ksdevice=eth0) and added ipappend 2 to PXE - (FEAT) distro edits now no longer require a sync to rebuild the PXE menu - (BUGF) minor tweak to the blender function to remove a certain class of typing errors where a string is being blended with a list, should not have any noticeable effect on existing installs - (BUGF) add missing import of "_" in remote.py - (FEAT) upgraded webui editing for multiple NICs - (FEAT) "template_universe" variable created for snake's usage, variable contains all template variables and is also passed to the template. - (FEAT) refactored import with better Debian/Ubuntu support - (FEAT) Func integration snippets and new settings - (FEAT) settings file and modules.conf now generated by setup.py using templates - (FEAT) --template-files makes cobbler more of a full config management system! - (FEAT) cobbler reposync now supports --tries=N and --no-fail - (FEAT) duplicate hostname prevention, on by default - (FEAT) make import work for Scientific Linux - (FEAT) distro remove will remove mirrored content when it's safe to do so - (FEAT) repo remove will remove mirrored repo content - (FEAT) added snippet for better post-install network configuration - (BUGF) duplicate repo suppression to prevent errors in certain Anacondas - (FEAT) post_network_snippet/interfaces can set up VLANs - (FEAT) call unittests with nose, which offers cleaner output, see tests/README - (FEAT) update included elilo to 3.8 - (BUGF) quote append line for elilo - (BUGF) added ExcludeArch: ppc64 - (FEAT) --environment parameter added to reposync - (BUGF) have triggers ignore dotfiles so directories can be version controlled (and so on) - (FEAT) init scripts and specfiles now install on SuSE - (TEST) added remote tests, moved existing unit tests to api.py methods - (FEAT) import can auto assign kickstarts based on distro (feature for beaker) - (FEAT) import now saves the dot version of the OS in the comment field - (FEAT) import now adds tree build time - (FEAT) --comment field added to all objects - (FEAT) keep creation and modification times with each object - (FEAT) added ppc imports and arches to supplement s390x (and assoc koan change) - (FEAT) added number of interfaces required to each image - ~~~ 1.3.2 - (BUGF) fix for possible import of insecure modules by Cheetah in Cobbler web. 
- (FEAT) new version function - (FEAT) misc WUI organization - (FEAT) allow auth against multiple LDAP servers - (FEAT) replicate now copies image objects - (BUGF) replicate now uses higher level API methods to avoid some profile sync errors when distros fail - (FEAT) /etc/cobbler/pxe and /etc/cobbler/power are new config file paths - (FEAT) /var/lib/cobbler/kickstarts is the new home for kickstart files - (FEAT) webui sidebar reorg - (FEAT) manage_dhcpd feature (ISC) now checks service status before restarting - (BUGF) omapi for manage_dhcp feature (ISC) now defaults to off - (FEAT) added module whitelist to settings file - (BUGF) don't let the service handler connect to itself over mod_python/proxy, RH 5.3 does not like it - (FEAT) mtimes and ctimes and uids in the API - (FEAT) new functions to get objects based on last modified time, for sync with other apps - ~~~ 1.3.3 Test release - (FEAT) Python 2.6 specfile changes - ~~~ 1.3.X - (FEAT) cobbler check output is always logged to /var/log/cobbler/check.log - (FEAT) new redhat_register snippet and --redhat-management-key options! - (BUGF) SELinux optimizations to symlink/hardlink/chcon/restorecon behavior - (----) no SELinux support on EL 4 - (FEAT) yum_post_install_mirror now on by default - ??? - 1.2.9 - (BUGF) do not allow unsafe Cheetah imports - Wed Oct 15 2008 - 1.2.8 - (BUGF) make cobbler read /etc/cobbler/settings.py (introduced in 1.2.6) - Tue Oct 14 2008 - 1.2.7 - (BUGF) go easier on restrictions when importing subprofiles to prevent load errors - (FEAT) added debuginator script to scripts/ (not shipped with RPM) - Fri Oct 10 2008 - 1.2.6 - (BUGF) fix image vs system parentage problem affecting cobbler replicate - (BUGF) fix restart-services trigger to not restart dns unnecessarily - (BUGF) add missing variable newname to signature for remote copy_ functions - (BUGF) fix for the ownership module when editing kickstarts - Fri Sep 26 2008 - 1.2.5 - (BUGF) expose --arch for "cobbler image add" - (BUGF) unbreak dnsmasq DHCP management, similar to ISC bug - (BUGF) fix --arch for cobbler distro add/edit - (BUGF) fix merge error with remote.py's remove_profile function (fix webapp) - (BUGF) make keep_updated and mirror_locally checkboxes in WebUI display correctly - Mon Sep 08 2008 - 1.2.4 - (BUGF) simple rebuild to remove cli_report.py, which is not in git - Sun Sep 07 2008 - 1.2.3 - (BUGF) fix to manage_isc.py code - Fri Sep 05 2008 - 1.2.2 - (BUGF) fix to where elilo location is loaded in manage_isc.py - (BUGF) populate netboot_enabled in WebUI correctly - (BUGF) make RPM own some unowned directories in "triggers" - (BUGF) add named_conf setting to settings - Tue Sep 02 2008 - 1.2.1 - (BUGF) fix merge problem with 1.2 - Fri Aug 29 2008 - 1.2.0 - (FEAT) All development work from 1.X merged in - (FEAT) when --netboot-enabled is toggled, rather than deleting the PXE config, create a local boot PXE config - (BUGF) disable some s390 aspects of cobbler check until it is supported - (FEAT) new --os-version, better validation for --breed, --breed/--os-version also for images - ??? 
- 1.1.1 - (FEAT) make template replacement use regex module - (BUGF) remove bootloader check in settings as that code doesn't need it - (BUGF) refinements to image handling - (FEAT) getks command added to command line - (BUGF) don't print traceback during certain SystemExits - (BUGF) --help now works more intuitively on odd command line usage - (BUGF) use pxesystem for systems, not pxeprofile - (FEAT) make Cheetah formatting errors contain help text for users - (FEAT) --kopts-post can configure post-install kernel options - (BUGF) subtemplates are now errorCatcher Echo compatible - ??? - 1.1.0 - devel branch - added cobbler aclsetup command for running cobbler as non-root - added cobbler find command to do searches from the command line - fix mkdir invocation - improved cobbler replicate, it now can rsync needed files - further templatize ISC dhcp config file (pay attention to /etc/cobbler/dhcp.template.rpmnew !) - fix for NFS imported URLs during kickstart generation - added yumreposync_flags to settings, default "-l" for use plugins - added an extra mkdir for rhn's reposync, though I believe the -l cures it already - allow mod python bits to work via non-80 http ports - when mirroring repos, act as 686 not 386 to get all the kernel updates - upgrades to cobbler buildiso - added patch to allow --in-place editing of ksmeta/kopts - added patch to allow multiple duplicate kernel options - fix kickstart serving when the tree is on NFS - s390x import, added support for s390x "pseudo-PXE" trees - added support for tracking image objects and virtual ISO images - support for multiple copies of the same kernel option in kopts - add cobbler bash completion script - fix bug with 255 kernel options line warning not firing soon enough - add findks.cgi support back as http://server/cblr/svc/op/findks - merge patch to surface status over API - make yum repos served up for /etc/yum.repos.d fully dynamic (mod_python) - cobbler find API calls and command line usage can now use fnmatch (wildcards) - return code cleanup (0/1 now more predictable) - added comments to /etc/cobbler/modules.conf - during import non-xen kernels will default --virt-type to qemu - when editing/adding profiles, auto-rebuild the PXE menu - added http://cobbler.example.org/cblr/svc/op/list/what/systems (or profiles, etc) - in the webui, only show compatible repos when editing a profile - refresh cobblerd cache before adding objects - system object IP's ok in CIDR notation (AAA.BBB.CCC.DDD/EE) for defining PXE behavior. - split partition select template into two parts (old one still ships) - cleanup some stock kickstarts so we always use $kickstart_start - hook ctrl+c during serializer writes to prevent possible corruption of data in /var/lib/cobbler - added 'serializer_catalog' as the new default serializer. It is backward compatible and much faster. 
- removed serializer_shelve - webui page lookups don't load the full object collection - systems can also inherit from images - changes to PXE images directly - bootloaders is no longer a config file setting - we can now look for syslinux in one of two places (/usr/lib, /usr/share) - cobbler hardlinks a bit more when it can for /var/www image copies - add Xen FV and VMware virt types to WebUI - Thu Jul 17 2008 - 1.0.4 (tentative) - Backported findks.cgi to mod_python, minor mod_python svc handler changes - Wed Jun 03 2008 - 1.0.3 - Fix typo in replicate code - remove rhpl reference - scrub references to manage_*_mode and rewrite the restart-services trigger - add new settings to control whether the restart-trigger restarts things - yum reposync should also pull i686 kernels, not just i386 - make cobblerd close file handles - fix kickstart serving when the tree is on NFS - fix missing reposync createdir (also now in stable branch) - add back missing remove_profile/remove_repo - remove profile_change support - Mon Jun 09 2008 - 1.0.2 - Fix mkdir invocation - Fix error message output from failed kickstart rendering - manpage edits - make buildiso work for SuSE - Tue Jun 03 2008 - 1.0.1 - Fix misformatted warning in "check" - Do not have RPM own tftpboot, just generate files in TFTP dir as detected - Default arches to 'i386' not 'x86' for consistency, esp. in import - When querying kickstart templates, do not return directories - Make triggers for add/delete work for renames and copies (same triggers) - Do not cache snippets so they can be tweaked w/o starting the service - Make manpage reference /etc/cobbler/settings, not /var/lib - Added manage_forward_zones/manage_reverse_zones to included config file - Fix python double-use-of-parameter error - Mon May 12 2008 - 0.9.2 - run createrepo with less preconditions during cobbler reposync - doc upgrades and error handling for "cobbler replicate" - improved error message that occurs when copying from nfs w/ rootsquash - mac duplication checking improvements for CLI - add warning to cobbler check if selinux is on and Apache boolean not set - added warning to cobbler check if templates use the default password - setting per-system kickstart template to "" or "delete" restores inheritance - if repos in profiles no longer exist, remove noisy warning, move to "check" - move warning about reposync to check also (check is more useful at runtime now) - build pxe trees for systems even if interface0 is undefined - add sync() back into XMLRPC API, missing in 0.9.1 - added 'distro_name', 'profile_name', and 'system_name' to generated template vars - it's now possible to undefine a --ksmeta or kopts symbol defined in a parent with "!foo" - log errors while rendering kickstarts - comments added to the config file, neat! 
- settings file is now /etc/cobbler/settings - Fri May 09 2008 - 0.9.1 - patch to allow yumopts to override gpgcheck - applied patch to send hostname from ISC - added patch to allow --kopts/--ksmeta items to be cleared with --kopts=delete - tftpboot location is now inferred from xinetd config (added for F9 compat) - added authn_ldap and stub for authz_configfile - authz_configfile allows filtering ldap/other users by config file - WebUI now has checkbox on distro/profile for deleting child objects - cli has different semantics between "add" and "edit" now for safety reasons - cobbler wants to keep IPs/MACs unique now in configuration (can be disabled) - added --clobber option to allow add to overwrite existing objects (for scripts) - updated/tested kerberos support for those needing to auth against it - update menu.c32 to 3.62 to allow for timeouts during menu (and future submenu) - update PXE defaults to invoke menu.c32 automatically w/ timeout - removed dependency on rhpl - import can now take an --arch (and is recommended usage) - now possible to override snippets on a profile/system specific basis - provide a different default sample kickstart for imports of F8 and later - support for kerberos authentication - revamped pre/post install triggers system (triggered via cgi from kickstart wget) - logrotate should not send emails to root when restarting services - default core (but not repo add) repos to priority 1 (lowest) if using priorities plugin - change default authentication to deny_all, xmlrpc_rw_enabled now on by default - additional fix for mod_python select box submissions - set repo arch if found in the URL and no --arch is specified - CGI scripts have been moved under mod_python for speed/consolidation - kickstart templates are now evaluated dynamically - optional MAC registration is now built-in to requesting kickstarts - legacy static file generation from /var/www/cobbler removed - implement "cobbler ___ dumpvars --name=X" feature to show template vars - validateks now works against all URLs as opposed to rendered local files - now possible to create new kickstarts in webui, and delete unused ones - support for OMAPI for avoid dhcp restarts - support for managing BIND - xen kernel (PV) distros do not get added to PXE menus as they won't boot there - cobbler buildiso command to build non live ISOs - cobbler replicate command - added cobbler repo option --mirror-locally to reference external repos without mirroring - all virt parameters on profiles can now be overriden on cobbler profile objects - added some additional links for kickstart viewing/editing to the web page - ??? - 0.8.3 - Make createrepo get run for local cobbler reposync invocations as needed - fix WebUI documentation URL - fix bug in /etc/cobbler/modules.conf regarding pluggable authn/z - fix default flags for yumdownloader - fix for RHEL 4u6 DVD/tree import x86_64 arch detection - fix for dnsmasq template file host config path - fix dnsmasq template to point at the correct hosts file - force all names to be alphanumeric - all mod python pieces now happy with Unicode output * Fri Feb 22 2008 - 0.8.2 - fix to webui to allow repos to be edited there on profile page - disable local socket XMLRPC as nothing is using it. 
- fixed findks.cgi so it supports multiple NICs - import now supports both --path and --mirror as aliases, as before - added change_profile.cgi for changing profiles from CGI - added register_mac.cgi * Wed Feb 20 2008 - 0.8.1 - bugfix in reposync code - don't print tracebacks on SystemExit from optparse - manpage tweaks * Fri Feb 15 2008 - 0.8.0 (TBD) - stable release of 0.7.* branch plus ... - fixed potential user problem with source_repos in upgrade scenario - additional higher level API functions for find, fixes for other higher level API functions - better messaging when insufficient permissions on needed files - update permissions on reposync fixes * Thu Jan 31 2008 - 0.7.2 (0.8 rc) - default_virt_file_size and default_virt_ram added to settings - enforce permissions/selinux context after reposync - better API for copying/renames, API consistancy cleanup - support for renames that resolve dependencies, inclusion in CLI+webapp - remove leading newline in rendered template files, which apparently breaks AutoYAST? - recursive syncs automatically sync all subobjects when editing parent objects (default behavior) - deletes can now be done recursively (optional --recursive on distro/profile remove) - 'cobbler list' is now (re)sorted * Wed Jan 09 2008 - 0.7.1 - allow imports to force usage of a specific kickstart template with --kickstart - added --yumopts parameter to repos (works just like --kopts/--ksmeta) - minor doc fixes - fix for name of F8 comps.xml file - added option --rsync-flags to import command - added http_port to settings to run Apache on non-80 - rsync during createrepo now keeps filesystem permissions/groups - ... * Mon Dec 10 2007 - 0.7.0 - Testing branch - Fix bug related to <> and kickstart args - Make CLI functions modular and use optparse - Quote wget args to avoid creating stray files on target system - Support Xen FV as virt type (requires F8+) - Implemented fully pluggable authn/authz system - WebUI is now mod_python based - Greatly enhanced logging (goes to /var/log/cobbler/cobbler.log) - ... 
* Wed Nov 14 2007 - 0.6.4 - Changed permissions of auth.conf - Fixes for working with rhn_yum_plugin - still allow repo configuration for distro repos that have just 1 repo (like C5.1) - disable CGI weblogging by default (backend logging TBA) - fix WebUI handling of keep_updated (repo field) and netboot_enabled (system field) - disable the blender_cache as it's running afoul of the sync code - update htaccess file to only authenticate the webui, not nopxe.cgi and findks.cgi * Wed Nov 07 2007 - 0.6.3 - Be able to define and use Multiple NICs per system - Add --virt-cpus to profile editing - Fix bug where WUI (XMLRPC) auth wasn't supported on EL4 - Add --virt-bridge to profile editing and NICs - Added serializer_shelve (as option) for added performance/persistence over YAML, experimental in /etc/cobbler/modules.conf, see Wiki - Backup state files and migrate state structures upon RPM upgrade - Added some more redundant files (for unsupported distros) to the rsync.exclude file - added pre-sync and post-sync triggers, service restarts are now handled by /var/lib/cobbler/triggers - webui now uses htaccess (see manpage and Wiki for setup instructions) - added pagination to the WUI to keep pages from growing overly long - added --server-override parameter for help with multi-subnet configurations (also see Wiki) - removed yum-utils as a hard requirement, cobbler check now looks for yum-utils - fixed bug where cobbler would try to copy hardlinks to themselves during sync - misc random bugfixing * Fri Sep 28 2007 - 0.6.2 - cobbler repo auto-add to discover yum repos automatically - fix bug that allows empty mac addresses (None) in dhcpd.conf - kickstarts automatically save kickstart file used to /root/cobbler.ks - allow multiple (comma-separated) values for --virt-size - removed deprecated 'enchant' function (use SSH and koan instead) - cleanup of a few unused settings - allow for serialization modules to be selected in /etc/cobbler/modules.conf - patch to allow for reposync of specific repos, even if not set to update - added --dhcp-tag section for better DHCP customization (esp with multiple subnets) - added Apache proxying around XMLRPC port for wider network access - refactor XMLRPC API and establish a read-write API - allow for configuring of read-write XMLRPC users in /etc/cobbler/auth.conf - WebUI - packaged /var/lib/cobbler/settings as a config file - added BuildRequires to help build on other platforms - relocate cgi-bin files to cgi-bin/cobbler for namespacing - fix syslog logging for systems not in the cobbler DB. - fix bug in which non-lowercase intermediate objects could be deleted * Thu Aug 30 2007 - 0.6.1 - re-enable --resolve in yumdownloader (cobbler repo mgmt feature) - fix get_distros_for_koan API function in cobblerd (not used by koan) - allow find API to search by arbitrary fields - status and logging now shows system names - upgraded init scripts - zeroconf/avahi publishing for cobblerd service - logRequests = 0 for XMLRPC. Make it be quiet. - ignore subdirectories of /var/lib/cobbler/snippets - fixed bug in graph rendering that allowed for upward property propagation in some cases - fixed bug that did not correctly evaluate repository settings of inherited sub-profiles/objects - tweaked domU sample kickstart to include wget - added some more unit tests - fix typo down one error path in cobbler sync. - fix reposync handling when using rsync protocol and directory paths do not contain arch - allow basic usage of Cheetah variables in config files @@server@@, etc. 
- fix auto-repo attachment for distros with split trees (i.e. RHEL5) * Thu Aug 09 2007 - 0.6.0 - bugfix in error path in "cobbler check" - stable release for 0.5.x * Thu Jul 26 2007 - 0.5.2 (RC) - Have cobbler check ensure services are started - Add cobbler validateks command to look for broken rendered kickstarts - Added -v/--version - Added SNIPPET::foo capability to pull /var/lib/cobbler/snippets/foo into templates (anywhere) - Import can now take an --available-as=nfs://server:/mount/point to do cobbler import without mirroring - Feature to enable "pxe_just_once" for boot loop prevention * Fri Jul 20 2007 - 0.5.1 - Added logging for cobblerd -- /var/log/cobbler/cobblerd.log - Cobblerd now ignores XMLRPC IOError - Added findks.cgi - Misc bugfixing - Added --virt-path, --virt-type * Wed Jun 24 2007 - 0.5.0 - Remove hardcode of /var/www/cobbler in cobblerd - Improve various warning warning messages - cobbler (objecttype) (objectname) now gives info about the object instead of just all objects - Added --hostname to "cobbler system add", --ip-address (or --ip) is also a better alias for the misnamed --pxe-address - Optionally use dnsmasq for DHCP (and DNS!) instead of ISC dhcpd. - Add --mac and remove requirement for --name to be an ip, mac, or hostname. - Manpage cleanup - Patch to allow pre and post triggers - Patch to allow --createrepo-flags and to cache on import, fix multiple calls to createrepo - Various modifications to allow for profile inheritance - All variables in object tree now available for use in templating, nicer blending algorithms - Optional override of --kickstart in system object * Thu Apr 26 2007 - 0.4.8 - Make import friendlier for older distros - Make import friendlier for newer createrepos that don't have --basedir * Fri Apr 20 2007 - 0.4.7 - Disable mod_python tracker piece for RHEL5 (replacement eventual). - Kickstart tracking now understands Apache logs - Added support for --rpm-list parameter to "repo add" for download of partial content from repositories (ex: cobbler and koan from FC6extras, w/o games). - More consistant naming on imports, regardless of data source. - Teach cobbler to remove .olddata dirs, which can happen if createrepo crashes or is killed mid-process - Default yum_core_repos_from_server to 0 - Implemented triggers for add/delete commands - BootAPI and Config object classes are now Borg patterned to prevent duplication of config info from the API. - cobbler_syslogd -> cobblerd, now has XMLRPC component for koan >= 0.2.9 clients. Old clients still work. - Make cobbler_import work for Centos 5 - Removed requirements on what files that are parameters to --kernel and --initrd must be named. - Added support for "rename", "copy", and "edit" commands -- before there just was "add" and "remove" * Thu Apr 05 2007 - 0.4.6 - Bind cobbler_syslogd to all addresses - Store repos as list, not string - Fix traceback in cobbler_sync with older configurations (pre-kickstart tracking) - Make cobbler import feature better understand older RHEL and in-between builds of Fedora. - Make cobbler repo add/reposync understand http://, ftp://, and some limited support for RHN. - Add settings parameter to toggle core repo mirror behavior on/off. - Manpage cleanup. 
* Fri Mar 23 2007 - 0.4.5 - Removed legacy --virt-name parameter, requires koan upgrade to 0.2.8 * Fri Mar 23 2007 - 0.4.4 - Generate PXE configuration files from templates in /etc/cobbler to be more customizable - Fix bug with wrong kickstart metadata being used for import - Fix bug with argument parsing for --repos - Much cleaner distro/profile names with --import - For import, the "tree" parameter is now attached to the distro, not the profile - Add "links" directory in webdir for symlinking to full kickstart tree paths. - Misc tweaks to shorten kernel parameter length - Giving invalid arguments to "report" will show an error message - Distros, Profiles, and System names are now case insensitive. * Wed Feb 28 2007 - 0.4.3 - Added netboot_enabled option for systems to control install loops in programmatic context. - Disabling anchors in YAML serialization (which make files harder to edit) - Fix bug in ksmeta argument processing, takes whitespace again, not commas - Fix bug in old-style deserialization with str and int concatenation * Mon Feb 19 2007 - 0.4.2 - Fix bug in "cobbler system remove" * Mon Feb 19 2007 - 0.4.1 - Bundle menu.c32 for older distros - Unbundle Cheetah as it's available at http://www.python.org/pyvault/centos-4-i386/ * Mon Feb 19 2007 - 0.4.0 - Added feature to minimize the need to run "cobbler sync" for add commands Now only need to run sync when files change behind the scenes or when manually editing YAML - Moving back to Cheetah for templating (old kickstarts should escape $ with \$) - PXE menus for the default profile. Type "menu" at the prompt to get a menu, or wait for local boot. - Manpage cleanup and clarification - Bugfix: cobbler will no longer create repo files on remotes when --local-filename is not used for "repo add" * Mon Jan 28 2007 - 0.3.9 - Make init scripts correspond with FC-E guidelines * Thu Jan 24 2007 - 0.3.8 - Fixed minor bug in logfile processing related to 0.3.7 * Thu Jan 24 2007 - 0.3.7 - Default/examples kickstarts are now fully automatic (added hd type/size detection). - Kickstart tracking now includes remote syslog support, just run "cobbler sync" to enable. - "cobbler status" command improved to include syslog info/times. - Added fc6 kickstart file that was left out of the RPM earlier - Added mini domU kickstart - bugfix: don't install mod_python watcher on older Apache installs (like RHEL4) as it somehow corrupts downloads on older copies. kickstart tracking by syslog still works on those platforms. (This only applies to the cobbler server, not clients). * Thu Dec 21 2006 - 0.3.6 - locking feature now enabled - "enchant" now supports provisioning virtual images remotely when using --is-virt=yes - cobbler no longer restarts httpd if the config file already exists. - "cobbler repo sync" is now an alias for "cobbler reposync" - "cobbler list --something" can now be invoked as "cobbler something list" - "cobbler list" just shows names of items now - "cobbler report" is now used for showing full information output - "list" (as well as report) are now sorted alphabetically - basic kickstart tracking feature. requests on /var/www/cobbler get logged to /var/log/cobbler. * Wed Dec 20 2006 - 0.3.5 - Fixed bug in cobbler import related to orphan detection - Made default rsync.exclude more strict (OO langpacks and KDE translation) - Now runs createrepo during "cobbler import" to build more correct repodata - Added additional repo mirroring commands: "cobbler repo add", etc - Documentation on repo mirroring features. 
- fix bug in rsync:// import code that saved distributions in the wrong path - The --dryrun option on "cobbler sync" is now unsupported. - Fixed bug in virt specific profile information not being used with koan - import now takes --name in addition to --mirror-name to be more consistent - rsync repo import shouldn't assume SSH unless no rsync:// in mirror URL - strict host key checking disabled for "cobbler enchant" feature * Mon Dec 05 2006 - 0.3.4 - Don't rsync PPC content or ISO's on cobbler import - Manpage cleanup * Tue Nov 14 2006 - 0.3.3 - During "cobbler sync" only PXE-related directories in /tftpboot are deleted. This allows /tftpboot to be used for other purposes. * Thu Oct 25 2006 - 0.3.2 - By default, boot and install in text mode * Wed Oct 25 2006 - 0.3.1 - The app now refers to "virt" in many places instead of "xen". It's been coded such that files will migrate forward without any major issues, and the newer version of koan can still hit older releases (for now). The CLI still takes the --xen options as well as the new --virt options, as they are aliased. The API now exclusively just uses methods with "virt" in them, however. - ... * Tue Oct 24 2006 - 0.3.0 - Reload httpd during sync - New profiles without set kickstarts default to /etc/cobbler/default.ks though this can be changed in /var/lib/cobbler/settings - Better forward upgrades for /var/lib/cobbler/settings. New entries get added when they are referenced. * Tue Oct 24 2006 - 0.2.9 - Bug fix, enchant now detects if koan_path is not set - import now can do ssh rsync as well as just rsyncd - Misc bug fixes related to not choking on bad info - Fixed bug where --pxe-address wasn't surfaced - Sync is a little less verbose * Wed Oct 18 2006 - 0.2.8 - Performance speedups to "import" command - Bug fix, imported paths (again) convert slashes to underscores * Tue Oct 17 2006 - 0.2.7 - Removed pexpect to enhance support for other distros - enchant syntax changed (see NEWS) - now builds on RHEL4 * Tue Oct 17 2006 - 0.2.6 - Removing Cheetah and replacing w/ simpler templating system - Don't delete localmirror on sync * Mon Oct 16 2006 - 0.2.5 - New "import" feature for rsync:// mirrors and filesystem directories - Manpage clarification - "enchant" is now a subcommand of "cobbler system" and takes fewer arguments. - Several random bugfixes (mainly along error paths) * Wed Oct 11 2006 - 0.2.4 - Changes to make things work with python 2.3 (RHEL4, etc) - Updated YAML code to ensure better backward compatibility * Mon Oct 9 2006 - 0.2.3 - Cobbler now creates a profile and system listing (YAML) in /var needed by the next version of koan (which will be 0.2.1) - bugfix: enchant should reboot the target box - bugfix: enchant should fail if path to koan isn't configured * Fri Oct 6 2006 - 0.2.2 - bugfix: "--pxe-hostname" made available in CLI and renamed as "--pxe-address" - workaround: elilo doesn't do MAC address pxe config files, use IP for ia64 - bugfix: added next-server line for per-MAC dhcp configs - bugfix: fixed manpage errors * Thu Sep 28 2006 - 0.2.1 - New ability to "enchant" remote systems (see NEWS) - Misc. bugfixes * Fri Sep 22 2006 - 0.2.0 - New dhcp.d conf management features (see NEWS) - IA64 support (see NEWS) - dhcpd.conf MAC & hostname association features * Thu Sep 21 2006 - 0.1.1-8 - (RPM) Added doc files to doc section, removed INSTALLED_FILES * Wed Sep 20 2006 - 0.1.1-7 - Split HTTP and TFTP content to separate directories to enable running in SELinux targeted/enforcing mode. 
- Make the Virt MAC address a property of a system, not a profile - Misc. fixes, mainly along the error path * Fri Sep 15 2006 - 0.1.1-6 - Make koan own it's directory, add GPL "COPYING" file. * Wed Aug 16 2006 - 0.1.1-5 - Spec file tweaks only for FC-Extras * Thu Jul 20 2006 - 0.1.1-4 - Fixed python import paths in yaml code, which errantly assumed yaml was installed as a module. * Wed Jul 12 2006 - 0.1.1-3 - Added templating support using Cheetah * Thu Jul 9 2006 - 0.1.0-2 - Fedora-Extras rpm spec tweaks * Tue Jun 28 2006 - 0.1.0-1 - rpm genesis cobbler-2.4.1/COPYING000066400000000000000000000431041227367477500142070ustar00rootroot00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. 
This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. 
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. 
Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. 
For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. 
You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. cobbler-2.4.1/MANIFEST.in000066400000000000000000000006131227367477500147100ustar00rootroot00000000000000include COPYING AUTHORS README CHANGELOG include cobbler.spec config/cobblerd.service recursive-include aux * recursive-include bin * recursive-include config * recursive-include misc * recursive-include docs * recursive-include installer_templates * recursive-include kickstarts * recursive-include snippets * recursive-include scripts * recursive-include templates * recursive-include web * cobbler-2.4.1/Makefile000066400000000000000000000106261227367477500146170ustar00rootroot00000000000000#MESSAGESPOT=po/messages.pot TOP_DIR:=$(shell pwd) prefix=devinstall statepath=/tmp/cobbler_settings/$(prefix) all: clean build clean: -rm -rf build rpm-build -rm -f *~ -rm -f cobbler/*.pyc -rm -rf dist -rm -rf buildiso -rm -f MANIFEST -rm -f koan/*.pyc -rm -f config/version -rm -f docs/*.1.gz -rm -f *.tmp -rm -f *.log test: make savestate prefix=test make rpms make install make eraseconfig /sbin/service cobblerd restart -(make nosetests) make restorestate prefix=test /sbin/service cobblerd restart nosetests: PYTHONPATH=./cobbler/ nosetests -v -w newtests/ 2>&1 | tee test.log build: python setup.py build -f # Assume we're on RedHat by default ('apache' user), # otherwise Debian / Ubuntu ('www-data' user) install: build if [ -n "`getent passwd apache`" ] ; then \ python setup.py install -f; \ chown -R apache /usr/share/cobbler/web; \ chown -R apache /var/lib/cobbler/webui_sessions; \ else \ python setup.py install -f --install-layout=deb; \ chown -R www-data /usr/share/cobbler/web; \ chown -R www-data /var/lib/cobbler/webui_sessions; \ fi devinstall: -rm -rf /usr/share/cobbler make savestate make install make restorestate savestate: mkdir -p $(statepath) cp -a /var/lib/cobbler/config $(statepath) cp /etc/cobbler/settings $(statepath)/settings cp /etc/cobbler/modules.conf $(statepath)/modules.conf @if [ -d /etc/httpd ] ; then \ cp /etc/httpd/conf.d/cobbler.conf $(statepath)/http.conf; \ cp /etc/httpd/conf.d/cobbler_web.conf $(statepath)/cobbler_web.conf; \ else \ cp /etc/apache2/conf.d/cobbler.conf $(statepath)/http.conf; \ cp /etc/apache2/conf.d/cobbler_web.conf $(statepath)/cobbler_web.conf; \ fi cp /etc/cobbler/users.conf $(statepath)/users.conf cp /etc/cobbler/users.digest $(statepath)/users.digest cp /etc/cobbler/dhcp.template $(statepath)/dhcp.template cp /etc/cobbler/rsync.template $(statepath)/rsync.template # Assume we're on RedHat by default, otherwise Debian / Ubuntu restorestate: cp -a $(statepath)/config /var/lib/cobbler cp $(statepath)/settings /etc/cobbler/settings cp $(statepath)/modules.conf /etc/cobbler/modules.conf cp $(statepath)/users.conf /etc/cobbler/users.conf cp $(statepath)/users.digest /etc/cobbler/users.digest if [ -d /etc/httpd ] ; then \ cp $(statepath)/http.conf 
/etc/httpd/conf.d/cobbler.conf; \ cp $(statepath)/cobbler_web.conf /etc/httpd/conf.d/cobbler_web.conf; \ else \ cp $(statepath)/http.conf /etc/apache2/conf.d/cobbler.conf; \ cp $(statepath)/cobbler_web.conf /etc/apache2/conf.d/cobbler_web.conf; \ fi cp $(statepath)/dhcp.template /etc/cobbler/dhcp.template cp $(statepath)/rsync.template /etc/cobbler/rsync.template find /var/lib/cobbler/triggers | xargs chmod +x if [ -n "`getent passwd apache`" ] ; then \ chown -R apache /var/www/cobbler; \ else \ chown -R www-data /usr/share/cobbler/web/cobbler_web; \ fi if [ -d /var/www/cobbler ] ; then \ chmod -R +x /var/www/cobbler/web; \ chmod -R +x /var/www/cobbler/svc; \ fi if [ -d /usr/share/cobbler/web ] ; then \ chmod -R +x /usr/share/cobbler/web/cobbler/cobbler_web; \ chmod -R +x /srv/www/cobbler/svc; \ fi rm -rf $(statepath) completion: python mkbash.py webtest: devinstall make clean make devinstall make restartservices # Assume we're on RedHat by default, otherwise Debian / Ubuntu restartservices: if [ -x /sbin/service ] ; then \ /sbin/service cobblerd restart; \ /sbin/service httpd restart; \ else \ /usr/sbin/service cobblerd restart; \ /usr/sbin/service apache2 restart; \ fi sdist: clean python setup.py sdist rpms: clean sdist mkdir -p rpm-build cp dist/*.gz rpm-build/ rpmbuild --define "_topdir %(pwd)/rpm-build" \ --define "_builddir %{_topdir}" \ --define "_rpmdir %{_topdir}" \ --define "_srcrpmdir %{_topdir}" \ --define "_specdir %{_topdir}" \ --define '_rpmfilename %%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm' \ --define "_sourcedir %{_topdir}" \ -ba cobbler.spec eraseconfig: -rm /var/lib/cobbler/distros* -rm /var/lib/cobbler/profiles* -rm /var/lib/cobbler/systems* -rm /var/lib/cobbler/repos* -rm /var/lib/cobbler/networks* -rm /var/lib/cobbler/config/distros.d/* -rm /var/lib/cobbler/config/images.d/* -rm /var/lib/cobbler/config/profiles.d/* -rm /var/lib/cobbler/config/systems.d/* -rm /var/lib/cobbler/config/repos.d/* -rm /var/lib/cobbler/config/networks.d/* .PHONY: tags tags: find . \( -name build -o -name .git \) -prune -o -type f -name '*.py' -print | xargs etags -o TAGS -- cobbler-2.4.1/README000066400000000000000000000013741227367477500140370ustar00rootroot00000000000000Cobbler Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between lots of various commands and applications when rolling out new systems, and, in some cases, changing existing ones. It can help with installation, DNS, DHCP, package updates, power management, configuration management orchestration, and much more. Read more at http://www.cobblerd.org To view the manpages, install the RPM and run "man cobbler" or run "perldoc cobbler.pod" from a source checkout. "koan" also has a manpage. To build the RPM, run "make". Developers, try "make webtest" to do a local "make install" that preserves your configuration. cobbler-2.4.1/README.openvz000066400000000000000000000075711227367477500153640ustar00rootroot00000000000000Support for OpenVZ containers in Cobbler THIS FUNCTIONS CONSIDERED AS ALPHA STAGE FOR TESTING AND LIMITED UGAGE! USAGE IN PRODUCTION CAN BE DANGEROUS! YOU WARNED! Cobbler is amazing tool for deploying barebones and virtual machines and I think it is suitable for deploying OpenVZ containers too. Current support for OpenVZ is rather basic, but I think this functionality can reach level we have now for KVM. How to use it? 
Because an OpenVZ container is essentially a chrooted environment, cobbler and koan
are used to create it directly on an OpenVZ-enabled node. The cobbler and koan
workflow is the same as for other virtualization types - you define distros,
kickstarts, profiles, systems and so on - with a few additions. At the moment only
RHEL/CentOS 6 is handled; recent Fedora releases may also work, but nothing is done
for other distributions.

How it works?

All options are kept on the cobbler side, just as for other VMs. Besides the common
options you can pass OpenVZ-specific ones by defining them in ks_meta as vz_ prefixed,
lower-cased variables taken from this list:

KMEMSIZE, LOCKEDPAGES, PRIVVMPAGES, SHMPAGES, NUMPROC, VMGUARPAGES, OOMGUARPAGES,
NUMTCPSOCK, NUMFLOCK, NUMPTY, NUMSIGINFO, TCPSNDBUF, TCPRCVBUF, OTHERSOCKBUF,
DGRAMRCVBUF, NUMOTHERSOCK, DCACHESIZE, NUMFILE, AVNUMPROC, NUMIPTENT, DISKINODES,
QUOTATIME, VE_ROOT, VE_PRIVATE, SWAPPAGES, ONBOOT

(See ctid.conf(5) for the meaning of these parameters.)

Because cobbler does not have a dedicated place to keep the CTID, you MUST set it in
ks_meta (as shown in the example below). Keeping it there allows CTIDs to be allocated
from a single place on the cobbler side. PXE menu creation is turned off for OpenVZ
containers so that they do not pollute the menu.

For example:

# cobbler profile add --name=vz01 --distro=CentOS6-x86_64 --kickstart=/your/kickstart.cfg \
    --ks_meta="lang=ru_RU.UTF-8 keyb=ru vz_ctid=101 vz_swappages=0:2G vz_numproc=120:120" \
    --repos="centos6-x86_64-os centos-x86_64-updates" \
    --virt-type=openvz \
    --virt-ram=1024 \
    --virt-cpus=1

# cobbler system add --name=vz01 \
    --profile=vz01 \
    --virt-type=openvz \
    --virt-ram=1024 \
    --virt-cpus=1

# cobbler system edit --name=vz01 \
    --hostname=vz01.example.com \
    --interface=eth0 \
    --mac=YOUR_MAC_HERE \
    --static=1 \
    --ip-address=YOUR_IP \
    --subnet=MASK \
    --gateway=GATEWAY_IP \
    --name-servers=NAME_SERVERS_IPs

On the koan side:

# koan --server=COBBLER_IP --virt --system=vz01

This starts the installation (see the example commands at the end of this file for
checking the result). The ovz-install script installs all packages and groups listed
in the %packages section of the kickstart. As the installation root, ovz-install uses
/vz/private/$VEID (/vz/private/101 in the example above); this can be overridden with
the vz_ve_private variable in ks_meta (e.g. vz_ve_private=/some/path,
vz_ve_private=/other/path/$VEID or vz_ve_private=/some/path/101 - $VEID is replaced
with the CTID). After installation, ovz-install processes the "services" option from
the kickstart the way anaconda does, and runs the post-installation script defined in
the kickstart (in chroot only), so you can tune the container to your needs. At the
end of the process ovz-install adjusts the installed tree so that it is a proper
OpenVZ container - it creates device files, changes init scripts and so on. The
container is then started, so you should be able to log in to it as root with the
password you defined for root in the kickstart file.

Options for creating OpenVZ containers.

You should set virt-type to "openvz" in the profile or system to create an OpenVZ
container.

--virt-file-size
    not used for now. It may later be used for logical volume creation, for putting a
    quota on filesystem usage, or for creating containers in a ploop file.

--virt-ram
    as for other VMs

--virt-cpus
    as for other VMs

--virt-path
    not used now. The container will be created in /vz/private/$VEID, where $VEID is
    replaced by OpenVZ with the CTID (container ID). This can be redefined with the
    vz_ve_private variable placed in ks_meta.

--virt_bridge
    not used now.
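Once koan reports that the installation has finished, the new container can be checked
with the standard OpenVZ command line tools on the hardware node. These commands are
not part of cobbler or koan - they come with the vzctl/vzlist utilities - and the
CTID 101 simply matches the vz_ctid value used in the example above:

# vzlist -a
# vzctl status 101
# vzctl enter 101

If something looks wrong, the installed tree can also be inspected directly from the
hardware node under /vz/private/101 (or under the path you supplied via vz_ve_private).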
cobbler-2.4.1/aux/000077500000000000000000000000001227367477500137475ustar00rootroot00000000000000cobbler-2.4.1/aux/anamon000066400000000000000000000207751227367477500151560ustar00rootroot00000000000000#!/usr/bin/python """ This is a script used to automatically log details from an Anaconda install back to a cobbler server. Copyright 2008, Red Hat, Inc and Others various@redhat.com This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import sys import string import time import re import base64 import shlex # on older installers (EL 2) we might not have xmlrpclib # and can't do logging, however this is more widely # supported than remote syslog and also provides more # detail. try: import xmlrpclib except ImportError, e: print "xmlrpclib not available, exiting" sys.exit(0) # shlex.split support arrived in python-2.3, the following will provide some # accomodation for older distros (e.g. RHEL3) if not hasattr(shlex, "split"): shlex.split = lambda s: s.split(" ") class WatchedFile: def __init__(self, fn, alias): self.fn = fn self.alias = alias self.reset() def reset(self): self.where = 0 self.last_size = 0 self.lfrag='' self.re_list={} self.seen_line={} def exists(self): return os.access(self.fn, os.F_OK) def lookfor(self,pattern): self.re_list[pattern] = re.compile(pattern,re.MULTILINE) self.seen_line[pattern] = 0 def seen(self,pattern): if self.seen_line.has_key(pattern): return self.seen_line[pattern] else: return 0 def changed(self): if not self.exists(): return 0 size = os.stat(self.fn)[6] if size > self.last_size: self.last_size = size return 1 else: return 0 def uploadWrapper(self, blocksize = 262144): """upload a file in chunks using the uploadFile call""" retries = 3 fo = file(self.fn, "r") totalsize = os.path.getsize(self.fn) ofs = 0 while True: lap = time.time() contents = fo.read(blocksize) size = len(contents) data = base64.encodestring(contents) if size == 0: offset = -1 sz = ofs else: offset = ofs sz = size del contents tries = 0 while tries <= retries: debug("upload_log_data('%s', '%s', %s, %s, ...)\n" % (name, self.alias, sz, offset)) if session.upload_log_data(name, self.alias, sz, offset, data): break else: tries = tries + 1 if size == 0: break ofs += size fo.close() def update(self): if not self.exists(): return if not self.changed(): return try: self.uploadWrapper() except: raise class MountWatcher: def __init__(self,mp): self.mountpoint = mp self.zero() def zero(self): self.line='' self.time = time.time() def update(self): found = 0 if os.path.exists('/proc/mounts'): fd = open('/proc/mounts') while 1: line = fd.readline() if not line: break parts = string.split(line) mp = parts[1] if mp == self.mountpoint: found = 1 if line != self.line: self.line = line self.time = time.time() fd.close() if not found: self.zero() def stable(self): self.update() if self.line and (time.time() - self.time > 60): return 1 else: return 0 def anamon_loop(): alog = 
WatchedFile("/tmp/anaconda.log", "anaconda.log") alog.lookfor("step installpackages$") slog = WatchedFile("/tmp/syslog", "sys.log") xlog = WatchedFile("/tmp/X.log", "X.log") llog = WatchedFile("/tmp/lvmout", "lvmout.log") storage_log = WatchedFile("/tmp/storage.log", "storage.log") prgm_log = WatchedFile("/tmp/program.log", "program.log") vnc_log = WatchedFile("/tmp/vncserver.log", "vncserver.log") kcfg = WatchedFile("/tmp/ks.cfg", "ks.cfg") scrlog = WatchedFile("/tmp/ks-script.log", "ks-script.log") dump = WatchedFile("/tmp/anacdump.txt", "anacdump.txt") mod = WatchedFile("/tmp/modprobe.conf", "modprobe.conf") kspre = WatchedFile("/tmp/ks-pre.log", "ks-pre.log") # Setup '/mnt/sysimage' watcher sysimage = MountWatcher("/mnt/sysimage") # Monitor for {install,upgrade}.log changes package_logs = list() package_logs.append(WatchedFile("/mnt/sysimage/root/install.log", "install.log")) package_logs.append(WatchedFile("/mnt/sysimage/tmp/install.log", "tmp+install.log")) package_logs.append(WatchedFile("/mnt/sysimage/root/upgrade.log", "upgrade.log")) package_logs.append(WatchedFile("/mnt/sysimage/tmp/upgrade.log", "tmp+upgrade.log")) # Monitor for bootloader configuration changes bootloader_cfgs = list() bootloader_cfgs.append(WatchedFile("/mnt/sysimage/boot/grub/grub.conf", "grub.conf")) bootloader_cfgs.append(WatchedFile("/mnt/sysimage/boot/etc/yaboot.conf", "yaboot.conf")) bootloader_cfgs.append(WatchedFile("/mnt/sysimage/boot/efi/efi/redhat/elilo.conf", "elilo.conf")) bootloader_cfgs.append(WatchedFile("/mnt/sysimage/etc/zipl.conf", "zipl.conf")) # Were we asked to watch specific files? watchlist = list() waitlist = list() if watchfiles: # Create WatchedFile objects for each requested file for watchfile in watchfiles: if os.path.exists(watchfile): watchfilebase = os.path.basename(watchfile) watchlog = WatchedFile(watchfile, watchfilebase) watchlist.append(watchlog) # Use the default watchlist and waitlist else: watchlist = [alog, slog, dump, scrlog, mod, llog, kcfg, storage_log, prgm_log, vnc_log, xlog, kspre] waitlist.extend(package_logs) waitlist.extend(bootloader_cfgs) # Monitor loop while 1: time.sleep(5) # Not all log files are available at the start, we'll loop through the # waitlist to determine when each file can be added to the watchlist for watch in waitlist: if alog.seen("step installpackages$") or (sysimage.stable() and watch.exists()): debug("Adding %s to watch list\n" % watch.alias) watchlist.append(watch) waitlist.remove(watch) # Send any updates for wf in watchlist: wf.update() # If asked to run_once, exit now if exit: break # Establish some defaults name = "" server = "" port = "80" daemon = 1 debug = lambda x,**y: None watchfiles = [] exit = False # Process command-line args n = 0 while n < len(sys.argv): arg = sys.argv[n] if arg == '--name': n = n+1 name = sys.argv[n] elif arg == '--watchfile': n = n+1 watchfiles.extend(shlex.split(sys.argv[n])) elif arg == '--exit': exit = True elif arg == '--server': n = n+1 server = sys.argv[n] elif arg == '--port': n = n+1 port = sys.argv[n] elif arg == '--debug': debug = lambda x,**y: sys.stderr.write(x % y) elif arg == '--fg': daemon = 0 n = n+1 # Create an xmlrpc session handle session = xmlrpclib.Server("http://%s:%s/cobbler_api" % (server, port)) # Fork and loop if daemon: if not os.fork(): # Redirect the standard I/O file descriptors to the specified file. 
DEVNULL = getattr(os, "devnull", "/dev/null") os.open(DEVNULL, os.O_RDWR) # standard input (0) os.dup2(0, 1) # Duplicate standard input to standard output (1) os.dup2(0, 2) # Duplicate standard input to standard error (2) anamon_loop() sys.exit(1) sys.exit(0) else: anamon_loop() cobbler-2.4.1/aux/anamon.init000066400000000000000000000037241227367477500161130ustar00rootroot00000000000000#!/bin/bash ## BEGIN INIT INFO # Provides: anamon # Default-Start: 3 5 # Default-Stop: 0 1 2 4 6 # Required-Start: # Should-Start: $network # Short-Description: Starts the cobbler anamon boot notification program # Description: anamon runs the first time a machine is booted after # installation. ## END INIT INFO # # anamon: Starts the cobbler post-install boot notification program # # chkconfig: 35 99 95 # # description: anamon runs the first time a machine is booted after # installation. # LOCKFILE="/var/lock/subsys/anamon" CFGFILE="/etc/sysconfig/anamon" # Source function library. . /etc/init.d/functions # Source anamon config . $CFGFILE LOGFILES=${LOGFILES:-/var/log/boot.log} # FIXME - can we rely on the koan snippet to update /etc/profile.d/cobbler.sh? if [ -z "$COBBLER_SERVER" ]; then echo "No COBBLER_SERVER defined in $CFGFILE" exit 1 fi if [ -z "$COBBLER_NAME" ]; then echo "No COBBLER_NAME defined in $CFGFILE" exit 1 fi if [ -z "$LOGFILES" ]; then echo "No LOGFILES defined in $CFGFILE" exit 1 fi start() { echo -n $"Starting anamon: " daemon /usr/local/sbin/anamon --watchfile \"$LOGFILES\" --name $COBBLER_NAME --server $COBBLER_SERVER --port ${COBBLER_PORT:-80} --exit RETVAL=$? [ $RETVAL -eq 0 ] && touch $LOCKFILE echo # Disable service start chkconfig anamon off return $RETVAL } stop () { echo -n $"Shutting down anamon: " killproc /usr/local/sbin/anamon RETVAL=$? [ $RETVAL -eq 0 ] && rm -f $LOCKFILE echo return $RETVAL } restart() { stop start } case "$1" in start) start ;; stop) stop ;; restart) restart ;; condrestart) if [ -f $LOCKFILE ]; then restart fi ;; status) status anamon RETVAL=$? ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart}" RETVAL=2 ;; esac exit $RETVAL cobbler-2.4.1/bin/000077500000000000000000000000001227367477500137225ustar00rootroot00000000000000cobbler-2.4.1/bin/cobbler000077500000000000000000000015011227367477500152550ustar00rootroot00000000000000#!/usr/bin/python """ Wrapper for cobbler Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
""" import cobbler.cli as app import sys PROFILING = False if PROFILING: print "** PROFILING **" import hotshot, hotshot.stats prof = hotshot.Profile("cobbler.prof") prof.runcall(app.main) prof.close() stats = hotshot.stats.load("cobbler.prof") stats.strip_dirs() stats.sort_stats('time') print "** REPORT **" stats.print_stats(100) sys.exit(0) else: sys.exit(app.main()) cobbler-2.4.1/bin/cobbler-ext-nodes000066400000000000000000000007761227367477500171730ustar00rootroot00000000000000#!/usr/bin/python import yaml # PyYAML version import urlgrabber import sys if __name__ == "__main__": hostname = None try: hostname = sys.argv[1] except: print "usage: cobbler-ext-nodes " if hostname is not None: conf = open("/etc/cobbler/settings") config = yaml.safe_load(conf.read()); conf.close() url = "http://%s:%s/cblr/svc/op/puppet/hostname/%s" % (config["server"], config["http_port"], hostname) print urlgrabber.urlread(url) cobbler-2.4.1/bin/cobbler-register000077500000000000000000000010321227367477500170760ustar00rootroot00000000000000#!/usr/bin/python """ cobbler-register wrapper script. See 'man cobbler-register' for details. Copyright 2009 Red Hat, Inc and Others. Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. """ import sys import koan.register as register sys.exit(register.main() or 0) cobbler-2.4.1/bin/cobblerd000077500000000000000000000053611227367477500154310ustar00rootroot00000000000000#!/usr/bin/python """ Wrapper for cobbler's remote syslog watching daemon. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. """ import sys import os import cobbler.cobblerd as app import logging import cobbler.utils as utils import traceback import optparse import cobbler.api as cobbler_api def daemonize_self(): # daemonizing code: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66012 # logger.info("cobblerd started") try: pid = os.fork() if pid > 0: # exit first parent sys.exit(0) except OSError, e: print >>sys.stderr, "fork #1 failed: %d (%s)" % (e.errno, e.strerror) sys.exit(1) # decouple from parent environment os.chdir("/") os.setsid() os.umask(022) # do second fork try: pid = os.fork() if pid > 0: # print "Daemon PID %d" % pid sys.exit(0) except OSError, e: print >>sys.stderr, "fork #2 failed: %d (%s)" % (e.errno, e.strerror) sys.exit(1) dev_null = file('/dev/null','rw') os.dup2(dev_null.fileno(), sys.stdin.fileno()) os.dup2(dev_null.fileno(), sys.stdout.fileno()) os.dup2(dev_null.fileno(), sys.stderr.fileno()) def main(): op = optparse.OptionParser() op.set_defaults(daemonize=True, log_level=None) op.add_option('-B', '--daemonize', dest='daemonize', action='store_true', help='run in background (default)') op.add_option('-F', '--no-daemonize', dest='daemonize', action='store_false', help='run in foreground (do not daemonize)') op.add_option('-f', '--log-file', dest='log_file', metavar='NAME', help='file to log to') op.add_option('-l', '--log-level', dest='log_level', metavar='LEVEL', help='log level (ie. 
INFO, WARNING, ERROR, CRITICAL)') options, args = op.parse_args() # load the API now rather than later, to ensure cobblerd # startup time is done before the service returns api = None try: api = cobbler_api.BootAPI(is_cobblerd=True) except Exception, exc: if sys.exc_type==SystemExit: return exc.code else: # FIXME: log this too traceback.print_exc() return 1 logger = api.logger if options.daemonize: daemonize_self() try: app.core(api) except Exception, e: logger.error(e) traceback.print_exc() if __name__ == "__main__": main() cobbler-2.4.1/bin/debuginator.py000066400000000000000000000026541227367477500166060ustar00rootroot00000000000000#!/usr/bin/python """ Quick test script to read the cobbler configurations and touch and mkdir -p any files neccessary to trivially debug another user's configuration even if the distros don't exist yet Intended for basic support questions only. Not for production use. Copyright 2008-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import glob import cobbler.yaml as camel import os.path import os for f in glob.glob("/var/lib/cobbler/config/distros.d/*"): fh = open(f) data = fh.read() fh.close() d = camel.load(data).next() k = d["kernel"] i = d["initrd"] dir = os.path.dirname(k) if not os.path.exists(dir): os.system("mkdir -p %s" % dir) os.system("touch %s" % k) os.system("touch %s" % i) cobbler-2.4.1/bin/demo_connect.py000066400000000000000000000027671227367477500167450ustar00rootroot00000000000000#!/usr/bin/python """ Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ from xmlrpclib import ServerProxy import optparse if __name__ == "__main__": p = optparse.OptionParser() p.add_option("-u","--user",dest="user",default="test") p.add_option("-p","--pass",dest="password",default="test") # NOTE: if you've changed your xmlrpc_rw port or # disabled xmlrpc_rw this test probably won't work sp = ServerProxy("http://127.0.0.1:25151") (options, args) = p.parse_args() print "- trying to login with user=%s" % options.user token = sp.login(options.user,options.password) print "- token: %s" % token print "- authenticated ok, now seeing if user is authorized" check = sp.check_access(token,"imaginary_method_name") print "- access ok? 
%s" % check cobbler-2.4.1/bin/index.py000077500000000000000000000122011227367477500154020ustar00rootroot00000000000000""" mod_python gateway to all interesting cobbler web functions Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. """ from mod_python import apache from mod_python import Session from mod_python import util import xmlrpclib import cgi import os from cobbler.webui import CobblerWeb import cobbler.utils as utils import yaml # PyYAML XMLRPC_SERVER = "http://127.0.0.1:25151" # FIXME: pull port from settings #======================================= class ServerProxy(xmlrpclib.ServerProxy): """ Establishes a connection from the mod_python web interface to cobblerd, which incidentally is also being proxied by Apache. """ def __init__(self, url=None): xmlrpclib.ServerProxy.__init__(self, url, allow_none=True) xmlrpc_server = ServerProxy(XMLRPC_SERVER) #======================================= def __get_user(req): """ What user are we logged in as? """ req.add_common_vars() env_vars = req.subprocess_env.copy() return env_vars["REMOTE_USER"] def __get_session(req): """ Get/Create the Apache Session Object FIXME: any reason to not use MemorySession? """ if not hasattr(req,"session"): req.session = Session.MemorySession(req) return req.session #====================================================== def handler(req): """ Right now, index serves everything. Hitting this URL means we've already cleared authn/authz but we still need to use the token for all remote requests. """ my_user = __get_user(req) my_uri = req.uri sess = __get_session(req) if not sess.has_key('cobbler_token'): # using Kerberos instead of Python Auth handler? # We need to get our own token for use with authn_passthru # which should also be configured in /etc/cobbler/modules.conf # if another auth mode is configured in modules.conf this will # most certaintly fail. try: if not os.path.exists("/var/lib/cobbler/web.ss"): apache.log_error("cannot load /var/lib/cobbler/web.ss") return apache.HTTP_UNAUTHORIZED fd = open("/var/lib/cobbler/web.ss") data = fd.read() my_pw = data fd.close() token = xmlrpc_server.login(my_user,my_pw) except Exception, e: apache.log_error(str(e)) return apache.HTTP_UNAUTHORIZED sess['cobbler_token'] = token else: token = sess['cobbler_token'] # needed? # usage later req.add_common_vars() # process form and qs data, if any fs = util.FieldStorage(req) form = {} for x in fs.keys(): form[x] = str(fs.get(x,'default')) fd = open("/etc/cobbler/settings") data = fd.read() fd.close() ydata = yaml.safe_load(data) remote_port = ydata.get("xmlrpc_port", 25151) mode = form.get('mode','index') # instantiate a CobblerWeb object cw = CobblerWeb.CobblerWeb( apache = apache, token = token, base_url = "/cobbler/web/", mode = mode, server = "http://127.0.0.1:%s" % remote_port ) # check for a valid path/mode # handle invalid paths gracefully if mode in cw.modes(): func = getattr( cw, mode ) content = func( **form ) else: func = getattr( cw, 'error_page' ) content = func( "Invalid Mode: \"%s\"" % mode ) if content.startswith("# REDIRECT "): util.redirect(req, location=content[11:], permanent=False) else: # apache.log_error("%s:%s ... 
%s" % (my_user, my_uri, str(form))) req.content_type = "text/html;charset=utf-8" req.write(unicode(content).encode('utf-8')) if not content.startswith("# ERROR") and content.find("") == -1: return apache.OK else: # catch Cheetah errors and web errors return apache.HTTP_INTERNAL_SERVER_ERROR #====================================================== def authenhandler(req): """ Validates that username/password are a valid combination, but does not check access levels. """ my_pw = req.get_basic_auth_pw() my_user = req.user my_uri = req.uri try: token = xmlrpc_server.login(my_user,my_pw) except Exception, e: apache.log_error(str(e)) return apache.HTTP_UNAUTHORIZED try: ok = xmlrpc_server.check_access(token,my_uri) except Exception, e: apache.log_error(str(e)) return apache.HTTP_FORBIDDEN sess=__get_session(req) sess['cobbler_token'] = token sess.save() return apache.OK #====================================================== def accesshandler(req): """ Not using this """ return apache.OK #====================================================== def authenzhandler(req): """ Not using this """ return apache.OK cobbler-2.4.1/bin/koan000077500000000000000000000007701227367477500146040ustar00rootroot00000000000000#!/usr/bin/python """ Koan wrapper script. See 'man koan' for details. Copyright 2006-2009 Red Hat, Inc and Others. Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. """ import sys import koan.app as app sys.exit(app.main() or 0) cobbler-2.4.1/bin/ovz-install000077500000000000000000000176611227367477500161450ustar00rootroot00000000000000#!/usr/bin/env bash ## OpenVZ container-type virtualization installation functions. ## ## Copyright 2012 Sergey Podushkin ## ## This program is free software; you can redistribute it and/or modify ## it under the terms of the GNU General Public License as published by ## the Free Software Foundation; either version 2 of the License, or ## (at your option) any later version. ## ## This program is distributed in the hope that it will be useful, ## but WITHOUT ANY WARRANTY; without even the implied warranty of ## MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the ## GNU General Public License for more details. ## ## You should have received a copy of the GNU General Public License ## along with this program; if not, write to the Free Software ## Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA ## 02110-1301 USA ## PROFILE_NAME=$1 KICKSTART_URL=$2 ROOTDIR=$3 if [ -z $PROFILE_NAME -o -z $ROOTDIR -o -z $KICKSTART_URL ] ; then echo "Some arguments missing!" echo "Usage: $0 system_name kickstart_url private_dir" echo "Exiting..." exit 1 fi PATH=/bin:/sbin:/usr/bin:/usr/sbin KICKSTART="/tmp/$PROFILE_NAME-kickstart.cfg" # get the kickstart wget $KICKSTART_URL -q -O $KICKSTART # get the root password hash from kickstart ROOTPW=`cat $KICKSTART| awk '/^rootpw/{ print $NF }'` # what shell will be used for post-install script? 
get it from kickstart POST_INSTALL_SHELL=`cat $KICKSTART | grep '^%post.*--interpreter' | sed -n 's/^.*--interpreter \([^ ][^ ]*\).*/\1/;p'` # if not defined in kickstart, then use /bin/sh [ -z $POST_INSTALL_SHELL ] && POST_INSTALL_SHELL="/bin/sh" # where to store post-install script POST_INSTALL_SCRIPT="/tmp/$PROFILE_NAME-post-install" # add postinstall script from kickstart cat $KICKSTART | sed -n '0,/\%post/d;p' >$POST_INSTALL_SCRIPT # get list of services that should be enabled after installation (anaconda-like behaviour) SERVICES_ENABLED=`cat $KICKSTART| grep '^services.*--enabled' | sed 's/^services.*--enabled//; s/,/ /g'` # get list of services that should be disabled after installation (anaconda-like behaviour) SERVICES_DISABLED=`cat $KICKSTART| grep '^services.*--disabled' | sed 's/^services.*--disabled//; s/,/ /g'` # some our anaconda-like actions before postinstall script execution SERVICES_SCRIPT="/tmp/$PROFILE_NAME-services.sh" cat /dev/null >$SERVICES_SCRIPT # disable+enable service as directed in kickstart options # first disable services for serv in $SERVICES_DISABLED ; do echo chkconfig --level 345 $serv off >>$SERVICES_SCRIPT done # then enable services for serv in $SERVICES_ENABLED ; do echo chkconfig --level 345 $serv on >>$SERVICES_SCRIPT done # temporary yum config YUM_CONFIG="/tmp/$PROFILE_NAME-yum.cfg" echo -e "[main]\ncachedir=/var/cache/yum/\$basearch/\$releasever\nkeepcache=0\ndebuglevel=2\nlogfile=/var/log/yum.log\nexactarch=1\nobsoletes=1\ngpgcheck=0\nplugins=1\ndistroverpkg=centos-release\nreposdir=/dev/null\n" >$YUM_CONFIG echo -e "groupremove_leaf_only=1\ngroup_package_types=mandatory\ntsflags=nodocs\n" >>$YUM_CONFIG # --ignoremissing processing cat $KICKSTART| grep '\-\-ignoremissing'>/dev/null if [ $? -eq 0 ] ; then echo -e "skip_broken=1\n" >>$YUM_CONFIG ; fi # just new line echo >>$YUM_CONFIG # base package set we get from kickstart's url option (this option used only for http/ftp install, that is in use by cobbler, if kickstart use other method we'll FAIL!!!) BASE_REPO_URL=`cat $KICKSTART| grep ^url | sed 's/^url.*--url=//'` # put in to our config echo -e "[base-os]\nname=base-os\nbaseurl=$BASE_REPO_URL\nenabled=1\npriority=1\ngpgcheck=0\n\n" >>$YUM_CONFIG # get additional repos from kickstart and put it to config too cat $KICKSTART | grep ^repo | \ sed 's/^repo\ //; s/--//g' | \ while read repo_name repo_url ; do repo_tag=`echo $repo_name | sed 's/name=//'` echo -e "[$repo_tag]\n$repo_name\n$repo_url\nenabled=1\npriority=99\ngpgcheck=0\n" >>$YUM_CONFIG done # packages we don't need to install (but included in installed groups) EXCLUDED_PKGS="selinux-policy-targeted kernel* *firmware* b43*" # packages we want to be installed, besides of listed in kickstart PKGS_LIST="vim-minimal ssh-clients openssh-server logrotate" # temporary yum script YUM_SCRIPT="/tmp/$PROFILE_NAME-yum.yum" cp /dev/null $YUM_SCRIPT (echo config assumeyes True echo config gpgcheck False echo install $PKGS_LIST ) >>$YUM_SCRIPT if [ -n "$EXCLUDED_PKGS" ] ; then echo config exclude \"$EXCLUDED_PKGS\" >>$YUM_SCRIPT ; fi cat $KICKSTART| awk '/^\%packages/,/^\%post/{ print $0 }'|egrep -v '^#|^$|^%' | \ while read line ; do # if package name can start with '-' sign, that means we have to exclude it ACTION="install" IS_GROUP="" echo $line|grep '^-'>/dev/null if [ $? -eq 0 ] ; then line=`echo $line|sed 's/^-//'` ACTION="remove" fi echo $line|grep '^@' >/dev/null # if name starts with @ - it's a group if [ $? 
-eq 0 ] ; then line=`echo $line|sed 's/^@//'` IS_GROUP="group" fi line=`echo $line | sed 's/^\s*//'` echo ${IS_GROUP}${ACTION} \"$line\" >>$YUM_SCRIPT done cat $KICKSTART| grep '\-\-nobase'>/dev/null if [ $? -eq 0 ] ; then echo groupremove base >>$YUM_SCRIPT ; fi echo run >>$YUM_SCRIPT # install all packages in one pass by using yum shell #### THIS IS LONG-RUNNING TASK! ###### echo Start installing packages yum shell --quiet --config=$YUM_CONFIG --installroot=$ROOTDIR $YUM_SCRIPT ## >/dev/null 2>&1 # some optimization yum remove kernel kernel-firmware dracut dracut-kernel dracut-network fcoe-utils libdrm lldpad plymouth -y --quiet --config=$YUM_CONFIG --installroot=$ROOTDIR echo Packages installed # remove all *.repo files, cobbler will install it's own repo-file with needed repos rm -f $ROOTDIR/etc/yum.repos.d/*.repo # move services setup script in container root, to be reachable inside of chroot mv $SERVICES_SCRIPT $ROOTDIR/$SERVICES_SCRIPT # turn off and on services in chroot echo Disabling and enabling services as needed chroot $ROOTDIR /bin/bash $SERVICES_SCRIPT # move postinstall script in container root, to be reachable inside of chroot mv $POST_INSTALL_SCRIPT $ROOTDIR/$POST_INSTALL_SCRIPT # run the postinstall actions in chroot echo Perform post-installation actions (chroot $ROOTDIR $POST_INSTALL_SHELL $POST_INSTALL_SCRIPT )>/dev/null 2>&1 # tune the installations to be suitable for OpenVZ as environment echo Make the tree container-ready cd $ROOTDIR # remove unneeded upstart scripts rm -f $ROOTDIR/etc/init/control-alt-delete.conf rm -f $ROOTDIR/etc/init/plymouth-shutdown.conf rm -f $ROOTDIR/etc/init/prefdm.conf rm -f $ROOTDIR/etc/init/quit-plymouth.conf rm -f $ROOTDIR/etc/init/rcS-sulogin.conf rm -f $ROOTDIR/etc/init/serial.conf rm -f $ROOTDIR/etc/init/start-ttys.conf rm -f $ROOTDIR/etc/init/tty.conf sed -i -e 's/^console/#console/' $ROOTDIR/etc/init/rc.conf sed -i -e 's/^console/#console/' $ROOTDIR/etc/init/rcS.conf # tune sshd sed -i -e 's/GSSAPIAuthentication\ yes/GSSAPIAuthentication\ no/g' $ROOTDIR/etc/ssh/sshd_config # turn off SELinux mkdir -p $ROOTDIR/etc/selinux echo SELINUX=disabled>$ROOTDIR/etc/selinux/config # we use ! as the delimiter for sed, because $ROOTPW hash is full of weird signs ;) sed -i -e "s!root:.:!root:$ROOTPW:!" $ROOTDIR/etc/shadow # who needs it?! #echo "PS1='[\u@\h \W]\$'" >> /etc/profile # link mtab from outer space [ -f $ROOTDIR/etc/mtab ] && rm $ROOTDIR/etc/mtab ln -s /proc/mounts etc/mtab # some point to fstab echo "none /dev/pts devpts rw,gid=5,mode=620 0 0">$ROOTDIR/etc/fstab # here is plain file dev/null, so we remove it rm -f $ROOTDIR/dev/null # create neccessary device files for dir in $ROOTDIR/dev $ROOTDIR/etc/udev/devices ; do /sbin/MAKEDEV -d $dir -x {p,t}ty{a,p}{0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,f} console core full kmem kmsg mem null port ptmx random urandom zero ram0 ln -s /proc/self/fd $dir/fd ln -s /proc/self/fd/2 $dir/stderr ln -s /proc/self/fd/0 $dir/stdin ln -s /proc/self/fd/1 $dir/stdout done # ajust permissions chmod 1777 $ROOTDIR/tmp chmod 1777 $ROOTDIR/var/tmp echo All done exit 0 cobbler-2.4.1/bin/services.py000077500000000000000000000063061227367477500161270ustar00rootroot00000000000000""" This module is a mod_wsgi application used to serve up the Cobbler service URLs. 
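The URL layout is the one used elsewhere in this tree (for instance by
bin/cobbler-ext-nodes): /cblr/svc/op/<operation>/<key>/<value>/... The path components
after /cblr/svc/ are split into alternating field names and values and handed to the
matching CobblerSvc method. As a rough worked example (the hostname web01 is purely
illustrative), a request for

    /cblr/svc/op/puppet/hostname/web01

is parsed into form = {'op': 'puppet', 'hostname': 'web01'}; application() then picks
the mode 'puppet' and calls the corresponding CobblerSvc method with the form as
keyword arguments, after merging in any query string data and the REMOTE_MAC /
REMOTE_ADDR values set further down in this module.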
Copyright 2010, Red Hat, Inc and Others This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import yaml import os import urllib import cgi from cobbler.services import CobblerSvc def application(environ, start_response): my_uri = urllib.unquote(environ['REQUEST_URI']) form = {} # canonicalizes uri, mod_python does this, mod_wsgi does not my_uri = os.path.realpath(my_uri) tokens = my_uri.split("/") tokens = tokens[3:] label = True field = "" for t in tokens: if label: field = t else: form[field] = t label = not label form["query_string"] = cgi.parse_qs(environ['QUERY_STRING']) # This MAC header is set by anaconda during a kickstart booted with the # kssendmac kernel option. The field will appear here as something # like: eth0 XX:XX:XX:XX:XX:XX form["REMOTE_MAC"] = environ.get("HTTP_X_RHN_PROVISIONING_MAC_0", None) # REMOTE_ADDR isn't a required wsgi attribute so it may be naive to assume # it's always present in this context. form["REMOTE_ADDR"] = environ.get("REMOTE_ADDR", None) # Read config for the XMLRPC port to connect to: fd = open("/etc/cobbler/settings") data = fd.read() fd.close() ydata = yaml.safe_load(data) remote_port = ydata.get("xmlrpc_port",25151) # instantiate a CobblerWeb object cw = CobblerSvc(server = "http://127.0.0.1:%s" % remote_port) # check for a valid path/mode # handle invalid paths gracefully mode = form.get('op','index') # TODO: We could do proper exception handling here and return # corresponding HTTP status codes: # Execute corresponding operation on the CobblerSvc object: func = getattr( cw, mode ) content = func( **form ) content = unicode(content).encode('utf-8') status = '200 OK' if content.find("# *** ERROR ***") != -1: status = '500 SERVER ERROR' print("possible cheetah template error") # TODO: Not sure these strings are the right ones to look for... elif content.find("# profile not found") != -1 or \ content.find("# system not found") != -1 or \ content.find("# object not found") != -1: print("content not found: %s" % my_uri) status = "404 NOT FOUND" # req.content_type = "text/plain;charset=utf-8" response_headers = [('Content-type', 'text/plain;charset=utf-8'), ('Content-Length', str(len(content)))] start_response(status, response_headers) return [content] cobbler-2.4.1/bin/tftpd.py000077500000000000000000001274011227367477500154250ustar00rootroot00000000000000#!/usr/bin/python """ SYNOPSIS tftpd.py [-h,--help] [-v,--verbose] [-d,--debug] [--version] [--port=(69)] DESCRIPTION A python, cobbler integrated TFTP server. It is suitable to call via xinetd, or as a stand-alone daemon. If called via xinetd, it will run, handling requests, until it has been idle for at least 30 seconds, and will then exit. This server queries cobbler for information about hosts that make requests, and will instantiate template files from the materialized hosts' 'fetchable_files' attribute. 
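EXAMPLES
    The invocations below are only illustrative: they are put together from the
    SYNOPSIS and DESCRIPTION above rather than copied from a shipped configuration,
    and the port number 1069 is just an arbitrary unprivileged test port.

        tftpd.py --verbose --port=1069

    runs the server stand-alone with verbose logging on a non-default port, while

        tftpd.py --debug

    enables debug-level output on the default port 69. When called via xinetd the
    server behaves as described above and exits once it has been idle for at least
    30 seconds.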
EXIT STATUS AUTHOR Douglas Kilpatrick LICENSE This script is in the public domain, free from copyrights or restrictions VERSION 0.5 TODO Requirement: retransmit Requirement: Ignore stale retrainsmits Security: only return files that are o+r Security: support hosts.allow/deny Security: Make absolute path support optional, and default off Feature: support blksize2 (blksize, limited to powers of 2) Feature: support utimeout (timeout, in ms) """ VERSION=0.5 import sys, os, stat, errno, time, optparse, re, socket, pwd, traceback import logging, logging.handlers import xmlrpclib from collections import deque from fnmatch import fnmatch from cobbler.utils import local_get_cobbler_api_url, tftpboot_location import tornado.ioloop as ioloop import cobbler.templar import Cheetah # need exception types from struct import *; from subprocess import *; #import functools # Data/Defines TFTP_OPCODE_RRQ = 1 TFTP_OPCODE_DATA = 3 TFTP_OPCODE_ACK = 4 TFTP_OPCODE_ERROR = 5 TFTP_OPCODE_OACK = 6 COBBLER_HANDLE = xmlrpclib.Server(local_get_cobbler_api_url()) OPTIONS = { "port" : "69", "timeout" : 10, "min_timeout": 1, "max_timeout": 255, "blksize" : 512, # that's the default, required "max_blksize": 1428, # MTU - overhead "min_blksize": 512, # the default is small enough already "retries" : 4, "verbose" : False, "debug" : False, "idle" : 0, # how long to stick around: 0: unlimited "idle_timer" : None, "cache" : True, # 'cache-time' = 300 "cache-time" : 5 * 300, "neg-cache-time" : 10, "active" : 0, "prefix" : tftpboot_location(), "logger" : "stream", "file_cmd" : "/usr/bin/file", "user" : "nobody", # the well known socket. needs to be global for timeout # Using the options hash as a hackaround for python's # "create a new object at local scope by default" design. "sock" : None } ERRORS = [ 'Not defined, see error message (if any)', # 0 'File not found', # 1 'Access violation', # 2 'Disk full or allocation exceeded', # 3 'Illegal TFTP operation', # 4 'Unknown transfer ID', # 5 'File already exists', # 6 'No such user', # 7 'Option negotiation' # 8 ] REQUESTS = None class RenderedFile: """A class to manage rendered files, without changing the logic of the rest of the TFTP server. It replaces the file object via duck typing, and feeds out sections of the saved string as required""" def __init__(self, data=""): """Provide the string to be served out as an argument to the constructor. The data object needs to support slices.""" self.data = data self.offset = 0 def seek(self,bytes): """Only the two-argument version of seek (SEEK_SET) is currently supported""" self.offset = bytes def read(self,size): """Returns bytes relative to the current offset.""" end = self.offset + size return self.data[self.offset:end] class Packet: """Represents a packet received (or sent?) from a tftp client. Is a base class that is intended to be overridden. The main use cases are "I got a packet, parse it", but I'm also keeping the "how to write a packet of type X" in the same class to keep the relevant code snippets close to each other. Any strings that control behavior (mode, rfc2347 options) are case-INsensitive. Filename is allowed to be case sensitive """ def __init__(self, data, local_sock, remote_addr): self.data = data self.local_sock = local_sock self.remote_addr = remote_addr self.opcode, = unpack("!H",data[0:2]) def marshall(self): raise NotImplementedError("%s: Write marshall method" % repr(self)) def is_error(self): return False class RRQPacket(Packet): """The RRQ Packet. 
We only receive those, so this object only supports the receive use case. 2 bytes string byte String byte --------------------------------------- DATA | 01 | name | \0 | mode | \0 | --------------------------------------- string string ---------------------------- rfc2347 | name | \0 | value | \0 | [*] ---------------------------- """ def __init__(self, data, local_sock, remote_addr): Packet.__init__(self,data,local_sock,remote_addr) # opcode already extracted, and unpack is awkward for this # so pulling out strings by hand (file,mode,rfc2347str) = data[2:].split('\0',2) logging.debug("RRQ for file %s(%s) from %s"%(file,mode,remote_addr)) # Ug. Can't come up with a simplier way of doing this if rfc2347str: # the "-1" is to trim off the trailing \0 self.req_options = deque(rfc2347str[:-1].split('\0')) logging.debug("client %s requested options: %s" % ( str(remote_addr),str(rfc2347str.replace('\0',',')))) else: self.req_options = deque() self.filename = file self.mode = mode def get_request(self,templar): return Request(file,self.remote_addr,templar) class DATAPacket(Packet): """The DATA packet. We only send these, so this object only supports the send use case. 2 bytes 2 bytes n bytes ----------------------------------- DATA | 03 | Block # | Data | ----------------------------------- """ def __init__(self, data, blk_num): self.data = data; self.blk_num = blk_num; def marshall(self): return pack("!HH %ds" % (len(self.data)), TFTP_OPCODE_DATA, self.blk_num & 0xFFFF ,str(self.data)) class ACKPacket(Packet): """The ACK packet. We only receive these. 2 bytes 2 bytes ---------------------- ACK | 04 | Block # | ---------------------- """ def __init__(self, data, local_sock, remote_addr): Packet.__init__(self,data,local_sock,remote_addr) block_number, = unpack("!H",data[2:4]) logging.log(9,"ACK for packet %d from %s"%(block_number,remote_addr)) self.block_number = block_number def marshall(self): raise NotImplementedError("We don't send these, we read them") return pack("!HH", TFTP_OPCODE_ACK, self.block_number) class ERRORPacket(Packet): """The error packet. We could send or receive these. But we really only handle sending them. 2 bytes 2 bytes string 1 byte ------------------------------------------ ERROR | 05 | ErrorCode | ErrMsg | 0 | ------------------------------------------ """ def __init__(self, data, local_sock, remote_addr): Packet.__init__(self,data,local_sock,remote_addr) self.error_code, = unpack("!h",data[2:4]) self.error_str = data[4:-1] logging.debug("ERROR %d: %s from %s"% (self.error_code,self.error_str,remote_addr)) def __init__(self, error_code, error_str): self.error_code = error_code self.error_str = error_str def is_error(self): return True def marshall(self): return pack("!HH %dsB" % (len(self.error_str)), TFTP_OPCODE_ERROR, self.error_code, self.error_str,0) class OACKPacket(Packet): """The Option Acknowledge (rfc2347) packet. We only send these. We make an effort to retain name case and order, to aid clients that depend on either. 2 bytes string 1 byte string 1 byte ----------|-------------------------------- OACK | 06 || name | 0 | value | 0 | [*] ----------|-------------------------------- """ def __init__(self, rfc2347): self.opcode = TFTP_OPCODE_OACK self.options = rfc2347 def marshall(self): optstr = "\0".join(map(lambda x: str(x), self.options)) return pack("!H %ds c" % (len(optstr)),self.opcode, optstr, '\0') class XMLRPCSystem: """Use XMLRPC to look up system attributes. This is the recommended method. 
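The packet classes above all share the same 2-byte network-order opcode header; a quick round-trip with a made-up payload shows the struct format that DATAPacket.marshall uses and how the receiving side takes it apart (illustrative only, not part of the server):

    from struct import pack, unpack

    payload = "up to blksize bytes of file content"
    blk_num = 1
    wire = pack("!HH %ds" % len(payload), 3, blk_num & 0xFFFF, payload)

    opcode, block = unpack("!HH", wire[0:4])
    assert (opcode, block) == (3, 1)   # TFTP_OPCODE_DATA, first block
    assert wire[4:] == payload         # data bytes follow the 4-byte header
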
The cache is controlled by the "cache" option and the "cache-time" option """ cache = {} def __init__(self, ip_address=None, mac_address=None): name = None resolve = True # Try the cache. if XMLRPCSystem.cache.has_key(ip_address): cache_ent = XMLRPCSystem.cache[ip_address] now = time.time() cache_time = float(OPTIONS["cache-time"]) neg_cache_time = float(OPTIONS["neg-cache-time"]) if cache_ent["time"] + cache_time > now: name = cache_ent["name"] if name is not None: logging.debug("Using cache name for system %s,%s" % (cache_ent["name"],ip_address)) resolve = False elif (name is None and mac_address is None and cache_ent["time"] + neg_cache_time > now): age = (cache_ent["time"] + neg_cache_time) - now logging.debug("Using neg-cache for system %s:%f" %(ip_address,age)) resolve = False else: age = (cache_ent["time"] + neg_cache_time) - now logging.debug("ignoring cache for %s:%d"%(ip_address,age)) # Don't bother trying to find it.. until the neg-cache-time # expires anyway else: del XMLRPCSystem.cache[ip_address] # Not in the cache, try to find it. if resolve: query = {} if mac_address is not None: query["mac_address"] = mac_address.replace("-",":").upper() elif ip_address is not None: query["ip_address"] = ip_address try: logging.debug("Searching for system %s" % repr(query)) systems = COBBLER_HANDLE.find_system(query) if len(systems) > 1: raise RuntimeError("Args mapped to multiple systems") elif len(systems) == 0: raise RuntimeError("%s,%s not found in cobbler" % (ip_address,mac_address)) name = systems[0] except RuntimeError, e: logging.info(str(e)) name = None except: (etype,eval,) = sys.exc_info()[:2] logging.warn("Exception retrieving rendered system: %s (%s):%s" % (name,eval,traceback.format_exc())) name = None if name is not None: logging.debug("Materializing system %s" % name) try: self.system = COBBLER_HANDLE.get_system_for_koan(name) self.attrs = self.system self.name = self.attrs["name"] except: (etype,eval,) = sys.exc_info()[:2] logging.warn ("Exception Materializing system %s (%s):%s" % (name,eval,traceback.format_exc())) if XMLRPCSystem.cache.has_key(ip_address): del XMLRPCSystem.cache[ip_address] self.system = None self.attrs = dict() self.name = str(ip_address) else: self.system = None self.attrs = dict() self.name = str(ip_address) # fill the cache, negative entries too if OPTIONS["cache"] and resolve: logging.debug("Putting %s,%s into cache"%(name,ip_address)) XMLRPCSystem.cache[ip_address] = { "name" : name, "time" : time.time(), } class Request: """Handles the "business logic" of the TFTP server. One instance is spawned per client request (RRQ packet received on well-known port) and it is responsible for keeping track of where the file transfer is...""" def __init__(self,rrq_packet,local_sock,templar): # Trim leading /s, since that's kinda implicit self.filename = rrq_packet.filename.lstrip('/') # assumed self.type = "chroot" self.remote_addr = rrq_packet.remote_addr self.req_options = rrq_packet.req_options self.options = dict() self.offset = 0 self.local_sock = local_sock self.state = TFTP_OPCODE_RRQ self.expand = False self.templar = templar # Sanitize input more # Strip out \s self.filename = self.filename.replace('\\','') # Look for elements starting with ".", and blow up. 
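        # A request such as "pxelinux.cfg/../../etc/shadow" is refused here,
        # because the ".." elements begin with a dot, while an ordinary
        # "pxelinux.cfg/default" request passes through untouched.  Together
        # with the backslash stripping above, this is the main guard against
        # directory traversal.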
try: if len(self.filename) == 0: raise RuntimeError("Empty Path: ") for elm in self.filename.split("/"): if elm[0] == ".": raise RuntimeError("Path includes '.': ") except RuntimeError, e: logging.warn(str(e) + rrq_packet.filename) self.error_code = 2 self.error_str = "Invalid file name" self.state = TFTP_OPCODE_ERROR self.filename = None OPTIONS["active"] += 1 self.system = XMLRPCSystem(self.remote_addr[0]) def _remap_strip_ip(self,filename): # remove per-host IP or Mac prefixes, so that earlier pxelinux requests # can be templated. We are already doing per-host stuff, so we don't # need the IP addresses/mac addresses tacked on # //UUID (503a463c-537a-858b-af2a-519686f53c58) # //MAC (01-00-50-56-8b-33-88) # //IP (C000025B) trimmed = filename if self.system.system is None: # If the file name has a mac address, strip that, use it to # look up the system, and recurse. m = re.compile("01((-[0-9a-f]{2}){6})$").search(filename) if m: # Found a mac address. try and look up a system self.system = XMLRPCSystem(self.system.name,m.group(1)[1:]) if self.system.system is not None: logging.info("Looked up host late: '%s'" % self.system.name) return self._remap_strip_ip(filename) # We can still trim off an ip address... system.name is the # incoming ip suffix = "/%08X" %unpack('!L',socket.inet_aton(self.system.name))[0] if suffix and trimmed[len(trimmed)-len(suffix):] == suffix: trimmed = trimmed.replace(suffix,"") logging.debug('_remap_strip_ip: converted %s to %s' % (filename, trimmed)) return trimmed else: # looking over all keys, because I have to search for keys I want for (k,v) in self.system.system.iteritems(): suffix = False # if I find a mac_address key or ip_address key, then see if # that matches the file I'm looking at if k.find("mac_address") >= 0 and v != '': # the "01" is the ARP type of the interface. 01 is # ethernet. This won't work for token ring, for example suffix = "/01-" + v.replace(":","-").lower() elif k.find("ip_address") >= 0 and v != '': # IPv4 hardcoded here. 
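                # pxelinux requests per-host files by client IP encoded as
                # eight uppercase hex digits (e.g. 192.0.2.91 -> C000025B);
                # the "%08X" format below reproduces that naming.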
suffix = "/%08X" % unpack('!L',socket.inet_aton(v))[0] if suffix and trimmed[len(trimmed)-len(suffix):] == suffix: trimmed = trimmed.replace(suffix,"") logging.debug('_remap_strip_ip: converted %s to %s' % (filename, trimmed)) return trimmed return filename def _remap_via_profiles(self,filename): pattern = re.compile("images/([^/]*)/(.*)") m = pattern.match(filename) if m: logging.debug("client requesting distro?") p = COBBLER_HANDLE.get_distro_for_koan(m.group(1)) if p: logging.debug("%s matched distro %s" % (filename,p["name"])) if m.group(2) == os.path.basename(p["kernel"]): return p["kernel"],"template" elif m.group(2) == os.path.basename(p["initrd"]): return p["initrd"],"template" logging.debug("but unknown file requested.") else: logging.debug("Couldn't load profile %s" % m.group(1)) return filename,"chroot" def _remap_name_via_fetchable(self,filename): fetchable_files = self.system.attrs["fetchable_files"].strip() if not fetchable_files: return filename,None # We support two types of matches in fetchable_files # * Direct match ("/foo=/bar") # * Globs on directories ("/foo/*=/bar") # A glob is realliy just a directory remap glob_pattern = re.compile("(/)?[*]$") # Look for the file in the fetchable_files hash # XXX: Template name for (k,v) in map(lambda x: x.split("="),fetchable_files.split(" ")): k = k.lstrip('/') # Allow some slop w/ starting /s # Full Path: "/foo=/bar" result = None if k == filename: logging.debug('_remap_name: %s => %s' % (k,v)) result = v # Glob Path: "/foo/*=/bar/" else: match = glob_pattern.search(k) if match and fnmatch("/"+filename,"/"+k): logging.debug('_remap_name (glob): %s => %s' % (k,v)) # Erase the trailing '/?[*]' in key # Replace the matching leading portion in filename # with the value # Expand the result if match.group(1): lead_dir = glob_pattern.sub(match.group(1),k) else: lead_dir = glob_pattern.sub("",k) result = filename.replace(lead_dir,v,1) # Render the target, to expand things like "$kernel" if result is not None: try: return self.templar.render( result, self.system.attrs, None).strip(),"template" except Cheetah.Parser.ParseError, e: logging.warn('Unable to expand name: %s(%s): %s' % (filename,result,e)) return filename,None def _remap_name_via_boot_files(self,filename): boot_files = self.system.attrs["boot_files"].strip() if not boot_files: logging.debug('_remap_name: no boot_files for %s/%s' % (self.system,filename)) return filename,None filename = filename.lstrip('/') # assumed # Override "img_path", as that's the only variable used by # the VMWare boot_files support, and they use a slightly different # definition: one that's relative to tftpboot attrs = self.system.attrs.copy() attrs["img_path"] = os.path.join("images",attrs["distro_name"]) # Look for the file in the boot_files hash for (k,v) in map(lambda x: x.split("="),boot_files.split(" ")): k = k.lstrip('/') # Allow some slop w/ starting /s # Render the key, to expand things like "$img_path" try: expanded_k = self.templar.render(k, attrs, None) except Cheetah.Parser.ParseError, e: logging.warn('Unable to expand name: %s(%s): %s' % (filename,k,e)) continue if expanded_k == filename: # Render the target, to expand things like "$kernel" logging.debug('_remap_name: %s => %s' % (expanded_k,v)) try: return self.templar.render(v, attrs,None).strip(),"template" except Cheetah.Parser.ParseError, e: logging.warn('Unable to expand name: %s(%s): %s' % (filename,v,e)) return filename,None def _remap_name(self,filename): filename = filename.lstrip('/') # assumed # If possible, ignore pxelinux.0 
added things we already know trimmed = self._remap_strip_ip(filename) if self.system.system is None: # Look for image match. All we can do return self._remap_via_profiles(trimmed) # Specific hacks to handle the PXE/initrd case without any configuration if self.system.attrs.has_key(trimmed): if trimmed in ["pxelinux.cfg"]: return trimmed,"hash_value" elif trimmed in ["initrd"]: return self.system.attrs[trimmed],"template" # for the two tests below, I want to ignore "pytftp.*" in the string, # which allows for some minimal control over extensions, which matters # to pxelinux.0 noext = re.sub("pytftpd.*","",filename) if self.system.attrs.has_key(noext) and noext in ["kernel"]: return self.system.attrs[noext],"template" (new_name,find_type) = self._remap_name_via_fetchable(trimmed) if find_type is not None: return new_name,find_type (new_name,find_type) = self._remap_name_via_boot_files(trimmed) if find_type is not None: return new_name,find_type # last try: try profiles return self._remap_via_profiles(trimmed) def _render_template(self): try: return RenderedFile(self.templar.render(open(self.filename,"r"), self.system.attrs,None)) except Cheetah.Parser.ParseError, e: logging.warn('Unable to expand template: %s: %s' % (self.filename,e)) return None except IOError, e: logging.warn('Unable to expand template: %s: %s' % (self.filename,e)) return None def _setup_xfer(self): """Open the file to be loaded, or materalize the template. This method can set the state to be an ERROR state, so avoid setting state after calling this method. """ logging.info('host %s requesting %s' % (self.system.name, self.filename)) self.filename,self.type = self._remap_name(self.filename) logging.debug('host %s getting %s: %s' % (self.system.name,self.filename,self.type)) if self.type == "template": # TODO: Add file magic here output = Popen([OPTIONS["file_cmd"],self.filename], stdout=PIPE,stderr=STDOUT, close_fds=True).communicate()[0] if output.find(" text") >= 0: self.file = self._render_template() if self.file: self.block_count = 0 self.file_size = len(self.file.data) return else: logging.debug('Template failed to render.') else: logging.debug('Not rendering binary file %s (%s).' % (self.filename,output)) elif self.type == "hash_value": self.file = RenderedFile(self.system.attrs[self.filename]) self.block_count = 0 self.file_size = len(self.file.data) return else: logging.debug('Relative path') # Oh well. Look for the actual given file. # XXX: add per-host IP or Mac prefixes? # add: for pxeboot, or other non pxelinux try: logging.debug('starting xfer of %s to %s' % (self.filename,self.remote_addr)); # Templates are specified by an absolute path if self.type == "template": self.file = open(self.filename,'rb',0) else: # TODO! restrict. Chroot? # We are sanitizing in the input, but a second line of defense # wouldn't be a bad idea self.file = open(OPTIONS["prefix"] + "/" + self.filename,'rb',0) self.block_count = 0 self.file_size = os.fstat(self.file.fileno()).st_size except IOError: logging.debug('%s requested %s: file not found.' % (self.remote_addr, self.filename)) self.state = TFTP_OPCODE_ERROR self.error_code = 1 self.error_str = "No such file" return def finish(self): io_loop = ioloop.IOLoop.instance() logging.debug("finishing req from %s for %s" % (self.filename,self.remote_addr)) self.state = 0 try: io_loop.remove_handler(self.local_sock.fileno()) logging.debug("closing fd %d" % self.local_sock.fileno()) self.local_sock.close() except: logging.debug("closed FD twice. 
Ignoring") if self.timeout: io_loop.remove_timeout(self.timeout) self.timeout = None OPTIONS["active"] -= 1 if (OPTIONS["idle"] > 0 and OPTIONS["active"] == 0 and OPTIONS["idle_timer"] is None): io_loop.stop() def handle_timeout(self): # We timed out. We're done... (I hope) logging.info('Timeout. Transfer of %s done' % self.filename) self.timeout = None self.finish() def handle_input(self,packet): """The client sent us a new packet. Respond to it. RRQ is handled in the constructor sequence, basically. This should handle everything else""" # We got input, so didn't time out. Refresh the timeout io_loop = ioloop.IOLoop.instance() io_loop.remove_timeout(self.timeout) self.timeout = io_loop.add_timeout(time.time()+self.options["timeout"], lambda : self.handle_timeout()) if packet.opcode == TFTP_OPCODE_ACK: if self.state == TFTP_OPCODE_DATA: # Incremement offset. They got the last bit # the FFFF are to permit wrap. It's OK for the block # number to wrap, since it's one client (and not unicast), # so the client can figure that out. if ( (packet.block_number & 0xFFFF) == ((self.block_count + 1) & 0xFFFF)): # Only update if they actually ack the packet we # sent, but we'll still resend the last packet either way self.block_count += 1 self.state = TFTP_OPCODE_ACK elif self.state == TFTP_OPCODE_OACK: # Ok, start feeding data self.state = TFTP_OPCODE_ACK elif packet.opcode == TFTP_OPCODE_ERROR: logging.warn("Error from clients %s: %d:%s" % (self.remote_addr, packet.error_code, packet.error_str)) self.state = 0 else: logging.warn("Unknown opcode from clients %s: %ds" % (self.remote_addr, packet.opcode)) self.state = 0 def reply(self): """Given the current state, returns the next packet we should send to the client""" # Python doesn't have a switch statement (I presume on the theory # that needing one means you didn't set your classes up right) # so ... have a set of if/elif statements. # Fast path: it's an ACK. Feed more data if self.state == TFTP_OPCODE_ACK: offset = self.block_count * self.options["blksize"] if self.file_size < offset: # We're done. logging.info('Transfer of %s to %s done' % (self.filename,self.remote_addr)) return None self.file.seek(self.block_count * self.options["blksize"]) data = self.file.read(self.options["blksize"]) self.state = TFTP_OPCODE_DATA # Block Count starts at 1, so offset logging.log(9,"DATA to %s/%d, block_count %d/%d, size %d(%d/%d)" % ( self.remote_addr[0],self.remote_addr[1], self.block_count + 1, (self.block_count+1)&0xFFFF, len(data),offset+len(data),self.file_size)) return DATAPacket(data, self.block_count + 1) if self.state == 0: return None if self.state == TFTP_OPCODE_ERROR: # Don't bother waiting.. this was the first request # a "resend" would go to the well known port return ERRORPacket(self.error_code,self.error_str) if self.state == TFTP_OPCODE_RRQ and self.req_options: # They asked for various rfc2347 options. Figure out # what we'll allow, and send an OACK. self.state = TFTP_OPCODE_OACK # Most clients will ask for tsize, which is the size of the # file we'll be giving them. Let's look that up. self._setup_xfer() # Check for an error loading the file if self.state == TFTP_OPCODE_ERROR: # Don't bother waiting.. this was the first request # a "resend" would go to the well known port return ERRORPacket(self.error_code,self.error_str) # make sure we have defaults self.options = dict( blksize = OPTIONS["blksize"], timeout = OPTIONS["timeout"]); accepted_opts = [] # Sorry for the excessive complexity here. 
# I'm trying to maintain client's case and order, to protect # against braindamaged clients. Given clients are frequently # written in assembly, they can be excused some braindamage logging.debug("Requested options: %s"%(repr(self.req_options))) for i in range(0,len(self.req_options),2): key = self.req_options[i] value = self.req_options[i+1] logging.debug("looking at key %s" % (key)) if key.lower() == "tsize": accepted_opts.append(key) accepted_opts.append(self.file_size) elif ("min_"+key).lower() in OPTIONS: value = int(value) # string # if it's an option we know about/can bound upper_bound = OPTIONS[("max_"+key).lower()] lower_bound = OPTIONS[("min_"+key).lower()] logging.debug("%s: req: %d (%d - %d)" % (key,value,lower_bound,upper_bound)) if value < lower_bound: value = lower_bound if value > upper_bound: value = upper_bound self.options[key.lower()] = value accepted_opts.append(key) accepted_opts.append(str(value)) else: # ignore it, do not include in the OACK logging.info("Unknown option requested %s" % (key)) logging.debug("Using Options: %s"%(repr(self.options))) return OACKPacket(accepted_opts) if self.state == TFTP_OPCODE_RRQ: # No options. Fill in the defaults # and then recurse, pretending we just got the ACK to our OACK self.options = dict( blksize = OPTIONS["blksize"], timeout = OPTIONS["timeout"]); logging.debug("Using Options: %s"%(repr(self.options))) self.state = TFTP_OPCODE_ACK self._setup_xfer() if self.state == TFTP_OPCODE_ERROR: return ERRORPacket(self.error_code,self.error_str) return self.reply() raise NotImplementedError("Unknown state %d" % (self.state)) REQ_NAME = 0 REQ_CLASS = 1 REQUESTS = [ [ "INVALID", None ], # 0 [ "RRQ", RRQPacket ], # 1 [ "WRQ", None ], # 2 [ "DATA", None ], # 3 [ "ACK", ACKPacket ], # 4 [ "ERROR", None ], # 5 [ "OACK", OACKPacket ] # 6 ] def read_packet(data,local_sock,remote_addr): """Object factory. Reads the first tiny bit of the packet to get the opcode, and returns a Packet object of the relevant type Returns None on failure """ packet = None opcode, = unpack("!H",data[0:2]) if opcode < 1 or opcode > 6: logging.warn("Unknown request id %d from %s" % (opcode,remote_addr)) local_sock.sendto(ERRORPacket(0,"Unknown request").marshall(), remote_addr) return None if REQUESTS[opcode][REQ_CLASS] == None: if opcode != TFTP_OPCODE_ERROR: logging.warn("Unsupported request %d(%s) from %s" % (opcode,REQUESTS[opcode][REQ_NAME],remote_addr)) local_sock.sendto( ERRORPacket(2,"Unsupported request").marshall(),remote_addr) return None try: return (REQUESTS[opcode][REQ_CLASS])(data,local_sock,remote_addr) except: return None def partial(func, *args, **keywords): """Method factory. Returns a semi-anonymous method that provides certain arguments to another method. Usually could be replaced by a lambda expression Example: def add(i,j): return i+j fn = partial(add,1) # always pass 1 as the first arg to add fn(2) # returns 1+2 """ def newfunc(*fargs, **fkeywords): newkeywords = keywords.copy() newkeywords.update(fkeywords) return func(*(args + fargs), **newkeywords) newfunc.func = func newfunc.args = args newfunc.keywords = keywords return newfunc def handle_request(request,fd,events): """Used as the IO handler for subsequent requests. Followup packets for a given request are sent to a different port, because UDP doesn't have it's own connection concept. Also packets can be larger after option negotiation, so the amount to read can vary. 
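That variable read size is the negotiated blksize; the bound check in reply() above boils down to clamping the client's requested value between the min_*/max_* entries in OPTIONS. A compact restatement, where clamp_option is a hypothetical helper and the bounds are the OPTIONS defaults:

    def clamp_option(value, lower, upper):
        value = int(value)
        if value < lower: value = lower
        if value > upper: value = upper
        return value

    # a client asking for the RFC 2348 maximum is held to max_blksize
    print clamp_option("65464", 512, 1428)   # -> 1428
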
This method handles packets sent to the transient ports, and calls the Request.handle_input method of the request associated with the port. """ try: while request.state != 0: # 0 is the "done" state try: data,address = request.local_sock.recvfrom( request.options["blksize"]); except socket.error, e: if e[0] in (errno.EWOULDBLOCK, errno.EAGAIN): return else: raise if address == request.remote_addr: packet = read_packet(data,request.local_sock,address) if (packet is None): request.finish() continue request.handle_input(packet) reply = request.reply() if reply: request.local_sock.sendto(reply.marshall(),address) if not reply or reply is ERRORPacket: request.finish() else: raise NotImplementedError("Input from unexpected source") finally: # Reset the timer if OPTIONS["idle"] > 0: io_loop = ioloop.IOLoop.instance() try: io_loop.remove_timeout(OPTIONS["idle_timer"]) except: pass OPTIONS["idle_timer"] = io_loop.add_timeout( time.time()+OPTIONS["idle"], lambda : idle_out()) def idle_out(): logging.info("Idling out") io_loop = ioloop.IOLoop.instance() io_loop.remove_handler(OPTIONS["sock"].fileno()) OPTIONS["sock"].close() if OPTIONS["active"] == 0: OPTIONS["idle_timer"] = None io_loop.stop() # called from ioloop.py:245 def new_req(sock, templar, fd, events): """The IO handler for the well-known port. Handles the RRQ packet of a known size (512), and sets up the transient port for future messages for the same request. """ io_loop = ioloop.IOLoop.instance() if OPTIONS["idle"] > 0: try: io_loop.remove_timeout(OPTIONS["idle_timer"]) except: pass while True: try: data,address = sock.recvfrom(OPTIONS["blksize"]); except socket.error, e: if e[0] not in (errno.EWOULDBLOCK, errno.EAGAIN): raise break packet = read_packet(data,sock,address) # this is the new_request handler. (packet had better be an RRQ # request) if packet is None or packet.opcode != TFTP_OPCODE_RRQ: local_sock.sendto( ERRORPacket(2,"Unsupported initial request").marshall(),address) break # Create the new transient port for this request new_address = socket.socket(socket.AF_INET,socket.SOCK_DGRAM, 0) new_address.bind(("",0)) # random port: XXX control? logging.debug("Bound to transient socket %d" % new_address.getsockname()[1]) new_address.setblocking(0) packet.local_sock = new_address # Create the request object to handle this request, and bind it # to IO from the transient port request = Request(packet,new_address,templar) io_loop.add_handler( new_address.fileno(), partial(handle_request,request), io_loop.READ) request.timeout = io_loop.add_timeout(time.time() + OPTIONS["timeout"], lambda : request.handle_timeout()) # Ask the request what to do now.. reply = request.reply() if reply: new_address.sendto(reply.marshall(),address) if not reply or reply.is_error(): request.finish() # After the while loop. 
Re-add the idle timer if OPTIONS["idle"] > 0: OPTIONS["idle_timer"] = io_loop.add_timeout(time.time()+OPTIONS["idle"], lambda : idle_out()) def main(): # If we're called from xinetd, set idle to non-zero mode = os.fstat(sys.stdin.fileno()).st_mode if stat.S_ISSOCK(mode): OPTIONS["idle"] = 30 OPTIONS["logger"] = "syslog" # setup option parsing opt_help = dict( port = dict(type="int",help="The port to bind to for new requests"), idle = dict(type="int",help="How long to wait for input"), timeout= dict(type="int",help="How long to wait for a given request"), max_blksize=dict(type="int", help="The maximum block size to permit" ), prefix = dict(type="string", help="Where files are stored by default [" +OPTIONS["prefix"]+"]"), logger = dict(type="string",help="How to log"), file_cmd= dict(type="string",help="The location of the 'file' command"), user = dict(type="string",help="The user to run as [nobody]"), ) parser = optparse.OptionParser( formatter=optparse.IndentedHelpFormatter(), usage=globals()['__doc__'], version=VERSION) parser.add_option('-v','--verbose',action='store_true',default=False, help="Increase output verbosity") parser.add_option('-d','--debug',action='store_true',default=False, help="Debug (vastly increases output verbosity)") parser.add_option('-c','--cache',action='store_true',default=True, help="Use a cache to help find hosts w/o IP address") parser.add_option('--cache-time',action='store',type="int",default=5*60, help="How long an ip->name mapping is valid") parser.add_option('--neg-cache-time',action='store',type="int",default=10, help="How long an ip->name mapping is valid") opts = opt_help.keys() opts.sort() for k in opts: v = opt_help[k] parser.add_option("--"+k,default=OPTIONS[k], type=v["type"],help=v["help"]) parser.add_option('-B',dest="max_blksize",type="int",default=1428, help="alias for --max-blksize, for in.tftpd compatibility") # Actually read the args (options,args) = parser.parse_args() for attr in dir(options): if attr in OPTIONS: OPTIONS[attr] = getattr(options,attr) if stat.S_ISSOCK(mode) or OPTIONS["logger"] == "syslog": # log to syslog. 
Facility 11 isn't in the class, but it's FTP on linux logger = logging.handlers.SysLogHandler("/dev/log", 11) logger.setFormatter( logging.Formatter('%(filename)s: %(levelname)s: %(message)s')) elif OPTIONS["logger"] == "stream": # log to stdout logger = logging.StreamHandler() logger.setFormatter(logging.Formatter(logging.BASIC_FORMAT)) else: logger = logging.FileHandler("/var/log/tftpd") logger.setFormatter(logging.Formatter( "%(asctime)s %(name)s: %(levelname)s: %(message)s")) logging.getLogger().addHandler(logger) if OPTIONS["debug"]: logging.getLogger().setLevel(logging.DEBUG) elif OPTIONS["verbose"]: logging.getLogger().setLevel(logging.INFO) else: logging.getLogger().setLevel(logging.WARN) if stat.S_ISSOCK(mode): OPTIONS["sock"] = socket.fromfd(sys.stdin.fileno(), socket.AF_INET, socket.SOCK_DGRAM, 0) else: OPTIONS["sock"] = socket.socket(socket.AF_INET,socket.SOCK_DGRAM, 0) OPTIONS["sock"].setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) try: OPTIONS["sock"].bind(("",OPTIONS["port"])) except socket.error, e: if e[0] in (errno.EPERM, errno.EACCES): print "Unable to bind to port %d" % OPTIONS["port"] return -1 else: raise OPTIONS["sock"].setblocking(0) if os.getuid() == 0: uid = pwd.getpwnam(OPTIONS["user"])[2] os.setreuid(uid,uid) # This takes a while, so do it after we open the port, so we # don't drop the packet that spawned us templar = cobbler.templar.Templar(None) io_loop = ioloop.IOLoop.instance() io_loop.add_handler(OPTIONS["sock"].fileno(), partial(new_req,OPTIONS["sock"],templar),io_loop.READ) # Shove the timeout into OPTIONS, because it's there if OPTIONS["idle"] > 0: OPTIONS["idle_timer"] = io_loop.add_timeout(time.time()+OPTIONS["idle"], lambda : idle_out()) logging.info('Starting Eventloop') try: try: io_loop.start() except KeyboardInterrupt: # Someone hit ^C logging.info('Exiting') finally: OPTIONS["sock"].close() return 0 if __name__ == "__main__": sys.exit(main()) cobbler-2.4.1/bin/zpxe.rexx000066400000000000000000000226731227367477500156320ustar00rootroot00000000000000/* zPXE: REXX PXE Client for System z zPXE is a PXE client used with Cobbler. It must be run under z/VM. zPXE uses TFTP to first download a list of profiles, then a specific kernel, initial RAMdisk, and PARM file. These files are then punched to start the install process. zPXE does not require a writeable 191 A disk. Files are downloaded to a temporary disk (VDISK). zPXE can also IPL a DASD disk by default. You can specify the default dasd in ZPXE CONF, as well as the hostname of the Cobbler server. --- Copyright 2006-2009, Red Hat, Inc and Others Brad Hinson This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA */ /* Defaults */ server = '' /* define server in ZPXE CONF */ iplDisk = 100 /* overridden by value in ZPXE CONF */ profilelist = PROFILE LIST T /* VDISK will be defined as T later */ profiledetail = PROFILE DETAIL T zpxeparm = ZPXE PARM T zpxeconf = ZPXE CONF T config = ZPXE CONF /* For translating strings to lowercase */ upper = xrange('A', 'Z') lower = xrange('a', 'z') /* Query user ID. This is used later to determine: 1. Whether a user-specific PXE profile exists. 2. Whether user is disconnected. If so, IPL the default disk. */ 'pipe cp query' userid() '| var user' parse value user with id . dsc . userid = translate(id, lower, upper) /* Useful settings normally found in PROFILE EXEC */ 'cp set run on' 'cp set pf11 retrieve forward' 'cp set pf12 retrieve' /* Make it possible to interrupt zPXE and to enter CMS even with a specific user profile present */ if (dsc <> 'DSC') then do /* user is connected */ say '' say 'Enter a non-blank character and ENTER (or two ENTERs) within 10' say ' seconds to interrupt zPXE.' 'WAKEUP +00:10 (CONS' /* Check for interrupt */ if rc = 6 then do say 'Interrupt: entering CMS.' pull /* Clear Stack */ exit end end /* Check for config file */ if lines(config) > 0 then do inputline = linein(config) /* first line is server hostname/IP */ parse var inputline . server . inputline = linein(config) /* second line is DASD disk to IPL */ parse var inputline . iplDisk . if lines(config) > 0 then do inputline = linein(config) /* third line is name of system in cobbler */ parse var inputline . userid . end end /* Define temporary disk (VDISK) to store files */ 'set vdisk syslim infinite' 'set vdisk userlim infinite' 'detach ffff' /* detach ffff if present */ 'define vfb-512 as ffff blk 200000' /* 512 byte block size =~ 100 MB */ queue '1' queue 'tmpdsk' 'format ffff t' /* format VDISK as file mode t */ /* Link TCPMAINT disk for access to TFTP */ 'link tcpmaint 592 592 rr' 'access 592 e' /* Check whether a user-specific PXE profile exists. If so, proceed with this. Otherwise, continue and show the system-wide profile menu. */ call GetTFTP '/s390x/s_'userid 'profile.detail.t' if lines(profiledetail) > 0 then do /* Get user PARM and CONF containing network info */ call GetTFTP '/s390x/s_'userid'_parm' 'zpxe.parm.t' call GetTFTP '/s390x/s_'userid'_conf' 'zpxe.conf.t' vmfclear /* clear screen */ call CheckServer /* print server name */ say 'Profile 'userid' found' say '' bootRc = ParseSystemRecord() /* parse file for boot action */ if bootRc = 0 then 'cp ipl' iplDisk /* boot default DASD */ else do call DownloadBinaries /* download kernel and initrd */ say 'Starting install...' say '' call PunchFiles /* punch files to begin install */ exit end /* if bootRc = 0 */ end /* if user-specific profile found */ /* Download initial profile list */ call GetTFTP '/s390x/profile_list' 'profile.list.t' vmfclear /* clear screen */ call CheckServer /* print server name */ say 'zPXE MENU' /* show menu */ say '---------' count = 0 do while lines(profilelist) > 0 /* display one profile per line */ count = count + 1 inputline = linein(profilelist) parse var inputline profile.count say count'. 'profile.count end if (count = 0) then say '** Error connecting to server: no profiles found **' count = count + 1 say count'. 
Exit to CMS shell [IPL CMS]' say '' say '' say 'Enter Choice -->' say 'or press to boot from disk [DASD 'iplDisk']' /* Check if user is disconnected, indicating logon by XAUTOLOG. In this case, IPL the default disk. */ if (dsc = 'DSC') then do /* user is disconnected */ say 'User disconnected. Booting from DASD 'iplDisk'...' 'cp ipl' iplDisk end else do /* user is interactive -> prompt */ parse upper pull answer . select when (answer = count) then do say 'Exiting to CMS shell...' exit end when (answer = '') /* IPL by default */ then do say 'Booting from DASD 'iplDisk'...' 'cp ipl' iplDisk end when (answer < 0) | (answer > count) /* invalid respone */ then do say 'Invalid choice, exiting to CMS shell.' exit end when (answer > 0) & (answer < count) /* valid response */ then do call GetTFTP '/s390x/p_'profile.answer 'profile.detail.t' /* get profile-based PARM and CONF files */ call GetTFTP '/s390x/p_'profile.answer'_parm' 'zpxe.parm.t' call GetTFTP '/s390x/p_'profile.answer'_conf' 'zpxe.conf.t' vmfclear /* clear screen */ say 'Using profile 'answer' ['profile.answer']' say '' call DownloadBinaries /* download kernel and initrd */ say 'Starting install...' say '' call PunchFiles end /* valid answer */ otherwise say 'Invalid choice, exiting to CMS shell.' exit end /* Select */ end exit /* Procedure CheckServer Print error message if server is not defined. Otherwise show server name */ CheckServer: if server = '' then say '** Error: No host defined in ZPXE.CONF **' else say 'Connected to server 'server say '' return 0 /* CheckServer */ /* Procedure GetTFTP Use CMS TFTP client to download files path: remote file location filename: local file name transfermode [optional]: 'ascii' or 'octet' */ GetTFTP: parse arg path filename transfermode if transfermode <> '' then queue 'mode' transfermode queue 'get 'path filename queue 'quit' 'set cmstype ht' /* suppress tftp output */ tftp server 'set cmstype rt' return 0 /* GetTFTP */ /* Procedure DownloadBinaries Download kernel and initial RAMdisk. Convert both to fixed record length 80. */ DownloadBinaries: inputline = linein(profiledetail) /* first line is kernel */ parse var inputline kernelpath say 'Downloading kernel ['kernelpath']...' call GetTFTP kernelpath 'kernel.img.t' octet inputline = linein(profiledetail) /* second line is initrd */ parse var inputline initrdpath say 'Downloading initrd ['initrdpath']...' call GetTFTP initrdpath 'initrd.img.t' octet inputline = linein(profiledetail) /* third line is ks kernel arg */ parse var inputline ksline call lineout zpxeparm, ksline /* add ks line to end of parm */ call lineout zpxeparm /* close file */ /* convert to fixed record length */ 'pipe < KERNEL IMG T | fblock 80 00 | > KERNEL IMG T' 'pipe < INITRD IMG T | fblock 80 00 | > INITRD IMG T' return 0 /* DownloadBinaries */ /* Procedure PunchFiles Punch the kernel, initial RAMdisk, and PARM file. Then IPL to start the install process. */ PunchFiles: 'spool punch *' 'close reader' 'purge reader all' /* clear reader contents */ 'punch kernel img t (noh' /* punch kernel */ 'punch zpxe parm t (noh' /* punch PARM file */ 'punch initrd img t (noh' /* punch initrd */ 'change reader all keep' /* keep files in reader */ 'ipl 00c clear' /* IPL the reader */ return 0 /* PunchFiles */ /* Procedure ParseSystemRecord Open system record file to look for local boot flag. Return 0 if local flag found (guest will IPL default DASD). Return 1 otherwise (guest will download kernel/initrd and install). 
*/ ParseSystemRecord: inputline = linein(profiledetail) /* get first line */ parse var inputline systemaction . call lineout profiledetail /* close file */ if systemaction = 'local' then return 0 else return 1 /* End ParseSystemRecord */ cobbler-2.4.1/cobbler.spec000066400000000000000000003017631227367477500154500ustar00rootroot00000000000000%{!?python_sitelib: %define python_sitelib %(%{__python} -c "from distutils.sysconfig import get_python_lib; print get_python_lib()")} %{!?pyver: %define pyver %(%{__python} -c "import sys ; print sys.version[:3]" || echo 0)} %define _binaries_in_noarch_packages_terminate_build 0 %global debug_package %{nil} Summary: Boot server configurator Name: cobbler License: GPLv2+ AutoReq: no Version: 2.4.1 Release: 1%{?dist} Source0: http://shenson.fedorapeople.org/cobbler/cobbler-%{version}.tar.gz Group: Applications/System BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot BuildArch: noarch Url: http://www.cobblerd.org/ BuildRequires: redhat-rpm-config BuildRequires: git BuildRequires: PyYAML BuildRequires: python-cheetah Requires: python >= 2.3 Requires: httpd Requires: tftp-server Requires: mod_wsgi Requires: createrepo Requires: python-cheetah Requires: python-netaddr Requires: python-simplejson Requires: python-urlgrabber Requires: PyYAML Requires: rsync Requires: syslinux %if 0%{?fedora} >= 11 || 0%{?rhel} >= 6 Requires: python(abi) >= %{pyver} Requires: genisoimage %else Requires: mkisofs %endif %if 0%{?fedora} >= 8 BuildRequires: python-setuptools-devel %else BuildRequires: python-setuptools %endif %if 0%{?fedora} >= 6 || 0%{?rhel} >= 5 Requires: yum-utils %endif %if 0%{?fedora} >= 16 BuildRequires: systemd-units Requires(post): systemd-sysv Requires(post): systemd-units Requires(preun): systemd-units Requires(postun): systemd-units %else Requires(post): /sbin/chkconfig Requires(preun): /sbin/chkconfig Requires(preun): /sbin/service %endif %description Cobbler is a network install server. Cobbler supports PXE, virtualized installs, and re-installing existing Linux machines. The last two modes use a helper tool, 'koan', that integrates with cobbler. There is also a web interface 'cobbler-web'. Cobbler's advanced features include importing distributions from DVDs and rsync mirrors, kickstart templating, integrated yum mirroring, and built-in DHCP/DNS Management. Cobbler has a XMLRPC API for integration with other applications. 
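As a pointer for the XMLRPC integration mentioned above, a minimal read-only client can reuse the same calls bin/tftpd.py makes. The hostname and MAC address below are placeholder example values, and the printed fields are just two of the attributes carried by the blended system record:

    import xmlrpclib

    server = xmlrpclib.Server("http://cobbler.example.com/cobbler_api")
    names = server.find_system({"mac_address": "AA:BB:CC:DD:EE:FF"})
    if names:
        system = server.get_system_for_koan(names[0])
        print system["name"], system.get("fetchable_files")
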
%prep %setup -q %build %{__python} setup.py build %install test "x$RPM_BUILD_ROOT" != "x" && rm -rf $RPM_BUILD_ROOT %{__python} setup.py install --optimize=1 --root=$RPM_BUILD_ROOT $PREFIX mkdir -p $RPM_BUILD_ROOT/etc/httpd/conf.d mv config/cobbler.conf $RPM_BUILD_ROOT/etc/httpd/conf.d/ mv config/cobbler_web.conf $RPM_BUILD_ROOT/etc/httpd/conf.d/ mkdir -p $RPM_BUILD_ROOT/etc/logrotate.d mv config/cobblerd_rotate $RPM_BUILD_ROOT/etc/logrotate.d/cobblerd mkdir -p $RPM_BUILD_ROOT/var/spool/koan %if 0%{?fedora} >= 9 || 0%{?rhel} > 5 mkdir -p $RPM_BUILD_ROOT/var/lib/tftpboot/images %else mkdir -p $RPM_BUILD_ROOT/tftpboot/images %endif rm -f $RPM_BUILD_ROOT/etc/cobbler/cobblerd %if 0%{?fedora} >= 16 rm -rf $RPM_BUILD_ROOT/etc/init.d mkdir -p $RPM_BUILD_ROOT%{_unitdir} install -m0644 config/cobblerd.service $RPM_BUILD_ROOT%{_unitdir} %post if [ $1 -eq 1 ] ; then # Initial installation /bin/systemctl daemon-reload >/dev/null 2>&1 || : elif [ "$1" -ge "2" ]; then # backup config if [ -e /var/lib/cobbler/distros ]; then cp /var/lib/cobbler/distros* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/profiles* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/systems* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/repos* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/networks* /var/lib/cobbler/backup 2>/dev/null fi if [ -e /var/lib/cobbler/config ]; then cp -a /var/lib/cobbler/config /var/lib/cobbler/backup 2>/dev/null fi # upgrade older installs # move power and pxe-templates from /etc/cobbler, backup new templates to *.rpmnew for n in power pxe; do rm -f /etc/cobbler/$n*.rpmnew find /etc/cobbler -maxdepth 1 -name "$n*" -type f | while read f; do newf=/etc/cobbler/$n/`basename $f` [ -e $newf ] && mv $newf $newf.rpmnew mv $f $newf done done # upgrade older installs # copy kickstarts from /etc/cobbler to /var/lib/cobbler/kickstarts rm -f /etc/cobbler/*.ks.rpmnew find /etc/cobbler -maxdepth 1 -name "*.ks" -type f | while read f; do newf=/var/lib/cobbler/kickstarts/`basename $f` [ -e $newf ] && mv $newf $newf.rpmnew cp $f $newf done /bin/systemctl try-restart cobblerd.service >/dev/null 2>&1 || : fi %preun if [ $1 -eq 0 ] ; then # Package removal, not upgrade /bin/systemctl --no-reload disable cobblerd.service > /dev/null 2>&1 || : /bin/systemctl stop cobblerd.service > /dev/null 2>&1 || : fi %postun /bin/systemctl daemon-reload >/dev/null 2>&1 || : if [ $1 -ge 1 ] ; then # Package upgrade, not uninstall /bin/systemctl try-restart cobblerd.service >/dev/null 2>&1 || : fi %triggerun -- cobbler < 2.0.11-3 # Save the current service runlevel info # User must manually run systemd-sysv-convert --apply cobblerd # to migrate them to systemd targets /usr/bin/systemd-sysv-convert --save cobblerd >/dev/null 2>&1 ||: # Run these because the SysV package being removed won't do them /sbin/chkconfig --del cobblerd >/dev/null 2>&1 || : /bin/systemctl try-restart cobblerd.service >/dev/null 2>&1 || : %else %post if [ "$1" = "1" ]; then # This happens upon initial install. 
Upgrades will follow the next else /sbin/chkconfig --add cobblerd elif [ "$1" -ge "2" ]; then # backup config if [ -e /var/lib/cobbler/distros ]; then cp /var/lib/cobbler/distros* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/profiles* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/systems* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/repos* /var/lib/cobbler/backup 2>/dev/null cp /var/lib/cobbler/networks* /var/lib/cobbler/backup 2>/dev/null fi if [ -e /var/lib/cobbler/config ]; then cp -a /var/lib/cobbler/config /var/lib/cobbler/backup 2>/dev/null fi # upgrade older installs # move power and pxe-templates from /etc/cobbler, backup new templates to *.rpmnew for n in power pxe; do rm -f /etc/cobbler/$n*.rpmnew find /etc/cobbler -maxdepth 1 -name "$n*" -type f | while read f; do newf=/etc/cobbler/$n/`basename $f` [ -e $newf ] && mv $newf $newf.rpmnew mv $f $newf done done # upgrade older installs # copy kickstarts from /etc/cobbler to /var/lib/cobbler/kickstarts rm -f /etc/cobbler/*.ks.rpmnew find /etc/cobbler -maxdepth 1 -name "*.ks" -type f | while read f; do newf=/var/lib/cobbler/kickstarts/`basename $f` [ -e $newf ] && mv $newf $newf.rpmnew cp $f $newf done # reserialize and restart # FIXIT: ????? #/usr/bin/cobbler reserialize /sbin/service cobblerd condrestart fi %preun if [ $1 = 0 ]; then /sbin/service cobblerd stop >/dev/null 2>&1 || : chkconfig --del cobblerd || : fi %postun if [ "$1" -ge "1" ]; then /sbin/service cobblerd condrestart >/dev/null 2>&1 || : /sbin/service httpd condrestart >/dev/null 2>&1 || : fi %endif %clean test "x$RPM_BUILD_ROOT" != "x" && rm -rf $RPM_BUILD_ROOT %files %defattr(-,root,root,-) %{_bindir}/cobbler %{_bindir}/cobbler-ext-nodes %{_bindir}/cobblerd %{_sbindir}/tftpd.py* %config(noreplace) %{_sysconfdir}/cobbler %config(noreplace) %{_sysconfdir}/logrotate.d/cobblerd %if 0%{?fedora} >= 16 %{_unitdir}/cobblerd.service %else /etc/init.d/cobblerd %endif %{python_sitelib}/cobbler %config(noreplace) /var/lib/cobbler %exclude /var/lib/cobbler/webui_sessions /var/log/cobbler /var/www/cobbler %{_mandir}/man1/cobbler.1.gz %config(noreplace) /etc/httpd/conf.d/cobbler.conf %if 0%{?fedora} >= 9 || 0%{?rhel} >= 5 %exclude %{python_sitelib}/cobbler/sub_process.py* %endif %if 0%{?fedora} >= 9 || 0%{?rhel} > 5 %{python_sitelib}/cobbler*.egg-info /var/lib/tftpboot/images %else /tftpboot/images %endif %doc AUTHORS CHANGELOG README COPYING %package -n koan Summary: Helper tool that performs cobbler orders on remote machines Group: Applications/System Requires: python >= 2.0 %if 0%{?fedora} >= 11 || 0%{?rhel} >= 6 Requires: python(abi) >= %{pyver} Requires: python-simplejson Requires: virt-install %endif %description -n koan Koan stands for kickstart-over-a-network and allows for both network installation of new virtualized guests and reinstallation of an existing system. 
For use with a boot-server configured with Cobbler %files -n koan %defattr(-,root,root,-) %dir /var/spool/koan %dir /var/lib/koan/config %{_bindir}/koan %{_bindir}/ovz-install %{_bindir}/cobbler-register %{python_sitelib}/koan %if 0%{?fedora} >= 9 || 0%{?rhel} >= 5 %exclude %{python_sitelib}/koan/sub_process.py* %exclude %{python_sitelib}/koan/opt_parse.py* %exclude %{python_sitelib}/koan/text_wrap.py* %endif %{_mandir}/man1/koan.1.gz %{_mandir}/man1/cobbler-register.1.gz %dir /var/log/koan %doc AUTHORS COPYING CHANGELOG README %package -n cobbler-web Summary: Web interface for Cobbler Group: Applications/System Requires: cobbler Requires: Django >= 1.1.2 Requires: mod_wsgi %if 0%{?fedora} >= 11 || 0%{?rhel} >= 6 Requires: python(abi) >= %{pyver} %endif %description -n cobbler-web Web interface for Cobbler that allows visiting http://server/cobbler_web to configure the install server. %post -n cobbler-web # Change the SECRET_KEY option in the Django settings.py file # required for security reasons, should be unique on all systems RAND_SECRET=$(openssl rand -base64 40 | sed 's/\//\\\//g') sed -i -e "s/SECRET_KEY = ''/SECRET_KEY = \'$RAND_SECRET\'/" /usr/share/cobbler/web/settings.py %files -n cobbler-web %defattr(-,root,root,-) %doc AUTHORS COPYING CHANGELOG README %config(noreplace) /etc/httpd/conf.d/cobbler_web.conf %defattr(-,apache,apache,-) /usr/share/cobbler/web %dir %attr(700,apache,root) /var/lib/cobbler/webui_sessions /var/www/cobbler_webui_content/ %changelog * Thu Jun 20 2013 James Cammarata 2.4.0-1 - Release 2.4.0-1 (jimi@sngx.net) * Fri May 24 2013 James Cammarata 2.4.0-beta6 - New BETA release - 2.4.0 beta6 * Mon Apr 22 2013 James Cammarata 2.4.0-beta5 - A few bugfixes and rebuilding the RPM because of a goof (jimi@sngx.net) * Wed Apr 03 2013 James Cammarata 2.4.0-beta4 - 2.4.0-beta4 release * Wed Dec 12 2012 James Cammarata 2.4.0-beta3 - New release 2.4.0-beta3 * Thu Oct 11 2012 James Cammarata 2.4.0-beta2 - Modified spec version/release to be 2.4.0-beta2 (jimi@sngx.net) - fixing up a bad commit merge (jimi@sngx.net) * Thu Oct 11 2012 James Cammarata 2.4.0-beta1 - Beta Release 1 of 2.4.0 - BUGFIX - Issue #329 - Systems no longer allow an add with an image for a parent (jimi@sngx.net) - BUGFIX - Issue #327 - revert 5afcff7 and fix in a more sane way (jimi@sngx.net) - Removed some duplicates created by reapplying a patch (jimi@sngx.net) - BUGFIX - Issue #267 - old python-virtinst does not support --boot (jimi@sngx.net) - Revise install_post_puppet.py to use newer puppet syntax (stephen@esstec.co.uk) - Get rid of deprecated Puppet syntax so that cobbler works with Puppet 3.0 (stephen@esstec.co.uk) - Added ubuntu to dist check for named.conf location (daniel.givens@rackspace.com) - Expanded automatic determination of tftpboot path, isc dhcp and bind service names and config files based on distro. (daniel@givenstx.com) - Make the service name for DHCP and DNS restarts configurable for better portable between distros. (daniel.givens@rackspace.com) - Serial based on formatted date and revision number (alevy@mobitv.com) - Correct undefined variable name (jbd@jbdenis.net) - fix merge Issue #252 BUGFIX and #262 (daikame@gmail.com) - Add check for valid driver_type before executing qemu-img (jimi@sngx.net) - fix mistake remove import. (daikame@gmail.com) - move exec method to utils.py, and catch unexpected exception. (daikame@gmail.com) - not check driver type on create method. 
(daikame@gmail.com) - BUGFIX - Issue #305 - Incorrect Kickstart file when gPXE enabled (jimi@sngx.net) - BUGFIX - Issue #304 - Cobbler does not store values correctly for ksmeta Objects were getting flattened improperly, so it was losing escapes/quoting for values with spaces (jimi@sngx.net) - add vmdk and raw file create support. (daikame@gmail.com) - BUGFIX - Issue #267 - old python-virtinst does not support --boot (jimi@sngx.net) - Modified spec version/release to be 2.4.0-beta-1 (jimi@sngx.net) - Initial commit for mysql backend support (jimi@sngx.net) - BUGFIX - Issue #277 - move webroot to /srv/www for debian/ubuntu (jimi@sngx.net) - FEATURE - adding 'zonetype' variable for DNS zone rendering (jimi@sngx.net) - BUGFIX - Issue #278 - cobbler import fails for ubuntu images due to rsync args (jimi@sngx.net) - BUGFIX - Issue #285 - update cobbler man page for incorrect options (jimi@sngx.net) - BUGFIX - Issue #241 - adding distro with blank name via XMLRPC should not work (jimi@sngx.net) - BUGFIX - Issue #272 - allow anamon to log entries when building systems based on profiles (no corresponding system record) (jimi@sngx.net) - BUGFIX - Issue #252 - fuzzy match on lvs name returns a false match preventing LV creation (jimi@sngx.net) - BUGFIX - Issue #287 - patch to allow templar to work without a config, which was breaking the tftpd.py script (jimi@sngx.net) - add qcow2 driver type (daikame@gmail.com) - fix koan qemu-machine-type param test. (daikame@gmail.com) - Only cosmetic cleanup - removed commands that were commented out, added spaces for more clear code (flaks@bnl.gov) - Modified sample.seed to make use kickstart_start and kickstart_done snippets for debian. As a result the following cobbler features work for debian: - prevent net boot looping - cobbler status reflects debian installations - preseed file is downloaded a nd saved on the installed system as /var/log/cobbler.seed Also made download_config_files_deb snippet, make use of late_command New post_run_deb snippet allows to execute post installation script. (flaks@bnl.gov) - Some changes for testing (jimi@sngx.net) - Minor fix for urlparse on older pythons (>2.5) (jimi@sngx.net) - FEATURE - Issue #253 - Use PEERDNS=no for DHCP interfaces when name servers are specified (jimi@sngx.net) - install-tree for debian/ubuntu modified to take tree= from meta data. http, ftp and nfs remote tree locations supported (flaks@bnl.gov) - add support of custom logical volume name (daikame@gmail.com) - Partial revert of 87acfc8b, and a minor change to bring the koan extra-args inline with the PXE args (jimi@sngx.net) - New default preseed, and a few minor changes to make ubuntu auto install work better (jimi@sngx.net) - Add support for qemu machine type to emulate (option --qemu-machine-type). (isaoshimizu@gmail.com) - Modern x86 kernels have 2048 char limit and this is needed to support configurations with kickstart+NIC kernel params. Otherwise koan refuses to accept the param list. (oliver@cpan.org) - Allow koan's -S option to work for SuSE breed. Also remove -S for breed=None, as I assume "Red Hat" is not a sane assumption for all Distros without a breed. (oliver@cpan.org) - Only add a udev net rule for an interface if the MAC is set. This fixes behaviour whereby a dummy udev rule at eth0 forces the first NIC to get eth1 post-install. (oliver@cpan.org) - Make the domainname setting be the full eth0 DNS Name, minus the first dotted part (and not the FQDN). 
(oliver@cpan.org) - BUGFIX - Issue #252 - fuzzy match on lvs name returns a false match preventing LV creation (jimi@sngx.net) - Added back in the filesystem loader. (oliver@cpan.org) - BUGFIX - Issue #247 - Reposync does not work from the web interface (jimi@sngx.net) - BUGFIX - Issue #246 - CentOS 5.x install fence_tools to /sbin/ (jimi@sngx.net) - Fix post_report trigger typo (jimi@sngx.net) - Some fixes for koan running with an old virt-install (jimi@sngx.net) - Define pxe_menu_items variable when creating PXE files for systems (jthiltges2@unl.edu) - Refactor PXE and GRUB menu item creation into a separate function (jthiltges2@unl.edu) - django 1.4 and later have deprecated the old TEMPLATE_LOADERS and replaced them with a new app_directories.Loader (oliver@cpan.org) - Add support for UEFI boot to the subnet, but not for defined systems yet. (erinn.looneytriggs@gmail.com) - Fix redhat import whitelist for Fedora 17 (jimi@sngx.net) - Fix unittest on the case of haven't virt-install libs. (daikame@gmail.com) - os_version for debian should be similar to ubunty for virt-install to work changed tree in app.py so that I can use debian mirror different from cobbler server (flaks@bnl.gov) - fedora 17 changed the output of ifconfig command. This will make IFNAME set in snippets again (flaks@bnl.gov) - remove edit for now (flaks@bnl.gov) - Fixed snippets for bonded_bridge_slave and a few other fixes for koan/web GUI (jimi@sngx.net) - Initial support for bonded_bridge_slave type. TODO: modifying snippets to actually make it work... (jimi@sngx.net) - The webui_sessions directory belongs only to cobbler-web (chutzimir@gmail.com) - RPM: put cobbler*.conf files only in /etc/httpd/conf.d (cristian.ciupitu@yahoo.com) - better fix for pull request #228 (jorgen.maas@gmail.com) - make rpms failed because the misc/ directory containing the augeas lense could not be found. this simple diff fixes that. (jorgen.maas@gmail.com) - Ubuntu actually requires auto=true in kopts See http://serverfault.com/a/144290/39018 (ekirpichov@gmail.com) - Whitespace cleanup for the new openvz stuff (jimi@sngx.net) - Remove dead code (useless imports) (cristian.ciupitu@yahoo.com) - BUGFIX extra-args option problems (daikame@gmail.com) - FIX koan virt-install tests. (daikame@gmail.com) - added debian support to prevent net boot looping (flaks@bnl.gov) - README.openvz: - added (nvrhood@gmail.com) - scripts/ovz-install: - added support for "services" kickstart option - corrected repos and installation source processing (nvrhood@gmail.com) - cobbler.spec, setup.py: - added scripts/ovz-install (nvrhood@gmail.com) - koan/openvzcreate.py, scripts/ovz-install: - changes in copyright notice (nvrhood@gmail.com) - koan/app.py: - bug in koan: size of freespace on VG expressed as float with comma, but need fload with point (nvrhood@gmail.com) - koan/app.py: - added type "openvz" (nvrhood@gmail.com) - cobbler/collection.py: - openvz containers doesn't need to boot from PXE, so we prevent PXE-menu creation for such profiles. (nvrhood@gmail.com) - cobbler/item_profile.py, cobbler/utils.py: - added "openvz" virtualization type (nvrhood@gmail.com) - cobbler/item_system.py: - added openvz for virt_type (nvrhood@gmail.com) - [BUGFIX] template errors can hit an exception path that references an undefined variable (jimi@sngx.net) - If the call to int() fails, inum has no value, thus the reference to inum in the except clause causes an UnboundLocalError when it tries to reference inum. 
(joshua@azariah.com) - Add new ubuntu (alpha) version to codes.py (jorgen.maas@gmail.com) - Not all remove current ifcfg- post_install_network_config (me@n0ts.org) - Update systemctl script to resolve some issues (jimi@sngx.net) - More spec fixes (jimi@sngx.net) - Removing replicate_use_default_rsync_options setting and setting replicate_rsync_options to existing rsync default. Issue #58 (john@julienfamily.com) - Commit for RFE: Expose rsync options during replication. Issue #58 (john@julienfamily.com) - Yet more HTML/CSS fixes, cleaning up some overly large inputs caused by other CSS changes (jimi@sngx.net) - More HTML/CSS improvements for new weblayout (jimi@sngx.net) - CSS improvements for the tabbed layout (jimi@sngx.net) - Fix for settings edit using the new tab format (jimi@sngx.net) - Added a cancel button to replace the reset button (jimi@sngx.net) - Fix saving of multiselect fields (jimi@sngx.net) - Modification to generic_edit template to use tabs for categories plus some miscellaneous cleanup (jimi@sngx.net) - Adding an example line for redhat imports to the whitelist file (jimi@sngx.net) - Another minor fix for suse imports - fixing up name when using --available-as (already done in other import modules) - allowing multiple arch imports (also already done in other imports) (jimi@sngx.net) - Some fixups for suse using --available-as (jimi@sngx.net) - Fix for import when using --available-as - currently rsyncs full remote tree, changing that to only import files in a white list - some modifications to import modules to clean some things up and make available-as work better - fix in utils.py for path_tail, which was not working right and appending the full path (jimi@sngx.net) - Run the same sed command on the default distributed config file to ensure consistent indentation (jimi@sngx.net) - Add setting to enable/disable dynamic settings changes Adding cobblersettings.aug to distributed files, since we need a copy that doesn't insert tabs Added a "cobbler check" that checks if dynamic settings is enabled and prints a sed command to cleanup the settings file spacing/indents (jimi@sngx.net) - Change cli command "settings" to "setting" to match other commands (which are not plurarlized) (jimi@sngx.net) - Removing commented-out try/except block in config.py, didn't mean to commit this (jimi@sngx.net) - Fixed/improved CLI reporting for settings (jimi@sngx.net) - Added support for validating setting type when saving Also fixed up the augeas stuff to save lists and hashes correctly (jimi@sngx.net) - Fix for incorrect redirect when login times out when looking at a setting edit (jimi@sngx.net) - Dynamic settings edit support for the web GUI (jimi@sngx.net) - Added ability to write settings file via augeas (jimi@sngx.net) - Initial support for modifying settings live Changed settings do not survive a reboot and revert to what's in /etc/cobbler/settings TODO: * report --name show a single setting * validate settings based on type (string, list, bool, etc.) 
* web support for editing * persisting settings after change (jimi@sngx.net) - Branch for 2.4.0, updated spec and setup.py (jimi@sngx.net) * Sun Jun 17 2012 James Cammarata 2.2.3-2 - [BUGFIX] re-enable writing of DHCP entries for non-pxeboot-enabled systems unless they're static (jimi@sngx.net) * Tue Jun 05 2012 James Cammarata 2.2.3-1 - [BUGFIX] add dns to kernel commandline when using static interface (frido@enu.zolder.org) - [BUGFIX] issue #196 - repo environment variables bleed into other repos during sync process This patch has reposync cleanup/restore any environment variables that were changed during the process (jimi@sngx.net) - BUGFIX quick dirty fix to work around an issue where cobbler would not log in ldap usernames which contain uppercase characters. at line 60 instead of "if user in data", "if user.lower() in data" is used. It would appear the parser puts the usernames in data[] in lowercase, and the comparison fails because "user" does hold capitalizations. (matthiasvandegaer@hotmail.com) - [BUGFIX] simplify SELinux check reporting * Remove calls to semanage, policy prevents apps from running that directly (and speeds up check immensely) * Point users at a wiki page which will contain details on ensuring cobbler works with SELinux properly (jimi@sngx.net) - [BUGFIX] issue #117 - incorrect permissions on files in /var/lib/cobbler (j-nomura@ce.jp.nec.com) - [BUGFIX] issue #183 - update objects mgmt classes field when a mgmt class is renamed (jimi@sngx.net) - [BUGFIX] adding some untracked directories and the new augeas lense to the setup.py and cobbler.spec files (jimi@sngx.net) - [FEATURE] Added ability to disable grubby --copy-default behavior for distros that may have problems with it (jimi@sngx.net) - [SECURITY] Major changes to power commands: * Fence options are now based on /usr/sbin/fence_* - so basically anything the fence agents package provides. * Templates will now be sourced from /etc/cobbler/power/fence_.template. These templates are optional, and are only required if you want to do extra options for a given command. - All options for the fence agent command are sent over STDIN. * Support for ipmitool is gone, use fence_ipmilan instead (which uses ipmitool under the hood anyway). This may apply to other power types if they were provided by a fence_ command. * Modified labels for the power options to be more descriptive. (jimi@sngx.net) - [BUGFIX] issue #136 - don't allow invalid characters in names when copying objects (jimi@sngx.net) - [BUGFIX] issue #168 - change input_string_or_list to use shlex for split This function was using a regular string split, which did not allow quoted or escaped strings to be preserved. (jimi@sngx.net) - [BUGFIX] Correct method to process the template file. This Fixes the previous issue and process the template. (charlesrg@gmail.com) - [BUGFIX] issue #170 - koan now checks length of drivers list before indexing (daniel@defreez.com) - [BUGFIX] Issue #153 - distro delete doesn't remove link from /var/www/cobbler/links Link was being created incorrectly during the import (jimi@sngx.net) - [FEATURE] snippets: save/restore boot-device on ppc64 on fedora17 (nacc@us.ibm.com) - [BUGFIX] Fixed typo in pre_anamon (brandor5@gmail.com) - [BUGFIX] Added use of $http_port to server URL in pre_anamon and post_anamon (brandor5@gmail.com) - [BUGFIX] Fixed dnsmasq issue regarding missing dhcp-host entries (cobbler@basjes.nl) - [BUGFIX] in buildiso for RedHat based systems. 
The interface->ip resolution was broken when ksdevice=bootif (default) (jorgen.maas@gmail.com) - [BUGFIX] rename failed for distros that did not live under ks_mirror (jimi@sngx.net) - [BUGFIX] Partial revert of commit 3c81dd3081 - incorrectly removed the 'extends' template directive, breaking rendering in django (jimi@sngx.net) - [BUGFIX] Reverting commit 1d6c53a97, which was breaking spacewalk Changed the web interface stuff to use the existing extended_version() remote call (jimi@sngx.net) - [BUGFIX] Minor fix for serializer_pretty_json change, setting indent to 0 was still causing more formatted JSON to be output (jimi@sngx.net) - [SECURITY] Adding PrivateTmp=yes to the cobblerd.service file for systemd (jimi@sngx.net) - [FEATURE] add a config option to enable pretty JSON output (disabled by default) (aronparsons@gmail.com) - [BUGFIX] issue #107 - creating xendomains link for autoboot fails Changing an exception to a printed warning, there's no need to completely bomb out on the process for this (jimi@sngx.net) - [BUGFIX] issue #28 - Cobbler drops errors on the floor during a replicate Added additional logging to add_ functions to report an error if the add_item call returns False (jimi@sngx.net) - [BUGFIX] add requirement for python-simplejson to koan's package (jimi@sngx.net) - [BUGFIX] action_sync: fix sync_dhcp remote calls (nacc@us.ibm.com) - [BUGFIX] Add support for KVM paravirt (justin@thespies.org) - [BUGFIX] Makefile updates for debian/ubuntu systems (jimi@sngx.net) - [BUGFIX] fix infinite netboot cycle with ppc64 systems (nacc@us.ibm.com) - [BUGFIX] Don't allow Templar classes to be created without a valid config There are a LOT of places in the templar.py code that use self.settings without checking to make sure a valid config was passed in. This could cause random stack dumps when templating, so it's better to force a config to be passed in. Thankfully, there were only two pieces of code that actually did this, one of which was the tftpd management module which was fixed elsewhere. 
(jimi@sngx.net) - [BUGFIX] instance of Templar() was being created without a config passed in This caused a stack dump when the manage_in_tftpd module tried to access the config settings (jimi@sngx.net) - [BUGFIX] Fix for issue #17 - Make cobbler import be more squeaky when it doesn't import anything (jimi@sngx.net) - [FEATURE] autoyast_sample: save and restore boot device order (nacc@us.ibm.com) - [BUGFIX] Fix for issue #105 - buildiso fails Added a new option for buildiso: --mkisofs-opts, which allows specifying extra options to mkisofs TODO: add input box to web interface for this option (jimi@sngx.net) - [BUGFIX] incorrect lower-casing of kickstart paths - regression from issue #43 (jimi@sngx.net) - [FEATURE] Automatically detect and support bind chroot (orion@cora.nwra.com) - [FEATURE] Add yumopts to kickstart repos (orion@cora.nwra.com) - [BUGFIX] Fix issue with cobbler system reboot (nacc@us.ibm.com) - [BUGFIX] fix stack trace in write_pxe_file if distro==None (smoser@brickies.net) - [BUGFIX] Changed findkeys function to be consisten with keep_ssh_host_keys snippet (flaks@bnl.gov) - [BUGFIX] Fix for issue #15 - cobbler image command does not recognize --image-type=memdisk (jimi@sngx.net) - [BUGFIX] Issue #13 - reposync with --tries > 1 always repeats, even on success The success flag was being set when the reposync ran, but didn't break out of the retry loop - easy fix (jimi@sngx.net) - [BUGFIX] Fix for issue #42 - kickstart not found error when path has leading space (jimi@sngx.net) - [BUGFIX] Fix for issue #26 - Web Interface: Profile Edit * Added jquery UI stuff * Added javascript to generic_edit template to make all selects in the class "edit" resizeable (jimi@sngx.net) - [BUGFIX] Fix for issue #53 - cobbler system add without --profile exits 0, but does nothing (jimi@sngx.net) - [BUGFIX] Issue #73 - Broken symlinks on distro rename from web_gui (jimi@sngx.net) - regular OS version maintenance (jorgen.maas@gmail.com) - [BUGFIX] let koan not overwrite existing initrd+kernel (ug@suse.de) - [FEATURE] koan: * Port imagecreate to virt-install (crobinso@redhat.com) * Port qcreate to virt-install (crobinso@redhat.com) * Port xen creation to virt-install (crobinso@redhat.com) - [FEATURE] new snippet allows for certificate-based RHN registration (jim.nachlin@gawker.com) - [FEATURE] Have autoyast by default behave more like RHEL, regarding networking etc. (chorn@fluxcoil.net) - [BUGFIX] sles patches (chorn@fluxcoil.net) - [BUGFIX] Simple fix for issue where memtest entries were not getting created after installing memtest86+ and doing a cobbler sync (rharriso@redhat.com) - [BUGFIX] REMOTE_ADDR was not being set in the arguments in calls to CobblerSvc instance causing ip address not to show up in install.log. (jweber@cofront.net) - [BUGFIX] add missing import of shutil (aparsons@redhat.com) - [BUGFIX] add a sample kickstart file for ESXi (aparsons@redhat.com) - [BUGFIX] the ESXi installer allows two nameservers to be defined (aparsons@redhat.com) - [BUGFIX] close file descriptors on backgrounded processes to avoid hanging %%pre (aparsons@redhat.com) - [BUGFIX] rsync copies the repositories with --delete hence deleting everyhting local that isn't on the source server. The createrepo then creates (following the default settings) a cache directory ... which is deleted by the next rsync run. Putting the cache directory in the rsync exclude list avoids this deletion and speeds up running reposync dramatically. 
(niels@basjes.nl) - [BUGFIX] Properly blame SELinux for httpd_can_network_connect type errors on initial setup. (michael.dehaan@gmail.com) - fix install=... kernel parameter when importing a SUSE distro (ug@suse.de) - [BUGFIX] Force Django to use the system's TIME_ZONE by default. (jorgen.maas@gmail.com) - [FEATURE] Separated check for permissions from file existence check. (aaron.peschel@gmail.com) - [BUGFIX] If the xendomain symlink already exists, a clearer error will be produced. (aaron.peschel@gmail.com) - [FEATURE] Adding support for ESXi5, and fixing a few minor things (like not having a default kickstart for esxi4) Todos: * The esxi*-ks.cfg files are empty, and need proper kickstart templates * Import bug testing and general kickstart testing (jimi@sngx.net) - [FEATURE] Adding basic support for gPXE (jimi@sngx.net) - [FEATURE] Add arm as a valid architecture. (chuck.short@canonical.com) - [SECURITY] Changes PYTHON_EGG_CACHE to a safer path owned just by the webserver. (chuck.short@canonical.com) - [BUGFIX] koan: do not include ks_meta args when obtaining tree When obtaining the tree for Ubuntu machines, ensure that ks_meta args are not passed as part of the tree if they exist. (chuck.short@canonical.com) - [FEATURE] koan: Use grub2 for --replace-self instead of grubby The koan option '--replace-self' uses grubby, which relies on grub1, to replace a local installation by installing the new kernel/initrd into grub menu entries. Ubuntu/Debian no longer uses grub1. This patch adds the ability to use grub2 to add the downloaded kernel/initrd to a menuentry. On reboot, it will boot from the install kernel, reinstalling the system. Fixes (LP: #766229) (chuck.short@canonical.com) - [BUGFIX] Fix reposync missing env variable for debmirror Fixes missing HOME env variable for debmirror by hardcoding the environment variable to /var/lib/cobbler (chuck.short@canonical.com) - [BUGFIX] Fix creation of repo mirror when importing iso. Fixes the creation of a disabled repo mirror when importing ISOs such as the mini.iso that does not contain any mirror/packages. Additionally, really enables 'apt' as a possible repository. (chuck.short@canonical.com) - [BUGFIX] adding default_template_type to settings.py; templar had some issues when the setting was not specified in /etc/cobbler/settings (jimi@sngx.net) - [BUGFIX] fix for the following issue: can't save networking options of a system in the cobbler web interface. (#8) (jimi@sngx.net) - [BUGFIX] Add a new setting to force CLI commands to use the localhost for xmlrpc (chjohnst@gmail.com) - [BUGFIX] Don't blow up on broken links under /var/www/cobbler/links (jeffschroeder@computer.org) - [SECURITY] Making https the default for the cobbler web GUI. Also modifying the cobbler-web RPM build to require mod_ssl and mod_wsgi (missing wsgi was an oversight, just correcting it now) (jimi@sngx.net) - [FEATURE] Adding authn_pam. This also creates a new setting - authn_pam_service, which allows the user to configure which PAM service they want to use for cobblerd. The default is the 'login' service (jimi@sngx.net) - [SECURITY] Change in cobbler.spec to modify permissions on the webui sessions directory to prevent non-privileged user access to the session keys (jimi@sngx.net) - [SECURITY] Enabling CSRF protection for the web interface (jimi@sngx.net) - [SECURITY] Convert all yaml loads to safe_loads for security/safety reasons. 
https://bugs.launchpad.net/ubuntu/+source/cobbler/+bug/858883 (jimi@sngx.net) - [FEATURE] Added the setting 'default_template_type' to the settings file, and created logic to use that in Templar().render(). Also added an option to the same function to pass the template type in as an argument. (jimi@sngx.net) - [FEATURE] Initial commit for adding support for other template languages, namely jinja2 in this case (jimi@sngx.net) * Tue Nov 15 2011 Scott Henson 2.2.2-1 - Changelog update (shenson@redhat.com) - Fixed indentation on closing tr tag (gregswift@gmail.com) - Added leader column to the non-generic tables so that all tables have the same layout. It leaves room for a checkbox and multiple selects i nthese other tables as well. (gregswift@gmail.com) - Added action class to the event log link to bring it inline with other table functions (gregswift@gmail.com) - buildiso bugfix: overriding dns nameservers via the dns kopt now works. reported by Simon Woolsgrove (jorgen.maas@gmail.com) - Fix for pxegen, where an image without a distro could cause a stack dump on cobbler sync (jimi@sngx.net) - Added initial support for specifying the on-disk format of virtual disks, currently supported for QEMU only when using koan (jimi@sngx.net) - Add fedora16, rawhide, opensuse 11.2, 11.3, 11.4 and 12.1 to codes.py This should also fix ticket #611 (jorgen.maas@gmail.com) - Use VALID_OS_VERSIONS from codes.py in the redhat importer. (jorgen.maas@gmail.com) - Cleanup: use utils.subprocess_call in services.py (jorgen.maas@gmail.com) - Cleanup: use utils.subprocess_call in remote.py. (jorgen.maas@gmail.com) - Cleanup: use utils.subprocess_call in scm_track.py. Also document that 'hg' is a valid option in the settings file. (jorgen.maas@gmail.com) - Dont import the sub_process module when it's not needed. (jorgen.maas@gmail.com) - Fixes to import_tree() to actually copy files to a safe place when --available-as is specified. Also some cleanup to the debian/ubuntu import module for when --available-as is specified. (jimi@sngx.net) - Modification to import processes so that rsync:// works as a path. These changes should also correct the incorrect linking issue where the link created in webdir/links/ pointed at a directory in ks_mirror without the arch specified, resulting in a broken link if --arch was specified on the command line Also removed the .old import modules for debian/ubuntu, which were replaced with the unified manage_import_debian_ubuntu.py (jimi@sngx.net) - cleanup: use codes.VALID_OS_VERSIONS in the freebsd importer (jorgen.maas@gmail.com) - cleanup: use codes.VALID_OS_VERSIONS in the debian/ubuntu importer (jorgen.maas@gmail.com) - Bugfix: add the /var/www/cobbler/pub directory to setup.py. Calling buildiso from cobbler-web now works as expected. (jorgen.maas@gmail.com) - BUGFIX: patch koan (xencreate) to correct the same issue that was broken for vmware regarding qemu_net_type (jimi@sngx.net) - BUGFIX: fixed issue with saving objects in the webgui failing when it was the first of that object type saved. (jimi@sngx.net) - Minor fix to the remote version to use the nicer extended version available (jimi@sngx.net) - Fix a bug in buildiso when duplicate kopt keys are used. Reported and tested by Simon Woolsgrove (jorgen.maas@gmail.com) - Fix for koan, where vmwcreate.py was not updated to accept the network type, causing failures. 
(jimi@sngx.net) - Added a %post section for the cobbler-web package, which replaces the SECRET_KEY field in the Django settings.py with a random string (jimi@sngx.net) - BUGFIX: added sign_puppet_certs_automatically to settings.py. The fact that this was missing was causing failures in the the pre/post puppet install modules. (jimi@sngx.net) - set the auto-boot option for a virtual machine (ug@suse.de) - Correction for koan using the incorrect default port for connecting to cobblerd (jimi@sngx.net) - config/settings: add "manage_tftpd: 1" (default setting) (cristian.ciupitu@yahoo.com) * Wed Oct 05 2011 Scott Henson 2.2.1-1 - Import changes for systemd from the fedora spec file (shenson@redhat.com) * Wed Oct 05 2011 Scott Henson 2.2.0-1 - Remove the version (shenson@redhat.com) - New upstream 2.2.0 release (shenson@redhat.com) - Add networking snippet for SuSE systems. (jorgen.maas@gmail.com) - Add a /etc/hosts snippet for SuSE systems. (jorgen.maas@gmail.com) - Add a proxy snippet for SuSE systems. (jorgen.maas@gmail.com) - Buildiso: make use of the proxy field (SuSE, Debian/Ubuntu). (jorgen.maas@gmail.com) - Rename buildiso.header to buildiso.template for consistency. Also restore the local LABEL in the template. (jorgen.maas@gmail.com) - Bugfix: uppercase macaddresses used in buildiso netdevice= keyword cause the autoyast installer to not setup the network and thus fail. (jorgen.maas@gmail.com) - Buildiso: minor cleanup diff. (jorgen.maas@gmail.com) - Buildiso: behaviour changed after feedback from the community. (jorgen.maas@gmail.com) - Build standalone ISO from the webinterface. (jorgen.maas@gmail.com) - Fix standalone ISO building for SuSE, Debian and Ubuntu. (jorgen.maas@gmail.com) - add proxy field to field_info.py (jorgen.maas@gmail.com) - Remove FreeBSD from the unix breed as it has it's own now. Also, add freebsd7 as it is supported until feb 2013. Minor version numbers don't make sense, also removed. (jorgen.maas@gmail.com) - Add a proxy field to profile and system objects. This is useful for environments where systems are not allowed to make direct connections to the cobbler/repo servers. (jorgen.maas@gmail.com) - Introduce a "status" field to system objects. Useful in environments where DTAP is required, the possible values for this field are: development, testing, acceptance, production (jorgen.maas@gmail.com) - Buildiso: only process profiles for selected systems. (jorgen.maas@gmail.com) - Buildiso: add batch action to build an iso for selected profiles. (jorgen.maas@gmail.com) - Buildiso: use management interface feature. (jorgen.maas@gmail.com) - Buildiso: get rid of some code duplication (ISO header). (jorgen.maas@gmail.com) - Buildiso: add interface to macaddr resolution. (jorgen.maas@gmail.com) - Buildiso: add Debian and Ubuntu support. (jorgen.maas@gmail.com) - Buildiso: select systems from the webinterface. (jorgen.maas@gmail.com) - Fix an exception when buildiso is called from the webinterface. (jorgen.maas@gmail.com) - fix power_virsh template to check dom status before executing command. (bpeck@redhat.com) - if hostname is not resolvable do not fail and use that hostname (msuchy@redhat.com) - Removed action_import module and references to it in code to prevent future confusion. (jimi@sngx.net) - Fixing redirects after a failed token validation. You should now be redirected back to the page you were viewing after having to log back in due to a forced login. 
(jimi@sngx.net) - Use port to access cobbler (peter.vreman@acision.com) - Stripping "g" from vgs output case-insensitive runs faster (mmello@redhat.com) - Adding ability to create new sub-directories when saving snippets. Addresses trac #634 - save new snippet fails on non existing subdir (jimi@sngx.net) - Fix traceback when executing "cobbler system reboot" with no system name specified Trac ticket #578 - missing check for name option with system reboot (jimi@sngx.net) - bind zone template writing (jcallaway@squarespace.com) - Removing the duplicate lines from importing re module (mmello@redhat.com) - Merge remote-tracking branch 'jimi1283/bridge-interface' (shenson@redhat.com) - Modification to allow DEPRECATED options to be added as options to optparse so they work as aliases (jimi@sngx.net) - Re-adding the ability to generate a random mac from the webui. Trac #543 (Generate random mac missing from 2.x webui) (jimi@sngx.net) - Merge remote-tracking branch 'jsabo/fbsdreplication' (shenson@redhat.com) - Tim Verhoeven (Tue. 08:35) (Cobbler attachment) Subject: [PATCH] Add support to koan to select type of network device to emulate To: cobbler development list Date: Tue, 2 Aug 2011 14:35:21 +0200 (shenson@redhat.com) - Hello, (shenson@redhat.com) - scm_track: Add --all to git add options to handle deletions (tmz@pobox.com) - Moved HEADER heredoc from action_buildiso.py to /etc/cobbler/iso/buildiso.header (gbailey@terremark.com) - Enable replication for FreeBSD (jsabo@verisign.com) - Merge branch 'master' into bridge-interface (jimi@sngx.net) - Remove json settings from local_get_cobbler_xmlrpc_url() (jsabo@verisign.com) - 1) Moving --subnet field to --netmask 2) Created DEPRECATED_FIELDS structure in field_info.py to deal with moves like this * also applies to the bonding->interface_type move for bridged interface support (jimi@sngx.net) - Merge remote-tracking branch 'jimi1283/bridge-interface' (shenson@redhat.com) - Fixing up some serializer module stuff: * detecting module load errors when trying to deserialize collections * added a what() function to all the serializer modules for ID purposes * error detection for mongo stuff, including pymongo import problems as well as connection issues (jimi@sngx.net) - Cleanup of bonding stuff in all files, including webui and koan. Additional cleanup in the network config scripts, and re-added the modprobe.conf renaming code to the post install network config. (jimi@sngx.net) - Initial rework to allow bridge/bridge slave interfaces Added static route configuration to pre_install_network_config Major cleanup/reworking of post_install_network_config script (jimi@sngx.net) - Fix for bad commit of some json settings test (jimi@sngx.net) - Merge remote-tracking branch 'jsabo/fbsdimport' (shenson@redhat.com) - Adding initial support for FreeBSD media importing (jsabo@verisign.com) - Setting TIME_ZONE to None in web/settings.py causes a 500 error on a RHEL5 system with python 2.4 and django 1.1. Commenting out the config line has the same effect as setting it to None, and prevents the 500. (jimi@sngx.net) - Fixes for importing RHEL6: * path_tail() was previously moved to utils, a couple places in the import modules still used self.path_tail instead of utils.path_tail, causing a stack dump * Fixed an issue in utils.path_tail(), which was using self. 
still from when it was a member of the import class * When mirror name was set on import and using --available-as, it was appending a lot of junk instead of just using the specified mirror name (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Fix a quick error (shenson@redhat.com) - Set the tftpboot dir for rhel6 hosts (jsabo@verisign.com) - Fixed a typo (jorgen.maas@gmail.com) - Added an extra field in the system/interface item. The field is called "management" and should be used to identify the management interface, this could be useful information for multihomed systems. (jorgen.maas@gmail.com) - In the event log view the data/time field got wrapped which is very annoying. Fast fix for now, i'm pretty sure there are better ways to do this. (jorgen.maas@gmail.com) - Event log soring on date reverted, let's sort on id instead. Reverse over events in the template. Convert gmtime in the template to localtime. (jorgen.maas@gmail.com) - Sort the event log by date/time (jorgen.maas@gmail.com) - Remove some unsupported OS versions from codes.py (jorgen.maas@gmail.com) - Some changes in the generate_netboot_iso function/code: - Users had to supply all system names on the commandline which they wanted to include in the ISO boot menu. This patch changes that behaviour; all systems are included by default now. You can still provide an override with the --systems parameter, thus making this feature more consistent with what one might expect from reading the help. - While at it I tried to make the code more readable and removed some unneeded iterations. - Prevent some unneeded kernel/initrd copies. - You can now override ip/netmask/gateway/dns parameters with corresponding kernel_options. - Fixed a bug for SuSE systems where ksdevice should be netdevice. - If no ksdevice/netdevice (or equivalent) has been supplied via kernel_options try to guess the proper interface to use, but don't just use one if we can't be sure about it (e.g. for multihomed systems). (jorgen.maas@gmail.com) - Add SLES 11 to codes.py (jorgen.maas@gmail.com) - Add support for Fedora15 to codes.py (jorgen.maas@gmail.com) - Django uses the timezone information from web/settings.py Changing the hardcoded value to None forces Django to use the systems timezone instead of this hardcoded value (jorgen.maas@gmail.com) - Fix cobbler replication for non-RHEL hosts. The slicing used in the link_distro function didn't work for all distros. (jsabo@verisign.com) - Fix vmware esx importing. It was setting the links dir to the dir the iso was mounted on import (jsabo@verisign.com) - Merge remote-tracking branch 'jsabo/webuifun' (shenson@redhat.com) - Fix bug with esxi replication. It wasn't rsyncing the distro over if the parentdir already existed. (jsabo@verisign.com) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Initial commit for mongodb backend support and adding support for settings as json (jimi@sngx.net) - Web UI patches from Greg Swift applied (jsabo@verisign.com) - whitespace fix (dkilpatrick@verisign.com) - Fix to fix to py_tftp change to sync in bootloaders (dkilpatrick@verisign.com) - Fixing a bug reported by Jonathan Sabo. 
(dkilpatrick@verisign.com) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (dkilpatrick@verisign.com) - Revert "Jonathan Sabo (June 09) (Cobbler)" (shenson@redhat.com) - Unmount and deactivate all software raid devices after searching for ssh keys (jonathan.underwood@gmail.com) - Merge remote-tracking branch 'ugansert/master' (shenson@redhat.com) - Jonathan Sabo (June 09) (Cobbler) Subject: [PATCH] Fix issue with importing distro's on new cobbler box To: cobbler development list Date: Thu, 9 Jun 2011 16:17:20 -0400 (shenson@redhat.com) - missing manage_rsync option from config/settings (jsabo@criminal.org) - Remove left-over debugging log message (dkilpatrick@verisign.com) - SUSE requires the correct arch to find kernel+initrd on the inst-source (ug@suse.de) - added autoyast=... parameter to the ISO building code when breed=suse (ug@suse.de) - calculate meta data in the XML file without cheetah variables now (ug@suse.de) - render the cheetah template before passing the XML to the python XML parser (ug@suse.de) - made the pathes flexible to avoid problem on other distros than fedora/redhat (ug@suse.de) - bugfix (ug@suse.de) - Merge patch from stable (cristian.ciupitu@yahoo.com) - utils: initialize main_logger only when needed (cristian.ciupitu@yahoo.com) - During refactor, failed to move templater initialization into write_boot_files_distro. (dkilpatrick@verisign.com) - Fixed a couple of simple typos. Made the boot_files support work (added template support for the key, defined the img_path attribute for that expansion) (dkilpatrick@verisign.com) - Fixes to get to the "minimally tested" level. Fixed two syntax errors in tftpd.py, and fixed refences to api and os.path in manage_in_tftpd.py (dkilpatrick@verisign.com) - Rebasing commit, continued. (kilpatds@oppositelock.org) - Change the vmware stuff to use 'boot_files' as the space to set files that need to be available to a tftp-booting process (dkilpatrick@verisign.com) - Added 'boot_files' field for 'files that need to be put into tftpboot' (dkilpatrick@verisign.com) - Merge conflict. (kilpatds@oppositelock.org) - Add in a default for puppet_auto_setup, thanks to Camille Meulien for finding it. (shenson@redhat.com) - Add a directory remap feature to fetchable_files processing. /foo/*=/bar/ Client requests for "/foo/baz" will be turned into requests for /bar/baz. Target paths are evaluated against the root filesystem, not tftpboot. Template expansion is done on "bar/baz", so that would typically more usefully be something like /boot/*=$distro_path/boot (dkilpatrick@verisign.com) - Removed trailing whitespace causing git warnings (dkilpatrick@verisign.com) - Fix a bug where tftpd.py would throw if a client requested '/'. (dkilpatrick@verisign.com) - Allow slop in the config, not just the client. modules: don't hardcode /tftpboot (dkilpatrick@verisign.com) - Moved footer to actually float at the bottom of the page or visible section, whichever is further down. Unfortunately leaves a slightly larger margin pad on there. 
Will have to see if it can be made cleaner (gregswift@gmail.com) - Removed right padding on delete checkboxes (gregswift@gmail.com) - Adjusted all the self closing tags to end with a " />" instead of not having a space separating them (gregswift@gmail.com) - Added "add" button to the filter bit (gregswift@gmail.com) - Removed "Enabled" label on checkboxes, this can be added via css as part of the theme if people want it using :after { content: " Enabled" } Padded the context-tip off the checkboxes so that it lines up with most of the other context tips instead of being buried in the middle of the form (gregswift@gmail.com) - Added bottom margin on text area so that it isn't as tight next to other form fields (gregswift@gmail.com) - Added id tags to the forms for ks templates and snippets Set some margins for those two forms, they were a bit scrunched because they didn't have a sectionbody fieldset and legend Removed inline formatting of input sizes on those two pages Set the textareas in those two pages via css (gregswift@gmail.com) - Made the tooltips get hidden except for on hover, with a small image displayed in their place (gregswift@gmail.com) - Added a top margin to the submit/reset buttons... looks cleaner having some space. (gregswift@gmail.com) - Changed generic edit form to the following: - Made blocks into fieldsets again, converting the h2 to a legend. I didn't mean to change this the first time through. - Pulled up a level, removing the wrapping div, making each fieldset contain an ordered list, instead of each line being an ordered list, which was silly of me. - Since it went up a level, un-indented all of the internal html tags 2 spaces - changed the placeholder for the network widgets to spans so that they displayed cleanly (Don't like the spans either, but it's for the javascript) In the stylesheet just changed the div.sectionbody to ol.sectionbody (gregswift@gmail.com) - Fixed closing ul->div on multiselect section. Must have missed it a few commits ago. (gregswift@gmail.com) - IE uses input styling such as borders even on checkboxes... was not intended, so has been cleared for checkboxes (gregswift@gmail.com) - This is a change to the multiselect buttons view, I didn't mean to commit the style sheet along with the spelling check fixes, but since I did might as well do the whole thing and then revert it later if people dislike it (gregswift@gmail.com) - Fixed another postition misspelling (gregswift@gmail.com) - fixed typo postition should be position (gregswift@gmail.com) - Returned the multiselect section to being divs, since it's actually not a set of list items, it is a single list item. Re-arranged the multiselect so that the buttons are centered between the two sections Removed all of the line breaks from that section Made the select box headings actually labels moved the order of multiselect after sectionbody definition due to inheritance (gregswift@gmail.com) - Restored select boxes to "default" styling since they are not as cleanly css-able Made visibly selected action from Batch Actions bold, mainly so by default Batch Action is bold. Moved text-area and multi-select sizing into stylesheet. re-alphabetized some of the tag styles Made the default login's text inputs centered, since everything else on that page is (gregswift@gmail.com) - Added missing bracket from two commits ago in the stylesheet. (gregswift@gmail.com) - Re-added the tool tips for when they exist in the edit forms and set a style on them. 
Removed an extraneous line break from textareas in edit form (gregswift@gmail.com) - Fixed javascript where I had used the wrong quotes, thus breaking the network interface widgets (gregswift@gmail.com) - Added label and span to cleanup block (gregswift@gmail.com) - Added version across all of the template loads so that the footer is populated with it (gregswift@gmail.com) - all css: - set overall default font size of 1em - added missing tags to the cleanup css block - fixed button layout -- list line buttons are smaller font to keep lines smaller -- set action input button's size - set indentation and bolding of items in batch action - redid the list formatting -- removed zebra stripes, they share the standard background now -- hover is now the background color of the old darker zebra stripe -- selected lines now background of the older light zebra stripe - added webkit border radius (gregswift@gmail.com) - generic_lists.tmpl - Removed force space on the checklists generic_lists.tmpl - Added javascript to allow for selected row highlighting (gregswift@gmail.com) - Removed inline formatting from import.tmpl Made the context tips spans (gregswift@gmail.com) - Made both filter-adder elements exist in the same li element (gregswift@gmail.com) - Added default formatting for ordered lists Added formatting for the new multiselect unordered list Changed old div definitions for the multiselect to li Added label formatting for inside sectionbody to line up all the forms. (gregswift@gmail.com) - Adjusted multiselect section to be an unordered list instead of a div (gregswift@gmail.com) - Moved the close list tag inside the for loop, otherwise we generate lots of nasty nested lists (gregswift@gmail.com) - Changed edit templates to use ol instead of ul, because it apparently helps out those using screen readers, and we should be making things accessible, yes? (gregswift@gmail.com) - Re-structured the edit templates to be unordered lists. Standardized the tooltip/contextual data as context-tip class Redid the delete setup so that it's Delete->Really? Instead of Delete:Yes->Really? Same number of check boxes. Setup the delete bit so that Delete and Really are labels for the checkboxes and there aren't extraneous html input tags (gregswift@gmail.com) - Added top margin on the filter adder (gregswift@gmail.com) - Adjusted single action item buttons to be in the same list element, as it makes alignment cleaner, and more sense from a grouping standpoint Set submenubar default height to 26px Set submenubar's alignment to be as clean as I've been able to get so far. (gregswift@gmail.com) - Set background color back to original (gregswift@gmail.com) - Adjusted all buttons to hover invert from blue to blackish, the inverse of the normal links (which go blackish to blue) but left the text color the same. I'm not sure it's as pretty, but definitely more readable. Plus the color change scheme is more consistent. Also made table buttons smaller than other buttons (gregswift@gmail.com) - Fixed width on paginate select boxes to auto, instead of over 200px (gregswift@gmail.com) - Removed margin around hr tag, waste of space, and looks closer to original now (gregswift@gmail.com) - Removed extraneous body div by putting user div inside container. (gregswift@gmail.com) - Adjusted style sheet to improve standardization of form fields, such as buttons, text input widths, and font sizes in buttons vs drop downs. 
(gregswift@gmail.com) - Some menu re-alignment on both menubar and submenubar (gregswift@gmail.com) - Got the container and the user display into a cleaner size alignment to display on the screen. less chance of horiz scroll (gregswift@gmail.com) - Fix to get login form a bit better placed without duplicate work (gregswift@gmail.com) - span.action not needed... .action takes care of it (gregswift@gmail.com) - Removed padding on login screen (gregswift@gmail.com) - Redid action and button classes to make them look like buttons.. still needs work. Resized pointer classes to make things a bit more level on that row (gregswift@gmail.com) - New cleanup at the top negates the need for this table entry (gregswift@gmail.com) - Removed the body height setting of 99%. Was doing this for sticky footer, but current path says it's not needed (gregswift@gmail.com) - Added some windows and mac default fonts Made the body relative, supposed to help with the layout Set text color to slightly off black.. was told there is some odd optical reasoning behind this (gregswift@gmail.com) - Made class settings for the table rows a touch more specific in the css (gregswift@gmail.com) - Added "normalization" to clean up cross browser differences at top of style.css (gregswift@gmail.com) - Added button class to all buttons, submit, and resets (gregswift@gmail.com) - Fixed sectionheader to not be styled as actions... they are h2! (gregswift@gmail.com) - Fixed container reference from class to id (gregswift@gmail.com) - Added missing action class on the "Create new" links in generic_list.tmpl (gregswift@gmail.com) - Revert part of 344969648c1ce1e753af because RHEL5's django doesn't support that (gregswift@gmail.com) - removed underline on remaining links (gregswift@gmail.com) - Fixed the way the logo was placed on the page and removed the excess background setting. (gregswift@gmail.com) - Some cleanup to the style sheet along - removed fieldset since no more exist (not sure about this in long run.... we'll see) - cleaned up default style for ul cause it was causing override issues - got menubar and submenu bar mostly settled (gregswift@gmail.com) - Fixed submenu bar ul to be identified by id not class (gregswift@gmail.com) - Rebuilt primary css stylesheet - not complete yet (gregswift@gmail.com) - Removed logout from cobbler left hand menu (gregswift@gmail.com) - Next step in redoing layout: - added current logged in user and logout button to a div element at top of page - fixed content div from class to id - added footer (version entry doesn't work for some reason) - links to cobbler website (gregswift@gmail.com) - in generic_list.tmpl - set the edit link to class 'action' - merged the creation of the edit action 'View kickstart' for system and profile (gregswift@gmail.com) - Replaced tool tip as div+em with a span classed as tooltip. tooltip class just adds italic. (gregswift@gmail.com) - Fixed table header alignment to left (gregswift@gmail.com) - Take the logo out of the html, making it a css element, but retain the location and basic feel of the placement. (gregswift@gmail.com) - Step one of redoing the action list, pagination and filters. - split pagination and filters to two tmpl files - pagination can be called on its own (so it can live in top and bottom theoretically) - filter will eventually include pagination so it's on the bottom - new submenubar includes pagination - new submenubar does page specific actions as links instead of drop downs cause there is usually 1, rarely 2, never more. 
(gregswift@gmail.com) - Removed pagination from left hand column (gregswift@gmail.com) - Removed an erroneous double quote from master.tmpl (gregswift@gmail.com) - Went a bit overboard and re-adjusted whitespace in all the templates. Trying to do the code in deep blocks across templates can be a bit tedious and difficult to maintain. While the output is not perfect, at least the templates are more readable. (gregswift@gmail.com) - Removed remaining vestige of action menu shading feature (gregswift@gmail.com) - Removed header shade references completely from the lists and the code from master.tmpl (gregswift@gmail.com) - Wrapped setting.tmpl error with the error class (gregswift@gmail.com) - Changed h3 to h2 inside pages Made task_created's h4 into an h1 and standardized with the other pages (gregswift@gmail.com) - Standardized header with a hr tag before the form tags (gregswift@gmail.com) - Added base width on the multiple select boxes, primarily for when they are empty (gregswift@gmail.com) - Removed fieldset wrappers and replaced legends with h1 and h2 depending on depth (gregswift@gmail.com) - Adjusted logic for the legend to only change one word, instead of the full string (gregswift@gmail.com) - Removed empty cell from table in generic_edit.tmpl (gregswift@gmail.com) - Revert 8fed301e61f28f8eaf08e430869b5e5df6d02df0 because it was too many different changes (gregswift@gmail.com) - Removed empty cell from table in generic_edit.tmpl (gregswift@gmail.com) - Moved some cobbler admin and help menus to a separate menu in the menubar (gregswift@gmail.com) - Added HTML5 autofocus attribute to login.tmpl. Unsupported browsers just ignore this. (gregswift@gmail.com) - Re-built login.tmpl: - logo isn't a link anymore back to the same page - logo is centered with the login form - fieldset has been removed - set a css class for the body of the login page, unused for now. And the css: - removed the black border from css - centered the login button as well (gregswift@gmail.com) - Made the links and span.actions hover with the same color as used for the section headings (gregswift@gmail.com) - Removed as much in-HTML placed formatting as possible and implemented it in css. The main bit remaining is the ul.li floats in paginate.tmpl (gregswift@gmail.com) - Cleaned up single tag closing for several of the checkboxes (gregswift@gmail.com) - removed a trailing forward slash that was creating an orphaned close span tag (gregswift@gmail.com) - Relabeled cells in thead row from td tags to th (gregswift@gmail.com) - Added tr wrapper inside thead of tables for markup validation (gregswift@gmail.com) - Use :// as separator for virsh URIs (atodorov@otb.bg) - Create more condensed s390 parm files (thardeck@suse.de) - Add possibility to interrupt zPXE and to enter CMS (thardeck@suse.de) - Cleanup the way that we download content - Fixes a bug where we were only downloading grub-x86_64.efi (shenson@redhat.com) - Port this config over as well (shenson@redhat.com) - Only clear logs that exist. (bpeck@redhat.com) - Pull in new configs from the obsoletes directory. 
(shenson@redhat.com) - Removed extraneous close row tag from events.tmpl (gregswift@gmail.com) - Fixed spelling of receive in enoaccess.tmpl (gregswift@gmail.com) - Added missing close tags on a few menu unordered list items in master.tmpl (gregswift@gmail.com) - Added missing "for" correlation tag for labels in generic_edit.tmpl (gregswift@gmail.com) - Removed extraneous close divs from generic_edit.tmpl (gregswift@gmail.com) - Removing old and unused template files (gregswift@gmail.com) - Add support for Ubuntu distros. (andreserl@ubuntu.com) - Koan install tree path for Ubuntu/Debian distros. (andreserl@ubuntu.com) - Fixing hardlink bin path. (andreserl@ubuntu.com) - Do not fail when yum python module is not present. (andreserl@ubuntu.com) - Add Ubuntu/Debian support to koan utils for later use. (andreserl@ubuntu.com) - typo in autoyast xml parsing (ug@suse.de) - Minor change to validate a token before checking on a user. (jimi@sngx.net) - get install tree from install=... parameter for SUSE (ug@suse.de) - handle autoyast XML files (ug@suse.de) - fixed support for SUSE in build-iso process. Fixed a typo (ug@suse.de) - added SUSE breed to import-webui (ug@suse.de) - Merge remote-tracking branch 'lanky/master' (shenson@redhat.com) - Merge remote-tracking branch 'jimi1283/master' (shenson@redhat.com) - added support for suse-distro import (ug@suse.de) - Fix a sub_process Popen call that did not set close_fds to true. This causes issues with sync where dhcpd keeps the XMLRPC port open and prevents cobblerd from restarting (jimi@sngx.net) - Cleanup of unneccsary widgets in distro/profile. These needed to be removed as part of the multiselect change. (jimi@sngx.net) - Yet another change to multiselect editing. Multiselects are now presented as side-by-side add/delete boxes, where values can be moved back and forth and only appear in one of the two boxes. (jimi@sngx.net) - Fix for django traceback when logging into the web interface with a bad username and/or password (jimi@sngx.net) - Fix for snippet/kickstart editing via the web interface, where a 'tainted file path' error was thrown (jimi@sngx.net) - added the single missed $idata.get() item (stuart@sjsears.com) - updated post_install_network_config to use $idata.get(key, "") instead of $idata[key]. This stops rendering issues with the snippet when some keys are missing (for example after an upgrade from 2.0.X to 2.1.0, where a large number of new keys appear to have been added.) and prevents us from having to go through all system records and add default values for them. 
(stuart@sjsears.com) - Take account of puppet_auto_setup in install_post_puppet.py (jonathan.underwood@gmail.com) - Take account of puppet_auto_setup in install_pre_puppet.py (jonathan.underwood@gmail.com) - Add puppet snippets to sample.ks (jonathan.underwood@gmail.com) - Add puppet_auto_setup to settings file (jonathan.underwood@gmail.com) - Add snippets/puppet_register_if_enabled (jonathan.underwood@gmail.com) - Add snippets/puppet_install_if_enabled (jonathan.underwood@gmail.com) - Add configuration of puppet pre/post modules to settings file (jonathan.underwood@gmail.com) - Add install_post_puppet.py module (jonathan.underwood@gmail.com) - Add install_pre_puppet.py module (jonathan.underwood@gmail.com) - Apply a fix for importing red hat distros, thanks jsabo (shenson@redhat.com) - Changes to action/batch actions at top of generic list pages * move logic into views, where it belongs * simplify template code * change actions/batch actions into drop down select lists * added/modified javascript to deal with above changes (jimi@sngx.net) - Minor fixes to cobbler.conf, since the AliasMatch was conflicting with the WSGI script alias (jimi@sngx.net) - Initial commit for form-based login and authentication (jimi@sngx.net) - Convert webui to use WSGI instead of mod_python (jimi@sngx.net) - Save field data in the django user session so the webui doesn't save things unnecessarily (jimi@sngx.net) - Make use of --format in git and use the short hash. Thanks Todd Zullinger (shenson@redhat.com) - We need git. Thanks to Luc de Louw (shenson@redhat.com) - Start of the change log supplied by Michael MacDonald (shenson@redhat.com) - Fix typo in cobbler man page entry for profile (jonathan.underwood@gmail.com) - Fix cobbler man page entry for parent profile option (jonathan.underwood@gmail.com) - Set SELinux context of host ssh keys correctly after reinstallation (jonathan.underwood@gmail.com) - Fixing bug with img_path. It was being used prior to being set if you have images. (jonathan.sabo@gmail.com) - Add firstboot install trigger mode (jonathan.sabo@gmail.com) - Fix old style shell triggers by checking for None prior to adding args to arg list and fix indentation (jonathan.sabo@gmail.com) - Bugfix: restore --no-fail functionality to CLI reposync (icomfort@stanford.edu) - Add the ability to replicate the new object types (mgmtclass,file,package). (jonathan.sabo@gmail.com) - Add VMware ESX and ESXi replication. (jonathan.sabo@gmail.com) - Add batch delete option for profiles and mgmtclasses (jonathan.sabo@gmail.com) - Spelling fail (shenson@redhat.com) - Remove deploy as a valid direct action (shenson@redhat.com) - Trac Ticket #509: A fix that does not break everything else. (https://fedorahosted.org/cobbler/ticket/509) (andrew@eiknet.com) - Only chown the file if it does not already exist (shenson@redhat.com) - Modification to cobbler web interface, added a drop-down select box for management classes and some new javascript to add/remove items from the multi-select (jimi@sngx.net) - Check if the cachedir exists before we run find on it. (shenson@redhat.com) - Fix trac#574 memtest (shenson@redhat.com) - Add network config snippets for esx and esxi network configuration $SNIPPET('network_config_esxi') renders to: (jonathan.sabo@gmail.com) - Trac Ticket #510: Modified 'cobbler buildiso' to use /var/cache/cobbler/buildiso by default. Added a /etc/cobbler/settings value of 'buildisodir' to make it setable by the end user. --tempdir will still overwrite either setting on the command line. 
(andrew@eiknet.com) - Add img_path to the metadata[] so that it's rendered out in the esxi pxe templates. Add os_version checks for esxi in kickstart_done so that it uses wget or curl depending on what's known to be available. (jonathan.sabo@gmail.com) - Added --sync-all option to cobbler replicate which forces all systems, distros, profiles, repos and images to be synced without specifying each. (rrr67599@rtpuw027.corpnet2.com) - Added manage_rsync option which defaults to 0. This will make cobbler not overwrite a local rsyncd.conf unless enabled. (rrr67599@rtpuw027.corpnet2.com) - Added semicolon master template's placement of the arrow in the page heading (gregswift@gmail.com) - Quick fix from jsabo (shenson@redhat.com) - added hover line highlighting to table displays (gregswift@gmail.com) - Modification to generic_edit template so that the name field is not a text box when editing. (jimi@sngx.net) - Minor fixes for mgmt classes webui changes. - Bug when adding a new obj, since obj is None it was causing a django stack dump - Minor tweaks to javascript (jimi@sngx.net) - Fixed error in which the json files for mgmtclasses was not being deleted when a mgmtclass was removed, meaning they showed back up the next time cobblerd was restarted (jimi@sngx.net) - Fixed syntax error in clogger.py that was preventing cobblerd from starting (jimi@sngx.net) - Supports an additional initrd from kernel_options. (bpeck@redhat.com) - Remove a bogus self (shenson@redhat.com) - Re-enable debmirror. (chuck.short@canonical.com) - Extending the current Wake-on-Lan support for wider distro compatibility. Thanks to Dustin Kirkland. (chuck.short@canonical.com) - Dont hardcode /etc/rc.d/init.d redhatism. (chuck.short@canonical.com) - Newer (pxe|sys)linux's localboot value produces unreliable results when using documented options, -1 seems to provide the best supported value (chuck.short@canonical.com) - Detect the webroot to be used based on the distro. (chuck.short@canonical.com) - If the logfile path doesn't exist, don't attempt to create the log file. Mainly needed when cobbler is required to run inside the build env (cobbler4j). Thanks to Dave Walker (chuck.short@canonical.com) - Implement system power status API method and CLI command (crosa@redhat.com) - Update setup files to use proper apache configuration path (konrad.scherer@windriver.com) - Debian has www-data user for web server file access instead of apache. (konrad.scherer@windriver.com) - Update init script to work under debian. (konrad.scherer@windriver.com) - Use lsb_release module to detect debian distributions. Debian release is returned as a string because it could be sid which will never have a version number. (konrad.scherer@windriver.com) - Fix check for apache installation (konrad.scherer@windriver.com) - Handle Cheetah version with more than 3 parts (konrad.scherer@windriver.com) - Allow dlcontent to use proxy environment variables (shenson@redhat.com) - Copy memtest to $bootloc/images/. 
Fixes BZ#663307 (shenson@redhat.com) - Merge remote branch 'jimi1283/master' (shenson@redhat.com) - Turn the cheetah version numbers into integers while testing them so we don't always return true (shenson@redhat.com) - Kill some whitespace (shenson@redhat.com) - Fix for bug #587 - Un-escaped '$' in snippet silently fails to render (jimi@sngx.net) - Fix for bug #587 - Un-escaped '$' in snippet silently fails to render (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Don't use link caching in places it isn't needed (shenson@redhat.com) - Better logging on subprocess calls (shenson@redhat.com) - Fix for trac #541 - cobbler sync deletes /var/www/cobbler/pub (jimi@sngx.net) - Merged work in the import-modules branch with the debian/ubuntu modules created by Chuck Short (jimi@sngx.net) - Merge branch 'cshort' into import-modules (jimi@sngx.net) - Finished up debian/ubuntu support for imports Tweaked redhat/vmware import modules logging output Added rsync function to utils to get it out of each module - still need to fix the redhat/vmware modules to actually use this (jimi@sngx.net) - Initial commit for the Debian import module. * tested against Debian squeeze. (chuck.short@canonical.com) - Initial commit for the Ubuntu import module. * tested against Natty which imported successfully. (chuck.short@canonical.com) - tftp-hpa users for both Ubuntu Debian use /var/lib/tftpboot. (chuck.short@canonical.com) - Disable the checks that are not really valid for Ubuntu or Debian. (chuck.short@canonical.com) - Add myself to the authors file. (chuck.short@canonical.com) - Updates for debian/ubuntu support in import modules (jimi@sngx.net) - Fix a problem with cheetah >= 2.4.2 where the snippets were causing errors, particularly on F14 due to its use of cheetah 2.4.3. (shenson@redhat.com) - Initial commit of the Ubuntu import module (jimi@sngx.net) - Merge remote branch 'jimi1283/import-modules' (shenson@redhat.com) - Merge remote branch 'jimi1283/master' (shenson@redhat.com) - Extended ESX/ESXi support * Fixed release detection for both ESX and ESXi * Added support to kickstart_finder() so that the fetchable_files list gets filled out when the distro is ESXi (jimi@sngx.net) - Fixed distro_adder() in manage_import_vmware so ESXi gets imported properly (jimi@sngx.net) - Initial commit for the VMWare import module * tested against esx4 update 1, which imported successfully (jimi@sngx.net) - Minor style changes for web css * darken background slightly so the logo doesn't look washed out * make text input boxes wider (jimi@sngx.net) - Fix for the generic_edit function for the web page. The choices field for management classes was not being set for distros/profiles - only systems, causing a django stack dump (jimi@sngx.net) - modify keep_ssh_host_keys snippet to use old keys during OS installation (flaks@bnl.gov) - Merge remote branch 'jimi1283/master' (shenson@redhat.com) - Added replicate to list of DIRECT_ACTIONS, so it shows up in the --help output (jimi@sngx.net) - Merge branch 'master' into import-modules (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Some fixes to the manage_import_redhat module * stop using mirror_name for path stuff - using self.path instead * fixed rsync command to use self.path too, this should really be made a global somewhere else though (jimi@sngx.net) - Add synopsis entries to man page to enable whatis command (kirkland@ubuntu.com) - Add "ubuntu" as detected distribution. 
(clint@ubuntu.com) - Fix for redhat import module. Setting the kickstart file with a default value was causing some issues later on with the kickstart_finder() function, which assumes all new profiles don't have a kickstart file yet (jimi@sngx.net) - Fix for non x86 arches, bug and fix by David Robinson (shenson@redhat.com) - Don't die when we find deltas, just don't use them (shenson@redhat.com) - Merge remote branch 'khightower/khightower/enhanced-configuration-management' (shenson@redhat.com) - By: Bill Peck exclude initrd.addrsize as well. This affects s390 builds (shenson@redhat.com) - Fix an issue where an item was getting handed to remove_item instead of the name of the item. This would cause an exception further down in the stack when .lower() was called on the object (by the call to get_item). (shenson@redhat.com) - Add a check to make sure system is in obj_types before removing it. Also remove an old FIXME that this previously fixed (shenson@redhat.com) - Fix regression in 2.0.8 that dumped into pxe cfg files (shenson@redhat.com) - Initial commit of import module for redhat (jimi@sngx.net) - Merge branch 'master' of git://git.fedorahosted.org/cobbler (jimi@sngx.net) - Added new modules for copying a distros's fetchable files to the /tftpboot/images directory - add_post_distro_tftp_copy_fetchable_files.py copies on an add/edit - sync_post_tftp_copy_fetchable_files.py copies the files for ALL distros on a full sync (jimi@sngx.net) - Removed trailing '---' from each of the PXE templates for ESXi, which causes PXE issues (jimi@sngx.net) - Make stripping of "G" from vgs output case-insensitive (heffer@fedoraproject.org) - Replace rhpl with ethtool (heffer@fedoraproject.org) - Add --force-path option to force overwrite of virt-path location (pryor@bnl.gov) - item_[profile|system] - update parents after editing (mlevedahl@gmail.com) - collection.py - rename rather than delete mirror dirs (mlevedahl@gmail.com) - Wil Cooley (shenson@redhat.com) - Merge remote branch 'kilpatds/io' (shenson@redhat.com) - Add additional qemu_driver_type parameter to start_install function (Konrad.Scherer@windriver.com) - Add valid debian names for releases (Konrad.Scherer@windriver.com) - Add debian preseed support to koan (Konrad.Scherer@windriver.com) - Add support for EFI grub booting. (dgoodwin@rm-rf.ca) - Turn the 'daemonize I/O' code back on. cobbler sync seems to still work (dkilpatrick@verisign.com) - Fix some spacing in the init script (dkilpatrick@verisign.com) - Added a copy-default attribute to koan, to control the params passed to grubby (paji@redhat.com) - Turn on the cache by default Enable a negative cache, with a shorter timeout. Use the cache for normal lookups, not much ip-after-failed. (dkilpatrick@verisign.com) - no passing full error message. Der (dkilpatrick@verisign.com) - Pull the default block size into the template, since that can need to be changed. Make tftpd.py understand -B for compatibility. Default to a smaller mtu, for vmware compatibility. (dkilpatrick@verisign.com) - in.tftpd needs to be run as root. Whoops (dkilpatrick@verisign.com) - Handle exceptions in the idle-timer handling. This could cause tftpd.py to never exit (dkilpatrick@verisign.com) - Do a better job of handling things when a logger doesn't exist. And don't try and find out what the FD is for logging purposes when I know that might throw and I won't catch it. 
(dkilpatrick@verisign.com) - Scott Henson pointed out that my earlier changes stopped a sync from also copying kernel/initrd files into the web directry. Split out the targets from the copy, and make sure that sync still copies to webdir, and then also fixed where I wasn't copying those files in the synclite case. (dkilpatrick@verisign.com) - Put back code that I removed incorrectly. (sync DHCP, DNS) (dkilpatrick@verisign.com) - Support installing FreeBSD without an IP address set in the host record. (dkilpatrick@verisign.com) - Fixed some bugs in the special-case handling code, where I was not properly handling kernel requests, because I'd merged some code that looked alike, but couldn't actually be merged. (dkilpatrick@verisign.com) - fixing koan to use cobblers version of os_release which works with RHEL 6 (jsherril@redhat.com) - Adding preliminary support for importing ESXi for PXE booting (jimi@sngx.net) - Fix cobbler check tftp typo. (dgoodwin@rm-rf.ca) - buildiso now builds iso's that include the http_port setting (in /etc/cobbler/settings) in the kickstart file url (maarten.dirkse@filterworks.com) - Add check detection for missing ksvalidator (dean.wilson@gmail.com) - Use shlex.split() to properly handle a quoted install URL (e.g. url --url="http://example.org") (jlaska@redhat.com) - Update codes.py to accept 'fedora14' as a valid --os-version (jlaska@redhat.com) - No more self (shenson@redhat.com) - Don't die if a single repo fails to sync. (shenson@redhat.com) - Refactor: depluralize madhatter branch (kelsey.hightower@gmail.com) - Updating setup.py and spec file. (kelsey.hightower@gmail.com) - New unit tests: Mgmtclasses (kelsey.hightower@gmail.com) - Updating cobbler/koan man pages with info on using the new configuration management capabilities (kelsey.hightower@gmail.com) - Cobbler web integration for new configuration management capabilities (kelsey.hightower@gmail.com) - Koan configuration management enhancements (kelsey.hightower@gmail.com) - Cobbler configuration management enhancements (kelsey.hightower@gmail.com) - New cobbler objects: mgmtclasses, packages, and files. (kelsey.hightower@gmail.com) - Merge remote branch 'jsabo/kickstart_done' (shenson@redhat.com) - Move kickstart_done and kickstart_start out of kickgen.py and into their own snippets. This also adds support for VMware ESX triggers and magic urls by checking for the "vmware" breed and then using curl when that's all thats available vs wget. VMware's installer makes wget available during the %pre section but only curl is around following install at %post time. Yay! I've also updated the sample kickstarts to use $SNIPPET('kickstart_done') and $SNIPPET('kickstart_start') (jonathan.sabo@gmail.com) - No more getting confused between otype and obj_type (shenson@redhat.com) - The clean_link_cache method was calling subprocess_call without a logger (shenson@redhat.com) - Scott Henson pointed out that my earlier changes stopped a sync from also copying kernel/initrd files into the web directry. Split out the targets from the copy, and make sure that sync still copies to webdir, and then also fixed where I wasn't copying those files in the synclite case. (dkilpatrick@verisign.com) - revert bad templates path (dkilpatrick@verisign.com) - Put back code that I removed incorrectly. (sync DHCP, DNS) (dkilpatrick@verisign.com) - Support installing FreeBSD without an IP address set in the host record. 
(dkilpatrick@verisign.com) - Fixed some bugs in the special-case handling code, where I was not properly handling kernel requests, because I'd merged some code that looked alike, but couldn't actually be merged. (dkilpatrick@verisign.com) - Two more fixes to bugs introduced by pytftpd patch set: * The generated configs did not have initrd set propertly * Some extra debugging log lines made it into remote.py (dkilpatrick@verisign.com) - Fix Trac#530 by properly handling a logger being none. Additionally, make subprocess_call and subprocess_get use common bits to reduce duplication. (shenson@redhat.com) - Fix a cobbler_web authentication leak issue. There are times when the token that cobbelr_web had did not match the user logged in. This patch ensures that the token always matches the user that is logged in. (shenson@redhat.com) - No more getting confused between otype and obj_type (shenson@redhat.com) - The clean_link_cache method was calling subprocess_call without a logger (shenson@redhat.com) - Merge remote branch 'kilpatds/master' (shenson@redhat.com) - Scott Henson pointed out that my earlier changes stopped a sync from also copying kernel/initrd files into the web directry. Split out the targets from the copy, and make sure that sync still copies to webdir, and then also fixed where I wasn't copying those files in the synclite case. (dkilpatrick@verisign.com) - revert bad templates path (dkilpatrick@verisign.com) - Put back code that I removed incorrectly. (sync DHCP, DNS) (dkilpatrick@verisign.com) - Support installing FreeBSD without an IP address set in the host record. (dkilpatrick@verisign.com) - Fixed some bugs in the special-case handling code, where I was not properly handling kernel requests, because I'd merged some code that looked alike, but couldn't actually be merged. (dkilpatrick@verisign.com) - Two more fixes to bugs introduced by pytftpd patch set: * The generated configs did not have initrd set propertly * Some extra debugging log lines made it into remote.py (dkilpatrick@verisign.com) - fast sync. A new way of copying files around using a link cache. It creates a link cache per device and uses it as an intermediary so that files that are the same are not copied multiple times. Should greatly speed up sync times. (shenson@redhat.com) - A few small fixes and a new feature for the Python tftp server * Support environments where the MAC address is know, but the IP address is not (private networks). I do this by waiting for pxelinux.0 to request a file with the mac address added to the filename, and then look up the host by MAC. * Fix my MAC lookup logic. I didn't know to look for the ARP type (01-, at least for ethernet) added by pxelinux.0 * Fix up some log lines to make more sense * Fix a bug where I didn't get handle an empty fetchable_files properly, and didn't fall back to checking for profile matches. (dkilpatrick@verisign.com) - Two fixed to bad changes in my prior patch set. Sorry about that. * Bad path in cobbler/action_sync.py. No "templates" * Bad generation of the default boot menu. The first initrd from a profile was getting into the metadata cache and hanging around, thus becoming the initrd for all labels. (dkilpatrick@verisign.com) - A smart tftp server, and a module to manage it (dkilpatr@dkilpatr.verisign.com) - Export the generated pxelinux.cfg file via the materialized system information RPC method. This enables the python tftpd server below to serve that file up without any sync being required. 
(dkilpatr@dkilpatr.verisign.com) - Move management of /tftpboot into modules. This is a setup step for a later python tftpd server that will eliminate the need for much of this work. (dkilpatr@dkilpatr.verisign.com) - Fetchable Files attribute: Provides a new attribute similar in spirit to mgmt_files, but with somewhat reversed meaning. (dkilpatr@dkilpatr.verisign.com) - fix log rotation to actually work (bpeck@redhat.com) - find_kernel and find_initrd already do the right checks for file_is_remote and return None if things are wrong. (bpeck@redhat.com) - Trac #588 Add mercurial support for scm tracking (kelsey.hightower@gmail.com) - Add a breed for scientific linux (shenson@redhat.com) - "mgmt_parameters" for item_profile has the wrong default setting when creating a sub_profile. I'm assuming that <> would be correct for a sub_profile as well. (bpeck@redhat.com) - The new setup.py placed webui_content in the wrong spot... (akesling@redhat.com) - Merge commit 'a81ca9a4c18f17f5f8d645abf03c0e525cd234e1' (jeckersb@redhat.com) - Added back in old-style version tracking... because api.py needs it. (akesling@redhat.com) - Wrap the cobbler-web description (shenson@redhat.com) - Create the tftpboot directory during install (shenson@redhat.com) - Add in /var/lib/cobbler/loaders (shenson@redhat.com) - Create the images directory so that selinux will be happy (shenson@redhat.com) - Dont install some things in the webroot and put the services script down (shenson@redhat.com) - Fix some issues with clean installs of cobbler post build cleanup (shenson@redhat.com) - rhel5 doesn't build egg-info by default. (bpeck@redhat.com) - Some systems don't reboot properly at the end of install. s390 being one of them. This post module will call power reboot if postreboot is in ks_meta for that system. (bpeck@redhat.com) - Changes to allow s390 to work. s390 has a hard limit on the number of chars it can recieve. (bpeck@redhat.com) - show netboot status via koan. This is really handy if you have a system which fails to pxe boot you can create a service in rc.local which checks the status of netboot and calls --replace-self for example. (bpeck@redhat.com) - When adding in distros/profiles from disk don't bomb out if missing kernel or ramdisk. just don't add it. (bpeck@redhat.com) - add X log to anamon tracking as well. (bpeck@redhat.com) - Added new remote method clear_logs. Clearing console and anamon logs in %pre is too late if the install never happens. (bpeck@redhat.com) - fixes /var/www/cobbler/svc/services.py to canonicalize the uri before parsing it. This fixes a regression with mod_wsgi enabled and trying to provision a rhel3 machine. (bpeck@redhat.com) - anaconda umounts /proc on us while were still running. Deal with it. (bpeck@redhat.com) - fix escape (bpeck@redhat.com) - dont lowercase power type (bpeck@redhat.com) - Bump to 2.1.0 (shenson@redhat.com) - Properly detect unknown distributions (shenson@redhat.com) - cobblerd service: Required-Start: network -> $network (cristian.ciupitu@yahoo.com) - cobblerd service: add Default-Stop to LSB header (cristian.ciupitu@yahoo.com) - No more . on the end (shenson@redhat.com) - Do not delete settings and modules.conf (shenson@redhat.com) - Remove manpage generation from the make file (shenson@redhat.com) - Update the author and author email (shenson@redhat.com) - Proper ownership on some files (shenson@redhat.com) - More rpm cleanups (shenson@redhat.com) - Don't have the #! 
because rpm complains (shenson@redhat.com) - No more selinux here, we should not be calling chcon, things will end up with the proper context in a well configured selinux environment (shenson@redhat.com) - No more chowning the log file. (shenson@redhat.com) - A new spec file to go with the new setup.py (shenson@redhat.com) - Forgot to add aux to MANIFEST.in (akesling@redhat.com) - Fixed naming scheme for web UI to make it more uniform, what was Puppet Parameters is now Management Parameters. (akesling@redhat.com) - Removed unnecessary cruft. (akesling@redhat.com) - Reconfigured setup.py to now place config files and web ui content in the right places. The paths are configurable like they were in the previous setup.py, but everything is much cleaner. (akesling@redhat.com) - Removed unnecessary templating functionality from configuration generation (and setup.py) (akesling@redhat.com) - Added more useful files to setup.py and MANIFEST.in as well as extra functionality which setup.py should contain. (akesling@redhat.com) - Massive overhaul of setup.py . Moved things around a little to clean up building/packaging/distributing. The new setup.py is still incomplete. (akesling@redhat.com) - RPM specific changes to setup.cfg. (akesling@redhat.com) - Currently working through making setup.py functional for generating rpms dynamically. setup.py is just cobbler-web at the moment... and it appears to work. The next things to do are test the current RPM and add in functionality for reducing repetitive setup.py configuration lines. (akesling@redhat.com) - Changed list-view edit link from a javascript onclick event to an actual link... so that you can now just open it in a new tab. (akesling@redhat.com) - Added tip for random MAC Address functionality to System MAC Address field. (akesling@redhat.com) - Added "Puppet Parameters" attribute to Profile and System items. The new input field is a textarea which takes proper a YAML formatted dictionary. This data is used for the Puppet External Nodes api call (found in services.py). (akesling@croissant.usersys.redhat.com) - Resume apitesting assuming against local Cobbler server. (dgoodwin@rm-rf.ca) - Replace rogue tab with whitespace. (dgoodwin@rm-rf.ca) - Open all log files in append mode. Tasks should not be special. This simplifies the handling of logging for selinux. (shenson@redhat.com) - Add rendered dir to cobbler.spec. (dgoodwin@rm-rf.ca) - Re-add mod_python dep only for cobbler-web. (dgoodwin@rm-rf.ca) - initializing variable that is not always initialized but is always accessed (jsherril@redhat.com) - Merge remote branch 'pvreman/master' (shenson@redhat.com) - add logging of triggers (peter.vreman@acision.com) - add logging of triggers (peter.vreman@acision.com) - cobbler-ext-nodes needs also to use http_port (peter.vreman@acision.com) - Adding VMware ESX specific boot options (jonathan.sabo@gmail.com) - Merge stable into master (shenson@redhat.com) - Fix cobbler_web authentication in a way that doesn't break previously working stuff (shenson@redhat.com) - Allow qemu disk type to be specified. 
Contributed by Galia Lisovskaya (shenson@redhat.com) - Merge remote branch 'jsabo/esx' (shenson@redhat.com) - Fix a bug where we were not looking for the syslinux provided menu.c32 before going after the getloaders one (shenson@redhat.com) - Fix cobbler_web authentication in a way that doesn't break previously working stuff (shenson@redhat.com) - More preparation for the release (shenson@redhat.com) - Update spec file for release (shenson@redhat.com) - Update changelog for release (shenson@redhat.com) - Bugfix: fetch extra metadata from upstream repositories more safely (icomfort@stanford.edu) - Bugfix: allow the creation of subprofiles again (icomfort@stanford.edu) - Don't warn needlessly when repo rpm_list is empty (icomfort@stanford.edu) - Bugfix: run createrepo on partial yum mirrors (icomfort@stanford.edu) - Change default mode for new directories from 0777 to 0755 (icomfort@stanford.edu) - Fix replication when prune is specified and no systems are specified. This prevents us from killing systems on a slave that keeps its own systems. To get the old behavior, just specify a systems list that won't match anything. (shenson@redhat.com) - Always authorize the CLI (shenson@redhat.com) - Bugfix: fetch extra metadata from upstream repositories more safely (icomfort@stanford.edu) - Bugfix: allow the creation of subprofiles again (icomfort@stanford.edu) - Don't warn needlessly when repo rpm_list is empty (icomfort@stanford.edu) - Bugfix: run createrepo on partial yum mirrors (icomfort@stanford.edu) - Change default mode for new directories from 0777 to 0755 (icomfort@stanford.edu) - Fix replication when prune is specified and no systems are specified. This prevents us from killing systems on a slave that keeps its own systems. To get the old behavior, just specify a systems list that won't match anything. (shenson@redhat.com) - Always authorize the CLI (shenson@redhat.com) - Merge branch 'wsgi' (dgoodwin@rm-rf.ca) - Adding VMware ESX 4 update 1 support (jonathan.sabo@gmail.com) - remove references to apt support from the man page (jeckersb@redhat.com) - wsgi: Service cleanup. (dgoodwin@rm-rf.ca) - wsgi: Revert to old error handling. (dgoodwin@rm-rf.ca) - wsgi: Switch Cobbler packaging/config from mod_python to mod_wsgi. (dgoodwin @rm-rf.ca) - wsgi: Return 404 when hitting svc URLs for missing objects. (dgoodwin@rm- rf.ca) - Merge branch 'master' into wsgi (dgoodwin@rm-rf.ca) - wsgi: First cut of port to mod_wsgi. (dgoodwin@rm-rf.ca) * Thu Jun 17 2010 Scott Henson - 2.1.0-1 - Bump upstream release * Tue Apr 27 2010 Scott Henson - 2.0.4-1 - Bug fix release, see Changelog for details * Thu Apr 15 2010 Devan Goodwin 2.0.3.2-1 - Tagging for new build tools. 
* Mon Mar 1 2010 Scott Henson - 2.0.3.1-3 - Bump release because I forgot cobbler-web * Mon Mar 1 2010 Scott Henson - 2.0.3.1-2 - Remove requires on mkinitrd as it is not used * Mon Feb 15 2010 Scott Henson - 2.0.3.1-1 - Upstream Brown Paper Bag Release (see CHANGELOG) * Thu Feb 11 2010 Scott Henson - 2.0.3-1 - Upstream changes (see CHANGELOG) * Mon Nov 23 2009 John Eckersberg - 2.0.2-1 - Upstream changes (see CHANGELOG) * Tue Sep 15 2009 Michael DeHaan - 2.0.0-1 - First release with unified spec files cobbler-2.4.1/cobbler/000077500000000000000000000000001227367477500145625ustar00rootroot00000000000000cobbler-2.4.1/cobbler/__init__.py000066400000000000000000000000001227367477500166610ustar00rootroot00000000000000cobbler-2.4.1/cobbler/action_acl.py000066400000000000000000000067721227367477500172440ustar00rootroot00000000000000""" Configures acls for various users/groups so they can access the cobbler command line as non-root. Now that CLI is largely remoted (XMLRPC) this is largely just useful for not having to log in (access to shared-secret) file but also grants access to hand-edit various config files and other useful things. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import sys import glob import traceback import errno import utils from cexceptions import * from utils import _ import clogger class AclConfig: def __init__(self,config,logger=None): """ Constructor """ self.config = config self.api = config.api self.settings = config.settings() if logger is None: logger = clogger.Logger() self.logger = logger def run(self,adduser=None,addgroup=None,removeuser=None,removegroup=None): """ Automate setfacl commands """ ok = False if adduser: ok = True self.modacl(True,True,adduser) if addgroup: ok = True self.modacl(True,False,addgroup) if removeuser: ok = True self.modacl(False,True,removeuser) if removegroup: ok = True self.modacl(False,False,removegroup) if not ok: raise CX("no arguments specified, nothing to do") return True def modacl(self,isadd,isuser,who): webdir = self.settings.webdir snipdir = self.settings.snippetsdir tftpboot = utils.tftpboot_location() PROCESS_DIRS = { "/var/log/cobbler" : "rwx", "/var/log/cobbler/tasks" : "rwx", "/var/lib/cobbler" : "rwx", "/etc/cobbler" : "rwx", tftpboot : "rwx", "/var/lib/cobbler/triggers" : "rwx" } if not snipdir.startswith("/var/lib/cobbler/"): PROCESS_DIRS[snipdir] = "r" cmd = "-R" if isadd: cmd = "%s -m" % cmd else: cmd = "%s -x" % cmd if isuser: cmd = "%s u:%s" % (cmd,who) else: cmd = "%s g:%s" % (cmd,who) for d in PROCESS_DIRS: how = PROCESS_DIRS[d] if isadd: cmd2 = "%s:%s" % (cmd,how) else: cmd2 = cmd cmd2 = "%s %s" % (cmd2,d) rc = utils.subprocess_call(self.logger,"setfacl -d %s" % cmd2,shell=True) if not rc == 0: utils.die(self.logger,"command failed") rc = utils.subprocess_call(self.logger,"setfacl %s" % 
cmd2,shell=True) if not rc == 0: utils.die(self.logger,"command failed") cobbler-2.4.1/cobbler/action_buildiso.py000066400000000000000000000667141227367477500203210ustar00rootroot00000000000000""" Builds bootable CD images that have PXE-equivalent behavior for all Cobbler distros/profiles/systems currently in memory. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import sys import traceback import shutil import re import utils from cexceptions import * from utils import _ import clogger class BuildIso: """ Handles conversion of internal state to the isolinux tree layout """ def __init__(self,config,verbose=False,logger=None): """ Constructor """ self.verbose = verbose self.config = config self.settings = config.settings() self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.distros = config.distros() self.distmap = {} self.distctr = 0 self.source = "" if logger is None: logger = clogger.Logger() self.logger = logger # grab the header from buildiso.header file header_src = open(os.path.join(self.settings.iso_template_dir,"buildiso.template")) self.iso_template = header_src.read() header_src.close() def add_remaining_kopts(self,koptdict): """ Add remaining kernel_options to append_line """ append_line = "" for (k, v) in koptdict.iteritems(): if v == None: append_line += " %s" % k else: if type(v) == list: for i in v: append_line += " %s=%s" % (k,i) else: append_line += " %s=%s" % (k,v) append_line += "\n" return append_line def make_shorter(self,distname): """ Return a short distro identifier """ if self.distmap.has_key(distname): return self.distmap[distname] else: self.distctr = self.distctr + 1 self.distmap[distname] = str(self.distctr) return str(self.distctr) def copy_boot_files(self,distro,destdir,prefix=None): """ Copy kernel/initrd to destdir with (optional) newfile prefix """ if not os.path.exists(distro.kernel): utils.die(self.logger,"path does not exist: %s" % distro.kernel) if not os.path.exists(distro.initrd): utils.die(self.logger,"path does not exist: %s" % distro.initrd) if prefix is None: shutil.copyfile(distro.kernel, os.path.join(destdir, "%s" % os.path.basename(distro.kernel))) shutil.copyfile(distro.initrd, os.path.join(destdir, "%s" % os.path.basename(distro.initrd))) else: shutil.copyfile(distro.kernel, os.path.join(destdir, "%s.krn" % prefix)) shutil.copyfile(distro.initrd, os.path.join(destdir, "%s.img" % prefix)) def sort_name(self,a,b): """ Sort profiles/systems by name """ return cmp(a.name,b.name) def generate_netboot_iso(self,imagesdir,isolinuxdir,profiles=None,systems=None,exclude_dns=None): """ Create bootable CD image to be used for network installations """ # setup all profiles/systems lists all_profiles = [profile for profile in self.api.profiles()] 
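# --- editor's sketch: not part of the original action_buildiso.py ---
# add_remaining_kopts() above flattens a kernel-options dict into the tail of an
# isolinux "append" line: bare keys for None values, key=value pairs otherwise,
# and one pair per list element when the value is a list. A minimal standalone
# rework of that behaviour (the sample options dict is hypothetical):
def _example_flatten_kopts(koptdict):
    parts = []
    for (k, v) in koptdict.iteritems():
        if v is None:
            parts.append("%s" % k)
        elif type(v) == list:
            for i in v:
                parts.append("%s=%s" % (k, i))
        else:
            parts.append("%s=%s" % (k, v))
    return " " + " ".join(parts) + "\n"

# _example_flatten_kopts({"noipv6": None, "console": ["tty0", "ttyS0,115200"]})
# could yield " noipv6 console=tty0 console=ttyS0,115200\n" (dict order may vary)
# --- end sketch ---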
all_profiles.sort(self.sort_name) all_systems = [system for system in self.api.systems()] all_systems.sort(self.sort_name) # convert input to lists which_profiles = utils.input_string_or_list(profiles) which_systems = utils.input_string_or_list(systems) # no profiles/systems selection is made, let's process everything do_all_systems = False do_all_profiles = False if len(which_profiles) == 0 and len(which_systems) == 0: do_all_systems = True do_all_profiles = True # setup isolinux.cfg isolinuxcfg = os.path.join(isolinuxdir, "isolinux.cfg") cfg = open(isolinuxcfg, "w+") cfg.write(self.iso_template) # iterate through selected profiles for profile in all_profiles: if profile.name in which_profiles or do_all_profiles is True: self.logger.info("processing profile: %s" % profile.name) dist = profile.get_conceptual_parent() distname = self.make_shorter(dist.name) self.copy_boot_files(dist,isolinuxdir,distname) cfg.write("\n") cfg.write("LABEL %s\n" % profile.name) cfg.write(" MENU LABEL %s\n" % profile.name) cfg.write(" kernel %s.krn\n" % distname) data = utils.blender(self.api, False, profile) if data["kickstart"].startswith("/"): data["kickstart"] = "http://%s:%s/cblr/svc/op/ks/profile/%s" % ( data["server"], self.api.settings().http_port, profile.name ) append_line = " append initrd=%s.img" % distname if dist.breed == "suse": if data.has_key("proxy") and data["proxy"] != "": append_line += " proxy=%s" % data["proxy"] if data["kernel_options"].has_key("install") and data["kernel_options"]["install"] != "": append_line += " install=%s" % data["kernel_options"]["install"] del data["kernel_options"]["install"] else: append_line += " install=http://%s:%s/cblr/links/%s" % ( data["server"], self.api.settings().http_port, dist.name ) if data["kernel_options"].has_key("autoyast") and data["kernel_options"]["autoyast"] != "": append_line += " autoyast=%s" % data["kernel_options"]["autoyast"] del data["kernel_options"]["autoyast"] else: append_line += " autoyast=%s" % data["kickstart"] if dist.breed == "redhat": if data.has_key("proxy") and data["proxy"] != "": append_line += " proxy=%s http_proxy=%s" % (data["proxy"], data["proxy"]) append_line += " ks=%s" % data["kickstart"] if dist.breed in ["ubuntu","debian"]: append_line += " auto-install/enable=true url=%s" % data["kickstart"] if data.has_key("proxy") and data["proxy"] != "": append_line += " mirror/http/proxy=%s" % data["proxy"] append_line += self.add_remaining_kopts(data["kernel_options"]) cfg.write(append_line) cfg.write("\nMENU SEPARATOR\n") # iterate through all selected systems for system in all_systems: if system.name in which_systems or do_all_systems is True: self.logger.info("processing system: %s" % system.name) profile = system.get_conceptual_parent() dist = profile.get_conceptual_parent() distname = self.make_shorter(dist.name) self.copy_boot_files(dist,isolinuxdir,distname) cfg.write("\n") cfg.write("LABEL %s\n" % system.name) cfg.write(" MENU LABEL %s\n" % system.name) cfg.write(" kernel %s.krn\n" % distname) data = utils.blender(self.api, False, system) if data["kickstart"].startswith("/"): data["kickstart"] = "http://%s:%s/cblr/svc/op/ks/system/%s" % ( data["server"], self.api.settings().http_port, system.name ) append_line = " append initrd=%s.img" % distname if dist.breed == "suse": if data.has_key("proxy") and data["proxy"] != "": append_line += " proxy=%s" % data["proxy"] if data["kernel_options"].has_key("install") and data["kernel_options"]["install"] != "": append_line += " install=%s" % data["kernel_options"]["install"] 
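# --- editor's sketch: not part of the original action_buildiso.py ---
# For orientation while reading this profile loop: the cfg.write() calls emit one
# isolinux stanza per selected profile. A rough rework of the stanza layout for a
# redhat-breed profile (the profile name, short distro id and kickstart URL below
# are hypothetical; breed-specific flags and any remaining kernel options are
# appended by the surrounding code):
def _example_profile_stanza(name, distid, ks_url):
    return ("\nLABEL %s\n MENU LABEL %s\n kernel %s.krn\n"
            " append initrd=%s.img ks=%s\n") % (name, name, distid, distid, ks_url)

# _example_profile_stanza("f16-x86_64", "1",
#     "http://cobbler.example.com:80/cblr/svc/op/ks/profile/f16-x86_64")
# --- end sketch ---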
del data["kernel_options"]["install"] else: append_line += " install=http://%s:%s/cblr/links/%s" % ( data["server"], self.api.settings().http_port, dist.name ) if data["kernel_options"].has_key("autoyast") and data["kernel_options"]["autoyast"] != "": append_line += " autoyast=%s" % data["kernel_options"]["autoyast"] del data["kernel_options"]["autoyast"] else: append_line += " autoyast=%s" % data["kickstart"] if dist.breed == "redhat": if data.has_key("proxy") and data["proxy"] != "": append_line += " proxy=%s http_proxy=%s" % (data["proxy"], data["proxy"]) append_line += " ks=%s" % data["kickstart"] if dist.breed in ["ubuntu","debian"]: append_line += " auto-install/enable=true url=%s netcfg/disable_dhcp=true" % data["kickstart"] if data.has_key("proxy") and data["proxy"] != "": append_line += " mirror/http/proxy=%s" % data["proxy"] # hostname is required as a parameter, the one in the preseed is not respected my_domain = "local.lan" if system.hostname != "": # if this is a FQDN, grab the first bit my_hostname = system.hostname.split(".")[0] _domain = system.hostname.split(".")[1:] if _domain: my_domain = ".".join(_domain) else: my_hostname = system.name.split(".")[0] _domain = system.name.split(".")[1:] if _domain: my_domain = ".".join(_domain) # at least for debian deployments configured for DHCP networking # this values are not used, but specifying here avoids questions append_line += " hostname=%s domain=%s" % (my_hostname, my_domain) # a similar issue exists with suite name, as installer requires # the existence of "stable" in the dists directory append_line += " suite=%s" % dist.os_version # try to add static ip boot options to avoid DHCP (interface/ip/netmask/gw/dns) # check for overrides first and clear them from kernel_options my_int = None; my_ip = None; my_mask = None; my_gw = None; my_dns = None if dist.breed in ["suse", "redhat"]: if data["kernel_options"].has_key("netmask") and data["kernel_options"]["netmask"] != "": my_mask = data["kernel_options"]["netmask"] del data["kernel_options"]["netmask"] if data["kernel_options"].has_key("gateway") and data["kernel_options"]["gateway"] != "": my_gw = data["kernel_options"]["gateway"] del data["kernel_options"]["gateway"] if dist.breed == "redhat": if data["kernel_options"].has_key("ksdevice") and data["kernel_options"]["ksdevice"] != "": my_int = data["kernel_options"]["ksdevice"] if my_int == "bootif": my_int = None del data["kernel_options"]["ksdevice"] if data["kernel_options"].has_key("ip") and data["kernel_options"]["ip"] != "": my_ip = data["kernel_options"]["ip"] del data["kernel_options"]["ip"] if data["kernel_options"].has_key("dns") and data["kernel_options"]["dns"] != "": my_dns = data["kernel_options"]["dns"] del data["kernel_options"]["dns"] if dist.breed == "suse": if data["kernel_options"].has_key("netdevice") and data["kernel_options"]["netdevice"] != "": my_int = data["kernel_options"]["netdevice"] del data["kernel_options"]["netdevice"] if data["kernel_options"].has_key("hostip") and data["kernel_options"]["hostip"] != "": my_ip = data["kernel_options"]["hostip"] del data["kernel_options"]["hostip"] if data["kernel_options"].has_key("nameserver") and data["kernel_options"]["nameserver"] != "": my_dns = data["kernel_options"]["nameserver"] del data["kernel_options"]["nameserver"] if dist.breed in ["ubuntu","debian"]: if data["kernel_options"].has_key("netcfg/choose_interface") and data["kernel_options"]["netcfg/choose_interface"] != "": my_int = data["kernel_options"]["netcfg/choose_interface"] del 
data["kernel_options"]["netcfg/choose_interface"] if data["kernel_options"].has_key("netcfg/get_ipaddress") and data["kernel_options"]["netcfg/get_ipaddress"] != "": my_ip = data["kernel_options"]["netcfg/get_ipaddress"] del data["kernel_options"]["netcfg/get_ipaddress"] if data["kernel_options"].has_key("netcfg/get_netmask") and data["kernel_options"]["netcfg/get_netmask"] != "": my_mask = data["kernel_options"]["netcfg/get_netmask"] del data["kernel_options"]["netcfg/get_netmask"] if data["kernel_options"].has_key("netcfg/get_gateway") and data["kernel_options"]["netcfg/get_gateway"] != "": my_gw = data["kernel_options"]["netcfg/get_gateway"] del data["kernel_options"]["netcfg/get_gateway"] if data["kernel_options"].has_key("netcfg/get_nameservers") and data["kernel_options"]["netcfg/get_nameservers"] != "": my_dns = data["kernel_options"]["netcfg/get_nameservers"] del data["kernel_options"]["netcfg/get_nameservers"] # if no kernel_options overrides are present find the management interface # do nothing when zero or multiple management interfaces are found if my_int is None: mgmt_ints = []; mgmt_ints_multi = []; slave_ints = [] if len(data["interfaces"].keys()) >= 1: for (iname, idata) in data["interfaces"].iteritems(): if idata["management"] == True and idata["interface_type"] in ["master","bond","bridge"]: # bonded/bridged management interface mgmt_ints_multi.append(iname) if idata["management"] == True and idata["interface_type"] not in ["master","bond","bridge","slave","bond_slave","bridge_slave","bonded_bridge_slave"]: # single management interface mgmt_ints.append(iname) if len(mgmt_ints_multi) == 1 and len(mgmt_ints) == 0: # bonded/bridged management interface, find a slave interface # if eth0 is a slave use that (it's what people expect) for (iname, idata) in data["interfaces"].iteritems(): if idata["interface_type"] in ["slave","bond_slave","bridge_slave","bonded_bridge_slave"] and idata["interface_master"] == mgmt_ints_multi[0]: slave_ints.append(iname) if "eth0" in slave_ints: my_int = "eth0" else: my_int = slave_ints[0] # set my_ip from the bonded/bridged interface here my_ip = data["ip_address_" + data["interface_master_" + my_int]] my_mask = data["netmask_" + data["interface_master_" + my_int]] if len(mgmt_ints) == 1 and len(mgmt_ints_multi) == 0: # single management interface my_int = mgmt_ints[0] # lookup tcp/ip configuration data if my_ip is None and my_int is not None: if data.has_key("ip_address_" + my_int) and data["ip_address_" + my_int] != "": my_ip = data["ip_address_" + my_int] if my_mask is None and my_int is not None: if data.has_key("netmask_" + my_int) and data["netmask_" + my_int] != "": my_mask = data["netmask_" + my_int] if my_gw is None: if data.has_key("gateway") and data["gateway"] != "": my_gw = data["gateway"] if my_dns is None: if data.has_key("name_servers") and data["name_servers"] != "": my_dns = data["name_servers"] # add information to the append_line if my_int is not None: if dist.breed == "suse": if data.has_key("mac_address_" + my_int) and data["mac_address_" + my_int] != "": append_line += " netdevice=%s" % data["mac_address_" + my_int].lower() else: append_line += " netdevice=%s" % my_int if dist.breed == "redhat": if data.has_key("mac_address_" + my_int) and data["mac_address_" + my_int] != "": append_line += " ksdevice=%s" % data["mac_address_" + my_int] else: append_line += " ksdevice=%s" % my_int if dist.breed in ["ubuntu","debian"]: append_line += " netcfg/choose_interface=%s" % my_int if my_ip is not None: if dist.breed == "suse": 
append_line += " hostip=%s" % my_ip if dist.breed == "redhat": append_line += " ip=%s" % my_ip if dist.breed in ["ubuntu","debian"]: append_line += " netcfg/get_ipaddress=%s" % my_ip if my_mask is not None: if dist.breed in ["suse","redhat"]: append_line += " netmask=%s" % my_mask if dist.breed in ["ubuntu","debian"]: append_line += " netcfg/get_netmask=%s" % my_mask if my_gw is not None: if dist.breed in ["suse","redhat"]: append_line += " gateway=%s" % my_gw if dist.breed in ["ubuntu","debian"]: append_line += " netcfg/get_gateway=%s" % my_gw if exclude_dns is None or my_dns is not None: if dist.breed == "suse": append_line += " nameserver=%s" % my_dns[0] if dist.breed == "redhat": if type(my_dns) == list: append_line += " dns=%s" % ",".join(my_dns) else: append_line += " dns=%s" % my_dns if dist.breed in ["ubuntu","debian"]: append_line += " netcfg/get_nameservers=%s" % ",".join(my_dns) # add remaining kernel_options to append_line append_line += self.add_remaining_kopts(data["kernel_options"]) cfg.write(append_line) cfg.write("\n") cfg.write("MENU END\n") cfg.close() def generate_standalone_iso(self,imagesdir,isolinuxdir,distname,filesource): """ Create bootable CD image to be used for handsoff CD installtions """ # Get the distro object for the requested distro # and then get all of its descendants (profiles/sub-profiles/systems) distro = self.api.find_distro(distname) if distro is None: utils.die(self.logger,"distro %s was not found, aborting" % distname) descendants = distro.get_descendants() if filesource is None: # Try to determine the source from the distro kernel path self.logger.debug("trying to locate source for distro") found_source = False (source_head, source_tail) = os.path.split(distro.kernel) while source_tail != '': if source_head == os.path.join(self.api.settings().webdir, "ks_mirror"): filesource = os.path.join(source_head, source_tail) found_source = True self.logger.debug("found source in %s" % filesource) break (source_head, source_tail) = os.path.split(source_head) # Can't find the source, raise an error if not found_source: utils.die(self.logger," Error, no installation source found. 
When building a standalone ISO, you must specify a --source if the distro install tree is not hosted locally") self.logger.info("copying kernels and initrds for standalone distro") self.copy_boot_files(distro,isolinuxdir,None) cmd = "rsync -rlptgu --exclude=boot.cat --exclude=TRANS.TBL --exclude=isolinux/ %s/ %s/../" % (filesource, isolinuxdir) self.logger.info("- copying distro %s files (%s)" % (distname,cmd)) rc = utils.subprocess_call(self.logger, cmd, shell=True) if rc: utils.die(self.logger,"rsync of files failed") self.logger.info("generating a isolinux.cfg") isolinuxcfg = os.path.join(isolinuxdir, "isolinux.cfg") cfg = open(isolinuxcfg, "w+") cfg.write(self.iso_template) for descendant in descendants: data = utils.blender(self.api, False, descendant) cfg.write("\n") cfg.write("LABEL %s\n" % descendant.name) cfg.write(" MENU LABEL %s\n" % descendant.name) cfg.write(" kernel %s\n" % os.path.basename(distro.kernel)) append_line = " append initrd=%s" % os.path.basename(distro.initrd) if distro.breed == "redhat": append_line += " ks=cdrom:/isolinux/%s.cfg" % descendant.name if distro.breed == "suse": append_line += " autoyast=file:///isolinux/%s.cfg install=cdrom:///" % descendant.name if data["kernel_options"].has_key("install"): del data["kernel_options"]["install"] if distro.breed in ["ubuntu","debian"]: append_line += " auto-install/enable=true preseed/file=/cdrom/isolinux/%s.cfg" % descendant.name # add remaining kernel_options to append_line append_line += self.add_remaining_kopts(data["kernel_options"]) cfg.write(append_line) if descendant.COLLECTION_TYPE == 'profile': kickstart_data = self.api.kickgen.generate_kickstart_for_profile(descendant.name) elif descendant.COLLECTION_TYPE == 'system': kickstart_data = self.api.kickgen.generate_kickstart_for_system(descendant.name) if distro.breed == "redhat": cdregex = re.compile("url .*\n", re.IGNORECASE) kickstart_data = cdregex.sub("cdrom\n", kickstart_data) ks_name = os.path.join(isolinuxdir, "%s.cfg" % descendant.name) ks_file = open(ks_name, "w+") ks_file.write(kickstart_data) ks_file.close() self.logger.info("done writing config") cfg.write("\n") cfg.write("MENU END\n") cfg.close() return def run(self,iso=None,buildisodir=None,profiles=None,systems=None,distro=None,standalone=None,source=None,exclude_dns=None,mkisofs_opts=None): # the distro option is for stand-alone builds only if not standalone and distro is not None: utils.die(self.logger,"The --distro option should only be used when creating a standalone ISO") # if building standalone, we only want --distro, # profiles/systems are disallowed if standalone: if profiles is not None or systems is not None: utils.die(self.logger,"When building a standalone ISO, use --distro only instead of --profiles/--systems") elif distro is None: utils.die(self.logger,"When building a standalone ISO, you must specify a --distro") if source != None and not os.path.exists(source): utils.die(self.logger,"The source specified (%s) does not exist" % source) # if iso is none, create it in . 
as "kickstart.iso" if iso is None: iso = "kickstart.iso" if buildisodir is None: buildisodir = self.settings.buildisodir else: if not os.path.isdir(buildisodir): utils.die(self.logger,"The --tempdir specified is not a directory") (buildisodir_head,buildisodir_tail) = os.path.split(os.path.normpath(buildisodir)) if buildisodir_tail != "buildiso": buildisodir = os.path.join(buildisodir, "buildiso") self.logger.info("using/creating buildisodir: %s" % buildisodir) if not os.path.exists(buildisodir): os.makedirs(buildisodir) else: shutil.rmtree(buildisodir) os.makedirs(buildisodir) # if base of buildisodir does not exist, fail # create all profiles unless filtered by "profiles" imagesdir = os.path.join(buildisodir, "images") isolinuxdir = os.path.join(buildisodir, "isolinux") self.logger.info("building tree for isolinux") if not os.path.exists(imagesdir): os.makedirs(imagesdir) if not os.path.exists(isolinuxdir): os.makedirs(isolinuxdir) self.logger.info("copying miscellaneous files") isolinuxbin = "/usr/share/syslinux/isolinux.bin" if not os.path.exists(isolinuxbin): isolinuxbin = "/usr/lib/syslinux/isolinux.bin" menu = "/usr/share/syslinux/menu.c32" if not os.path.exists(menu): menu = "/var/lib/cobbler/loaders/menu.c32" chain = "/usr/share/syslinux/chain.c32" if not os.path.exists(chain): chain = "/usr/lib/syslinux/chain.c32" files = [ isolinuxbin, menu, chain ] for f in files: if not os.path.exists(f): utils.die(self.logger,"Required file not found: %s" % f) else: utils.copyfile(f, os.path.join(isolinuxdir, os.path.basename(f)), self.api) if standalone: self.generate_standalone_iso(imagesdir,isolinuxdir,distro,source) else: self.generate_netboot_iso(imagesdir,isolinuxdir,profiles,systems,exclude_dns) if mkisofs_opts == None: mkisofs_opts = "" else: mkisofs_opts = mkisofs_opts.strip() # removed --quiet cmd = "mkisofs -o %s %s -r -b isolinux/isolinux.bin -c isolinux/boot.cat" % (iso,mkisofs_opts) cmd = cmd + " -no-emul-boot -boot-load-size 4" cmd = cmd + " -boot-info-table -V Cobbler\ Install -R -J -T %s" % buildisodir rc = utils.subprocess_call(self.logger, cmd, shell=True) if rc != 0: utils.die(self.logger,"mkisofs failed") self.logger.info("ISO build complete") self.logger.info("You may wish to delete: %s" % buildisodir) self.logger.info("The output file is: %s" % iso) return True cobbler-2.4.1/cobbler/action_check.py000066400000000000000000000457531227367477500175640ustar00rootroot00000000000000""" Validates whether the system is reasonably well configured for serving up content. This is the code behind 'cobbler check'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import re import action_sync import utils import glob from utils import _ import clogger class BootCheck: def __init__(self,config,logger=None): """ Constructor """ self.config = config self.settings = config.settings() if logger is None: logger = clogger.Logger() self.logger = logger def run(self): """ Returns None if there are no errors, otherwise returns a list of things to correct prior to running application 'for real'. (The CLI usage is "cobbler check" before "cobbler sync") """ status = [] self.checked_dist = utils.check_dist() self.check_name(status) self.check_selinux(status) if self.settings.manage_dhcp: mode = self.config.api.get_sync().dhcp.what() if mode == "isc": self.check_dhcpd_bin(status) self.check_dhcpd_conf(status) self.check_service(status,"dhcpd") elif mode == "dnsmasq": self.check_dnsmasq_bin(status) self.check_service(status,"dnsmasq") if self.settings.manage_dns: mode = self.config.api.get_sync().dns.what() if mode == "bind": self.check_bind_bin(status) self.check_service(status,"named") elif mode == "dnsmasq" and not self.settings.manage_dhcp: self.check_dnsmasq_bin(status) self.check_service(status,"dnsmasq") mode = self.config.api.get_sync().tftpd.what() if mode == "in_tftpd": self.check_tftpd_bin(status) self.check_tftpd_dir(status) self.check_tftpd_conf(status) elif mode == "tftpd_py": self.check_ctftpd_bin(status) self.check_ctftpd_dir(status) self.check_ctftpd_conf(status) self.check_service(status, "cobblerd") self.check_bootloaders(status) self.check_for_wget_curl(status) self.check_rsync_conf(status) self.check_httpd(status) self.check_iptables(status) self.check_yum(status) self.check_debmirror(status) self.check_for_ksvalidator(status) self.check_for_default_password(status) self.check_for_unreferenced_repos(status) self.check_for_unsynced_repos(status) self.check_for_cman(status) return status def check_for_ksvalidator(self, status): if self.checked_dist in ["debian", "ubuntu"]: return if not os.path.exists("/usr/bin/ksvalidator"): status.append("ksvalidator was not found, install pykickstart") return True def check_for_cman(self, status): # not doing rpm -q here to be cross-distro friendly if not os.path.exists("/sbin/fence_ilo") and not os.path.exists("/usr/sbin/fence_ilo"): status.append("fencing tools were not found, and are required to use the (optional) power management features. 
install cman or fence-agents to use them") return True def check_service(self, status, which, notes=""): if notes != "": notes = " (NOTE: %s)" % notes rc = 0 if self.checked_dist in ("redhat","fedora","centos","scientific linux","suse"): if os.path.exists("/etc/rc.d/init.d/%s" % which): rc = utils.subprocess_call(self.logger,"/sbin/service %s status > /dev/null 2>/dev/null" % which, shell=True) if rc != 0: status.append(_("service %s is not running%s") % (which,notes)) return False elif self.checked_dist in ["debian", "ubuntu"]: # we still use /etc/init.d if os.path.exists("/etc/init.d/%s" % which): rc = utils.subprocess_call(self.logger,"/etc/init.d/%s status /dev/null 2>/dev/null" % which, shell=True) if rc != 0: status.append(_("service %s is not running%s") % (which,notes)) return False elif self.checked_dist == "ubuntu": if os.path.exists("/etc/init/%s.conf" % which): rc = utils.subprocess_call(self.logger,"status %s > /dev/null 2>&1" % which, shell=True) if rc != 0: status.append(_("service %s is not running%s") % (which,notes)) else: status.append(_("Unknown distribution type, cannot check for running service %s" % which)) return False return True def check_iptables(self, status): if os.path.exists("/etc/rc.d/init.d/iptables"): rc = utils.subprocess_call(self.logger,"/sbin/service iptables status >/dev/null 2>/dev/null", shell=True) if rc == 0: status.append(_("since iptables may be running, ensure 69, 80/443, and %(xmlrpc)s are unblocked") % { "xmlrpc" : self.settings.xmlrpc_port }) def check_yum(self,status): if self.checked_dist in ["debian", "ubuntu"]: return if not os.path.exists("/usr/bin/createrepo"): status.append(_("createrepo package is not installed, needed for cobbler import and cobbler reposync, install createrepo?")) if not os.path.exists("/usr/bin/reposync"): status.append(_("reposync is not installed, need for cobbler reposync, install/upgrade yum-utils?")) if not os.path.exists("/usr/bin/yumdownloader"): status.append(_("yumdownloader is not installed, needed for cobbler repo add with --rpm-list parameter, install/upgrade yum-utils?")) if self.settings.reposync_flags.find("-l"): if self.checked_dist in ("redhat","fedora","centos","scientific linux","suse"): yum_utils_ver = utils.subprocess_get(self.logger,"/usr/bin/rpmquery --queryformat=%{VERSION} yum-utils", shell=True) if yum_utils_ver < "1.1.17": status.append(_("yum-utils need to be at least version 1.1.17 for reposync -l, current version is %s") % yum_utils_ver ) def check_debmirror(self,status): if not os.path.exists("/usr/bin/debmirror"): status.append(_("debmirror package is not installed, it will be required to manage debian deployments and repositories")) if os.path.exists("/etc/debmirror.conf"): f = open("/etc/debmirror.conf") re_dists = re.compile(r'@dists=') re_arches = re.compile(r'@arches=') for line in f.readlines(): if re_dists.search(line) and not line.strip().startswith("#"): status.append(_("comment out 'dists' on /etc/debmirror.conf for proper debian support")) if re_arches.search(line) and not line.strip().startswith("#"): status.append(_("comment out 'arches' on /etc/debmirror.conf for proper debian support")) def check_name(self,status): """ If the server name in the config file is still set to localhost kickstarts run from koan will not have proper kernel line parameters. """ if self.settings.server == "127.0.0.1": status.append(_("The 'server' field in /etc/cobbler/settings must be set to something other than localhost, or kickstarting features will not work. 
This should be a resolvable hostname or IP for the boot server as reachable by all machines that will use it.")) if self.settings.next_server == "127.0.0.1": status.append(_("For PXE to be functional, the 'next_server' field in /etc/cobbler/settings must be set to something other than 127.0.0.1, and should match the IP of the boot server on the PXE network.")) def check_selinux(self,status): """ Suggests various SELinux rules changes to run Cobbler happily with SELinux in enforcing mode. FIXME: this method could use some refactoring in the future. """ if self.checked_dist in ["debian", "ubuntu"]: return enabled = self.config.api.is_selinux_enabled() if enabled: status.append(_("SELinux is enabled. Please review the following wiki page for details on ensuring cobbler works correctly in your SELinux environment:\n https://github.com/cobbler/cobbler/wiki/Selinux")) def check_for_default_password(self,status): default_pass = self.settings.default_password_crypted if default_pass == "$1$mF86/UHC$WvcIcX2t6crBz2onWxyac.": status.append(_("The default password used by the sample templates for newly installed machines (default_password_crypted in /etc/cobbler/settings) is still set to 'cobbler' and should be changed, try: \"openssl passwd -1 -salt 'random-phrase-here' 'your-password-here'\" to generate new one")) def check_for_unreferenced_repos(self,status): repos = [] referenced = [] not_found = [] for r in self.config.api.repos(): repos.append(r.name) for p in self.config.api.profiles(): my_repos = p.repos if my_repos != "<>": referenced.extend(my_repos) for r in referenced: if r not in repos and r != "<>": not_found.append(r) if len(not_found) > 0: status.append(_("One or more repos referenced by profile objects is no longer defined in cobbler: %s") % ", ".join(not_found)) def check_for_unsynced_repos(self,status): need_sync = [] for r in self.config.repos(): if r.mirror_locally == 1: lookfor = os.path.join(self.settings.webdir, "repo_mirror", r.name) if not os.path.exists(lookfor): need_sync.append(r.name) if len(need_sync) > 0: status.append(_("One or more repos need to be processed by cobbler reposync for the first time before kickstarting against them: %s") % ", ".join(need_sync)) def check_httpd(self,status): """ Check if Apache is installed. """ if self.checked_dist in ("redhat","fedora","centos","scientific linux"): rc = utils.subprocess_get(self.logger,"httpd -v") elif self.checked_dist == "suse": rc = utils.subprocess_get(self.logger,"httpd2 -v") else: rc = utils.subprocess_get(self.logger,"apache2 -v") if rc.find("Server") == -1: status.append("Apache (httpd) is not installed and/or in path") def check_dhcpd_bin(self,status): """ Check if dhcpd is installed """ if not os.path.exists("/usr/sbin/dhcpd"): status.append("dhcpd is not installed") def check_dnsmasq_bin(self,status): """ Check if dnsmasq is installed """ rc = utils.subprocess_get(self.logger,"dnsmasq --help") if rc.find("Valid options") == -1: status.append("dnsmasq is not installed and/or in path") def check_bind_bin(self,status): """ Check if bind is installed. 
""" rc = utils.subprocess_get(self.logger,"named -v") # it should return something like "BIND 9.6.1-P1-RedHat-9.6.1-6.P1.fc11" if rc.find("BIND") == -1: status.append("named is not installed and/or in path") def check_for_wget_curl(self,status): """ Check to make sure wget or curl is installed """ rc1 = utils.subprocess_call(self.logger,"which wget") rc2 = utils.subprocess_call(self.logger,"which curl") if rc1 != 0 and rc2 != 0: status.append("Neither wget nor curl are installed and/or available in $PATH. Cobbler requires that one of these utilities be installed.") def check_bootloaders(self,status): """ Check if network bootloaders are installed """ # FIXME: move zpxe.rexx to loaders bootloaders = { "elilo" : [ "/var/lib/cobbler/loaders/elilo*.efi" ], "menu.c32" : [ "/usr/share/syslinux/menu.c32", "/usr/lib/syslinux/menu.c32", "/var/lib/cobbler/loaders/menu.c32" ], "yaboot" : [ "/var/lib/cobbler/loaders/yaboot*" ], "pxelinux.0" : [ "/usr/share/syslinux/pxelinux.0", "/usr/lib/syslinux/pxelinux.0", "/var/lib/cobbler/loaders/pxelinux.0" ], "efi" : [ "/var/lib/cobbler/loaders/grub-x86.efi", "/var/lib/cobbler/loaders/grub-x86_64.efi" ], } # look for bootloaders at the glob locations above found_bootloaders = [] items = bootloaders.keys() for loader_name in items: patterns = bootloaders[loader_name] for pattern in patterns: matches = glob.glob(pattern) if len(matches) > 0: found_bootloaders.append(loader_name) not_found = [] # invert the list of what we've found so we can report on what we haven't found for loader_name in items: if loader_name not in found_bootloaders: not_found.append(loader_name) if len(not_found) > 0: status.append("some network boot-loaders are missing from /var/lib/cobbler/loaders, you may run 'cobbler get-loaders' to download them, or, if you only want to handle x86/x86_64 netbooting, you may ensure that you have installed a *recent* version of the syslinux package installed and can ignore this message entirely. Files in this directory, should you want to support all architectures, should include pxelinux.0, menu.c32, elilo.efi, and yaboot. 
The 'cobbler get-loaders' command is the easiest way to resolve these requirements.") def check_tftpd_bin(self,status): """ Check if tftpd is installed """ if self.checked_dist in ["debian", "ubuntu"]: return if not os.path.exists("/etc/xinetd.d/tftp"): status.append("missing /etc/xinetd.d/tftp, install tftp-server?") def check_tftpd_dir(self,status): """ Check if cobbler.conf's tftpboot directory exists """ if self.checked_dist in ["debian", "ubuntu"]: return bootloc = utils.tftpboot_location() if not os.path.exists(bootloc): status.append(_("please create directory: %(dirname)s") % { "dirname" : bootloc }) def check_tftpd_conf(self,status): """ Check that configured tftpd boot directory matches with actual Check that tftpd is enabled to autostart """ if self.checked_dist in ["debian", "ubuntu"]: return if os.path.exists("/etc/xinetd.d/tftp"): f = open("/etc/xinetd.d/tftp") re_disable = re.compile(r'disable.*=.*yes') for line in f.readlines(): if re_disable.search(line) and not line.strip().startswith("#"): status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/tftp" }) else: status.append("missing configuration file: /etc/xinetd.d/tftp") def check_ctftpd_bin(self,status): """ Check if the Cobbler tftp server is installed """ if self.checked_dist in ["debian", "ubuntu"]: return if not os.path.exists("/etc/xinetd.d/ctftp"): status.append("missing /etc/xinetd.d/ctftp") def check_ctftpd_dir(self,status): """ Check if cobbler.conf's tftpboot directory exists """ if self.checked_dist in ["debian", "ubuntu"]: return bootloc = utils.tftpboot_location() if not os.path.exists(bootloc): status.append(_("please create directory: %(dirname)s") % { "dirname" : bootloc }) def check_ctftpd_conf(self,status): """ Check that configured tftpd boot directory matches with actual Check that tftpd is enabled to autostart """ if self.checked_dist in ["debian", "ubuntu"]: return if os.path.exists("/etc/xinetd.d/tftp"): f = open("/etc/xinetd.d/tftp") re_disable = re.compile(r'disable.*=.*no') for line in f.readlines(): if re_disable.search(line) and not line.strip().startswith("#"): status.append(_("change 'disable' to 'yes' in %(file)s") % { "file" : "/etc/xinetd.d/tftp" }) if os.path.exists("/etc/xinetd.d/ctftp"): f = open("/etc/xinetd.d/ctftp") re_disable = re.compile(r'disable.*=.*yes') for line in f.readlines(): if re_disable.search(line) and not line.strip().startswith("#"): status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/ctftp" }) else: status.append("missing configuration file: /etc/xinetd.d/ctftp") def check_rsync_conf(self,status): """ Check that rsync is enabled to autostart """ if self.checked_dist in ["debian", "ubuntu"]: return if os.path.exists("/etc/xinetd.d/rsync"): f = open("/etc/xinetd.d/rsync") re_disable = re.compile(r'disable.*=.*yes') for line in f.readlines(): if re_disable.search(line) and not line.strip().startswith("#"): status.append(_("change 'disable' to 'no' in %(file)s") % { "file" : "/etc/xinetd.d/rsync" }) else: status.append(_("file %(file)s does not exist") % { "file" : "/etc/xinetd.d/rsync" }) def check_dhcpd_conf(self,status): """ NOTE: this code only applies if cobbler is *NOT* set to generate a dhcp.conf file Check that dhcpd *appears* to be configured for pxe booting. We can't assure file correctness. Since a cobbler user might have dhcp on another server, it's okay if it's not there and/or not configured correctly according to automated scans. 
""" if not (self.settings.manage_dhcp == 0): return if os.path.exists(self.settings.dhcpd_conf): match_next = False match_file = False f = open(self.settings.dhcpd_conf) for line in f.readlines(): if line.find("next-server") != -1: match_next = True if line.find("filename") != -1: match_file = True if not match_next: status.append(_("expecting next-server entry in %(file)s") % { "file" : self.settings.dhcpd_conf }) if not match_file: status.append(_("missing file: %(file)s") % { "file" : self.settings.dhcpd_conf }) else: status.append(_("missing file: %(file)s") % { "file" : self.settings.dhcpd_conf }) cobbler-2.4.1/cobbler/action_dlcontent.py000066400000000000000000000064531227367477500204730ustar00rootroot00000000000000""" Downloads bootloader content for all arches for when the user doesn't want to supply their own. Copyright 2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import urlgrabber import clogger class ContentDownloader: def __init__(self,config,logger=None): """ Constructor """ self.config = config self.settings = config.settings() if logger is None: logger = clogger.Logger() self.logger = logger def run(self,force=False): """ Download bootloader content for all of the latest bootloaders, since the user has chosen to not supply their own. You may ask "why not get this from yum", though Fedora has no IA64 repo, for instance, and we also want this to be able to work on Debian and further do not want folks to have to install a cross compiler. For those that don't like this approach they can still source their cross-arch bootloader content manually. 
""" content_server = "http://www.cobblerd.org/loaders" dest = "/var/lib/cobbler/loaders" files = ( ( "%s/README" % content_server, "%s/README" % dest ), ( "%s/COPYING.elilo" % content_server, "%s/COPYING.elilo" % dest ), ( "%s/COPYING.yaboot" % content_server, "%s/COPYING.yaboot" % dest), ( "%s/COPYING.syslinux" % content_server, "%s/COPYING.syslinux" % dest), ( "%s/elilo-3.8-ia64.efi" % content_server, "%s/elilo-ia64.efi" % dest ), ( "%s/yaboot-1.3.14-12" % content_server, "%s/yaboot" % dest), ( "%s/pxelinux.0-3.86" % content_server, "%s/pxelinux.0" % dest), ( "%s/menu.c32-3.86" % content_server, "%s/menu.c32" % dest), ( "%s/grub-0.97-x86.efi" % content_server, "%s/grub-x86.efi" % dest), ( "%s/grub-0.97-x86_64.efi" % content_server, "%s/grub-x86_64.efi" % dest), ) proxies = {} if os.environ.has_key("HTTP_PROXY"): proxies['http'] = os.environ["HTTP_PROXY"] if os.environ.has_key("HTTPS_PROXY"): proxies['https'] = os.environ["HTTPS_PROXY"] if os.environ.has_key("FTP_PROXY"): proxies['ftp'] = os.environ["FTP_PROXY"] if len(proxies) == 0: proxies = None for src,dst in files: if os.path.exists(dst) and not force: self.logger.info("path %s already exists, not overwriting existing content, use --force if you wish to update" % dst) continue self.logger.info("downloading %s to %s" % (src,dst)) urlgrabber.grabber.urlgrab(src, filename=dst, proxies=proxies) return True cobbler-2.4.1/cobbler/action_hardlink.py000066400000000000000000000051651227367477500202740ustar00rootroot00000000000000""" Hard links cobbler content together to save space. Copyright 2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import utils from cexceptions import * import clogger class HardLinker: def __init__(self,config,logger=None): """ Constructor """ #self.config = config #self.api = config.api #self.settings = config.settings() if logger is None: logger = clogger.Logger() self.logger = logger self.distro = utils.check_dist() if self.distro == "ubuntu" or self.distro == "debian": self.hardlink = "/usr/bin/hardlink" self.hardlink_args = "-f -p -o -t -v /var/www/cobbler/ks_mirror /var/www/cobbler/repo_mirror" else: self.hardlink = "/usr/sbin/hardlink" self.hardlink_args = "-c -v /var/www/cobbler/ks_mirror /var/www/cobbler/repo_mirror" self.hardlink_cmd = "%s %s" % (self.hardlink, self.hardlink_args) def run(self): """ Simply hardlinks directories that are cobbler managed. This is a /very/ simple command but may grow more complex and intelligent over time. """ # FIXME: if these directories become configurable some # changes will be required here. if not os.path.exists(self.hardlink): utils.die(self.logger,"please install 'hardlink' (%s) to use this feature" % self.hardlink) self.logger.info("now hardlinking to save space, this may take some time.") rc = utils.subprocess_call(self.logger,self.hardlink_cmd,shell=True) # FIXME: how about settings? 
(self.settings.webdir) webdir = "/var/www/cobbler" if os.path.exists("/srv/www"): webdir = "/srv/www/cobbler" rc = utils.subprocess_call(self.logger,"/usr/sbin/hardlink -c -v "+webdir+"/ks_mirror /var/www/cobbler/repo_mirror",shell=True) return rc cobbler-2.4.1/cobbler/action_litesync.py000066400000000000000000000161051227367477500203260ustar00rootroot00000000000000""" Running small pieces of cobbler sync when certain actions are taken, such that we don't need a time consuming sync when adding new systems if nothing has changed for systems that have already been created. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import utils import traceback import clogger import module_loader class BootLiteSync: """ Handles conversion of internal state to the tftpboot tree layout """ def __init__(self,config,verbose=False,logger=None): """ Constructor """ self.verbose = verbose self.config = config self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.images = config.images() self.settings = config.settings() self.repos = config.repos() if logger is None: logger = clogger.Logger() self.logger = logger self.tftpd = module_loader.get_module_from_file( "tftpd","module","in_tftpd" ).get_manager(config,logger) self.sync = config.api.get_sync(verbose,logger=self.logger) self.sync.make_tftpboot() def add_single_distro(self, name): # get the distro record distro = self.distros.find(name=name) if distro is None: return # copy image files to images/$name in webdir & tftpboot: self.sync.pxegen.copy_single_distro_files(distro, self.settings.webdir,True) self.tftpd.add_single_distro(distro) # create the symlink for this distro src_dir = utils.find_distro_path(self.settings,distro) dst_dir = os.path.join(self.settings.webdir,"links",name) if os.path.exists(dst_dir): self.logger.warning("skipping symlink, destination (%s) exists" % dst_dir) elif utils.path_tail(os.path.join(self.settings.webdir,"ks_mirror"),src_dir) == "": self.logger.warning("skipping symlink, the source (%s) is not in %s" % (src_dir,os.path.join(self.settings.webdir,"ks_mirror"))) else: try: self.logger.info("trying symlink %s -> %s" % (src_dir,dst_dir)) os.symlink(src_dir, dst_dir) except (IOError, OSError): self.logger.error("symlink failed (%s -> %s)" % (src_dir,dst_dir)) # generate any templates listed in the distro self.sync.pxegen.write_templates(distro) # cascade sync kids = distro.get_children() for k in kids: self.add_single_profile(k.name, rebuild_menu=False) self.sync.pxegen.make_pxe_menu() def add_single_image(self, name): image = self.images.find(name=name) self.sync.pxegen.copy_single_image_files(image) kids = image.get_children() for k in kids: self.add_single_system(k.name) self.sync.pxegen.make_pxe_menu() def remove_single_distro(self, name): bootloc = utils.tftpboot_location() 
# delete contents of images/$name directory in webdir utils.rmtree(os.path.join(self.settings.webdir, "images", name)) # delete contents of images/$name in tftpboot utils.rmtree(os.path.join(bootloc, "images", name)) # delete potential symlink to tree in webdir/links utils.rmfile(os.path.join(self.settings.webdir, "links", name)) def remove_single_image(self, name): bootloc = utils.tftpboot_location() utils.rmfile(os.path.join(bootloc, "images2", name)) def add_single_profile(self, name, rebuild_menu=True): # get the profile object: profile = self.profiles.find(name=name) if profile is None: # most likely a subprofile's kid has been # removed already, though the object tree has # not been reloaded ... and this is just noise. return # rebuild the yum configuration files for any attached repos # generate any templates listed in the distro self.sync.pxegen.write_templates(profile) # cascade sync kids = profile.get_children() for k in kids: if k.COLLECTION_TYPE == "profile": self.add_single_profile(k.name, rebuild_menu=False) else: self.add_single_system(k.name) if rebuild_menu: self.sync.pxegen.make_pxe_menu() return True def remove_single_profile(self, name, rebuild_menu=True): # delete profiles/$name file in webdir utils.rmfile(os.path.join(self.settings.webdir, "profiles", name)) # delete contents of kickstarts/$name directory in webdir utils.rmtree(os.path.join(self.settings.webdir, "kickstarts", name)) if rebuild_menu: self.sync.pxegen.make_pxe_menu() def update_system_netboot_status(self,name): self.tftpd.update_netboot(name) def add_single_system(self, name): # get the system object: system = self.systems.find(name=name) if system is None: return # rebuild system_list file in webdir if self.settings.manage_dhcp: self.sync.dhcp.regen_ethers() if self.settings.manage_dns: self.sync.dns.regen_hosts() # write the PXE files for the system self.tftpd.add_single_system(system) def remove_single_system(self, name): bootloc = utils.tftpboot_location() system_record = self.systems.find(name=name) # delete contents of kickstarts_sys/$name in webdir system_record = self.systems.find(name=name) itanic = False profile = self.profiles.find(name=system_record.profile) if profile is not None: distro = self.distros.find(name=profile.get_conceptual_parent().name) if distro is not None and distro.arch in [ "ia64", "IA64"]: itanic = True for (name,interface) in system_record.interfaces.iteritems(): filename = utils.get_config_filename(system_record,interface=name) if not itanic: utils.rmfile(os.path.join(bootloc, "pxelinux.cfg", filename)) utils.rmfile(os.path.join(bootloc, "grub", filename.upper())) else: utils.rmfile(os.path.join(bootloc, filename)) cobbler-2.4.1/cobbler/action_log.py000066400000000000000000000041461227367477500172570ustar00rootroot00000000000000""" Copyright 2009, Red Hat, Inc and Others Bill Peck This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import sys import traceback import clogger import utils from cexceptions import * import glob class LogTool: """ Helpers for dealing with System logs, consoles, anamon, etc. """ def __init__(self,config,system,api,logger=None): """ Log library constructor requires a cobbler system object. """ self.system = system self.config = config self.settings = config.settings() self.api = api if logger is None: logger = clogger.Logger() self.logger = logger def clear(self): """ Clears the system logs """ consoles = self.settings.consoles logs = filter(os.path.isfile, glob.glob('%s/%s' % (consoles, self.system.name))) anamon_dir = '/var/log/cobbler/anamon/%s' % self.system.name if os.path.isdir(anamon_dir): logs.extend(filter(os.path.isfile, glob.glob('%s/*' % anamon_dir))) for log in logs: try: f = open(log, 'w') f.truncate() f.close() except IOError, e: self.logger.info("Failed to Truncate '%s':%s " % (log, e)) except OSError, e: self.logger.info("Failed to Truncate '%s':%s " % (log, e)) return 0 cobbler-2.4.1/cobbler/action_power.py000066400000000000000000000115111227367477500176240ustar00rootroot00000000000000""" Power management library. For cobbler objects with power management configured encapsulate the logic to run power management commands so that the admin does not have to use separate tools and remember how each of the power management tools are set up. This makes power cycling a system for reinstallation much easier. See https://github.com/cobbler/cobbler/wiki/Power-management Copyright 2008-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import traceback import time import re import utils import func_utils from cexceptions import * import templar import clogger class PowerTool: """ Issues power management commands for a cobbler system. """ def __init__(self,config,system,api,force_user=None,force_pass=None,logger=None): """ Power library constructor requires a cobbler system object. """ self.system = system self.config = config self.settings = config.settings() self.api = api self.logger = self.api.logger self.force_user = force_user self.force_pass = force_pass if logger is None: logger = clogger.Logger() self.logger = logger def power(self, desired_state): """ state is either "on" or "off". Rebooting is implemented at the api.py level. The user and password need not be supplied. If not supplied they will be taken from the environment, COBBLER_POWER_USER and COBBLER_POWER_PASS. If provided, these will override any other data and be used instead. Users interested in maximum security should take that route.
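For example (the user, password and system name here are purely illustrative): exporting COBBLER_POWER_USER=admin and COBBLER_POWER_PASS=secret before running a power action such as 'cobbler system poweron --name=example-system' supplies credentials without storing them on the system object.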
""" power_command = utils.get_power(self.system.power_type) if not power_command: utils.die(self.logger,"no power type set for system") meta = utils.blender(self.api, False, self.system) meta["power_mode"] = desired_state # allow command line overrides of the username/password if self.force_user is not None: meta["power_user"] = self.force_user if self.force_pass is not None: meta["power_pass"] = self.force_pass self.logger.info("cobbler power configuration is:") self.logger.info(" type : %s" % self.system.power_type) self.logger.info(" address: %s" % self.system.power_address) self.logger.info(" user : %s" % self.system.power_user) self.logger.info(" id : %s" % self.system.power_id) # if no username/password data, check the environment if meta.get("power_user","") == "": meta["power_user"] = os.environ.get("COBBLER_POWER_USER","") if meta.get("power_pass","") == "": meta["power_pass"] = os.environ.get("COBBLER_POWER_PASS","") template = utils.get_power_template(self.system.power_type) tmp = templar.Templar(self.api._config) template_data = tmp.render(template, meta, None, self.system) # Try the power command 5 times before giving up. # Some power switches are flakey for x in range(0,5): output, rc = utils.subprocess_sp(self.logger, power_command, shell=False, input=template_data) if rc == 0: # If the desired state is actually a query for the status # return different information than command return code if desired_state == 'status': match = re.match('(^Status:\s)(on|off)', output, re.IGNORECASE) if match: power_status = match.groups()[1] if power_status.lower() == 'on': return True else: return False utils.die(self.logger,"command succeeded (rc=%s), but output ('%s') was not understood" % (rc, output)) return None break else: time.sleep(2) if not rc == 0: utils.die(self.logger,"command failed (rc=%s), please validate the physical setup and cobbler config" % rc) return rc cobbler-2.4.1/cobbler/action_replicate.py000066400000000000000000000401561227367477500204470ustar00rootroot00000000000000""" Replicate from a cobbler master. Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan Scott Henson This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import xmlrpclib import api as cobbler_api import utils from utils import _ from cexceptions import * import clogger import fnmatch OBJ_TYPES = [ "distro", "profile", "system", "repo", "image", "mgmtclass", "package", "file" ] class Replicate: def __init__(self,config,logger=None): """ Constructor """ self.config = config self.settings = config.settings() self.api = config.api self.remote = None self.uri = None if logger is None: logger = clogger.Logger() self.logger = logger def rsync_it(self,from_path,to_path,type=None): from_path = "%s::%s" % (self.master, from_path) if type == 'repo': cmd = "rsync %s %s %s" % (self.settings.replicate_repo_rsync_options, from_path, to_path) else: cmd = "rsync %s %s %s" % (self.settings.replicate_rsync_options, from_path, to_path) rc = utils.subprocess_call(self.logger, cmd, shell=True) if rc !=0: self.logger.info("rsync failed") # ------------------------------------------------------- def remove_objects_not_on_master(self, obj_type): locals = utils.loh_to_hoh(self.local_data[obj_type],"uid") remotes = utils.loh_to_hoh(self.remote_data[obj_type],"uid") for (luid, ldata) in locals.iteritems(): if not remotes.has_key(luid): try: self.logger.info("removing %s %s" % (obj_type, ldata["name"])) self.api.remove_item(obj_type, ldata["name"], recursive=True, logger=self.logger) except Exception, e: utils.log_exc(self.logger) # ------------------------------------------------------- def add_objects_not_on_local(self, obj_type): locals = utils.loh_to_hoh(self.local_data[obj_type], "uid") remotes = utils.loh_sort_by_key(self.remote_data[obj_type],"depth") remotes2 = utils.loh_to_hoh(self.remote_data[obj_type],"depth") for rdata in remotes: # do not add the system if it is not on the transfer list if not self.must_include[obj_type].has_key(rdata["name"]): continue if not locals.has_key(rdata["uid"]): creator = getattr(self.api, "new_%s" % obj_type) newobj = creator() newobj.from_datastruct(rdata) try: self.logger.info("adding %s %s" % (obj_type, rdata["name"])) if not self.api.add_item(obj_type, newobj,logger=self.logger): self.logger.error("failed to add %s %s" % (obj_type, rdata["name"])) except Exception, e: utils.log_exc(self.logger) # ------------------------------------------------------- def replace_objects_newer_on_remote(self, obj_type): locals = utils.loh_to_hoh(self.local_data[obj_type],"uid") remotes = utils.loh_to_hoh(self.remote_data[obj_type],"uid") for (ruid, rdata) in remotes.iteritems(): # do not add the system if it is not on the transfer list if not self.must_include[obj_type].has_key(rdata["name"]): continue if locals.has_key(ruid): ldata = locals[ruid] if ldata["mtime"] < rdata["mtime"]: if ldata["name"] != rdata["name"]: self.logger.info("removing %s %s" % (obj_type, ldata["name"])) self.api.remove_item(obj_type, ldata["name"], recursive=True, logger=self.logger) creator = getattr(self.api, "new_%s" % obj_type) newobj = creator() newobj.from_datastruct(rdata) try: self.logger.info("updating %s %s" % (obj_type, rdata["name"])) if not self.api.add_item(obj_type, newobj): self.logger.error("failed to update %s %s" % (obj_type, rdata["name"])) except Exception, e: utils.log_exc(self.logger) # ------------------------------------------------------- def replicate_data(self): self.local_data = {} self.remote_data = {} 
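# Rough flow of this method: fetch the item lists for every object type from the remote master and the local server, build the transfer map via generate_include_map(), optionally prune local objects that no longer exist on the master, rsync distro/repo/kickstart/snippet/trigger/script data unless --omit-data was given, then add objects missing locally and replace any that are newer on the remote.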
self.remote_settings = self.remote.get_settings() self.logger.info("Querying Both Servers") for what in OBJ_TYPES: self.remote_data[what] = self.remote.get_items(what) self.local_data[what] = self.local.get_items(what) self.generate_include_map() if self.prune: self.logger.info("Removing Objects Not Stored On Master") obj_types = OBJ_TYPES[:] if len(self.system_patterns) == 0 and "system" in obj_types: obj_types.remove("system") for what in obj_types: self.remove_objects_not_on_master(what) else: self.logger.info("*NOT* Removing Objects Not Stored On Master") if not self.omit_data: self.logger.info("Rsyncing distros") for distro in self.must_include["distro"].keys(): if self.must_include["distro"][distro] == 1: self.logger.info("Rsyncing distro %s" % distro) target = self.remote.get_distro(distro) target_webdir = os.path.join(self.remote_settings["webdir"],"ks_mirror") tail = utils.path_tail(target_webdir,target["kernel"]) if tail != "": try: # path_tail(a,b) returns something that looks like # an absolute path, but it's really the sub-path # from a that is contained in b. That means we want # the first element of the path dest = os.path.join(self.settings.webdir,"ks_mirror",tail.split("/")[1]) self.rsync_it("distro-%s" % target["name"], dest) except: self.logger.error("Failed to rsync distro %s" % distro) continue else: self.logger.warning("Skipping distro %s, as it doesn't appear to live under ks_mirror" % distro) self.logger.info("Rsyncing repos") for repo in self.must_include["repo"].keys(): if self.must_include["repo"][repo] == 1: self.rsync_it("repo-%s"%repo, os.path.join(self.settings.webdir,"repo_mirror",repo),"repo") self.logger.info("Rsyncing distro repo configs") self.rsync_it("cobbler-distros/config/", os.path.join(self.settings.webdir,"ks_mirror","config")) self.logger.info("Rsyncing kickstart templates & snippets") self.rsync_it("cobbler-kickstarts","/var/lib/cobbler/kickstarts") self.rsync_it("cobbler-snippets","/var/lib/cobbler/snippets") self.logger.info("Rsyncing triggers") self.rsync_it("cobbler-triggers","/var/lib/cobbler/triggers") self.logger.info("Rsyncing scripts") self.rsync_it("cobbler-scripts","/var/lib/cobbler/scripts") else: self.logger.info("*NOT* Rsyncing Data") self.logger.info("Removing Objects Not Stored On Local") for what in OBJ_TYPES: self.add_objects_not_on_local(what) self.logger.info("Updating Objects Newer On Remote") for what in OBJ_TYPES: self.replace_objects_newer_on_remote(what) def link_distros(self): for distro in self.api.distros(): self.logger.debug("Linking Distro %s" % distro.name) utils.link_distro(self.settings, distro) def generate_include_map(self): self.remote_names = {} self.remote_dict = {} self.must_include = { "distro" : {}, "profile" : {}, "system" : {}, "image" : {}, "repo" : {}, "mgmtclass" : {}, "package" : {}, "file" : {} } for ot in OBJ_TYPES: self.remote_names[ot] = utils.loh_to_hoh(self.remote_data[ot],"name").keys() self.remote_dict[ot] = utils.loh_to_hoh(self.remote_data[ot],"name") if self.sync_all: for names in self.remote_dict[ot]: self.must_include[ot][names] = 1 self.logger.debug("remote names struct is %s" % self.remote_names) if not self.sync_all: # include all profiles that are matched by a pattern for obj_type in OBJ_TYPES: patvar = getattr(self, "%s_patterns" % obj_type) self.logger.debug("* Finding Explicit %s Matches" % obj_type) for pat in patvar: for remote in self.remote_names[obj_type]: self.logger.debug("?: seeing if %s looks like %s" % (remote,pat)) if fnmatch.fnmatch(remote, pat): 
self.logger.debug("Adding %s for pattern match %s."%(remote, pat)) self.must_include[obj_type][remote] = 1 # include all profiles that systems require # whether they are explicitly included or not self.logger.debug("* Adding Profiles Required By Systems") for sys in self.must_include["system"].keys(): pro = self.remote_dict["system"][sys].get("profile","") self.logger.debug("?: system %s requires profile %s."%(sys, pro)) if pro != "": self.logger.debug("Adding profile %s for system %s."%(pro, sys)) self.must_include["profile"][pro] = 1 # include all profiles that subprofiles require # whether they are explicitly included or not # very deep nesting is possible self.logger.debug("* Adding Profiles Required By SubProfiles") while True: loop_exit = True for pro in self.must_include["profile"].keys(): parent = self.remote_dict["profile"][pro].get("parent","") if parent != "": if not self.must_include["profile"].has_key(parent): self.logger.debug("Adding parent profile %s for profile %s."%(parent, pro)) self.must_include["profile"][parent] = 1 loop_exit = False if loop_exit: break # require all distros that any profiles in the generated list requires # whether they are explicitly included or not self.logger.debug("* Adding Distros Required By Profiles") for p in self.must_include["profile"].keys(): distro = self.remote_dict["profile"][p].get("distro","") if not distro == "<>" and not distro == "~": self.logger.debug("Adding distro %s for profile %s."%(distro, p)) self.must_include["distro"][distro] = 1 # require any repos that any profiles in the generated list requires # whether they are explicitly included or not self.logger.debug("* Adding Repos Required By Profiles") for p in self.must_include["profile"].keys(): repos = self.remote_dict["profile"][p].get("repos",[]) if repos != "<>": for r in repos: self.logger.debug("Adding repo %s for profile %s."%(r, p)) self.must_include["repo"][r] = 1 # include all images that systems require # whether they are explicitly included or not self.logger.debug("* Adding Images Required By Systems") for sys in self.must_include["system"].keys(): img = self.remote_dict["system"][sys].get("image","") self.logger.debug("?: system %s requires image %s."%(sys, img)) if img != "": self.logger.debug("Adding image %s for system %s."%(img, sys)) self.must_include["image"][img] = 1 # FIXME: remove debug for ot in OBJ_TYPES: self.logger.debug("transfer list for %s is %s" % (ot, self.must_include[ot].keys())) # ------------------------------------------------------- def run(self, cobbler_master=None, distro_patterns=None, profile_patterns=None, system_patterns=None, repo_patterns=None, image_patterns=None, mgmtclass_patterns=None, package_patterns=None, file_patterns=None, prune=False, omit_data=False, sync_all=False, use_ssl=False): """ Get remote profiles and distros and sync them locally """ self.distro_patterns = distro_patterns.split() self.profile_patterns = profile_patterns.split() self.system_patterns = system_patterns.split() self.repo_patterns = repo_patterns.split() self.image_patterns = image_patterns.split() self.mgmtclass_patterns = mgmtclass_patterns.split() self.package_patterns = package_patterns.split() self.file_patterns = file_patterns.split() self.omit_data = omit_data self.prune = prune self.sync_all = sync_all self.use_ssl = use_ssl if self.use_ssl: protocol = 'https' else: protocol = 'http' if cobbler_master is not None: self.master = cobbler_master elif len(self.settings.cobbler_master) > 0: self.master = self.settings.cobbler_master else: 
utils.die('No cobbler master specified, try --master.') self.uri = '%s://%s/cobbler_api' % (protocol,self.master) self.logger.info("cobbler_master = %s" % cobbler_master) self.logger.info("distro_patterns = %s" % self.distro_patterns) self.logger.info("profile_patterns = %s" % self.profile_patterns) self.logger.info("system_patterns = %s" % self.system_patterns) self.logger.info("repo_patterns = %s" % self.repo_patterns) self.logger.info("image_patterns = %s" % self.image_patterns) self.logger.info("mgmtclass_patterns = %s" % self.mgmtclass_patterns) self.logger.info("package_patterns = %s" % self.package_patterns) self.logger.info("file_patterns = %s" % self.file_patterns) self.logger.info("omit_data = %s" % self.omit_data) self.logger.info("sync_all = %s" % self.sync_all) self.logger.info("use_ssl = %s" % self.use_ssl) self.logger.info("XMLRPC endpoint: %s" % self.uri) self.logger.debug("test ALPHA") self.remote = xmlrpclib.Server(self.uri) self.logger.debug("test BETA") self.remote.ping() self.local = xmlrpclib.Server("http://127.0.0.1/cobbler_api") self.local.ping() self.replicate_data() self.link_distros() self.logger.info("Syncing") self.api.sync(logger=self.logger) self.logger.info("Done") return True cobbler-2.4.1/cobbler/action_report.py000066400000000000000000000276251227367477500200200ustar00rootroot00000000000000""" Report from a cobbler master. FIXME: reinstante functionality for 2.0 Copyright 2007-2009, Red Hat, Inc and Others Anderson Silva Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. """ import re from cexceptions import * import utils import clogger class Report: def __init__(self, config, logger=None): """ Constructor """ self.config = config self.settings = config.settings() self.api = config.api self.report_type = None self.report_what = None self.report_name = None self.report_fields = None self.report_noheaders = None self.array_re = re.compile('([^[]+)\[([^]]+)\]') if logger is None: logger = clogger.Logger() self.logger = logger def fielder(self, structure, fields_list): """ Return data from a subset of fields of some item """ item = {} for field in fields_list: internal = self.array_re.search(field) # check if field is primary field if field in structure.keys(): item[field] = structure[field] # check if subfield in 'interfaces' field elif internal and internal.group(1) in structure.keys(): outer = internal.group(1) inner = internal.group(2) if type(structure[outer]) is type({}) and inner in structure[outer]: item[field] = structure[outer][inner] elif "interfaces" in structure.keys(): for device in structure['interfaces'].keys(): if field in structure['interfaces'][device]: item[field] = device + ': ' + structure['interfaces'][device][field] return item def reporting_csv(self, info, order, noheaders): """ Formats data on 'info' for csv output """ outputheaders = '' outputbody = '' sep = ',' info_count = 0 for item in info: item_count = 0 for key in order: if info_count == 0: outputheaders += str(key) + sep if key in item.keys(): outputbody += str(item[key]) + sep else: outputbody += '-' + sep item_count = item_count + 1 info_count = info_count + 1 outputbody += '\n' outputheaders += '\n' if noheaders: outputheaders = ''; return outputheaders + outputbody def reporting_trac(self, info, order, noheaders): """ Formats data on 
'info' for trac wiki table output """ outputheaders = '' outputbody = '' sep = '||' info_count = 0 for item in info: item_count = 0 for key in order: if info_count == 0: outputheaders += sep + str(key) if key in item.keys(): outputbody += sep + str(item[key]) else: outputbody += sep + '-' item_count = item_count + 1 info_count = info_count + 1 outputbody += '||\n' outputheaders += '||\n' if noheaders: outputheaders = ''; return outputheaders + outputbody def reporting_doku(self, info, order, noheaders): """ Formats data on 'info' for doku wiki table output """ outputheaders = '' outputbody = '' sep1 = '^' sep2 = '|' info_count = 0 for item in info: item_count = 0 for key in order: if info_count == 0: outputheaders += sep1 + key if key in item.keys(): outputbody += sep2 + item[key] else: outputbody += sep2 + '-' item_count = item_count + 1 info_count = info_count + 1 outputbody += sep2 + '\n' outputheaders += sep1 + '\n' if noheaders: outputheaders = ''; return outputheaders + outputbody def reporting_mediawiki(self, info, order, noheaders): """ Formats data on 'info' for mediawiki table output """ outputheaders = '' outputbody = '' opentable = '{| border="1"\n' closetable = '|}\n' sep1 = '||' sep2 = '|' sep3 = '|-' info_count = 0 for item in info: item_count = 0 for key in order: if info_count == 0 and item_count == 0: outputheaders += sep2 + key elif info_count == 0: outputheaders += sep1 + key if item_count == 0: if key in item.keys(): outputbody += sep2 + str(item[key]) else: outputbody += sep2 + '-' else: if key in item.keys(): outputbody += sep1 + str(item[key]) else: outputbody += sep1 + '-' item_count = item_count + 1 info_count = info_count + 1 outputbody += '\n' + sep3 + '\n' outputheaders += '\n' + sep3 + '\n' if noheaders: outputheaders = ''; return opentable + outputheaders + outputbody + closetable def print_formatted_data(self, data, order, report_type, noheaders): """ Used for picking the correct format to output data as """ if report_type == "csv": self.logger.flat(self.reporting_csv(data, order, noheaders)) if report_type == "mediawiki": self.logger.flat(self.reporting_mediawiki(data, order, noheaders)) if report_type == "trac": self.logger.flat(self.reporting_trac(data, order, noheaders)) if report_type == "doku": self.logger.flat(self.reporting_doku(data, order, noheaders)) return True def reporting_sorter(self, a, b): """ Used for sorting cobbler objects for report commands """ return cmp(a.name, b.name) def reporting_print_sorted(self, collection): """ Prints all objects in a collection sorted by name """ collection = [x for x in collection] collection.sort(self.reporting_sorter) for x in collection: self.logger.flat(x.printable()) return True def reporting_list_names2(self, collection, name): """ Prints a specific object in a collection. 
""" obj = collection.get(name) if obj is not None: self.logger.flat(obj.printable()) return True def reporting_print_all_fields(self, collection, report_name, report_type, report_noheaders): """ Prints all fields in a collection as a table given the report type """ # per-item hack if report_name: collection = collection.find(name=report_name) if collection: collection = [collection] else: return collection = [x for x in collection] collection.sort(self.reporting_sorter) data = [] out_order = [] count = 0 for x in collection: item = {} structure = x.to_datastruct() for (key, value) in structure.iteritems(): # exception for systems which could have > 1 interface if key == "interfaces": for (device, info) in value.iteritems(): for (info_header, info_value) in info.iteritems(): item[info_header] = str(device) + ': ' + str(info_value) # needs to create order list for print_formatted_fields if count == 0: out_order.append(info_header) else: item[key] = value # needs to create order list for print_formatted_fields if count == 0: out_order.append(key) count = count + 1 data.append(item) self.print_formatted_data(data = data, order = out_order, report_type = report_type, noheaders = report_noheaders) return True def reporting_print_x_fields(self, collection, report_name, report_type, report_fields, report_noheaders): """ Prints specific fields in a collection as a table given the report type """ # per-item hack if report_name: collection = collection.find(name=report_name) if collection: collection = [collection] else: return collection = [x for x in collection] collection.sort(self.reporting_sorter) data = [] fields_list = report_fields.replace(' ', '').split(',') for x in collection: structure = x.to_datastruct() item = self.fielder(structure, fields_list) data.append(item) self.print_formatted_data(data = data, order = fields_list, report_type = report_type, noheaders = report_noheaders) return True # ------------------------------------------------------- def run(self, report_what = None, report_name = None, report_type = None, report_fields = None, report_noheaders = None): """ Get remote profiles and distros and sync them locally """ """ 1. Handles original report output 2. Handles all fields of report outputs as table given a format 3. 
Handles specific fields of report outputs as table given a format """ if report_type == 'text' and report_fields == 'all': for collection_name in ["distro","profile","system","repo","network","image","mgmtclass","package","file"]: if report_what=="all" or report_what==collection_name or report_what=="%ss"%collection_name or report_what=="%ses"%collection_name: if report_name: self.reporting_list_names2(self.api.get_items(collection_name), report_name) else: self.reporting_print_sorted(self.api.get_items(collection_name)) elif report_type == 'text' and report_fields != 'all': utils.die(self.logger,"The 'text' type can only be used with field set to 'all'") elif report_type != 'text' and report_fields == 'all': for collection_name in ["distro","profile","system","repo","network","image","mgmtclass","package","file"]: if report_what=="all" or report_what==collection_name or report_what=="%ss"%collection_name or report_what=="%ses"%collection_name: self.reporting_print_all_fields(self.api.get_items(collection_name), report_name, report_type, report_noheaders) else: for collection_name in ["distro","profile","system","repo","network","image","mgmtclass","package","file"]: if report_what=="all" or report_what==collection_name or report_what=="%ss"%collection_name or report_what=="%ses"%collection_name: self.reporting_print_x_fields(self.api.get_items(collection_name), report_name, report_type, report_fields, report_noheaders) cobbler-2.4.1/cobbler/action_reposync.py000066400000000000000000000562251227367477500203450ustar00rootroot00000000000000""" Builds out and synchronizes yum repo mirrors. Initial support for rsync, perhaps reposync coming later. Copyright 2006-2007, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import time import sys HAS_YUM = True try: import yum except: HAS_YUM = False import utils from cexceptions import * import traceback import errno from utils import _ import clogger class RepoSync: """ Handles conversion of internal state to the tftpboot tree layout """ # ================================================================================== def __init__(self,config,tries=1,nofail=False,logger=None): """ Constructor """ self.verbose = True self.api = config.api self.config = config self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.rflags = self.settings.reposync_flags self.tries = tries self.nofail = nofail self.logger = logger if logger is None: self.logger = clogger.Logger() self.logger.info("hello, reposync") # =================================================================== def run(self, name=None, verbose=True): """ Syncs the current repo configuration file with the filesystem. 
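Each repo that is due for an update is retried up to the configured number of tries; environment variables from the repo's 'environment' setting are exported for the duration of its sync and restored afterwards, and a failed repo aborts the run unless the nofail flag was set.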
""" self.logger.info("run, reposync, run!") try: self.tries = int(self.tries) except: utils.die(self.logger,"retry value must be an integer") self.verbose = verbose report_failure = False for repo in self.repos: if name is not None and repo.name != name: # invoked to sync only a specific repo, this is not the one continue elif name is None and not repo.keep_updated: # invoked to run against all repos, but this one is off self.logger.info("%s is set to not be updated" % repo.name) continue repo_mirror = os.path.join(self.settings.webdir, "repo_mirror") repo_path = os.path.join(repo_mirror, repo.name) mirror = repo.mirror if not os.path.isdir(repo_path) and not repo.mirror.lower().startswith("rhn://"): os.makedirs(repo_path) # set the environment keys specified for this repo, # save the old ones if they modify an existing variable env = repo.environment old_env = {} for k in env.keys(): self.logger.debug("setting repo environment: %s=%s" % (k,env[k])) if env[k] is not None: if os.getenv(k): old_env[k] = os.getenv(k) else: os.environ[k] = env[k] # which may actually NOT reposync if the repo is set to not mirror locally # but that's a technicality for x in range(self.tries+1,1,-1): success = False try: self.sync(repo) success = True break except: utils.log_exc(self.logger) self.logger.warning("reposync failed, tries left: %s" % (x-2)) # cleanup/restore any environment variables that were # added or changed above for k in env.keys(): if env[k] is not None: if old_env.has_key(k): self.logger.debug("resetting repo environment: %s=%s" % (k,old_env[k])) os.environ[k] = old_env[k] else: self.logger.debug("removing repo environment: %s=%s" % (k,env[k])) del os.environ[k] if not success: report_failure = True if not self.nofail: utils.die(self.logger,"reposync failed, retry limit reached, aborting") else: self.logger.error("reposync failed, retry limit reached, skipping") self.update_permissions(repo_path) if report_failure: utils.die(self.logger,"overall reposync failed, at least one repo failed to synchronize") return True # ================================================================================== def sync(self, repo): """ Conditionally sync a repo, based on type. """ if repo.breed == "rhn": return self.rhn_sync(repo) elif repo.breed == "yum": return self.yum_sync(repo) elif repo.breed == "apt": return self.apt_sync(repo) elif repo.breed == "rsync": return self.rsync_sync(repo) else: utils.die(self.logger,"unable to sync repo (%s), unknown or unsupported repo type (%s)" % (repo.name, repo.breed)) # ==================================================================================== def createrepo_walker(self, repo, dirname, fnames): """ Used to run createrepo on a copied Yum mirror. 
""" if os.path.exists(dirname) or repo['breed'] == 'rsync': utils.remove_yum_olddata(dirname) # add any repo metadata we can use mdoptions = [] if os.path.isfile("%s/.origin/repomd.xml" % (dirname)): if not HAS_YUM: utils.die(self.logger,"yum is required to use this feature") rmd = yum.repoMDObject.RepoMD('', "%s/.origin/repomd.xml" % (dirname)) if rmd.repoData.has_key("group"): groupmdfile = rmd.getData("group").location[1] mdoptions.append("-g %s" % groupmdfile) if rmd.repoData.has_key("prestodelta"): # need createrepo >= 0.9.7 to add deltas if utils.check_dist() in ("redhat","fedora","centos","scientific linux","suse"): cmd = "/usr/bin/rpmquery --queryformat=%{VERSION} createrepo" createrepo_ver = utils.subprocess_get(self.logger, cmd) if createrepo_ver >= "0.9.7": mdoptions.append("--deltas") else: self.logger.error("this repo has presto metadata; you must upgrade createrepo to >= 0.9.7 first and then need to resync the repo through cobbler.") blended = utils.blender(self.api, False, repo) flags = blended.get("createrepo_flags","(ERROR: FLAGS)") try: # BOOKMARK cmd = "createrepo %s %s %s" % (" ".join(mdoptions), flags, dirname) utils.subprocess_call(self.logger, cmd) except: utils.log_exc(self.logger) self.logger.error("createrepo failed.") del fnames[:] # we're in the right place # ==================================================================================== def rsync_sync(self, repo): """ Handle copying of rsync:// and rsync-over-ssh repos. """ repo_mirror = repo.mirror if not repo.mirror_locally: utils.die(self.logger,"rsync:// urls must be mirrored locally, yum cannot access them directly") if repo.rpm_list != "" and repo.rpm_list != []: self.logger.warning("--rpm-list is not supported for rsync'd repositories") # FIXME: don't hardcode dest_path = os.path.join(self.settings.webdir+"/repo_mirror", repo.name) spacer = "" if not repo.mirror.startswith("rsync://") and not repo.mirror.startswith("/"): spacer = "-e ssh" if not repo.mirror.endswith("/"): repo.mirror = "%s/" % repo.mirror # FIXME: wrapper for subprocess that logs to logger cmd = "rsync -rltDv --copy-unsafe-links --delete-after %s --delete --exclude-from=/etc/cobbler/rsync.exclude %s %s" % (spacer, repo.mirror, dest_path) rc = utils.subprocess_call(self.logger, cmd) if rc !=0: utils.die(self.logger,"cobbler reposync failed") os.path.walk(dest_path, self.createrepo_walker, repo) self.create_local_file(dest_path, repo) # ==================================================================================== def rhn_sync(self, repo): """ Handle mirroring of RHN repos. """ repo_mirror = repo.mirror # FIXME? warn about not having yum-utils. We don't want to require it in the package because # RHEL4 and RHEL5U0 don't have it. if not os.path.exists("/usr/bin/reposync"): utils.die(self.logger,"no /usr/bin/reposync found, please install yum-utils") cmd = "" # command to run has_rpm_list = False # flag indicating not to pull the whole repo # detect cases that require special handling if repo.rpm_list != "" and repo.rpm_list != []: has_rpm_list = True # create yum config file for use by reposync # FIXME: don't hardcode dest_path = os.path.join(self.settings.webdir+"/repo_mirror", repo.name) temp_path = os.path.join(dest_path, ".origin") if not os.path.isdir(temp_path): # FIXME: there's a chance this might break the RHN D/L case os.makedirs(temp_path) # how we invoke yum-utils depends on whether this is RHN content or not. # this is the somewhat more-complex RHN case. 
# NOTE: this requires that you have entitlements for the server and you give the mirror as rhn://$channelname if not repo.mirror_locally: utils.die(self.logger,"rhn:// repos must be mirrored locally, mirror_locally cannot be disabled for them") if has_rpm_list: self.logger.warning("warning: --rpm-list is not supported for RHN content") rest = repo.mirror[6:] # everything after rhn:// cmd = "/usr/bin/reposync %s -r %s --download_path=%s" % (self.rflags, rest, self.settings.webdir+"/repo_mirror") if repo.name != rest: args = { "name" : repo.name, "rest" : rest } utils.die(self.logger,"ERROR: repository %(name)s needs to be renamed %(rest)s as the name of the cobbler repository must match the name of the RHN channel" % args) if repo.arch == "i386": # counter-intuitive, but we want the newish kernels too repo.arch = "i686" if repo.arch != "": cmd = "%s -a %s" % (cmd, repo.arch) # now regardless of whether we're doing yumdownloader or reposync # or whether the repo was http://, ftp://, or rhn://, execute all queued # commands here. Any failure at any point stops the operation. if repo.mirror_locally: rc = utils.subprocess_call(self.logger, cmd) # Don't die if reposync fails, it is logged # if rc !=0: # utils.die(self.logger,"cobbler reposync failed") # some more special case handling for RHN. # create the config file now, because the directory didn't exist earlier temp_file = self.create_local_file(temp_path, repo, output=False) # now run createrepo to rebuild the index if repo.mirror_locally: os.path.walk(dest_path, self.createrepo_walker, repo) # create the config file the hosts will use to access the repository. self.create_local_file(dest_path, repo) # ==================================================================================== def yum_sync(self, repo): """ Handle copying of http:// and ftp:// yum repos. """ repo_mirror = repo.mirror # warn about not having yum-utils. We don't want to require it in the package because # RHEL4 and RHEL5U0 don't have it.
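# As a rough illustration of what this method builds (the repo name 'el6-updates' is made up): with no rpm_list set, the command looks like /usr/bin/reposync <reposync_flags> --config=/var/www/cobbler/repo_mirror/el6-updates/.origin/el6-updates.repo --repoid=el6-updates --download_path=/var/www/cobbler/repo_mirror; when an rpm_list is present, /usr/bin/yumdownloader is used to fetch only the listed packages.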
if not os.path.exists("/usr/bin/reposync"): utils.die(self.logger,"no /usr/bin/reposync found, please install yum-utils") cmd = "" # command to run has_rpm_list = False # flag indicating not to pull the whole repo # detect cases that require special handling if repo.rpm_list != "" and repo.rpm_list != []: has_rpm_list = True # create yum config file for use by reposync dest_path = os.path.join(self.settings.webdir+"/repo_mirror", repo.name) temp_path = os.path.join(dest_path, ".origin") if not os.path.isdir(temp_path) and repo.mirror_locally: # FIXME: there's a chance this might break the RHN D/L case os.makedirs(temp_path) # create the config file that yum will use for the copying if repo.mirror_locally: temp_file = self.create_local_file(temp_path, repo, output=False) if not has_rpm_list and repo.mirror_locally: # if we have not requested only certain RPMs, use reposync cmd = "/usr/bin/reposync %s --config=%s --repoid=%s --download_path=%s" % (self.rflags, temp_file, repo.name, self.settings.webdir+"/repo_mirror") if repo.arch != "": if repo.arch == "x86": repo.arch = "i386" # FIX potential arch errors if repo.arch == "i386": # counter-intuitive, but we want the newish kernels too cmd = "%s -a i686" % (cmd) else: cmd = "%s -a %s" % (cmd, repo.arch) elif repo.mirror_locally: # create the output directory if it doesn't exist if not os.path.exists(dest_path): os.makedirs(dest_path) use_source = "" if repo.arch == "src": use_source = "--source" # older yumdownloader sometimes explodes on --resolvedeps # if this happens to you, upgrade yum & yum-utils extra_flags = self.settings.yumdownloader_flags cmd = "/usr/bin/yumdownloader %s %s --disablerepo=* --enablerepo=%s -c %s --destdir=%s %s" % (extra_flags, use_source, repo.name, temp_file, dest_path, " ".join(repo.rpm_list)) # now regardless of whether we're doing yumdownloader or reposync # or whether the repo was http://, ftp://, or rhn://, execute all queued # commands here. Any failure at any point stops the operation. if repo.mirror_locally: rc = utils.subprocess_call(self.logger, cmd) if rc !=0: utils.die(self.logger,"cobbler reposync failed") repodata_path = os.path.join(dest_path, "repodata") if not os.path.exists("/usr/bin/wget"): utils.die(self.logger,"no /usr/bin/wget found, please install wget") # grab repomd.xml and use it to download any metadata we can use cmd2 = "/usr/bin/wget -q %s/repodata/repomd.xml -O %s/repomd.xml" % (repo_mirror, temp_path) rc = utils.subprocess_call(self.logger,cmd2) if rc == 0: # create our repodata directory now, as any extra metadata we're # about to download probably lives there if not os.path.isdir(repodata_path): os.makedirs(repodata_path) rmd = yum.repoMDObject.RepoMD('', "%s/repomd.xml" % (temp_path)) for mdtype in rmd.repoData.keys(): # don't download metadata files that are created by default if mdtype not in ["primary", "primary_db", "filelists", "filelists_db", "other", "other_db"]: mdfile = rmd.getData(mdtype).location[1] cmd3 = "/usr/bin/wget -q %s/%s -O %s/%s" % (repo_mirror, mdfile, dest_path, mdfile) utils.subprocess_call(self.logger,cmd3) if rc !=0: utils.die(self.logger,"wget failed") # now run createrepo to rebuild the index if repo.mirror_locally: os.path.walk(dest_path, self.createrepo_walker, repo) # create the config file the hosts will use to access the repository. self.create_local_file(dest_path, repo) # ==================================================================================== def apt_sync(self, repo): """ Handle copying of http:// and ftp:// debian repos. 
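Mirroring is delegated to /usr/bin/debmirror; rpm_list filtering is not supported here and the repo must have an architecture set.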
""" repo_mirror = repo.mirror # warn about not having mirror program. mirror_program = "/usr/bin/debmirror" if not os.path.exists(mirror_program): utils.die(self.logger,"no %s found, please install it"%(mirror_program)) cmd = "" # command to run has_rpm_list = False # flag indicating not to pull the whole repo # detect cases that require special handling if repo.rpm_list != "" and repo.rpm_list != []: utils.die(self.logger,"has_rpm_list not yet supported on apt repos") if not repo.arch: utils.die(self.logger,"Architecture is required for apt repositories") # built destination path for the repo dest_path = os.path.join("/var/www/cobbler/repo_mirror", repo.name) if repo.mirror_locally: # NOTE: Dropping @@suite@@ replace as it is also dropped from # from manage_import_debian_ubuntu.py due that repo has no os_version # attribute. If it is added again it will break the Web UI! #mirror = repo.mirror.replace("@@suite@@",repo.os_version) mirror = repo.mirror idx = mirror.find("://") method = mirror[:idx] mirror = mirror[idx+3:] idx = mirror.find("/") host = mirror[:idx] mirror = mirror[idx:] dists = ",".join(repo.apt_dists) components = ",".join(repo.apt_components) mirror_data = "--method=%s --host=%s --root=%s --dist=%s --section=%s" % (method,host,mirror,dists,components) rflags = "--nocleanup" for x in repo.yumopts: if repo.yumopts[x]: rflags += " %s %s" % ( x , repo.yumopts[x] ) else: rflags += " %s" % x cmd = "%s %s %s %s" % (mirror_program, rflags, mirror_data, dest_path) if repo.arch == "src": cmd = "%s --source" % cmd else: arch = repo.arch if arch == "x86": arch = "i386" # FIX potential arch errors if arch == "x86_64": arch = "amd64" # FIX potential arch errors cmd = "%s --nosource -a %s" % (cmd, arch) # Set's an environment variable for subprocess, otherwise debmirror will fail # as it needs this variable to exist. # FIXME: might this break anything? So far it doesn't os.putenv("HOME", "/var/lib/cobbler") rc = utils.subprocess_call(self.logger, cmd) if rc !=0: utils.die(self.logger,"cobbler reposync failed") def create_local_file(self, dest_path, repo, output=True): """ Creates Yum config files for use by reposync Two uses: (A) output=True, Create local files that can be used with yum on provisioned clients to make use of this mirror. (B) output=False, Create a temporary file for yum to feed into yum for mirroring """ # the output case will generate repo configuration files which are usable # for the installed systems. They need to be made compatible with --server-override # which means they are actually templates, which need to be rendered by a cobbler-sync # on per profile/system basis. 
if output: fname = os.path.join(dest_path,"config.repo") else: fname = os.path.join(dest_path, "%s.repo" % repo.name) self.logger.debug("creating: %s" % fname) if not os.path.exists(dest_path): utils.mkdir(dest_path) config_file = open(fname, "w+") config_file.write("[%s]\n" % repo.name) config_file.write("name=%s\n" % repo.name) if 'exclude' in repo.yumopts.keys(): config_file.write("exclude=%s\n" % repo.yumopts['exclude']) self.logger.debug("excluding: %s" % repo.yumopts['exclude']) optenabled = False optgpgcheck = False if output: if repo.mirror_locally: line = "baseurl=http://${http_server}/cobbler/repo_mirror/%s\n" % (repo.name) else: mstr = repo.mirror if mstr.startswith("/"): mstr = "file://%s" % mstr line = "baseurl=%s\n" % mstr config_file.write(line) # user may have options specific to certain yum plugins # add them to the file for x in repo.yumopts: config_file.write("%s=%s\n" % (x, repo.yumopts[x])) if x == "enabled": optenabled = True if x == "gpgcheck": optgpgcheck = True else: mstr = repo.mirror if mstr.startswith("/"): mstr = "file://%s" % mstr line = "baseurl=%s\n" % mstr if self.settings.http_port not in (80, '80'): http_server = "%s:%s" % (self.settings.server, self.settings.http_port) else: http_server = self.settings.server line = line.replace("@@server@@",http_server) config_file.write(line) if not optenabled: config_file.write("enabled=1\n") config_file.write("priority=%s\n" % repo.priority) # FIXME: potentially might want a way to turn this on/off on a per-repo basis if not optgpgcheck: config_file.write("gpgcheck=0\n") config_file.close() return fname # ================================================================================== def update_permissions(self, repo_path): """ Verifies that permissions and contexts after an rsync are as expected. Sending proper rsync flags should prevent the need for this, though this is largely a safeguard. """ # all_path = os.path.join(repo_path, "*") owner = "root:apache" if os.path.exists("/etc/SuSE-release"): owner = "root:www" cmd1 = "chown -R "+owner+" %s" % repo_path utils.subprocess_call(self.logger, cmd1) cmd2 = "chmod -R 755 %s" % repo_path utils.subprocess_call(self.logger, cmd2) cobbler-2.4.1/cobbler/action_status.py000066400000000000000000000114621227367477500200200ustar00rootroot00000000000000""" Reports on kickstart activity by examining the logs in /var/log/cobbler. Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import glob import time import api as cobbler_api import clogger import utils #from utils import _ # ARRAY INDEXES MOST_RECENT_START = 0 MOST_RECENT_STOP = 1 MOST_RECENT_TARGET = 2 SEEN_START = 3 SEEN_STOP = 4 STATE = 5 class BootStatusReport: def __init__(self,config,mode,logger=None): """ Constructor """ self.config = config self.settings = config.settings() self.ip_data = {} self.mode = mode if logger is None: logger = clogger.Logger() self.logger = logger # ------------------------------------------------------- def scan_logfiles(self): #profile foosball ? 127.0.0.1 start 1208294043.58 #system neo ? 127.0.0.1 start 1208295122.86 files = glob.glob("/var/log/cobbler/install.log*") for fname in files: fd = open(fname) data = fd.read() for line in data.split("\n"): tokens = line.split() if len(tokens) == 0: continue (profile_or_system, name, ip, start_or_stop, ts) = tokens self.catalog(profile_or_system,name,ip,start_or_stop,ts) fd.close() # ------------------------------------------------------ def catalog(self,profile_or_system,name,ip,start_or_stop,ts): ip_data = self.ip_data if not ip_data.has_key(ip): ip_data[ip] = [ -1, -1, "?", 0, 0, "?" ] elem = ip_data[ip] ts = float(ts) mrstart = elem[MOST_RECENT_START] mrstop = elem[MOST_RECENT_STOP] mrtarg = elem[MOST_RECENT_TARGET] snstart = elem[SEEN_START] snstop = elem[SEEN_STOP] if start_or_stop == "start": if mrstart < ts: mrstart = ts mrtarg = "%s:%s" % (profile_or_system, name) elem[SEEN_START] = elem[SEEN_START] + 1 if start_or_stop == "stop": if mrstop < ts: mrstop = ts mrtarg = "%s:%s" % (profile_or_system, name) elem[SEEN_STOP] = elem[SEEN_STOP] + 1 elem[MOST_RECENT_START] = mrstart elem[MOST_RECENT_STOP] = mrstop elem[MOST_RECENT_TARGET] = mrtarg # ------------------------------------------------------- def process_results(self): # FIXME: this should update the times here tnow = int(time.time()) for ip in self.ip_data.keys(): elem = self.ip_data[ip] start = int(elem[MOST_RECENT_START]) stop = int(elem[MOST_RECENT_STOP]) if (stop > start): elem[STATE] = "finished" else: delta = tnow - start min = delta / 60 sec = delta % 60 if min > 100: elem[STATE] = "unknown/stalled" else: elem[STATE] = "installing (%sm %ss)" % (min,sec) return self.ip_data def get_printable_results(self): format = "%-15s|%-20s|%-17s|%-17s" ip_data = self.ip_data ips = ip_data.keys() ips.sort() line = ( "ip", "target", "start", "state", ) buf = format % line for ip in ips: elem = ip_data[ip] line = ( ip, elem[MOST_RECENT_TARGET], time.ctime(elem[MOST_RECENT_START]), elem[STATE] ) buf = buf + "\n" + format % line return buf # ------------------------------------------------------- def run(self): """ Calculate and print a kickstart-status report. """ self.scan_logfiles() results = self.process_results() if self.mode == "text": return self.get_printable_results() else: return results cobbler-2.4.1/cobbler/action_sync.py000066400000000000000000000263421227367477500174540ustar00rootroot00000000000000""" Builds out filesystem trees/data based on the object tree. This is the code behind 'cobbler sync'. 
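A minimal usage sketch (assumes a configured local cobbler install; in practice this is normally driven through the CLI or XMLRPC layer rather than called directly):

    import cobbler.api as capi
    capi.BootAPI().sync()

BootAPI.get_sync() constructs the BootSync object defined below and wires in the configured dhcp/dns/tftpd manager modules.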
Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import glob import shutil import time import sys import glob import traceback import errno import utils from cexceptions import * import templar import pxegen import item_distro import item_profile import item_repo import item_system from Cheetah.Template import Template import clogger from utils import _ import cobbler.module_loader as module_loader class BootSync: """ Handles conversion of internal state to the tftpboot tree layout """ def __init__(self,config,verbose=True,dhcp=None,dns=None,logger=None,tftpd=None): """ Constructor """ self.logger = logger if logger is None: self.logger = clogger.Logger() self.verbose = verbose self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config, self.logger) self.pxegen = pxegen.PXEGen(config, self.logger) self.dns = dns self.dhcp = dhcp self.tftpd = tftpd self.bootloc = utils.tftpboot_location() self.pxegen.verbose = verbose self.dns.verbose = verbose self.dhcp.verbose = verbose self.pxelinux_dir = os.path.join(self.bootloc, "pxelinux.cfg") self.grub_dir = os.path.join(self.bootloc, "grub") self.images_dir = os.path.join(self.bootloc, "images") self.yaboot_bin_dir = os.path.join(self.bootloc, "ppc") self.yaboot_cfg_dir = os.path.join(self.bootloc, "etc") self.s390_dir = os.path.join(self.bootloc, "s390x") self.rendered_dir = os.path.join(self.settings.webdir, "rendered") def run(self): """ Syncs the current configuration file with the config tree. Using the Check().run_ functions previously is recommended """ if not os.path.exists(self.bootloc): utils.die(self.logger,"cannot find directory: %s" % self.bootloc) self.logger.info("running pre-sync triggers") # run pre-triggers... utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/sync/pre/*") self.distros = self.config.distros() self.profiles = self.config.profiles() self.systems = self.config.systems() self.settings = self.config.settings() self.repos = self.config.repos() # execute the core of the sync operation self.logger.info("cleaning trees") self.clean_trees() # Have the tftpd module handle copying bootloaders, # distros, images, and all_system_files self.tftpd.sync(self.verbose) # Copy distros to the webdir # Adding in the exception handling to not blow up if files have # been moved (or the path references an NFS directory that's no longer # mounted) for d in self.distros: try: self.logger.info("copying files for distro: %s" % d.name) self.pxegen.copy_single_distro_files(d, self.settings.webdir,True) self.pxegen.write_templates(d,write_file=True) except CX, e: self.logger.error(e.value) # make the default pxe menu anyway... 
self.pxegen.make_pxe_menu() if self.settings.manage_dhcp: self.write_dhcp() if self.settings.manage_dns: self.logger.info("rendering DNS files") self.dns.regen_hosts() self.dns.write_dns_files() if self.settings.manage_tftpd: # xinetd.d/tftpd, basically self.logger.info("rendering TFTPD files") self.tftpd.write_tftpd_files() # copy in boot_files self.tftpd.write_boot_files() self.logger.info("cleaning link caches") self.clean_link_cache() if self.settings.manage_rsync: self.logger.info("rendering Rsync files") self.rsync_gen() # run post-triggers self.logger.info("running post-sync triggers") utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/sync/post/*", logger=self.logger) utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/change/*", logger=self.logger) return True def make_tftpboot(self): """ Make directories for tftpboot images """ if not os.path.exists(self.pxelinux_dir): utils.mkdir(self.pxelinux_dir,logger=self.logger) if not os.path.exists(self.grub_dir): utils.mkdir(self.grub_dir,logger=self.logger) grub_images_link = os.path.join(self.grub_dir, "images") if not os.path.exists(grub_images_link): os.symlink("../images", grub_images_link) if not os.path.exists(self.images_dir): utils.mkdir(self.images_dir,logger=self.logger) if not os.path.exists(self.s390_dir): utils.mkdir(self.s390_dir,logger=self.logger) if not os.path.exists(self.rendered_dir): utils.mkdir(self.rendered_dir,logger=self.logger) if not os.path.exists(self.yaboot_bin_dir): utils.mkdir(self.yaboot_bin_dir,logger=self.logger) if not os.path.exists(self.yaboot_cfg_dir): utils.mkdir(self.yaboot_cfg_dir,logger=self.logger) def clean_trees(self): """ Delete any previously built pxelinux.cfg tree and virt tree info and then create directories. Note: for SELinux reasons, some information goes in /tftpboot, some in /var/www/cobbler and some must be duplicated in both. This is because PXE needs tftp, and auto-kickstart and Virt operations need http. Only the kernel and initrd images are duplicated, which is unfortunate, though SELinux won't let me give them two contexts, so symlinks are not a solution. *Otherwise* duplication is minimal. 
""" # clean out parts of webdir and all of /tftpboot/images and /tftpboot/pxelinux.cfg for x in os.listdir(self.settings.webdir): path = os.path.join(self.settings.webdir,x) if os.path.isfile(path): if not x.endswith(".py"): utils.rmfile(path,logger=self.logger) if os.path.isdir(path): if not x in ["aux", "web", "webui", "localmirror","repo_mirror","ks_mirror","images","links","pub","repo_profile","repo_system","svc","rendered",".link_cache"] : # delete directories that shouldn't exist utils.rmtree(path,logger=self.logger) if x in ["kickstarts","kickstarts_sys","images","systems","distros","profiles","repo_profile","repo_system","rendered"]: # clean out directory contents utils.rmtree_contents(path,logger=self.logger) # self.make_tftpboot() utils.rmtree_contents(self.pxelinux_dir,logger=self.logger) utils.rmtree_contents(self.grub_dir,logger=self.logger) utils.rmtree_contents(self.images_dir,logger=self.logger) utils.rmtree_contents(self.s390_dir,logger=self.logger) utils.rmtree_contents(self.yaboot_bin_dir,logger=self.logger) utils.rmtree_contents(self.yaboot_cfg_dir,logger=self.logger) utils.rmtree_contents(self.rendered_dir,logger=self.logger) def write_dhcp(self): self.logger.info("rendering DHCP files") self.dhcp.write_dhcp_file() self.dhcp.regen_ethers() def sync_dhcp(self): restart_dhcp = str(self.settings.restart_dhcp).lower() which_dhcp_module = module_loader.get_module_from_file("dhcp","module",just_name=True).strip() if self.settings.manage_dhcp: self.write_dhcp() if which_dhcp_module == "manage_isc": service_name = utils.dhcp_service_name(self.api) if restart_dhcp != "0": rc = utils.subprocess_call(self.logger, "dhcpd -t -q", shell=True) if rc != 0: self.logger.error("dhcpd -t failed") return False service_restart = "service %s restart" % service_name rc = utils.subprocess_call(self.logger, service_restart, shell=True) if rc != 0: self.logger.error("%s failed" % service_name) return False elif which_dhcp_module == "manage_dnsmasq": if restart_dhcp != "0": rc = utils.subprocess_call(self.logger, "service dnsmasq restart") if rc != 0: self.logger.error("service dnsmasq restart failed") return False return True def clean_link_cache(self): for dirtree in [os.path.join(self.bootloc,'images'), self.settings.webdir]: cachedir = os.path.join(dirtree,'.link_cache') if os.path.isdir(cachedir): cmd = "find %s -maxdepth 1 -type f -links 1 -exec rm -f '{}' ';'"%cachedir utils.subprocess_call(self.logger,cmd) def rsync_gen(self): """ Generate rsync modules of all repositories and distributions """ template_file = "/etc/cobbler/rsync.template" try: template = open(template_file,"r") except: raise CX(_("error reading template %s") % template_file) template_data = "" template_data = template.read() template.close() distros = [] for link in glob.glob(os.path.join(self.settings.webdir,'links','*')): distro = {} distro["path"] = os.path.realpath(link) distro["name"] = os.path.basename(link) distros.append(distro) repos = [ repo.name for repo in self.api.repos() if os.path.isdir(os.path.join(self.settings.webdir,"repo_mirror", repo.name)) ] metadata = { "date" : time.asctime(time.gmtime()), "cobbler_server" : self.settings.server, "distros" : distros, "repos" : repos, "webdir" : self.settings.webdir } self.templar.render(template_data, metadata, "/etc/rsyncd.conf", None) cobbler-2.4.1/cobbler/action_validate.py000066400000000000000000000076211227367477500202700ustar00rootroot00000000000000""" Validates rendered kickstart files. 
Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import re from utils import _ import utils import kickgen import clogger class Validate: def __init__(self,config,logger=None): """ Constructor """ self.config = config self.settings = config.settings() self.kickgen = kickgen.KickGen(config) if logger is None: logger = clogger.Logger() self.logger = logger def run(self): """ Returns True if there are no errors, otherwise False. """ if not os.path.exists("/usr/bin/ksvalidator"): utils.die(self.logger,"ksvalidator not installed, please install pykickstart") failed = False for x in self.config.profiles(): (result, errors) = self.checkfile(x, True) if not result: failed = True if len(errors) > 0: self.log_errors(errors) for x in self.config.systems(): (result, errors) = self.checkfile(x, False) if not result: failed = True if len(errors) > 0: self.log_errors(errors) if failed: self.logger.warning("*** potential errors detected in kickstarts ***") else: self.logger.info("*** all kickstarts seem to be ok ***") return not(failed) def checkfile(self,obj,is_profile): last_errors = [] blended = utils.blender(self.config.api, False, obj) os_version = blended["os_version"] self.logger.info("----------------------------") self.logger.debug("osversion: %s" % os_version) ks = blended["kickstart"] if ks is None or ks == "": self.logger.info("%s has no kickstart, skipping" % obj.name) return [True, last_errors] breed = blended["breed"] if breed != "redhat": self.logger.info("%s has a breed of %s, skipping" % (obj.name, breed)) return [True, last_errors] server = blended["server"] if not ks.startswith("/"): url = self.kickstart else: if is_profile: url = "http://%s/cblr/svc/op/ks/profile/%s" % (server,obj.name) self.kickgen.generate_kickstart_for_profile(obj.name) else: url = "http://%s/cblr/svc/op/ks/system/%s" % (server,obj.name) self.kickgen.generate_kickstart_for_system(obj.name) last_errors = self.kickgen.get_last_errors() self.logger.info("checking url: %s" % url) rc = utils.subprocess_call(self.logger,"/usr/bin/ksvalidator -v \"%s\" \"%s\"" % (os_version, url), shell=True) if rc != 0: return [False, last_errors] return [True, last_errors] def log_errors(self, errors): self.logger.warning("Potential templating errors:") for error in errors: (line,col) = error["lineCol"] line -= 1 # we add some lines to the template data, so numbering is off self.logger.warning("Unknown variable found at line %d, column %d: '%s'" % (line,col,error["rawCode"])) cobbler-2.4.1/cobbler/api.py000066400000000000000000001267351227367477500157230ustar00rootroot00000000000000""" python API module for Cobbler see source for cobbler.py, or pydoc, for example usage. CLI apps and daemons should import api.py, and no other cobbler code. 
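A minimal read-only sketch of API usage (assumes a configured local cobbler server and adequate permissions):

    import cobbler.api as capi
    handle = capi.BootAPI()
    for distro in handle.distros():
        print distro.name

BootAPI keeps shared state (effectively a singleton), so creating additional handles is cheap; write operations (add_*, remove_*, sync, ...) go through the same handle.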
Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import sys import yaml import config import utils import action_sync import action_check import action_reposync import action_status import action_validate import action_buildiso import action_replicate import action_acl import action_report import action_power import action_log import action_hardlink import action_dlcontent from cexceptions import * import module_loader import kickgen import yumgen import pxegen from utils import _ import logging import time import random import simplejson import os import xmlrpclib import traceback import exceptions import clogger import tempfile import urllib2 import item_distro import item_profile import item_system import item_repo import item_image import item_mgmtclass import item_package import item_file ERROR = 100 INFO = 10 DEBUG = 5 # FIXME: add --quiet depending on if not --verbose? RSYNC_CMD = "rsync -a %s '%s' %s --progress" # notes on locking: # BootAPI is a singleton object # the XMLRPC variants allow 1 simultaneous request # therefore we flock on /etc/cobbler/settings for now # on a request by request basis. class BootAPI: __shared_state = {} __has_loaded = False # =========================================================== def __init__(self, is_cobblerd=False): """ Constructor """ # FIXME: this should be switchable through some simple system self.__dict__ = BootAPI.__shared_state self.perms_ok = False if not BootAPI.__has_loaded: if os.path.exists("/etc/cobbler/use.couch"): self.use_couch = True else: self.use_couch = False # NOTE: we do not log all API actions, because # a simple CLI invocation may call adds and such # to load the config, which would just fill up # the logs, so we'll do that logging at CLI # level (and remote.py web service level) instead. random.seed() self.is_cobblerd = is_cobblerd try: self.logger = clogger.Logger("/var/log/cobbler/cobbler.log") except CX: # return to CLI/other but perms are not valid # perms_ok is False return # FIXME: conslidate into 1 server instance self.selinux_enabled = utils.is_selinux_enabled() self.dist = utils.check_dist() self.os_version = utils.os_release() BootAPI.__has_loaded = True # load the modules first, or nothing else works... 
module_loader.load_modules() self._config = config.Config(self) self.deserialize() # import signatures if not utils.load_signatures(self.settings().signature_path): return else: self.log("%d breeds and %d OS versions read from the signature file" % ( \ len(utils.get_valid_breeds()), \ len(utils.get_valid_os_versions()))) self.authn = self.get_module_from_file( "authentication", "module", "authn_configfile" ) self.authz = self.get_module_from_file( "authorization", "module", "authz_allowall" ) # FIXME: pass more loggers around, and also see that those # using things via tasks construct their own kickgen/yumgen/ # pxegen versus reusing this one, which has the wrong logger # (most likely) for background tasks. self.kickgen = kickgen.KickGen(self._config) self.yumgen = yumgen.YumGen(self._config) self.pxegen = pxegen.PXEGen(self._config, logger=self.logger) self.logger.debug("API handle initialized") self.perms_ok = True # ========================================================== def is_selinux_enabled(self): """ Returns whether selinux is enabled on the cobbler server. We check this just once at cobbler API init time, because a restart is required to change this; this does /not/ check enforce/permissive, nor does it need to. """ return self.selinux_enabled def is_selinux_supported(self): """ Returns whether or not the OS is sufficient enough to run with SELinux enabled (currently EL 5 or later). """ self.dist if self.dist == "redhat" and self.os_version < 5: # doesn't support public_content_t return False return True # ========================================================== def last_modified_time(self): """ Returns the time of the last modification to cobbler, made by any API instance, regardless of the serializer type. """ if not os.path.exists("/var/lib/cobbler/.mtime"): fd = os.open("/var/lib/cobbler/.mtime", os.O_CREAT|os.O_RDWR, 0200) os.write(fd, "0") os.close(fd) return 0 fd = open("/var/lib/cobbler/.mtime") data = fd.read().strip() return float(data) # ========================================================== def log(self,msg,args=None,debug=False): if debug: logger = self.logger.debug else: logger = self.logger.info if args is None: logger("%s" % msg) else: logger("%s; %s" % (msg, str(args))) # ========================================================== def version(self, extended=False): """ What version is cobbler? If extended == False, returns a float for backwards compatibility If extended == True, returns a dict: gitstamp -- the last git commit hash gitdate -- the last git commit date on the builder machine builddate -- the time of the build version -- something like "1.3.2" version_tuple -- something like [ 1, 3, 2 ] """ fd = open("/etc/cobbler/version") ydata = fd.read() fd.close() data = yaml.safe_load(ydata) if not extended: # for backwards compatibility and use with koan's comparisons elems = data["version_tuple"] return int(elems[0]) + 0.1*int(elems[1]) + 0.001*int(elems[2]) else: return data # ========================================================== def clear(self): """ Forget about current list of profiles, distros, and systems # FIXME: is this used anymore? 
""" return self._config.clear() def __cmp(self,a,b): return cmp(a.name,b.name) # ========================================================== def get_item(self, what, name): self.log("get_item",[what,name],debug=True) item = self._config.get_items(what).get(name) self.log("done with get_item",[what,name],debug=True) return item #self._config.get_items(what).get(name) # ============================================================= def get_items(self, what): self.log("get_items",[what],debug=True) items = self._config.get_items(what) self.log("done with get_items",[what],debug=True) return items #self._config.get_items(what) def distros(self): """ Return the current list of distributions """ return self.get_items("distro") def profiles(self): """ Return the current list of profiles """ return self.get_items("profile") def systems(self): """ Return the current list of systems """ return self.get_items("system") def repos(self): """ Return the current list of repos """ return self.get_items("repo") def images(self): """ Return the current list of images """ return self.get_items("image") def settings(self): """ Return the application configuration """ return self._config.settings() def mgmtclasses(self): """ Return the current list of mgmtclasses """ return self.get_items("mgmtclass") def packages(self): """ Return the current list of packages """ return self.get_items("package") def files(self): """ Return the current list of files """ return self.get_items("file") # ======================================================================= def update(self): """ This can be called is no longer used by cobbler. And is here to just avoid breaking older scripts. """ return True # ======================================================================== def copy_item(self, what, ref, newname, logger=None): self.log("copy_item(%s)"%what,[ref.name, newname]) return self.get_items(what).copy(ref,newname,logger=logger) def copy_distro(self, ref, newname): return self.copy_item("distro", ref, newname, logger=None) def copy_profile(self, ref, newname): return self.copy_item("profile", ref, newname, logger=None) def copy_system(self, ref, newname): return self.copy_item("system", ref, newname, logger=None) def copy_repo(self, ref, newname): return self.copy_item("repo", ref, newname, logger=None) def copy_image(self, ref, newname): return self.copy_item("image", ref, newname, logger=None) def copy_mgmtclass(self, ref, newname): return self.copy_item("mgmtclass", ref, newname, logger=None) def copy_package(self, ref, newname): return self.copy_item("package", ref, newname, logger=None) def copy_file(self, ref, newname): return self.copy_item("file", ref, newname, logger=None) # ========================================================================== def remove_item(self, what, ref, recursive=False, delete=True, with_triggers=True, logger=None): if isinstance(what, basestring): if isinstance(ref, basestring): ref = self.get_item(what, ref) if ref is None: return # nothing to remove self.log("remove_item(%s)" % what, [ref.name]) return self.get_items(what).remove(ref.name, recursive=recursive, with_delete=delete, with_triggers=with_triggers, logger=logger) def remove_distro(self, ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("distro", ref, recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) def remove_profile(self,ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("profile", ref, 
recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) def remove_system(self, ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("system", ref, recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) def remove_repo(self, ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("repo", ref, recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) def remove_image(self, ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("image", ref, recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) def remove_mgmtclass(self, ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("mgmtclass", ref, recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) def remove_package(self, ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("package", ref, recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) def remove_file(self, ref, recursive=False, delete=True, with_triggers=True, logger=None): return self.remove_item("file", ref, recursive=recursive, delete=delete, with_triggers=with_triggers, logger=logger) # ========================================================================== def rename_item(self, what, ref, newname, logger=None): self.log("rename_item(%s)"%what,[ref.name,newname]) return self.get_items(what).rename(ref,newname,logger=logger) def rename_distro(self, ref, newname, logger=None): return self.rename_item("distro", ref, newname, logger=logger) def rename_profile(self, ref, newname, logger=None): return self.rename_item("profile", ref, newname, logger=logger) def rename_system(self, ref, newname, logger=None): return self.rename_item("system", ref, newname, logger=logger) def rename_repo(self, ref, newname, logger=None): return self.rename_item("repo", ref, newname, logger=logger) def rename_image(self, ref, newname, logger=None): return self.rename_item("image", ref, newname, logger=logger) def rename_mgmtclass(self, ref, newname, logger=None): return self.rename_item("mgmtclass", ref, newname, logger=logger) def rename_package(self, ref, newname, logger=None): return self.rename_item("package", ref, newname, logger=logger) def rename_file(self, ref, newname, logger=None): return self.rename_item("file", ref, newname, logger=logger) # ========================================================================== # FIXME: add a new_item method def new_distro(self,is_subobject=False): self.log("new_distro",[is_subobject]) return self._config.new_distro(is_subobject=is_subobject) def new_profile(self,is_subobject=False): self.log("new_profile",[is_subobject]) return self._config.new_profile(is_subobject=is_subobject) def new_system(self,is_subobject=False): self.log("new_system",[is_subobject]) return self._config.new_system(is_subobject=is_subobject) def new_repo(self,is_subobject=False): self.log("new_repo",[is_subobject]) return self._config.new_repo(is_subobject=is_subobject) def new_image(self,is_subobject=False): self.log("new_image",[is_subobject]) return self._config.new_image(is_subobject=is_subobject) def new_mgmtclass(self,is_subobject=False): self.log("new_mgmtclass",[is_subobject]) return self._config.new_mgmtclass(is_subobject=is_subobject) def new_package(self,is_subobject=False): self.log("new_package",[is_subobject]) return 
self._config.new_package(is_subobject=is_subobject) def new_file(self,is_subobject=False): self.log("new_file",[is_subobject]) return self._config.new_file(is_subobject=is_subobject) # ========================================================================== def add_item(self, what, ref, check_for_duplicate_names=False, save=True,logger=None): self.log("add_item(%s)"%what,[ref.name]) return self.get_items(what).add(ref,check_for_duplicate_names=check_for_duplicate_names,save=save,logger=logger) def add_distro(self, ref, check_for_duplicate_names=False, save=True, logger=None): return self.add_item("distro", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) def add_profile(self, ref, check_for_duplicate_names=False,save=True, logger=None): return self.add_item("profile", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) def add_system(self, ref, check_for_duplicate_names=False, check_for_duplicate_netinfo=False, save=True, logger=None): return self.add_item("system", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) def add_repo(self, ref, check_for_duplicate_names=False,save=True,logger=None): return self.add_item("repo", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) def add_image(self, ref, check_for_duplicate_names=False,save=True, logger=None): return self.add_item("image", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) def add_mgmtclass(self, ref, check_for_duplicate_names=False,save=True, logger=None): return self.add_item("mgmtclass", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) def add_package(self, ref, check_for_duplicate_names=False,save=True, logger=None): return self.add_item("package", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) def add_file(self, ref, check_for_duplicate_names=False,save=True, logger=None): return self.add_item("file", ref, check_for_duplicate_names=check_for_duplicate_names, save=save,logger=logger) # ========================================================================== # FIXME: find_items should take all the arguments the other find # methods do. 
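    # Illustrative only (object names are hypothetical): searches can go
    # through the generic entry point or the typed wrappers, e.g.
    #   api.find_items("profile", {"name": "webserver"})  -> list of matches
    #   api.find_profile(name="webserver")                -> single object or None
    # Passing no/empty criteria to find_items returns the whole collection.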
def find_items(self, what, criteria=None): self.log("find_items",[what]) # defaults if criteria is None: criteria={} items=self._config.get_items(what) # empty criteria returns everything if criteria == {}: res=items else: res=items.find(return_list=True, no_errors=False, **criteria) return res def find_distro(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.distros().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) def find_profile(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.profiles().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) def find_system(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.systems().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) def find_repo(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.repos().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) def find_image(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.images().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) def find_mgmtclass(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.mgmtclasses().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) def find_package(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.packages().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) def find_file(self, name=None, return_list=False, no_errors=False, **kargs): return self._config.files().find(name=name, return_list=return_list, no_errors=no_errors, **kargs) # ========================================================================== def __since(self,mtime,collector,collapse=False): """ Called by get_*_since functions. """ results1 = collector() results2 = [] for x in results1: if x.mtime == 0 or x.mtime >= mtime: if not collapse: results2.append(x) else: results2.append(x.to_datastruct()) return results2 def get_distros_since(self,mtime,collapse=False): """ Returns distros modified since a certain time (in seconds since Epoch) collapse=True specifies returning a hash instead of objects. 
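        Example sketch: get_distros_since(time.time() - 3600) returns only the distros whose mtime falls within the last hour; objects with an mtime of 0 are always included.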
""" return self.__since(mtime,self.distros,collapse=collapse) def get_profiles_since(self,mtime,collapse=False): return self.__since(mtime,self.profiles,collapse=collapse) def get_systems_since(self,mtime,collapse=False): return self.__since(mtime,self.systems,collapse=collapse) def get_repos_since(self,mtime,collapse=False): return self.__since(mtime,self.repos,collapse=collapse) def get_images_since(self,mtime,collapse=False): return self.__since(mtime,self.images,collapse=collapse) def get_mgmtclasses_since(self,mtime,collapse=False): return self.__since(mtime,self.mgmtclasses,collapse=collapse) def get_packages_since(self,mtime,collapse=False): return self.__since(mtime,self.packages,collapse=collapse) def get_files_since(self,mtime,collapse=False): return self.__since(mtime,self.files,collapse=collapse) # ========================================================================== def get_signatures(self): return utils.SIGNATURE_CACHE def signature_update(self, logger): try: tmpfile = tempfile.NamedTemporaryFile() response = urllib2.urlopen(self.settings().signature_url) sigjson = response.read() tmpfile.write(sigjson) tmpfile.flush() logger.debug("Successfully got file from %s" % self.settings().signature_url) # test the import without caching it if not utils.load_signatures(tmpfile.name,cache=False): logger.error("Downloaded signatures failed test load (tempfile = %s)" % tmpfile.name) return False # rewrite the real signature file and import it for real f = open(self.settings().signature_path,"w") f.write(sigjson) f.close() return utils.load_signatures(self.settings().signature_path) except: utils.log_exc(logger) return False # ========================================================================== def dump_vars(self, obj, format=False): return obj.dump_vars(format) # ========================================================================== def auto_add_repos(self): """ Import any repos this server knows about and mirror them. Credit: Seth Vidal. 
""" self.log("auto_add_repos") try: import yum except: raise CX(_("yum is not installed")) version = yum.__version__ (a,b,c) = version.split(".") version = a* 1000 + b*100 + c if version < 324: raise CX(_("need yum > 3.2.4 to proceed")) base = yum.YumBase() base.doRepoSetup() repos = base.repos.listEnabled() if len(repos) == 0: raise CX(_("no repos enabled/available -- giving up.")) for repo in repos: url = repo.urls[0] cobbler_repo = self.new_repo() auto_name = repo.name.replace(" ","") # FIXME: probably doesn't work for yum-rhn-plugin ATM cobbler_repo.set_mirror(url) cobbler_repo.set_name(auto_name) print "auto adding: %s (%s)" % (auto_name, url) self._config.repos().add(cobbler_repo,save=True) # run cobbler reposync to apply changes return True # ========================================================================== def get_repo_config_for_profile(self,obj): return self.yumgen.get_yum_config(obj,True) def get_repo_config_for_system(self,obj): return self.yumgen.get_yum_config(obj,False) # ========================================================================== def get_template_file_for_profile(self,obj,path): template_results = self.pxegen.write_templates(obj,False,path) if template_results.has_key(path): return template_results[path] else: return "# template path not found for specified profile" def get_template_file_for_system(self,obj,path): template_results = self.pxegen.write_templates(obj,False,path) if template_results.has_key(path): return template_results[path] else: return "# template path not found for specified system" # ========================================================================== def generate_kickstart(self,profile,system): self.log("generate_kickstart") if system: return self.kickgen.generate_kickstart_for_system(system) else: return self.kickgen.generate_kickstart_for_profile(profile) # ========================================================================== def generate_gpxe(self,profile,system): self.log("generate_gpxe") if system: return self.pxegen.generate_gpxe("system",system) else: return self.pxegen.generate_gpxe("profile",profile) # ========================================================================== def generate_bootcfg(self,profile,system): self.log("generate_bootcfg") if system: return self.pxegen.generate_bootcfg("system",system) else: return self.pxegen.generate_bootcfg("profile",profile) # ========================================================================== def generate_script(self,profile,system,name): self.log("generate_script") if system: return self.pxegen.generate_script("system",system,name) else: return self.pxegen.generate_script("profile",profile,name) # ========================================================================== def check(self, logger=None): """ See if all preqs for network booting are valid. This returns a list of strings containing instructions on things to correct. An empty list means there is nothing to correct, but that still doesn't mean there are configuration errors. This is mainly useful for human admins, who may, for instance, forget to properly set up their TFTP servers for PXE, etc. """ self.log("check") check = action_check.BootCheck(self._config, logger=logger) return check.run() # ========================================================================== def dlcontent(self,force=False,logger=None): """ Downloads bootloader content that may not be avialable in packages for the given arch, ex: if installing on PPC, get syslinux. If installing on x86_64, get elilo, etc. 
""" # FIXME: teach code that copies it to grab from the right place self.log("dlcontent") grabber = action_dlcontent.ContentDownloader(self._config, logger=logger) return grabber.run(force) # ========================================================================== def validateks(self, logger=None): """ Use ksvalidator (from pykickstart, if available) to determine whether the cobbler kickstarts are going to be (likely) well accepted by Anaconda. Presence of an error does not indicate the kickstart is bad, only that the possibility exists. ksvalidator is not available on all platforms and can not detect "future" kickstart format correctness. """ self.log("validateks") validator = action_validate.Validate(self._config, logger=logger) return validator.run() # ========================================================================== def sync(self,verbose=False, logger=None): """ Take the values currently written to the configuration files in /etc, and /var, and build out the information tree found in /tftpboot. Any operations done in the API that have not been saved with serialize() will NOT be synchronized with this command. """ self.log("sync") sync = self.get_sync(verbose=verbose, logger=logger) return sync.run() # ========================================================================== def sync_dhcp(self, verbose=False, logger=None): """ Only build out the DHCP configuration """ self.log("sync_dhcp") sync = self.get_sync(verbose=verbose, logger=logger) return sync.sync_dhcp() # ========================================================================== def get_sync(self,verbose=False,logger=None): self.dhcp = self.get_module_from_file( "dhcp", "module", "manage_isc" ).get_manager(self._config,logger) self.dns = self.get_module_from_file( "dns", "module", "manage_bind" ).get_manager(self._config,logger) self.tftpd = self.get_module_from_file( "tftpd", "module", "in_tftpd", ).get_manager(self._config,logger) return action_sync.BootSync(self._config,dhcp=self.dhcp,dns=self.dns,tftpd=self.tftpd,verbose=verbose,logger=logger) # ========================================================================== def reposync(self, name=None, tries=1, nofail=False, logger=None): """ Take the contents of /var/lib/cobbler/repos and update them -- or create the initial copy if no contents exist yet. """ self.log("reposync",[name]) reposync = action_reposync.RepoSync(self._config, tries=tries, nofail=nofail, logger=logger) return reposync.run(name) # ========================================================================== def status(self,mode,logger=None): statusifier = action_status.BootStatusReport(self._config,mode,logger=logger) return statusifier.run() # ========================================================================== def import_tree(self,mirror_url,mirror_name,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None,logger=None): """ Automatically import a directory tree full of distribution files. mirror_url can be a string that represents a path, a user@host syntax for SSH, or an rsync:// address. If mirror_url is a filesystem path and mirroring is not desired, set network_root to something like "nfs://path/to/mirror_url/root" """ self.log("import_tree",[mirror_url, mirror_name, network_root, kickstart_file, rsync_flags]) # both --path and --name are required arguments if mirror_url is None: self.log("import failed. no --path specified") return False if mirror_name is None: self.log("import failed. 
no --name specified") return False path = os.path.normpath("%s/ks_mirror/%s" % (self.settings().webdir, mirror_name)) if arch is not None: arch = arch.lower() if arch == "x86": # be consistent arch = "i386" if path.split("-")[-1] != arch: path += ("-%s" % arch) # we need to mirror (copy) the files self.log("importing from a network location, running rsync to fetch the files first") utils.mkdir(path) # prevent rsync from creating the directory name twice # if we are copying via rsync if not mirror_url.endswith("/"): mirror_url = "%s/" % mirror_url if mirror_url.startswith("http://") or mirror_url.startswith("ftp://") or mirror_url.startswith("nfs://"): # http mirrors are kind of primative. rsync is better. # that's why this isn't documented in the manpage and we don't support them. # TODO: how about adding recursive FTP as an option? self.log("unsupported protocol") return False else: # good, we're going to use rsync.. # we don't use SSH for public mirrors and local files. # presence of user@host syntax means use SSH spacer = "" if not mirror_url.startswith("rsync://") and not mirror_url.startswith("/"): spacer = ' -e "ssh" ' rsync_cmd = RSYNC_CMD if rsync_flags: rsync_cmd = rsync_cmd + " " + rsync_flags # if --available-as was specified, limit the files we # pull down via rsync to just those that are critical # to detecting what the distro is if network_root is not None: rsync_cmd = rsync_cmd + " --include-from=/etc/cobbler/import_rsync_whitelist" # kick off the rsync now utils.run_this(rsync_cmd, (spacer, mirror_url, path), self.logger) if network_root is not None: # in addition to mirroring, we're going to assume the path is available # over http, ftp, and nfs, perhaps on an external filer. scanning still requires # --mirror is a filesystem path, but --available-as marks the network path. # this allows users to point the path at a directory containing just the network # boot files while the rest of the distro files are available somewhere else. # find the filesystem part of the path, after the server bits, as each distro # URL needs to be calculated relative to this. 
if not network_root.endswith("/"): network_root = network_root + "/" valid_roots = [ "nfs://", "ftp://", "http://" ] for valid_root in valid_roots: if network_root.startswith(valid_root): break else: self.log("Network root given to --available-as must be nfs://, ftp://, or http://") return False if network_root.startswith("nfs://"): try: (a,b,rest) = network_root.split(":",3) except: self.log("Network root given to --available-as is missing a colon, please see the manpage example.") return False #importer_modules = self.get_modules_in_category("manage/import") #for importer_module in importer_modules: # manager = importer_module.get_import_manager(self._config,logger) # try: # (found,pkgdir) = manager.check_for_signature(path,breed) # if found: # self.log("running import manager: %s" % manager.what()) # return manager.run(pkgdir,mirror_name,path,network_root,kickstart_file,rsync_flags,arch,breed,os_version) # except: # self.log("an exception occured while running the import manager") # self.log("error was: %s" % sys.exc_info()[1]) # continue #self.log("No import managers found a valid signature at the location specified") ## FIXME: since we failed, we should probably remove the ## path tree we created above so we don't leave cruft around #return False import_module = self.get_module_by_name("manage_import_signatures").get_import_manager(self._config,logger) return import_module.run(path,mirror_name,network_root,kickstart_file,arch,breed,os_version) # ========================================================================== def acl_config(self,adduser=None,addgroup=None,removeuser=None,removegroup=None, logger=None): """ Configures users/groups to run the cobbler CLI as non-root. Pass in only one option at a time. Powers "cobbler aclconfig" """ acl = action_acl.AclConfig(self._config, logger) return acl.run( adduser=adduser, addgroup=addgroup, removeuser=removeuser, removegroup=removegroup ) # ========================================================================== def serialize(self): """ Save the config file(s) to disk. Cobbler internal use only. """ return self._config.serialize() def deserialize(self): """ Load the current configuration from config file(s) Cobbler internal use only. """ return self._config.deserialize() def deserialize_raw(self,collection_name): """ Get the collection back just as raw data. Cobbler internal use only. """ return self._config.deserialize_raw(collection_name) def deserialize_item_raw(self,collection_name,obj_name): """ Get an object back as raw data. Can be very fast for shelve or catalog serializers Cobbler internal use only. """ return self._config.deserialize_item_raw(collection_name,obj_name) # ========================================================================== def get_module_by_name(self,module_name): """ Returns a loaded cobbler module named 'name', if one exists, else None. Cobbler internal use only. """ return module_loader.get_module_by_name(module_name) def get_module_from_file(self,section,name,fallback=None): """ Looks in /etc/cobbler/modules.conf for a section called 'section' and a key called 'name', and then returns the module that corresponds to the value of that key. Cobbler internal use only. 
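        Example (mirrors the call in get_sync() above): get_module_from_file("dhcp", "module", "manage_isc") returns the DHCP manager module chosen in modules.conf, falling back to manage_isc when nothing is configured.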
""" return module_loader.get_module_from_file(section,name,fallback) def get_module_name_from_file(self,section,name,fallback=None): """ Looks up a module the same as get_module_from_file but returns the module name rather than the module itself """ return module_loader.get_module_from_file(section,name,fallback,just_name=True) def get_modules_in_category(self,category): """ Returns all modules in a given category, for instance "serializer", or "cli". Cobbler internal use only. """ return module_loader.get_modules_in_category(category) # ========================================================================== def authenticate(self,user,password): """ (Remote) access control. Cobbler internal use only. """ rc = self.authn.authenticate(self,user,password) self.log("authenticate",[user,rc]) return rc def authorize(self,user,resource,arg1=None,arg2=None): """ (Remote) access control. Cobbler internal use only. """ rc = self.authz.authorize(self,user,resource,arg1,arg2) self.log("authorize",[user,resource,arg1,arg2,rc],debug=True) return rc # ========================================================================== def build_iso(self,iso=None,profiles=None,systems=None,buildisodir=None,distro=None,standalone=None,source=None, exclude_dns=None, mkisofs_opts=None, logger=None): builder = action_buildiso.BuildIso(self._config, logger=logger) return builder.run( iso=iso, profiles=profiles, systems=systems, buildisodir=buildisodir, distro=distro, standalone=standalone, source=source, exclude_dns=exclude_dns, mkisofs_opts=mkisofs_opts ) # ========================================================================== def hardlink(self, logger=None): linker = action_hardlink.HardLinker(self._config, logger=logger) return linker.run() # ========================================================================== def replicate(self, cobbler_master = None, distro_patterns="", profile_patterns="", system_patterns="", repo_patterns="", image_patterns="", mgmtclass_patterns=None, package_patterns=None, file_patterns=None, prune=False, omit_data=False, sync_all=False, use_ssl=False, logger=None): """ Pull down data/configs from a remote cobbler server that is a master to this server. """ replicator = action_replicate.Replicate(self._config, logger=logger) return replicator.run( cobbler_master = cobbler_master, distro_patterns = distro_patterns, profile_patterns = profile_patterns, system_patterns = system_patterns, repo_patterns = repo_patterns, image_patterns = image_patterns, mgmtclass_patterns = mgmtclass_patterns, package_patterns = package_patterns, file_patterns = file_patterns, prune = prune, omit_data = omit_data, sync_all = sync_all, use_ssl = use_ssl ) # ========================================================================== def report(self, report_what = None, report_name = None, report_type = None, report_fields = None, report_noheaders = None): """ Report functionality for cobbler """ reporter = action_report.Report(self._config) return reporter.run(report_what = report_what, report_name = report_name,\ report_type = report_type, report_fields = report_fields,\ report_noheaders = report_noheaders) # ========================================================================== def get_kickstart_templates(self): return utils.get_kickstar_templates(self) # ========================================================================== def power_on(self, system, user=None, password=None, logger=None): """ Powers up a system that has power management configured. 
""" return action_power.PowerTool(self._config,system,self,user,password,logger=logger).power("on") def power_off(self, system, user=None, password=None, logger=None): """ Powers down a system that has power management configured. """ return action_power.PowerTool(self._config,system,self,user,password,logger=logger).power("off") def reboot(self,system, user=None, password=None, logger=None): """ Cycles power on a system that has power management configured. """ self.power_off(system, user, password, logger=logger) time.sleep(5) return self.power_on(system, user, password, logger=logger) def power_status(self, system, user=None, password=None, logger=None): """ Returns the power status for a system that has power management configured. @return: 0 the system is powered on, False if it's not or None on error """ return action_power.PowerTool(self._config, system, self, user, password, logger = logger).power("status") # ========================================================================== def clear_logs(self, system, logger=None): """ Clears console and anamon logs for system """ return action_log.LogTool(self._config,system,self, logger=logger).clear() def get_os_details(self): return (self.dist, self.os_version) cobbler-2.4.1/cobbler/cexceptions.py000066400000000000000000000023221227367477500174570ustar00rootroot00000000000000""" Custom exceptions for Cobbler Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import exceptions TEST_MODE = False class CobblerException(exceptions.Exception): def __init__(self, value, *args): self.value = value % args # this is a hack to work around some odd exception handling # in older pythons self.from_cobbler = 1 def __str__(self): return repr(self.value) class CX(CobblerException): pass class FileNotFoundException(CobblerException): pass cobbler-2.4.1/cobbler/cli.py000077500000000000000000000676361227367477500157300ustar00rootroot00000000000000""" Command line interface for cobbler. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import sys import xmlrpclib import traceback import optparse import exceptions import time import os import utils import module_loader import item_distro import item_profile import item_system import item_repo import item_image import item_mgmtclass import item_package import item_file import settings OBJECT_ACTIONS_MAP = { "distro" : "add copy edit find list remove rename report".split(" "), "profile" : "add copy dumpvars edit find getks list remove rename report".split(" "), "system" : "add copy dumpvars edit find getks list remove rename report poweron poweroff powerstatus reboot".split(" "), "image" : "add copy edit find list remove rename report".split(" "), "repo" : "add copy edit find list remove rename report".split(" "), "mgmtclass" : "add copy edit find list remove rename report".split(" "), "package" : "add copy edit find list remove rename report".split(" "), "file" : "add copy edit find list remove rename report".split(" "), "setting" : "edit report".split(" "), "signature" : "report update".split(" "), } OBJECT_TYPES = OBJECT_ACTIONS_MAP.keys() # would like to use from_iterable here, but have to support python 2.4 OBJECT_ACTIONS = [] for actions in OBJECT_ACTIONS_MAP.values(): OBJECT_ACTIONS += actions DIRECT_ACTIONS = "aclsetup buildiso import list replicate report reposync sync validateks version signature get-loaders hardlink".split() #################################################### def report_items(remote, otype): if otype == "setting": items = remote.get_settings() keys = items.keys() keys.sort() for key in keys: item = {'name':key, 'value':items[key]} report_item(remote,otype,item=item) elif otype == "signature": items = remote.get_signatures() total_breeds = 0 total_sigs = 0 if items.has_key("breeds"): print "Currently loaded signatures:" bkeys = items["breeds"].keys() bkeys.sort() total_breeds = len(bkeys) for breed in bkeys: print "%s:" % breed oskeys = items["breeds"][breed].keys() oskeys.sort() if len(oskeys) > 0: total_sigs += len(oskeys) for osversion in oskeys: print "\t%s" % osversion else: print "\t(none)" print "\n%d breeds with %d total signatures loaded" % (total_breeds,total_sigs) else: print "No breeds found in the signature, a signature update is recommended" sys.exit(1) else: items = remote.get_items(otype) for x in items: report_item(remote,otype,item=x) def report_item(remote,otype,item=None,name=None): if item is None: if otype == "setting": cur_settings = remote.get_settings() try: item = {'name':name, 'value':cur_settings[name]} except: print "Setting not found: %s" % name sys.exit(1) elif otype == "signature": items = remote.get_signatures() total_sigs = 0 if items.has_key("breeds"): print "Currently loaded signatures:" if items["breeds"].has_key(name): print "%s:" % name oskeys = items["breeds"][name].keys() oskeys.sort() if len(oskeys) > 0: total_sigs += len(oskeys) for osversion in oskeys: print "\t%s" % osversion else: print "\t(none)" print "\nBreed '%s' has %d total signatures" % (name,total_sigs) else: print "No breed named '%s' found" % name sys.exit(1) else: print "No breeds found in the signature, a signature update is recommended" sys.exit(1) return else: item = remote.get_item(otype, name) if item == "~": print "No %s found: %s" % (otype, name) sys.exit(1) if otype == "distro": data = utils.printable_from_fields(item, 
item_distro.FIELDS) elif otype == "profile": data = utils.printable_from_fields(item, item_profile.FIELDS) elif otype == "system": data = utils.printable_from_fields(item, item_system.FIELDS) elif otype == "repo": data = utils.printable_from_fields(item, item_repo.FIELDS) elif otype == "image": data = utils.printable_from_fields(item, item_image.FIELDS) elif otype == "mgmtclass": data = utils.printable_from_fields(item,item_mgmtclass.FIELDS) elif otype == "package": data = utils.printable_from_fields(item,item_package.FIELDS) elif otype == "file": data = utils.printable_from_fields(item,item_file.FIELDS) elif otype == "setting": data = "%-40s: %s" % (item['name'],item['value']) print data def list_items(remote,otype): items = remote.get_item_names(otype) items.sort() for x in items: print " %s" % x def n2s(data): """ Return spaces for None """ if data is None: return "" return data def opt(options, k): """ Returns an option from an Optparse values instance """ try: data = getattr(options, k) except: # FIXME: debug only traceback.print_exc() return "" return n2s(data) class BootCLI: def __init__(self): # Load server ip and ports from local config self.url_cobbler_api = utils.local_get_cobbler_api_url() self.url_cobbler_xmlrpc = utils.local_get_cobbler_xmlrpc_url() # FIXME: allow specifying other endpoints, and user+pass self.parser = optparse.OptionParser() self.remote = xmlrpclib.Server(self.url_cobbler_api) self.shared_secret = utils.get_shared_secret() def start_task(self, name, options): options = utils.strip_none(vars(options), omit_none=True) fn = getattr(self.remote, "background_%s" % name) return fn(options, self.token) def get_object_type(self, args): """ If this is a CLI command about an object type, e.g. "cobbler distro add", return the type, like "distro" """ if len(args) < 2: return None elif args[1] in OBJECT_TYPES: return args[1] return None def get_object_action(self, object_type, args): """ If this is a CLI command about an object type, e.g. "cobbler distro add", return the action, like "add" """ if object_type is None or len(args) < 3: return None if args[2] in OBJECT_ACTIONS_MAP[object_type]: return args[2] return None def get_direct_action(self, object_type, args): """ If this is a general command, e.g. "cobbler hardlink", return the action, like "hardlink" """ if object_type is not None: return None elif len(args) < 2: return None elif args[1] == "--help": return None elif args[1] == "--version": return "version" else: return args[1] def check_setup(self): """ Detect permissions and service accessibility problems and provide nicer error messages for them. """ s = xmlrpclib.Server(self.url_cobbler_xmlrpc) try: s.ping() except: print >> sys.stderr, "cobblerd does not appear to be running/accessible" sys.exit(411) s = xmlrpclib.Server(self.url_cobbler_api) try: s.ping() except: print >> sys.stderr, "httpd does not appear to be running and proxying cobbler, or SELinux is in the way. Original traceback:" traceback.print_exc() sys.exit(411) if not os.path.exists("/var/lib/cobbler/web.ss"): print >> sys.stderr, "Missing login credentials file. Has cobblerd failed to start?" sys.exit(411) if not os.access("/var/lib/cobbler/web.ss", os.R_OK): print >> sys.stderr, "User cannot run command line, need read access to /var/lib/cobbler/web.ss" sys.exit(411) def run(self, args): """ Process the command line and do what the user asks. 
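        Dispatch is positional: args[1] names either an object type or a direct
        action, and args[2] (when present) names the object action.  Illustrative
        argument vectors (hypothetical):

            cli.run(["cobbler", "distro", "list"])   # object command
            cli.run(["cobbler", "sync"])             # direct command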
""" self.token = self.remote.login("", self.shared_secret) object_type = self.get_object_type(args) object_action = self.get_object_action(object_type, args) direct_action = self.get_direct_action(object_type, args) try: if object_type is not None: if object_action is not None: self.object_command(object_type, object_action) else: self.print_object_help(object_type) elif direct_action is not None: self.direct_command(direct_action) else: self.print_help() except xmlrpclib.Fault, err: if err.faultString.find("cobbler.cexceptions.CX") != -1: print self.cleanup_fault_string(err.faultString) else: print "### ERROR ###" print "Unexpected remote error, check the server side logs for further info" print err.faultString sys.exit(1) def cleanup_fault_string(self,str): """ Make a remote exception nicely readable by humans so it's not evident that is a remote fault. Users should not have to understand tracebacks. """ if str.find(">:") != -1: (first, rest) = str.split(">:",1) if rest.startswith("\"") or rest.startswith("\'"): rest = rest[1:] if rest.endswith("\"") or rest.endswith("\'"): rest = rest[:-1] return rest else: return str def get_fields(self, object_type): """ For a given name of an object type, return the FIELDS data structure. """ # FIXME: this should be in utils, or is it already? if object_type == "distro": return item_distro.FIELDS elif object_type == "profile": return item_profile.FIELDS elif object_type == "system": return item_system.FIELDS elif object_type == "repo": return item_repo.FIELDS elif object_type == "image": return item_image.FIELDS elif object_type == "mgmtclass": return item_mgmtclass.FIELDS elif object_type == "package": return item_package.FIELDS elif object_type == "file": return item_file.FIELDS elif object_type == "setting": return settings.FIELDS def object_command(self, object_type, object_action): """ Process object-based commands such as "distro add" or "profile rename" """ task_id = -1 # if assigned, we must tail the logfile fields = self.get_fields(object_type) if object_action in [ "add", "edit", "copy", "rename", "find", "remove" ]: utils.add_options_from_fields(object_type, self.parser, fields, object_action) elif object_action in [ "list" ]: pass elif object_action != "update": self.parser.add_option("--name", dest="name", help="name of object") (options, args) = self.parser.parse_args() # the first three don't require a name if object_action == "report": if options.name is not None: report_item(self.remote,object_type,None,options.name) else: report_items(self.remote,object_type) elif object_action == "list": list_items(self.remote, object_type) elif object_action == "find": items = self.remote.find_items(object_type, utils.strip_none(vars(options), omit_none=True), "name", False) for item in items: print item elif object_action in OBJECT_ACTIONS: if opt(options, "name") == "" and object_action != "update": print "--name is required" sys.exit(1) if object_action in [ "add", "edit", "copy", "rename", "remove" ]: try: if object_type == "setting": settings = self.remote.get_settings() if options.value == None: raise RuntimeError("You must specify a --value when editing a setting") elif not settings.get('allow_dynamic_settings',False): raise RuntimeError("Dynamic settings changes are not enabled. 
Change the allow_dynamic_settings to 1 and restart cobblerd to enable dynamic settings changes") elif options.name == 'allow_dynamic_settings': raise RuntimeError("Cannot modify that setting live") elif self.remote.modify_setting(options.name,options.value,self.token): raise RuntimeError("Changing the setting failed") else: self.remote.xapi_object_edit(object_type, options.name, object_action, utils.strip_none(vars(options), omit_none=True), self.token) except xmlrpclib.Fault, (err): (etype, emsg) = err.faultString.split(":",1) print "exception on server: %s" % emsg sys.exit(1) except RuntimeError, (err): print err.args[0] sys.exit(1) elif object_action == "getks": if object_type == "profile": data = self.remote.generate_kickstart(options.name,"") elif object_type == "system": data = self.remote.generate_kickstart("",options.name) print data elif object_action == "dumpvars": if object_type == "profile": data = self.remote.get_blended_data(options.name,"") elif object_type == "system": data = self.remote.get_blended_data("",options.name) # FIXME: pretty-printing and sorting here keys = data.keys() keys.sort() for x in keys: print "%s : %s" % (x, data[x]) elif object_action in [ "poweron", "poweroff", "powerstatus", "reboot" ]: power={} power["power"] = object_action.replace("power","") power["systems"] = [options.name] task_id = self.remote.background_power_system(power, self.token) elif object_action == "update": task_id = self.remote.background_signature_update(utils.strip_none(vars(options),omit_none=True), self.token) else: raise exceptions.NotImplementedError() else: raise exceptions.NotImplementedError() # FIXME: add tail/polling code here if task_id != -1: self.print_task(task_id) self.follow_task(task_id) return True # BOOKMARK def direct_command(self, action_name): """ Process non-object based commands like "sync" and "hardlink" """ task_id = -1 # if assigned, we must tail the logfile if action_name == "buildiso": defaultiso = os.path.join(os.getcwd(), "generated.iso") self.parser.add_option("--iso", dest="iso", default=defaultiso, help="(OPTIONAL) output ISO to this path") self.parser.add_option("--profiles", dest="profiles", help="(OPTIONAL) use these profiles only") self.parser.add_option("--systems", dest="systems", help="(OPTIONAL) use these systems only") self.parser.add_option("--tempdir", dest="buildisodir", help="(OPTIONAL) working directory") self.parser.add_option("--distro", dest="distro", help="(OPTIONAL) used with --standalone to create a distro-based ISO including all associated profiles/systems") self.parser.add_option("--standalone", dest="standalone", action="store_true", help="(OPTIONAL) creates a standalone ISO with all required distro files on it") self.parser.add_option("--source", dest="source", help="(OPTIONAL) used with --standalone to specify a source for the distribution files") self.parser.add_option("--exclude-dns", dest="exclude_dns", action="store_true", help="(OPTIONAL) prevents addition of name server addresses to the kernel boot options") self.parser.add_option("--mkisofs-opts", dest="mkisofs_opts", help="(OPTIONAL) extra options for mkisofs") (options, args) = self.parser.parse_args() task_id = self.start_task("buildiso",options) elif action_name == "replicate": self.parser.add_option("--master", dest="master", help="Cobbler server to replicate from.") self.parser.add_option("--distros", dest="distro_patterns", help="patterns of distros to replicate") self.parser.add_option("--profiles", dest="profile_patterns", help="patterns of profiles to 
replicate") self.parser.add_option("--systems", dest="system_patterns", help="patterns of systems to replicate") self.parser.add_option("--repos", dest="repo_patterns", help="patterns of repos to replicate") self.parser.add_option("--image", dest="image_patterns", help="patterns of images to replicate") self.parser.add_option("--mgmtclasses", dest="mgmtclass_patterns", help="patterns of mgmtclasses to replicate") self.parser.add_option("--packages", dest="package_patterns", help="patterns of packages to replicate") self.parser.add_option("--files", dest="file_patterns", help="patterns of files to replicate") self.parser.add_option("--omit-data", dest="omit_data", action="store_true", help="do not rsync data") self.parser.add_option("--sync-all", dest="sync_all", action="store_true", help="sync all data") self.parser.add_option("--prune", dest="prune", action="store_true", help="remove objects (of all types) not found on the master") self.parser.add_option("--use-ssl", dest="use_ssl", action="store_true", help="use ssl to access the Cobbler master server api") (options, args) = self.parser.parse_args() task_id = self.start_task("replicate",options) elif action_name == "aclsetup": self.parser.add_option("--adduser", dest="adduser", help="give acls to this user") self.parser.add_option("--addgroup", dest="addgroup", help="give acls to this group") self.parser.add_option("--removeuser", dest="removeuser", help="remove acls from this user") self.parser.add_option("--removegroup", dest="removegroup", help="remove acls from this group") (options, args) = self.parser.parse_args() task_id = self.start_task("aclsetup",options) elif action_name == "version": version = self.remote.extended_version() print "Cobbler %s" % version["version"] print " source: %s, %s" % (version["gitstamp"], version["gitdate"]) print " build time: %s" % version["builddate"] elif action_name == "hardlink": (options, args) = self.parser.parse_args() task_id = self.start_task("hardlink",options) elif action_name == "reserialize": (options, args) = self.parser.parse_args() task_id = self.start_task("reserialize",options) elif action_name == "status": (options, args) = self.parser.parse_args() print self.remote.get_status("text",self.token) elif action_name == "validateks": (options, args) = self.parser.parse_args() task_id = self.start_task("validateks",options) elif action_name == "get-loaders": self.parser.add_option("--force", dest="force", action="store_true", help="overwrite any existing content in /var/lib/cobbler/loaders") (options, args) = self.parser.parse_args() task_id = self.start_task("dlcontent",options) elif action_name == "import": self.parser.add_option("--arch", dest="arch", help="OS architecture being imported") self.parser.add_option("--breed", dest="breed", help="the breed being imported") self.parser.add_option("--os-version", dest="os_version", help="the version being imported") self.parser.add_option("--path", dest="path", help="local path or rsync location") self.parser.add_option("--name", dest="name", help="name, ex 'RHEL-5'") self.parser.add_option("--available-as", dest="available_as", help="tree is here, don't mirror") self.parser.add_option("--kickstart", dest="kickstart_file", help="assign this kickstart file") self.parser.add_option("--rsync-flags", dest="rsync_flags", help="pass additional flags to rsync") (options, args) = self.parser.parse_args() task_id = self.start_task("import",options) elif action_name == "reposync": self.parser.add_option("--only", dest="only", help="update only this 
repository name") self.parser.add_option("--tries", dest="tries", help="try each repo this many times", default=1) self.parser.add_option("--no-fail", dest="nofail", help="don't stop reposyncing if a failure occurs", action="store_true") (options, args) = self.parser.parse_args() task_id = self.start_task("reposync",options) elif action_name == "aclsetup": (options, args) = self.parser.parse_args() # FIXME: missing options, add them here task_id = self.start_task("aclsetup",options) elif action_name == "check": results = self.remote.check(self.token) ct = 0 if len(results) > 0: print "The following are potential configuration items that you may want to fix:\n" for r in results: ct = ct + 1 print "%s : %s" % (ct, r) print "\nRestart cobblerd and then run 'cobbler sync' to apply changes." else: print "No configuration problems found. All systems go." elif action_name == "sync": (options, args) = self.parser.parse_args() self.parser.add_option("--verbose", dest="verbose", action="store_true", help="run sync with more output") task_id = self.start_task("sync",options) elif action_name == "report": (options, args) = self.parser.parse_args() print "distros:\n==========" report_items(self.remote,"distro") print "\nprofiles:\n==========" report_items(self.remote,"profile") print "\nsystems:\n==========" report_items(self.remote,"system") print "\nrepos:\n==========" report_items(self.remote,"repo") print "\nimages:\n==========" report_items(self.remote,"image") print "\nmgmtclasses:\n==========" report_items(self.remote,"mgmtclass") print "\npackages:\n==========" report_items(self.remote,"package") print "\nfiles:\n==========" report_items(self.remote,"file") elif action_name == "list": # no tree view like 1.6? This is more efficient remotely # for large configs and prevents xfering the whole config # though we could consider that... (options, args) = self.parser.parse_args() print "distros:" list_items(self.remote,"distro") print "\nprofiles:" list_items(self.remote,"profile") print "\nsystems:" list_items(self.remote,"system") print "\nrepos:" list_items(self.remote,"repo") print "\nimages:" list_items(self.remote,"image") print "\nmgmtclasses:" list_items(self.remote,"mgmtclass") print "\npackages:" list_items(self.remote,"package") print "\nfiles:" list_items(self.remote,"file") else: print "No such command: %s" % action_name sys.exit(1) # FIXME: run here # FIXME: add tail/polling code here if task_id != -1: self.print_task(task_id) self.follow_task(task_id) return True def print_task(self, task_id): print "task started: %s" % task_id events = self.remote.get_events() (etime, name, status, who_viewed) = events[task_id] atime = time.asctime(time.localtime(etime)) print "task started (id=%s, time=%s)" % (name, atime) def follow_task(self, task_id): logfile = "/var/log/cobbler/tasks/%s.log" % task_id # adapted from: http://code.activestate.com/recipes/157035/ file = open(logfile,'r') #Find the size of the file and move to the end #st_results = os.stat(filename) #st_size = st_results[6] #file.seek(st_size) while 1: where = file.tell() line = file.readline() if line.find("### TASK COMPLETE ###") != -1: print "*** TASK COMPLETE ***" sys.exit(0) if line.find("### TASK FAILED ###") != -1: print "!!! TASK FAILED !!!" sys.exit(1) if not line: time.sleep(1) file.seek(where) else: if line.find(" | "): line = line.split(" | ")[-1] print line, # already has newline def print_object_help(self, object_type): """ Prints the subcommands for a given object, e.g. 
"cobbler distro --help" """ commands = OBJECT_ACTIONS_MAP[object_type] commands.sort() print "usage\n=====" for c in commands: print "cobbler %s %s" % (object_type, c) sys.exit(2) def print_help(self): """ Prints general-top level help, e.g. "cobbler --help" or "cobbler" or "cobbler command-does-not-exist" """ print "usage\n=====" print "cobbler ... " print " [add|edit|copy|getks*|list|remove|rename|report] [options|--help]" print "cobbler <%s> [options|--help]" % "|".join(DIRECT_ACTIONS) sys.exit(2) def main(): """ CLI entry point """ cli = BootCLI() cli.check_setup() rc = cli.run(sys.argv) if rc == True or rc is None: sys.exit(0) elif rc == False: sys.exit(1) return sys.exit(rc) if __name__ == "__main__": main() cobbler-2.4.1/cobbler/clogger.py000066400000000000000000000042161227367477500165610ustar00rootroot00000000000000""" Python standard logging doesn't super-intelligent and won't expose filehandles, which we want. So we're not using it. Copyright 2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import time import os ERROR = "ERROR" WARNING = "WARNING" DEBUG = "DEBUG" INFO = "INFO" class Logger: def __init__(self, logfile="/var/log/cobbler/cobbler.log"): self.logfile = None # Main logfile is append mode, other logfiles not. if not os.path.exists(logfile) and os.path.exists(os.path.dirname(logfile)): self.logfile = open(logfile, "a") self.logfile.close() try: self.logfile = open(logfile, "a") except IOError: # You likely don't have write access, this logger will just print # things to stdout. pass def warning(self, msg): self.__write(WARNING, msg) def error(self, msg): self.__write(ERROR, msg) def debug(self, msg): self.__write(DEBUG, msg) def info(self, msg): self.__write(INFO, msg) def flat(self, msg): self.__write(None, msg) def __write(self, level, msg): if level is not None: msg = "%s - %s | %s" % (time.asctime(), level, msg) if self.logfile is not None: self.logfile.write(msg) self.logfile.write("\n") self.logfile.flush() else: print(msg) def handle(self): return self.logfile def close(self): self.logfile.close() cobbler-2.4.1/cobbler/cobblerd.py000066400000000000000000000074131227367477500167150ustar00rootroot00000000000000""" cobbler daemon for logging remote syslog traffic during kickstart Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import sys import socket import time import os import SimpleXMLRPCServer import glob from utils import _ import xmlrpclib import binascii import utils import pwd import api as cobbler_api import utils import remote def main(): core(logger=None) def core(api): bootapi = api settings = bootapi.settings() xmlrpc_port = settings.xmlrpc_port regen_ss_file() do_xmlrpc_tasks(bootapi, settings, xmlrpc_port) def regen_ss_file(): # this is only used for Kerberos auth at the moment. # it identifies XMLRPC requests from Apache that have already # been cleared by Kerberos. ssfile = "/var/lib/cobbler/web.ss" fd = open("/dev/urandom") data = fd.read(512) fd.close() fd = os.open(ssfile,os.O_CREAT|os.O_RDWR,0600) os.write(fd,binascii.hexlify(data)) os.close(fd) http_user = "apache" if utils.check_dist() in [ "debian", "ubuntu" ]: http_user = "www-data" elif utils.check_dist() in [ "suse", "opensuse" ]: http_user = "wwwrun" os.lchown("/var/lib/cobbler/web.ss", pwd.getpwnam(http_user)[2], -1) return 1 def do_xmlrpc_tasks(bootapi, settings, xmlrpc_port): do_xmlrpc_rw(bootapi, settings, xmlrpc_port) #def do_other_tasks(bootapi, settings, syslog_port, logger): # # # FUTURE: this should also start the Web UI, if the dependencies # # are available. # # if os.path.exists("/usr/bin/avahi-publish-service"): # pid2 = os.fork() # if pid2 == 0: # do_syslog(bootapi, settings, syslog_port, logger) # else: # do_avahi(bootapi, settings, logger) # os.waitpid(pid2, 0) # else: # do_syslog(bootapi, settings, syslog_port, logger) def log(logger,msg): if logger is not None: logger.info(msg) else: print >>sys.stderr, msg #def do_avahi(bootapi, settings, logger): # # publish via zeroconf. This command will not terminate # log(logger, "publishing avahi service") # cmd = [ "/usr/bin/avahi-publish-service", # "cobblerd", # "_http._tcp", # "%s" % settings.xmlrpc_port ] # proc = sub_process.Popen(cmd, shell=False, stderr=sub_process.PIPE, stdout=sub_process.PIPE, close_fds=True) # proc.communicate()[0] # log(logger, "avahi service terminated") def do_xmlrpc_rw(bootapi,settings,port): xinterface = remote.ProxiedXMLRPCInterface(bootapi,remote.CobblerXMLRPCInterface) server = remote.CobblerXMLRPCServer(('127.0.0.1', port)) server.logRequests = 0 # don't print stuff xinterface.logger.debug("XMLRPC running on %s" % port) server.register_instance(xinterface) while True: try: print "SERVING!" server.serve_forever() except IOError: # interrupted? try to serve again time.sleep(0.5) if __name__ == "__main__": bootapi = cobbler_api.BootAPI() settings = bootapi.settings() regen_ss_file() do_xmlrpc_rw(bootapi, settings, 25151) cobbler-2.4.1/cobbler/codes.py000066400000000000000000000015561227367477500162400ustar00rootroot00000000000000""" various codes and constants used by Cobbler Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ VALID_REPO_BREEDS = [ "rsync", "rhn", "yum", "apt" ] cobbler-2.4.1/cobbler/collection.py000066400000000000000000000452761227367477500173050ustar00rootroot00000000000000""" Base class for any serializable list of things... Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import exceptions from cexceptions import * import utils import glob import time import random import os from threading import Lock import action_litesync import item_system import item_profile import item_distro import item_repo import item_image import item_mgmtclass import item_package import item_file from utils import _ class Collection: def __init__(self,config): """ Constructor. """ self.config = config self.clear() self.api = self.config.api self.lite_sync = None self.lock = Lock() def factory_produce(self,config,seed_data): """ Must override in subclass. Factory_produce returns an Item object from datastructure seed_data """ raise exceptions.NotImplementedError def clear(self): """ Forget about objects in the collection. """ self.listing = {} def get(self, name): """ Return object with name in the collection """ return self.listing.get(name.lower(), None) def find(self, name=None, return_list=False, no_errors=False, **kargs): """ Return first object in the collection that maches all item='value' pairs passed, else return None if no objects can be found. When return_list is set, can also return a list. Empty list would be returned instead of None in that case. 
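        Illustrative calls (values are hypothetical; keys may be either native
        field names or the CLI-style aliases remapped via SEARCH_REKEY below):

            collection.find(name="web01")
            collection.find(mac="AA:BB:CC:DD:EE:FF", return_list=True)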
""" matches = [] # support the old style innovation without kwargs if name is not None: kargs["name"] = name kargs = self.__rekey(kargs) # no arguments is an error, so we don't return a false match if len(kargs) == 0: raise CX(_("calling find with no arguments")) # performance: if the only key is name we can skip the whole loop if len(kargs) == 1 and kargs.has_key("name") and not return_list: return self.listing.get(kargs["name"].lower(), None) self.lock.acquire() try: for (name, obj) in self.listing.iteritems(): if obj.find_match(kargs, no_errors=no_errors): matches.append(obj) finally: self.lock.release() if not return_list: if len(matches) == 0: return None return matches[0] else: return matches SEARCH_REKEY = { 'kopts' : 'kernel_options', 'kopts_post' : 'kernel_options_post', 'ksmeta' : 'ks_meta', 'inherit' : 'parent', 'ip' : 'ip_address', 'mac' : 'mac_address', 'virt-auto-boot' : 'virt_auto_boot', 'virt-file-size' : 'virt_file_size', 'virt-disk-driver': 'virt_disk_driver', 'virt-ram' : 'virt_ram', 'virt-path' : 'virt_path', 'virt-type' : 'virt_type', 'virt-bridge' : 'virt_bridge', 'virt-cpus' : 'virt_cpus', 'virt-host' : 'virt_host', 'virt-group' : 'virt_group', 'dhcp-tag' : 'dhcp_tag', 'netboot-enabled' : 'netboot_enabled', 'ldap-enabled' : 'ldap_enabled', 'monit-enabled' : 'monit_enabled' } def __rekey(self,hash): """ Find calls from the command line ("cobbler system find") don't always match with the keys from the datastructs and this makes them both line up without breaking compatibility with either. Thankfully we don't have a LOT to remap. """ newhash = {} for x in hash.keys(): if self.SEARCH_REKEY.has_key(x): newkey = self.SEARCH_REKEY[x] newhash[newkey] = hash[x] else: newhash[x] = hash[x] return newhash def to_datastruct(self): """ Serialize the collection """ datastruct = [x.to_datastruct() for x in self.listing.values()] return datastruct def from_datastruct(self,datastruct): if datastruct is None: return for seed_data in datastruct: item = self.factory_produce(self.config,seed_data) self.add(item) def copy(self,ref,newname,logger=None): ref = ref.make_clone() ref.uid = self.config.generate_uid() ref.ctime = 0 ref.set_name(newname) if ref.COLLECTION_TYPE == "system": # this should only happen for systems for iname in ref.interfaces.keys(): # clear all these out to avoid DHCP/DNS conflicts ref.set_dns_name("",iname) ref.set_mac_address("",iname) ref.set_ip_address("",iname) return self.add(ref,save=True,with_copy=True,with_triggers=True,with_sync=True,check_for_duplicate_names=True,check_for_duplicate_netinfo=False) def rename(self,ref,newname,with_sync=True,with_triggers=True,logger=None): """ Allows an object "ref" to be given a newname without affecting the rest of the object tree. """ # Nothing to do when it is the same name if newname == ref.name: return True # make a copy of the object, but give it a new name. 
oldname = ref.name newref = ref.make_clone() newref.set_name(newname) self.add(newref, with_triggers=with_triggers,save=True) # for mgmt classes, update all objects that use it if ref.COLLECTION_TYPE == "mgmtclass": for what in ["distro","profile","system"]: items = self.api.find_items(what,{"mgmt_classes":oldname}) for item in items: for i in range(0,len(item.mgmt_classes)): if item.mgmt_classes[i] == oldname: item.mgmt_classes[i] = newname self.api.add_item(what,item,save=True) # for a repo, rename the mirror directory if ref.COLLECTION_TYPE == "repo": path = "/var/www/cobbler/repo_mirror/%s" % ref.name if os.path.exists(path): newpath = "/var/www/cobbler/repo_mirror/%s" % newref.name os.renames(path, newpath) # for a distro, rename the mirror and references to it if ref.COLLECTION_TYPE == 'distro': path = utils.find_distro_path(self.api.settings(), ref) # create a symlink for the new distro name utils.link_distro(self.api.settings(), newref) # test to see if the distro path is based directly # on the name of the distro. If it is, things need # to updated accordingly if os.path.exists(path) and path == "/var/www/cobbler/ks_mirror/%s" % ref.name: newpath = "/var/www/cobbler/ks_mirror/%s" % newref.name os.renames(path, newpath) # update any reference to this path ... distros = self.api.distros() for d in distros: if d.kernel.find(path) == 0: d.set_kernel(d.kernel.replace(path, newpath)) d.set_initrd(d.initrd.replace(path, newpath)) self.config.serialize_item(self, d) # now descend to any direct ancestors and point them at the new object allowing # the original object to be removed without orphanage. Direct ancestors # will either be profiles or systems. Note that we do have to care as # set_parent is only really meaningful for subprofiles. We ideally want a more # generic set_parent. kids = ref.get_children() for k in kids: if k.COLLECTION_TYPE == "distro": raise CX(_("internal error, not expected to have distro child objects")) elif k.COLLECTION_TYPE == "profile": if k.parent != "": k.set_parent(newname) else: k.set_distro(newname) self.api.profiles().add(k, save=True, with_sync=with_sync, with_triggers=with_triggers) elif k.COLLECTION_TYPE == "system": k.set_profile(newname) self.api.systems().add(k, save=True, with_sync=with_sync, with_triggers=with_triggers) elif k.COLLECTION_TYPE == "repo": raise CX(_("internal error, not expected to have repo child objects")) else: raise CX(_("internal error, unknown child type (%s), cannot finish rename" % k.COLLECTION_TYPE)) # now delete the old version self.remove(oldname, with_delete=True, with_triggers=with_triggers) return True def add(self,ref,save=False,with_copy=False,with_triggers=True,with_sync=True,quick_pxe_update=False,check_for_duplicate_names=False,check_for_duplicate_netinfo=False,logger=None): """ Add an object to the collection, if it's valid. Returns True if the object was added to the collection. Returns False if the object specified by ref deems itself invalid (and therefore won't be added to the collection). with_copy is a bit of a misnomer, but lots of internal add operations can run with "with_copy" as False. True means a real final commit, as if entered from the command line (or basically, by a user). With with_copy as False, the particular add call might just be being run during deserialization, in which case extra semantics around the add don't really apply. So, in that case, don't run any triggers and don't deal with any actual files. 
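        Illustrative call (the object being added is hypothetical):

            collection.add(new_obj, save=True, with_sync=True, with_triggers=True,
                           check_for_duplicate_names=True)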
""" if ref is None or ref.name is None: return False try: ref.check_if_valid() except CX, error: return False if ref.uid == '': ref.uid = self.config.generate_uid() if save is True: now = time.time() if ref.ctime == 0: ref.ctime = now ref.mtime = now if self.lite_sync is None: self.lite_sync = action_litesync.BootLiteSync(self.config, logger=logger) # migration path for old API parameter that I've renamed. if with_copy and not save: save = with_copy if not save: # for people that aren't quite aware of the API # if not saving the object, you can't run these features with_triggers = False with_sync = False # Avoid adding objects to the collection # if an object of the same/ip/mac already exists. self.__duplication_checks(ref,check_for_duplicate_names,check_for_duplicate_netinfo) if ref.COLLECTION_TYPE != self.collection_type(): raise CX(_("API error: storing wrong data type in collection")) if not save: # don't need to run triggers, so add it already ... self.lock.acquire() try: self.listing[ref.name.lower()] = ref finally: self.lock.release() # perform filesystem operations if save: # failure of a pre trigger will prevent the object from being added if with_triggers: utils.run_triggers(self.api, ref,"/var/lib/cobbler/triggers/add/%s/pre/*" % self.collection_type(), [], logger) self.lock.acquire() try: self.listing[ref.name.lower()] = ref finally: self.lock.release() # save just this item if possible, if not, save # the whole collection self.config.serialize_item(self, ref) if with_sync: if isinstance(ref, item_system.System): # we don't need openvz containers to be network bootable if ref.virt_type == "openvz": ref.netboot_enabled = False self.lite_sync.add_single_system(ref.name) elif isinstance(ref, item_profile.Profile): # we don't need openvz containers to be network bootable if ref.virt_type == "openvz": ref.enable_menu = 0 self.lite_sync.add_single_profile(ref.name) elif isinstance(ref, item_distro.Distro): self.lite_sync.add_single_distro(ref.name) elif isinstance(ref, item_image.Image): self.lite_sync.add_single_image(ref.name) elif isinstance(ref, item_repo.Repo): pass elif isinstance(ref, item_mgmtclass.Mgmtclass): pass elif isinstance(ref, item_package.Package): pass elif isinstance(ref, item_file.File): pass else: print _("Internal error. Object type not recognized: %s") % type(ref) if not with_sync and quick_pxe_update: if isinstance(ref, item_system.System): self.lite_sync.update_system_netboot_status(ref.name) # save the tree, so if neccessary, scripts can examine it. if with_triggers: utils.run_triggers(self.api, ref, "/var/lib/cobbler/triggers/change/*", [], logger) utils.run_triggers(self.api, ref,"/var/lib/cobbler/triggers/add/%s/post/*" % self.collection_type(), [], logger) # update children cache in parent object parent = ref.get_parent() if parent != None: parent.children[ref.name] = ref return True def __duplication_checks(self,ref,check_for_duplicate_names,check_for_duplicate_netinfo): """ Prevents adding objects with the same name. Prevents adding or editing to provide the same IP, or MAC. Enforcement is based on whether the API caller requests it. 
""" # always protect against duplicate names if check_for_duplicate_names: match = None if isinstance(ref, item_system.System): match = self.api.find_system(ref.name) elif isinstance(ref, item_profile.Profile): match = self.api.find_profile(ref.name) elif isinstance(ref, item_distro.Distro): match = self.api.find_distro(ref.name) elif isinstance(ref, item_repo.Repo): match = self.api.find_repo(ref.name) elif isinstance(ref, item_image.Image): match = self.api.find_image(ref.name) elif isinstance(ref, item_mgmtclass.Mgmtclass): match = self.api.find_mgmtclass(ref.name) elif isinstance(ref, item_package.Package): match = self.api.find_package(ref.name) elif isinstance(ref, item_file.File): match = self.api.find_file(ref.name) else: raise CX("internal error, unknown object type") if match: raise CX(_("An object already exists with that name. Try 'edit'?")) # the duplicate mac/ip checks can be disabled. if not check_for_duplicate_netinfo: return if isinstance(ref, item_system.System): for (name, intf) in ref.interfaces.iteritems(): match_ip = [] match_mac = [] match_hosts = [] input_mac = intf["mac_address"] input_ip = intf["ip_address"] input_dns = intf["dns_name"] if not self.api.settings().allow_duplicate_macs and input_mac is not None and input_mac != "": match_mac = self.api.find_system(mac_address=input_mac,return_list=True) if not self.api.settings().allow_duplicate_ips and input_ip is not None and input_ip != "": match_ip = self.api.find_system(ip_address=input_ip,return_list=True) # it's ok to conflict with your own net info. if not self.api.settings().allow_duplicate_hostnames and input_dns is not None and input_dns != "": match_hosts = self.api.find_system(dns_name=input_dns,return_list=True) for x in match_mac: if x.name != ref.name: raise CX(_("Can't save system %s. The MAC address (%s) is already used by system %s (%s)") % (ref.name, intf["mac_address"], x.name, name)) for x in match_ip: if x.name != ref.name: raise CX(_("Can't save system %s. The IP address (%s) is already used by system %s (%s)") % (ref.name, intf["ip_address"], x.name, name)) for x in match_hosts: if x.name != ref.name: raise CX(_("Can't save system %s. The dns name (%s) is already used by system %s (%s)") % (ref.name, intf["dns_name"], x.name, name)) def printable(self): """ Creates a printable representation of the collection suitable for reading by humans or parsing from scripts. Actually scripts would be better off reading the YAML in the config files directly. """ values = self.listing.values()[:] # copy the values values.sort() # sort the copy (2.3 fix) results = [] for i,v in enumerate(values): results.append(v.printable()) if len(values) > 0: return "\n\n".join(results) else: return _("No objects found") def __iter__(self): """ Iterator for the collection. 
Allows list comprehensions, etc """ for a in self.listing.values(): yield a def __len__(self): """ Returns size of the collection """ return len(self.listing.values()) def collection_type(self): """ Returns the string key for the name of the collection (for use in messages for humans) """ return exceptions.NotImplementedError cobbler-2.4.1/cobbler/collection_distros.py000066400000000000000000000100721227367477500210360ustar00rootroot00000000000000""" A distro represents a network bootable matched set of kernels and initrd files Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import collection import item_distro as distro from cexceptions import * import action_litesync from utils import _ import os.path import glob class Distros(collection.Collection): def collection_type(self): return "distro" def factory_produce(self,config,seed_data): """ Return a Distro forged from seed_data """ return distro.Distro(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=False,logger=None): """ Remove element named 'name' from the collection """ name = name.lower() # first see if any Groups use this distro if not recursive: for v in self.config.profiles(): if v.distro and v.distro.lower() == name: raise CX(_("removal would orphan profile: %s") % v.name) obj = self.find(name=name) if obj is not None: kernel = obj.kernel if recursive: kids = obj.get_children() for k in kids: self.config.api.remove_profile(k.name, recursive=recursive, delete=with_delete, with_triggers=with_triggers, logger=logger) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/distro/pre/*", [], logger) if with_sync: lite_sync = action_litesync.BootLiteSync(self.config, logger=logger) lite_sync.remove_single_distro(name) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/distro/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) # look through all mirrored directories and find if any directory is holding # this particular distribution's kernel and initrd settings = self.config.settings() possible_storage = glob.glob(settings.webdir+"/ks_mirror/*") path = None for storage in possible_storage: if os.path.dirname(obj.kernel).find(storage) != -1: path = storage continue # if we found a mirrored path above, we can delete the mirrored storage /if/ # no other object is using the same mirrored storage. 
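            # (concretely: the ks_mirror tree is only pruned when no remaining
            #  distro's kernel path still points inside it, as checked below)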
if with_delete and path is not None and os.path.exists(path) and kernel.find(settings.webdir) != -1: # this distro was originally imported so we know we can clean up the associated # storage as long as nothing else is also using this storage. found = False distros = self.api.distros() for d in distros: if d.kernel.find(path) != -1: found = True if not found: utils.rmtree(path) return True cobbler-2.4.1/cobbler/collection_files.py000066400000000000000000000043571227367477500204620ustar00rootroot00000000000000""" Files provide a container for file resources. Copyright 2010, Kelsey Hightower Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import item_file as file import utils import collection from cexceptions import CX from utils import _ #-------------------------------------------- class Files(collection.Collection): def collection_type(self): return "file" def factory_produce(self,config,seed_data): """ Return a File forged from seed_data """ return file.File(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=False,logger=None): """ Remove element named 'name' from the collection """ name = name.lower() obj = self.find(name=name) if obj is not None: if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/file/*", [], logger) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/file/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) return True raise CX(_("cannot delete an object that does not exist: %s") % name) cobbler-2.4.1/cobbler/collection_images.py000066400000000000000000000053321227367477500206170ustar00rootroot00000000000000""" A image instance represents a ISO or virt image we want to track and repeatedly install. It differs from a answer-file based installation. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
""" import item_image as image import utils import collection from cexceptions import * from utils import _ import action_litesync #-------------------------------------------- class Images(collection.Collection): def collection_type(self): return "image" def factory_produce(self,config,seed_data): """ Return a Distro forged from seed_data """ return image.Image(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=True, logger=None): """ Remove element named 'name' from the collection """ # NOTE: with_delete isn't currently meaningful for repos # but is left in for consistancy in the API. Unused. name = name.lower() # first see if any Groups use this distro if not recursive: for v in self.config.systems(): if v.image is not None and v.image.lower() == name: raise CX(_("removal would orphan system: %s") % v.name) obj = self.find(name=name) if obj is not None: if recursive: kids = obj.get_children() for k in kids: self.config.api.remove_system(k, recursive=True, logger=logger) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/image/pre/*", [], logger) if with_sync: lite_sync = action_litesync.BootLiteSync(self.config, logger=logger) lite_sync.remove_single_image(name) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/image/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) return True raise CX(_("cannot delete an object that does not exist: %s") % name) cobbler-2.4.1/cobbler/collection_mgmtclasses.py000066400000000000000000000044571227367477500217030ustar00rootroot00000000000000""" A mgmtclass provides a container for management resources. Copyright 2010, Kelsey Hightower Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import item_mgmtclass as mgmtclass import utils import collection from cexceptions import CX from utils import _ #-------------------------------------------- class Mgmtclasses(collection.Collection): def collection_type(self): return "mgmtclass" def factory_produce(self,config,seed_data): """ Return a mgmtclass forged from seed_data """ return mgmtclass.Mgmtclass(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=False,logger=None): """ Remove element named 'name' from the collection """ name = name.lower() obj = self.find(name=name) if obj is not None: if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/mgmtclass/pre/*", [], logger) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/mgmtclass/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) return True raise CX(_("cannot delete an object that does not exist: %s") % name) cobbler-2.4.1/cobbler/collection_packages.py000066400000000000000000000044221227367477500211270ustar00rootroot00000000000000""" A package provides a container for package resources. Copyright 2010, Kelsey Hightower Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import item_package as package import utils import collection from cexceptions import CX from utils import _ #-------------------------------------------- class Packages(collection.Collection): def collection_type(self): return "package" def factory_produce(self,config,seed_data): """ Return a Package forged from seed_data """ return package.Package(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=False,logger=None): """ Remove element named 'name' from the collection """ name = name.lower() obj = self.find(name=name) if obj is not None: if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/package/*", [], logger) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/package/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) return True raise CX(_("cannot delete an object that does not exist: %s") % name) cobbler-2.4.1/cobbler/collection_profiles.py000066400000000000000000000065361227367477500212040ustar00rootroot00000000000000""" A profile represents a distro paired with a kickstart file. For instance, FC5 with a kickstart file specifying OpenOffice might represent a 'desktop' profile. For Virt, there are many additional options, with client-side defaults (not kept here). Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import item_profile as profile import utils import collection from cexceptions import * import action_litesync from utils import _ #-------------------------------------------- class Profiles(collection.Collection): def collection_type(self): return "profile" def factory_produce(self,config,seed_data): """ Return a Distro forged from seed_data """ return profile.Profile(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=False,logger=None): """ Remove element named 'name' from the collection """ name = name.lower() if not recursive: for v in self.config.systems(): if v.profile is not None and v.profile.lower() == name: raise CX(_("removal would orphan system: %s") % v.name) obj = self.find(name=name) if obj is not None: if recursive: kids = obj.get_children() for k in kids: if k.COLLECTION_TYPE == "profile": self.config.api.remove_profile(k.name, recursive=recursive, delete=with_delete, with_triggers=with_triggers, logger=logger) else: self.config.api.remove_system(k.name, recursive=recursive, delete=with_delete, with_triggers=with_triggers, logger=logger) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/profile/pre/*", [], logger) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/profile/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) if with_sync: lite_sync = action_litesync.BootLiteSync(self.config, logger=logger) lite_sync.remove_single_profile(name) return True raise CX(_("cannot delete an object that does not exist: %s") % name) cobbler-2.4.1/cobbler/collection_repos.py000066400000000000000000000056351227367477500205100ustar00rootroot00000000000000""" Repositories in cobbler are way to create a local mirror of a yum repository. When used in conjunction with a mirrored kickstart tree (see "cobbler import") outside bandwidth needs can be reduced and/or eliminated. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import item_repo as repo import utils import collection from cexceptions import * from utils import _ import os.path TESTMODE = False #-------------------------------------------- class Repos(collection.Collection): def collection_type(self): return "repo" def factory_produce(self,config,seed_data): """ Return a Repo forged from seed_data """ return repo.Repo(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=False,logger=None): """ Remove element named 'name' from the collection """ # NOTE: with_delete isn't currently meaningful for repos # but is left in for consistency in the API. Unused. name = name.lower() obj = self.find(name=name) if obj is not None: if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/repo/pre/*", [], logger) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/repo/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) #FIXME: better use config.settings() webdir? path = "/var/www/cobbler/repo_mirror/%s" % obj.name if os.path.exists("/srv/www/"): path = "/srv/www/cobbler/repo_mirror/%s" % obj.name if os.path.exists(path): utils.rmtree(path) return True raise CX(_("cannot delete an object that does not exist: %s") % name) cobbler-2.4.1/cobbler/collection_systems.py000066400000000000000000000050131227367477500210550ustar00rootroot00000000000000""" Systems are hostnames/MACs/IP names and the associated profile they belong to. Copyright 2008-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import item_system as system import utils import collection from cexceptions import * import action_litesync from utils import _ #-------------------------------------------- class Systems(collection.Collection): def collection_type(self): return "system" def factory_produce(self,config,seed_data): """ Return a Distro forged from seed_data """ return system.System(config).from_datastruct(seed_data) def remove(self,name,with_delete=True,with_sync=True,with_triggers=True,recursive=False, logger=None): """ Remove element named 'name' from the collection """ name = name.lower() obj = self.find(name=name) if obj is not None: if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/system/pre/*", [], logger) if with_sync: lite_sync = action_litesync.BootLiteSync(self.config, logger=logger) lite_sync.remove_single_system(name) self.lock.acquire() try: del self.listing[name] finally: self.lock.release() self.config.serialize_delete(self, obj) if with_delete: if with_triggers: utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/delete/system/post/*", [], logger) utils.run_triggers(self.config.api, obj, "/var/lib/cobbler/triggers/change/*", [], logger) return True raise CX(_("cannot delete an object that does not exist: %s") % name) cobbler-2.4.1/cobbler/config.py000066400000000000000000000215761227367477500164140ustar00rootroot00000000000000""" Config.py is a repository of the Cobbler object model Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import weakref import time import random import string import binascii import item_distro as distro import item_profile as profile import item_system as system import item_repo as repo import item_image as image import item_mgmtclass as mgmtclass import item_package as package import item_file as file import collection_distros as distros import collection_profiles as profiles import collection_systems as systems import collection_repos as repos import collection_images as images import collection_mgmtclasses as mgmtclasses import collection_packages as packages import collection_files as files import settings import serializer import traceback from utils import _ from cexceptions import * class Config: has_loaded = False __shared_state = {} def __init__(self,api): """ Constructor. 
Manages a definitive copy of all data collections with weakrefs pointing back into the class so they can understand each other's contents """ self.__dict__ = Config.__shared_state if not Config.has_loaded: self.__load(api) def __load(self,api): Config.has_loaded = True self.init_time = time.time() self.current_id = 0 self.api = api self._distros = distros.Distros(weakref.proxy(self)) self._repos = repos.Repos(weakref.proxy(self)) self._profiles = profiles.Profiles(weakref.proxy(self)) self._systems = systems.Systems(weakref.proxy(self)) self._images = images.Images(weakref.proxy(self)) self._mgmtclasses = mgmtclasses.Mgmtclasses(weakref.proxy(self)) self._packages = packages.Packages(weakref.proxy(self)) self._files = files.Files(weakref.proxy(self)) self._settings = settings.Settings() # not a true collection def generate_uid(self): """ Cobbler itself does not use this GUID's though they are provided to allow for easier API linkage with other applications. Cobbler uses unique names in each collection as the object id aka primary key """ data = "%s%s" % (time.time(), random.uniform(1,9999999)) return binascii.b2a_base64(data).replace("=","").strip() def __cmp(self,a,b): return cmp(a.name,b.name) def distros(self): """ Return the definitive copy of the Distros collection """ return self._distros def profiles(self): """ Return the definitive copy of the Profiles collection """ return self._profiles def systems(self): """ Return the definitive copy of the Systems collection """ return self._systems def settings(self): """ Return the definitive copy of the application settings """ return self._settings def repos(self): """ Return the definitive copy of the Repos collection """ return self._repos def images(self): """ Return the definitive copy of the Images collection """ return self._images def mgmtclasses(self): """ Return the definitive copy of the Mgmtclasses collection """ return self._mgmtclasses def packages(self): """ Return the definitive copy of the Packages collection """ return self._packages def files(self): """ Return the definitive copy of the Files collection """ return self._files def new_distro(self,is_subobject=False): """ Create a new distro object with a backreference to this object """ return distro.Distro(weakref.proxy(self),is_subobject=is_subobject) def new_system(self,is_subobject=False): """ Create a new system with a backreference to this object """ return system.System(weakref.proxy(self),is_subobject=is_subobject) def new_profile(self,is_subobject=False): """ Create a new profile with a backreference to this object """ return profile.Profile(weakref.proxy(self),is_subobject=is_subobject) def new_repo(self,is_subobject=False): """ Create a new mirror to keep track of... """ return repo.Repo(weakref.proxy(self),is_subobject=is_subobject) def new_image(self,is_subobject=False): """ Create a new image object... """ return image.Image(weakref.proxy(self),is_subobject=is_subobject) def new_mgmtclass(self,is_subobject=False): """ Create a new mgmtclass object... """ return mgmtclass.Mgmtclass(weakref.proxy(self),is_subobject=is_subobject) def new_package(self,is_subobject=False): """ Create a new package object... """ return package.Package(weakref.proxy(self),is_subobject=is_subobject) def new_file(self,is_subobject=False): """ Create a new image object... 
""" return file.File(weakref.proxy(self),is_subobject=is_subobject) def clear(self): """ Forget about all loaded configuration data """ self._distros.clear(), self._repos.clear(), self._profiles.clear(), self._images.clear() self._systems.clear(), self._mgmtclasses.clear(), self._packages.clear(), self._files.clear(), return True def serialize(self): """ Save the object hierarchy to disk, using the filenames referenced in each object. """ serializer.serialize(self._distros) serializer.serialize(self._repos) serializer.serialize(self._profiles) serializer.serialize(self._images) serializer.serialize(self._systems) serializer.serialize(self._mgmtclasses) serializer.serialize(self._packages) serializer.serialize(self._files) return True def serialize_item(self,collection,item): """ Save item in the collection, resaving the whole collection if needed, but ideally just saving the item. """ return serializer.serialize_item(collection,item) def serialize_delete(self,collection,item): """ Erase item from a storage file, if neccessary rewritting the file. """ return serializer.serialize_delete(collection,item) def deserialize(self): """ Load the object hierachy from disk, using the filenames referenced in each object. """ for item in [ self._settings, self._distros, self._repos, self._profiles, self._images, self._systems, self._mgmtclasses, self._packages, self._files, ]: try: if not serializer.deserialize(item): raise "" except: raise CX("serializer: error loading collection %s. Check /etc/cobbler/modules.conf" % item.collection_type()) return True def deserialize_raw(self,collection_type): """ Get object data from disk, not objects. """ return serializer.deserialize_raw(collection_type) def deserialize_item_raw(self,collection_type,obj_name): """ Get a raw single object. """ return serializer.deserialize_item_raw(collection_type,obj_name) def get_items(self,collection_type): if collection_type == "distro": result=self._distros elif collection_type == "profile": result=self._profiles elif collection_type == "system": result=self._systems elif collection_type == "repo": result=self._repos elif collection_type == "image": result=self._images elif collection_type == "mgmtclass": result=self._mgmtclasses elif collection_type == "package": result=self._packages elif collection_type == "file": result=self._files elif collection_type == "settings": result=self._settings else: raise CX("internal error, collection name %s not supported" % collection_type) return result cobbler-2.4.1/cobbler/configgen.py000066400000000000000000000146701227367477500171030ustar00rootroot00000000000000""" configgen.py: Generate configuration data. Copyright 2010 Kelsey Hightower Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA module for generating configuration manifest using ksmeta data, mgmtclasses, resources, and templates for a given system (hostname) """ from Cheetah.Template import Template from cexceptions import CX import cobbler.utils import cobbler.api as capi import simplejson as json import string import utils import clogger class ConfigGen: """ Generate configuration data for Cobbler's management resources: repos, ldap, files, packages, and monit. Mainly used by Koan to configure remote systems. """ def __init__(self,hostname): """Constructor. Requires a Cobbler API handle.""" self.hostname = hostname self.handle = capi.BootAPI() self.system = self.handle.find_system(hostname=self.hostname) self.host_vars = self.get_cobbler_resource('ks_meta') self.logger = clogger.Logger("/var/log/cobbler/cobbler.log") self.mgmtclasses = self.get_cobbler_resource('mgmt_classes') #---------------------------------------------------------------------- def resolve_resource_var(self,string_data): """Substitute variables in strings.""" data = string.Template(string_data).substitute(self.host_vars) return data #---------------------------------------------------------------------- def resolve_resource_list(self,list_data): """Substitute variables in lists. Return new list.""" new_list = [] for item in list_data: new_list.append(string.Template(item).substitute(self.host_vars)) return new_list #---------------------------------------------------------------------- def get_cobbler_resource(self,resource): """Wrapper around cobbler blender method""" return cobbler.utils.blender(self.handle, False, self.system)[resource] #---------------------------------------------------------------------- def gen_config_data(self): """ Generate configuration data for repos, ldap, files, packages, and monit. Returns a dict. 
""" config_data = { 'repo_data' : self.handle.get_repo_config_for_system(self.system), 'repos_enabled': self.get_cobbler_resource('repos_enabled'), 'ldap_enabled' : self.get_cobbler_resource('ldap_enabled'), 'monit_enabled': self.get_cobbler_resource('monit_enabled') } package_set = set() file_set = set() for mgmtclass in self.mgmtclasses: _mgmtclass = self.handle.find_mgmtclass(name=mgmtclass) for package in _mgmtclass.packages: package_set.add(package) for file in _mgmtclass.files: file_set.add(file) # Generate LDAP data if self.get_cobbler_resource("ldap_enabled"): if self.system.ldap_type in [ "", "none" ]: utils.die(self.logger, "LDAP management type not set for this system (%s, %s)" % (self.system.ldap_type, self.system.name)) else: template = utils.get_ldap_template(self.system.ldap_type) t = Template(file=template, searchList=[self.host_vars]) print t config_data['ldap_data'] = t.respond() # Generate Package data pkg_data = {} for package in package_set: _package = self.handle.find_package(name=package) if _package is None: raise CX('%s package resource is not defined' % package) else: pkg_data[package] = {} pkg_data[package]['action'] = self.resolve_resource_var(_package.action) pkg_data[package]['installer'] = _package.installer pkg_data[package]['version'] = self.resolve_resource_var(_package.version) if pkg_data[package]['version'] != "": pkg_data[package]["install_name"] = "%s-%s" % (package, pkg_data[package]['version']) else: pkg_data[package]["install_name"] = package config_data['packages'] = pkg_data # Generate File data file_data = {} for file in file_set: _file = self.handle.find_file(name=file) if _file is None: raise CX('%s file resource is not defined' % file) file_data[file] = {} file_data[file]['is_dir'] = _file.is_dir file_data[file]['action'] = self.resolve_resource_var(_file.action) file_data[file]['group'] = self.resolve_resource_var(_file.group) file_data[file]['mode'] = self.resolve_resource_var(_file.mode) file_data[file]['owner'] = self.resolve_resource_var(_file.owner) file_data[file]['path'] = self.resolve_resource_var(_file.path) if not _file.is_dir: file_data[file]['template'] = self.resolve_resource_var(_file.template) try: t = Template(file=file_data[file]['template'], searchList=[self.host_vars]) file_data[file]['content'] = t.respond() except: utils.die(self.logger, "Missing template for this file resource %s" % (file_data[file])) config_data['files'] = file_data return config_data #---------------------------------------------------------------------- def gen_config_data_for_koan(self): """Encode configuration data. 
Return json object for Koan.""" json_config_data = json.JSONEncoder(sort_keys=True, indent=4).encode(self.gen_config_data()) return json_config_datacobbler-2.4.1/cobbler/couch.py000066400000000000000000000060031227367477500162340ustar00rootroot00000000000000 import httplib, simplejson # http://cheeseshop.python.org/pypi/simplejson # Here only used for prettyprinting def prettyPrint(s): """Prettyprints the json response of an HTTPResponse object""" # HTTPResponse instance -> Python object -> str print simplejson.dumps(simplejson.loads(s.read()), sort_keys=True, indent=4) class Couch: """Basic wrapper class for operations on a couchDB""" def __init__(self, host, port=5984, options=None): self.host = host self.port = port def connect(self): return httplib.HTTPConnection(self.host, self.port) # No close() # Database operations def createDb(self, dbName): """Creates a new database on the server""" r = self.put(''.join(['/',dbName,'/']), "") return r.read() def deleteDb(self, dbName): """Deletes the database on the server""" r = self.delete(''.join(['/',dbName,'/'])) return r.read() def listDb(self): """List the databases on the server""" r = self.get('/_all_dbs') return r.read() def infoDb(self, dbName): """Returns info about the couchDB""" r = self.get(''.join(['/', dbName, '/'])) return r.read() # Document operations def listDoc(self, dbName): """List all documents in a given database""" r = self.get(''.join(['/', dbName, '/', '_all_docs'])) return r.read() def openDoc(self, dbName, docId): """Open a document in a given database""" r = self.get(''.join(['/', dbName, '/', docId,])) return r.read() def saveDoc(self, dbName, body, docId=None): """Save/create a document to/in a given database""" if docId: r = self.put(''.join(['/', dbName, '/', docId]), body) else: r = self.post(''.join(['/', dbName, '/']), body) return r.read() def deleteDoc(self, dbName, docId): # XXX Crashed if resource is non-existent; not so for DELETE on db. Bug? # XXX Does not work any more, on has to specify an revid # Either do html head to get the recten revid or provide it as parameter r = self.delete(''.join(['/', dbName, '/', docId])) return r.read() # Basic http methods def get(self, uri): c = self.connect() headers = {"Accept": "application/json"} c.request("GET", uri, None, headers) return c.getresponse() def post(self, uri, body): c = self.connect() headers = {"Content-type": "application/json"} c.request('POST', uri, body, headers) return c.getresponse() def put(self, uri, body): c = self.connect() if len(body) > 0: headers = {"Content-type": "application/json"} c.request("PUT", uri, body, headers) else: c.request("PUT", uri, body) return c.getresponse() def delete(self, uri): c = self.connect() c.request("DELETE", uri) return c.getresponse() cobbler-2.4.1/cobbler/field_info.py000066400000000000000000000150471227367477500172410ustar00rootroot00000000000000""" Describes additional properties of cobbler fields otherwise defined in item_*.py. These values are common to all versions of the fields, so they don't have to be repeated in each file. Copyright 2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ # fields that display as a text area in the web app # note: not the same as a text field, this is the big one. USES_TEXTAREA = [ "comment", "mgmt_parameters", "template_files", "fetchable_files", "boot_files" ] # fields that use a multi select in the web app USES_MULTI_SELECT = [ "repos", "mgmt_classes", "packages", "files", ] # fields that use a select in the web app USES_SELECT = [ "profile", "distro", "image", "image_type", "virt_type", "arch", "*interface_type", "parent", "breed", "os_version", "status", "power_type", ] # fields that should use the checkbox in the web app USES_CHECKBOX = [ "enable_gpxe", "enable_menu", "*netboot_enabled", "netboot_enabled", "*static", "*management", "management", "ipv6_autoconfiguration", "keep_updated", "mirror_locally", "virt_auto_boot", "virt_pxe_boot", "*repos_enabled", "repos_enabled", "*ldap_enabled", "ldap_enabled", "*monit_enabled", "monit_enabled", "is_definition", ] # select killed the radio button # we should not make anything use a radio button, we hate radio buttons. USES_RADIO = [ ] # this is the map of what color to color code each field type. # it may also be used to expand/collapse certain web elements as a set. BLOCK_MAPPINGS = { "virt_ram" : "Virtualization", "virt_disk" : "Virtualization", "virt_cpus" : "Virtualization", "virt_bridge" : "Virtualization", "virt_path" : "Virtualization", "virt_file_size" : "Virtualization", "virt_disk_driver": "Virtualization", "virt_type" : "Virtualization", "virt_auto_boot" : "Virtualization", "virt_pxe_boot" : "Virtualization", "virt_host" : "Virtualization", "virt_group" : "Virtualization", "virt_guests" : "Virtualization", "*virt_ram" : "Virtualization", "*virt_disk" : "Virtualization", "*virt_path" : "Virtualization", "*virt_cpus" : "Virtualization", "*virt_bridge" : "Networking", "*virt_type" : "Virtualization", "*virt_file_size" : "Virtualization", "*virt_disk_driver" : "Virtualization", "power_id" : "Power Management", "power_address" : "Power Management", "power_user" : "Power Management", "power_pass" : "Power Management", "power_type" : "Power Management", "address" : "Networking", # from network "cidr" : "Networking", # ditto "broadcast" : "Networking", # .. "reserved" : "Networking", # .. 
"*mac_address" : "Networking", "network_widget_c": "Networking", "*mtu" : "Networking", "*ip_address" : "Networking", "*dhcp_tag" : "Networking", "*static" : "Networking", "*interface_type" : "Networking", "*interface_master" : "Networking", "*bonding_opts" : "Networking", "*bridge_opts" : "Networking", "*management" : "Networking", "*dns_name" : "Networking", "*cnames" : "Networking", "*static_routes" : "Networking", "*netmask" : "Networking", "*if_gateway" : "Networking", "*ipv6_address" : "Networking", "*ipv6_secondaries" : "Networking", "*ipv6_mtu" : "Networking", "*ipv6_static_routes" : "Networking", "*ipv6_default_gateway" : "Networking", "hostname" : "Networking (Global)", "gateway" : "Networking (Global)", "name_servers" : "Networking (Global)", "name_servers_search" : "Networking (Global)", "ipv6_default_device" : "Networking (Global)", "ipv6_autoconfiguration" : "Networking (Global)", "proxy" : "General", "repos" : "General", "dhcp_tag" : "Advanced", "enable_gpxe" : "Advanced", "mgmt_classes" : "Management", "mgmt_parameters" : "Management", "template_files" : "Management", "boot_files" : "Management", "fetchable_files" : "Management", "network_widget_a" : "Networking", "network_widget_b" : "Networking", "server" : "Advanced", "redhat_management_key" : "Management", "redhat_management_server" : "Management", "createrepo_flags" : "Advanced", "environment" : "Advanced", "mirror_locally" : "Advanced", "priority" : "Advanced", "yumopts" : "Advanced", "apt_components" : "Advanced", "apt_dists" : "Advanced", "packages" : "Resources", "files" : "Resources", "repos_enabled" : "Management", "ldap_enabled" : "Management", "ldap_type" : "Management", "monit_enabled" : "Management", } BLOCK_MAPPINGS_ORDER = { "General" : 0, "Advanced" : 1, "Networking (Global)" : 2, "Networking" : 3, "Management" : 4, "Virtualization" : 5, "Power Management" : 6, "Resources" : 7, } # Certain legacy fields need to have different CLI options than the direct translation of their # name in the FIELDS data structure. We should not add any more of these under any conditions. ALTERNATE_OPTIONS = { "ks_meta" : "--ksmeta", "kernel_options" : "--kopts", "kernel_options_post" : "--kopts-post", } # Deprecated fields that have been renamed, but we need to account for them appearing in older # datastructs that may not have been saved since the code change DEPRECATED_FIELDS = { "subnet" : "netmask", "bonding" : "interface_type", "bonding_master" : "interface_master", } cobbler-2.4.1/cobbler/func_utils.py000066400000000000000000000021641227367477500173120ustar00rootroot00000000000000""" Misc func functions for cobbler Copyright 2006-2008, Red Hat, Inc and Others Scott Henson This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ try: import func.overlord.client as func from func.CommonErrors import Func_Client_Exception HAZFUNC=True except ImportError: HAZFUNC=False except IOError: # cant import Func because we're not root, for instance, we're likely # running from Apache and we've pulled this in from importing utils HAZFUNC=False cobbler-2.4.1/cobbler/item.py000066400000000000000000000370731227367477500161040ustar00rootroot00000000000000""" An Item is a serializable thing that can appear in a Collection Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. """ import exceptions import utils from cexceptions import * from utils import _ import pprint import fnmatch class Item: TYPE_NAME = "generic" def __init__(self,config,is_subobject=False): """ Constructor. Requires a back reference to the Config management object. NOTE: is_subobject is used for objects that allow inheritance in their trees. This inheritance refers to conceptual inheritance, not Python inheritance. Objects created with is_subobject need to call their set_parent() method immediately after creation and pass in a value of an object of the same type. Currently this is only supported for profiles. Subobjects blend their data with their parent objects and only require a valid parent name and a name for themselves, so other required options can be gathered from items further up the cobbler tree. Old cobbler: New cobbler: distro distro profile profile system profile <-- created with is_subobject=True system <-- created as normal For consistancy, there is some code supporting this in all object types, though it is only usable (and only should be used) for profiles at this time. Objects that are children of objects of the same type (i.e. subprofiles) need to pass this in as True. Otherwise, just use False for is_subobject and the parent object will (therefore) have a different type. """ self.config = config self.settings = self.config._settings self.clear(is_subobject) # reset behavior differs for inheritance cases self.parent = '' # all objects by default are not subobjects self.children = {} # caching for performance reasons, not serialized self.log_func = self.config.api.log self.ctime = 0 # to be filled in by collection class self.mtime = 0 # to be filled in by collection class self.uid = "" # to be filled in by collection class self.last_cached_mtime = 0 self.cached_datastruct = "" def clear(self,is_subobject=False): """ Reset this object. 
""" utils.clear_from_fields(self,self.get_fields(),is_subobject=is_subobject) def make_clone(self): raise exceptions.NotImplementedError def from_datastruct(self,seed_data): """ Modify this object to take on values in seed_data """ return utils.from_datastruct_from_fields(self,seed_data,self.get_fields()) def to_datastruct(self): return utils.to_datastruct_from_fields(self, self.get_fields()) def printable(self): return utils.printable_from_fields(self,self.get_fields()) def remote_methods(self): return utils.get_remote_methods_from_fields(self,self.get_fields()) def set_uid(self,uid): self.uid = uid def get_children(self,sorted=True): """ Get direct children of this object. """ keys = self.children.keys() if sorted: keys.sort() results = [] for k in keys: results.append(self.children[k]) return results def get_descendants(self): """ Get objects that depend on this object, i.e. those that would be affected by a cascading delete, etc. """ results = [] kids = self.get_children(sorted=False) results.extend(kids) for kid in kids: grandkids = kid.get_descendants() results.extend(grandkids) return results def get_parent(self): """ For objects with a tree relationship, what's the parent object? """ return None def get_conceptual_parent(self): """ The parent may just be a superclass for something like a subprofile. Get the first parent of a different type. """ # FIXME: this is a workaround to get the type of an instance var # what's a more clean way to do this that's python 2.3 friendly? # this returns something like: cobbler.item_system.System mtype = str(self).split(" ")[0][1:] parent = self.get_parent() while parent is not None: ptype = str(parent).split(" ")[0][1:] if mtype != ptype: self.conceptual_parent = parent return parent parent = parent.get_parent() return None def set_name(self,name): """ All objects have names, and with the exception of System they aren't picky about it. """ if self.name not in ["",None] and self.parent not in ["",None] and self.name == self.parent: raise CX(_("self parentage is weird")) if not isinstance(name, basestring): raise CX(_("name must be a string")) for x in name: if not x.isalnum() and not x in [ "_", "-", ".", ":", "+" ] : raise CX(_("invalid characters in name: '%s'" % name)) self.name = name return True def set_comment(self, comment): if comment is None: comment = "" self.comment = comment return True def set_owners(self,data): """ The owners field is a comment unless using an authz module that pays attention to it, like authz_ownership, which ships with Cobbler but is off by default. Consult the Wiki docs for more info on CustomizableAuthorization. """ owners = utils.input_string_or_list(data) self.owners = owners return True def set_kernel_options(self,options,inplace=False): """ Kernel options are a space delimited list, like 'a=b c=d e=f g h i=j' or a hash. """ (success, value) = utils.input_string_or_hash(options,allow_multiples=True) if not success: raise CX(_("invalid kernel options")) else: if inplace: for key in value.keys(): if key.startswith("~"): del self.kernel_options[key[1:]] else: self.kernel_options[key] = value[key] else: self.kernel_options = value return True def set_kernel_options_post(self,options,inplace=False): """ Post kernel options are a space delimited list, like 'a=b c=d e=f g h i=j' or a hash. 
""" (success, value) = utils.input_string_or_hash(options,allow_multiples=True) if not success: raise CX(_("invalid post kernel options")) else: if inplace: for key in value.keys(): if key.startswith("~"): del self.self.kernel_options_post[key[1:]] else: self.kernel_options_post[key] = value[key] else: self.kernel_options_post = value return True def set_ks_meta(self,options,inplace=False): """ A comma delimited list of key value pairs, like 'a=b,c=d,e=f' or a hash. The meta tags are used as input to the templating system to preprocess kickstart files """ (success, value) = utils.input_string_or_hash(options,allow_multiples=True) if not success: return False else: if inplace: for key in value.keys(): if key.startswith("~"): del self.ks_meta[key[1:]] else: self.ks_meta[key] = value[key] else: self.ks_meta = value return True def set_mgmt_classes(self,mgmt_classes): """ Assigns a list of configuration management classes that can be assigned to any object, such as those used by Puppet's external_nodes feature. """ mgmt_classes_split = utils.input_string_or_list(mgmt_classes) self.mgmt_classes = utils.input_string_or_list(mgmt_classes_split) return True def set_mgmt_parameters(self,mgmt_parameters): """ A YAML string which can be assigned to any object, this is used by Puppet's external_nodes feature. """ if mgmt_parameters == "<>": self.mgmt_parameters = mgmt_parameters else: import yaml data = yaml.safe_load(mgmt_parameters) if type(data) is not dict: raise CX(_("Input YAML in Puppet Parameter field must evaluate to a dictionary.")) self.mgmt_parameters = data return True def set_template_files(self,template_files,inplace=False): """ A comma seperated list of source=destination templates that should be generated during a sync. """ (success, value) = utils.input_string_or_hash(template_files,allow_multiples=False) if not success: return False else: if inplace: for key in value.keys(): if key.startswith("~"): del self.template_files[key[1:]] else: self.template_files[key] = value[key] else: self.template_files = value return True def set_boot_files(self,boot_files,inplace=False): """ A comma seperated list of req_name=source_file_path that should be fetchable via tftp """ (success, value) = utils.input_string_or_hash(boot_files,allow_multiples=False) if not success: return False else: if inplace: for key in value.keys(): if key.startswith("~"): del self.boot_files[key[1:]] else: self.boot_files[key] = value[key] else: self.boot_files= value return True def set_fetchable_files(self,fetchable_files,inplace=False): """ A comma seperated list of virt_name=path_to_template that should be fetchable via tftp or a webserver """ (success, value) = utils.input_string_or_hash(fetchable_files,allow_multiples=False) if not success: return False else: if inplace: for key in value.keys(): if key.startswith("~"): del self.fetchable_files[key[1:]] else: self.fetchable_files[key] = value[key] else: self.fetchable_files= value return True def sort_key(self,sort_fields=[]): data = self.to_datastruct() return [data.get(x,"") for x in sort_fields] def find_match(self,kwargs,no_errors=False): # used by find() method in collection.py data = self.to_datastruct() for (key, value) in kwargs.iteritems(): # Allow ~ to negate the compare if value is not None and value.startswith("~"): res=not self.find_match_single_key(data,key,value[1:],no_errors) else: res=self.find_match_single_key(data,key,value,no_errors) if not res: return False return True def find_match_single_key(self,data,key,value,no_errors=False): # special 
case for systems key_found_already = False if data.has_key("interfaces"): if key in [ "mac_address", "ip_address", "subnet", "netmask", "virt_bridge", \ "dhcp_tag", "dns_name", "static_routes", "interface_type", \ "interface_master", "bonding_opts", "bridge_opts", "bonding", "bonding_master" ]: if key == "bonding": key = "interface_type" # bonding is deprecated elif key == "bonding_master": key = "interface_master" # bonding_master is deprecated key_found_already = True for (name, interface) in data["interfaces"].iteritems(): if value is not None and interface.has_key(key): if self.__find_compare(interface[key], value): return True if not data.has_key(key): if not key_found_already: if not no_errors: # FIXME: removed for 2.0 code, shouldn't cause any problems to not have an exception here? # raise CX(_("searching for field that does not exist: %s" % key)) return False else: if value is not None: # FIXME: new? return False if value is None: return True else: return self.__find_compare(value, data[key]) def __find_compare(self, from_search, from_obj): if isinstance(from_obj, basestring): # FIXME: fnmatch is only used for string to string comparisions # which should cover most major usage, if not, this deserves fixing if fnmatch.fnmatch(from_obj.lower(), from_search.lower()): return True else: return False else: if isinstance(from_search, basestring): if type(from_obj) == type([]): from_search = utils.input_string_or_list(from_search) for x in from_search: if x not in from_obj: return False return True if type(from_obj) == type({}): (junk, from_search) = utils.input_string_or_hash(from_search,allow_multiples=True) for x in from_search.keys(): y = from_search[x] if not from_obj.has_key(x): return False if not (y == from_obj[x]): return False return True if type(from_obj) == type(True): if from_search.lower() in [ "true", "1", "y", "yes" ]: inp = True else: inp = False if inp == from_obj: return True return False raise CX(_("find cannot compare type: %s") % type(from_obj)) def dump_vars(self,data,format=True): raw = utils.blender(self.config.api, False, self) if format: return pprint.pformat(raw) else: return raw def set_depth(self,depth): self.depth = depth def set_ctime(self,ctime): self.ctime = ctime def set_mtime(self,mtime): self.mtime = mtime def set_parent(self,parent): self.parent = parent def check_if_valid(self): """ Raise exceptions if the object state is inconsistent """ if self.name is None or self.name == "": raise CX("Name is required") cobbler-2.4.1/cobbler/item_distro.py000066400000000000000000000257041227367477500174660ustar00rootroot00000000000000""" A cobbler distribution. A distribution is a kernel, and initrd, and potentially some kernel options. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import item import weakref import os import time from cexceptions import * from utils import _ # the fields has controls what data elements are part of each object. To add a new field, just add a new # entry to the list following some conventions to be described later. You must also add a method called # set_$fieldname. Do not write a method called get_$fieldname, that will not be called. # # name | default | subobject default | display name | editable? | tooltip | values ? | type # # name -- what the filed should be called. For the command line, underscores will be replaced with # a hyphen programatically, so use underscores to seperate things that are seperate words # # default value -- when a new object is created, what is the default value for this field? # # subobject default -- this applies ONLY to subprofiles, and is most always set to <>. If this # is not item_profile.py it does not matter. # # display name -- how the field shows up in the web application and the "cobbler report" command # # editable -- should the field be editable in the CLI and web app? Almost always yes unless # it is an internalism. Fields that are not editable are "hidden" # # tooltip -- the caption to be shown in the web app or in "commandname --help" in the CLI # # values -- for fields that have a limited set of valid options and those options are always fixed # (such as architecture type), the list of valid options goes in this field. This should # almost always be a constant from codes.py # # type -- the type of the field. Used to determine which HTML form widget is used in the web interface # # you will also notice some names start with "*" ... this denotes that the fields belong to # interfaces, and only item_system.py should have these. Each system may have multiple interfaces. # # the order in which the fields are listed (for all non-hidden fields) are the order they will # appear in the web application (top to bottom). The command line sorts fields alphabetically. # # field_info.py also contains a set of "Groups" that describe what other fields are associated with # what other fields. This affects color coding and other display hints. If you add a field # please edit field_info.py carefully to match. # # additional: see field_info.py for some display hints. By default in the web app all fields # are text fields unless field_info.py lists the field in one of those hashes. # # hidden fields should not be added without just cause, explanations about these are: # # ctime, mtime -- times the object was modified, used internally by cobbler for API purposes # uid -- also used for some external API purposes # source_repos -- an artifiact of import, this is too complicated to explain on IRC so we just hide it # for RHEL split repos, this is a list of each of them in the install tree, used to generate # repo lines in the kickstart to allow installation of x>=RHEL5. Otherwise unimportant. # depth -- used for "cobbler list" to print the tree, makes it easier to load objects from disk also # tree_build_time -- loaded from import, this is not useful to many folks so we just hide it. Avail over API. 
# # so to add new fields # (A) understand the above # (B) add a field below # (C) add a set_fieldname method # (D) you do not need to modify the CLI or webapp # # in general the set_field_name method should raise exceptions on invalid fields, always. There are adtl # validation fields in is_valid to check to see that two seperate fields do not conflict, but in general # design issues that require this should be avoided forever more, and there are few exceptions. Cobbler # must operate as normal with the default value for all fields and not choke on the default values. FIELDS = [ [ "name","",0,"Name",True,"Ex: Fedora-11-i386",0,"str"], ["ctime",0,0,"",False,"",0,"float"], ["mtime",0,0,"",False,"",0,"float"], [ "uid","",0,"",False,"",0,"str"], [ "owners","SETTINGS:default_ownership",0,"Owners",True,"Owners list for authz_ownership (space delimited)",0,"list"], [ "kernel",None,0,"Kernel",True,"Absolute path to kernel on filesystem",0,"str"], [ "initrd",None,0,"Initrd",True,"Absolute path to kernel on filesystem",0,"str"], [ "kernel_options",{},0,"Kernel Options",True,"Ex: selinux=permissive",0,"dict"], [ "kernel_options_post",{},0,"Kernel Options (Post Install)",True,"Ex: clocksource=pit noapic",0,"dict"], [ "ks_meta",{},0,"Kickstart Metadata",True,"Ex: dog=fang agent=86", 0,"dict"], [ "arch",'i386',0,"Architecture",True,"", ['i386','x86_64','ia64','ppc','ppc64','s390', 'arm'],"str"], [ "breed",'redhat',0,"Breed",True,"What is the type of distribution?",utils.get_valid_breeds(),"str"], [ "os_version","generic26",0,"OS Version",True,"Needed for some virtualization optimizations",utils.get_valid_os_versions(),"str"], [ "source_repos",[],0,"Source Repos", False,"",0,"list"], [ "depth",0,0,"Depth",False,"",0,"int"], [ "comment","",0,"Comment",True,"Free form text description",0,"str"], [ "tree_build_time",0,0,"Tree Build Time",False,"",0,"str"], [ "mgmt_classes",[],0,"Management Classes",True,"Management classes for external config management",0,"list"], [ "boot_files",{},0,"TFTP Boot Files",True,"Files copied into tftpboot beyond the kernel/initrd",0,"list"], [ "fetchable_files",{},0,"Fetchable Files",True,"Templates for tftp or wget",0,"list"], [ "template_files",{},0,"Template Files",True,"File mappings for built-in config management",0,"list"], [ "redhat_management_key","<>",0,"Red Hat Management Key",True,"Registration key for RHN, Spacewalk, or Satellite",0,"str"], [ "redhat_management_server", "<>",0,"Red Hat Management Server",True,"Address of Spacewalk or Satellite Server",0,"str"] ] class Distro(item.Item): TYPE_NAME = _("distro") COLLECTION_TYPE = "distro" def __init__(self, *args, **kwargs): item.Item.__init__(self, *args, **kwargs) self.ks_meta = {} self.source_repos = [] def make_clone(self): ds = self.to_datastruct() cloned = Distro(self.config) cloned.from_datastruct(ds) return cloned def get_fields(self): return FIELDS def get_parent(self): """ Return object next highest up the tree. NOTE: conceptually there is no need for subdistros """ return None def set_kernel(self,kernel): """ Specifies a kernel. The kernel parameter is a full path, a filename in the configured kernel directory (set in /etc/cobbler.conf) or a directory path that would contain a selectable kernel. Kernel naming conventions are checked, see docs in the utils module for find_kernel. 
""" if kernel is None or kernel == "": raise CX("kernel not specified") if utils.find_kernel(kernel): self.kernel = kernel return True raise CX("kernel not found: %s" % kernel) def set_tree_build_time(self, datestamp): """ Sets the import time of the distro. If not imported, this field is not meaningful. """ self.tree_build_time = float(datestamp) return True def set_breed(self, breed): return utils.set_breed(self,breed) def set_os_version(self, os_version): return utils.set_os_version(self,os_version) def set_initrd(self,initrd): """ Specifies an initrd image. Path search works as in set_kernel. File must be named appropriately. """ if initrd is None or initrd == "": raise CX("initrd not specified") if utils.find_initrd(initrd): self.initrd = initrd return True raise CX(_("initrd not found")) def set_redhat_management_key(self,key): return utils.set_redhat_management_key(self,key) def set_redhat_management_server(self,server): return utils.set_redhat_management_server(self,server) def set_source_repos(self, repos): """ A list of http:// URLs on the cobbler server that point to yum configuration files that can be used to install core packages. Use by cobbler import only. """ self.source_repos = repos def set_arch(self,arch): """ The field is mainly relevant to PXE provisioning. Should someone have Itanium machines on a network, having syslinux (pxelinux.0) be the only option in the config file causes problems. Using an alternative distro type allows for dhcpd.conf templating to "do the right thing" with those systems -- this also relates to bootloader configuration files which have different syntax for different distro types (because of the bootloaders). This field is named "arch" because mainly on Linux, we only care about the architecture, though if (in the future) new provisioning types are added, an arch value might be something like "bsd_x86". Update: (7/2008) this is now used to build fake PXE trees for s390x also """ return utils.set_arch(self,arch) def check_if_valid(self): if self.name is None: raise CX("name is required") if self.kernel is None: raise CX("Error with distro %s - kernel is required" % (self.name)) if self.initrd is None: raise CX("Error with distro %s - initrd is required" % (self.name)) if utils.file_is_remote(self.kernel): if not utils.remote_file_exists(self.kernel): raise CX("Error with distro %s - kernel '%s' not found" % (self.name,self.kernel)) elif not os.path.exists(self.kernel): raise CX("Error with distro %s - kernel not found" % (self.name)) if utils.file_is_remote(self.initrd): if not utils.remote_file_exists(self.initrd): raise CX("Error with distro %s - initrd path not found" % (self.name)) elif not os.path.exists(self.initrd): raise CX("Error with distro %s - initrd path not found" % (self.name)) cobbler-2.4.1/cobbler/item_file.py000066400000000000000000000061021227367477500170700ustar00rootroot00000000000000""" Copyright 2006-2009, MadHatter Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import resource import utils from utils import _ from cexceptions import CX # this datastructure is described in great detail in item_distro.py -- read the comments there. FIELDS = [ [ "uid","",0,"",False,"",0,"str"], ["depth",2,0,"",False,"",0,"float"], ["comment","",0,"Comment",True,"Free form text description",0,"str"], ["ctime",0,0,"",False,"",0,"float"], ["mtime",0,0,"",False,"",0,"float"], ["owners","SETTINGS:default_ownership",0,"Owners",False,"Owners list for authz_ownership (space delimited)",[],"list"], ["name","",0,"Name",True,"Name of file resource",0,"str"], ["is_dir",False,0,"Is Directory",True,"Treat file resource as a directory",0,"bool"], ["action","create",0,"Action",True,"Create or remove file resource",0,"str"], ["group","",0,"Group",True,"The group owner of the file",0,"str"], ["mode","",0,"Mode",True,"The mode of the file",0,"str"], ["owner","",0,"Owner",True,"The owner for the file",0,"str"], ["path","",0,"Path",True,"The path for the file",0,"str"], ["template","",0,"Template",True,"The template for the file",0,"str"] ] class File(resource.Resource): TYPE_NAME = _("file") COLLECTION_TYPE = "file" def make_clone(self): ds = self.to_datastruct() cloned = File(self.config) cloned.from_datastruct(ds) return cloned def get_fields(self): return FIELDS def set_is_dir(self,is_dir): """ If true, treat file resource as a directory. Templates are ignored. """ self.is_dir = utils.input_boolean(is_dir) return True def check_if_valid(self): """ Insure name, path, owner, group, and mode are set. Templates are only required for files, is_dir = False """ if self.name is None or self.name == "": raise CX("name is required") if self.path is None or self.path == "": raise CX("path is required") if self.owner is None or self.owner == "": raise CX("owner is required") if self.group is None or self.group == "": raise CX("group is required") if self.mode is None or self.mode == "": raise CX("mode is required") if self.is_dir == False and self.template == "": raise CX("Template is required when not a directory") cobbler-2.4.1/cobbler/item_image.py000066400000000000000000000175321227367477500172440ustar00rootroot00000000000000""" A Cobbler Image. Tracks a virtual or physical image, as opposed to a answer file (kickstart) led installation. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import string import utils import item import time from cexceptions import * from utils import _ # this datastructure is described in great detail in item_distro.py -- read the comments there. 
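# Illustrative sketch (not upstream code): the File resource defined in item_file.py above
# requires name, path, owner, group, mode and -- when is_dir is False -- a template before
# check_if_valid() passes. The object name and paths below are hypothetical, and the
# set_<fieldname> setters are assumed to follow the usual convention supplied by
# resource.Resource; "config" is assumed to be a Config handle as in config.py.
#
#   fileobj = config.new_file()
#   fileobj.set_name("motd")
#   fileobj.set_path("/etc/motd")
#   fileobj.set_owner("root")
#   fileobj.set_group("root")
#   fileobj.set_mode("0644")
#   fileobj.set_template("/var/lib/cobbler/templates/motd.template")
#   fileobj.check_if_valid()   # raises CX if a required field is missing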
FIELDS = [ ['name','',0,"Name",True,"",0,"str"], ['arch','i386',0,"Architecture",True,"",["i386","x86_64","ia64","s390","ppc","ppc64", "arm"],"str"], ['breed','redhat',0,"Breed",True,"",utils.get_valid_breeds(),"str"], ['comment','',0,"Comment",True,"Free form text description",0,"str"], ['ctime',0,0,"",False,"",0,"float"], ['mtime',0,0,"",False,"",0,"float"], ['file','',0,"File",True,"Path to local file or nfs://user@host:path",0,"str"], ['depth',0,0,"",False,"",0,"int"], ['image_type',"iso",0,"Image Type",True,"", ["iso","direct","memdisk","virt-image"],"str"], #FIXME:complete? ['network_count',1,0,"Virt NICs",True,"",0,"int"], ['os_version','',0,"OS Version",True,"ex: rhel4",utils.get_valid_os_versions(),"str"], ['owners',"SETTINGS:default_ownership",0,"Owners",True,"Owners list for authz_ownership (space delimited)",[],"list"], ['parent','',0,"",False,"",0,"str"], ['kickstart','',0,"Kickstart",True,"Path to kickstart/answer file template",0,"str"], ['virt_auto_boot',"SETTINGS:virt_auto_boot",0,"Virt Auto Boot",True,"Auto boot this VM?",0,"bool"], ['virt_bridge',"SETTINGS:default_virt_bridge",0,"Virt Bridge",True,"",0,"str"], ['virt_cpus',1,0,"Virt CPUs",True,"",0,"int"], ['virt_file_size',"SETTINGS:default_virt_file_size",0,"Virt File Size (GB)",True,"",0,"float"], ["virt_disk_driver","SETTINGS:default_virt_disk_driver",0,"Virt Disk Driver Type",True,"The on-disk format for the virtualization disk","raw","str"], ['virt_path','',0,"Virt Path",True,"Ex: /directory or VolGroup00",0,"str"], ['virt_ram',"SETTINGS:default_virt_ram",0,"Virt RAM (MB)",True,"",0,"int"], ['virt_type',"SETTINGS:default_virt_type",0,"Virt Type",True,"",["xenpv","xenfv","qemu","kvm", "vmware"],"str"], ['uid',"",0,"",False,"",0,"str"] ] class Image(item.Item): TYPE_NAME = _("image") COLLECTION_TYPE = "image" def make_clone(self): ds = self.to_datastruct() cloned = Image(self.config) cloned.from_datastruct(ds) return cloned def get_fields(self): return FIELDS def set_arch(self,arch): """ The field is mainly relevant to PXE provisioning. see comments for set_arch in item_distro.py, this works the same. """ return utils.set_arch(self,arch) def set_kickstart(self,kickstart): """ It may not make sense for images to have kickstarts. It really doesn't. However if the image type is 'iso' koan can create a virtual floppy and shove an answer file on it, to script an installation. This may not be a kickstart per se, it might be a windows answer file (SIF) etc. """ if kickstart is None or kickstart == "" or kickstart == "delete": self.kickstart = "" return True kickstart = utils.find_kickstart(kickstart) if kickstart: self.kickstart = kickstart return True raise CX(_("kickstart not found for image")) def set_file(self,filename): """ Stores the image location. This should be accessible on all nodes that need to access it. Format: can be one of the following: * username:password@hostname:/path/to/the/filename.ext * username@hostname:/path/to/the/filename.ext * hostname:/path/to/the/filename.ext * /path/to/the/filename.ext """ uri = "" scheme = auth = hostname = path = "" # we'll discard the protocol if it's supplied, for legacy support if filename.find("://") != -1: scheme, uri = filename.split("://") filename = uri else: uri = filename if filename.find("@") != -1: auth, filename = filename.split("@") # extract the hostname # 1. if we have a colon, then everything before it is a hostname # 2. 
if we don't have a colon, then check if we had a scheme; if # we did, then grab all before the first forward slash as the # hostname; otherwise, we've got a bad file if filename.find(":") != -1: hostname, filename = filename.split(":") elif filename[0] != '/': if len(scheme) > 0: index = filename.find("/") hostname = filename[:index] filename = filename[index:] else: raise CX(_("invalid file: %s" % filename)) # raise an exception if we don't have a valid path if len(filename) > 0 and filename[0] != '/': raise CX(_("file contains an invalid path: %s" % filename)) if filename.find("/") != -1: path, filename = filename.rsplit("/", 1) if len(filename) == 0: raise CX(_("missing filename")) if len(auth) > 0 and len(hostname) == 0: raise CX(_("a hostname must be specified with authentication details")) self.file = uri return True def set_os_version(self,os_version): return utils.set_os_version(self,os_version) def set_breed(self,breed): return utils.set_breed(self,breed) def set_image_type(self,image_type): """ Indicates what type of image this is. direct = something like "memdisk", physical only iso = a bootable ISO that pxe's or can be used for virt installs, virtual only virt-clone = a cloned virtual disk (FIXME: not yet supported), virtual only memdisk = hdd image (physical only) """ if not image_type in self.get_valid_image_types(): raise CX(_("image type must be on of the following: %s") % string.join(self.get_valid_image_types(),", ")) self.image_type = image_type return True def set_virt_cpus(self,num): return utils.set_virt_cpus(self,num) def set_network_count(self, num): if num is None or num == "": num = 1 try: self.network_count = int(num) except: raise CX("invalid network count (%s)" % num) return True def set_virt_auto_boot(self,num): return utils.set_virt_auto_boot(self,num) def set_virt_file_size(self,num): return utils.set_virt_file_size(self,num) def set_virt_disk_driver(self,driver): return utils.set_virt_disk_driver(self,driver) def set_virt_ram(self,num): return utils.set_virt_ram(self,num) def set_virt_type(self,vtype): return utils.set_virt_type(self,vtype) def set_virt_bridge(self,vbridge): return utils.set_virt_bridge(self,vbridge) def set_virt_path(self,path): return utils.set_virt_path(self,path) def get_valid_image_types(self): return ["direct","iso","memdisk","virt-clone"] def get_parent(self): """ Return object next highest up the tree. """ return None # no parent cobbler-2.4.1/cobbler/item_mgmtclass.py000066400000000000000000000066441227367477500201560ustar00rootroot00000000000000""" Copyright 2010, Kelsey Hightower Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import item from cexceptions import CX from utils import _ # this datastructure is described in great detail in item_distro.py -- read the comments there. 
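# Illustrative sketch (not upstream code): Image.set_file() above accepts the location
# formats listed in its docstring. The hostnames and paths here are hypothetical, and
# "img" is assumed to be an Image obtained via config.new_image().
#
#   img.set_file("nfs://user@server:/images/install.iso")  # scheme discarded, auth/host/path parsed
#   img.set_file("server:/images/install.iso")             # host plus absolute path
#   img.set_file("/images/install.iso")                    # plain absolute path
#   img.set_file("install.iso")                            # raises CX: no host and not an absolute path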
FIELDS = [ [ "uid","",0,"",False,"",0,"str"], ["depth",2,0,"",False,"",0,"float"], ["name","",0,"Name",True,"Ex: F10-i386-webserver",0,"str"], ["owners","SETTINGS:default_ownership","SETTINGS:default_ownership","Owners",True,"Owners list for authz_ownership (space delimited)",0,"list"], ["comment","",0,"Comment",True,"Free form text description",0,"str"], ["ctime",0,0,"",False,"",0,"int"], ["mtime",0,0,"",False,"",0,"int"], ["class_name","",0,"Class Name",True,"Actual Class Name (leave blank to use the name field)",0,"str"], ["is_definition",False,0,"Is Definition?",True,"Treat this class as a definition (puppet only)",0,"bool"], ["params",{},0,"Parameters/Variables",True,"List of parameters/variables",0,"dict"], ["packages",[],0,"Packages",True,"Package resources",0,"list"], ["files",[],0,"Files",True,"File resources",0,"list"], ] class Mgmtclass(item.Item): TYPE_NAME = _("mgmtclass") COLLECTION_TYPE = "mgmtclass" def make_clone(self): ds = self.to_datastruct() cloned = Mgmtclass(self.config) cloned.from_datastruct(ds) return cloned def get_fields(self): return FIELDS def set_packages(self,packages): self.packages = utils.input_string_or_list(packages) return True def set_files(self,files): self.files = utils.input_string_or_list(files) return True def set_params(self,params,inplace=False): (success, value) = utils.input_string_or_hash(params,allow_multiples=True) if not success: raise CX(_("invalid parameters")) else: if inplace: for key in value.keys(): if key.startswith("~"): del self.params[key[1:]] else: self.params[key] = value[key] else: self.params = value return True def set_is_definition(self,isdef): self.is_definition = utils.input_boolean(isdef) return True def set_class_name(self,name): if not isinstance(name, basestring): raise CX(_("class name must be a string")) for x in name: if not x.isalnum() and not x in [ "_", "-", ".", ":", "+" ] : raise CX(_("invalid characters in class name: '%s'" % name)) self.class_name = name return True def check_if_valid(self): if self.name is None or self.name == "": raise CX("name is required") cobbler-2.4.1/cobbler/item_package.py000066400000000000000000000042331227367477500175470ustar00rootroot00000000000000""" Copyright 2006-2009, MadHatter Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import resource from cexceptions import CX from utils import _ # this datastructure is described in great detail in item_distro.py -- read the comments there. 
FIELDS = [ [ "uid","",0,"",False,"",0,"str"], ["depth",2,0,"",False,"",0,"float"], ["comment","",0,"Comment",True,"Free form text description",0,"str"], ["ctime",0,0,"",False,"",0,"float"], ["mtime",0,0,"",False,"",0,"float"], ["owners","SETTINGS:default_ownership",0,"Owners",True,"Owners list for authz_ownership (space delimited)",[],"list"], ["name","",0,"Name",True,"Name of file resource",0,"str"], ["action","create",0,"Action",True,"Install or remove package resource",0,"str"], ["installer","yum",0,"Installer",True,"Package Manager",0,"str"], ["version","",0,"Version",True,"Package Version",0,"str"], ] class Package(resource.Resource): TYPE_NAME = _("package") COLLECTION_TYPE = "package" def make_clone(self): ds = self.to_datastruct() cloned = Package(self.config) cloned.from_datastruct(ds) return cloned def get_fields(self): return FIELDS def set_installer(self,installer): self.installer = installer.lower() return True def set_version(self,version): self.version = version return True def check_if_valid(self): if self.name is None or self.name == "": raise CX("name is required") cobbler-2.4.1/cobbler/item_profile.py000066400000000000000000000256751227367477500176310ustar00rootroot00000000000000""" A Cobbler Profile. A profile is a reference to a distribution, possibly some kernel options, possibly some Virt options, and some kickstart data. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import item import time from cexceptions import * from utils import _ # this datastructure is described in great detail in item_distro.py -- read the comments there. 
FIELDS = [ ["name","",None,"Name",True,"Ex: F10-i386-webserver",0,"str"], ["uid","","","",False,"",0,"str"], ["owners","SETTINGS:default_ownership","SETTINGS:default_ownership","Owners",True,"Owners list for authz_ownership (space delimited)",0,"list"], ["distro",None,'<>',"Distribution",True,"Parent distribution",[],"str"], ["parent",'','',"Parent Profile",True,"",[],"str"], ["enable_gpxe","SETTINGS:enable_gpxe",0,"Enable gPXE?",True,"Use gPXE instead of PXELINUX for advanced booting options",0,"bool"], ["enable_menu","SETTINGS:enable_menu",'<>',"Enable PXE Menu?",True,"Show this profile in the PXE menu?",0,"bool"], ["kickstart","SETTINGS:default_kickstart",'<>',"Kickstart",True,"Path to kickstart template",0,"str"], ["kernel_options",{},'<>',"Kernel Options",True,"Ex: selinux=permissive",0,"dict"], ["kernel_options_post",{},'<>',"Kernel Options (Post Install)",True,"Ex: clocksource=pit noapic",0,"dict"], ["ks_meta",{},'<>',"Kickstart Metadata",True,"Ex: dog=fang agent=86",0,"dict"], ["proxy","",None,"Proxy",True,"Proxy URL",0,"str"], ["repos",[],'<>',"Repos",True,"Repos to auto-assign to this profile",[],"list"], ["comment","","","Comment",True,"Free form text description",0,"str"], ["virt_auto_boot","SETTINGS:virt_auto_boot",'<>',"Virt Auto Boot",True,"Auto boot this VM?",0,"bool"], ["virt_cpus",1,'<>',"Virt CPUs",True,"integer",0,"int"], ["virt_file_size","SETTINGS:default_virt_file_size",'<>',"Virt File Size(GB)",True,"",0,"int"], ["virt_disk_driver","SETTINGS:default_virt_disk_driver",'<>',"Virt Disk Driver Type",True,"The on-disk format for the virtualization disk","raw","str"], ["virt_ram","SETTINGS:default_virt_ram",'<>',"Virt RAM (MB)",True,"",0,"int"], ["depth",1,1,"",False,"",0,"int"], ["virt_type","SETTINGS:default_virt_type",'<>',"Virt Type",True,"Virtualization technology to use",["xenpv","xenfv","qemu", "kvm", "vmware", "openvz"],"str"], ["virt_path","",'<>',"Virt Path",True,"Ex: /directory OR VolGroup00",0,"str"], ["virt_bridge","SETTINGS:default_virt_bridge",'<>',"Virt Bridge",True,"",0,"str"], ["dhcp_tag","default",'<>',"DHCP Tag",True,"See manpage or leave blank",0,"str"], ["server","<>",'<>',"Server Override",True,"See manpage or leave blank",0,"str"], ["ctime",0,0,"",False,"",0,"int"], ["mtime",0,0,"",False,"",0,"int"], ["name_servers","SETTINGS:default_name_servers",[],"Name Servers",True,"space delimited",0,"list"], ["name_servers_search","SETTINGS:default_name_servers_search",[],"Name Servers Search Path",True,"space delimited",0,"list"], ["mgmt_classes",[],'<>',"Management Classes",True,"For external configuration management",0,"list"], ["mgmt_parameters","<>","<>","Management Parameters",True,"Parameters which will be handed to your management application (Must be valid YAML dictionary)", 0,"str"], ["boot_files",{},'<>',"TFTP Boot Files",True,"Files copied into tftpboot beyond the kernel/initrd",0,"list"], ["fetchable_files",{},'<>',"Fetchable Files",True,"Templates for tftp or wget",0,"dict"], ["template_files",{},'<>',"Template Files",True,"File mappings for built-in config management",0,"dict"], ["redhat_management_key","<>","<>","Red Hat Management Key",True,"Registration key for RHN, Spacewalk, or Satellite",0,"str"], ["redhat_management_server","<>","<>","Red Hat Management Server",True,"Address of Spacewalk or Satellite Server",0,"str"], ["template_remote_kickstarts", "SETTINGS:template_remote_kickstarts", "SETTINGS:template_remote_kickstarts", "", False, "", 0, "bool"] ] class Profile(item.Item): TYPE_NAME = _("profile") COLLECTION_TYPE = "profile" def 
make_clone(self): ds = self.to_datastruct() cloned = Profile(self.config) cloned.from_datastruct(ds) return cloned def get_fields(self): return FIELDS def set_parent(self,parent_name): """ Instead of a --distro, set the parent of this object to another profile and use the values from the parent instead of this one where the values for this profile aren't filled in, and blend them together where they are hashes. Basically this enables profile inheritance. To use this, the object MUST have been constructed with is_subobject=True or the default values for everything will be screwed up and this will likely NOT work. So, API users -- make sure you pass is_subobject=True into the constructor when using this. """ old_parent = self.get_parent() if isinstance(old_parent, item.Item): old_parent.children.pop(self.name, 'pass') if parent_name is None or parent_name == '': self.parent = '' return True if parent_name == self.name: # check must be done in two places as set_parent could be called before/after # set_name... raise CX(_("self parentage is weird")) found = self.config.profiles().find(name=parent_name) if found is None: raise CX(_("profile %s not found, inheritance not possible") % parent_name) self.parent = parent_name self.depth = found.depth + 1 parent = self.get_parent() if isinstance(parent, item.Item): parent.children[self.name] = self return True def set_distro(self,distro_name): """ Sets the distro. This must be the name of an existing Distro object in the Distros collection. """ d = self.config.distros().find(name=distro_name) if d is not None: old_parent = self.get_parent() if isinstance(old_parent, item.Item): old_parent.children.pop(self.name, 'pass') self.distro = distro_name self.depth = d.depth +1 # reset depth if previously a subprofile and now top-level d.children[self.name] = self return True raise CX(_("distribution not found")) def set_redhat_management_key(self,key): return utils.set_redhat_management_key(self,key) def set_redhat_management_server(self,server): return utils.set_redhat_management_server(self,server) def set_name_servers(self,data): # FIXME: move to utils since shared with system if data == "<>": data = [] data = utils.input_string_or_list(data) self.name_servers = data return True def set_name_servers_search(self,data): if data == "<>": data = [] data = utils.input_string_or_list(data) self.name_servers_search = data return True def set_proxy(self,proxy): self.proxy = proxy return True def set_enable_gpxe(self,enable_gpxe): """ Sets whether or not the profile will use gPXE for booting. """ self.enable_gpxe = utils.input_boolean(enable_gpxe) return True def set_enable_menu(self,enable_menu): """ Sets whether or not the profile will be listed in the default PXE boot menu. This is pretty forgiving for YAML's sake. """ self.enable_menu = utils.input_boolean(enable_menu) return True def set_template_remote_kickstarts(self, template): """ Sets whether or not the server is configured to template remote kickstarts. """ self.template_remote_kickstarts = utils.input_boolean(template) return True def set_dhcp_tag(self,dhcp_tag): if dhcp_tag is None: dhcp_tag = "" self.dhcp_tag = dhcp_tag return True def set_server(self,server): if server is None or server == "": server = "<>" self.server = server return True def set_kickstart(self,kickstart): """ Sets the kickstart. This must be a NFS, HTTP, or FTP URL. Or filesystem path. Minor checking of the URL is performed here. 
""" if kickstart == "" or kickstart is None: self.kickstart = "" return True if kickstart == "<>": self.kickstart = kickstart return True kickstart = utils.find_kickstart(kickstart) if kickstart: self.kickstart = kickstart return True raise CX(_("kickstart not found: %s") % kickstart) def set_virt_auto_boot(self,num): return utils.set_virt_auto_boot(self,num) def set_virt_cpus(self,num): return utils.set_virt_cpus(self,num) def set_virt_file_size(self,num): return utils.set_virt_file_size(self,num) def set_virt_disk_driver(self,driver): return utils.set_virt_disk_driver(self,driver) def set_virt_ram(self,num): return utils.set_virt_ram(self,num) def set_virt_type(self,vtype): return utils.set_virt_type(self,vtype) def set_virt_bridge(self,vbridge): return utils.set_virt_bridge(self,vbridge) def set_virt_path(self,path): return utils.set_virt_path(self,path) def set_repos(self,repos,bypass_check=False): return utils.set_repos(self,repos,bypass_check) def get_parent(self): """ Return object next highest up the tree. """ if self.parent is None or self.parent == '': if self.distro is None: return None result = self.config.distros().find(name=self.distro) else: result = self.config.profiles().find(name=self.parent) return result def check_if_valid(self): if self.name is None or self.name == "": raise CX("name is required") distro = self.get_conceptual_parent() if distro is None: raise CX("Error with profile %s - distro is required" % (self.name)) cobbler-2.4.1/cobbler/item_repo.py000066400000000000000000000167301227367477500171260ustar00rootroot00000000000000""" A Cobbler repesentation of a yum repo. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import item from cexceptions import * from utils import _ import time import codes # this datastructure is described in great detail in item_distro.py -- read the comments there. 
FIELDS = [ ["apt_components","",0,"Apt Components (apt only)",True,"ex: main restricted universe",[],"list"], ["apt_dists","",0,"Apt Dist Names (apt only)",True,"ex: precise precise-updates",[],"list"], ["arch","",0,"Arch",True,"ex: i386, x86_64",['i386','x86_64','ia64','ppc','ppc64','s390', "arm", 'noarch', 'src'],"str"], ["breed","",0,"Breed",True,"",codes.VALID_REPO_BREEDS,"str"], ["comment","",0,"Comment",True,"Free form text description",0,"str"], ["ctime",0,0,"",False,"",0,"float"], ["depth",2,0,"",False,"",0,"float"], ["keep_updated",True,0,"Keep Updated",True,"Update this repo on next 'cobbler reposync'?",0,"bool"], ["mirror",None,0,"Mirror",True,"Address of yum or rsync repo to mirror",0,"str"], ["mtime",0,0,"",False,"",0,"float"], ["name","",0,"Name",True,"Ex: f10-i386-updates",0,"str"], ["owners","SETTINGS:default_ownership",0,"Owners",True,"Owners list for authz_ownership (space delimited)",[],"list"], ["parent",None,0,"",False,"",0,"str"], ["rpm_list",[],0,"RPM List",True,"Mirror just these RPMs (yum only)",0,"list"], ["uid",None,0,"",False,"",0,"str"], ["createrepo_flags",'<>',0,"Createrepo Flags",True,"Flags to use with createrepo",0,"dict"], ["environment",{},0,"Environment Variables",True,"Use these environment variables during commands (key=value, space delimited)",0,"dict"], ["mirror_locally",True,0,"Mirror locally",True,"Copy files or just reference the repo externally?",0,"bool"], ["priority",99,0,"Priority",True,"Value for yum priorities plugin, if installed",0,"int"], ["yumopts",{},0,"Yum Options",True,"Options to write to yum config file",0,"dict"] ] class Repo(item.Item): TYPE_NAME = _("repo") COLLECTION_TYPE = "repo" def make_clone(self): ds = self.to_datastruct() cloned = Repo(self.config) cloned.from_datastruct(ds) return cloned def get_fields(self): return FIELDS def _guess_breed(self): # backwards compatibility if (self.breed == "" or self.breed is None): if self.mirror.startswith("http://") or self.mirror.startswith("ftp://"): self.set_breed("yum") elif self.mirror.startswith("rhn://"): self.set_breed("rhn") else: self.set_breed("rsync") def set_mirror(self,mirror): """ A repo is (initially, as in right now) is something that can be rsynced. reposync/repotrack integration over HTTP might come later. """ self.mirror = mirror if self.arch is None or self.arch == "": if mirror.find("x86_64") != -1: self.set_arch("x86_64") elif mirror.find("x86") != -1 or mirror.find("i386") != -1: self.set_arch("i386") elif mirror.find("ia64") != -1: self.set_arch("ia64") elif mirror.find("s390") != -1: self.set_arch("s390x") self._guess_breed() return True def set_keep_updated(self,keep_updated): """ This allows the user to disable updates to a particular repo for whatever reason. """ self.keep_updated = utils.input_boolean(keep_updated) return True def set_yumopts(self,options,inplace=False): """ Kernel options are a space delimited list, like 'a=b c=d e=f g h i=j' or a hash. """ (success, value) = utils.input_string_or_hash(options,allow_multiples=False) if not success: raise CX(_("invalid yum options")) else: if inplace: for key in value.keys(): self.yumopts[key] = value[key] else: self.yumopts = value return True def set_environment(self,options,inplace=False): """ Yum can take options from the environment. This puts them there before each reposync. 
""" (success, value) = utils.input_string_or_hash(options,allow_multiples=False) if not success: raise CX(_("invalid environment options")) else: if inplace: for key in value.keys(): self.environment[key] = value[key] else: self.environment = value return True def set_priority(self,priority): """ Set the priority of the repository. 1= highest, 99=default Only works if host is using priorities plugin for yum. """ try: priority = int(str(priority)) except: raise CX(_("invalid priority level: %s") % priority) self.priority = priority return True def set_rpm_list(self,rpms): """ Rather than mirroring the entire contents of a repository (Fedora Extras, for instance, contains games, and we probably don't want those), make it possible to list the packages one wants out of those repos, so only those packages + deps can be mirrored. """ self.rpm_list = utils.input_string_or_list(rpms) return True def set_createrepo_flags(self,createrepo_flags): """ Flags passed to createrepo when it is called. Common flags to use would be -c cache or -g comps.xml to generate group information. """ if createrepo_flags is None: createrepo_flags = "" self.createrepo_flags = createrepo_flags return True def set_breed(self,breed): if breed: return utils.set_repo_breed(self,breed) def set_os_version(self,os_version): if os_version: return utils.set_repo_os_version(self,os_version) def set_arch(self,arch): """ Override the arch used for reposync """ return utils.set_arch(self,arch,repo=True) def set_mirror_locally(self,value): self.mirror_locally = utils.input_boolean(value) return True def set_apt_components(self,value): self.apt_components = utils.input_string_or_list(value) return True def set_apt_dists(self,value): self.apt_dists = utils.input_string_or_list(value) return True def get_parent(self): """ currently the Cobbler object space does not support subobjects of this object as it is conceptually not useful. """ return None def check_if_valid(self): if self.name is None: raise CX("name is required") if self.mirror is None: raise CX("Error with repo %s - mirror is required" % (self.name)) cobbler-2.4.1/cobbler/item_system.py000066400000000000000000001022521227367477500175000ustar00rootroot00000000000000""" A Cobbler System. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import item import time from cexceptions import * from utils import _ import sys # this datastructure is described in great detail in item_distro.py -- read the comments there. 
FIELDS = [ ["name","",0,"Name",True,"Ex: vanhalen.example.org",0,"str"], ["uid","",0,"",False,"",0,"str"], ["owners","SETTINGS:default_ownership",0,"Owners",True,"Owners list for authz_ownership (space delimited)",0,"list"], ["profile",None,0,"Profile",True,"Parent profile",[],"str"], ["image",None,0,"Image",True,"Parent image (if not a profile)",0,"str"], ["status","production",0,"Status",True,"System status",["development","testing","acceptance","production"],"str"], ["kernel_options",{},0,"Kernel Options",True,"Ex: selinux=permissive",0,"dict"], ["kernel_options_post",{},0,"Kernel Options (Post Install)",True,"Ex: clocksource=pit noapic",0,"dict"], ["ks_meta",{},0,"Kickstart Metadata",True,"Ex: dog=fang agent=86",0,"dict"], ["enable_gpxe","SETTINGS:enable_gpxe",0,"Enable gPXE?",True,"Use gPXE instead of PXELINUX for advanced booting options",0,"bool"], ["proxy","<>",0,"Proxy",True,"Proxy URL",0,"str"], ["netboot_enabled",True,0,"Netboot Enabled",True,"PXE (re)install this machine at next boot?",0,"bool"], ["kickstart","<>",0,"Kickstart",True,"Path to kickstart template",0,"str"], ["comment","",0,"Comment",True,"Free form text description",0,"str"], ["depth",2,0,"",False,"",0,"int"], ["server","<>",0,"Server Override",True,"See manpage or leave blank",0,"str"], ["virt_path","<>",0,"Virt Path",True,"Ex: /directory or VolGroup00",0,"str"], ["virt_type","<>",0,"Virt Type",True,"Virtualization technology to use",["xenpv","xenfv","qemu","kvm","vmware","openvz"],"str"], ["virt_cpus","<>",0,"Virt CPUs",True,"",0,"int"], ["virt_file_size","<>",0,"Virt File Size(GB)",True,"",0,"float"], ["virt_disk_driver","<>",0,"Virt Disk Driver Type",True,"The on-disk format for the virtualization disk","raw","str"], ["virt_ram","<>",0,"Virt RAM (MB)",True,"",0,"int"], ["virt_auto_boot","<>",0,"Virt Auto Boot",True,"Auto boot this VM?",0,"bool"], ["virt_pxe_boot",0,0,"Virt PXE Boot",True,"Use PXE to build this VM?",0,"bool"], ["ctime",0,0,"",False,"",0,"float"], ["mtime",0,0,"",False,"",0,"float"], ["power_type","SETTINGS:power_management_default_type",0,"Power Management Type",True,"Power management script to use",utils.get_power_types(),"str"], ["power_address","",0,"Power Management Address",True,"Ex: power-device.example.org",0,"str"], ["power_user","",0,"Power Management Username ",True,"",0,"str"], ["power_pass","",0,"Power Management Password",True,"",0,"str"], ["power_id","",0,"Power Management ID",True,"Usually a plug number or blade name, if power type requires it",0,"str"], ["hostname","",0,"Hostname",True,"",0,"str"], ["gateway","",0,"Gateway",True,"",0,"str"], ["name_servers",[],0,"Name Servers",True,"space delimited",0,"list"], ["name_servers_search",[],0,"Name Servers Search Path",True,"space delimited",0,"list"], ["ipv6_default_device","",0,"IPv6 Default Device",True,"",0,"str"], ["ipv6_autoconfiguration",False,0,"IPv6 Autoconfiguration",True,"",0,"bool"], ["network_widget_a","",0,"Add Interface",True,"",0,"str"], # not a real field, a marker for the web app ["network_widget_b","",0,"Edit Interface",True,"",0,"str"], # not a real field, a marker for the web app ["*mac_address","",0,"MAC Address",True,"(Place \"random\" in this field for a random MAC Address.)",0,"str"], ["network_widget_c","",0,"",True,"",0,"str"], # not a real field, a marker for the web app ["*mtu","",0,"MTU",True,"",0,"str"], ["*ip_address","",0,"IP Address",True,"Should be used with --interface",0,"str"], ["*interface_type","na",0,"Interface Type",True,"Should be used with 
--interface",["na","master","slave","bond","bond_slave","bridge","bridge_slave","bonded_bridge_slave"],"str"], ["*interface_master","",0,"Master Interface",True,"Should be used with --interface",0,"str"], ["*bonding_opts","",0,"Bonding Opts",True,"Should be used with --interface",0,"str"], ["*bridge_opts","",0,"Bridge Opts",True,"Should be used with --interface",0,"str"], ["*management",False,0,"Management Interface",True,"Is this the management interface? Should be used with --interface",0,"bool"], ["*static",False,0,"Static",True,"Is this interface static? Should be used with --interface",0,"bool"], ["*netmask","",0,"Subnet Mask",True,"Should be used with --interface",0,"str"], ["*if_gateway","",0,"Per-Interface Gateway",True,"Should be used with --interface",0,"str"], ["*dhcp_tag","",0,"DHCP Tag",True,"Should be used with --interface",0,"str"], ["*dns_name","",0,"DNS Name",True,"Should be used with --interface",0,"str"], ["*static_routes",[],0,"Static Routes",True,"Should be used with --interface",0,"list"], ["*virt_bridge","",0,"Virt Bridge",True,"Should be used with --interface",0,"str"], ["*ipv6_address","",0,"IPv6 Address",True,"Should be used with --interface",0,"str"], ["*ipv6_secondaries",[],0,"IPv6 Secondaries",True,"Space delimited. Should be used with --interface",0,"list"], ["*ipv6_mtu","",0,"IPv6 MTU",True,"Should be used with --interface",0,"str"], ["*ipv6_static_routes",[],0,"IPv6 Static Routes",True,"Should be used with --interface",0,"list"], ["*ipv6_default_gateway","",0,"IPv6 Default Gateway",True,"Should be used with --interface",0,"str"], ["mgmt_classes",[],0,"Management Classes",True,"For external config management",0,"list"], ["mgmt_parameters","<>",0,"Management Parameters",True,"Parameters which will be handed to your management application (Must be valid YAML dictionary)", 0,"str"], [ "boot_files",{},'<>',"TFTP Boot Files",True,"Files copied into tftpboot beyond the kernel/initrd",0,"list"], ["fetchable_files",{},'<>',"Fetchable Files",True,"Templates for tftp or wget",0,"dict"], ["template_files",{},0,"Template Files",True,"File mappings for built-in configuration management",0,"dict"], ["redhat_management_key","<>",0,"Red Hat Management Key",True,"Registration key for RHN, Satellite, or Spacewalk",0,"str"], ["redhat_management_server","<>",0,"Red Hat Management Server",True,"Address of Satellite or Spacewalk Server",0,"str"], ["template_remote_kickstarts", "SETTINGS:template_remote_kickstarts", "SETTINGS:template_remote_kickstarts", "", False, "", 0, "bool"], ["repos_enabled",False,0,"Repos Enabled",True,"(re)configure local repos on this machine at next config update?",0,"bool"], ["ldap_enabled",False,0,"LDAP Enabled",True,"(re)configure LDAP on this machine at next config update?",0,"bool"], ["ldap_type","SETTINGS:ldap_management_default_type",0,"LDAP Management Type",True,"Ex: authconfig",0,"str"], ["monit_enabled",False,0,"Monit Enabled",True,"(re)configure monit on this machine at next config update?",0,"bool"], ["*cnames",[],0,"CNAMES",True,"Cannonical Name Records, should be used with --interface, In quotes, space delimited",0,"list"], ] class System(item.Item): TYPE_NAME = _("system") COLLECTION_TYPE = "system" def get_fields(self): return FIELDS def make_clone(self): ds = self.to_datastruct() cloned = System(self.config) cloned.from_datastruct(ds) return cloned def delete_interface(self,name): """ Used to remove an interface. 
""" if self.interfaces.has_key(name) and len(self.interfaces) > 1: del self.interfaces[name] else: if not self.interfaces.has_key(name): # no interface here to delete pass else: raise CX(_("At least one interface needs to be defined.")) return True def rename_interface(self,names): """ Used to rename an interface. """ (name,newname) = names if not self.interfaces.has_key(name): raise CX(_("Interface %s does not exist" % name)) if self.interfaces.has_key(newname): raise CX(_("Interface %s already exists" % newname)) else: self.interfaces[newname] = self.interfaces[name] del self.interfaces[name] return True def __get_interface(self,name): if name == "" and len(self.interfaces.keys()) == 0: raise CX(_("No interfaces defined. Please use --interface ")) elif name == "" and len(self.interfaces.keys()) == 1: name = self.interfaces.keys()[0] elif name == "" and len(self.interfaces.keys()) > 1: raise CX(_("Multiple interfaces defined. Please use --interface ")) elif not self.interfaces.has_key(name): self.interfaces[name] = { "mac_address" : "", "mtu" : "", "ip_address" : "", "dhcp_tag" : "", "subnet" : "", # deprecated "netmask" : "", "if_gateway" : "", "virt_bridge" : "", "static" : False, "interface_type" : "", "interface_master" : "", "bonding" : "", # deprecated "bonding_master" : "", # deprecated "bonding_opts" : "", "bridge_opts" : "", "management" : False, "dns_name" : "", "static_routes" : [], "ipv6_address" : "", "ipv6_secondaries" : [], "ipv6_mtu" : "", "ipv6_static_routes" : [], "ipv6_default_gateway" : "", "cnames" : [], } return self.interfaces[name] def from_datastruct(self,seed_data): # FIXME: most definitely doesn't grok interfaces yet. return utils.from_datastruct_from_fields(self,seed_data,FIELDS) def get_parent(self): """ Return object next highest up the tree. """ if (self.parent is None or self.parent == '') and self.profile: return self.config.profiles().find(name=self.profile) elif (self.parent is None or self.parent == '') and self.image: return self.config.images().find(name=self.image) else: return self.config.systems().find(name=self.parent) def set_name(self,name): """ Set the name. If the name is a MAC or IP, and the first MAC and/or IP is not defined, go ahead and fill that value in. """ if self.name not in ["",None] and self.parent not in ["",None] and self.name == self.parent: raise CX(_("self parentage is weird")) if not isinstance(name, basestring): raise CX(_("name must be a string")) for x in name: if not x.isalnum() and not x in [ "_", "-", ".", ":", "+" ] : raise CX(_("invalid characters in name: %s") % x) # Stuff here defaults to eth0. Yes, it's ugly and hardcoded, but so was # the default interface behaviour that's now removed. ;) # --Jasper Capel if utils.is_mac(name): intf = self.__get_interface("eth0") if intf["mac_address"] == "": intf["mac_address"] = name elif utils.is_ip(name): intf = self.__get_interface("eth0") if intf["ip_address"] == "": intf["ip_address"] = name self.name = name return True def set_redhat_management_key(self,key): return utils.set_redhat_management_key(self,key) def set_redhat_management_server(self,server): return utils.set_redhat_management_server(self,server) def set_server(self,server): """ If a system can't reach the boot server at the value configured in settings because it doesn't have the same name on it's subnet this is there for an override. 
""" if server is None or server == "": server = "<>" self.server = server return True def set_proxy(self,proxy): if proxy is None or proxy == "": proxy = "<>" self.proxy = proxy return True def get_mac_address(self,interface): """ Get the mac address, which may be implicit in the object name or explicit with --mac-address. Use the explicit location first. """ intf = self.__get_interface(interface) if intf["mac_address"] != "": return intf["mac_address"].strip() else: return None def get_ip_address(self,interface): """ Get the IP address, which may be implicit in the object name or explict with --ip-address. Use the explicit location first. """ intf = self.__get_interface(interface) if intf["ip_address"] != "": return intf["ip_address"].strip() else: return "" def is_management_supported(self,cidr_ok=True): """ Can only add system PXE records if a MAC or IP address is available, else it's a koan only record. Actually Itanium goes beyond all this and needs the IP all of the time though this is enforced elsewhere (action_sync.py). """ if self.name == "default": return True for (name,x) in self.interfaces.iteritems(): mac = x.get("mac_address",None) ip = x.get("ip_address",None) if ip is not None and not cidr_ok and ip.find("/") != -1: # ip is in CIDR notation return False if mac is not None or ip is not None: # has ip and/or mac return True return False def set_dhcp_tag(self,dhcp_tag,interface): intf = self.__get_interface(interface) intf["dhcp_tag"] = dhcp_tag return True def set_dns_name(self,dns_name,interface): intf = self.__get_interface(interface) # FIXME: move duplicate supression code to the object validation # functions to take a harder line on supression? if dns_name != "" and not str(self.config._settings.allow_duplicate_hostnames).lower() in [ "1", "y", "yes"]: matched = self.config.api.find_items("system", {"dns_name" : dns_name}) for x in matched: if x.name != self.name: raise CX("dns-name duplicated: %s" % dns_name) intf["dns_name"] = dns_name return True def set_cnames(self,cnames,interface): intf = self.__get_interface(interface) data = utils.input_string_or_list(cnames) intf["cnames"] = data return True def set_static_routes(self,routes,interface): intf = self.__get_interface(interface) data = utils.input_string_or_list(routes) intf["static_routes"] = data return True def set_hostname(self,hostname): if hostname is None: hostname = "" self.hostname = hostname return True def set_status(self,status): self.status = status return True def set_static(self,truthiness,interface): intf = self.__get_interface(interface) intf["static"] = utils.input_boolean(truthiness) return True def set_management(self,truthiness,interface): intf = self.__get_interface(interface) intf["management"] = utils.input_boolean(truthiness) return True def set_ip_address(self,address,interface): """ Assign a IP or hostname in DHCP when this MAC boots. Only works if manage_dhcp is set in /etc/cobbler/settings """ intf = self.__get_interface(interface) # FIXME: move duplicate supression code to the object validation # functions to take a harder line on supression? 
if address != "" and not str(self.config._settings.allow_duplicate_ips).lower() in [ "1", "y", "yes"]: matched = self.config.api.find_items("system", {"ip_address" : address}) for x in matched: if x.name != self.name: raise CX("IP address duplicated: %s" % address) if address == "" or utils.is_ip(address): intf["ip_address"] = address.strip() return True raise CX(_("invalid format for IP address (%s)") % address) def set_mac_address(self,address,interface): if address == "random": address = utils.get_random_mac(self.config.api) # FIXME: move duplicate supression code to the object validation # functions to take a harder line on supression? if address != "" and not str(self.config._settings.allow_duplicate_macs).lower() in [ "1", "y", "yes"]: matched = self.config.api.find_items("system", {"mac_address" : address}) for x in matched: if x.name != self.name: raise CX("MAC address duplicated: %s" % address) intf = self.__get_interface(interface) if address == "" or utils.is_mac(address): intf["mac_address"] = address.strip() return True raise CX(_("invalid format for MAC address (%s)" % address)) def set_gateway(self,gateway): if gateway is None: gateway = "" if utils.is_ip(gateway) or gateway == "": self.gateway = gateway else: raise CX(_("invalid format for gateway IP address (%s)") % gateway) return True def set_name_servers(self,data): if data == "<>": data = [] data = utils.input_string_or_list(data) self.name_servers = data return True def set_name_servers_search(self,data): if data == "<>": data = [] data = utils.input_string_or_list(data) self.name_servers_search = data return True def set_netmask(self,netmask,interface): intf = self.__get_interface(interface) intf["netmask"] = netmask return True def set_if_gateway(self,gateway,interface): intf = self.__get_interface(interface) if gateway == "" or utils.is_ip(gateway): intf["if_gateway"] = gateway return True raise CX(_("invalid gateway: %s" % gateway)) def set_virt_bridge(self,bridge,interface): if bridge == "": bridge = self.settings.default_virt_bridge intf = self.__get_interface(interface) intf["virt_bridge"] = bridge return True def set_interface_type(self,type,interface): # master and slave are deprecated, and will # be assumed to mean bonding slave/master interface_types = ["bridge","bridge_slave","bond","bond_slave","bonded_bridge_slave","master","slave","na",""] if type not in interface_types: raise CX(_("interface type value must be one of: %s or blank" % interface_types.join(","))) if type == "na": type = "" elif type == "master": type = "bond" elif type == "slave": type = "bond_slave" intf = self.__get_interface(interface) intf["interface_type"] = type return True def set_interface_master(self,interface_master,interface): intf = self.__get_interface(interface) intf["interface_master"] = interface_master return True def set_bonding_opts(self,bonding_opts,interface): intf = self.__get_interface(interface) intf["bonding_opts"] = bonding_opts return True def set_bridge_opts(self,bridge_opts,interface): intf = self.__get_interface(interface) intf["bridge_opts"] = bridge_opts return True def set_ipv6_autoconfiguration(self,truthiness): self.ipv6_autoconfiguration = utils.input_boolean(truthiness) return True def set_ipv6_default_device(self,interface_name): if interface_name is None: interface_name = "" self.ipv6_default_device = interface_name return True def set_ipv6_address(self,address,interface): """ Assign a IP or hostname in DHCP when this MAC boots. 
Only works if manage_dhcp is set in /etc/cobbler/settings """ intf = self.__get_interface(interface) if address == "" or utils.is_ip(address): intf["ipv6_address"] = address.strip() return True raise CX(_("invalid format for IPv6 IP address (%s)") % address) def set_ipv6_secondaries(self,addresses,interface): intf = self.__get_interface(interface) data = utils.input_string_or_list(addresses) secondaries = [] for address in data: if address == "" or utils.is_ip(address): secondaries.append(address) else: raise CX(_("invalid format for IPv6 IP address (%s)") % address) intf["ipv6_secondaries"] = secondaries return True def set_ipv6_default_gateway(self,address,interface): intf = self.__get_interface(interface) if address == "" or utils.is_ip(address): intf["ipv6_default_gateway"] = address.strip() return True raise CX(_("invalid format for IPv6 IP address (%s)") % address) def set_ipv6_static_routes(self,routes,interface): intf = self.__get_interface(interface) data = utils.input_string_or_list(routes) intf["ipv6_static_routes"] = data return True def set_ipv6_mtu(self,mtu,interface): intf = self.__get_interface(interface) intf["ipv6_mtu"] = mtu return True def set_mtu(self,mtu,interface): intf = self.__get_interface(interface) intf["mtu"] = mtu return True def set_enable_gpxe(self,enable_gpxe): """ Sets whether or not the system will use gPXE for booting. """ self.enable_gpxe = utils.input_boolean(enable_gpxe) return True def set_profile(self,profile_name): """ Set the system to use a certain named profile. The profile must have already been loaded into the Profiles collection. """ old_parent = self.get_parent() if profile_name in [ "delete", "None", "~", ""] or profile_name is None: self.profile = "" if isinstance(old_parent, item.Item): old_parent.children.pop(self.name, 'pass') return True self.image = "" # mutual exclusion rule p = self.config.profiles().find(name=profile_name) if p is not None: self.profile = profile_name self.depth = p.depth + 1 # subprofiles have varying depths. if isinstance(old_parent, item.Item): old_parent.children.pop(self.name, 'pass') new_parent = self.get_parent() if isinstance(new_parent, item.Item): new_parent.children[self.name] = self return True raise CX(_("invalid profile name: %s") % profile_name) def set_image(self,image_name): """ Set the system to use a certain named image. Works like set_profile but cannot be used at the same time. It's one or the other. 
""" old_parent = self.get_parent() if image_name in [ "delete", "None", "~", ""] or image_name is None: self.image = "" if isinstance(old_parent, item.Item): old_parent.children.pop(self.name, 'pass') return True self.profile = "" # mutual exclusion rule img = self.config.images().find(name=image_name) if img is not None: self.image = image_name self.depth = img.depth + 1 if isinstance(old_parent, item.Item): old_parent.children.pop(self.name, 'pass') new_parent = self.get_parent() if isinstance(new_parent, item.Item): new_parent.children[self.name] = self return True raise CX(_("invalid image name (%s)") % image_name) def set_virt_cpus(self,num): return utils.set_virt_cpus(self,num) def set_virt_file_size(self,num): return utils.set_virt_file_size(self,num) def set_virt_disk_driver(self,driver): return utils.set_virt_disk_driver(self,driver) def set_virt_auto_boot(self,num): return utils.set_virt_auto_boot(self,num) def set_virt_pxe_boot(self,num): return utils.set_virt_pxe_boot(self,num) def set_virt_ram(self,num): return utils.set_virt_ram(self,num) def set_virt_type(self,vtype): return utils.set_virt_type(self,vtype) def set_virt_path(self,path): return utils.set_virt_path(self,path,for_system=True) def set_netboot_enabled(self,netboot_enabled): """ If true, allows per-system PXE files to be generated on sync (or add). If false, these files are not generated, thus eliminating the potential for an infinite install loop when systems are set to PXE boot first in the boot order. In general, users who are PXE booting first in the boot order won't create system definitions, so this feature primarily comes into play for programmatic users of the API, who want to initially create a system with netboot enabled and then disable it after the system installs, as triggered by some action in kickstart %post. For this reason, this option is not surfaced in the CLI, output, or documentation (yet). Use of this option does not affect the ability to use PXE menus. If an admin has machines set up to PXE only after local boot fails, this option isn't even relevant. """ self.netboot_enabled = utils.input_boolean(netboot_enabled) return True def set_kickstart(self,kickstart): """ Sets the kickstart. This must be a NFS, HTTP, or FTP URL. Or filesystem path. Minor checking of the URL is performed here. NOTE -- usage of the --kickstart parameter in the profile is STRONGLY encouraged. This is only for exception cases where a user already has kickstarts made for each system and can't leverage templating. Profiles provide an important abstraction layer -- assigning systems to defined and repeatable roles. 
""" if kickstart == "": self.kickstart = kickstart return True if kickstart is None or kickstart in [ "delete", "<>" ]: self.kickstart = "<>" return True kickstart = utils.find_kickstart(kickstart) if kickstart: self.kickstart = kickstart return True raise CX(_("kickstart not found: %s" % kickstart)) def set_power_type(self, power_type): # FIXME: modularize this better if power_type is None: power_type = "" choices = utils.get_power_types() choices.sort() if power_type not in choices: raise CX("power management type must be one of: %s" % ",".join(choices)) self.power_type = power_type return True def set_power_user(self, power_user): if power_user is None: power_user = "" utils.safe_filter(power_user) self.power_user = power_user return True def set_power_pass(self, power_pass): if power_pass is None: power_pass = "" utils.safe_filter(power_pass) self.power_pass = power_pass return True def set_power_address(self, power_address): if power_address is None: power_address = "" utils.safe_filter(power_address) self.power_address = power_address return True def set_power_id(self, power_id): if power_id is None: power_id = "" utils.safe_filter(power_id) self.power_id = power_id return True def modify_interface(self, hash): """ Used by the WUI to modify an interface more-efficiently """ for (key,value) in hash.iteritems(): (field,interface) = key.split("-",1) field = field.replace("_","").replace("-","") if field == "macaddress" : self.set_mac_address(value, interface) if field == "mtu" : self.set_mtu(value, interface) if field == "ipaddress" : self.set_ip_address(value, interface) if field == "dnsname" : self.set_dns_name(value, interface) if field == "static" : self.set_static(value, interface) if field == "dhcptag" : self.set_dhcp_tag(value, interface) if field == "netmask" : self.set_netmask(value, interface) if field == "subnet" : self.set_netmask(value, interface) if field == "ifgateway" : self.set_if_gateway(value, interface) if field == "virtbridge" : self.set_virt_bridge(value, interface) if field == "interfacetype" : self.set_interface_type(value, interface) if field == "interfacemaster" : self.set_interface_master(value, interface) if field == "bonding" : self.set_interface_type(value, interface) # deprecated if field == "bondingmaster" : self.set_interface_master(value, interface) # deprecated if field == "bondingopts" : self.set_bonding_opts(value, interface) if field == "bridgeopts" : self.set_bridge_opts(value, interface) if field == "management" : self.set_management(value, interface) if field == "staticroutes" : self.set_static_routes(value, interface) if field == "ipv6address" : self.set_ipv6_address(value, interface) if field == "ipv6secondaries" : self.set_ipv6_secondaries(value, interface) if field == "ipv6mtu" : self.set_ipv6_mtu(value, interface) if field == "ipv6staticroutes" : self.set_ipv6_static_routes(value, interface) if field == "ipv6defaultgateway" : self.set_ipv6_default_gateway(value, interface) if field == "cnames" : self.set_cnames(value, interface) return True def check_if_valid(self): if self.name is None or self.name == "": raise CX("name is required") if self.profile is None or self.profile == "": if self.image is None or self.image == "": raise CX("Error with system %s - profile or image is required" % (self.name)) def set_template_remote_kickstarts(self, template): """ Sets whether or not the server is configured to template remote kickstarts. 
""" self.template_remote_kickstarts = utils.input_boolean(template) return True def set_monit_enabled(self,monit_enabled): """ If true, allows per-system to start Monit to monitor system services such as apache. If monit is not running it will start the service. If false, no management of monit will take place. If monit is not running it will not be started. If monit is running it will not be stopped or restarted. """ self.monit_enabled = utils.input_boolean(monit_enabled) return True def set_ldap_enabled(self,ldap_enabled): """ If true, allows per-system to start Monit to monitor system services such as apache. If monit is not running it will start the service. If false, no management of monit will take place. If monit is not running it will not be started. If monit is running it will not be stopped or restarted. """ self.ldap_enabled = utils.input_boolean(ldap_enabled) return True def set_repos_enabled(self,repos_enabled): """ If true, allows per-system to start Monit to monitor system services such as apache. If monit is not running it will start the service. If false, no management of monit will take place. If monit is not running it will not be started. If monit is running it will not be stopped or restarted. """ self.repos_enabled = utils.input_boolean(repos_enabled) return True def set_ldap_type(self, ldap_type): if ldap_type is None: ldap_type = "" ldap_type = ldap_type.lower() self.ldap_type = ldap_type return True cobbler-2.4.1/cobbler/kickgen.py000066400000000000000000000310741227367477500165540ustar00rootroot00000000000000""" Builds out filesystem trees/data based on the object tree. This is the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import time import sys import glob import traceback import errno import urlparse import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ import xml.dom.minidom class KickGen: """ Handles conversion of internal state to the tftpboot tree layout """ def __init__(self,config): """ Constructor """ self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) def createAutoYaSTScript( self, document, script, name ): newScript = document.createElement("script") newScriptSource = document.createElement("source") newScriptSourceText = document.createTextNode(script) newScript.appendChild(newScriptSource) newScriptFile = document.createElement("filename") newScriptFileText = document.createTextNode(name) newScript.appendChild(newScriptFile) newScriptSource.appendChild(newScriptSourceText) newScriptFile.appendChild(newScriptFileText) return newScript def addAutoYaSTScript( self, document, type, source ): scripts = document.getElementsByTagName("scripts") if scripts.length == 0: newScripts = document.createElement("scripts") document.documentElement.appendChild( newScripts ) scripts = document.getElementsByTagName("scripts") added = 0 for stype in scripts[0].childNodes: if stype.nodeType == stype.ELEMENT_NODE and stype.tagName == type: stype.appendChild( self.createAutoYaSTScript( document, source, type+"_cobbler" ) ) added = 1 if added == 0: newChrootScripts = document.createElement( type ) newChrootScripts.setAttribute( "config:type", "list" ) newChrootScripts.appendChild( self.createAutoYaSTScript( document, source, type+"_cobbler" ) ) scripts[0].appendChild( newChrootScripts ) def generate_autoyast(self, profile=None, system=None, raw_data=None): self.api.logger.info("autoyast XML file found. Checkpoint: profile=%s system=%s" % (profile,system) ) nopxe = "\nwget \"http://%s/cblr/svc/op/nopxe/system/%s\" -O /dev/null" runpost = "\ncurl \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" > /dev/null" runpre = "\nwget \"http://%s/cblr/svc/op/trig/mode/pre/%s/%s\" -O /dev/null" what = "profile" blend_this = profile if system: what = "system" blend_this = system blended = utils.blender(self.api, False, blend_this) srv = blended["http_server"] document = xml.dom.minidom.parseString(raw_data) # do we already have the #raw comment in the XML? 
(addComment = 0 means, don't add #raw comment) addComment = 1 for node in document.childNodes[1].childNodes: if node.nodeType == node.ELEMENT_NODE and node.tagName == "cobbler": addComment = 0 break # add some cobbler information to the XML file # maybe that should be configureable if addComment == 1: #startComment = document.createComment("\ncobbler_system_name=$system_name\ncobbler_server=$server\n#raw\n") #endComment = document.createComment("\n#end raw\n") cobblerElement = document.createElement("cobbler") cobblerElementSystem = xml.dom.minidom.Element("system_name") cobblerElementProfile = xml.dom.minidom.Element("profile_name") if( system is not None ): cobblerTextSystem = document.createTextNode(system.name) cobblerElementSystem.appendChild( cobblerTextSystem ) if( profile is not None ): cobblerTextProfile = document.createTextNode(profile.name) cobblerElementProfile.appendChild( cobblerTextProfile ) cobblerElementServer = document.createElement("server") cobblerTextServer = document.createTextNode(blended["http_server"]) cobblerElementServer.appendChild( cobblerTextServer ) cobblerElement.appendChild( cobblerElementServer ) cobblerElement.appendChild( cobblerElementSystem ) cobblerElement.appendChild( cobblerElementProfile ) # FIXME: this is all broken and no longer works. # this entire if block should probably not be # hard-coded anyway #self.api.log(document.childNodes[2].childNodes) #document.childNodes[1].insertBefore( cobblerElement, document.childNodes[2].childNodes[1]) #document.childNodes[1].insertBefore( cobblerElement, document.childNodes[1].childNodes[0]) name = profile.name if system is not None: name = system.name if str(self.settings.pxe_just_once).upper() in [ "1", "Y", "YES", "TRUE" ]: self.addAutoYaSTScript( document, "chroot-scripts", nopxe % (srv, name) ) if self.settings.run_install_triggers: # notify cobblerd when we start/finished the installation self.addAutoYaSTScript( document, "pre-scripts", runpre % ( srv, what, name ) ) self.addAutoYaSTScript( document, "init-scripts", runpost % ( srv, what, name ) ) return document.toxml() def generate_repo_stanza(self, obj, is_profile=True): """ Automatically attaches yum repos to profiles/systems in kickstart files that contain the magic $yum_repo_stanza variable. This includes repo objects as well as the yum repos that are part of split tree installs, whose data is stored with the distro (example: RHEL5 imports) """ buf = "" blended = utils.blender(self.api, False, obj) repos = blended["repos"] # keep track of URLs and be sure to not include any duplicates included = {} for repo in repos: # see if this is a source_repo or not repo_obj = self.api.find_repo(repo) if repo_obj is not None: yumopts='' for opt in repo_obj.yumopts: yumopts = yumopts + " %s=%s" % (opt,repo_obj.yumopts[opt]) if not repo_obj.yumopts.has_key('enabled') or repo_obj.yumopts['enabled'] == '1': if repo_obj.mirror_locally: baseurl = "http://%s/cobbler/repo_mirror/%s" % (blended["http_server"], repo_obj.name) if not included.has_key(baseurl): buf = buf + "repo --name=%s --baseurl=%s\n" % (repo_obj.name, baseurl) included[baseurl] = 1 else: if not included.has_key(repo_obj.mirror): buf = buf + "repo --name=%s --baseurl=%s %s\n" % (repo_obj.name, repo_obj.mirror, yumopts) included[repo_obj.mirror] = 1 else: # FIXME: what to do if we can't find the repo object that is listed? # this should be a warning at another point, probably not here # so we'll just not list it so the kickstart will still work # as nothing will be here to read the output noise. 
Logging might # be useful. pass if is_profile: distro = obj.get_conceptual_parent() else: distro = obj.get_conceptual_parent().get_conceptual_parent() source_repos = distro.source_repos count = 0 for x in source_repos: count = count + 1 if not included.has_key(x[1]): buf = buf + "repo --name=source-%s --baseurl=%s\n" % (count, x[1]) included[x[1]] = 1 return buf def generate_config_stanza(self, obj, is_profile=True): """ Add in automatic to configure /etc/yum.repos.d on the remote system if the kickstart file contains the magic $yum_config_stanza. """ if not self.settings.yum_post_install_mirror: return "" blended = utils.blender(self.api, False, obj) if is_profile: url = "http://%s/cblr/svc/op/yum/profile/%s" % (blended["http_server"], obj.name) else: url = "http://%s/cblr/svc/op/yum/system/%s" % (blended["http_server"], obj.name) return "wget \"%s\" --output-document=/etc/yum.repos.d/cobbler-config.repo\n" % (url) def generate_kickstart_for_system(self, sys_name): s = self.api.find_system(name=sys_name) if s is None: return "# system not found" p = s.get_conceptual_parent() if p is None: raise CX(_("system %(system)s references missing profile %(profile)s") % { "system" : s.name, "profile" : s.profile }) distro = p.get_conceptual_parent() if distro is None: # this is an image parented system, no kickstart available return "# image based systems do not have kickstarts" return self.generate_kickstart(profile=p, system=s) def generate_kickstart(self, profile=None, system=None): obj = system if system is None: obj = profile meta = utils.blender(self.api, False, obj) kickstart_path = utils.find_kickstart(meta["kickstart"]) if not kickstart_path: return "# kickstart is missing or invalid: %s" % meta["kickstart"] ksmeta = meta["ks_meta"] del meta["ks_meta"] meta.update(ksmeta) # make available at top level meta["yum_repo_stanza"] = self.generate_repo_stanza(obj, (system is None)) meta["yum_config_stanza"] = self.generate_config_stanza(obj, (system is None)) meta["kernel_options"] = utils.hash_to_string(meta["kernel_options"]) # meta["config_template_files"] = self.generate_template_files_stanza(g, False) # add extra variables for other distro types if "tree" in meta: urlparts = urlparse.urlsplit(meta["tree"]) meta["install_source_directory"] = urlparts[2] try: raw_data = utils.read_file_contents(kickstart_path, self.api.logger, self.settings.template_remote_kickstarts) if raw_data is None: return "# kickstart is sourced externally: %s" % meta["kickstart"] distro = profile.get_conceptual_parent() if system is not None: distro = system.get_conceptual_parent().get_conceptual_parent() data = self.templar.render(raw_data, meta, None, obj) if distro.breed == "suse": # AutoYaST profile data = self.generate_autoyast(profile,system,data) return data except FileNotFoundException: self.api.logger.warning("kickstart not found: %s" % meta["kickstart"]) return "# kickstart not found: %s" % meta["kickstart"] def generate_kickstart_for_profile(self,g): g = self.api.find_profile(name=g) if g is None: return "# profile not found" distro = g.get_conceptual_parent() if distro is None: raise CX(_("profile %(profile)s references missing distro %(distro)s") % { "profile" : g.name, "distro" : g.distro }) return self.generate_kickstart(profile=g) def get_last_errors(self): """ Returns the list of errors generated by the last template render action """ return self.templar.last_errors cobbler-2.4.1/cobbler/module_loader.py000066400000000000000000000073151227367477500177550ustar00rootroot00000000000000""" Module loader, 
adapted for cobbler usage Copyright 2006-2009, Red Hat, Inc and Others Adrian Likins Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import os import sys import glob import clogger from utils import _, log_exc from cexceptions import * import ConfigParser # python 2.3 compat. If we don't need that, drop this test try: set() except: from sets import Set as set MODULE_CACHE = {} MODULES_BY_CATEGORY = {} cp = ConfigParser.ConfigParser() cp.read("/etc/cobbler/modules.conf") plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler/modules" % plib sys.path.insert(0, mod_path) sys.path.insert(1, "%s/cobbler" % plib) def load_modules(module_path=mod_path, blacklist=None): logger = clogger.Logger() filenames = glob.glob("%s/*.py" % module_path) filenames = filenames + glob.glob("%s/*.pyc" % module_path) filenames = filenames + glob.glob("%s/*.pyo" % module_path) mods = set() for fn in filenames: basename = os.path.basename(fn) if basename == "__init__.py": continue if basename[-3:] == ".py": modname = basename[:-3] elif basename[-4:] in [".pyc", ".pyo"]: modname = basename[:-4] # No need to try importing the same module over and over if # we have a .py, .pyc, and .pyo if modname in mods: continue mods.add(modname) try: blip = __import__("modules.%s" % ( modname), globals(), locals(), [modname]) if not hasattr(blip, "register"): if not modname.startswith("__init__"): errmsg = _("%(module_path)s/%(modname)s is not a proper module") print errmsg % {'module_path': module_path, 'modname':modname} continue category = blip.register() if category: MODULE_CACHE[modname] = blip if not MODULES_BY_CATEGORY.has_key(category): MODULES_BY_CATEGORY[category] = {} MODULES_BY_CATEGORY[category][modname] = blip except Exception, e: logger.info('Exception raised when loading module %s' % modname) log_exc(logger) return (MODULE_CACHE, MODULES_BY_CATEGORY) def get_module_by_name(name): return MODULE_CACHE.get(name, None) def get_module_from_file(category,field,fallback_module_name=None,just_name=False): try: value = cp.get(category,field) except: if fallback_module_name is not None: value = fallback_module_name else: raise CX(_("Cannot find config file setting for: %s") % field) if just_name: return value rc = MODULE_CACHE.get(value, None) if rc is None: raise CX(_("Failed to load module for %s/%s") % (category,field)) return rc def get_modules_in_category(category): if not MODULES_BY_CATEGORY.has_key(category): return [] return MODULES_BY_CATEGORY[category].values() if __name__ == "__main__": print load_modules(mod_path) 
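# A minimal usage sketch.  Assumptions: the "authentication"/"module" keys
# mirror what a given /etc/cobbler/modules.conf uses, and the helper name
# below is illustrative, not called anywhere else in cobbler.  It shows the
# contract load_modules() relies on (each module exposes register() returning
# its category) and how a caller resolves the configured module afterwards.
def _example_resolve_authn_module():
    # fill MODULE_CACHE / MODULES_BY_CATEGORY from the modules directory
    load_modules()
    # whichever authn module modules.conf selects, falling back to deny-all
    authn = get_module_from_file("authentication", "module", "authn_denyall")
    # every authn module exposes authenticate(api_handle, username, password)
    return authn.authenticate(None, "testing", "testing")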
cobbler-2.4.1/cobbler/modules/000077500000000000000000000000001227367477500162325ustar00rootroot00000000000000cobbler-2.4.1/cobbler/modules/__init__.py000066400000000000000000000000001227367477500203310ustar00rootroot00000000000000cobbler-2.4.1/cobbler/modules/authn_configfile.py000066400000000000000000000045221227367477500221130ustar00rootroot00000000000000""" Authentication module that uses /etc/cobbler/auth.conf Choice of authentication module is in /etc/cobbler/modules.conf Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import ConfigParser import sys import os from utils import _ from utils import md5 import traceback plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import cexceptions import utils def register(): """ The mandatory cobbler module registration hook. """ return "authn" def __parse_storage(): if not os.path.exists("/etc/cobbler/users.digest"): return [] fd = open("/etc/cobbler/users.digest") data = fd.read() fd.close() results = [] lines = data.split("\n") for line in lines: try: line = line.strip() tokens = line.split(":") results.append([tokens[0],tokens[1],tokens[2]]) except: pass return results def authenticate(api_handle,username,password): """ Validate a username/password combo, returning True/False Thanks to http://trac.edgewall.org/ticket/845 for supplying the algorithm info. """ # debugging only (not safe to enable) # api_handle.logger.debug("backend authenticate (%s,%s)" % (username,password)) userlist = __parse_storage() for (user,realm,actual_blob) in userlist: if user == username and realm == "Cobbler": input = ":".join([user,realm,password]) input_blob = md5(input).hexdigest() if input_blob.lower() == actual_blob.lower(): return True return False cobbler-2.4.1/cobbler/modules/authn_denyall.py000066400000000000000000000026771227367477500214470ustar00rootroot00000000000000""" Authentication module that denies everything. Used to disable the WebUI by default. Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
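# Hedged sketch of the digest check authn_configfile performs above: each line
# of /etc/cobbler/users.digest is "user:realm:md5hex", the realm must be
# "Cobbler", and the stored hex is md5 of "user:realm:password" (the same
# value htdigest writes).  The helper is illustrative only:
#
#     from utils import md5
#
#     def users_digest_line(username, password, realm="Cobbler"):
#         blob = md5(":".join([username, realm, password])).hexdigest()
#         return "%s:%s:%s" % (username, realm, blob)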
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import sys plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): """ The mandatory cobbler module registration hook. """ return "authn" def authenticate(api_handle,username,password): """ Validate a username/password combo, returning True/False Thanks to http://trac.edgewall.org/ticket/845 for supplying the algorithm info. """ # debugging only (not safe to enable) # api_handle.logger.debug("backend authenticate (%s,%s)" % (username,password)) return False cobbler-2.4.1/cobbler/modules/authn_ldap.py000066400000000000000000000105631227367477500207300ustar00rootroot00000000000000""" Authentication module that uses ldap Settings in /etc/cobbler/authn_ldap.conf Choice of authentication module is in /etc/cobbler/modules.conf This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import sys import os from utils import _ import traceback # we'll import this just a bit later # to keep it from being a requirement # import ldap plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import cexceptions import utils import api as cobbler_api def register(): """ The mandatory cobbler module registration hook. 
""" return "authn" def authenticate(api_handle,username,password): """ Validate an ldap bind, returning True/False """ if (not password): return False import ldap server = api_handle.settings().ldap_server basedn = api_handle.settings().ldap_base_dn port = api_handle.settings().ldap_port tls = api_handle.settings().ldap_tls anon_bind = api_handle.settings().ldap_anonymous_bind prefix = api_handle.settings().ldap_search_prefix # Support for LDAP client certificates tls_cacertfile = api_handle.settings().ldap_tls_cacertfile tls_keyfile = api_handle.settings().ldap_tls_keyfile tls_certfile = api_handle.settings().ldap_tls_certfile # allow multiple servers split by a space if server.find(" "): servers = server.split() else: servers = [server] if tls_cacertfile: ldap.set_option(ldap.OPT_X_TLS_CACERTFILE, tls_cacertfile) if tls_keyfile: ldap.set_option(ldap.OPT_X_TLS_KEYFILE, tls_keyfile) if tls_certfile: ldap.set_option(ldap.OPT_X_TLS_CERTFILE, tls_certfile) uri = "" for server in servers: # form our ldap uri based on connection port if port == '389': uri += 'ldap://' + server elif port == '636': uri += 'ldaps://' + server else: uri += 'ldap://' + "%s:%s" % (server,port) uri += ' ' uri = uri.strip() # connect to LDAP host dir = ldap.initialize(uri) # start_tls if tls is 'on', 'true' or 'yes' # and we're not already using old-SSL tls = str(tls).lower() if port != '636': if tls in [ "on", "true", "yes", "1" ]: try: dir.start_tls_s() except: traceback.print_exc() return False # if we're not allowed to search anonymously, # grok the search bind settings and attempt to bind anon_bind = str(anon_bind).lower() if anon_bind not in [ "on", "true", "yes", "1" ]: searchdn = api_handle.settings().ldap_search_bind_dn searchpw = api_handle.settings().ldap_search_passwd if searchdn == '' or searchpw == '': raise "Missing search bind settings" try: dir.simple_bind_s(searchdn, searchpw) except: traceback.print_exc() return False # perform a subtree search in basedn to find the full dn of the user # TODO: what if username is a CN? maybe it goes into the config file as well? filter = prefix + username result = dir.search_s(basedn, ldap.SCOPE_SUBTREE, filter, []) if result: for dn,entry in result: # username _should_ be unique so we should only have one result # ignore entry; we don't need it pass else: return False try: # attempt to bind as the user dir.simple_bind_s(dn,password) dir.unbind() return True except: # traceback.print_exc() return False # catch-all return False if __name__ == "__main__": api_handle = cobbler_api.BootAPI() print authenticate(api_handle, "guest", "guest") cobbler-2.4.1/cobbler/modules/authn_pam.py000066400000000000000000000117321227367477500205640ustar00rootroot00000000000000""" Authentication module that uses /etc/cobbler/auth.conf Choice of authentication module is in /etc/cobbler/modules.conf Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA PAM python code based on the pam_python code created by Chris AtLee: http://atlee.ca/software/pam/ ------------------------------------------------ pam_python (c) 2007 Chris AtLee Licensed under the MIT license: http://www.opensource.org/licenses/mit-license.php PAM module for python Provides an authenticate function that will allow the caller to authenticate a user against the Pluggable Authentication Modules (PAM) on the system. Implemented using ctypes, so no compilation is necessary. """ import distutils.sysconfig import ConfigParser import sys import os from utils import _ import traceback plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import cexceptions import utils from ctypes import CDLL, POINTER, Structure, CFUNCTYPE, cast, pointer, sizeof from ctypes import c_void_p, c_uint, c_char_p, c_char, c_int from ctypes.util import find_library LIBPAM = CDLL(find_library("pam")) LIBC = CDLL(find_library("c")) CALLOC = LIBC.calloc CALLOC.restype = c_void_p CALLOC.argtypes = [c_uint, c_uint] STRDUP = LIBC.strdup STRDUP.argstypes = [c_char_p] STRDUP.restype = POINTER(c_char) # NOT c_char_p !!!! # Various constants PAM_PROMPT_ECHO_OFF = 1 PAM_PROMPT_ECHO_ON = 2 PAM_ERROR_MSG = 3 PAM_TEXT_INFO = 4 def register(): """ The mandatory cobbler module registration hook. """ return "authn" class PamHandle(Structure): """wrapper class for pam_handle_t""" _fields_ = [ ("handle", c_void_p) ] def __init__(self): Structure.__init__(self) self.handle = 0 class PamMessage(Structure): """wrapper class for pam_message structure""" _fields_ = [ ("msg_style", c_int), ("msg", c_char_p), ] def __repr__(self): return "" % (self.msg_style, self.msg) class PamResponse(Structure): """wrapper class for pam_response structure""" _fields_ = [ ("resp", c_char_p), ("resp_retcode", c_int), ] def __repr__(self): return "" % (self.resp_retcode, self.resp) CONV_FUNC = CFUNCTYPE(c_int, c_int, POINTER(POINTER(PamMessage)), POINTER(POINTER(PamResponse)), c_void_p) class PamConv(Structure): """wrapper class for pam_conv structure""" _fields_ = [ ("conv", CONV_FUNC), ("appdata_ptr", c_void_p) ] PAM_START = LIBPAM.pam_start PAM_START.restype = c_int PAM_START.argtypes = [c_char_p, c_char_p, POINTER(PamConv), POINTER(PamHandle)] PAM_AUTHENTICATE = LIBPAM.pam_authenticate PAM_AUTHENTICATE.restype = c_int PAM_AUTHENTICATE.argtypes = [PamHandle, c_int] def authenticate(api_handle, username, password): """ Returns True if the given username and password authenticate for the given service. 
Returns False otherwise """ @CONV_FUNC def my_conv(n_messages, messages, p_response, app_data): """Simple conversation function that responds to any prompt where the echo is off with the supplied password""" # Create an array of n_messages response objects addr = CALLOC(n_messages, sizeof(PamResponse)) p_response[0] = cast(addr, POINTER(PamResponse)) for i in range(n_messages): if messages[i].contents.msg_style == PAM_PROMPT_ECHO_OFF: pw_copy = STRDUP(str(password)) p_response.contents[i].resp = cast(pw_copy, c_char_p) p_response.contents[i].resp_retcode = 0 return 0 try: service = api_handle.settings().authn_pam_service except: service = 'login' api_handle.logger.debug("authn_pam: PAM service is %s" % service) handle = PamHandle() conv = PamConv(my_conv, 0) retval = PAM_START(service, username, pointer(conv), pointer(handle)) if retval != 0: # TODO: This is not an authentication error, something # has gone wrong starting up PAM api_handle.logger.error("authn_pam: error initializing PAM library") return False retval = PAM_AUTHENTICATE(handle, 0) return retval == 0 cobbler-2.4.1/cobbler/modules/authn_passthru.py000066400000000000000000000020621227367477500216540ustar00rootroot00000000000000""" Authentication module that defers to Apache and trusts what Apache trusts. Copyright 2008-2009, Red Hat, Inc and Others Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. """ import distutils.sysconfig import sys import os from cobbler import utils from utils import _ import traceback plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import cexceptions import utils def register(): """ The mandatory cobbler module registration hook. """ return "authn" def authenticate(api_handle,username,password): """ Validate a username/password combo, returning True/False Uses cobbler_auth_helper """ ss = utils.get_shared_secret() if password == ss: rc = True else: rc = False return rc cobbler-2.4.1/cobbler/modules/authn_spacewalk.py000066400000000000000000000120571227367477500217620ustar00rootroot00000000000000""" Authentication module that uses Spacewalk's auth system. Any org_admin or kickstart_admin can get in. Copyright 2007-2008, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import sys import xmlrpclib plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): """ The mandatory cobbler module registration hook. 
""" return "authn" def __looks_like_a_token(password): # what spacewalk sends us could be an internal token or it could be a password # if it's long and lowercase hex, it's /likely/ a token, and we should try to treat # it as a token first, if not, we should treat it as a password. All of this # code is there to avoid extra XMLRPC calls, which are slow. # we can't use binascii.unhexlify here as it's an "odd length string" if password.lower() != password: # tokens are always lowercase, this isn't a token return False #try: # #data = binascii.unhexlify(password) # return True # looks like a token, but we can't be sure #except: # return False # definitely not a token return (len(password) > 45) def authenticate(api_handle,username,password): """ Validate a username/password combo, returning True/False This will pass the username and password back to Spacewalk to see if this authentication request is valid. See also: http://www.redhat.com/spacewalk/documentation/api/0.4/ """ if api_handle is not None: server = api_handle.settings().redhat_management_server user_enabled = api_handle.settings().redhat_management_permissive else: server = "columbia.devel.redhat.com" user_enabled = True if server == "xmlrpc.rhn.redhat.com": return False # emergency fail, don't bother RHN! spacewalk_url = "https://%s/rpc/api" % server client = xmlrpclib.Server(spacewalk_url, verbose=0) if __looks_like_a_token(password) or username == 'taskomatic_user': # The tokens # are lowercase hex, but a password can also be lowercase hex, # so we have to try it as both a token and then a password if # we are unsure. We do it this way to be faster but also to avoid # any login failed stuff in the logs that we don't need to send. try: valid = client.auth.checkAuthToken(username,password) except: # if the token is not a token this will raise an exception # rather than return an integer. valid = 0 # problem at this point, 0xdeadbeef is valid as a token but if that # fails, it's also a valid password, so we must try auth system #2 if valid != 1: # first API code returns 1 on success # the second uses exceptions for login failed. # # so... token check failed, but maybe the username/password # is just a simple username/pass! if user_enabled == 0: # this feature must be explicitly enabled. return False session = "" try: session = client.auth.login(username,password) except: # FIXME: should log exceptions that are not excepted # as we could detect spacewalk java errors here that # are not login related. return False # login success by username, role must also match roles = client.user.listRoles(session, username) if not ("config_admin" in roles or "org_admin" in roles): return False return True else: # it's an older version of spacewalk, so just try the username/pass # OR: we know for sure it's not a token because it's not lowercase hex. if user_enabled == 0: # this feature must be explicitly enabled. return False session = "" try: session = client.auth.login(username,password) except: return False # login success by username, role must also match roles = client.user.listRoles(session, username) if not ("config_admin" in roles or "org_admin" in roles): return False return True if __name__ == "__main__": print authenticate(None,"admin","redhat") cobbler-2.4.1/cobbler/modules/authn_testing.py000066400000000000000000000026331227367477500214640ustar00rootroot00000000000000""" Authentication module that denies everything. Unsafe demo. Allows anyone in with testing/testing. 
Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import sys plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): """ The mandatory cobbler module registration hook. """ return "authn" def authenticate(api_handle,username,password): """ Validate a username/password combo, returning True/False Thanks to http://trac.edgewall.org/ticket/845 for supplying the algorithm info. """ if username == "testing" and password == "testing": return True return False cobbler-2.4.1/cobbler/modules/authz_allowall.py000066400000000000000000000025701227367477500216320ustar00rootroot00000000000000""" Authorization module that allows everything, which is the default for new cobbler installs. Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import ConfigParser import sys from utils import _ plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import cexceptions import utils def register(): """ The mandatory cobbler module registration hook. """ return "authz" def authorize(api_handle,user,resource,arg1=None,arg2=None): """ Validate a user against a resource. NOTE: acls are not enforced as there is no group support in this module """ return True cobbler-2.4.1/cobbler/modules/authz_configfile.py000066400000000000000000000031771227367477500221340ustar00rootroot00000000000000""" Authorization module that allow users listed in /etc/cobbler/users.conf to be permitted to access resources. For instance, when using authz_ldap, you want to use authn_configfile, not authz_allowall, which will most likely NOT do what you want. This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
""" import distutils.sysconfig import ConfigParser import sys import os from utils import _ plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import cexceptions import utils CONFIG_FILE='/etc/cobbler/users.conf' def register(): """ The mandatory cobbler module registration hook. """ return "authz" def __parse_config(): if not os.path.exists(CONFIG_FILE): return [] config = ConfigParser.SafeConfigParser() config.read(CONFIG_FILE) alldata = {} groups = config.sections() for g in groups: alldata[str(g)] = {} opts = config.options(g) for o in opts: alldata[g][o] = 1 return alldata def authorize(api_handle,user,resource,arg1=None,arg2=None): """ Validate a user against a resource. All users in the file are permitted by this module. """ # FIXME: this must be modified to use the new ACL engine data = __parse_config() for g in data: if user.lower() in data[g]: return 1 return 0 if __name__ == "__main__": print __parse_config() cobbler-2.4.1/cobbler/modules/authz_ownership.py000066400000000000000000000161301227367477500220360ustar00rootroot00000000000000""" Authorization module that allow users listed in /etc/cobbler/users.conf to be permitted to access resources, with the further restriction that cobbler objects can be edited to only allow certain users/groups to access those specific objects. Copyright 2008-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import ConfigParser import sys import os from cobbler.utils import _ plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import cexceptions import utils def register(): """ The mandatory cobbler module registration hook. """ return "authz" def __parse_config(): etcfile='/etc/cobbler/users.conf' if not os.path.exists(etcfile): raise CX(_("/etc/cobbler/users.conf does not exist")) config = ConfigParser.ConfigParser() config.read(etcfile) alldata = {} sections = config.sections() for g in sections: alldata[str(g)] = {} opts = config.options(g) for o in opts: alldata[g][o] = 1 return alldata def __authorize_kickstart(api_handle, groups, user, kickstart): # the authorization rules for kickstart editing are a bit # of a special case. Non-admin users can edit a kickstart # only if all objects that depend on that kickstart are # editable by the user in question. # # Example: # if Pinky owns ProfileA # and the Brain owns ProfileB # and both profiles use the same kickstart template # and neither Pinky nor the Brain is an admin # neither is allowed to edit the kickstart template # because they would make unwanted changes to each other # # In the above scenario the UI will explain the problem # and ask that the user asks the admin to resolve it if required. # NOTE: this function is only called by authorize so admin users are # cleared before this function is called. 
lst = api_handle.find_profile(kickstart=kickstart, return_list=True) lst.extend(api_handle.find_system(kickstart=kickstart, return_list=True)) for obj in lst: if not __is_user_allowed(obj, groups, user, "write_kickstart", kickstart, None): return 0 return 1 def __authorize_snippet(api_handle, groups, user, kickstart): # only allow admins to edit snippets -- since we don't have detection to see # where each snippet is in use for group in groups: if group not in [ "admins", "admin" ]: return False return True def __is_user_allowed(obj, groups, user, resource, arg1, arg2): if user == "": # system user, logged in via web.ss return True for group in groups: if group in [ "admins", "admin" ]: return True if obj.owners == []: return True for allowed in obj.owners: if user == allowed: # user match return True # else look for a group match for group in groups: if group == allowed: return True return 0 def authorize(api_handle,user,resource,arg1=None,arg2=None): """ Validate a user against a resource. All users in the file are permitted by this module. """ if user == "": # CLI should always be permitted return True # everybody can get read-only access to everything # if they pass authorization, they don't have to be in users.conf if resource is not None: # FIXME: /cobbler/web should not be subject to user check in any case for x in [ "get", "read", "/cobbler/web" ]: if resource.startswith(x): return 1 # read operation is always ok. user_groups = __parse_config() # classify the type of operation modify_operation = False for criteria in ["save","copy","rename","remove","modify","edit","xapi","background"]: if resource.find(criteria) != -1: modify_operation = True # FIXME: is everyone allowed to copy? I think so. # FIXME: deal with the problem of deleted parents and promotion found_user = False found_groups = [] grouplist = user_groups.keys() for g in grouplist: for x in user_groups[g]: if x == user: found_groups.append(g) found_user = True # if user is in the admin group, always authorize # regardless of the ownership of the object. if g == "admins" or g == "admin": return True if not found_user: # if the user isn't anywhere in the file, reject regardless # they can still use read-only XMLRPC return 0 if not modify_operation: # sufficient to allow access for non save/remove ops to all # users for now, may want to refine later. return True # now we have a modify_operation op, so we must check ownership # of the object. remove ops pass in arg1 as a string name, # saves pass in actual objects, so we must treat them differently. # kickstarts are even more special so we call those out to another # function, rather than going through the rest of the code here. if resource.find("write_kickstart") != -1: return __authorize_kickstart(api_handle,found_groups,user,arg1) elif resource.find("read_kickstart") != -1: return True # the API for editing snippets also needs to do something similar. 
# as with kickstarts, though since they are more widely used it's more # restrictive if resource.find("write_snippet") != -1: return __authorize_snippet(api_handle,found_groups,user,arg1) elif resource.find("read_snipppet") != -1: return True obj = None if resource.find("remove") != -1: if resource == "remove_distro": obj = api_handle.find_distro(arg1) elif resource == "remove_profile": obj = api_handle.find_profile(arg1) elif resource == "remove_system": obj = api_handle.find_system(arg1) elif resource == "remove_repo": obj = api_handle.find_repo(arg1) elif resource == "remove_image": obj = api_handle.find_image(arg1) elif resource.find("save") != -1 or resource.find("modify") != -1: obj = arg1 # if the object has no ownership data, allow access regardless if obj is None or obj.owners is None or obj.owners == []: return True return __is_user_allowed(obj,found_groups,user,resource,arg1,arg2) cobbler-2.4.1/cobbler/modules/install_post_log.py000066400000000000000000000031251227367477500221610ustar00rootroot00000000000000""" (C) 2008-2009, Red Hat Inc. Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import sys import os from utils import _ import traceback import cexceptions import os import sys import time plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. # the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/install/post/*" def run(api, args, logger): # FIXME: make everything use the logger, no prints, use util.subprocess_call, etc objtype = args[0] # "system" or "profile" name = args[1] # name of system or profile ip = args[2] # ip or "?" fd = open("/var/log/cobbler/install.log","a+") fd.write("%s\t%s\t%s\tstop\t%s\n" % (objtype,name,ip,time.time())) fd.close() return 0 cobbler-2.4.1/cobbler/modules/install_post_power.py000066400000000000000000000024571227367477500225430ustar00rootroot00000000000000# (c) 2010 # Bill Peck # # License: GPLv2+ # Post install trigger for cobbler to # power cycle the guest if needed import distutils.sysconfig import sys import os import traceback from threading import Thread import time class reboot(Thread): def __init__ (self,api, target): Thread.__init__(self) self.api = api self.target = target def run(self): time.sleep(30) self.api.reboot(self.target) plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. 
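# A hedged sketch of the contract all of these install triggers share (the
# module body below is illustrative, not an actual shipped trigger):
# register() returns the trigger glob the module answers to, and run() gets
# the api handle, the legacy argv-style argument list and a logger.
#
#     def register():
#         return "/var/lib/cobbler/triggers/install/post/*"
#
#     def run(api, args, logger):
#         objtype, name, ip = args[0], args[1], args[2]
#         logger.info("%s %s finished installing (%s)" % (objtype, name, ip))
#         return 0    # 0 signals success to the trigger runner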
# the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/install/post/*" def run(api, args, logger): # FIXME: make everything use the logger settings = api.settings() objtype = args[0] # "target" or "profile" name = args[1] # name of target or profile boot_ip = args[2] # ip or "?" if objtype == "system": target = api.find_system(name) else: return 0 if target and 'postreboot' in target.ks_meta: # Run this in a thread so the system has a chance to finish and umount the filesystem current = reboot(api, target) current.start() return 0 cobbler-2.4.1/cobbler/modules/install_post_puppet.py000066400000000000000000000034031227367477500227140ustar00rootroot00000000000000""" This module signs newly installed client puppet certificates if the puppet master server is running on the same machine as the cobbler server. Based on: http://www.ithiriel.com/content/2010/03/29/writing-install-triggers-cobbler """ import distutils.sysconfig import re import sys import utils plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. # the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/install/post/*" def run(api, args, logger): objtype = args[0] # "system" or "profile" name = args[1] # name of system or profile ip = args[2] # ip or "?" if objtype != "system": return 0 settings = api.settings() if not str(settings.puppet_auto_setup).lower() in [ "1", "yes", "y", "true"]: return 0 if not str(settings.sign_puppet_certs_automatically).lower() in [ "1", "yes", "y", "true"]: return 0 system = api.find_system(name) system = utils.blender(api, False, system) hostname = system[ "hostname" ] if not re.match(r'[\w-]+\..+', hostname): search_domains = system['name_servers_search'] if search_domains: hostname += '.' + search_domains[0] puppetca_path = settings.puppetca_path cmd = [puppetca_path, 'cert', 'sign', hostname] rc = 0 try: rc = utils.subprocess_call(logger, cmd, shell=False) except: if logger is not None: logger.warning("failed to execute %s" % puppetca_path) if rc != 0: if logger is not None: logger.warning("signing of puppet cert for %s failed" % name) return 0 cobbler-2.4.1/cobbler/modules/install_post_report.py000077500000000000000000000055551227367477500227270ustar00rootroot00000000000000# (c) 2008-2009 # Jeff Schroeder # Michael DeHaan # # License: GPLv2+ # Post install trigger for cobbler to # send out a pretty email report that # contains target information. import distutils.sysconfig import sys import os import traceback plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) from utils import _ import smtplib import sys import cobbler.templar as templar from cobbler.cexceptions import CX import utils def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. # the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/install/post/*" def run(api, args, logger): # FIXME: make everything use the logger settings = api.settings() # go no further if this feature is turned off if not str(settings.build_reporting_enabled).lower() in [ "1", "yes", "y", "true"]: return 0 objtype = args[0] # "target" or "profile" name = args[1] # name of target or profile boot_ip = args[2] # ip or "?" 
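# The settings this report trigger consults, shown here as a hedged example of
# /etc/cobbler/settings entries (values are illustrative, not shipped defaults):
#
#     build_reporting_enabled: 1
#     build_reporting_sender: "cobbler@cobbler.example.com"
#     build_reporting_email: [ "root@example.com" ]
#     build_reporting_smtp_server: "localhost"
#     build_reporting_subject: ""       # empty falls back to "[Cobbler] install complete"
#     build_reporting_ignorelist: [ "test-" ]   # name prefixes that suppress mail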
if objtype == "system": target = api.find_system(name) else: target = api.find_profile(name) # collapse the object down to a rendered datastructure target = utils.blender(api, False, target) if target == {}: raise CX("failure looking up target") to_addr = settings.build_reporting_email if to_addr == "": return 0 # add the ability to specify an MTA for servers that don't run their own smtp_server = settings.build_reporting_smtp_server if smtp_server == "": smtp_server = "localhost" # use a custom from address or fall back to a reasonable default from_addr = settings.build_reporting_sender if from_addr == "": from_addr = "cobbler@%s" % settings.server subject = settings.build_reporting_subject if subject == "": subject = '[Cobbler] install complete ' to_addr = ",".join(to_addr) metadata = { "from_addr" : from_addr, "to_addr" : to_addr, "subject" : subject, "boot_ip" : boot_ip } metadata.update(target) input_template = open("/etc/cobbler/reporting/build_report_email.template") input_data = input_template.read() input_template.close() message = templar.Templar(api._config).render(input_data, metadata, None) # for debug, call # print message sendmail = True for prefix in settings.build_reporting_ignorelist: if name.lower().startswith(prefix) == True: sendmail = False if sendmail == True: # Send the mail # FIXME: on error, return non-zero server_handle = smtplib.SMTP(smtp_server) server_handle.sendmail(from_addr, to_addr.split(','), message) server_handle.quit() return 0 cobbler-2.4.1/cobbler/modules/install_pre_clear_anamon_logs.py000077500000000000000000000041241227367477500246470ustar00rootroot00000000000000""" (C) 2008-2009, Red Hat Inc. James Laska Bill Peck This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import sys import os from utils import _ import traceback import cexceptions plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) import os import glob import sys def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. # the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/install/pre/*" def run(api, args, logger): # FIXME: use the logger if len(args) < 3: raise CX("invalid invocation") objtype = args[0] # "system" or "profile" name = args[1] # name of system or profile ip = args[2] # ip or "?" 
settings = api.settings() anamon_enabled = str(settings.anamon_enabled) # Remove any files matched with the given glob pattern def unlink_files(globex): for f in glob.glob(globex): if os.path.isfile(f): try: os.unlink(f) except OSError, e: pass if str(anamon_enabled) in [ "true", "1", "y", "yes"]: dirname = "/var/log/cobbler/anamon/%s" % name if os.path.isdir(dirname): unlink_files(os.path.join(dirname, "*")) # TODO - log somewhere that we cleared a systems anamon logs return 0 cobbler-2.4.1/cobbler/modules/install_pre_log.py000066400000000000000000000014441227367477500217640ustar00rootroot00000000000000import distutils.sysconfig import sys import os from utils import _ import traceback import cexceptions import os import sys import time plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. # the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/install/pre/*" def run(api, args, logger): objtype = args[0] # "system" or "profile" name = args[1] # name of system or profile ip = args[2] # ip or "?" # FIXME: use the logger fd = open("/var/log/cobbler/install.log","a+") fd.write("%s\t%s\t%s\tstart\t%s\n" % (objtype,name,ip,time.time())) fd.close() return 0 cobbler-2.4.1/cobbler/modules/install_pre_puppet.py000066400000000000000000000037261227367477500225250ustar00rootroot00000000000000""" This module removes puppet certs from the puppet master prior to reinstalling a machine if the puppet master is running on the cobbler server. Based on: http://www.ithiriel.com/content/2010/03/29/writing-install-triggers-cobbler """ import distutils.sysconfig import re import sys import utils plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. # the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/install/pre/*" def run(api, args, logger): objtype = args[0] # "system" or "profile" name = args[1] # name of system or profile ip = args[2] # ip or "?" if objtype != "system": return 0 settings = api.settings() if not str(settings.puppet_auto_setup).lower() in [ "1", "yes", "y", "true"]: return 0 if not str(settings.remove_old_puppet_certs_automatically).lower() in [ "1", "yes", "y", "true"]: return 0 system = api.find_system(name) system = utils.blender(api, False, system) hostname = system[ "hostname" ] if not re.match(r'[\w-]+\..+', hostname): search_domains = system['name_servers_search'] if search_domains: hostname += '.' + search_domains[0] if not re.match(r'[\w-]+\..+', hostname): default_search_domains = system['default_name_servers_search'] if default_search_domains: hostname += '.' + default_search_domains[0] puppetca_path = settings.puppetca_path cmd = [puppetca_path, 'cert', 'clean', hostname] rc = 0 try: rc = utils.subprocess_call(logger, cmd, shell=False) except: if logger is not None: logger.warning("failed to execute %s" % puppetca_path) if rc != 0: if logger is not None: logger.warning("puppet cert removal for %s failed" % name) return 0 cobbler-2.4.1/cobbler/modules/manage_bind.py000066400000000000000000000517131227367477500210370ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. 
Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import time import sys import glob import traceback import errno import re import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ from types import * def register(): """ The mandatory cobbler module registration hook. """ return "manage" class BindManager: def what(self): return "bind" def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) self.settings_file = utils.namedconf_location(self.api) self.zonefile_base = utils.zonefile_base(self.api) def regen_hosts(self): pass # not used def __expand_IPv6(self, address): """ Expands an IPv6 address to long format i.e. xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx """ # Function by Chris Miller, approved for GLP use, taken verbatim from: # http://forrst.com/posts/Python_Expand_Abbreviated_IPv6_Addresses-1kQ fullAddress = "" # All groups expandedAddress = "" # Each group padded with leading zeroes validGroupCount = 8 validGroupSize = 4 if "::" not in address: # All groups are already present fullAddress = address else: # Consecutive groups of zeroes have been collapsed with "::" sides = address.split("::") groupsPresent = 0 for side in sides: if len(side) > 0: groupsPresent += len(side.split(":")) if len(sides[0]) > 0: fullAddress += sides[0] + ":" for i in range(0,validGroupCount-groupsPresent): fullAddress += "0000:" if len(sides[1]) > 0: fullAddress += sides[1] if fullAddress[-1] == ":": fullAddress = fullAddress[:-1] groups = fullAddress.split(":") for group in groups: while(len(group) < validGroupSize): group = "0" + group expandedAddress += group + ":" if expandedAddress[-1] == ":": expandedAddress = expandedAddress[:-1] return expandedAddress def __forward_zones(self): """ Returns a map of zones and the records that belong in them """ zones = {} forward_zones = self.settings.manage_forward_zones if type(forward_zones) != type([]): # gracefully handle when user inputs only a single zone # as a string instead of a list with only a single item forward_zones = [forward_zones] for zone in forward_zones: zones[zone] = {} for system in self.systems: for (name, interface) in system.interfaces.iteritems(): host = interface["dns_name"] ip = interface["ip_address"] ipv6 = interface["ipv6_address"] ipv6_sec_addrs = interface["ipv6_secondaries"] if not system.is_management_supported(cidr_ok=False): continue if not host: # gotsta have some dns_name and ip or else! continue if host.find(".") == -1: continue # match the longest zone! # e.g. 
if you have a host a.b.c.d.e # if manage_forward_zones has: # - c.d.e # - b.c.d.e # then a.b.c.d.e should go in b.c.d.e best_match = '' for zone in zones.keys(): if re.search('\.%s$' % zone, host) and len(zone) > len(best_match): best_match = zone if best_match == '': # no match continue # strip the zone off the dns_name host = re.sub('\.%s$' % best_match, '', host) # Create a list of IP addresses for this host ips = [] if ip: ips.append(ip) if ipv6: ips.append(ipv6) if ipv6_sec_addrs: ips = ips + ipv6_sec_addrs if ips: try: zones[best_match][host] = ips + zones[best_match][host] except KeyError: zones[best_match][host] = ips return zones def __reverse_zones(self): """ Returns a map of zones and the records that belong in them """ zones = {} reverse_zones = self.settings.manage_reverse_zones if type(reverse_zones) != type([]): # gracefully handle when user inputs only a single zone # as a string instead of a list with only a single item reverse_zones = [reverse_zones] for zone in reverse_zones: # expand and IPv6 zones if ":" in zone: zone = (self.__expand_IPv6(zone + '::1'))[:19] zones[zone] = {} for sys in self.systems: for (name, interface) in sys.interfaces.iteritems(): host = interface["dns_name"] ip = interface["ip_address"] ipv6 = interface["ipv6_address"] ipv6_sec_addrs = interface["ipv6_secondaries"] if not sys.is_management_supported(cidr_ok=False): continue if not host or ( ( not ip ) and ( not ipv6) ): # gotsta have some dns_name and ip or else! continue if ip: # match the longest zone! # e.g. if you have an ip 1.2.3.4 # if manage_reverse_zones has: # - 1.2 # - 1.2.3 # then 1.2.3.4 should go in 1.2.3 best_match = '' for zone in zones.keys(): if re.search('^%s\.' % zone, ip) and len(zone) > len(best_match): best_match = zone if best_match != '': # strip the zone off the front of the ip # reverse the rest of the octets # append the remainder + dns_name ip = ip.replace(best_match, '', 1) if ip[0] == '.': # strip leading '.' if it's there ip = ip[1:] tokens = ip.split('.') tokens.reverse() ip = '.'.join(tokens) zones[best_match][ip] = host + '.' if ipv6 or ipv6_sec_addrs: ip6s = [] if ipv6: ip6s.append(ipv6) for each_ipv6 in ip6s + ipv6_sec_addrs: # convert the IPv6 address to long format long_ipv6 = self.__expand_IPv6(each_ipv6) # All IPv6 zones are forced to have the format # xxxx:xxxx:xxxx:xxxx zone = long_ipv6[:19] ipv6_host_part = long_ipv6[20:] tokens = list(re.sub(':', '', ipv6_host_part)) tokens.reverse() ip = '.'.join(tokens) zones[zone][ip] = host + '.' return zones def __write_named_conf(self): """ Write out the named.conf main config file from the template. """ settings_file = self.settings.bind_chroot_path + self.settings_file template_file = "/etc/cobbler/named.template" forward_zones = self.settings.manage_forward_zones reverse_zones = self.settings.manage_reverse_zones metadata = {'forward_zones': self.__forward_zones().keys(), 'reverse_zones': [], 'zone_include': ''} for zone in metadata['forward_zones']: txt = """ zone "%(zone)s." { type master; file "%(zone)s"; }; """ % {'zone': zone} metadata['zone_include'] = metadata['zone_include'] + txt for zone in self.__reverse_zones().keys(): # IPv6 zones are : delimited if ":" in zone: # if IPv6, assume xxxx:xxxx:xxxx:xxxx # 0123456789012345678 long_zone = (self.__expand_IPv6(zone + '::1'))[:19] tokens = list(re.sub(':', '', long_zone)) tokens.reverse() arpa = '.'.join(tokens) + '.ip6.arpa' else: # IPv4 address split by '.' 
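# Worked example (hedged, addresses are illustrative): a manage_reverse_zones
# entry "192.168.1" is written to the zone file "192.168.1" and served as
# "1.168.192.in-addr.arpa.", while an IPv6 entry expanded to
# "fd00:0001:0002:0003" becomes "3.0.0.0.2.0.0.0.1.0.0.0.0.0.d.f.ip6.arpa." --
# i.e. the tokens are simply reversed and suffixed:
#
#     '.'.join(reversed("192.168.1".split('.'))) + '.in-addr.arpa'
#     # -> '1.168.192.in-addr.arpa'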
tokens = zone.split('.') tokens.reverse() arpa = '.'.join(tokens) + '.in-addr.arpa' # metadata['reverse_zones'].append((zone, arpa)) txt = """ zone "%(arpa)s." { type master; file "%(zone)s"; }; """ % {'arpa': arpa, 'zone': zone} metadata['zone_include'] = metadata['zone_include'] + txt try: f2 = open(template_file,"r") except: raise CX(_("error reading template from file: %s") % template_file) template_data = "" template_data = f2.read() f2.close() if self.logger is not None: self.logger.info("generating %s" % settings_file) self.templar.render(template_data, metadata, settings_file, None) def __write_secondary_conf(self): """ Write out the secondary.conf secondary config file from the template. """ settings_file = self.settings.bind_chroot_path + '/etc/secondary.conf' template_file = "/etc/cobbler/secondary.template" forward_zones = self.settings.manage_forward_zones reverse_zones = self.settings.manage_reverse_zones metadata = {'forward_zones': self.__forward_zones().keys(), 'reverse_zones': [], 'zone_include': ''} for zone in metadata['forward_zones']: txt = """ zone "%(zone)s." { type slave; masters { %(master)s; }; file "data/%(zone)s"; }; """ % {'zone': zone, 'master': self.settings.bind_master} metadata['zone_include'] = metadata['zone_include'] + txt for zone in self.__reverse_zones().keys(): # IPv6 zones are : delimited if ":" in zone: # if IPv6, assume xxxx:xxxx:xxxx:xxxx for the zone # 0123456789012345678 long_zone = (self.__expand_IPv6(zone + '::1'))[:19] tokens = list(re.sub(':', '', long_zone)) tokens.reverse() arpa = '.'.join(tokens) + '.ip6.arpa' else: # IPv4 zones split by '.' tokens = zone.split('.') tokens.reverse() arpa = '.'.join(tokens) + '.in-addr.arpa' # metadata['reverse_zones'].append((zone, arpa)) txt = """ zone "%(arpa)s." 
{ type slave; masters { %(master)s; }; file "data/%(zone)s"; }; """ % {'arpa': arpa, 'zone': zone, 'master': self.settings.bind_master} metadata['zone_include'] = metadata['zone_include'] + txt try: f2 = open(template_file,"r") except: raise CX(_("error reading template from file: %s") % template_file) template_data = "" template_data = f2.read() f2.close() if self.logger is not None: self.logger.info("generating %s" % settings_file) self.templar.render(template_data, metadata, settings_file, None) def __ip_sort(self, ips): """ Sorts IP addresses (or partial addresses) in a numerical fashion per-octet or quartet """ quartets = [] octets = [] for each_ip in ips: # IPv6 addresses are ':' delimited if ":" in each_ip: # IPv6 # strings to integer quartet chunks so we can sort numerically quartets.append([int(i,16) for i in each_ip.split(':')]) else: # IPv4 # strings to integer octet chunks so we can sort numerically octets.append([int(i) for i in each_ip.split('.')]) quartets.sort() # integers back to four character hex strings quartets = map(lambda x: [format(i, '04x') for i in x], quartets) # octets.sort() # integers back to strings octets = map(lambda x: [str(i) for i in x], octets) # return ['.'.join(i) for i in octets] + [':'.join(i) for i in quartets] def __pretty_print_host_records(self, hosts, rectype='A', rclass='IN'): """ Format host records by order and with consistent indentation """ # Warns on hosts without dns_name, need to iterate over system to name the # particular system for system in self.systems: for (name, interface) in system.interfaces.iteritems(): if interface["dns_name"] == "": self.logger.info(("Warning: dns_name unspecified in the system: %s, while writing host records") % system.name) names = [k for k,v in hosts.iteritems()] if not names: return '' # zones with no hosts if rectype == 'PTR': names = self.__ip_sort(names) else: names.sort() max_name = max([len(i) for i in names]) s = "" for name in names: spacing = " " * (max_name - len(name)) my_name = "%s%s" % (name, spacing) my_host_record = hosts[name] my_host_list = [] if type( my_host_record ) is StringType: my_host_list = [ my_host_record ] else: my_host_list = my_host_record for my_host in my_host_list: my_rectype = rectype[:] if rectype == 'A': if ":" in my_host: my_rectype = 'AAAA' else: my_rectype = 'A ' s += "%s %s %s %s;\n" % (my_name, rclass, my_rectype, my_host) return s def __pretty_print_cname_records(self, hosts, rectype='CNAME'): """ Format CNAMEs and with consistent indentation """ s = "" # This loop warns and skips the host without dns_name instead of outright exiting # Which results in empty records without any warning to the users for system in self.systems: for (name, interface) in system.interfaces.iteritems(): cnames = interface.get("cnames", []) try: if interface.get("dns_name", "") != "": dnsname = interface["dns_name"].split('.')[0] for cname in cnames: s += "%s %s %s;\n" % (cname.split('.')[0], rectype, dnsname) else: self.logger.info(("Warning: dns_name unspecified in the system: %s, Skipped!, while writing cname records") % system.name) continue except: pass return s def __write_zone_files(self): """ Write out the forward and reverse zone files for all configured zones """ default_template_file = "/etc/cobbler/zone.template" cobbler_server = self.settings.server #this could be a config option too serial_filename="/var/lib/cobbler/bind_serial" #need a counter for new bind format serial = time.strftime("%Y%m%d00") try: serialfd = open(serial_filename,"r") old_serial = serialfd.readline() 
#same date if serial[0:8] == old_serial[0:8]: if int(old_serial[8:10]) < 99 : serial= "%s%.2i" % (serial[0:8],int(old_serial[8:10]) +1) else: pass serialfd.close() except: pass serialfd = open(serial_filename,"w") serialfd.write(serial) serialfd.close() forward = self.__forward_zones() reverse = self.__reverse_zones() try: f2 = open(default_template_file,"r") except: raise CX(_("error reading template from file: %s") % default_template_file) default_template_data = "" default_template_data = f2.read() f2.close() zonefileprefix = self.settings.bind_chroot_path + self.zonefile_base for (zone, hosts) in forward.iteritems(): metadata = { 'cobbler_server': cobbler_server, 'serial': serial, 'zonetype': 'forward', 'cname_record': '', 'host_record': '' } if ":" in zone: long_zone = (self.__expand_IPv6(zone + '::1'))[:19] tokens = list(re.sub(':', '', long_zone)) tokens.reverse() zone_origin = '.'.join(tokens) + '.ip6.arpa.' else: zone_origin = '' # grab zone-specific template if it exists try: fd = open('/etc/cobbler/zone_templates/%s' % zone) # If this is an IPv6 zone, set the origin to the zone for this # template if zone_origin: template_data = "\$ORIGIN " + zone_origin + "\n" + fd.read() else: template_data = fd.read() fd.close() except: # If this is an IPv6 zone, set the origin to the zone for this # template if zone_origin: template_data = "\$ORIGIN " + zone_origin + "\n" + default_template_data else: template_data = default_template_data metadata['cname_record'] = self.__pretty_print_cname_records(hosts) metadata['host_record'] = self.__pretty_print_host_records(hosts) zonefilename=zonefileprefix + zone if self.logger is not None: self.logger.info("generating (forward) %s" % zonefilename) self.templar.render(template_data, metadata, zonefilename, None) for (zone, hosts) in reverse.iteritems(): metadata = { 'cobbler_server': cobbler_server, 'serial': serial, 'zonetype': 'reverse', 'cname_record': '', 'host_record': '' } # grab zone-specific template if it exists try: fd = open('/etc/cobbler/zone_templates/%s' % zone) template_data = fd.read() fd.close() except: template_data = default_template_data metadata['cname_record'] = self.__pretty_print_cname_records(hosts) metadata['host_record'] = self.__pretty_print_host_records(hosts, rectype='PTR') zonefilename=zonefileprefix + zone if self.logger is not None: self.logger.info("generating (reverse) %s" % zonefilename) self.templar.render(template_data, metadata, zonefilename, None) def write_dns_files(self): """ BIND files are written when manage_dns is set in /var/lib/cobbler/settings. """ self.__write_named_conf() self.__write_secondary_conf() self.__write_zone_files() def get_manager(config,logger): return BindManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_dnsmasq.py000066400000000000000000000162231227367477500215660ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import time import sys import glob import traceback import errno from shlex import shlex import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ def register(): return "manage" class DnsmasqManager: """ Handles conversion of internal state to the tftpboot tree layout """ def __init__(self,config,logger,dhcp=None): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) def what(self): return "dnsmasq" def write_dhcp_lease(self,port,host,ip,mac): pass def remove_dhcp_lease(self,port,host): pass def write_dhcp_file(self): """ DHCP files are written when manage_dhcp is set in /var/lib/cobbler/settings. """ settings_file = "/etc/dnsmasq.conf" template_file = "/etc/cobbler/dnsmasq.template" try: f2 = open(template_file,"r") except: raise CX(_("error writing template to file: %s") % template_file) template_data = "" template_data = f2.read() f2.close() # build each per-system definition # as configured, this only works for ISC, patches accepted # from those that care about Itanium. elilo seems to be unmaintained # so additional maintaince in other areas may be required to keep # this working. elilo = "/var/lib/cobbler/elilo-3.6-ia64.efi" system_definitions = {} counter = 0 # we used to just loop through each system, but now we must loop # through each network interface of each system. for system in self.systems: if not system.is_management_supported(cidr_ok=False): continue profile = system.get_conceptual_parent() distro = profile.get_conceptual_parent() for (name, interface) in system.interfaces.iteritems(): mac = interface["mac_address"] ip = interface["ip_address"] host = interface["dns_name"] if mac is None or mac == "": # can't write a DHCP entry for this system continue counter = counter + 1 # In many reallife situations there is a need to control the IP address # and hostname for a specific client when only the MAC address is available. # In addition to that in some scenarios there is a need to explicitly # label a host with the applicable architecture in order to correctly # handle situations where we need something other than pxelinux.0. 
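# As an aside, a hedged illustration of the line this produces; the helper and
# all values here are hypothetical and not part of this module:
def _example_dhcp_host(arch, mac, host=None, ip=None):
    parts = ["net:" + arch.lower(), mac]
    if host:
        parts.append(host)
    if ip:
        parts.append(ip)
    return "dhcp-host=" + ",".join(parts) + "\n"
# _example_dhcp_host("x86_64", "AA:BB:CC:DD:EE:FF", "node1", "192.168.1.50")
# -> "dhcp-host=net:x86_64,AA:BB:CC:DD:EE:FF,node1,192.168.1.50\n"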
# So we always write a dhcp-host entry with as much info as possible # to allow maximum control and flexibility within the dnsmasq config systxt = "dhcp-host=net:" + distro.arch.lower() + "," + mac if host is not None and host != "": systxt = systxt + "," + host if ip is not None and ip != "": systxt = systxt + "," + ip systxt = systxt + "\n" dhcp_tag = interface["dhcp_tag"] if dhcp_tag == "": dhcp_tag = "default" if not system_definitions.has_key(dhcp_tag): system_definitions[dhcp_tag] = "" system_definitions[dhcp_tag] = system_definitions[dhcp_tag] + systxt # we are now done with the looping through each interface of each system metadata = { "insert_cobbler_system_definitions" : system_definitions.get("default",""), "date" : time.asctime(time.gmtime()), "cobbler_server" : self.settings.server, "next_server" : self.settings.next_server, "elilo" : elilo } # now add in other DHCP expansions that are not tagged with "default" for x in system_definitions.keys(): if x == "default": continue metadata["insert_cobbler_system_definitions_%s" % x] = system_definitions[x] self.templar.render(template_data, metadata, settings_file, None) def regen_ethers(self): # dnsmasq knows how to read this database of MACs -> IPs, so we'll keep it up to date # every time we add a system. # read 'man ethers' for format info fh = open("/etc/ethers","w+") for system in self.systems: if not system.is_management_supported(cidr_ok=False): continue for (name, interface) in system.interfaces.iteritems(): mac = interface["mac_address"] ip = interface["ip_address"] if mac is None or mac == "": # can't write this w/o a MAC address continue if ip is not None and ip != "": fh.write(mac.upper() + "\t" + ip + "\n") fh.close() def regen_hosts(self): # dnsmasq knows how to read this database for host info # (other things may also make use of this later) fh = open("/var/lib/cobbler/cobbler_hosts","w+") for system in self.systems: if not system.is_management_supported(cidr_ok=False): continue for (name, interface) in system.interfaces.iteritems(): mac = interface["mac_address"] host = interface["dns_name"] ip = interface["ip_address"] if mac is None or mac == "": continue if host is not None and host != "" and ip is not None and ip != "": fh.write(ip + "\t" + host + "\n") fh.close() def write_dns_files(self): # already taken care of by the regen_hosts() pass def get_manager(config,logger): return DnsmasqManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_import_debian_ubuntu.py000066400000000000000000000667441227367477500243530ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import glob import traceback import errno import re import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ # Import aptsources module if available to obtain repo mirror. try: from aptsources import distro from aptsources import sourceslist apt_available = True except: apt_available = False def register(): """ The mandatory cobbler module registration hook. """ return "manage/import" class ImportDebianUbuntuManager: def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) # required function for import modules def what(self): return "import/debian_ubuntu" # required function for import modules def check_for_signature(self,path,cli_breed): signatures = [ 'pool', 'main/debian-installer', ] #self.logger.info("scanning %s for a debian/ubuntu distro signature" % path) for signature in signatures: d = os.path.join(path,signature) if os.path.exists(d): self.logger.info("Found a debian/ubuntu compatible signature: %s" % signature) return (True,signature) if cli_breed and cli_breed in self.get_valid_breeds(): self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path) return (True,None) return (False,None) # required function for import modules def run(self,pkgdir,name,path,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None): self.pkgdir = pkgdir self.name = name self.network_root = network_root self.kickstart_file = kickstart_file self.rsync_flags = rsync_flags self.arch = arch self.breed = breed self.os_version = os_version self.path = path # some fixups for the XMLRPC interface, which does not use "None" if self.arch == "": self.arch = None if self.kickstart_file == "": self.kickstart_file = None if self.os_version == "": self.os_version = None if self.rsync_flags == "": self.rsync_flags = None if self.network_root == "": self.network_root = None # If no breed was specified on the command line, figure it out if self.breed == None: self.breed = self.get_breed_from_directory() if not self.breed: utils.die(self.logger,"import failed - could not determine breed of debian-based distro") # if we're going to do any copying, set where to put things # and then make sure nothing is already there. 
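# A minimal stand-alone sketch of the signature test performed by
# check_for_signature() above: a tree only qualifies as debian/ubuntu when one
# of the well-known repository paths exists beneath it. The helper name is illustrative.
import os
def _looks_like_debian_ubuntu_tree(path):
    for signature in ("pool", "main/debian-installer"):
        if os.path.exists(os.path.join(path, signature)):
            return signature
    return None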
# import takes a --kickstart for forcing selection that can't be used in all circumstances if self.kickstart_file and not self.breed: utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected") if self.os_version and not self.breed: utils.die(self.logger,"OS version can only be specified when a specific breed is selected") if self.breed and self.breed.lower() not in self.get_valid_breeds(): utils.die(self.logger,"Supplied import breed is not supported by this module") # now walk the filesystem looking for distributions that match certain patterns self.logger.info("adding distros") distros_added = [] # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST os.path.walk(self.path, self.distro_adder, distros_added) if len(distros_added) == 0: self.logger.warning("No distros imported, bailing out") return False # find out if we can auto-create any repository records from the install tree if self.network_root is None: self.logger.info("associating repos") # FIXME: this automagic is not possible (yet) without mirroring self.repo_finder(distros_added) # find the most appropriate answer files for each profile object self.logger.info("associating kickstarts") self.kickstart_finder(distros_added) # ensure bootloaders are present self.api.pxegen.copy_bootloaders() return True # required function for import modules def get_valid_arches(self): return ["i386", "ppc", "x86_64", "x86", "arm",] # required function for import modules def get_valid_breeds(self): return ["debian","ubuntu"] # required function for import modules def get_valid_os_versions(self): if self.breed == "debian": return utils.get_valid_os_versions_for_breed("debian") elif self.breed == "ubuntu": return utils.get_valid_os_versions_for_breed("ubuntu") else: return [] def get_valid_repo_breeds(self): return ["apt",] def get_release_files(self): """ Find distro release packages. """ return glob.glob(os.path.join(self.get_rootdir(), "dists/*")) def get_breed_from_directory(self): for breed in self.get_valid_breeds(): # NOTE : Although we break the loop after the first match, # multiple debian derived distros can actually live at the same pool -- JP d = os.path.join(self.path, breed) if (os.path.islink(d) and os.path.isdir(d) and os.path.realpath(d) == os.path.realpath(self.path)) or os.path.basename(self.path) == breed: return breed else: return None def get_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dists_path = os.path.join(self.path, "dists") if os.path.isdir(dists_path): tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.name) else: tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name) self.set_install_tree(distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. 
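# Hedged sketch of the path arithmetic described above; utils.path_tail is
# assumed to behave roughly like this simplified stand-in.
def _path_tail(apath, bpath):
    # return the portion of bpath that lies below apath, or "" if it does not
    if not bpath.startswith(apath):
        return ""
    return bpath[len(apath):]
# e.g. _path_tail("/var/www/cobbler/ks_mirror/x", "/var/www/cobbler/ks_mirror/x/dists") -> "/dists"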
tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree(distro, tree) return def repo_finder(self, distros_added): for distro in distros_added: self.logger.info("traversing distro %s" % distro.name) # FIXME : Shouldn't decide this the value of self.network_root ? if distro.kernel.find("ks_mirror") != -1: basepath = os.path.dirname(distro.kernel) top = self.get_rootdir() self.logger.info("descent into %s" % top) dists_path = os.path.join(self.path, "dists") if not os.path.isdir(dists_path): self.process_repos(self, distro) else: self.logger.info("this distro isn't mirrored") def get_repo_mirror_from_apt(self): """ This tries to determine the apt mirror/archive to use (when processing repos) if the host machine is Debian or Ubuntu. """ try: sources = sourceslist.SourcesList() release = distro.get_distro() release.get_sources(sources) mirrors = release.get_server_list() for mirror in mirrors: if mirror[2] == True: mirror = mirror[1] break except: return False return mirror def distro_adder(self,distros_added,dirname,fnames): """ This is an os.path.walk routine that finds distributions in the directory to be scanned and then creates them. """ # FIXME: If there are more than one kernel or initrd image on the same directory, # results are unpredictable initrd = None kernel = None for x in fnames: adtls = [] fullname = os.path.join(dirname,x) if os.path.islink(fullname) and os.path.isdir(fullname): if fullname.startswith(self.path): self.logger.warning("avoiding symlink loop") continue self.logger.info("following symlink: %s" % fullname) os.path.walk(fullname, self.distro_adder, distros_added) if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") or x.startswith("vmkboot.gz") or x.startswith("uInitrd") ) and x != "initrd.size": initrd = os.path.join(dirname,x) if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") or x.startswith("mboot.c32") or x.startswith("uImage") ) and x.find("initrd") == -1: kernel = os.path.join(dirname,x) # if we've collected a matching kernel and initrd pair, turn the in and add them to the list if initrd is not None and kernel is not None: adtls.append(self.add_entry(dirname,kernel,initrd)) kernel = None initrd = None for adtl in adtls: distros_added.extend(adtl) def add_entry(self,dirname,kernel,initrd): """ When we find a directory with a valid kernel/initrd in it, create the distribution objects as appropriate and save them. This includes creating xen and rescue distros/profiles if possible. 
""" proposed_name = self.get_proposed_name(dirname,kernel) proposed_arch = self.get_proposed_arch(dirname) if self.arch and proposed_arch and self.arch != proposed_arch: self.logger.error("Arch from pathname (%s) does not match with supplied one (%s)"%(proposed_arch,self.arch)) return archs = self.learn_arch_from_tree() if not archs: if self.arch: archs.append( self.arch ) else: if self.arch and self.arch not in archs: utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir())) if proposed_arch: if archs and proposed_arch not in archs: self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir())) return archs = [ proposed_arch ] if len(archs)>1: self.logger.warning("- Warning : Multiple archs found : %s" % (archs)) distros_added = [] for pxe_arch in archs: name = proposed_name + "-" + pxe_arch existing_distro = self.distros.find(name=name) if existing_distro is not None: self.logger.warning("skipping import, as distro name already exists: %s" % name) continue else: self.logger.info("creating new distro: %s" % name) distro = self.config.new_distro() if name.find("-autoboot") != -1: # this is an artifact of some EL-3 imports continue distro.set_name(name) distro.set_kernel(kernel) distro.set_initrd(initrd) distro.set_arch(pxe_arch) distro.set_breed(self.breed) # If a version was supplied on command line, we set it now if self.os_version: distro.set_os_version(self.os_version) self.distros.add(distro,save=True) distros_added.append(distro) existing_profile = self.profiles.find(name=name) # see if the profile name is already used, if so, skip it and # do not modify the existing profile if existing_profile is None: self.logger.info("creating new profile: %s" % name) #FIXME: The created profile holds a default kickstart, and should be breed specific profile = self.config.new_profile() else: self.logger.info("skipping existing profile, name already exists: %s" % name) continue # save our minimal profile which just points to the distribution and a good # default answer file profile.set_name(name) profile.set_distro(name) profile.set_kickstart(self.kickstart_file) # depending on the name of the profile we can define a good virt-type # for usage with koan if name.find("-xen") != -1: profile.set_virt_type("xenpv") elif name.find("vmware") != -1: profile.set_virt_type("vmware") else: profile.set_virt_type("qemu") # save our new profile to the collection self.profiles.add(profile,save=True) return distros_added def get_proposed_name(self,dirname,kernel=None): """ Given a directory name where we have a kernel/initrd pair, try to autoname the distribution (and profile) object based on the contents of that path """ if self.network_root is not None: name = self.name #+ "-".join(utils.path_tail(os.path.dirname(self.path),dirname).split("/")) else: # remove the part that says /var/www/cobbler/ks_mirror/name name = "-".join(dirname.split("/")[5:]) if kernel is not None and kernel.find("PAE") != -1: name = name + "-PAE" # These are all Ubuntu's doing, the netboot images are buried pretty # deep. 
;-) -JC name = name.replace("-netboot","") name = name.replace("-ubuntu-installer","") name = name.replace("-amd64","") name = name.replace("-i386","") # we know that some kernel paths should not be in the name name = name.replace("-images","") name = name.replace("-pxeboot","") name = name.replace("-install","") name = name.replace("-isolinux","") # some paths above the media root may have extra path segments we want # to clean up name = name.replace("-os","") name = name.replace("-tree","") name = name.replace("var-www-cobbler-", "") name = name.replace("ks_mirror-","") name = name.replace("--","-") # remove any architecture name related string, as real arch will be appended later name = name.replace("chrp","ppc64") for separator in [ '-' , '_' , '.' ] : for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd", "arm" ]: name = name.replace("%s%s" % ( separator , arch ),"") return name def get_proposed_arch(self,dirname): """ Given an directory name, can we infer an architecture from a path segment? """ if dirname.find("x86_64") != -1 or dirname.find("amd") != -1: return "x86_64" if dirname.find("ia64") != -1: return "ia64" if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1: return "i386" if dirname.find("s390x") != -1: return "s390x" if dirname.find("s390") != -1: return "s390" if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1: return "ppc64" if dirname.find("ppc32") != -1: return "ppc" if dirname.find("ppc") != -1: return "ppc" return None def arch_walker(self,foo,dirname,fnames): """ See docs on learn_arch_from_tree. The TRY_LIST is used to speed up search, and should be dropped for default importer Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem This method is useful to get the archs, but also to package type and a raw guess of the breed """ # try to find a kernel header RPM and then look at it's arch. for x in fnames: if self.match_kernelarch_file(x): for arch in self.get_valid_arches(): if x.find(arch) != -1: foo[arch] = 1 for arch in [ "i686" , "amd64" ]: if x.find(arch) != -1: foo[arch] = 1 def kickstart_finder(self,distros_added): """ For all of the profiles in the config w/o a kickstart, use the given kickstart file, or look at the kernel path, from that, see if we can guess the distro, and if we can, assign a kickstart if one is available for it. """ for profile in self.profiles: distro = self.distros.find(name=profile.get_conceptual_parent().name) if distro is None or not (distro in distros_added): continue kdir = os.path.dirname(distro.kernel) if self.kickstart_file == None: for file in self.get_release_files(): results = self.scan_pkg_filename(file) # FIXME : If os is not found on tree but set with CLI, no kickstart is searched if results is None: self.logger.warning("skipping %s" % file) continue (flavor, major, minor, release) = results # Why use set_variance()? scan_pkg_filename() does everything we need now - jcammarata #version , ks = self.set_variance(flavor, major, minor, distro.arch) if self.os_version: if self.os_version != flavor: utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,flavor)) distro.set_comment("%s %s (%s.%s.%s) %s" % (self.breed,flavor,major,minor,release,self.arch)) distro.set_os_version(flavor) # is this even valid for debian/ubuntu? 
- jcammarata #ds = self.get_datestamp() #if ds is not None: # distro.set_tree_build_time(ds) profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.seed") self.profiles.add(profile,save=True) self.configure_tree_location(distro) self.distros.add(distro,save=True) # re-save self.api.serialize() def configure_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dists_path = os.path.join( self.path , "dists" ) if os.path.isdir( dists_path ): tree = "http://@@http_server@@/cblr/ks_mirror/%s" % (self.name) else: tree = "http://@@http_server@@/cblr/repo_mirror/%s" % (distro.name) self.set_install_tree(distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree(distro, tree) def get_rootdir(self): return self.path def get_pkgdir(self): if not self.pkgdir: return None return os.path.join(self.get_rootdir(),self.pkgdir) def set_install_tree(self, distro, url): distro.ks_meta["tree"] = url def learn_arch_from_tree(self): """ If a distribution is imported from DVD, there is a good chance the path doesn't contain the arch and we should add it back in so that it's part of the meaningful name ... so this code helps figure out the arch name. This is important for producing predictable distro names (and profile names) from differing import sources """ result = {} # FIXME : this is called only once, should not be a walk if self.get_pkgdir(): os.path.walk(self.get_pkgdir(), self.arch_walker, result) if result.pop("amd64",False): result["x86_64"] = 1 if result.pop("i686",False): result["i386"] = 1 if result.pop("x86",False): result["i386"] = 1 return result.keys() def match_kernelarch_file(self, filename): """ Is the given filename a kernel filename? """ if not filename.endswith("deb"): return False if filename.startswith("linux-headers-"): return True return False def scan_pkg_filename(self, file): """ Determine what the distro is based on the release package filename. """ dist_names = utils.get_valid_os_versions_for_breed(self.breed) if os.path.basename(file) in dist_names: release_file = os.path.join(file,'Release') self.logger.info("Found %s release file: %s" % (self.breed,release_file)) f = open(release_file,'r') lines = f.readlines() f.close() for line in lines: if line.lower().startswith('version: '): version = line.split(':')[1].strip() values = version.split('.') if len(values) == 1: # I don't think you'd ever hit this currently with debian or ubuntu, # just including it for safety reasons return (os.path.basename(file), values[0], "0", "0") elif len(values) == 2: return (os.path.basename(file), values[0], values[1], "0") elif len(values) > 2: return (os.path.basename(file), values[0], values[1], values[2]) return None def get_datestamp(self): """ Not used for debian/ubuntu... should probably be removed? - jcammarata """ pass def set_variance(self, flavor, major, minor, arch): """ Set distro specific versioning. 
""" # I don't think this is required anymore, as the scan_pkg_filename() function # above does everything we need it to - jcammarata # #if self.breed == "debian": # dist_names = { '4.0' : "etch" , '5.0' : "lenny" } # dist_vers = "%s.%s" % ( major , minor ) # os_version = dist_names[dist_vers] # # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" #elif self.breed == "ubuntu": # # Release names taken from wikipedia # dist_names = { '6.4' :"dapper", # '8.4' :"hardy", # '8.10' :"intrepid", # '9.4' :"jaunty", # '9.10' :"karmic", # '10.4' :"lynx", # '10.10':"maverick", # '11.4' :"natty", # } # dist_vers = "%s.%s" % ( major , minor ) # if not dist_names.has_key( dist_vers ): # dist_names['4ubuntu2.0'] = "IntrepidIbex" # os_version = dist_names[dist_vers] # # return os_version , "/var/lib/cobbler/kickstarts/sample.seed" #else: # return None pass def process_repos(self, main_importer, distro): # Create a disabled repository for the new distro, and the security updates # # NOTE : We cannot use ks_meta nor os_version because they get fixed at a later stage # Obtain repo mirror from APT if available mirror = False if apt_available: # Example returned URL: http://us.archive.ubuntu.com/ubuntu mirror = self.get_repo_mirror_from_apt() if mirror: mirror = mirror + "/dists" if not mirror: mirror = "http://archive.ubuntu.com/ubuntu/dists/" repo = item_repo.Repo(main_importer.config) repo.set_breed( "apt" ) repo.set_arch( distro.arch ) repo.set_keep_updated( False ) repo.yumopts["--ignore-release-gpg"] = None repo.yumopts["--verbose"] = None repo.set_name( distro.name ) repo.set_os_version( distro.os_version ) if distro.breed == "ubuntu": repo.set_mirror( "%s/%s" % (mirror, distro.os_version) ) else: # NOTE : The location of the mirror should come from timezone repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , distro.os_version ) ) security_repo = item_repo.Repo(main_importer.config) security_repo.set_breed( "apt" ) security_repo.set_arch( distro.arch ) security_repo.set_keep_updated( False ) security_repo.yumopts["--ignore-release-gpg"] = None security_repo.yumopts["--verbose"] = None security_repo.set_name( distro.name + "-security" ) security_repo.set_os_version( distro.os_version ) # There are no official mirrors for security updates if distro.breed == "ubuntu": security_repo.set_mirror( "%s/%s-security" % (mirror, distro.os_version) ) else: security_repo.set_mirror( "http://security.debian.org/debian-security/dists/%s/updates" % distro.os_version ) self.logger.info("Added repos for %s" % distro.name) repos = main_importer.config.repos() repos.add(repo,save=True) repos.add(security_repo,save=True) # ========================================================================== def get_import_manager(config,logger): return ImportDebianUbuntuManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_import_freebsd.py000066400000000000000000000653171227367477500231340ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import time import sys import glob import re import traceback import errno from utils import popen2 from shlex import shlex import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ def register(): """ The mandatory cobbler module registration hook. """ return "manage/import" class ImportFreeBSDManager: def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) # required function for import modules def what(self): return "import/freebsd" # required function for import modules def check_for_signature(self,path,cli_breed): signatures = [ 'etc/freebsd-update.conf', 'boot/frames.4th', '8.2-RELEASE', ] #self.logger.info("scanning %s for a redhat-based distro signature" % path) for signature in signatures: d = os.path.join(path,signature) if os.path.exists(d): self.logger.info("Found a freebsd compatible signature: %s" % signature) return (True,signature) if cli_breed and cli_breed in self.get_valid_breeds(): self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path) return (True,None) return (False,None) # required function for import modules def run(self,pkgdir,name,path,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None): self.pkgdir = pkgdir self.network_root = network_root self.kickstart_file = kickstart_file self.rsync_flags = rsync_flags self.arch = arch self.breed = breed self.os_version = os_version self.name = name self.path = path self.rootdir = path # some fixups for the XMLRPC interface, which does not use "None" if self.arch == "": self.arch = None if self.kickstart_file == "": self.kickstart_file = None if self.os_version == "": self.os_version = None if self.rsync_flags == "": self.rsync_flags = None if self.network_root == "": self.network_root = None # If no breed was specified on the command line, set it to "freebsd" for this module if self.breed == None: self.breed = "freebsd" # import takes a --kickstart for forcing selection that can't be used in all circumstances if self.kickstart_file and not self.breed: utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected") if self.os_version and not self.breed: utils.die(self.logger,"OS version can only be specified when a specific breed is selected") if self.breed and self.breed.lower() not in self.get_valid_breeds(): utils.die(self.logger,"Supplied import breed is not supported by this module") # now walk the filesystem looking for distributions that match certain patterns self.logger.info("adding distros") distros_added = [] # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST os.path.walk(self.path, self.distro_adder, distros_added) if len(distros_added) == 0: self.logger.warning("No distros imported, bailing out") return False # find out if we can auto-create any repository records from the install tree if self.network_root is 
None: self.logger.info("associating repos") # FIXME: this automagic is not possible (yet) without mirroring self.repo_finder(distros_added) # find the most appropriate answer files for each profile object self.logger.info("associating answerfiles") self.kickstart_finder(distros_added) # ensure bootloaders are present self.api.pxegen.copy_bootloaders() return True # required function for import modules def get_valid_arches(self): return ["i386", "ia64", "ppc", "ppc64", "s390", "s390x", "x86_64", "x86",] # required function for import modules def get_valid_breeds(self): return ["freebsd",] # required function for import modules def get_valid_os_versions(self): return utils.get_valid_os_versions_for_breed("freebsd") def get_valid_repo_breeds(self): return ["rsync", "rhn", "yum",] def get_release_files(self): data = glob.glob(os.path.join(self.get_rootdir(), "*RELEASE")) data2 = [] for x in data: b = os.path.basename(x) for valid_os in self.get_valid_os_versions(): if b.find(valid_os) != -1: data2.append(x) return data2 def get_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... self.logger.warning("symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } # how we set the tree depends on whether an explicit network_root was specified tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree(distro, tree) else: # where we assign the answerfile source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = self.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree(distro, tree) return def repo_finder(self, distros_added): """ This routine looks through all distributions and tries to find any applicable repositories in those distributions for post-install usage. """ for distro in distros_added: self.logger.info("traversing distro %s" % distro.name) # FIXME : Shouldn't decide this the value of self.network_root ? if distro.kernel.find("ks_mirror") != -1: basepath = os.path.dirname(distro.kernel) top = self.get_rootdir() self.logger.info("descent into %s" % top) # FIXME : The location of repo definition is known from breed os.path.walk(top, self.repo_scanner, distro) else: self.logger.info("this distro isn't mirrored") def repo_scanner(self,distro,dirname,fnames): """ This is an os.path.walk routine that looks for potential yum repositories to be added to the configuration for post-install usage. 
""" matches = {} for x in fnames: if x == "base" or x == "repodata": self.logger.info("processing repo at : %s" % dirname) # only run the repo scanner on directories that contain a comps.xml gloob1 = glob.glob("%s/%s/*comps*.xml" % (dirname,x)) if len(gloob1) >= 1: if matches.has_key(dirname): self.logger.info("looks like we've already scanned here: %s" % dirname) continue self.logger.info("need to process repo/comps: %s" % dirname) self.process_comps_file(dirname, distro) matches[dirname] = 1 else: self.logger.info("directory %s is missing xml comps file, skipping" % dirname) continue def process_comps_file(self, comps_path, distro): """ When importing Fedora/EL certain parts of the install tree can also be used as yum repos containing packages that might not yet be available via updates in yum. This code identifies those areas. """ processed_repos = {} masterdir = "repodata" if not os.path.exists(os.path.join(comps_path, "repodata")): # older distros... masterdir = "base" # figure out what our comps file is ... self.logger.info("looking for %(p1)s/%(p2)s/*comps*.xml" % { "p1" : comps_path, "p2" : masterdir }) files = glob.glob("%s/%s/*comps*.xml" % (comps_path, masterdir)) if len(files) == 0: self.logger.info("no comps found here: %s" % os.path.join(comps_path, masterdir)) return # no comps xml file found # pull the filename from the longer part comps_file = files[0].split("/")[-1] try: # store the yum configs on the filesystem so we can use them later. # and configure them in the answerfile post, etc counter = len(distro.source_repos) # find path segment for yum_url (changing filesystem path to http:// trailing fragment) seg = comps_path.rfind("ks_mirror") urlseg = comps_path[seg+10:] # write a yum config file that shows how to use the repo. if counter == 0: dotrepo = "%s.repo" % distro.name else: dotrepo = "%s-%s.repo" % (distro.name, counter) fname = os.path.join(self.settings.webdir, "ks_mirror", "config", "%s-%s.repo" % (distro.name, counter)) repo_url = "http://@@http_server@@/cobbler/ks_mirror/config/%s-%s.repo" % (distro.name, counter) repo_url2 = "http://@@http_server@@/cobbler/ks_mirror/%s" % (urlseg) distro.source_repos.append([repo_url,repo_url2]) # NOTE: the following file is now a Cheetah template, so it can be remapped # during sync, that's why we have the @@http_server@@ left as templating magic. # repo_url2 is actually no longer used. (?) config_file = open(fname, "w+") config_file.write("[core-%s]\n" % counter) config_file.write("name=core-%s\n" % counter) config_file.write("baseurl=http://@@http_server@@/cobbler/ks_mirror/%s\n" % (urlseg)) config_file.write("enabled=1\n") config_file.write("gpgcheck=0\n") config_file.write("priority=$yum_distro_priority\n") config_file.close() # don't run creatrepo twice -- this can happen easily for Xen and PXE, when # they'll share same repo files. if not processed_repos.has_key(comps_path): utils.remove_yum_olddata(comps_path) #cmd = "createrepo --basedir / --groupfile %s %s" % (os.path.join(comps_path, masterdir, comps_file), comps_path) cmd = "createrepo %s --groupfile %s %s" % (self.settings.createrepo_flags,os.path.join(comps_path, masterdir, comps_file), comps_path) utils.subprocess_call(self.logger, cmd, shell=True) processed_repos[comps_path] = 1 # for older distros, if we have a "base" dir parallel with "repodata", we need to copy comps.xml up one... 
p1 = os.path.join(comps_path, "repodata", "comps.xml") p2 = os.path.join(comps_path, "base", "comps.xml") if os.path.exists(p1) and os.path.exists(p2): shutil.copyfile(p1,p2) except: self.logger.error("error launching createrepo (not installed?), ignoring") utils.log_exc(self.logger) def distro_adder(self,distros_added,dirname,fnames): """ This is an os.path.walk routine that finds distributions in the directory to be scanned and then creates them. """ initrd = None kernel = None for x in fnames: adtls = [] fullname = os.path.join(dirname,x) if os.path.islink(fullname) and os.path.isdir(fullname): if fullname.startswith(self.path): self.logger.warning("avoiding symlink loop") continue self.logger.info("following symlink: %s" % fullname) os.path.walk(fullname, self.distro_adder, distros_added) if x == "mfsroot.gz": initrd = os.path.join(dirname,x) if x == "pxeboot" or x == "pxeboot.bs": kernel = os.path.join(dirname,x) # if we've collected a matching kernel and initrd pair, turn the in and add them to the list if initrd is not None and kernel is not None: adtls.append(self.add_entry(dirname,kernel,initrd)) kernel = None initrd = None for adtl in adtls: distros_added.extend(adtl) def add_entry(self,dirname,kernel,initrd): """ When we find a directory with a valid kernel/initrd in it, create the distribution objects as appropriate and save them. This includes creating xen and rescue distros/profiles if possible. """ proposed_name = self.get_proposed_name(dirname,kernel) proposed_arch = self.get_proposed_arch(dirname) if self.arch and proposed_arch and self.arch != proposed_arch: utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch)) archs = self.learn_arch_from_tree() if not archs: if self.arch: archs.append( self.arch ) else: if self.arch and self.arch not in archs: utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_rootdir())) if proposed_arch: if archs and proposed_arch not in archs: self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_rootdir())) return archs = [ proposed_arch ] if len(archs)>1: self.logger.warning("- Warning : Multiple archs found : %s" % (archs)) distros_added = [] for pxe_arch in archs: name = proposed_name + "-" + pxe_arch existing_distro = self.distros.find(name=name) if existing_distro is not None: self.logger.warning("skipping import, as distro name already exists: %s" % name) continue else: self.logger.info("creating new distro: %s" % name) distro = self.config.new_distro() distro.set_name(name) distro.set_kernel(kernel) distro.set_initrd(initrd) distro.set_arch(pxe_arch) distro.set_breed(self.breed) # If a version was supplied on command line, we set it now if self.os_version: distro.set_os_version(self.os_version) self.distros.add(distro,save=True) distros_added.append(distro) existing_profile = self.profiles.find(name=name) # see if the profile name is already used, if so, skip it and # do not modify the existing profile if existing_profile is None: self.logger.info("creating new profile: %s" % name) #FIXME: The created profile holds a default answerfile, and should be breed specific profile = self.config.new_profile() else: self.logger.info("skipping existing profile, name already exists: %s" % name) continue # save our minimal profile which just points to the distribution and a good # default answer file profile.set_name(name) profile.set_distro(name) profile.set_kickstart(self.kickstart_file) 
profile.set_virt_type("vmware") # save our new profile to the collection self.profiles.add(profile,save=True) return distros_added def get_proposed_name(self,dirname,kernel=None): """ Given a directory name where we have a kernel/initrd pair, try to autoname the distribution (and profile) object based on the contents of that path """ if self.network_root is not None: name = self.name + "-".join(self.path_tail(os.path.dirname(self.path),dirname).split("/")) else: # remove the part that says /var/www/cobbler/ks_mirror/name name = "-".join(dirname.split("/")[5:]) # Clean up name name = name.replace("-boot","") for separator in [ '-' , '_' , '.' ] : for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]: name = name.replace("%s%s" % ( separator , arch ),"") return name def get_proposed_arch(self,dirname): """ Given an directory name, can we infer an architecture from a path segment? """ if dirname.find("x86_64") != -1 or dirname.find("amd") != -1: return "x86_64" if dirname.find("ia64") != -1: return "ia64" if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1: return "i386" if dirname.find("s390x") != -1: return "s390x" if dirname.find("s390") != -1: return "s390" if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1: return "ppc64" if dirname.find("ppc32") != -1: return "ppc" if dirname.find("ppc") != -1: return "ppc" return None def arch_walker(self,foo,dirname,fnames): """ See docs on learn_arch_from_tree. The TRY_LIST is used to speed up search, and should be dropped for default importer Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem This method is useful to get the archs, but also to package type and a raw guess of the breed """ # try to find a kernel header RPM and then look at it's arch. for x in fnames: if self.match_kernelarch_file(x): for arch in self.get_valid_arches(): if x.find(arch) != -1: foo[arch] = 1 for arch in [ "i686" , "amd64" ]: if x.find(arch) != -1: foo[arch] = 1 def kickstart_finder(self,distros_added): """ For all of the profiles in the config w/o a answerfile, use the given answerfile, or look at the kernel path, from that, see if we can guess the distro, and if we can, assign a answerfile if one is available for it. """ for profile in self.profiles: distro = self.distros.find(name=profile.get_conceptual_parent().name) if distro is None or not (distro in distros_added): continue kdir = os.path.dirname(distro.kernel) if self.kickstart_file == None: for rpm in self.get_release_files(): # FIXME : This redhat specific check should go into the importer.find_release_files method if rpm.find("notes") != -1: continue results = self.scan_pkg_filename(rpm) # FIXME : If os is not found on tree but set with CLI, no answerfile is searched if results is None: self.logger.warning("No version found on imported tree") continue (flavor, major, minor) = results version , ks = self.set_variance(flavor, major, minor, distro.arch) if self.os_version: if self.os_version != version: utils.die(self.logger,"CLI version differs from tree : %s vs. 
%s" % (self.os_version,version)) ds = self.get_datestamp() distro.set_comment(version) distro.set_os_version(version) if ds is not None: distro.set_tree_build_time(ds) profile.set_kickstart(ks) if flavor == "freebsd": self.logger.info("This is FreeBSD - adding boot files to fetchable files") # add fetchable files to distro distro.set_fetchable_files('boot/mfsroot.gz=$initrd boot/*=$webdir/ks_mirror/$distro/boot/') self.profiles.add(profile,save=True) self.configure_tree_location(distro) self.distros.add(distro,save=True) # re-save self.api.serialize() def configure_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... self.logger.warning("symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } # how we set the tree depends on whether an explicit network_root was specified tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree( distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree( distro, tree) def get_rootdir(self): return self.rootdir def get_pkgdir(self): if not self.pkgdir: return None return os.path.join(self.get_rootdir(),self.pkgdir) def set_install_tree(self, distro, url): distro.ks_meta["tree"] = url def learn_arch_from_tree(self): """ If a distribution is imported from DVD, there is a good chance the path doesn't contain the arch and we should add it back in so that it's part of the meaningful name ... so this code helps figure out the arch name. This is important for producing predictable distro names (and profile names) from differing import sources """ result = {} # FIXME : this is called only once, should not be a walk if self.get_rootdir(): os.path.walk(self.get_rootdir(), self.arch_walker, result) if result.pop("amd64",False): result["x86_64"] = 1 if result.pop("i686",False): result["i386"] = 1 return result.keys() def match_kernelarch_file(self, filename): """ Is the given filename a kernel filename? """ if not filename.endswith("rpm") and not filename.endswith("deb"): return False for match in ["kernel-header", "kernel-source", "kernel-smp", "kernel-largesmp", "kernel-hugemem", "linux-headers-", "kernel-devel", "kernel-"]: if filename.find(match) != -1: return True return False def scan_pkg_filename(self, filename): """ Determine what the distro is based on the release package filename. """ release_file = os.path.basename(filename) if release_file.lower().find("release") != -1: flavor = "freebsd" match = re.search(r'(\d).(\d)-RELEASE', release_file) if match: major = match.group(1) minor = match.group(2) else: # FIXME: what should we do if the re fails above? 
return None return (flavor, major, minor) def get_datestamp(self): """ Based on a FreeBSD tree find the creation timestamp """ pass def set_variance(self, flavor, major, minor, arch): """ find the profile answerfile and set the distro breed/os-version based on what we can find out from the filenames and then return the answerfile path to use. """ os_version = "%s%s.%s" % (flavor,major,minor) answerfile = "/var/lib/cobbler/kickstarts/default.ks" return os_version, answerfile # ========================================================================== def get_import_manager(config,logger): return ImportFreeBSDManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_import_redhat.py000066400000000000000000000775751227367477500230020ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import glob import traceback import string import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ def register(): """ The mandatory cobbler module registration hook. 
""" return "manage/import" class ImportRedhatManager: def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) # required function for import modules def what(self): return "import/redhat" # required function for import modules def check_for_signature(self,path,cli_breed): signatures = [ 'RedHat/RPMS', 'RedHat/rpms', 'RedHat/Base', 'Fedora/RPMS', 'Fedora/rpms', 'CentOS/RPMS', 'CentOS/rpms', 'CentOS', 'Packages', 'Fedora', 'Server', 'Client', 'SL', ] #self.logger.info("scanning %s for a redhat-based distro signature" % path) for signature in signatures: d = os.path.join(path,signature) if os.path.exists(d): self.logger.info("Found a redhat compatible signature: %s" % signature) return (True,signature) if cli_breed and cli_breed in self.get_valid_breeds(): self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path) return (True,None) return (False,None) # required function for import modules def run(self,pkgdir,name,path,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None): self.pkgdir = pkgdir self.name = name self.network_root = network_root self.kickstart_file = kickstart_file self.rsync_flags = rsync_flags self.arch = arch self.breed = breed self.os_version = os_version self.path = path self.rootdir = path # some fixups for the XMLRPC interface, which does not use "None" if self.arch == "": self.arch = None if self.name == "": self.name = None if self.kickstart_file == "": self.kickstart_file = None if self.os_version == "": self.os_version = None if self.rsync_flags == "": self.rsync_flags = None if self.network_root == "": self.network_root = None # If no breed was specified on the command line, set it to "redhat" for this module if self.breed == None: self.breed = "redhat" # import takes a --kickstart for forcing selection that can't be used in all circumstances if self.kickstart_file and not self.breed: utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected") if self.os_version and not self.breed: utils.die(self.logger,"OS version can only be specified when a specific breed is selected") if self.breed and self.breed.lower() not in self.get_valid_breeds(): utils.die(self.logger,"Supplied import breed is not supported by this module") # if --arch is supplied, make sure the user is not importing a path with a different # arch, which would just be silly. 
if self.arch: # validate it first if self.arch not in self.get_valid_arches(): utils.die(self.logger,"arch must be one of: %s" % string.join(self.get_valid_arches(),", ")) # now walk the filesystem looking for distributions that match certain patterns self.logger.info("adding distros") distros_added = [] # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST os.path.walk(self.path, self.distro_adder, distros_added) if len(distros_added) == 0: self.logger.warning("No distros imported, bailing out") return False # find out if we can auto-create any repository records from the install tree if self.network_root is None: self.logger.info("associating repos") # FIXME: this automagic is not possible (yet) without mirroring self.repo_finder(distros_added) # find the most appropriate answer files for each profile object self.logger.info("associating kickstarts") self.kickstart_finder(distros_added) # ensure bootloaders are present self.api.pxegen.copy_bootloaders() return True # required function for import modules def get_valid_arches(self): return ["i386", "ia64", "ppc", "ppc64", "s390", "s390x", "x86_64", "x86",] # required function for import modules def get_valid_breeds(self): return ["redhat",] # required function for import modules def get_valid_os_versions(self): return utils.get_valid_os_versions_for_breed("redhat") def get_valid_repo_breeds(self): return ["rsync", "rhn", "yum",] def get_release_files(self): data = glob.glob(os.path.join(self.get_pkgdir(), "*release-*")) data2 = [] for x in data: b = os.path.basename(x) if b.find("fedora") != -1 or \ b.find("redhat") != -1 or \ b.find("centos") != -1: data2.append(x) return data2 def repo_finder(self, distros_added): """ This routine looks through all distributions and tries to find any applicable repositories in those distributions for post-install usage. """ for distro in distros_added: self.logger.info("traversing distro %s" % distro.name) # FIXME : Shouldn't decide this the value of self.network_root ? if distro.kernel.find("ks_mirror") != -1: basepath = os.path.dirname(distro.kernel) top = self.get_rootdir() self.logger.info("descent into %s" % top) # FIXME : The location of repo definition is known from breed os.path.walk(top, self.repo_scanner, distro) else: self.logger.info("this distro isn't mirrored") def repo_scanner(self,distro,dirname,fnames): """ This is an os.path.walk routine that looks for potential yum repositories to be added to the configuration for post-install usage. """ matches = {} for x in fnames: if x == "base" or x == "repodata": self.logger.info("processing repo at : %s" % dirname) # only run the repo scanner on directories that contain a comps.xml gloob1 = glob.glob("%s/%s/*comps*.xml" % (dirname,x)) if len(gloob1) >= 1: if matches.has_key(dirname): self.logger.info("looks like we've already scanned here: %s" % dirname) continue self.logger.info("need to process repo/comps: %s" % dirname) self.process_comps_file(dirname, distro) matches[dirname] = 1 else: self.logger.info("directory %s is missing xml comps file, skipping" % dirname) continue def process_comps_file(self, comps_path, distro): """ When importing Fedora/EL certain parts of the install tree can also be used as yum repos containing packages that might not yet be available via updates in yum. This code identifies those areas. """ processed_repos = {} masterdir = "repodata" if not os.path.exists(os.path.join(comps_path, "repodata")): # older distros... masterdir = "base" # figure out what our comps file is ... 
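# A simplified stand-in (plain subprocess instead of utils.subprocess_call) for
# the command this method ends up running further down for each repo directory:
#   createrepo <flags> --groupfile <comps.xml> <repo-dir>
import subprocess
def _run_createrepo(flags, groupfile, repo_dir):
    cmd = "createrepo %s --groupfile %s %s" % (flags, groupfile, repo_dir)
    return subprocess.call(cmd, shell=True)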
self.logger.info("looking for %(p1)s/%(p2)s/*comps*.xml" % { "p1" : comps_path, "p2" : masterdir }) files = glob.glob("%s/%s/*comps*.xml" % (comps_path, masterdir)) if len(files) == 0: self.logger.info("no comps found here: %s" % os.path.join(comps_path, masterdir)) return # no comps xml file found # pull the filename from the longer part comps_file = files[0].split("/")[-1] try: # store the yum configs on the filesystem so we can use them later. # and configure them in the kickstart post, etc counter = len(distro.source_repos) # find path segment for yum_url (changing filesystem path to http:// trailing fragment) seg = comps_path.rfind("ks_mirror") urlseg = comps_path[seg+10:] # write a yum config file that shows how to use the repo. if counter == 0: dotrepo = "%s.repo" % distro.name else: dotrepo = "%s-%s.repo" % (distro.name, counter) fname = os.path.join(self.settings.webdir, "ks_mirror", "config", "%s-%s.repo" % (distro.name, counter)) repo_url = "http://@@http_server@@/cobbler/ks_mirror/config/%s-%s.repo" % (distro.name, counter) repo_url2 = "http://@@http_server@@/cobbler/ks_mirror/%s" % (urlseg) distro.source_repos.append([repo_url,repo_url2]) # NOTE: the following file is now a Cheetah template, so it can be remapped # during sync, that's why we have the @@http_server@@ left as templating magic. # repo_url2 is actually no longer used. (?) config_file = open(fname, "w+") config_file.write("[core-%s]\n" % counter) config_file.write("name=core-%s\n" % counter) config_file.write("baseurl=http://@@http_server@@/cobbler/ks_mirror/%s\n" % (urlseg)) config_file.write("enabled=1\n") config_file.write("gpgcheck=0\n") config_file.write("priority=$yum_distro_priority\n") config_file.close() # don't run creatrepo twice -- this can happen easily for Xen and PXE, when # they'll share same repo files. if not processed_repos.has_key(comps_path): utils.remove_yum_olddata(comps_path) #cmd = "createrepo --basedir / --groupfile %s %s" % (os.path.join(comps_path, masterdir, comps_file), comps_path) cmd = "createrepo %s --groupfile %s %s" % (self.settings.createrepo_flags,os.path.join(comps_path, masterdir, comps_file), comps_path) utils.subprocess_call(self.logger, cmd, shell=True) processed_repos[comps_path] = 1 # for older distros, if we have a "base" dir parallel with "repodata", we need to copy comps.xml up one... p1 = os.path.join(comps_path, "repodata", "comps.xml") p2 = os.path.join(comps_path, "base", "comps.xml") if os.path.exists(p1) and os.path.exists(p2): shutil.copyfile(p1,p2) except: self.logger.error("error launching createrepo (not installed?), ignoring") utils.log_exc(self.logger) def distro_adder(self,distros_added,dirname,fnames): """ This is an os.path.walk routine that finds distributions in the directory to be scanned and then creates them. 
""" # FIXME: If there are more than one kernel or initrd image on the same directory, # results are unpredictable initrd = None kernel = None # make sure we don't mismatch PAE and non-PAE types pae_initrd = None pae_kernel = None for x in fnames: adtls = [] fullname = os.path.join(dirname,x) if os.path.islink(fullname) and os.path.isdir(fullname): if fullname.startswith(self.path): # Prevent infinite loop with Sci Linux 5 self.logger.warning("avoiding symlink loop") continue self.logger.info("following symlink: %s" % fullname) os.path.walk(fullname, self.distro_adder, distros_added) if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") ) and x != "initrd.size": if x.find("PAE") == -1: initrd = os.path.join(dirname,x) else: pae_initrd = os.path.join(dirname, x) if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") ) and x.find("initrd") == -1: if x.find("PAE") == -1: kernel = os.path.join(dirname,x) else: pae_kernel = os.path.join(dirname, x) # if we've collected a matching kernel and initrd pair, turn the in and add them to the list if initrd is not None and kernel is not None and dirname.find("isolinux") == -1: adtls.append(self.add_entry(dirname,kernel,initrd)) kernel = None initrd = None elif pae_initrd is not None and pae_kernel is not None and dirname.find("isolinux") == -1: adtls.append(self.add_entry(dirname,pae_kernel,pae_initrd)) pae_kernel = None pae_initrd = None for adtl in adtls: distros_added.extend(adtl) def add_entry(self,dirname,kernel,initrd): """ When we find a directory with a valid kernel/initrd in it, create the distribution objects as appropriate and save them. This includes creating xen and rescue distros/profiles if possible. """ proposed_name = self.get_proposed_name(dirname,kernel) proposed_arch = self.get_proposed_arch(dirname) if self.arch and proposed_arch and self.arch != proposed_arch: utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch)) archs = self.learn_arch_from_tree() if not archs: if self.arch: archs.append( self.arch ) else: if self.arch and self.arch not in archs: utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir())) if proposed_arch: if archs and proposed_arch not in archs: self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir())) return archs = [ proposed_arch ] if len(archs)>1: #if self.breed in [ "redhat" ]: # self.logger.warning("directory %s holds multiple arches : %s" % (dirname, archs)) # return self.logger.warning("- Warning : Multiple archs found : %s" % (archs)) distros_added = [] for pxe_arch in archs: name = proposed_name + "-" + pxe_arch existing_distro = self.distros.find(name=name) if existing_distro is not None: self.logger.warning("skipping import, as distro name already exists: %s" % name) continue else: self.logger.info("creating new distro: %s" % name) distro = self.config.new_distro() if name.find("-autoboot") != -1: # this is an artifact of some EL-3 imports continue distro.set_name(name) distro.set_kernel(kernel) distro.set_initrd(initrd) distro.set_arch(pxe_arch) distro.set_breed(self.breed) # If a version was supplied on command line, we set it now if self.os_version: distro.set_os_version(self.os_version) self.distros.add(distro,save=True) distros_added.append(distro) existing_profile = self.profiles.find(name=name) # see if the profile name is already used, if so, skip it and # do not modify the existing profile if 
existing_profile is None: self.logger.info("creating new profile: %s" % name) #FIXME: The created profile holds a default kickstart, and should be breed specific profile = self.config.new_profile() else: self.logger.info("skipping existing profile, name already exists: %s" % name) continue # save our minimal profile which just points to the distribution and a good # default answer file profile.set_name(name) profile.set_distro(name) profile.set_kickstart(self.kickstart_file) # depending on the name of the profile we can define a good virt-type # for usage with koan if name.find("-xen") != -1: profile.set_virt_type("xenpv") elif name.find("vmware") != -1: profile.set_virt_type("vmware") else: profile.set_virt_type("qemu") # save our new profile to the collection self.profiles.add(profile,save=True) # Create a rescue image as well, if this is not a xen distro # but only for red hat profiles # this code disabled as it seems to be adding "-rescue" to # distros that are /not/ rescue related, which is wrong. # left as a FIXME for those who find this feature interesting. #if name.find("-xen") == -1 and self.breed == "redhat": # rescue_name = 'rescue-' + name # existing_profile = self.profiles.find(name=rescue_name) # # if existing_profile is None: # self.logger.info("creating new profile: %s" % rescue_name) # profile = self.config.new_profile() # else: # continue # # profile.set_name(rescue_name) # profile.set_distro(name) # profile.set_virt_type("qemu") # profile.kernel_options['rescue'] = None # profile.kickstart = '/var/lib/cobbler/kickstarts/pxerescue.ks' # # self.profiles.add(profile,save=True) return distros_added def get_proposed_name(self,dirname,kernel=None): """ Given a directory name where we have a kernel/initrd pair, try to autoname the distribution (and profile) object based on the contents of that path """ if self.network_root is not None: ##name = self.mirror_name #+ "-".join(utils.path_tail(os.path.dirname(self.path),dirname).split("/")) name = self.name else: # remove the part that says /var/www/cobbler/ks_mirror/name name = "-".join(dirname.split("/")[5:]) if kernel is not None and kernel.find("PAE") != -1: name = name + "-PAE" # These are all Ubuntu's doing, the netboot images are buried pretty # deep. ;-) -JC name = name.replace("-netboot","") name = name.replace("-ubuntu-installer","") name = name.replace("-amd64","") name = name.replace("-i386","") # we know that some kernel paths should not be in the name name = name.replace("-images","") name = name.replace("-pxeboot","") name = name.replace("-install","") name = name.replace("-isolinux","") # some paths above the media root may have extra path segments we want # to clean up name = name.replace("-os","") name = name.replace("-tree","") name = name.replace("var-www-cobbler-", "") name = name.replace("ks_mirror-","") name = name.replace("--","-") # remove any architecture name related string, as real arch will be appended later name = name.replace("chrp","ppc64") for separator in [ '-' , '_' , '.' ] : for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]: name = name.replace("%s%s" % ( separator , arch ),"") return name def get_proposed_arch(self,dirname): """ Given an directory name, can we infer an architecture from a path segment? 
""" if dirname.find("x86_64") != -1 or dirname.find("amd") != -1: return "x86_64" if dirname.find("ia64") != -1: return "ia64" if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1: return "i386" if dirname.find("s390x") != -1: return "s390x" if dirname.find("s390") != -1: return "s390" if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1: return "ppc64" if dirname.find("ppc32") != -1: return "ppc" if dirname.find("ppc") != -1: return "ppc" return None def arch_walker(self,foo,dirname,fnames): """ See docs on learn_arch_from_tree. The TRY_LIST is used to speed up search, and should be dropped for default importer Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem This method is useful to get the archs, but also to package type and a raw guess of the breed """ # try to find a kernel header RPM and then look at it's arch. for x in fnames: if self.match_kernelarch_file(x): for arch in self.get_valid_arches(): if x.find(arch) != -1: foo[arch] = 1 for arch in [ "i686" , "amd64" ]: if x.find(arch) != -1: foo[arch] = 1 def kickstart_finder(self,distros_added): """ For all of the profiles in the config w/o a kickstart, use the given kickstart file, or look at the kernel path, from that, see if we can guess the distro, and if we can, assign a kickstart if one is available for it. """ for profile in self.profiles: distro = self.distros.find(name=profile.get_conceptual_parent().name) if distro is None or not (distro in distros_added): continue kdir = os.path.dirname(distro.kernel) if self.kickstart_file == None: for rpm in self.get_release_files(): # FIXME : This redhat specific check should go into the importer.find_release_files method if rpm.find("notes") != -1: continue results = self.scan_pkg_filename(rpm) # FIXME : If os is not found on tree but set with CLI, no kickstart is searched if results is None: self.logger.warning("No version found on imported tree") continue (flavor, major, minor) = results version , ks = self.set_variance(flavor, major, minor, distro.arch) if self.os_version: if self.os_version != version: utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,version)) ds = self.get_datestamp() distro.set_comment("%s.%s" % (version, int(minor))) distro.set_os_version(version) if ds is not None: distro.set_tree_build_time(ds) profile.set_kickstart(ks) self.profiles.add(profile,save=True) self.configure_tree_location(distro) self.distros.add(distro,save=True) # re-save self.api.serialize() def configure_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... 
self.logger.warning("symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } # how we set the tree depends on whether an explicit network_root was specified tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree( distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree( distro, tree) def get_rootdir(self): return self.rootdir def get_pkgdir(self): if not self.pkgdir: return None return os.path.join(self.get_rootdir(),self.pkgdir) def set_install_tree(self, distro, url): distro.ks_meta["tree"] = url def learn_arch_from_tree(self): """ If a distribution is imported from DVD, there is a good chance the path doesn't contain the arch and we should add it back in so that it's part of the meaningful name ... so this code helps figure out the arch name. This is important for producing predictable distro names (and profile names) from differing import sources """ result = {} # FIXME : this is called only once, should not be a walk if self.get_pkgdir(): os.path.walk(self.get_pkgdir(), self.arch_walker, result) if result.pop("amd64",False): result["x86_64"] = 1 if result.pop("i686",False): result["i386"] = 1 if result.pop("x86",False): result["i386"] = 1 return result.keys() def match_kernelarch_file(self, filename): """ Is the given filename a kernel filename? """ if not filename.endswith("rpm") and not filename.endswith("deb"): return False for match in ["kernel-header", "kernel-source", "kernel-smp", "kernel-largesmp", "kernel-hugemem", "linux-headers-", "kernel-devel", "kernel-"]: if filename.find(match) != -1: return True return False def scan_pkg_filename(self, rpm): """ Determine what the distro is based on the release package filename. """ rpm = os.path.basename(rpm) # if it looks like a RHEL RPM we'll cheat. # it may be slightly wrong, but it will be close enough # for RHEL5 we can get it exactly. for x in [ "4AS", "4ES", "4WS", "4common", "4Desktop" ]: if rpm.find(x) != -1: return ("redhat", 4, 0) for x in [ "3AS", "3ES", "3WS", "3Desktop" ]: if rpm.find(x) != -1: return ("redhat", 3, 0) for x in [ "2AS", "2ES", "2WS", "2Desktop" ]: if rpm.find(x) != -1: return ("redhat", 2, 0) # now get the flavor: flavor = "redhat" if rpm.lower().find("fedora") != -1: flavor = "fedora" if rpm.lower().find("centos") != -1: flavor = "centos" # get all the tokens and try to guess a version accum = [] tokens = rpm.split(".") for t in tokens: tokens2 = t.split("-") for t2 in tokens2: try: float(t2) accum.append(t2) except: pass major = float(accum[0]) minor = float(accum[1]) return (flavor, major, minor) def get_datestamp(self): """ Based on a RedHat tree find the creation timestamp """ base = self.get_rootdir() if os.path.exists("%s/.discinfo" % base): discinfo = open("%s/.discinfo" % base, "r") datestamp = discinfo.read().split("\n")[0] discinfo.close() else: return 0 return float(datestamp) def set_variance(self, flavor, major, minor, arch): """ find the profile kickstart and set the distro breed/os-version based on what we can find out from the rpm filenames and then return the kickstart path to use. 
""" if flavor == "fedora": try: os_version = "fedora%s" % int(major) except: os_version = "other" if flavor == "redhat" or flavor == "centos": if major <= 2: # rhel2.1 is the only rhel2 os_version = "rhel2.1" else: try: # must use libvirt version os_version = "rhel%s" % (int(major)) except: os_version = "other" kickbase = "/var/lib/cobbler/kickstarts" # Look for ARCH/OS_VERSION.MINOR kickstart first # ARCH/OS_VERSION next # OS_VERSION next # OS_VERSION.MINOR next # ARCH/default.ks next # FLAVOR.ks next kickstarts = [ "%s/%s/%s.%i.ks" % (kickbase,arch,os_version,int(minor)), "%s/%s/%s.ks" % (kickbase,arch,os_version), "%s/%s.%i.ks" % (kickbase,os_version,int(minor)), "%s/%s.ks" % (kickbase,os_version), "%s/%s/default.ks" % (kickbase,arch), "%s/%s.ks" % (kickbase,flavor), ] for kickstart in kickstarts: if os.path.exists(kickstart): return os_version, kickstart major = int(major) if flavor == "fedora": if major >= 8: return os_version , "/var/lib/cobbler/kickstarts/sample_end.ks" if major >= 6: return os_version , "/var/lib/cobbler/kickstarts/sample.ks" if flavor == "redhat" or flavor == "centos": if major >= 5: return os_version , "/var/lib/cobbler/kickstarts/sample.ks" return os_version , "/var/lib/cobbler/kickstarts/legacy.ks" self.logger.warning("could not use distro specifics, using rhel 4 compatible kickstart") return None , "/var/lib/cobbler/kickstarts/legacy.ks" # ========================================================================== def get_import_manager(config,logger): return ImportRedhatManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_import_signatures.py000066400000000000000000000677361227367477500237150ustar00rootroot00000000000000""" Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import glob import re import simplejson import traceback import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system # Import aptsources module if available to obtain repo mirror. try: from aptsources import distro from aptsources import sourceslist apt_available = True except: apt_available = False from utils import _ def register(): """ The mandatory cobbler module registration hook. 
""" return "manage/import" class ImportSignatureManager: def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) self.signature = None self.found_repos = {} # required function for import modules def what(self): return "import/signatures" def get_file_lines(self,filename): """ Get lines from a file, which may or may not be compressed """ lines = [] ftype = utils.subprocess_get(self.logger, "/usr/bin/file %s" % filename) if ftype.find("gzip") != -1: try: import gzip f = gzip.open(filename,'r') lines = f.readlines() f.close() except: pass elif ftype.find("text") != -1: f = open(filename,'r') lines = f.readlines() f.close() return lines # required function for import modules def run(self,path,name,network_root=None,kickstart_file=None,arch=None,breed=None,os_version=None): """ path: the directory we are scanning for files name: the base name of the distro network_root: the remote path (nfs/http/ftp) for the distro files kickstart_file: user-specified response file, which will override the default arch: user-specified architecture breed: user-specified breed os_version: user-specified OS version """ self.name = name self.network_root = network_root self.kickstart_file = kickstart_file self.arch = arch self.breed = breed self.os_version = os_version self.path = path self.rootdir = path self.pkgdir = path # some fixups for the XMLRPC interface, which does not use "None" if self.arch == "": self.arch = None if self.name == "": self.name = None if self.kickstart_file == "": self.kickstart_file = None if self.os_version == "": self.os_version = None if self.network_root == "": self.network_root = None if self.os_version and not self.breed: utils.die(self.logger,"OS version can only be specified when a specific breed is selected") self.signature = self.scan_signatures() if not self.signature: self.logger.error("No signature matched in %s" % path) return False # now walk the filesystem looking for distributions that match certain patterns self.logger.info("Adding distros from path %s:"%self.path) distros_added = [] os.path.walk(self.path, self.distro_adder, distros_added) if len(distros_added) == 0: self.logger.warning("No distros imported, bailing out") return False # find out if we can auto-create any repository records from the install tree if self.network_root is None: self.logger.info("associating repos") # FIXME: this automagic is not possible (yet) without mirroring self.repo_finder(distros_added) return True def scan_signatures(self): """ loop through the signatures, looking for a match for both the signature directory and the version file """ sigdata = self.api.get_signatures() #self.logger.debug("signature cache: %s" % str(sigdata)) for breed in sigdata["breeds"].keys(): if self.breed and self.breed != breed: continue for version in sigdata["breeds"][breed].keys(): if self.os_version and self.os_version != version: continue for sig in sigdata["breeds"][breed][version]["signatures"]: pkgdir = os.path.join(self.path,sig) if os.path.exists(pkgdir): self.logger.debug("Found a candidate signature: breed=%s, version=%s" % (breed,version)) f_re = re.compile(sigdata["breeds"][breed][version]["version_file"]) for (root,subdir,fnames) in os.walk(self.path): for fname in fnames+subdir: if f_re.match(fname): # if the version file regex exists, we 
use it # to scan the contents of the target version file # to ensure it's the right version if sigdata["breeds"][breed][version]["version_file_regex"]: vf_re = re.compile(sigdata["breeds"][breed][version]["version_file_regex"]) vf_lines = self.get_file_lines(os.path.join(root,fname)) for line in vf_lines: if vf_re.match(line): break else: continue self.logger.debug("Found a matching signature: breed=%s, version=%s" % (breed,version)) if not self.breed: self.breed = breed if not self.os_version: self.os_version = version if not self.kickstart_file: self.kickstart_file = sigdata["breeds"][breed][version]["default_kickstart"] self.pkgdir = pkgdir return sigdata["breeds"][breed][version] return None # required function for import modules def get_valid_arches(self): if self.signature: return self.signature["supported_arches"] return [] def get_valid_repo_breeds(self): if self.signature: return self.signature["supported_repo_breeds"] return [] def distro_adder(self,distros_added,dirname,fnames): """ This is an os.path.walk routine that finds distributions in the directory to be scanned and then creates them. """ re_krn = re.compile(self.signature["kernel_file"]) re_img = re.compile(self.signature["initrd_file"]) # make sure we don't mismatch PAE and non-PAE types initrd = None kernel = None pae_initrd = None pae_kernel = None for x in fnames: adtls = [] # most of the time we just want to ignore isolinux # directories, unless this is one of the oddball # distros where we do want it if dirname.find("isolinux") != -1 and not self.signature["isolinux_ok"]: continue fullname = os.path.join(dirname,x) if os.path.islink(fullname) and os.path.isdir(fullname): if fullname.startswith(self.path): # Prevent infinite loop with Sci Linux 5 #self.logger.warning("avoiding symlink loop") continue self.logger.info("following symlink: %s" % fullname) os.path.walk(fullname, self.distro_adder, distros_added) if re_img.match(x): if x.find("PAE") == -1: initrd = os.path.join(dirname,x) else: pae_initrd = os.path.join(dirname, x) if re_krn.match(x): if x.find("PAE") == -1: kernel = os.path.join(dirname,x) else: pae_kernel = os.path.join(dirname, x) # if we've collected a matching kernel and initrd pair, turn the in and add them to the list if initrd is not None and kernel is not None: adtls.append(self.add_entry(dirname,kernel,initrd)) kernel = None initrd = None elif pae_initrd is not None and pae_kernel is not None: adtls.append(self.add_entry(dirname,pae_kernel,pae_initrd)) pae_kernel = None pae_initrd = None for adtl in adtls: distros_added.extend(adtl) def add_entry(self,dirname,kernel,initrd): """ When we find a directory with a valid kernel/initrd in it, create the distribution objects as appropriate and save them. This includes creating xen and rescue distros/profiles if possible. 
""" # build a proposed name based on the directory structure proposed_name = self.get_proposed_name(dirname,kernel) # build a list of arches found in the packages directory archs = self.learn_arch_from_tree() if not archs and self.arch: archs.append( self.arch ) else: if self.arch and self.arch not in archs: utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.path)) if len(archs) == 0: self.logger.error("No arch could be detected in %s, and none was specified via the --arch option" % dirname) return [] elif len(archs) > 1: self.logger.warning("- Warning : Multiple archs found : %s" % (archs)) distros_added = [] for pxe_arch in archs: name = proposed_name + "-" + pxe_arch existing_distro = self.distros.find(name=name) if existing_distro is not None: self.logger.warning("skipping import, as distro name already exists: %s" % name) continue else: self.logger.info("creating new distro: %s" % name) distro = self.config.new_distro() if name.find("-autoboot") != -1: # this is an artifact of some EL-3 imports continue distro.set_name(name) distro.set_kernel(kernel) distro.set_initrd(initrd) distro.set_arch(pxe_arch) distro.set_breed(self.breed) distro.set_os_version(self.os_version) distro.set_kernel_options(self.signature.get("kernel_options","")) distro.set_kernel_options_post(self.signature.get("kernel_options_post","")) distro.set_template_files(self.signature.get("template_files","")) boot_files = '' for boot_file in self.signature["boot_files"]: boot_files += '$local_img_path/%s=%s/%s ' % (boot_file,self.path,boot_file) distro.set_boot_files(boot_files.strip()) self.configure_tree_location(distro) self.distros.add(distro,save=True) distros_added.append(distro) # see if the profile name is already used, if so, skip it and # do not modify the existing profile existing_profile = self.profiles.find(name=name) if existing_profile is None: self.logger.info("creating new profile: %s" % name) profile = self.config.new_profile() else: self.logger.info("skipping existing profile, name already exists: %s" % name) continue profile.set_name(name) profile.set_distro(name) profile.set_kickstart(self.kickstart_file) # depending on the name of the profile we can # define a good virt-type for usage with koan if name.find("-xen") != -1: profile.set_virt_type("xenpv") elif name.find("vmware") != -1: profile.set_virt_type("vmware") else: profile.set_virt_type("kvm") self.profiles.add(profile,save=True) return distros_added def learn_arch_from_tree(self): """ If a distribution is imported from DVD, there is a good chance the path doesn't contain the arch and we should add it back in so that it's part of the meaningful name ... so this code helps figure out the arch name. This is important for producing predictable distro names (and profile names) from differing import sources """ result = {} # FIXME : this is called only once, should not be a walk os.path.walk(self.path, self.arch_walker, result) if result.pop("amd64",False): result["x86_64"] = 1 if result.pop("i686",False): result["i386"] = 1 if result.pop("i586",False): result["i386"] = 1 if result.pop("x86",False): result["i386"] = 1 return result.keys() def arch_walker(self,foo,dirname,fnames): """ Function for recursively searching through a directory for a kernel file matching a given architecture, called by learn_arch_from_tree() """ re_krn = re.compile(self.signature["kernel_arch"]) # try to find a kernel header RPM and then look at it's arch. 
for x in fnames: if re_krn.match(x): if self.signature["kernel_arch_regex"]: re_krn2 = re.compile(self.signature["kernel_arch_regex"]) krn_lines = self.get_file_lines(os.path.join(dirname,x)) for line in krn_lines: m = re_krn2.match(line) if m: for group in m.groups(): group = group.lower() if group in self.get_valid_arches(): foo[group] = 1 else: for arch in self.get_valid_arches(): if x.find(arch) != -1: foo[arch] = 1 for arch in [ "i686" , "amd64" ]: if x.find(arch) != -1: foo[arch] = 1 def get_proposed_name(self,dirname,kernel=None): """ Given a directory name where we have a kernel/initrd pair, try to autoname the distribution (and profile) object based on the contents of that path """ if self.network_root is not None: name = self.name else: # remove the part that says /var/www/cobbler/ks_mirror/name name = "-".join(dirname.split("/")[5:]) if kernel is not None: if kernel.find("PAE") != -1 and name.find("PAE") == -1: name = name + "-PAE" if kernel.find("xen") != -1 and name.find("xen") == -1: name = name + "-xen" # Clear out some cruft from the proposed name name = name.replace("--","-") for x in ("-netboot","-ubuntu-installer","-amd64","-i386", \ "-images","-pxeboot","-install","-isolinux","-boot", "-suseboot", \ "-loader","-os","-tree","var-www-cobbler-","ks_mirror-"): name = name.replace(x,"") # remove any architecture name related string, as real arch will be appended later name = name.replace("chrp","ppc64") for separator in [ '-' , '_' , '.' ] : for arch in ["i386","x86_64","ia64","ppc64","ppc32","ppc","x86","s390x","s390","386","amd"]: name = name.replace("%s%s" % (separator,arch),"") return name def configure_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.rootdir # how we set the tree depends on whether an explicit network_root was specified if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: self.logger.info("trying symlink: %s -> %s" % (str(base),str(dest_link))) os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... self.logger.warning("symlink creation failed: %(base)s, %(dest)s" % { "base" : base, "dest" : dest_link }) tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree(distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree(distro, tree) def set_install_tree(self, distro, url): """ Simple helper function to set the tree ksmeta """ distro.ks_meta["tree"] = url # ========================================================================== # Repo Functions def repo_finder(self, distros_added): """ This routine looks through all distributions and tries to find any applicable repositories in those distributions for post-install usage. 
""" for repo_breed in self.get_valid_repo_breeds(): self.logger.info("checking for %s repo(s)" % repo_breed) repo_adder = None if repo_breed == "yum": repo_adder = self.yum_repo_adder elif repo_breed == "rhn": repo_adder = self.rhn_repo_adder elif repo_breed == "rsync": repo_adder = self.rsync_repo_adder elif repo_breed == "apt": repo_adder = self.apt_repo_adder else: self.logger.warning("skipping unknown/unsupported repo breed: %s" % repo_breed) continue for distro in distros_added: if distro.kernel.find("ks_mirror") != -1: repo_adder(distro) self.distros.add(distro, save=True) else: self.logger.info("skipping distro %s since it isn't mirrored locally" % distro.name) # ========================================================================== # yum-specific def yum_repo_adder(self,distro): """ For yum, we recursively scan the rootdir for repos to add """ self.logger.info("starting descent into %s for %s" % (self.rootdir,distro.name)) os.path.walk(self.rootdir, self.yum_repo_scanner, distro) def yum_repo_scanner(self,distro,dirname,fnames): """ This is an os.path.walk routine that looks for potential yum repositories to be added to the configuration for post-install usage. """ matches = {} for x in fnames: if x == "base" or x == "repodata": self.logger.info("processing repo at : %s" % dirname) # only run the repo scanner on directories that contain a comps.xml gloob1 = glob.glob("%s/%s/*comps*.xml" % (dirname,x)) if len(gloob1) >= 1: if matches.has_key(dirname): self.logger.info("looks like we've already scanned here: %s" % dirname) continue self.logger.info("need to process repo/comps: %s" % dirname) self.yum_process_comps_file(dirname, distro) matches[dirname] = 1 else: self.logger.info("directory %s is missing xml comps file, skipping" % dirname) continue def yum_process_comps_file(self,comps_path,distro): """ When importing Fedora/EL certain parts of the install tree can also be used as yum repos containing packages that might not yet be available via updates in yum. This code identifies those areas. Existing repodata will be used as-is, but repodate is created for earlier, non-yum based, installers. """ if os.path.exists(os.path.join(comps_path, "repodata")): keeprepodata = True masterdir = "repodata" else: # older distros... masterdir = "base" keeprepodata = False # figure out what our comps file is ... self.logger.info("looking for %(p1)s/%(p2)s/*comps*.xml" % { "p1" : comps_path, "p2" : masterdir }) files = glob.glob("%s/%s/*comps*.xml" % (comps_path, masterdir)) if len(files) == 0: self.logger.info("no comps found here: %s" % os.path.join(comps_path, masterdir)) return # no comps xml file found # pull the filename from the longer part comps_file = files[0].split("/")[-1] try: # store the yum configs on the filesystem so we can use them later. # and configure them in the kickstart post, etc counter = len(distro.source_repos) # find path segment for yum_url (changing filesystem path to http:// trailing fragment) seg = comps_path.rfind("ks_mirror") urlseg = comps_path[seg+10:] # write a yum config file that shows how to use the repo. 
if counter == 0: dotrepo = "%s.repo" % distro.name else: dotrepo = "%s-%s.repo" % (distro.name, counter) fname = os.path.join(self.settings.webdir, "ks_mirror", "config", "%s-%s.repo" % (distro.name, counter)) repo_url = "http://@@http_server@@/cobbler/ks_mirror/config/%s-%s.repo" % (distro.name, counter) repo_url2 = "http://@@http_server@@/cobbler/ks_mirror/%s" % (urlseg) distro.source_repos.append([repo_url,repo_url2]) # NOTE: the following file is now a Cheetah template, so it can be remapped # during sync, that's why we have the @@http_server@@ left as templating magic. # repo_url2 is actually no longer used. (?) config_file = open(fname, "w+") config_file.write("[core-%s]\n" % counter) config_file.write("name=core-%s\n" % counter) config_file.write("baseurl=http://@@http_server@@/cobbler/ks_mirror/%s\n" % (urlseg)) config_file.write("enabled=1\n") config_file.write("gpgcheck=0\n") config_file.write("priority=$yum_distro_priority\n") config_file.close() # don't run creatrepo twice -- this can happen easily for Xen and PXE, when # they'll share same repo files. if keeprepodata: self.logger.info("Keeping repodata as-is :%s/repodata" % comps_path) self.found_repos[comps_path] = 1 elif not self.found_repos.has_key(comps_path): utils.remove_yum_olddata(comps_path) cmd = "createrepo %s --groupfile %s %s" % (self.settings.createrepo_flags,os.path.join(comps_path, masterdir, comps_file), comps_path) utils.subprocess_call(self.logger, cmd, shell=True) self.found_repos[comps_path] = 1 # for older distros, if we have a "base" dir parallel with "repodata", we need to copy comps.xml up one... p1 = os.path.join(comps_path, "repodata", "comps.xml") p2 = os.path.join(comps_path, "base", "comps.xml") if os.path.exists(p1) and os.path.exists(p2): shutil.copyfile(p1,p2) except: self.logger.error("error launching createrepo (not installed?), ignoring") utils.log_exc(self.logger) # ========================================================================== # apt-specific def apt_repo_adder(self,distro): self.logger.info("adding apt repo for %s" % distro.name) # Obtain repo mirror from APT if available mirror = False if apt_available: # Example returned URL: http://us.archive.ubuntu.com/ubuntu mirror = self.get_repo_mirror_from_apt() if not mirror: mirror = "http://archive.ubuntu.com/ubuntu" repo = item_repo.Repo(self.config) repo.set_breed("apt") repo.set_arch(distro.arch) repo.set_keep_updated(True) repo.set_apt_components("main universe") # TODO: make a setting? repo.set_apt_dists("%s %s-updates %s-security" % ((distro.os_version,)*3)) repo.yumopts["--ignore-release-gpg"] = None repo.yumopts["--verbose"] = None repo.set_name(distro.name) repo.set_os_version(distro.os_version) if distro.breed == "ubuntu": repo.set_mirror(mirror) else: # NOTE : The location of the mirror should come from timezone repo.set_mirror( "http://ftp.%s.debian.org/debian/dists/%s" % ( 'us' , distro.os_version ) ) self.logger.info("Added repos for %s" % distro.name) repos = self.config.repos() repos.add(repo,save=True) #FIXME: # Add the found/generated repos to the profiles # that were created during the import process def get_repo_mirror_from_apt(self): """ This tries to determine the apt mirror/archive to use (when processing repos) if the host machine is Debian or Ubuntu. 
""" try: sources = sourceslist.SourcesList() release = distro.get_distro() release.get_sources(sources) mirrors = release.get_server_list() for mirror in mirrors: if mirror[2] == True: mirror = mirror[1] break except: return False return mirror # ========================================================================== # rhn-specific def rhn_repo_adder(self,distro): """ not currently used """ return # ========================================================================== # rsync-specific def rsync_repo_adder(self,distro): """ not currently used """ return # ========================================================================== def get_import_manager(config,logger): return ImportSignatureManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_import_suse.py000066400000000000000000000621761227367477500225010ustar00rootroot00000000000000""" -- -- Copyright (c) 2011 Novell -- Uwe Gansert -- -- This software is licensed to you under the GNU General Public License, -- version 2 (GPLv2). There is NO WARRANTY for this software, express or -- implied, including the implied warranties of MERCHANTABILITY or FITNESS -- FOR A PARTICULAR PURPOSE. You should have received a copy of GPLv2 -- along with this software; if not, see -- http://www.gnu.org/licenses/old-licenses/gpl-2.0.txt. -- -- """ import os import os.path import glob import traceback import errno import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ def register(): """ The mandatory cobbler module registration hook. """ return "manage/import" class ImportSuseManager: def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) # required function for import modules def what(self): return "import/suse" # required function for import modules def check_for_signature(self,path,cli_breed): signatures = [ 'suse' ] #self.logger.info("scanning %s for a redhat-based distro signature" % path) for signature in signatures: d = os.path.join(path,signature) if os.path.exists(d): self.logger.info("Found a SUSE compatible signature: %s" % signature) return (True,signature) if cli_breed and cli_breed in self.get_valid_breeds(): self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path) return (True,None) return (False,None) # required function for import modules def run(self,pkgdir,name,path,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None): self.pkgdir = pkgdir self.network_root = network_root self.kickstart_file = kickstart_file self.rsync_flags = rsync_flags self.arch = arch self.breed = breed self.os_version = os_version self.name = name self.path = path self.rootdir = path # some fixups for the XMLRPC interface, which does not use "None" if self.arch == "": self.arch = None if self.kickstart_file == "": self.kickstart_file = None if self.os_version == "": self.os_version = None if self.rsync_flags == "": self.rsync_flags = None if self.network_root == "": self.network_root = None # If no breed was specified on the command line, set it to "suse" for this module if self.breed == None: self.breed = "suse" # import takes a --kickstart for forcing selection that can't be used in all circumstances if 
self.kickstart_file and not self.breed: utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected") if self.os_version and not self.breed: utils.die(self.logger,"OS version can only be specified when a specific breed is selected") if self.breed and self.breed.lower() not in self.get_valid_breeds(): utils.die(self.logger,"Supplied import breed is not supported by this module") # now walk the filesystem looking for distributions that match certain patterns self.logger.info("adding distros") distros_added = [] # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST os.path.walk(self.path, self.distro_adder, distros_added) if len(distros_added) == 0: self.logger.warning("No distros imported, bailing out") return False # find out if we can auto-create any repository records from the install tree if self.network_root is None: self.logger.info("associating repos") # FIXME: this automagic is not possible (yet) without mirroring self.repo_finder(distros_added) # find the most appropriate answer files for each profile object self.logger.info("associating kickstarts") self.kickstart_finder(distros_added) # ensure bootloaders are present self.api.pxegen.copy_bootloaders() return True # required function for import modules def get_valid_arches(self): return ["i386", "ia64", "ppc", "ppc64", "s390", "s390x", "x86_64", "x86",] # required function for import modules def get_valid_breeds(self): return ["suse",] # required function for import modules def get_valid_os_versions(self): return [] def get_valid_repo_breeds(self): return ["yast", "rsync", "yum"] def get_release_files(self): data = glob.glob(os.path.join(self.get_pkgdir(), "*release-*")) data2 = [] for x in data: b = os.path.basename(x) # FIXME # if b.find("fedora") != -1 or \ # b.find("redhat") != -1 or \ # b.find("centos") != -1: # data2.append(x) return data2 def get_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... self.logger.warning("symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } # how we set the tree depends on whether an explicit network_root was specified tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree(distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree(distro, tree) return def repo_finder(self, distros_added): """ This routine looks through all distributions and tries to find any applicable repositories in those distributions for post-install usage. 
""" for distro in distros_added: self.logger.info("traversing distro %s" % distro.name) # FIXME : Shouldn't decide this the value of self.network_root ? if distro.kernel.find("ks_mirror") != -1: basepath = os.path.dirname(distro.kernel) top = self.get_rootdir() self.logger.info("descent into %s" % top) # FIXME : The location of repo definition is known from breed os.path.walk(top, self.repo_scanner, distro) else: self.logger.info("this distro isn't mirrored") def repo_scanner(self,distro,dirname,fnames): """ This is an os.path.walk routine that looks for potential yum repositories to be added to the configuration for post-install usage. """ matches = {} for x in fnames: if x == "base" or x == "repodata": self.logger.info("processing repo at : %s" % dirname) # only run the repo scanner on directories that contain a comps.xml gloob1 = glob.glob("%s/%s/*comps*.xml" % (dirname,x)) if len(gloob1) >= 1: if matches.has_key(dirname): self.logger.info("looks like we've already scanned here: %s" % dirname) continue self.logger.info("need to process repo/comps: %s" % dirname) matches[dirname] = 1 else: self.logger.info("directory %s is missing xml comps file, skipping" % dirname) continue def distro_adder(self,distros_added,dirname,fnames): """ This is an os.path.walk routine that finds distributions in the directory to be scanned and then creates them. """ # FIXME: If there are more than one kernel or initrd image on the same directory, # results are unpredictable initrd = None kernel = None # make sure we don't mismatch PAE and non-PAE types pae_initrd = None pae_kernel = None for x in fnames: adtls = [] fullname = os.path.join(dirname,x) if os.path.islink(fullname) and os.path.isdir(fullname): if fullname.startswith(self.path): # Prevent infinite loop with Sci Linux 5 self.logger.warning("avoiding symlink loop") continue self.logger.info("following symlink: %s" % fullname) os.path.walk(fullname, self.distro_adder, distros_added) if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") ) and x != "initrd.size": if x.find("PAE") == -1: initrd = os.path.join(dirname,x) else: pae_initrd = os.path.join(dirname, x) if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") ) and x.find("initrd") == -1: if x.find("PAE") == -1: kernel = os.path.join(dirname,x) else: pae_kernel = os.path.join(dirname, x) # if we've collected a matching kernel and initrd pair, turn the in and add them to the list if initrd is not None and kernel is not None and dirname.find("isolinux") == -1: adtls.append(self.add_entry(dirname,kernel,initrd)) kernel = None initrd = None elif pae_initrd is not None and pae_kernel is not None and dirname.find("isolinux") == -1: adtls.append(self.add_entry(dirname,pae_kernel,pae_initrd)) pae_kernel = None pae_initrd = None for adtl in adtls: distros_added.extend(adtl) def add_entry(self,dirname,kernel,initrd): """ When we find a directory with a valid kernel/initrd in it, create the distribution objects as appropriate and save them. This includes creating xen and rescue distros/profiles if possible. 
""" proposed_name = self.get_proposed_name(dirname,kernel) proposed_arch = self.get_proposed_arch(dirname) if self.arch and proposed_arch and self.arch != proposed_arch: utils.die(self.logger,"Arch from pathname (%s) does not match with supplied one %s"%(proposed_arch,self.arch)) archs = self.learn_arch_from_tree() if not archs: if self.arch: archs.append( self.arch ) else: if self.arch and self.arch not in archs: utils.die(self.logger, "Given arch (%s) not found on imported tree %s"%(self.arch,self.get_pkgdir())) if proposed_arch: if archs and proposed_arch not in archs: self.logger.warning("arch from pathname (%s) not found on imported tree %s" % (proposed_arch,self.get_pkgdir())) return archs = [ proposed_arch ] if len(archs)>1: #if self.breed in [ "suse" ]: # self.logger.warning("directory %s holds multiple arches : %s" % (dirname, archs)) # return self.logger.warning("- Warning : Multiple archs found : %s" % (archs)) distros_added = [] for pxe_arch in archs: name = proposed_name + "-" + pxe_arch existing_distro = self.distros.find(name=name) if existing_distro is not None: self.logger.warning("skipping import, as distro name already exists: %s" % name) continue else: self.logger.info("creating new distro: %s" % name) distro = self.config.new_distro() if name.find("-autoboot") != -1: # this is an artifact of some EL-3 imports continue distro.set_name(name) distro.set_kernel(kernel) distro.set_initrd(initrd) distro.set_arch(pxe_arch) distro.set_breed(self.breed) distro.set_kernel_options("install=http://%s/cblr/links/%s" % (self.settings.server, name)) # If a version was supplied on command line, we set it now if self.os_version: distro.set_os_version(self.os_version) self.distros.add(distro,save=True) distros_added.append(distro) existing_profile = self.profiles.find(name=name) # see if the profile name is already used, if so, skip it and # do not modify the existing profile if existing_profile is None: self.logger.info("creating new profile: %s" % name) #FIXME: The created profile holds a default kickstart, and should be breed specific profile = self.config.new_profile() else: self.logger.info("skipping existing profile, name already exists: %s" % name) continue # save our minimal profile which just points to the distribution and a good # default answer file profile.set_name(name) profile.set_distro(name) profile.set_kickstart(self.kickstart_file) # depending on the name of the profile we can define a good virt-type # for usage with koan if name.find("-xen") != -1: profile.set_virt_type("xenpv") elif name.find("vmware") != -1: profile.set_virt_type("vmware") else: profile.set_virt_type("qemu") # save our new profile to the collection self.profiles.add(profile,save=True) # Create a rescue image as well, if this is not a xen distro # but only for red hat profiles # this code disabled as it seems to be adding "-rescue" to # distros that are /not/ rescue related, which is wrong. # left as a FIXME for those who find this feature interesting. 
#if name.find("-xen") == -1 and self.breed == "redhat": # rescue_name = 'rescue-' + name # existing_profile = self.profiles.find(name=rescue_name) # # if existing_profile is None: # self.logger.info("creating new profile: %s" % rescue_name) # profile = self.config.new_profile() # else: # continue # # profile.set_name(rescue_name) # profile.set_distro(name) # profile.set_virt_type("qemu") # profile.kernel_options['rescue'] = None # profile.kickstart = '/var/lib/cobbler/kickstarts/pxerescue.ks' # # self.profiles.add(profile,save=True) return distros_added def get_proposed_name(self,dirname,kernel=None): """ Given a directory name where we have a kernel/initrd pair, try to autoname the distribution (and profile) object based on the contents of that path """ if self.network_root is not None: name = self.name else: # remove the part that says /var/www/cobbler/ks_mirror/name name = "-".join(dirname.split("/")[5:]) if kernel is not None and kernel.find("PAE") != -1: name = name + "-PAE" if kernel is not None and kernel.find("xen") != -1: name = name + "-xen" # we have our kernel in ../boot//vmlinuz-xen and # .../boot//loader/vmlinuz # name = name.replace("-loader","") name = name.replace("-boot","") # some paths above the media root may have extra path segments we want # to clean up name = name.replace("-os","") name = name.replace("-tree","") name = name.replace("srv-www-cobbler-", "") name = name.replace("var-www-cobbler-", "") name = name.replace("ks_mirror-","") name = name.replace("--","-") for separator in [ '-' , '_' , '.' ] : for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]: name = name.replace("%s%s" % ( separator , arch ),"") return name def get_proposed_arch(self,dirname): """ Given an directory name, can we infer an architecture from a path segment? """ if dirname.find("x86_64") != -1 or dirname.find("amd") != -1: return "x86_64" if dirname.find("ia64") != -1: return "ia64" if dirname.find("i386") != -1 or dirname.find("386") != -1 or dirname.find("x86") != -1: return "i386" if dirname.find("s390x") != -1: return "s390x" if dirname.find("s390") != -1: return "s390" if dirname.find("ppc64") != -1 or dirname.find("chrp") != -1: return "ppc64" if dirname.find("ppc32") != -1: return "ppc" if dirname.find("ppc") != -1: return "ppc" return None def arch_walker(self,foo,dirname,fnames): """ See docs on learn_arch_from_tree. The TRY_LIST is used to speed up search, and should be dropped for default importer Searched kernel names are kernel-header, linux-headers-, kernel-largesmp, kernel-hugemem This method is useful to get the archs, but also to package type and a raw guess of the breed """ # try to find a kernel header RPM and then look at it's arch. for x in fnames: if self.match_kernelarch_file(x): for arch in self.get_valid_arches(): if x.find(arch) != -1: foo[arch] = 1 for arch in [ "i686" , "amd64" ]: if x.find(arch) != -1: foo[arch] = 1 def kickstart_finder(self,distros_added): """ For all of the profiles in the config w/o a kickstart, use the given kickstart file, or look at the kernel path, from that, see if we can guess the distro, and if we can, assign a kickstart if one is available for it. 
""" for profile in self.profiles: distro = self.distros.find(name=profile.get_conceptual_parent().name) if distro is None or not (distro in distros_added): continue kdir = os.path.dirname(distro.kernel) if self.kickstart_file == None: for rpm in self.get_release_files(): results = self.scan_pkg_filename(rpm) # FIXME : If os is not found on tree but set with CLI, no kickstart is searched if results is None: self.logger.warning("No version found on imported tree") continue (flavor, major, minor) = results version , ks = self.set_variance(flavor, major, minor, distro.arch) if self.os_version: if self.os_version != version: utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,version)) ds = self.get_datestamp() distro.set_comment("%s.%s" % (version, int(minor))) distro.set_os_version(version) if ds is not None: distro.set_tree_build_time(ds) profile.set_kickstart(ks) self.profiles.add(profile,save=True) self.configure_tree_location(distro) self.distros.add(distro,save=True) # re-save self.api.serialize() def configure_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... self.logger.warning("symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } # how we set the tree depends on whether an explicit network_root was specified tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree( distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree( distro, tree) def get_rootdir(self): return self.rootdir def get_pkgdir(self): if not self.pkgdir: return None return os.path.join(self.get_rootdir(),self.pkgdir) def set_install_tree(self, distro, url): distro.ks_meta["tree"] = url def learn_arch_from_tree(self): """ If a distribution is imported from DVD, there is a good chance the path doesn't contain the arch and we should add it back in so that it's part of the meaningful name ... so this code helps figure out the arch name. This is important for producing predictable distro names (and profile names) from differing import sources """ result = {} # FIXME : this is called only once, should not be a walk if self.get_pkgdir(): os.path.walk(self.get_pkgdir(), self.arch_walker, result) if result.pop("amd64",False): result["x86_64"] = 1 if result.pop("i686",False): result["i386"] = 1 if result.pop("x86",False): result["i386"] = 1 return result.keys() def match_kernelarch_file(self, filename): """ Is the given filename a kernel filename? 
""" if not filename.endswith("rpm") and not filename.endswith("deb"): return False for match in ["kernel-header", "kernel-source", "kernel-smp", "kernel-default", "kernel-desktop", "linux-headers-", "kernel-devel", "kernel-"]: if filename.find(match) != -1: return True return False def scan_pkg_filename(self, rpm): """ Determine what the distro is based on the release package filename. """ rpm = os.path.basename(rpm) return ("suse", 1, 1) def get_datestamp(self): """ Based on a RedHat tree find the creation timestamp """ base = self.get_rootdir() if os.path.exists("%s/.discinfo" % base): discinfo = open("%s/.discinfo" % base, "r") datestamp = discinfo.read().split("\n")[0] discinfo.close() else: return 0 return float(datestamp) def set_variance(self, flavor, major, minor, arch): """ find the profile kickstart and set the distro breed/os-version based on what we can find out from the rpm filenames and then return the kickstart path to use. """ os_version = "suse" kickbase = "/var/lib/cobbler/kickstarts" return os_version, "autoyast_sample.xml" # ========================================================================== def get_import_manager(config,logger): return ImportSuseManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_import_vmware.py000066400000000000000000000517771227367477500230300ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import glob import traceback import errno import re import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ def register(): """ The mandatory cobbler module registration hook. 
""" return "manage/import" class ImportVMWareManager: def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) # required function for import modules def what(self): return "import/vmware" # required function for import modules def check_for_signature(self,path,cli_breed): signatures = [ 'VMware/RPMS', 'imagedd.bz2', 'tboot.b00', ] for signature in signatures: d = os.path.join(path,signature) if os.path.exists(d): self.logger.info("Found a vmware compatible signature: %s" % signature) return (True,signature) if cli_breed and cli_breed in self.get_valid_breeds(): self.logger.info("Warning: No distro signature for kernel at %s, using value from command line" % path) return (True,None) return (False,None) # required function for import modules def run(self,pkgdir,name,path,network_root=None,kickstart_file=None,rsync_flags=None,arch=None,breed=None,os_version=None): self.pkgdir = pkgdir self.network_root = network_root self.kickstart_file = kickstart_file self.rsync_flags = rsync_flags self.arch = arch self.breed = breed self.os_version = os_version self.name = name self.path = path self.rootdir = path # some fixups for the XMLRPC interface, which does not use "None" if self.arch == "": self.arch = None if self.kickstart_file == "": self.kickstart_file = None if self.os_version == "": self.os_version = None if self.rsync_flags == "": self.rsync_flags = None if self.network_root == "": self.network_root = None # If no breed was specified on the command line, set it to "redhat" for this module if self.breed == None: self.breed = "vmware" # import takes a --kickstart for forcing selection that can't be used in all circumstances if self.kickstart_file and not self.breed: utils.die(self.logger,"Kickstart file can only be specified when a specific breed is selected") if self.os_version and not self.breed: utils.die(self.logger,"OS version can only be specified when a specific breed is selected") if self.breed and self.breed.lower() not in self.get_valid_breeds(): utils.die(self.logger,"Supplied import breed is not supported by this module") # now walk the filesystem looking for distributions that match certain patterns self.logger.info("adding distros") distros_added = [] # FIXME : search below self.path for isolinux configurations or known directories from TRY_LIST os.path.walk(self.path, self.distro_adder, distros_added) if len(distros_added) == 0: self.logger.warning("No distros imported, bailing out") return False # find the most appropriate answer files for each profile object self.logger.info("associating kickstarts") self.kickstart_finder(distros_added) # ensure bootloaders are present self.api.pxegen.copy_bootloaders() return True # required function for import modules def get_valid_arches(self): return ["i386", "x86_64", "x86",] # required function for import modules def get_valid_breeds(self): return ["vmware",] # required function for import modules def get_valid_os_versions(self): return utils.get_valid_os_versions_for_breed("vmware") def get_valid_repo_breeds(self): return ["rsync", "rhn", "yum",] def get_release_files(self): """ Find distro release packages. 
""" data = glob.glob(os.path.join(self.get_pkgdir(), "vmware-esx-vmware-release-*")) data2 = [] for x in data: b = os.path.basename(x) if b.find("vmware") != -1: data2.append(x) if len(data2) == 0: # ESXi maybe? data2 = glob.glob(os.path.join(self.get_rootdir(), "*.*")) return data2 def get_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... self.logger.warning("symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } # how we set the tree depends on whether an explicit network_root was specified tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree(distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree(distro, tree) return def distro_adder(self,distros_added,dirname,fnames): """ This is an os.path.walk routine that finds distributions in the directory to be scanned and then creates them. """ # FIXME: If there are more than one kernel or initrd image on the same directory, # results are unpredictable initrd = None kernel = None for x in fnames: adtls = [] fullname = os.path.join(dirname,x) if os.path.islink(fullname) and os.path.isdir(fullname): if fullname.startswith(self.path): self.logger.warning("avoiding symlink loop") continue self.logger.info("following symlink: %s" % fullname) os.path.walk(fullname, self.distro_adder, distros_added) if ( x.startswith("initrd") or x.startswith("ramdisk.image.gz") or x.startswith("vmkboot.gz") or x.startswith("s.v00") ) and x != "initrd.size": initrd = os.path.join(dirname,x) if ( x.startswith("vmlinu") or x.startswith("kernel.img") or x.startswith("linux") or x.startswith("mboot.c32") ) and x.find("initrd") == -1: kernel = os.path.join(dirname,x) # if we've collected a matching kernel and initrd pair, turn the in and add them to the list if initrd is not None and kernel is not None: adtls.append(self.add_entry(dirname,kernel,initrd)) kernel = None initrd = None for adtl in adtls: distros_added.extend(adtl) def add_entry(self,dirname,kernel,initrd): """ When we find a directory with a valid kernel/initrd in it, create the distribution objects as appropriate and save them. This includes creating xen and rescue distros/profiles if possible. 
""" arch = "x86_64" # esxi only supports 64-bit proposed_name = self.get_proposed_name(dirname,kernel) distros_added = [] name = proposed_name + "-" + arch existing_distro = self.distros.find(name=name) if existing_distro is not None: self.logger.warning("skipping import, as distro name already exists: %s" % name) else: self.logger.info("creating new distro: %s" % name) distro = self.config.new_distro() distro.set_name(name) distro.set_kernel(kernel) distro.set_initrd(initrd) distro.set_arch(arch) distro.set_breed(self.breed) # If a version was supplied on command line, we set it now if self.os_version: distro.set_os_version(self.os_version) self.distros.add(distro,save=True) distros_added.append(distro) # see if the profile name is already used, if so, skip it and # do not modify the existing profile existing_profile = self.profiles.find(name=name) if existing_profile is not None: self.logger.info("skipping existing profile, name already exists: %s" % name) else: self.logger.info("creating new profile: %s" % name) #FIXME: The created profile holds a default kickstart, and should be breed specific profile = self.config.new_profile() profile.set_name(name) profile.set_distro(name) profile.set_kickstart(self.kickstart_file) # We just set the virt type to vmware for these # since newer VMwares support running ESX as a guest for testing profile.set_virt_type("vmware") # save our new profile to the collection self.profiles.add(profile,save=True) return distros_added def get_proposed_name(self,dirname,kernel=None): """ Given a directory name where we have a kernel/initrd pair, try to autoname the distribution (and profile) object based on the contents of that path """ if self.network_root is not None: name = self.name #+ "-".join(utils.path_tail(os.path.dirname(self.path),dirname).split("/")) else: # remove the part that says /var/www/cobbler/ks_mirror/name name = "-".join(dirname.split("/")[5:]) if kernel is not None and kernel.find("PAE") != -1: name = name + "-PAE" # These are all Ubuntu's doing, the netboot images are buried pretty # deep. ;-) -JC name = name.replace("-netboot","") name = name.replace("-ubuntu-installer","") name = name.replace("-amd64","") name = name.replace("-i386","") # we know that some kernel paths should not be in the name name = name.replace("-images","") name = name.replace("-pxeboot","") name = name.replace("-install","") name = name.replace("-isolinux","") # some paths above the media root may have extra path segments we want # to clean up name = name.replace("-os","") name = name.replace("-tree","") name = name.replace("var-www-cobbler-", "") name = name.replace("ks_mirror-","") name = name.replace("--","-") # remove any architecture name related string, as real arch will be appended later name = name.replace("chrp","ppc64") for separator in [ '-' , '_' , '.' ] : for arch in [ "i386" , "x86_64" , "ia64" , "ppc64", "ppc32", "ppc", "x86" , "s390x", "s390" , "386" , "amd" ]: name = name.replace("%s%s" % ( separator , arch ),"") return name def kickstart_finder(self,distros_added): """ For all of the profiles in the config w/o a kickstart, use the given kickstart file, or look at the kernel path, from that, see if we can guess the distro, and if we can, assign a kickstart if one is available for it. """ # FIXME: this is bass-ackwards... why do we loop through all # profiles to find distros we added when we already have the list # of distros we added??? 
It would be easier to loop through the # distros_added list and modify all child profiles for profile in self.profiles: distro = self.distros.find(name=profile.get_conceptual_parent().name) if distro is None or not (distro in distros_added): continue kdir = os.path.dirname(distro.kernel) release_files = self.get_release_files() for release_file in release_files: results = self.scan_pkg_filename(release_file) if results is None: continue (flavor, major, minor, release, update) = results version , ks = self.set_variance(flavor, major, minor, release, update, distro.arch) if self.os_version: if self.os_version != version: utils.die(self.logger,"CLI version differs from tree : %s vs. %s" % (self.os_version,version)) ds = self.get_datestamp() distro.set_comment("%s.%s.%s update %s" % (version,minor,release,update)) distro.set_os_version(version) if ds is not None: distro.set_tree_build_time(ds) if self.kickstart_file == None: profile.set_kickstart(ks) boot_files = '' if version == "esxi4": self.logger.info("This is an ESXi4 distro - adding extra PXE files to boot-files list") # add extra files to boot_files in the distro for file in ('vmkernel.gz','sys.vgz','cim.vgz','ienviron.vgz','install.vgz'): boot_files += '$img_path/%s=%s/%s ' % (file,self.path,file) elif version == "esxi5": self.logger.info("This is an ESXi5 distro - copying all files to boot-files list") #for file in glob.glob(os.path.join(self.path,"*.*")): # file_name = os.path.basename(file) # boot_files += '$img_path/%s=%s ' % (file_name,file) boot_files = '$img_path/=%s' % os.path.join(self.path,"*.*") distro.set_boot_files(boot_files.strip()) self.profiles.add(profile,save=True) # we found the correct details above, we can stop looping break self.configure_tree_location(distro) self.distros.add(distro,save=True) # re-save self.api.serialize() def configure_tree_location(self, distro): """ Once a distribution is identified, find the part of the distribution that has the URL in it that we want to use for kickstarting the distribution, and create a ksmeta variable $tree that contains this. """ base = self.get_rootdir() if self.network_root is None: dest_link = os.path.join(self.settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.exists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... self.logger.warning("symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } # how we set the tree depends on whether an explicit network_root was specified tree = "http://@@http_server@@/cblr/links/%s" % (distro.name) self.set_install_tree( distro, tree) else: # where we assign the kickstart source is relative to our current directory # and the input start directory in the crawl. We find the path segments # between and tack them on the network source path to find the explicit # network path to the distro that Anaconda can digest. tail = utils.path_tail(self.path, base) tree = self.network_root[:-1] + tail self.set_install_tree( distro, tree) def get_rootdir(self): return self.rootdir def get_pkgdir(self): if not self.pkgdir: return None return os.path.join(self.get_rootdir(),self.pkgdir) def set_install_tree(self, distro, url): distro.ks_meta["tree"] = url def scan_pkg_filename(self, filename): """ Determine what the distro is based on the release package filename. 
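        A hedged illustration of the return value; the filename below is
        hypothetical, shaped only to match the ESX regular expression used in
        the body of this method:

            scan_pkg_filename("vmware-esx-vmware-release-4-4.1.0-3.x86_64.rpm")
            # -> ("esx", "4", "1", "0", "3")  i.e. (flavor, major, minor, release, update)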
""" file_base_name = os.path.basename(filename) success = False if file_base_name.lower().find("-esx-") != -1: flavor = "esx" match = re.search(r'release-(\d)+-(\d)+\.(\d)+\.(\d)+-(\d)\.', filename) if match: major = match.group(2) minor = match.group(3) release = match.group(4) update = match.group(5) success = True elif file_base_name.lower() == "vmkernel.gz": flavor = "esxi" major = 0 minor = 0 release = 0 update = 0 # this should return something like: # VMware ESXi 4.1.0 [Releasebuild-260247], built on May 18 2010 # though there will most likely be multiple results scan_cmd = 'gunzip -c %s | strings | grep -i "^vmware esxi"' % filename (data,rc) = utils.subprocess_sp(None, scan_cmd) lines = data.split('\n') m = re.compile(r'ESXi (\d)+\.(\d)+\.(\d)+ \[Releasebuild-([\d]+)\]') for line in lines: match = m.search(line) if match: major = match.group(1) minor = match.group(2) release = match.group(3) update = match.group(4) success = True break elif file_base_name.lower() == "s.v00": flavor = "esxi" major = 0 minor = 0 release = 0 update = 0 # this should return something like: # VMware ESXi 5.0.0 build-469512 # though there will most likely be multiple results scan_cmd = 'gunzip -c %s | strings | grep -i "^# vmware esxi"' % filename (data,rc) = utils.subprocess_sp(None, scan_cmd) lines = data.split('\n') m = re.compile(r'ESXi (\d)+\.(\d)+\.(\d)+[ ]+build-([\d]+)') for line in lines: match = m.search(line) if match: major = match.group(1) minor = match.group(2) release = match.group(3) update = match.group(4) success = True break if success: return (flavor, major, minor, release, update) else: return None def get_datestamp(self): """ Based on a VMWare tree find the creation timestamp """ pass def set_variance(self, flavor, major, minor, release, update, arch): """ Set distro specific versioning. """ os_version = "%s%s" % (flavor, major) if os_version == "esx4": ks = "/var/lib/cobbler/kickstarts/esx.ks" elif os_version == "esxi4": ks = "/var/lib/cobbler/kickstarts/esxi4-ks.cfg" elif os_version == "esxi5": ks = "/var/lib/cobbler/kickstarts/esxi5-ks.cfg" else: ks = "/var/lib/cobbler/kickstarts/default.ks" return os_version , ks # ========================================================================== def get_import_manager(config,logger): return ImportVMWareManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_in_tftpd.py000066400000000000000000000150061227367477500217250ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os.path, traceback, errno import re import clogger import pxegen import shutil import glob import utils from cexceptions import * import templar from utils import _ def register(): """ The mandatory cobbler module registration hook. 
""" return "manage" class InTftpdManager: def what(self): return "tftpd" def __init__(self,config,logger): """ Constructor """ self.logger = logger if self.logger is None: self.logger = clogger.Logger() self.config = config self.templar = templar.Templar(config) self.settings_file = "/etc/xinetd.d/tftp" self.pxegen = pxegen.PXEGen(config, self.logger) self.systems = config.systems() self.bootloc = utils.tftpboot_location() def regen_hosts(self): pass # not used def write_dns_files(self): pass # not used def write_boot_files_distro(self,distro): # collapse the object down to a rendered datastructure # the second argument set to false means we don't collapse # hashes/arrays into a flat string target = utils.blender(self.config.api, False, distro) # Create metadata for the templar function # Right now, just using local_img_path, but adding more # cobbler variables here would probably be good metadata = {} metadata["local_img_path"] = os.path.join(utils.tftpboot_location(),"images",distro.name) # Create the templar instance. Used to template the target directory templater = templar.Templar(self.config) # Loop through the hash of boot files, # executing a cp for each one self.logger.info("processing boot_files for distro: %s" % distro.name) for file in target["boot_files"].keys(): rendered_file = templater.render(file,metadata,None) try: for f in glob.glob(target["boot_files"][file]): if f == target["boot_files"][file]: # this wasn't really a glob, so just copy it as is filedst = rendered_file else: # this was a glob, so figure out what the destination # file path/name should be tgt_path,tgt_file=os.path.split(f) rnd_path,rnd_file=os.path.split(rendered_file) filedst = os.path.join(rnd_path,tgt_file) if not os.path.isfile(filedst): shutil.copyfile(f, filedst) self.config.api.log("copied file %s to %s for %s" % (f,filedst,distro.name)) except: self.logger.error("failed to copy file %s to %s for %s" % (f,filedst,distro.name)) return 0 def write_boot_files(self): """ Copy files in profile["boot_files"] into /tftpboot. Used for vmware currently. """ for distro in self.config.distros(): self.write_boot_files_distro(distro) return 0 def write_tftpd_files(self): """ xinetd files are written when manage_tftp is set in /var/lib/cobbler/settings. 
""" template_file = "/etc/cobbler/tftpd.template" try: f = open(template_file,"r") except: raise CX(_("error reading template %s") % template_file) template_data = "" template_data = f.read() f.close() metadata = { "user" : "root", "binary" : "/usr/sbin/in.tftpd", "args" : "%s" % self.bootloc } self.logger.info("generating %s" % self.settings_file) self.templar.render(template_data, metadata, self.settings_file, None) def update_netboot(self,name): """ Write out new pxelinux.cfg files to /tftpboot """ system = self.systems.find(name=name) if system is None: utils.die(self.logger,"error in system lookup for %s" % name) self.pxegen.write_all_system_files(system) # generate any templates listed in the system self.pxegen.write_templates(system) def add_single_system(self,system): """ Write out new pxelinux.cfg files to /tftpboot """ # write the PXE files for the system self.pxegen.write_all_system_files(system) # generate any templates listed in the distro self.pxegen.write_templates(system) def add_single_distro(self,distro): self.pxegen.copy_single_distro_files(distro,self.bootloc,False) self.write_boot_files_distro(distro) def sync(self,verbose=True): """ Write out all files to /tftpdboot """ self.pxegen.verbose = verbose self.logger.info("copying bootloaders") self.pxegen.copy_bootloaders() self.logger.info("copying distros to tftpboot") # Adding in the exception handling to not blow up if files have # been moved (or the path references an NFS directory that's no longer # mounted) for d in self.config.distros(): try: self.logger.info("copying files for distro: %s" % d.name) self.pxegen.copy_single_distro_files(d,self.bootloc,False) except CX, e: self.logger.error(e.value) self.logger.info("copying images") self.pxegen.copy_images() # the actual pxelinux.cfg files, for each interface self.logger.info("generating PXE configuration files") for x in self.systems: self.pxegen.write_all_system_files(x) self.logger.info("generating PXE menu structure") self.pxegen.make_pxe_menu() def get_manager(config,logger): return InTftpdManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_isc.py000066400000000000000000000154131227367477500206760ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan John Eckersberg This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import time import glob import traceback import errno import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ def register(): """ The mandatory cobbler module registration hook. 
""" return "manage" class IscManager: def what(self): return "isc" def __init__(self,config,logger): """ Constructor """ self.logger = logger self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) self.settings_file = utils.dhcpconf_location(self.api) def write_dhcp_file(self): """ DHCP files are written when manage_dhcp is set in /var/lib/cobbler/settings. """ template_file = "/etc/cobbler/dhcp.template" blender_cache = {} try: f2 = open(template_file,"r") except: raise CX(_("error reading template: %s") % template_file) template_data = "" template_data = f2.read() f2.close() # use a simple counter for generating generic names where a hostname # is not available counter = 0 # we used to just loop through each system, but now we must loop # through each network interface of each system. dhcp_tags = { "default": {} } elilo = "/elilo-3.6-ia64.efi" yaboot = "/yaboot-1.3.14" for system in self.systems: if not system.is_management_supported(cidr_ok=False): continue profile = system.get_conceptual_parent() distro = profile.get_conceptual_parent() # if distro is None then the profile is really an image # record! for (name, interface) in system.interfaces.iteritems(): # this is really not a per-interface setting # but we do this to make the templates work # without upgrade interface["gateway"] = system.gateway mac = interface["mac_address"] if interface["interface_type"] in ("slave","bond_slave","bridge_slave","bonded_bridge_slave"): if interface["interface_master"] not in system.interfaces: # Can't write DHCP entry; master interface does not # exist continue ip = system.interfaces[interface["interface_master"]]["ip_address"] interface["ip_address"] = ip host = system.interfaces[interface["interface_master"]]["dns_name"] else: ip = interface["ip_address"] host = interface["dns_name"] if distro is not None: interface["distro"] = distro.to_datastruct() if mac is None or mac == "": # can't write a DHCP entry for this system continue counter = counter + 1 # the label the entry after the hostname if possible if host is not None and host != "": if name != "eth0": interface["name"] = "%s-%s" % (host,name) else: interface["name"] = "%s" % (host) else: interface["name"] = "generic%d" % counter # add references to the system, profile, and distro # for use in the template if blender_cache.has_key(system.name): blended_system = blender_cache[system.name] else: blended_system = utils.blender( self.api, False, system ) blender_cache[system.name] = blended_system interface["next_server"] = blended_system["server"] interface["netboot_enabled"] = blended_system["netboot_enabled"] interface["hostname"] = blended_system["hostname"] interface["owner"] = blended_system["name"] interface["enable_gpxe"] = blended_system["enable_gpxe"] if not interface["netboot_enabled"] and interface['static']: continue interface["filename"] = "/pxelinux.0" # can't use pxelinux.0 anymore if distro is not None: if distro.arch == "ia64": interface["filename"] = elilo elif distro.arch.startswith("ppc"): interface["filename"] = yaboot dhcp_tag = interface["dhcp_tag"] if dhcp_tag == "": dhcp_tag = "default" if not dhcp_tags.has_key(dhcp_tag): dhcp_tags[dhcp_tag] = { mac: interface } else: dhcp_tags[dhcp_tag][mac] = interface # we are now done with the looping through each interface of each system metadata = { "date" : time.asctime(time.gmtime()), 
"cobbler_server" : "%s:%s" % (self.settings.server,self.settings.http_port), "next_server" : self.settings.next_server, "elilo" : elilo, "yaboot" : yaboot, "dhcp_tags" : dhcp_tags } if self.logger is not None: self.logger.info("generating %s" % self.settings_file) self.templar.render(template_data, metadata, self.settings_file, None) def regen_ethers(self): pass # ISC/BIND do not use this def get_manager(config,logger): return IscManager(config,logger) cobbler-2.4.1/cobbler/modules/manage_tftpd_py.py000066400000000000000000000063251227367477500217530ustar00rootroot00000000000000""" This is some of the code behind 'cobbler sync'. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import traceback import errno import clogger import utils from cexceptions import * import templar import pxegen from utils import _ def register(): """ The mandatory cobbler module registration hook. """ return "manage" class TftpdPyManager: def what(self): return "tftpd" def __init__(self,config,logger): """ Constructor """ self.logger = logger if self.logger is None: self.logger = clogger.Logger() self.config = config self.templar = templar.Templar(config) self.settings_file = "/etc/xinetd.d/tftp" def regen_hosts(self): pass # not used def write_dns_files(self): pass # not used def write_boot_files_distro(self,distro): """ Copy files in profile["boot_files"] into /tftpboot. Used for vmware currently. """ pass # not used. Handed by tftp.py def write_boot_files(self): """ Copy files in profile["boot_files"] into /tftpboot. Used for vmware currently. """ pass # not used. Handed by tftp.py def add_single_distro(self,distro): pass # not used def write_tftpd_files(self): """ xinetd files are written when manage_tftp is set in /var/lib/cobbler/settings. """ template_file = "/etc/cobbler/tftpd.template" try: f = open(template_file,"r") except: raise CX(_("error reading template %s") % template_file) template_data = "" template_data = f.read() f.close() metadata = { "user" : "nobody", "binary" : "/usr/sbin/tftpd.py", "args" : "-v" } self.logger.info("generating %s" % self.settings_file) self.templar.render(template_data, metadata, self.settings_file, None) def sync(self,verbose=True): """ Write out files to /tftpdboot. Mostly unused for the python server """ self.logger.info("copying bootloaders") pxegen.PXEGen(self.config,self.logger).copy_bootloaders() def update_netboot(self,name): """ Write out files to /tftpdboot. Unused for the python server """ pass def add_single_system(self,name): """ Write out files to /tftpdboot. Unused for the python server """ pass def get_manager(config,logger): return TftpdPyManager(config,logger) cobbler-2.4.1/cobbler/modules/scm_track.py000066400000000000000000000063141227367477500205560ustar00rootroot00000000000000""" (C) 2009, Red Hat Inc. 
Michael DeHaan

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
02110-1301  USA
"""

import distutils.sysconfig
import sys
import os
import traceback

from cobbler.cexceptions import *
#import xmlrpclib
import cobbler.module_loader as module_loader
import cobbler.utils as utils

plib = distutils.sysconfig.get_python_lib()
mod_path="%s/cobbler" % plib
sys.path.insert(0, mod_path)

def register():
    # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
    # the return of this method indicates the trigger type
    return "/var/lib/cobbler/triggers/change/*"

def run(api,args,logger):
    settings = api.settings()

    scm_track_enabled = str(settings.scm_track_enabled).lower()
    mode = str(settings.scm_track_mode).lower()

    if scm_track_enabled not in [ "y", "yes", "1", "true" ]:
        # feature disabled
        return 0

    if mode == "git":
        old_dir = os.getcwd()
        os.chdir("/var/lib/cobbler")
        if os.getcwd() != "/var/lib/cobbler":
            # string exceptions are invalid on newer python; raise a real exception
            raise CX("danger will robinson")

        if not os.path.exists("/var/lib/cobbler/.git"):
            rc = utils.subprocess_call(logger,"git init",shell=True)

        # FIXME: if we know the remote user of an XMLRPC call
        # use them as the author
        rc = utils.subprocess_call(logger,"git add --all config",shell=True)
        rc = utils.subprocess_call(logger,"git add --all kickstarts",shell=True)
        rc = utils.subprocess_call(logger,"git add --all snippets",shell=True)
        rc = utils.subprocess_call(logger,"git commit -m 'API update' --author 'cobbler '",shell=True)

        os.chdir(old_dir)
        return 0

    elif mode == "hg":
        # use mercurial
        old_dir = os.getcwd()
        os.chdir("/var/lib/cobbler")
        if os.getcwd() != "/var/lib/cobbler":
            # string exceptions are invalid on newer python; raise a real exception
            raise CX("danger will robinson")

        if not os.path.exists("/var/lib/cobbler/.hg"):
            rc = utils.subprocess_call(logger,"hg init",shell=True)

        # FIXME: if we know the remote user of an XMLRPC call
        # use them as the user
        rc = utils.subprocess_call(logger,"hg add config",shell=True)
        rc = utils.subprocess_call(logger,"hg add kickstarts",shell=True)
        rc = utils.subprocess_call(logger,"hg add snippets",shell=True)
        rc = utils.subprocess_call(logger,"hg commit -m 'API update' --user 'cobbler '",shell=True)

        os.chdir(old_dir)
        return 0
    else:
        raise CX("currently unsupported SCM type: %s" % mode)
cobbler-2.4.1/cobbler/modules/serializer_catalog.py000066400000000000000000000161641227367477500224570ustar00rootroot00000000000000"""
Serializer code for cobbler.
As of 8/2009, this is the "best" serializer option.
It uses multiple files in /var/lib/cobbler/config/distros.d, profiles.d, etc
And JSON, when possible, and YAML, when not.
It is particularly fast, especially when using JSON.
YAML, not so much.
It also knows how to upgrade the old "single file" configs to .d versions.
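
An illustrative sketch of the on-disk layout this serializer maintains
(the object names are hypothetical):

    /var/lib/cobbler/config/distros.d/centos6-x86_64.json    (JSON, preferred)
    /var/lib/cobbler/config/profiles.d/webserver             (older YAML, no extension)
    /var/lib/cobbler/config/mgmtclasses.d/...                ("mgmtclass" pluralizes with "es")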
Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import os import sys import glob import traceback import yaml # PyYAML import simplejson import exceptions plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) from utils import _ import utils from cexceptions import * import os import cobbler.api as capi def can_use_json(): version = sys.version[:3] version = float(version) return (version > 2.3) def register(): """ The mandatory cobbler module registration hook. """ return "serializer" def what(): """ Module identification function """ return "serializer/catalog" def serialize_item(obj, item): if item.name is None or item.name == "": raise exceptions.RuntimeError("name unset for object!") # FIXME: Need a better way to support collections/items # appending an 's' does not work in all cases if obj.collection_type() in [ 'mgmtclass' ]: filename = "/var/lib/cobbler/config/%ses.d/%s" % (obj.collection_type(),item.name) else: filename = "/var/lib/cobbler/config/%ss.d/%s" % (obj.collection_type(),item.name) datastruct = item.to_datastruct() jsonable = can_use_json() if jsonable: if capi.BootAPI().settings().serializer_pretty_json: sort_keys = True indent = 4 else: sort_keys = False indent = None # avoid using JSON on python 2.3 where we can encounter # unicode problems with simplejson pre 2.0 if os.path.exists(filename): print "upgrading yaml file to json: %s" % filename os.remove(filename) filename = filename + ".json" datastruct = item.to_datastruct() fd = open(filename,"w+") data = simplejson.dumps(datastruct, encoding="utf-8", sort_keys = sort_keys, indent = indent) #data = data.encode('utf-8') fd.write(data) else: if os.path.exists(filename + ".json"): print "downgrading json file back to yaml: %s" % filename os.remove(filename + ".json") datastruct = item.to_datastruct() fd = open(filename,"w+") data = yaml.dump(datastruct) fd.write(data) fd.close() return True def serialize_delete(obj, item): # FIXME: Need a better way to support collections/items # appending an 's' does not work in all cases if obj.collection_type() in [ 'mgmtclass', ]: filename = "/var/lib/cobbler/config/%ses.d/%s" % (obj.collection_type(),item.name) else: filename = "/var/lib/cobbler/config/%ss.d/%s" % (obj.collection_type(),item.name) filename2 = filename + ".json" if os.path.exists(filename): os.remove(filename) if os.path.exists(filename2): os.remove(filename2) return True def deserialize_item_raw(collection_type, item_name): # this new fn is not really implemented performantly in this module. # yet. 
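    # A hedged usage sketch (the object name is hypothetical):
    #   deserialize_item_raw("distro", "centos6-x86_64")
    # returns the item's datastruct dict loaded from distros.d/centos6-x86_64
    # (YAML) or distros.d/centos6-x86_64.json (JSON), or None when neither
    # file exists.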
    # FIXME: Need a better way to support collections/items
    # appending an 's' does not work in all cases
    if collection_type in [ 'mgmtclass' ]:
        filename = "/var/lib/cobbler/config/%ses.d/%s" % (collection_type,item_name)
    else:
        filename = "/var/lib/cobbler/config/%ss.d/%s" % (collection_type,item_name)
    filename2 = filename + ".json"

    if os.path.exists(filename):
        fd = open(filename)
        data = fd.read()
        return yaml.safe_load(data)
    elif os.path.exists(filename2):
        fd = open(filename2)
        data = fd.read()
        return simplejson.loads(data, encoding="utf-8")
    else:
        return None

def serialize(obj):
    """
    Save an object to disk.  Object must "implement" Serializable.
    FIXME: Return False on access/permission errors.
    This should NOT be used by API if serialize_item is available.
    """
    ctype = obj.collection_type()
    if ctype == "settings":
        return True
    for x in obj:
        serialize_item(obj,x)
    return True

def deserialize_raw(collection_type):
    if collection_type == "settings":
        fd = open("/etc/cobbler/settings")
        datastruct = yaml.safe_load(fd.read())
        fd.close()
        return datastruct
    else:
        results = []
        # FIXME: Need a better way to support collections/items
        # appending an 's' does not work in all cases
        if collection_type in [ 'mgmtclass' ]:
            all_files = glob.glob("/var/lib/cobbler/config/%ses.d/*" % collection_type)
        else:
            all_files = glob.glob("/var/lib/cobbler/config/%ss.d/*" % collection_type)
        all_files = filter_upgrade_duplicates(all_files)
        for f in all_files:
            fd = open(f)
            ydata = fd.read()
            # ydata = ydata.decode()
            if f.endswith(".json"):
                datastruct = simplejson.loads(ydata, encoding='utf-8')
            else:
                datastruct = yaml.safe_load(ydata)
            results.append(datastruct)
            fd.close()
        return results

def filter_upgrade_duplicates(file_list):
    """
    In a set of files, some ending with .json, some not, return
    the list of files with the .json ones taking priority over
    the ones that are not.
    """
    bases = {}
    for f in file_list:
        basekey = f.replace(".json","")
        if f.endswith(".json"):
            bases[basekey] = f
        else:
            lookup = bases.get(basekey,"")
            if not lookup.endswith(".json"):
                bases[basekey] = f
    return bases.values()

def deserialize(obj,topological=True):
    """
    Populate an existing object with the contents
    of datastruct.  Object must "implement" Serializable.
    """
    datastruct = deserialize_raw(obj.collection_type())
    if topological and type(datastruct) == list:
        datastruct.sort(__depth_cmp)
    obj.from_datastruct(datastruct)
    return True

def __depth_cmp(item1, item2):
    d1 = item1.get("depth",1)
    d2 = item2.get("depth",1)
    return cmp(d1,d2)

if __name__ == "__main__":
    print deserialize_item_raw("distro","D1")
cobbler-2.4.1/cobbler/modules/serializer_couch.py000066400000000000000000000076231227367477500221460ustar00rootroot00000000000000"""
Serializer code for cobbler.
Experimental: couchdb version

Copyright 2006-2009, Red Hat, Inc and Others
Michael DeHaan

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import os import sys import glob import traceback import yaml # PyYAML import simplejson import exceptions plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) from utils import _ import utils from cexceptions import * import os import couch typez = [ "distro", "profile", "system", "image", "repo" ] couchdb = couch.Couch('127.0.0.1') def __connect(): couchdb.connect() for x in typez: couchdb.createDb(x) def register(): """ The mandatory cobbler module registration hook. """ # FIXME: only run this if enabled. return "serializer" def what(): """ Module identification function """ return "serializer/couchdb" def serialize_item(obj, item): __connect() datastruct = item.to_datastruct() # blindly prevent conflict resolution couchdb.openDoc(obj.collection_type(), item.name) data = couchdb.saveDoc(obj.collection_type(), simplejson.dumps(datastruct, encoding="utf-8"), item.name) data = simplejson.loads(data) return True def serialize_delete(obj, item): __connect() couchdb.deleteDoc(obj.collection_type(), item.name) return True def deserialize_item_raw(collection_type, item_name): __connect() data = couchdb.openDoc(collection_type, item_name) return simplejson.loads(data, encoding="utf-8") def serialize(obj): """ Save an object to disk. Object must "implement" Serializable. FIXME: Return False on access/permission errors. This should NOT be used by API if serialize_item is available. """ __connect() ctype = obj.collection_type() if ctype == "settings": return True for x in obj: serialize_item(obj,x) return True def deserialize_raw(collection_type): __connect() contents = simplejson.loads(couchdb.listDoc(collection_type)) items = [] if contents.has_key("error") and contents.get("reason","").find("Missing") != -1: # no items in the DB yet return [] for x in contents["rows"]: items.append(x["key"]) if collection_type == "settings": fd = open("/etc/cobbler/settings") datastruct = yaml.safe_load(fd.read()) fd.close() return datastruct else: results = [] for f in items: data = couchdb.openDoc(collection_type, f) datastruct = simplejson.loads(data, encoding='utf-8') results.append(datastruct) return results def deserialize(obj,topological=True): """ Populate an existing object with the contents of datastruct. Object must "implement" Serializable. """ __connect() datastruct = deserialize_raw(obj.collection_type()) if topological and type(datastruct) == list: datastruct.sort(__depth_cmp) obj.from_datastruct(datastruct) return True def __depth_cmp(item1, item2): d1 = item1.get("depth",1) d2 = item2.get("depth",1) return cmp(d1,d2) if __name__ == "__main__": print deserialize_item_raw("distro","D1") cobbler-2.4.1/cobbler/modules/serializer_mongodb.py000066400000000000000000000071511227367477500224660ustar00rootroot00000000000000""" Serializer code for cobbler. Experimental: mongodb version Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan James Cammarata This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import os import sys import traceback import exceptions plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) from utils import _ import utils from cexceptions import * import os import ConfigParser pymongo_loaded = False try: from pymongo import Connection pymongo_loaded = True except: # FIXME: log message pass cp = ConfigParser.ConfigParser() cp.read("/etc/cobbler/mongodb.conf") host = cp.get("connection","host") port = int(cp.get("connection","port")) mongodb = None def __connect(): # TODO: detect connection error global mongodb try: mongodb = Connection('localhost', 27017)['cobbler'] return True except: # FIXME: log error return False def register(): """ The mandatory cobbler module registration hook. """ # FIXME: only run this if enabled. if not pymongo_loaded: return "" return "serializer" def what(): """ Module identification function """ return "serializer/mongodb" def serialize_item(obj, item): if not __connect(): # FIXME: log error return False collection = mongodb[obj.collection_type()] data = collection.find_one({'name':item.name}) if data: collection.update({'name':item.name}, item.to_datastruct()) else: collection.insert(item.to_datastruct()) return True def serialize_delete(obj, item): if not __connect(): # FIXME: log error return False collection = mongodb[obj.collection_type()] collection.remove({'name':item.name}) return True def deserialize_item_raw(collection_type, item_name): if not __connect(): # FIXME: log error return False collection = mongodb[obj.collection_type()] data = collection.find_one({'name':item.name}) return data def serialize(obj): """ Save an object to the database. """ # TODO: error detection ctype = obj.collection_type() for x in obj: serialize_item(obj,x) return True def deserialize_raw(collection_type): if not __connect(): # FIXME: log error return False collection = mongodb[collection_type] return collection.find() def deserialize(obj,topological=True): """ Populate an existing object with the contents of datastruct. Object must "implement" Serializable. """ datastruct = deserialize_raw(obj.collection_type()) if topological and type(datastruct) == list: datastruct.sort(__depth_cmp) obj.from_datastruct(datastruct) return True def __depth_cmp(item1, item2): d1 = item1.get("depth",1) d2 = item2.get("depth",1) return cmp(d1,d2) if __name__ == "__main__": print deserialize_item_raw("distro","D1") cobbler-2.4.1/cobbler/modules/serializer_mysql.py000066400000000000000000000111631227367477500222040ustar00rootroot00000000000000""" Serializer code for cobbler. Experimental: mysql version Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan James Cammarata This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import distutils.sysconfig import os import sys import traceback import exceptions import simplejson plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) from utils import _ import utils from cexceptions import * import os import ConfigParser mysql_loaded = False try: import MySQLdb mysql_loaded = True except: # FIXME: log message pass mysqlconn = None def __connect(): # TODO: detect connection error global mysqlconn try: needs_connection = False if not mysqlconn: needs_connection = True elif not mysqlconn.open: needs_connection = True if needs_connection: mysqlconn = MySQLdb.connect(host="localhost",user="cobbler",passwd="testing123",db="cobbler") return mysqlconn.open except: # FIXME: log error return False def register(): """ The mandatory cobbler module registration hook. """ # FIXME: only run this if enabled. if not mysql_loaded: return "" __connect() return "serializer" def what(): """ Module identification function """ return "serializer/mysql" # Note that for all SQL inserts/deletes, we're using parameterized calls to # execute queries (DO NOT USE "string %s" % foo!!!). # # This is the safe and correct way to do things (no Bobby Tables), though # MySQLdb doesn't like it when you do that with the table name, so we still # do that the old way. This should not be a concern, since the collection # types are not exposed to the user and are internal only. def serialize_item(obj, item): if not __connect(): raise "Failed to connect" c = mysqlconn.cursor() data = simplejson.dumps(item.to_datastruct()) res = c.execute("INSERT INTO %s (name,data) VALUES(%%s,%%s) ON DUPLICATE KEY UPDATE data=%%s" % obj.collection_type(),(item.name,data,data)) mysqlconn.commit() if res: return True else: return False def serialize_delete(obj, item): if not __connect(): raise "Failed to connect" c = mysqlconn.cursor() res = c.execute("DELETE FROM %s WHERE name = %%s" % obj.collection_type(),item.name) mysqlconn.commit() if res: return True else: return False def deserialize_item_raw(collection_type, item_name): if not __connect(): raise "Failed to connect" c = mysqlconn.cursor() c.execute("SELECT data FROM %s WHERE name=%%s" % collection_type,item_name) data = c.fetchone() if data: data = simplejson.loads(data[0]) return data def serialize(obj): """ Save an object to the database. """ # TODO: error detection ctype = obj.collection_type() for x in obj: serialize_item(obj,x) return True def deserialize_raw(collection_type): if not __connect(): raise "Failed to connect" c = mysqlconn.cursor() c.execute("SELECT data FROM %s" % collection_type) data = c.fetchall() rdata = [] for row in data: rdata.append(simplejson.loads(row[0])) return rdata def deserialize(obj,topological=True): """ Populate an existing object with the contents of datastruct. Object must "implement" Serializable. 
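    When topological is True the datastructs are sorted by their "depth"
    key before being handed to from_datastruct(), so that, for example,
    lower-depth (parent) records are restored before higher-depth (child)
    ones. A minimal sketch with hypothetical records:

        [{"name": "b", "depth": 2}, {"name": "a", "depth": 1}]
        # -> restored in the order: "a" (depth 1), then "b" (depth 2)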
""" datastruct = deserialize_raw(obj.collection_type()) if topological and type(datastruct) == list: datastruct.sort(__depth_cmp) obj.from_datastruct(datastruct) return True def __depth_cmp(item1, item2): d1 = item1.get("depth",1) d2 = item2.get("depth",1) return cmp(d1,d2) if __name__ == "__main__": print deserialize_item_raw("distro","D1") cobbler-2.4.1/cobbler/modules/sync_post_restart_services.py000066400000000000000000000052431227367477500243000ustar00rootroot00000000000000import distutils.sysconfig import sys import os import traceback import cexceptions import os import sys import xmlrpclib import cobbler.module_loader as module_loader import cobbler.utils as utils plib = distutils.sysconfig.get_python_lib() mod_path="%s/cobbler" % plib sys.path.insert(0, mod_path) def register(): # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster. # the return of this method indicates the trigger type return "/var/lib/cobbler/triggers/sync/post/*" def run(api,args,logger): settings = api.settings() manage_dhcp = str(settings.manage_dhcp).lower() manage_dns = str(settings.manage_dns).lower() manage_tftpd = str(settings.manage_tftpd).lower() restart_dhcp = str(settings.restart_dhcp).lower() restart_dns = str(settings.restart_dns).lower() which_dhcp_module = module_loader.get_module_from_file("dhcp","module",just_name=True).strip() which_dns_module = module_loader.get_module_from_file("dns","module",just_name=True).strip() # special handling as we don't want to restart it twice has_restarted_dnsmasq = False rc = 0 if manage_dhcp != "0": if which_dhcp_module == "manage_isc": if restart_dhcp != "0": rc = utils.subprocess_call(logger, "dhcpd -t -q", shell=True) if rc != 0: logger.error("dhcpd -t failed") return 1 dhcp_service_name = utils.dhcp_service_name(api) dhcp_restart_command = "service %s restart" % dhcp_service_name rc = utils.subprocess_call(logger, dhcp_restart_command, shell=True) elif which_dhcp_module == "manage_dnsmasq": if restart_dhcp != "0": rc = utils.subprocess_call(logger, "service dnsmasq restart") has_restarted_dnsmasq = True else: logger.error("unknown DHCP engine: %s" % which_dhcp_module) rc = 411 if manage_dns != "0" and restart_dns != "0": if which_dns_module == "manage_bind": named_service_name = utils.named_service_name(api) dns_restart_command = "service %s restart" % named_service_name rc = utils.subprocess_call(logger, dns_restart_command, shell=True) elif which_dns_module == "manage_dnsmasq" and not has_restarted_dnsmasq: rc = utils.subprocess_call(logger, "service dnsmasq restart", shell=True) elif which_dns_module == "manage_dnsmasq" and has_restarted_dnsmasq: rc = 0 else: logger.error("unknown DNS engine: %s" % which_dns_module) rc = 412 return rc cobbler-2.4.1/cobbler/pxegen.py000066400000000000000000001327531227367477500164350ustar00rootroot00000000000000""" Builds out filesystem trees/data based on the object tree. This is the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import shlex import time import sys import glob import traceback import errno import string import socket import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system import item_image from utils import _ class PXEGen: """ Handles building out PXE stuff """ def __init__(self, config, logger): """ Constructor """ self.config = config self.logger = logger self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.images = config.images() self.templar = templar.Templar(config) self.bootloc = utils.tftpboot_location() # FIXME: not used anymore, can remove? self.verbose = False def copy_bootloaders(self): """ Copy bootloaders to the configured tftpboot directory NOTE: we support different arch's if defined in /etc/cobbler/settings. """ dst = self.bootloc grub_dst = os.path.join(dst, "grub") image_dst = os.path.join(dst, "images") # copy syslinux from one of two locations try: try: utils.copyfile_pattern('/var/lib/cobbler/loaders/pxelinux.0', dst, api=self.api, cache=False, logger=self.logger) utils.copyfile_pattern('/var/lib/cobbler/loaders/menu.c32', dst, api=self.api, cache=False, logger=self.logger) except: utils.copyfile_pattern('/usr/share/syslinux/pxelinux.0', dst, api=self.api, cache=False, logger=self.logger) utils.copyfile_pattern('/usr/share/syslinux/menu.c32', dst, api=self.api, cache=False, logger=self.logger) except: utils.copyfile_pattern('/usr/lib/syslinux/pxelinux.0', dst, api=self.api, cache=False, logger=self.logger) utils.copyfile_pattern('/usr/lib/syslinux/menu.c32', dst, api=self.api, cache=False, logger=self.logger) # copy memtest only if we find it utils.copyfile_pattern('/boot/memtest*', image_dst, require_match=False, api=self.api, cache=False, logger=self.logger) # copy elilo which we include for IA64 targets utils.copyfile_pattern('/var/lib/cobbler/loaders/elilo.efi', dst, require_match=False, api=self.api, cache=False, logger=self.logger) # copy yaboot which we include for PowerPC targets utils.copyfile_pattern('/var/lib/cobbler/loaders/yaboot', dst, require_match=False, api=self.api, cache=False, logger=self.logger) try: utils.copyfile_pattern('/usr/lib/syslinux/memdisk', dst, api=self.api, cache=False, logger=self.logger) except: utils.copyfile_pattern('/usr/share/syslinux/memdisk', dst, require_match=False, api=self.api, cache=False, logger=self.logger) # Copy gPXE/iPXE bootloader if it exists utils.copyfile_pattern('/usr/share/*pxe/undionly.kpxe', dst, require_match=False, api=self.api, cache=False, logger=self.logger) # Copy grub EFI bootloaders if possible: utils.copyfile_pattern('/var/lib/cobbler/loaders/grub*.efi', grub_dst, require_match=False, api=self.api, cache=False, logger=self.logger) def copy_images(self): """ Like copy_distros except for images. """ errors = list() for i in self.images: try: self.copy_single_image_files(i) except CX, e: errors.append(e) self.logger.error(e.value) # FIXME: using logging module so this ends up in cobbler.log? 
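    # A hedged illustration of what copy_single_distro_files() below produces
    # for a hypothetical distro "centos6-x86_64" when dirtree is the tftpboot
    # directory and symlinking is not allowed:
    #
    #   <tftpboot>/images/centos6-x86_64/vmlinuz
    #   <tftpboot>/images/centos6-x86_64/initrd.img
    #
    # Kernels and initrds referenced by remote URLs are not copied here; they
    # are passed through to koan instead (see the comment in the method).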
def copy_single_distro_files(self, d, dirtree, symlink_ok): distros = os.path.join(dirtree, "images") distro_dir = os.path.join(distros,d.name) utils.mkdir(distro_dir) kernel = utils.find_kernel(d.kernel) # full path initrd = utils.find_initrd(d.initrd) # full path if kernel is None: raise CX("kernel not found: %(file)s, distro: %(distro)s" % { "file" : d.kernel, "distro" : d.name }) if initrd is None: raise CX("initrd not found: %(file)s, distro: %(distro)s" % { "file" : d.initrd, "distro" : d.name }) # Kernels referenced by remote URL are passed through to koan directly, # no need for copying the kernel locally: if not utils.file_is_remote(kernel): b_kernel = os.path.basename(kernel) dst1 = os.path.join(distro_dir, b_kernel) utils.linkfile(kernel, dst1, symlink_ok=symlink_ok, api=self.api, logger=self.logger) if not utils.file_is_remote(initrd): b_initrd = os.path.basename(initrd) dst2 = os.path.join(distro_dir, b_initrd) utils.linkfile(initrd, dst2, symlink_ok=symlink_ok, api=self.api, logger=self.logger) def copy_single_image_files(self, img): images_dir = os.path.join(self.bootloc, "images2") filename = img.file if not os.path.exists(filename): # likely for virtual usage, cannot use return if not os.path.exists(images_dir): os.makedirs(images_dir) basename = os.path.basename(img.file) newfile = os.path.join(images_dir, img.name) utils.linkfile(filename, newfile, api=self.api, logger=self.logger) return True def write_all_system_files(self, system): profile = system.get_conceptual_parent() if profile is None: raise CX("system %(system)s references a missing profile %(profile)s" % { "system" : system.name, "profile" : system.profile}) distro = profile.get_conceptual_parent() image_based = False image = None if distro is None: if profile.COLLECTION_TYPE == "profile": raise CX("profile %(profile)s references a missing distro %(distro)s" % { "profile" : system.profile, "distro" : profile.distro}) else: image_based = True image = profile # hack: s390 generates files per system not per interface if not image_based and distro.arch.startswith("s390"): # Always write a system specific _conf and _parm file f2 = os.path.join(self.bootloc, "s390x", "s_%s" % system.name) cf = "%s_conf" % f2 pf = "%s_parm" % f2 template_cf = open("/etc/cobbler/pxe/s390x_conf.template") template_pf = open("/etc/cobbler/pxe/s390x_parm.template") blended = utils.blender(self.api, True, system) self.templar.render(template_cf, blended, cf) # FIXME: profiles also need this data! # FIXME: the _conf and _parm files are limited to 80 characters in length try: ipaddress = socket.gethostbyname_ex(blended["http_server"])[2][0] except socket.gaierror: ipaddress = blended["http_server"] kickstart_path = "http://%s/cblr/svc/op/ks/system/%s" % (ipaddress, system.name) # gather default kernel_options and default kernel_options_s390x kopts = blended.get("kernel_options","") hkopts = shlex.split(utils.hash_to_string(kopts)) blended["kickstart_expanded"] = "ks=%s" % kickstart_path blended["kernel_options"] = hkopts self.templar.render(template_pf, blended, pf) # Write system specific zPXE file if system.is_management_supported(): self.write_pxe_file(f2, system, profile, distro, distro.arch) else: # ensure the file doesn't exist utils.rmfile(f2) return pxe_metadata = {'pxe_menu_items': self.get_menu_items()['pxe'] } # generate one record for each described NIC .. 
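# ---------------------------------------------------------------------------
# Editor's note (not upstream code): the per-interface config filename comes
# from utils.get_config_filename().  For x86-style arches the file lands in
# pxelinux.cfg/, where pxelinux's own search order applies: it tries
# 01-<mac-with-dashes>, then the IP address in uppercase hex (shortened one
# digit at a time), and finally "default".  Whether cobbler emits the MAC or
# the hex-IP form for a given interface is decided by get_config_filename()
# and is assumed here, e.g.:
#
#   MAC 00:16:3e:12:34:56  ->  pxelinux.cfg/01-00-16-3e-12-34-56
#   IP  192.168.1.5        ->  pxelinux.cfg/C0A80105
# ---------------------------------------------------------------------------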
for (name,interface) in system.interfaces.iteritems(): ip = interface["ip_address"] f1 = utils.get_config_filename(system, interface=name) if f1 is None: self.logger.warning("invalid interface recorded for system (%s,%s)" % (system.name,name)) continue; if image_based: working_arch = image.arch else: working_arch = distro.arch if working_arch is None: raise "internal error, invalid arch supplied" # for tftp only ... grub_path = None if working_arch in [ "i386", "x86", "x86_64", "arm", "standard"]: # pxelinux wants a file named $name under pxelinux.cfg f2 = os.path.join(self.bootloc, "pxelinux.cfg", f1) # Only generating grub menus for these arch's: grub_path = os.path.join(self.bootloc, "grub", f1.upper()) elif working_arch == "ia64": # elilo expects files to be named "$name.conf" in the root # and can not do files based on the MAC address if ip is not None and ip != "": self.logger.warning("Warning: Itanium system object (%s) needs an IP address to PXE" % system.name) filename = "%s.conf" % utils.get_config_filename(system,interface=name) f2 = os.path.join(self.bootloc, filename) elif working_arch.startswith("ppc"): # Determine filename for system-specific yaboot.conf filename = "%s" % utils.get_config_filename(system, interface=name).lower() f2 = os.path.join(self.bootloc, "etc", filename) # Link to the yaboot binary f3 = os.path.join(self.bootloc, "ppc", filename) if os.path.lexists(f3): utils.rmfile(f3) os.symlink("../yaboot", f3) else: continue if system.is_management_supported(): if not image_based: self.write_pxe_file(f2, system, profile, distro, working_arch, metadata=pxe_metadata) if grub_path: self.write_pxe_file(grub_path, system, profile, distro, working_arch, format="grub") else: self.write_pxe_file(f2, system, None, None, working_arch, image=profile, metadata=pxe_metadata) else: # ensure the file doesn't exist utils.rmfile(f2) if grub_path: utils.rmfile(grub_path) def make_pxe_menu(self): self.make_s390_pseudo_pxe_menu() self.make_actual_pxe_menu() def make_s390_pseudo_pxe_menu(self): s390path = os.path.join(self.bootloc, "s390x") if not os.path.exists(s390path): utils.mkdir(s390path) profile_list = [profile for profile in self.profiles] image_list = [image for image in self.images] def sort_name(a,b): return cmp(a.name,b.name) profile_list.sort(sort_name) image_list.sort(sort_name) listfile = open(os.path.join(s390path, "profile_list"),"w+") for profile in profile_list: distro = profile.get_conceptual_parent() if distro is None: raise CX("profile is missing distribution: %s, %s" % (profile.name, profile.distro)) if distro.arch.startswith("s390"): listfile.write("%s\n" % profile.name) f2 = os.path.join(self.bootloc, "s390x", "p_%s" % profile.name) self.write_pxe_file(f2,None,profile,distro,distro.arch) cf = "%s_conf" % f2 pf = "%s_parm" % f2 template_cf = open("/etc/cobbler/pxe/s390x_conf.template") template_pf = open("/etc/cobbler/pxe/s390x_parm.template") blended = utils.blender(self.api, True, profile) self.templar.render(template_cf, blended, cf) # FIXME: profiles also need this data! 
# FIXME: the _conf and _parm files are limited to 80 characters in length try: ipaddress = socket.gethostbyname_ex(blended["http_server"])[2][0] except socket.gaierror: ipaddress = blended["http_server"] kickstart_path = "http://%s/cblr/svc/op/ks/profile/%s" % (ipaddress, profile.name) # gather default kernel_options and default kernel_options_s390x kopts = blended.get("kernel_options","") hkopts = shlex.split(utils.hash_to_string(kopts)) blended["kickstart_expanded"] = "ks=%s" % kickstart_path blended["kernel_options"] = hkopts self.templar.render(template_pf, blended, pf) listfile.close() def get_menu_items(self): """ Generates menu items for pxe and grub """ # sort the profiles profile_list = [profile for profile in self.profiles] def sort_name(a,b): return cmp(a.name,b.name) profile_list.sort(sort_name) # sort the images image_list = [image for image in self.images] image_list.sort(sort_name) # Build out menu items and append each to this master list, used for # the default menus: pxe_menu_items = "" grub_menu_items = "" # For now, profiles are the only items we want grub EFI boot menu entries for: for profile in profile_list: if not profile.enable_menu: # This profile has been excluded from the menu continue distro = profile.get_conceptual_parent() contents = self.write_pxe_file(filename=None, system=None, profile=profile, distro=distro, arch=distro.arch, include_header=False) if contents is not None: pxe_menu_items = pxe_menu_items + contents + "\n" grub_contents = self.write_pxe_file(filename=None, system=None, profile=profile, distro=distro, arch=distro.arch, include_header=False, format="grub") if grub_contents is not None: grub_menu_items = grub_menu_items + grub_contents + "\n" # image names towards the bottom for image in image_list: if os.path.exists(image.file): contents = self.write_pxe_file(filename=None, system=None, profile=None, distro=None, arch=image.arch, image=image) if contents is not None: pxe_menu_items = pxe_menu_items + contents + "\n" # if we have any memtest files in images, make entries for them # after we list the profiles memtests = glob.glob(self.bootloc + "/images/memtest*") if len(memtests) > 0: pxe_menu_items = pxe_menu_items + "\n\n" for memtest in glob.glob(self.bootloc + '/images/memtest*'): base = os.path.basename(memtest) contents = self.write_memtest_pxe("/%s" % base) pxe_menu_items = pxe_menu_items + contents + "\n" return {'pxe' : pxe_menu_items, 'grub' : grub_menu_items} def make_actual_pxe_menu(self): """ Generates both pxe and grub boot menus. """ # only do this if there is NOT a system named default. 
default = self.systems.find(name="default") if default is None: timeout_action = "local" else: timeout_action = default.profile menu_items = self.get_menu_items() # Write the PXE menu: metadata = { "pxe_menu_items" : menu_items['pxe'], "pxe_timeout_profile" : timeout_action} outfile = os.path.join(self.bootloc, "pxelinux.cfg", "default") template_src = open(os.path.join(self.settings.pxe_template_dir,"pxedefault.template")) template_data = template_src.read() self.templar.render(template_data, metadata, outfile, None) template_src.close() # Write the grub menu: metadata = { "grub_menu_items" : menu_items['grub'] } outfile = os.path.join(self.bootloc, "grub", "efidefault") template_src = open(os.path.join(self.settings.pxe_template_dir, "efidefault.template")) template_data = template_src.read() self.templar.render(template_data, metadata, outfile, None) template_src.close() def write_memtest_pxe(self,filename): """ Write a configuration file for memtest """ # FIXME: this should be handled via "cobbler image add" now that it is available, # though it would be nice if there was a less-manual way to add those as images. # just some random variables template = None metadata = {} buffer = "" template = os.path.join(self.settings.pxe_template_dir,"pxeprofile.template") # store variables for templating metadata["menu_label"] = "MENU LABEL %s" % os.path.basename(filename) metadata["profile_name"] = os.path.basename(filename) metadata["kernel_path"] = "/images/%s" % os.path.basename(filename) metadata["initrd_path"] = "" metadata["append_line"] = "" # get the template template_fh = open(template) template_data = template_fh.read() template_fh.close() # return results buffer = self.templar.render(template_data, metadata, None) return buffer def write_pxe_file(self, filename, system, profile, distro, arch, image=None, include_header=True, metadata=None, format="pxe"): """ Write a configuration file for the boot loader(s). More system-specific configuration may come in later, if so that would appear inside the system object in api.py NOTE: relevant to tftp and pseudo-PXE (s390) only ia64 is mostly the same as syslinux stuff, s390 is a bit short-circuited and simpler. All of it goes through the templating engine, see the templates in /etc/cobbler for more details Can be used for different formats, "pxe" (default) and "grub". 
""" if arch is None: raise "missing arch" if image and not os.path.exists(image.file): return None # nfs:// URLs or something, can't use for TFTP if metadata is None: metadata = {} (rval,settings) = utils.input_string_or_hash(self.settings.to_datastruct()) if rval: for key in settings.keys(): metadata[key] = settings[key] # --- # just some random variables template = None buffer = "" # --- kickstart_path = None kernel_path = None initrd_path = None img_path = None if image is None: # not image based, it's something normalish img_path = os.path.join("/images",distro.name) kernel_path = os.path.join("/images",distro.name,os.path.basename(distro.kernel)) initrd_path = os.path.join("/images",distro.name,os.path.basename(distro.initrd)) # Find the kickstart if we inherit from another profile if system: blended = utils.blender(self.api, True, system) else: blended = utils.blender(self.api, True, profile) kickstart_path = blended.get("kickstart","") else: # this is an image we are making available, not kernel+initrd if image.image_type == "direct": kernel_path = os.path.join("/images2",image.name) elif image.image_type == "memdisk": kernel_path = "/memdisk" initrd_path = os.path.join("/images2",image.name) else: # CD-ROM ISO or virt-clone image? We can't PXE boot it. kernel_path = None initrd_path = None if img_path is not None and not metadata.has_key("img_path"): metadata["img_path"] = img_path if kernel_path is not None and not metadata.has_key("kernel_path"): metadata["kernel_path"] = kernel_path if initrd_path is not None and not metadata.has_key("initrd_path"): metadata["initrd_path"] = initrd_path # --- # choose a template if system: if format == "grub": template = os.path.join(self.settings.pxe_template_dir, "grubsystem.template") else: # pxe if system.netboot_enabled: template = os.path.join(self.settings.pxe_template_dir,"pxesystem.template") if arch.startswith("s390"): template = os.path.join(self.settings.pxe_template_dir,"pxesystem_s390x.template") elif arch == "ia64": template = os.path.join(self.settings.pxe_template_dir,"pxesystem_ia64.template") elif arch.startswith("ppc"): template = os.path.join(self.settings.pxe_template_dir,"pxesystem_ppc.template") elif arch.startswith("arm"): template = os.path.join(self.settings.pxe_template_dir,"pxesystem_arm.template") elif distro and distro.os_version.startswith("esxi"): # ESXi uses a very different pxe method, using more files than # a standard kickstart and different options - so giving it a dedicated # PXE template makes more sense than shoe-horning it into the existing # templates template = os.path.join(self.settings.pxe_template_dir,"pxesystem_esxi.template") else: # local booting on ppc requires removing the system-specific dhcpd.conf filename if arch is not None and arch.startswith("ppc"): # Disable yaboot network booting for all interfaces on the system for (name,interface) in system.interfaces.iteritems(): filename = "%s" % utils.get_config_filename(system, interface=name).lower() # Remove symlink to the yaboot binary f3 = os.path.join(self.bootloc, "ppc", filename) if os.path.lexists(f3): utils.rmfile(f3) # Remove the interface-specific config file f3 = os.path.join(self.bootloc, "etc", filename) if os.path.lexists(f3): utils.rmfile(f3) # Yaboot/OF doesn't support booting locally once you've # booted off the network, so nothing left to do return None elif arch is not None and arch.startswith("s390"): template = os.path.join(self.settings.pxe_template_dir,"pxelocal_s390x.template") elif arch is not None and 
arch.startswith("ia64"): template = os.path.join(self.settings.pxe_template_dir,"pxelocal_ia64.template") else: template = os.path.join(self.settings.pxe_template_dir,"pxelocal.template") else: # not a system record, so this is a profile record or an image if arch.startswith("s390"): template = os.path.join(self.settings.pxe_template_dir,"pxeprofile_s390x.template") if arch.startswith("arm"): template = os.path.join(self.settings.pxe_template_dir,"pxeprofile_arm.template") elif format == "grub": template = os.path.join(self.settings.pxe_template_dir,"grubprofile.template") elif distro and distro.os_version.startswith("esxi"): # ESXi uses a very different pxe method, see comment above in the system section template = os.path.join(self.settings.pxe_template_dir,"pxeprofile_esxi.template") else: template = os.path.join(self.settings.pxe_template_dir,"pxeprofile.template") if kernel_path is not None: metadata["kernel_path"] = kernel_path if initrd_path is not None: metadata["initrd_path"] = initrd_path # generate the kernel options and append line: kernel_options = self.build_kernel_options(system, profile, distro, image, arch, kickstart_path) metadata["kernel_options"] = kernel_options if distro and distro.os_version.startswith("esxi") and filename is not None: append_line = "BOOTIF=%s" % (os.path.basename(filename)) elif metadata.has_key("initrd_path") and (not arch or arch not in ["ia64", "ppc", "ppc64", "arm"]): append_line = "append initrd=%s" % (metadata["initrd_path"]) else: append_line = "append " append_line = "%s%s" % (append_line, kernel_options) if arch.startswith("ppc") or arch.startswith("s390"): # remove the prefix "append" # TODO: this looks like it's removing more than append, really # not sure what's up here... append_line = append_line[7:] if distro and distro.os_version.startswith("xenserver620"): append_line = "%s" % (kernel_options) metadata["append_line"] = append_line # store variables for templating metadata["menu_label"] = "" if profile: if not arch in [ "ia64", "ppc", "ppc64", "s390", "s390x" ]: metadata["menu_label"] = "MENU LABEL %s" % profile.name metadata["profile_name"] = profile.name elif image: metadata["menu_label"] = "MENU LABEL %s" % image.name metadata["profile_name"] = image.name if system: metadata["system_name"] = system.name # get the template if kernel_path is not None: template_fh = open(template) template_data = template_fh.read() template_fh.close() else: # this is something we can't PXE boot template_data = "\n" # save file and/or return results, depending on how called. buffer = self.templar.render(template_data, metadata, None) if filename is not None: self.logger.info("generating: %s" % filename) fd = open(filename, "w") fd.write(buffer) fd.close() return buffer def build_kernel_options(self, system, profile, distro, image, arch, kickstart_path): """ Builds the full kernel options line. """ management_interface = None if system is not None: blended = utils.blender(self.api, False, system) # find the first management interface try: for intf in system.interfaces.keys(): if system.interfaces[intf]["management"]: management_interface = intf break except: # just skip this then pass elif profile is not None: blended = utils.blender(self.api, False, profile) else: blended = utils.blender(self.api, False, image) append_line = "" kopts = blended.get("kernel_options", dict()) # support additional initrd= entries in kernel options. 
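# ---------------------------------------------------------------------------
# Editor's note (not upstream code): kopts is the blended kernel_options
# hash, and utils.hash_to_string() flattens it into "key=value" pairs that
# get appended to the kernel command line, e.g.
#
#     {"console": "ttyS0,115200", "ksdevice": "bootif"}
#         ->  "console=ttyS0,115200 ksdevice=bootif"
#
# (pair ordering and the rendering of valueless keys are assumed here, not
# verified).  The breed-specific branches below then append the automatic
# installation pointer, e.g. "ks=<url>" for redhat/freebsd, "autoyast=<url>"
# for suse, or "url=<url>" plus netcfg options for debian/ubuntu.
# ---------------------------------------------------------------------------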
if "initrd" in kopts: append_line = ",%s" % kopts.pop("initrd") hkopts = utils.hash_to_string(kopts) append_line = "%s %s" % (append_line, hkopts) # kickstart path rewriting (get URLs for local files) if kickstart_path is not None and kickstart_path != "": # FIXME: need to make shorter rewrite rules for these URLs try: ipaddress = socket.gethostbyname_ex(blended["http_server"])[2][0] except socket.gaierror: ipaddress = blended["http_server"] if system is not None and kickstart_path.startswith("/"): kickstart_path = "http://%s/cblr/svc/op/ks/system/%s" % (ipaddress, system.name) elif kickstart_path.startswith("/"): kickstart_path = "http://%s/cblr/svc/op/ks/profile/%s" % (ipaddress, profile.name) if distro.breed is None or distro.breed == "redhat": append_line = "%s ks=%s" % (append_line, kickstart_path) gpxe = blended["enable_gpxe"] if gpxe: append_line = append_line.replace('ksdevice=bootif','ksdevice=${net0/mac}') elif distro.breed == "suse": append_line = "%s autoyast=%s" % (append_line, kickstart_path) elif distro.breed == "debian" or distro.breed == "ubuntu": append_line = "%s auto-install/enable=true priority=critical url=%s" % (append_line, kickstart_path) if management_interface: append_line += " netcfg/choose_interface=%s" % management_interface elif distro.breed == "freebsd": append_line = "%s ks=%s" % (append_line, kickstart_path) # rework kernel options for debian distros translations = { 'ksdevice':"interface" , 'lang':"locale" } for k,v in translations.iteritems(): append_line = append_line.replace("%s="%k,"%s="%v) # interface=bootif causes a failure append_line = append_line.replace("interface=bootif","") elif distro.breed == "vmware": if distro.os_version.find("esxi") != -1: # ESXi is very picky, it's easier just to redo the # entire append line here since append_line = " ks=%s %s" % (kickstart_path, hkopts) # ESXi likes even fewer options, so we remove them too append_line = append_line.replace("kssendmac","") else: append_line = "%s vmkopts=debugLogToSerial:1 mem=512M ks=%s" % \ (append_line, kickstart_path) # interface=bootif causes a failure append_line = append_line.replace("ksdevice=bootif","") elif distro.breed == "xen": if distro.os_version.find("xenserver620") != -1: img_path = os.path.join("/images",distro.name) append_line = "append %s/xen.gz dom0_max_vcpus=2 dom0_mem=752M com1=115200,8n1 console=com1,vga --- %s/vmlinuz xencons=hvc console=hvc0 console=tty0 install answerfile=%s --- %s/install.img" % (img_path,img_path,kickstart_path,img_path ) return append_line if distro is not None and (distro.breed in [ "debian", "ubuntu" ]): # Hostname is required as a parameter, the one in the preseed is # not respected, so calculate if we have one here. # We're trying: first part of FQDN in hostname field, then system # name, then profile name. # In Ubuntu, this is at least used for the volume group name when # using LVM. domain = "local.lan" if system is not None: if system.hostname is not None and system.hostname != "": # If this is a FQDN, grab the first bit hostname = system.hostname.split(".")[0] _domain = system.hostname.split(".")[1:] if _domain: domain = ".".join( _domain ) else: hostname = system.name else: # ubuntu at the very least does not like having underscores # in the hostname. 
# FIXME: Really this should remove all characters that are # forbidden in hostnames hostname = profile.name.replace("_","") # At least for debian deployments configured for DHCP networking # this values are not used, but specifying here avoids questions append_line = "%s hostname=%s" % (append_line, hostname) append_line = "%s domain=%s" % (append_line, domain) # A similar issue exists with suite name, as installer requires # the existence of "stable" in the dists directory append_line = "%s suite=%s" % (append_line, distro.os_version) # append necessary kernel args for arm architectures if arch is not None and arch.startswith("arm"): append_line = "%s fixrtc vram=48M omapfb.vram=0:24M" % append_line # do variable substitution on the append line # FIXME: should we just promote all of the ksmeta # variables instead of just the tree? if blended.has_key("ks_meta") and blended["ks_meta"].has_key("tree"): blended["tree"] = blended["ks_meta"]["tree"] append_line = self.templar.render(append_line,utils.flatten(blended),None) # FIXME - the append_line length limit is architecture specific if len(append_line) >= 255: self.logger.warning("warning: kernel option length exceeds 255") return append_line def write_templates(self,obj,write_file=False,path=None): """ A semi-generic function that will take an object with a template_files hash {source:destiation}, and generate a rendered file. The write_file option allows for generating of the rendered output without actually creating any files. The return value is a hash of the destination file names (after variable substitution is done) and the data in the file. """ self.logger.info("Writing template files for %s" % obj.name) results = {} try: templates = obj.template_files except: return results blended = utils.blender(self.api, False, obj) ksmeta = blended.get("ks_meta",{}) try: del blended["ks_meta"] except: pass blended.update(ksmeta) # make available at top level templates = blended.get("template_files",{}) try: del blended["template_files"] except: pass blended.update(templates) # make available at top level (success, templates) = utils.input_string_or_hash(templates) if not success: return results # FIXME: img_path and local_img_path should probably be moved # up into the blender function to ensure they're consistently # available to templates across the board if blended["distro_name"]: blended['img_path'] = os.path.join("/images",blended["distro_name"]) blended['local_img_path'] = os.path.join(utils.tftpboot_location(),"images",blended["distro_name"]) for template in templates.keys(): dest = templates[template] if dest is None: continue # Run the source and destination files through # templar first to allow for variables in the path template = self.templar.render(template, blended, None).strip() dest = os.path.normpath(self.templar.render(dest, blended, None).strip()) # Get the path for the destination output dest_dir = os.path.normpath(os.path.dirname(dest)) # If we're looking for a single template, skip if this ones # destination is not it. if not path is None and path != dest: continue # If we are writing output to a file, we allow files tobe # written into the tftpboot directory, otherwise force all # templated configs into the rendered directory to ensure that # a user granted cobbler privileges via sudo can't overwrite # arbitrary system files (This also makes cleanup easier). if os.path.isabs(dest_dir) and write_file: if dest_dir.find(utils.tftpboot_location()) != 0: raise CX(" warning: template destination (%s) is outside %s, skipping." 
% (dest_dir,utils.tftpboot_location())) continue else: dest_dir = os.path.join(self.settings.webdir, "rendered", dest_dir) dest = os.path.join(dest_dir, os.path.basename(dest)) if not os.path.exists(dest_dir): utils.mkdir(dest_dir) # Check for problems if not os.path.exists(template): raise CX("template source %s does not exist" % template) continue elif write_file and not os.path.isdir(dest_dir): raise CX("template destination (%s) is invalid" % dest_dir) continue elif write_file and os.path.exists(dest): raise CX("template destination (%s) already exists" % dest) continue elif write_file and os.path.isdir(dest): raise CX("template destination (%s) is a directory" % dest) continue elif template == "" or dest == "": raise CX("either the template source or destination was blank (unknown variable used?)" % dest) continue template_fh = open(template) template_data = template_fh.read() template_fh.close() buffer = self.templar.render(template_data, blended, None) results[dest] = buffer if write_file: self.logger.info("generating: %s" % dest) fd = open(dest, "w") fd.write(buffer) fd.close() return results def generate_gpxe(self,what,name): if what.lower() not in ("profile","system"): return "# gpxe is only valid for profiles and systems" distro = None if what == "profile": obj = self.api.find_profile(name=name) distro = obj.get_conceptual_parent() else: obj = self.api.find_system(name=name) distro = obj.get_conceptual_parent().get_conceptual_parent() netboot_enabled = obj.netboot_enabled # For multi-arch distros, the distro name in ks_mirror # may not contain the arch string, so we need to figure out # the path based on where the kernel is stored. We do this # because some distros base future downloads on the initial # URL passed in, so all of the files need to be at this location # (which is why we can't use the images link, which just contains # the kernel and initrd). 
ks_mirror_name = string.join(distro.kernel.split('/')[-2:-1],'') blended = utils.blender(self.api, False, obj) ksmeta = blended.get("ks_meta",{}) try: del blended["ks_meta"] except: pass blended.update(ksmeta) # make available at top level blended['distro'] = distro.name blended['ks_mirror_name'] = ks_mirror_name blended['kernel_name'] = os.path.basename(distro.kernel) blended['initrd_name'] = os.path.basename(distro.initrd) if what == "profile": blended['append_line'] = self.build_kernel_options(None,obj,distro,None,None,blended['kickstart']) else: blended['append_line'] = self.build_kernel_options(obj,None,distro,None,None,blended['kickstart']) template = None if distro.breed in ['redhat','debian','ubuntu','suse']: # all of these use a standard kernel/initrd setup so # they all use the same gPXE template template = os.path.join(self.settings.pxe_template_dir,"gpxe_%s_linux.template" % what.lower()) elif distro.breed == 'vmware': if distro.os_version == 'esx4': # older ESX is pretty much RHEL, so it uses the standard kernel/initrd setup template = os.path.join(self.settings.pxe_template_dir,"gpxe_%s_linux.template" % what.lower()) elif distro.os_version == 'esxi4': template = os.path.join(self.settings.pxe_template_dir,"gpxe_%s_esxi4.template" % what.lower()) elif distro.os_version.startswith('esxi5'): template = os.path.join(self.settings.pxe_template_dir,"gpxe_%s_esxi5.template" % what.lower()) elif distro.breed == 'freebsd': template = os.path.join(self.settings.pxe_template_dir,"gpxe_%s_freebsd.template" % what.lower()) if what == "system": if not netboot_enabled: template = os.path.join(self.settings.pxe_template_dir,"gpxe_%s_local.template" % what.lower()) if not template: return "# unsupported breed/os version" if not os.path.exists(template): return "# gpxe template not found for the %s named %s (filename=%s)" % (what,name,template) template_fh = open(template) template_data = template_fh.read() template_fh.close() return self.templar.render(template_data, blended, None) def generate_bootcfg(self,what,name): if what.lower() not in ("profile","system"): return "# bootcfg is only valid for profiles and systems" distro = None if what == "profile": obj = self.api.find_profile(name=name) distro = obj.get_conceptual_parent() else: obj = self.api.find_system(name=name) distro = obj.get_conceptual_parent().get_conceptual_parent() # For multi-arch distros, the distro name in ks_mirror # may not contain the arch string, so we need to figure out # the path based on where the kernel is stored. We do this # because some distros base future downloads on the initial # URL passed in, so all of the files need to be at this location # (which is why we can't use the images link, which just contains # the kernel and initrd). 
ks_mirror_name = string.join(distro.kernel.split('/')[-2:-1],'') blended = utils.blender(self.api, False, obj) ksmeta = blended.get("ks_meta",{}) try: del blended["ks_meta"] except: pass blended.update(ksmeta) # make available at top level blended['distro'] = ks_mirror_name # FIXME: img_path should probably be moved up into the # blender function to ensure they're consistently # available to templates across the board if obj.enable_gpxe: blended['img_path'] = 'http://%s:%s/cobbler/links/%s' % (self.settings.server,self.settings.http_port,distro.name) else: blended['img_path'] = os.path.join("/images",distro.name) template = os.path.join(self.settings.pxe_template_dir,"bootcfg_%s_%s.template" % (what.lower(),distro.os_version)) if not os.path.exists(template): return "# boot.cfg template not found for the %s named %s (filename=%s)" % (what,name,template) template_fh = open(template) template_data = template_fh.read() template_fh.close() return self.templar.render(template_data, blended, None) def generate_script(self,what,objname,script_name): if what == "profile": obj = self.api.find_profile(name=objname) else: obj = self.api.find_system(name=objname) if not obj: return "# %s named %s not found" % (what,objname) distro = obj.get_conceptual_parent() while distro.get_conceptual_parent(): distro = distro.get_conceptual_parent() blended = utils.blender(self.api, False, obj) ksmeta = blended.get("ks_meta",{}) try: del blended["ks_meta"] except: pass blended.update(ksmeta) # make available at top level # FIXME: img_path should probably be moved up into the # blender function to ensure they're consistently # available to templates across the board if obj.enable_gpxe: blended['img_path'] = 'http://%s:%s/cobbler/links/%s' % (self.settings.server,self.settings.http_port,distro.name) else: blended['img_path'] = os.path.join("/images",distro.name) template = os.path.normpath(os.path.join("/var/lib/cobbler/scripts",script_name)) if not os.path.exists(template): return "# script template %s not found" % script_name template_fh = open(template) template_data = template_fh.read() template_fh.close() return self.templar.render(template_data, blended, None, obj) cobbler-2.4.1/cobbler/remote.py000066400000000000000000003174601227367477500164420ustar00rootroot00000000000000""" Code for Cobbler's XMLRPC API Copyright 2007-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import sys, socket, time, os, errno, re, random, stat, string import base64 import SimpleXMLRPCServer from SocketServer import ThreadingMixIn import xmlrpclib import base64 import fcntl import traceback import glob from threading import Thread import api as cobbler_api import utils from cexceptions import * import item_distro import item_profile import item_system import item_repo import item_image import item_mgmtclass import item_package import item_file import clogger import pxegen import utils #from utils import * # BAD! from utils import _ import configgen EVENT_TIMEOUT = 7*24*60*60 # 1 week CACHE_TIMEOUT = 10*60 # 10 minutes # task codes EVENT_RUNNING = "running" EVENT_COMPLETE = "complete" EVENT_FAILED = "failed" # normal events EVENT_INFO = "notification" # for backwards compatibility with 1.6 and prev XMLRPC # do not remove! REMAP_COMPAT = { "ksmeta" : "ks_meta", "kopts" : "kernel_options", "kopts_post" : "kernel_options_post", "netboot-enabled" : "netboot_enabled" } class CobblerThread(Thread): def __init__(self,event_id,remote,logatron,options): Thread.__init__(self) self.event_id = event_id self.remote = remote self.logger = logatron if options is None: options = {} self.options = options def on_done(self): pass def run(self): time.sleep(1) try: rc = self._run(self) if not rc: self.remote._set_task_state(self,self.event_id,EVENT_FAILED) else: self.remote._set_task_state(self,self.event_id,EVENT_COMPLETE) self.on_done() return rc except: utils.log_exc(self.logger) self.remote._set_task_state(self,self.event_id,EVENT_FAILED) return False # ********************************************************************* # ********************************************************************* class CobblerXMLRPCInterface: """ This is the interface used for all XMLRPC methods, for instance, as used by koan or CobblerWeb. Most read-write operations require a token returned from "login". Read operations do not. """ def __init__(self,api): """ Constructor. Requires a Cobbler API handle. """ self.api = api self.logger = self.api.logger self.token_cache = {} self.object_cache = {} self.timestamp = self.api.last_modified_time() self.events = {} self.shared_secret = utils.get_shared_secret() random.seed(time.time()) self.translator = utils.Translator(keep=string.printable) self.pxegen = pxegen.PXEGen(api._config,self.logger) def check(self, token): """ Returns a list of all the messages/warnings that are things that admin may want to correct about the configuration of the cobbler server. This has nothing to do with "check_access" which is an auth/authz function in the XMLRPC API. """ self.check_access(token, "check") return self.api.check(logger=self.logger) def background_buildiso(self, options, token): """ Generates an ISO in /var/www/cobbler/pub that can be used to install profiles without using PXE. """ # FIXME: better use webdir from the settings? 
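# ---------------------------------------------------------------------------
# Client-side sketch (editor's addition): how a remote caller is expected to
# drive this interface.  The /cobbler_api URL and the login() signature are
# assumptions based on the usual cobbler XMLRPC endpoint, not taken from the
# code in this file; get_task_status() returns the event list whose third
# element is the state ("running"/"complete"/"failed").

import time
import xmlrpclib

server = xmlrpclib.Server("http://cobbler.example.org/cobbler_api")
token  = server.login("cobbler", "cobbler")            # required for writes
tid    = server.background_sync({"verbose": True}, token)
while server.get_task_status(tid)[2] == "running":
    time.sleep(2)
print server.get_event_log(tid)                        # task log, or "?" if none
# ---------------------------------------------------------------------------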
webdir = "/var/www/cobbler/" if os.path.exists("/srv/www"): webdir = "/srv/www/cobbler/" def runner(self): return self.remote.api.build_iso( self.options.get("iso",webdir+"/pub/generated.iso"), self.options.get("profiles",None), self.options.get("systems",None), self.options.get("buildisodir",None), self.options.get("distro",None), self.options.get("standalone",False), self.options.get("source",None), self.options.get("exclude_dns",False), self.options.get("mkisofs_opts",None), self.logger ) def on_done(self): if self.options.get("iso","") == webdir+"/pub/generated.iso": msg = "ISO now available for download" self.remote._new_event(msg) return self.__start_task(runner, token, "buildiso", "Build Iso", options, on_done) def background_aclsetup(self, options, token): def runner(self): return self.remote.api.acl_config( self.options.get("adduser",None), self.options.get("addgroup",None), self.options.get("removeuser",None), self.options.get("removegroup",None), self.logger ) return self.__start_task(runner, token, "aclsetup", "(CLI) ACL Configuration", options) def background_dlcontent(self, options, token): """ Download bootloaders and other support files. """ def runner(self): return self.remote.api.dlcontent(self.options.get("force",False), self.logger) return self.__start_task(runner, token, "get_loaders", "Download Bootloader Content", options) def background_sync(self, options, token): def runner(self): return self.remote.api.sync(self.options.get("verbose",False),logger=self.logger) return self.__start_task(runner, token, "sync", "Sync", options) def background_hardlink(self, options, token): def runner(self): return self.remote.api.hardlink(logger=self.logger) return self.__start_task(runner, token, "hardlink", "Hardlink", options) def background_validateks(self, options, token): def runner(self): return self.remote.api.validateks(logger=self.logger) return self.__start_task(runner, token, "validateks", "Kickstart Validation", options) def background_replicate(self, options, token): def runner(self): # FIXME: defaults from settings here should come from views, fix in views.py return self.remote.api.replicate( self.options.get("master", None), self.options.get("distro_patterns", ""), self.options.get("profile_patterns", ""), self.options.get("system_patterns", ""), self.options.get("repo_patterns", ""), self.options.get("image_patterns", ""), self.options.get("mgmtclass_patterns", ""), self.options.get("package_patterns", ""), self.options.get("file_patterns", ""), self.options.get("prune", False), self.options.get("omit_data", False), self.options.get("sync_all", False), self.options.get("use_ssl", False), self.logger ) return self.__start_task(runner, token, "replicate", "Replicate", options) def background_import(self, options, token): def runner(self): return self.remote.api.import_tree( self.options.get("path", None), self.options.get("name", None), self.options.get("available_as", None), self.options.get("kickstart_file", None), self.options.get("rsync_flags",None), self.options.get("arch",None), self.options.get("breed", None), self.options.get("os_version", None), self.logger ) return self.__start_task(runner, token, "import", "Media import", options) def background_reposync(self, options, token): def runner(self): # NOTE: WebUI passes in repos here, CLI passes only: repos = options.get("repos", []) only = options.get("only", None) if only is not None: repos = [ only ] nofail = options.get("nofail", len(repos) > 0) if len(repos) > 0: for name in repos: 
self.remote.api.reposync(tries=self.options.get("tries", 3), name=name, nofail=nofail, logger=self.logger) else: self.remote.api.reposync(tries=self.options.get("tries",3), name=None, nofail=nofail, logger=self.logger) return True return self.__start_task(runner, token, "reposync", "Reposync", options) def background_power_system(self, options, token): def runner(self): for x in self.options.get("systems",[]): try: object_id = self.remote.get_system_handle(x,token) self.remote.power_system(object_id,self.options.get("power",""),token,logger=self.logger) except: self.logger.warning("failed to execute power task on %s" % str(x)) return True self.check_access(token, "power") return self.__start_task(runner, token, "power", "Power management (%s)" % options.get("power",""), options) def background_signature_update(self, options, token): def runner(self): return self.remote.api.signature_update(self.logger) self.check_access(token, "sigupdate") return self.__start_task(runner, token, "sigupdate", "Updating Signatures", options) def get_events(self, for_user=""): """ Returns a hash(key=event id) = [ statetime, name, state, [read_by_who] ] If for_user is set to a string, it will only return events the user has not seen yet. If left unset, it will return /all/ events. """ # return only the events the user has not seen self.events_filtered = {} for (k,x) in self.events.iteritems(): if for_user in x[3]: pass else: self.events_filtered[k] = x # mark as read so user will not get events again if for_user is not None and for_user != "": for (k,x) in self.events.iteritems(): if for_user in x[3]: pass else: self.events[k][3].append(for_user) return self.events_filtered def get_event_log(self,event_id): """ Returns the contents of a task log. Events that are not task-based do not have logs. """ event_id = str(event_id).replace("..","").replace("/","") path = "/var/log/cobbler/tasks/%s.log" % event_id self._log("getting log for %s" % event_id) if os.path.exists(path): fh = open(path, "r") data = str(fh.read()) data = self.translator(data) fh.close() return data else: return "?" def __generate_event_id(self,optype): t = time.time() (year, month, day, hour, minute, second, weekday, julian, dst) = time.localtime() return "%04d-%02d-%02d_%02d%02d%02d_%s" % (year,month,day,hour,minute,second,optype) def _new_event(self, name): event_id = self.__generate_event_id("event") event_id = str(event_id) self.events[event_id] = [ float(time.time()), str(name), EVENT_INFO, [] ] def __start_task(self, thr_obj_fn, token, role_name, name, args, on_done=None): """ Starts a new background task. token -- token from login() call, all tasks require tokens role_name -- used to check token against authn/authz layers thr_obj_fn -- function handle to run in a background thread name -- display name to show in logs/events args -- usually this is a single hash, containing options on_done -- an optional second function handle to run after success (and only success) Returns a task id. 
""" self.check_access(token, role_name) event_id = self.__generate_event_id(role_name) # use short form for logfile suffix event_id = str(event_id) self.events[event_id] = [ float(time.time()), str(name), EVENT_RUNNING, [] ] self._log("start_task(%s); event_id(%s)"%(name,event_id)) logatron = clogger.Logger("/var/log/cobbler/tasks/%s.log" % event_id) thr_obj = CobblerThread(event_id,self,logatron,args) on_done_type = type(thr_obj.on_done) thr_obj._run = thr_obj_fn if on_done is not None: thr_obj.on_done = on_done_type(on_done, thr_obj, CobblerThread) thr_obj.start() return event_id def _set_task_state(self,thread_obj,event_id,new_state): event_id = str(event_id) if self.events.has_key(event_id): self.events[event_id][2] = new_state self.events[event_id][3] = [] # clear the list of who has read it if thread_obj is not None: if new_state == EVENT_COMPLETE: thread_obj.logger.info("### TASK COMPLETE ###") if new_state == EVENT_FAILED: thread_obj.logger.error("### TASK FAILED ###") def get_task_status(self, event_id): event_id = str(event_id) if self.events.has_key(event_id): return self.events[event_id] else: raise CX("no event with that id") def __sorter(self,a,b): """ Helper function to sort two datastructure representations of cobbler objects by name. """ return cmp(a["name"],b["name"]) def last_modified_time(self, token=None): """ Return the time of the last modification to any object. Used to verify from a calling application that no cobbler objects have changed since last check. """ return self.api.last_modified_time() def update(self, token=None): """ Deprecated method. Now does nothing. """ return True def ping(self): """ Deprecated method. Now does nothing. """ return True def get_user_from_token(self,token): """ Given a token returned from login, return the username that logged in with it. """ if not self.token_cache.has_key(token): raise CX("invalid token: %s" % token) else: return self.token_cache[token][1] def _log(self,msg,user=None,token=None,name=None,object_id=None,attribute=None,debug=False,error=False): """ Helper function to write data to the log file from the XMLRPC remote implementation. Takes various optional parameters that should be supplied when known. """ # add the user editing the object, if supplied m_user = "?" if user is not None: m_user = user if token is not None: try: m_user = self.get_user_from_token(token) except: # invalid or expired token? m_user = "???" msg = "REMOTE %s; user(%s)" % (msg, m_user) if name is not None: msg = "%s; name(%s)" % (msg, name) if object_id is not None: msg = "%s; object_id(%s)" % (msg, object_id) # add any attributes being modified, if any if attribute: msg = "%s; attribute(%s)" % (msg, attribute) # log to the correct logger if error: logger = self.logger.error elif debug: logger = self.logger.debug else: logger = self.logger.info logger(msg) def __sort(self,data,sort_field=None): """ Helper function used by the various find/search functions to return object representations in order. 
""" sort_fields=["name"] sort_rev=False if sort_field is not None: if sort_field.startswith("!"): sort_field=sort_field[1:] sort_rev=True sort_fields.insert(0,sort_field) sortdata=[(x.sort_key(sort_fields),x) for x in data] if sort_rev: sortdata.sort(lambda a,b:cmp(b,a)) else: sortdata.sort() return [x for (key, x) in sortdata] def __paginate(self,data,page=None,items_per_page=None,token=None): """ Helper function to support returning parts of a selection, for example, for use in a web app where only a part of the results are to be presented on each screen. """ default_page = 1 default_items_per_page = 25 try: page = int(page) if page < 1: page = default_page except: page = default_page try: items_per_page = int(items_per_page) if items_per_page <= 0: items_per_page = default_items_per_page except: items_per_page = default_items_per_page num_items = len(data) num_pages = ((num_items-1)/items_per_page)+1 if num_pages==0: num_pages=1 if page>num_pages: page=num_pages start_item = (items_per_page * (page-1)) end_item = start_item + items_per_page if start_item > num_items: start_item = num_items - 1 if end_item > num_items: end_item = num_items data = data[start_item:end_item] if page > 1: prev_page = page - 1 else: prev_page = None if page < num_pages: next_page = page + 1 else: next_page = None return (data,{ 'page' : page, 'prev_page' : prev_page, 'next_page' : next_page, 'pages' : range(1,num_pages+1), 'num_pages' : num_pages, 'num_items' : num_items, 'start_item' : start_item, 'end_item' : end_item, 'items_per_page' : items_per_page, 'items_per_page_list' : [10,20,50,100,200,500], }) def __get_object(self, object_id): """ Helper function. Given an object id, return the actual object. """ if object_id.startswith("___NEW___"): return self.object_cache[object_id][1] (otype, oname) = object_id.split("::",1) return self.api.get_item(otype,oname) def get_item(self, what, name, flatten=False): """ Returns a hash describing a given object. what -- "distro", "profile", "system", "image", "repo", etc name -- the object name to retrieve flatten -- reduce hashes to string representations (True/False) """ self._log("get_item(%s,%s)"%(what,name)) item=self.api.get_item(what,name) if item is not None: item=item.to_datastruct() if flatten: item = utils.flatten(item) return self.xmlrpc_hacks(item) def get_distro(self,name,flatten=False,token=None,**rest): return self.get_item("distro",name,flatten=flatten) def get_profile(self,name,flatten=False,token=None,**rest): return self.get_item("profile",name,flatten=flatten) def get_system(self,name,flatten=False,token=None,**rest): return self.get_item("system",name,flatten=flatten) def get_repo(self,name,flatten=False,token=None,**rest): return self.get_item("repo",name,flatten=flatten) def get_image(self,name,flatten=False,token=None,**rest): return self.get_item("image",name,flatten=flatten) def get_mgmtclass(self,name,flatten=False,token=None,**rest): return self.get_mgmtclass("mgmtclass",name,flatten=flatten) def get_package(self,name,flatten=False,token=None,**rest): return self.get_package("package",name,flatten=flatten) def get_file(self,name,flatten=False,token=None,**rest): return self.get_file("file",name,flatten=flatten) def get_items(self, what): """ Returns a list of hashes. what is the name of a cobbler object type, as described for get_item. Individual list elements are the same for get_item. """ # FIXME: is the xmlrpc_hacks method still required ? 
item = [x.to_datastruct() for x in self.api.get_items(what)] return self.xmlrpc_hacks(item) def get_item_names(self, what): """ Returns a list of object names (keys) for the given object type. This is just like get_items, but transmits less data. """ return [x.name for x in self.api.get_items(what)] def get_distros(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("distro") def get_profiles(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("profile") def get_systems(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("system") def get_repos(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("repo") def get_images(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("image") def get_mgmtclasses(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("mgmtclass") def get_packages(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("package") def get_files(self,page=None,results_per_page=None,token=None,**rest): return self.get_items("file") def find_items(self, what, criteria=None,sort_field=None,expand=True): """ Returns a list of hashes. Works like get_items but also accepts criteria as a hash to search on. Example: { "name" : "*.example.org" } Wildcards work as described by 'pydoc fnmatch'. """ self._log("find_items(%s); criteria(%s); sort(%s)" % (what,criteria,sort_field)) items = self.api.find_items(what,criteria=criteria) items = self.__sort(items,sort_field) if not expand: items = [x.name for x in items] else: items = [x.to_datastruct() for x in items] return self.xmlrpc_hacks(items) def find_distro(self,criteria={},expand=False,token=None,**rest): return self.find_items("distro",criteria,expand=expand) def find_profile(self,criteria={},expand=False,token=None,**rest): return self.find_items("profile",criteria,expand=expand) def find_system(self,criteria={},expand=False,token=None,**rest): return self.find_items("system",criteria,expand=expand) def find_repo(self,criteria={},expand=False,token=None,**rest): return self.find_items("repo",criteria,expand=expand) def find_image(self,criteria={},expand=False,token=None,**rest): return self.find_items("image",criteria,expand=expand) def find_mgmtclass(self,criteria={},expand=False,token=None,**rest): return self.find_items("mgmtclass",criteria,expand=expand) def find_package(self,criteria={},expand=False,token=None,**rest): return self.find_items("package",criteria,expand=expand) def find_file(self,criteria={},expand=False,token=None,**rest): return self.find_items("file",criteria,expand=expand) def find_items_paged(self, what, criteria=None, sort_field=None, page=None, items_per_page=None, token=None): """ Returns a list of hashes as with find_items but additionally supports returning just a portion of the total list, for instance in supporting a web app that wants to show a limited amount of items per page. """ # FIXME: make token required for all logging calls self._log("find_items_paged(%s); criteria(%s); sort(%s)" % (what,criteria,sort_field), token=token) items = self.api.find_items(what,criteria=criteria) items = self.__sort(items,sort_field) (items,pageinfo) = self.__paginate(items,page,items_per_page) items = [x.to_datastruct() for x in items] return self.xmlrpc_hacks({ 'items' : items, 'pageinfo' : pageinfo }) def has_item(self,what,name,token=None): """ Returns True if a given collection has an item with a given name, otherwise returns False. 
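# ---------------------------------------------------------------------------
# Client-side sketch of the find/paginate calls above (editor's addition;
# the endpoint URL and the object names are placeholders).  Criteria use
# fnmatch-style wildcards and a leading "!" on sort_field reverses the sort.

import xmlrpclib

server  = xmlrpclib.Server("http://cobbler.example.org/cobbler_api")
names   = server.find_system({"name": "*.example.org"})           # expand=False, names only
systems = server.find_items("system", {"name": "db*"}, "!name", True)
paged   = server.find_items_paged("system", {}, "name", 1, 25)    # {'items': [...], 'pageinfo': {...}}
# ---------------------------------------------------------------------------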
""" self._log("has_item(%s)"%what,token=token,name=name) found = self.api.get_item(what,name) if found is None: return False else: return True def get_item_handle(self,what,name,token=None): """ Given the name of an object (or other search parameters), return a reference (object id) that can be used with modify_* functions or save_* functions to manipulate that object. """ found = self.api.get_item(what,name) if found is None: raise CX("internal error, unknown %s name %s" % (what,name)) return "%s::%s" % (what,found.name) def get_distro_handle(self,name,token): return self.get_item_handle("distro",name,token) def get_profile_handle(self,name,token): return self.get_item_handle("profile",name,token) def get_system_handle(self,name,token): return self.get_item_handle("system",name,token) def get_repo_handle(self,name,token): return self.get_item_handle("repo",name,token) def get_image_handle(self,name,token): return self.get_item_handle("image",name,token) def get_mgmtclass_handle(self,name,token): return self.get_item_handle("mgmtclass",name,token) def get_package_handle(self,name,token): return self.get_item_handle("package",name,token) def get_file_handle(self,name,token): return self.get_item_handle("file",name,token) def remove_item(self,what,name,token,recursive=True): """ Deletes an item from a collection. Note that this requires the name of the distro, not an item handle. """ self._log("remove_item (%s, recursive=%s)" % (what,recursive),name=name,token=token) self.check_access(token, "remove_item", name) return self.api.remove_item(what,name,delete=True,with_triggers=True,recursive=recursive) def remove_distro(self,name,token,recursive=1): return self.remove_item("distro",name,token,recursive) def remove_profile(self,name,token,recursive=1): return self.remove_item("profile",name,token,recursive) def remove_system(self,name,token,recursive=1): return self.remove_item("system",name,token,recursive) def remove_repo(self,name,token,recursive=1): return self.remove_item("repo",name,token,recursive) def remove_image(self,name,token,recursive=1): return self.remove_item("image",name,token,recursive) def remove_mgmtclass(self,name,token,recursive=1): return self.remove_item("mgmtclass",name,token,recursive) def remove_package(self,name,token,recursive=1): return self.remove_item("package",name,token,recursive) def remove_file(self,name,token,recursive=1): return self.remove_item("file",name,token,recursive) def copy_item(self,what,object_id,newname,token=None): """ Creates a new object that matches an existing object, as specified by an id. 
""" self._log("copy_item(%s)" % what,object_id=object_id,token=token) self.check_access(token,"copy_%s" % what) obj = self.__get_object(object_id) return self.api.copy_item(what,obj,newname) def copy_distro(self,object_id,newname,token=None): return self.copy_item("distro",object_id,newname,token) def copy_profile(self,object_id,newname,token=None): return self.copy_item("profile",object_id,newname,token) def copy_system(self,object_id,newname,token=None): return self.copy_item("system",object_id,newname,token) def copy_repo(self,object_id,newname,token=None): return self.copy_item("repo",object_id,newname,token) def copy_image(self,object_id,newname,token=None): return self.copy_item("image",object_id,newname,token) def copy_mgmtclass(self,object_id,newname,token=None): return self.copy_item("mgmtclass",object_id,newname,token) def copy_package(self,object_id,newname,token=None): return self.copy_item("package",object_id,newname,token) def copy_file(self,object_id,newname,token=None): return self.copy_item("file",object_id,newname,token) def rename_item(self,what,object_id,newname,token=None): """ Renames an object specified by object_id to a new name. """ self._log("rename_item(%s)" % what,object_id=object_id,token=token) obj = self.__get_object(object_id) return self.api.rename_item(what,obj,newname) def rename_distro(self,object_id,newname,token=None): return self.rename_item("distro",object_id,newname,token) def rename_profile(self,object_id,newname,token=None): return self.rename_item("profile",object_id,newname,token) def rename_system(self,object_id,newname,token=None): return self.rename_item("system",object_id,newname,token) def rename_repo(self,object_id,newname,token=None): return self.rename_item("repo",object_id,newname,token) def rename_image(self,object_id,newname,token=None): return self.rename_item("image",object_id,newname,token) def rename_mgmtclass(self,object_id,newname,token=None): return self.rename_item("mgmtclass",object_id,newname,token) def rename_package(self,object_id,newname,token=None): return self.rename_item("package",object_id,newname,token) def rename_file(self,object_id,newname,token=None): return self.rename_item("file",object_id,newname,token) def new_item(self,what,token,is_subobject=False): """ Creates a new (unconfigured) object, returning an object handle that can be used with modify_* methods and then finally save_* methods. The handle only exists in memory until saved. 
"what" specifies the type of object: distro, profile, system, repo, or image """ self._log("new_item(%s)"%what,token=token) self.check_access(token,"new_%s"%what) if what == "distro": d = item_distro.Distro(self.api._config,is_subobject=is_subobject) elif what == "profile": d = item_profile.Profile(self.api._config,is_subobject=is_subobject) elif what == "system": d = item_system.System(self.api._config,is_subobject=is_subobject) elif what == "repo": d = item_repo.Repo(self.api._config,is_subobject=is_subobject) elif what == "image": d = item_image.Image(self.api._config,is_subobject=is_subobject) elif what == "mgmtclass": d = item_mgmtclass.Mgmtclass(self.api._config,is_subobject=is_subobject) elif what == "package": d = item_package.Package(self.api._config,is_subobject=is_subobject) elif what == "file": d = item_file.File(self.api._config,is_subobject=is_subobject) else: raise CX("internal error, collection name is %s" % what) key = "___NEW___%s::%s" % (what,self.__get_random(25)) self.object_cache[key] = (time.time(), d) return key def new_distro(self,token): return self.new_item("distro",token) def new_profile(self,token): return self.new_item("profile",token) def new_subprofile(self,token): return self.new_item("profile",token,is_subobject=True) def new_system(self,token): return self.new_item("system",token) def new_repo(self,token): return self.new_item("repo",token) def new_image(self,token): return self.new_item("image",token) def new_mgmtclass(self,token): return self.new_item("mgmtclass",token) def new_package(self,token): return self.new_item("package",token) def new_file(self,token): return self.new_item("file",token) def modify_item(self,what,object_id,attribute,arg,token): """ Adjusts the value of a given field, specified by 'what' on a given object id. Allows modification of certain attributes on newly created or existing distro object handle. """ self._log("modify_item(%s)" % what,object_id=object_id,attribute=attribute,token=token) obj = self.__get_object(object_id) self.check_access(token, "modify_%s"%what, obj, attribute) # support 1.6 field name exceptions for backwards compat attribute = REMAP_COMPAT.get(attribute,attribute) method = obj.remote_methods().get(attribute, None) if method == None: # it's ok, the CLI will send over lots of junk we can't process # (like newname or in-place) so just go with it. 
return False # raise CX("object has no method: %s" % attribute) return method(arg) def modify_distro(self,object_id,attribute,arg,token): return self.modify_item("distro",object_id,attribute,arg,token) def modify_profile(self,object_id,attribute,arg,token): return self.modify_item("profile",object_id,attribute,arg,token) def modify_system(self,object_id,attribute,arg,token): return self.modify_item("system",object_id,attribute,arg,token) def modify_image(self,object_id,attribute,arg,token): return self.modify_item("image",object_id,attribute,arg,token) def modify_repo(self,object_id,attribute,arg,token): return self.modify_item("repo",object_id,attribute,arg,token) def modify_mgmtclass(self,object_id,attribute,arg,token): return self.modify_item("mgmtclass",object_id,attribute,arg,token) def modify_package(self,object_id,attribute,arg,token): return self.modify_item("package",object_id,attribute,arg,token) def modify_file(self,object_id,attribute,arg,token): return self.modify_item("file",object_id,attribute,arg,token) def modify_setting(self,setting_name,value,token): try: self.api.settings().set(setting_name, value) return 0 except: return 1 def __is_interface_field(self,f): if f in ("delete_interface","rename_interface"): return True k = "*%s" % f for x in item_system.FIELDS: if k == x[0]: return True return False def xapi_object_edit(self,object_type,object_name,edit_type,attributes,token): """ Extended API: New style object manipulations, 2.0 and later Prefered over using new_, modify_, save_ directly. Though we must preserve the old ways for backwards compatibility these cause much less XMLRPC traffic. edit_type - One of 'add', 'rename', 'copy', 'remove' Ex: xapi_object_edit("distro","el5","add",{"kernel":"/tmp/foo","initrd":"/tmp/foo"},token) """ if object_name.strip() == "": raise CX("xapi_object_edit() called without an object name") self.check_access(token,"xedit_%s" % object_type, token) if edit_type == "add" and not attributes.has_key("clobber"): handle = 0 try: handle = self.get_item_handle(object_type, object_name) except: pass if handle != 0: raise CX("it seems unwise to overwrite this object, try 'edit'") if edit_type == "add": is_subobject = object_type == "profile" and "parent" in attributes if object_type == "system": if "profile" not in attributes and "image" not in attributes: raise CX("You must specify a --profile or --image for new systems") handle = self.new_item(object_type, token, is_subobject=is_subobject) else: handle = self.get_item_handle(object_type, object_name) if edit_type == "rename": self.rename_item(object_type, handle, attributes["newname"], token) handle = self.get_item_handle(object_type, attributes["newname"], token) if edit_type == "copy": self.copy_item(object_type, handle, attributes["newname"], token) handle = self.get_item_handle(object_type, attributes["newname"], token) if edit_type in [ "copy", "rename" ]: del attributes["name"] del attributes["newname"] if edit_type != "remove": # FIXME: this doesn't know about interfaces yet! # if object type is system and fields add to hash and then # modify when done, rather than now. imods = {} # FIXME: needs to know about how to delete interfaces too! for (k,v) in attributes.iteritems(): if object_type != "system" or not self.__is_interface_field(k): # in place modifications allow for adding a key/value pair while keeping other k/v # pairs intact. 
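                    # As a hypothetical illustration of that merge (the key names are
                    # examples only): if an object's ks_meta is {"foo": "1", "bar": "2"}
                    # and the client sends --in-place --ksmeta="baz=3 ~bar", the result
                    # is {"foo": "1", "baz": "3"} -- plain keys are added or overwritten
                    # and keys prefixed with "~" are deleted, as implemented just below.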
if k in ["ks_meta","kernel_options","kernel_options_post","template_files","boot_files","fetchable_files","params"] and attributes.has_key("in_place") and attributes["in_place"]: details = self.get_item(object_type,object_name) v2 = details[k] (ok, input) = utils.input_string_or_hash(v) for (a,b) in input.iteritems(): if a.startswith("~") and len(a) > 1: del v2[a[1:]] else: v2[a] = b v = v2 self.modify_item(object_type,handle,k,v,token) else: modkey = "%s-%s" % (k, attributes.get("interface","")) imods[modkey] = v if object_type == "system": if not attributes.has_key("delete_interface") and not attributes.has_key("rename_interface"): self.modify_system(handle, 'modify_interface', imods, token) elif attributes.has_key("delete_interface"): self.modify_system(handle, 'delete_interface', attributes.get("interface", ""), token) elif attributes.has_key("rename_interface"): ifargs = [attributes.get("interface",""),attributes.get("rename_interface","")] self.modify_system(handle, 'rename_interface', ifargs, token) else: recursive = attributes.get("recursive",False) return self.remove_item(object_type, object_name, token, recursive=recursive) # FIXME: use the bypass flag or not? return self.save_item(object_type, handle, token) def save_item(self,what,object_id,token,editmode="bypass"): """ Saves a newly created or modified object to disk. Calling save is required for any changes to persist. """ self._log("save_item(%s)" % what,object_id=object_id,token=token) obj = self.__get_object(object_id) self.check_access(token,"save_%s"%what,obj) if editmode == "new": rc = self.api.add_item(what,obj,check_for_duplicate_names=True) else: rc = self.api.add_item(what,obj) return rc def save_distro(self,object_id,token,editmode="bypass"): return self.save_item("distro",object_id,token,editmode=editmode) def save_profile(self,object_id,token,editmode="bypass"): return self.save_item("profile",object_id,token,editmode=editmode) def save_system(self,object_id,token,editmode="bypass"): return self.save_item("system",object_id,token,editmode=editmode) def save_image(self,object_id,token,editmode="bypass"): return self.save_item("image",object_id,token,editmode=editmode) def save_repo(self,object_id,token,editmode="bypass"): return self.save_item("repo",object_id,token,editmode=editmode) def save_mgmtclass(self,object_id,token,editmode="bypass"): return self.save_item("mgmtclass",object_id,token,editmode=editmode) def save_package(self,object_id,token,editmode="bypass"): return self.save_item("package",object_id,token,editmode=editmode) def save_file(self,object_id,token,editmode="bypass"): return self.save_item("file",object_id,token,editmode=editmode) def get_kickstart_templates(self,token=None,**rest): """ Returns all of the kickstarts that are in use by the system. """ self._log("get_kickstart_templates",token=token) #self.check_access(token, "get_kickstart_templates") return utils.get_kickstart_templates(self.api) def get_snippets(self,token=None,**rest): """ Returns all the kickstart snippets. 
""" self._log("get_snippets",token=token) # FIXME: settings.snippetsdir should be used here return self.__get_sub_snippets("/var/lib/cobbler/snippets") def __get_sub_snippets(self, path): results = [] files = glob.glob(os.path.join(path,"*")) for f in files: if os.path.isdir(f) and not os.path.islink(f): results += self.__get_sub_snippets(f) elif not os.path.islink(f): results.append(f) results.sort() return results def is_kickstart_in_use(self,ks,token=None,**rest): self._log("is_kickstart_in_use",token=token) for x in self.api.profiles(): if x.kickstart is not None and x.kickstart == ks: return True for x in self.api.systems(): if x.kickstart is not None and x.kickstart == ks: return True return False def generate_kickstart(self,profile=None,system=None,REMOTE_ADDR=None,REMOTE_MAC=None,**rest): self._log("generate_kickstart") try: return self.api.generate_kickstart(profile,system) except Exception, e: utils.log_exc(self.logger) return "# This kickstart had errors that prevented it from being rendered correctly.\n# The cobbler.log should have information relating to this failure." def generate_gpxe(self,profile=None,system=None,**rest): self._log("generate_gpxe") return self.api.generate_gpxe(profile,system) def generate_bootcfg(self,profile=None,system=None,**rest): self._log("generate_bootcfg") return self.api.generate_bootcfg(profile,system) def generate_script(self,profile=None,system=None,name=None,**rest): self._log("generate_script, name is %s" % str(name)) return self.api.generate_script(profile,system,name) def get_blended_data(self,profile=None,system=None): if profile is not None and profile != "": obj = self.api.find_profile(profile) if obj is None: raise CX("profile not found: %s" % profile) elif system is not None and system != "": obj = self.api.find_system(system) if obj is None: raise CX("system not found: %s" % system) else: raise CX("internal error, no system or profile specified") return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) def get_settings(self,token=None,**rest): """ Return the contents of /etc/cobbler/settings, which is a hash. """ self._log("get_settings",token=token) results = self.api.settings().to_datastruct() self._log("my settings are: %s" % results, debug=True) return self.xmlrpc_hacks(results) def get_signatures(self,token=None,**rest): """ Return the contents of the API signatures """ self._log("get_signatures",token=token) results = self.api.get_signatures() return self.xmlrpc_hacks(results) def get_valid_breeds(self,token=None,**rest): """ Return the list of valid breeds as read in from the distro signatures data """ self._log("get_valid_breeds",token=token) results = utils.get_valid_breeds() results.sort() return self.xmlrpc_hacks(results) def get_valid_os_versions_for_breed(self,breed,token=None,**rest): """ Return the list of valid os_versions for the given breed """ self._log("get_valid_os_versions_for_breed",token=token) results = utils.get_valid_os_versions_for_breed(breed) results.sort() return self.xmlrpc_hacks(results) def get_valid_os_versions(self,token=None,**rest): """ Return the list of valid os_versions as read in from the distro signatures data """ self._log("get_valid_os_versions",token=token) results = utils.get_valid_os_versions() results.sort() return self.xmlrpc_hacks(results) def get_repo_config_for_profile(self,profile_name,**rest): """ Return the yum configuration a given profile should use to obtain all of it's cobbler associated repos. 
""" obj = self.api.find_profile(profile_name) if obj is None: return "# object not found: %s" % profile_name return self.api.get_repo_config_for_profile(obj) def get_repo_config_for_system(self,system_name,**rest): """ Return the yum configuration a given profile should use to obtain all of it's cobbler associated repos. """ obj = self.api.find_system(system_name) if obj is None: return "# object not found: %s" % system_name return self.api.get_repo_config_for_system(obj) def get_template_file_for_profile(self,profile_name,path,**rest): """ Return the templated file requested for this profile """ obj = self.api.find_profile(profile_name) if obj is None: return "# object not found: %s" % profile_name return self.api.get_template_file_for_profile(obj,path) def get_template_file_for_system(self,system_name,path,**rest): """ Return the templated file requested for this system """ obj = self.api.find_system(system_name) if obj is None: return "# object not found: %s" % system_name return self.api.get_template_file_for_system(obj,path) def register_new_system(self,info,token=None,**rest): """ If register_new_installs is enabled in settings, this allows /usr/bin/cobbler-register (part of the koan package) to add new system records remotely if they don't already exist. There is a cobbler_register snippet that helps with doing this automatically for new installs but it can also be used for existing installs. See "AutoRegistration" on the Wiki. """ enabled = self.api.settings().register_new_installs if not str(enabled) in [ "1", "y", "yes", "true" ]: raise CX("registration is disabled in cobbler settings") # validate input name = info.get("name","") profile = info.get("profile","") hostname = info.get("hostname","") interfaces = info.get("interfaces",{}) ilen = len(interfaces.keys()) if name == "": raise CX("no system name submitted") if profile == "": raise CX("profile not submitted") if ilen == 0: raise CX("no interfaces submitted") if ilen >= 64: raise CX("too many interfaces submitted") # validate things first name = info.get("name","") inames = interfaces.keys() if self.api.find_system(name=name): raise CX("system name conflicts") if hostname != "" and self.api.find_system(hostname=hostname): raise CX("hostname conflicts") for iname in inames: mac = info["interfaces"][iname].get("mac_address","") ip = info["interfaces"][iname].get("ip_address","") if ip.find("/") != -1: raise CX("no CIDR ips are allowed") if mac == "": raise CX("missing MAC address for interface %s" % iname) if mac != "": system = self.api.find_system(mac_address=mac) if system is not None: raise CX("mac conflict: %s" % mac) if ip != "": system = self.api.find_system(ip_address=ip) if system is not None: raise CX("ip conflict: %s"% ip) # looks like we can go ahead and create a system now obj = self.api.new_system() obj.set_profile(profile) obj.set_name(name) if hostname != "": obj.set_hostname(hostname) obj.set_netboot_enabled(False) for iname in inames: if info["interfaces"][iname].get("bridge","") == 1: # don't add bridges continue #if info["interfaces"][iname].get("module","") == "": # # don't attempt to add wireless interfaces # continue mac = info["interfaces"][iname].get("mac_address","") ip = info["interfaces"][iname].get("ip_address","") netmask = info["interfaces"][iname].get("netmask","") if mac == "?": # see koan/utils.py for explanation of network info discovery continue; obj.set_mac_address(mac, iname) if hostname != "": obj.set_dns_name(hostname, iname) if ip != "" and ip != "?": obj.set_ip_address(ip, iname) if 
netmask != "" and netmask != "?": obj.set_netmask(netmask, iname) self.api.add_system(obj) return 0 def disable_netboot(self,name,token=None,**rest): """ This is a feature used by the pxe_just_once support, see manpage. Sets system named "name" to no-longer PXE. Disabled by default as this requires public API access and is technically a read-write operation. """ self._log("disable_netboot",token=token,name=name) # used by nopxe.cgi if not self.api.settings().pxe_just_once: # feature disabled! return False systems = self.api.systems() obj = systems.find(name=name) if obj == None: # system not found! return False obj.set_netboot_enabled(0) # disabling triggers and sync to make this extremely fast. systems.add(obj,save=True,with_triggers=False,with_sync=False,quick_pxe_update=True) # re-generate dhcp configuration self.api.sync_dhcp() return True def upload_log_data(self, sys_name, file, size, offset, data, token=None,**rest): """ This is a logger function used by the "anamon" logging system to upload all sorts of auxilliary data from Anaconda. As it's a bit of a potential log-flooder, it's off by default and needs to be enabled in /etc/cobbler/settings. """ self._log("upload_log_data (file: '%s', size: %s, offset: %s)" % (file, size, offset), token=token, name=sys_name) # Check if enabled in self.api.settings() if not self.api.settings().anamon_enabled: # feature disabled! return False # Find matching system record systems = self.api.systems() obj = systems.find(name=sys_name) if obj == None: # system not found! self._log("upload_log_data - WARNING - system '%s' not found in cobbler" % sys_name, token=token, name=sys_name) return self.__upload_file(sys_name, file, size, offset, data) def __upload_file(self, sys_name, file, size, offset, data): ''' system: the name of the system name: the name of the file size: size of contents (bytes) data: base64 encoded file contents offset: the offset of the chunk files can be uploaded in chunks, if so the size describes the chunk rather than the whole file. the offset indicates where the chunk belongs the special offset -1 is used to indicate the final chunk''' contents = base64.decodestring(data) del data if offset != -1: if size is not None: if size != len(contents): return False #XXX - have an incoming dir and move after upload complete # SECURITY - ensure path remains under uploadpath tt = string.maketrans("/","+") fn = string.translate(file, tt) if fn.startswith('..'): raise CX("invalid filename used: %s" % fn) # FIXME ... 
get the base dir from cobbler settings() udir = "/var/log/cobbler/anamon/%s" % sys_name if not os.path.isdir(udir): os.mkdir(udir, 0755) fn = "%s/%s" % (udir, fn) try: st = os.lstat(fn) except OSError, e: if e.errno == errno.ENOENT: pass else: raise else: if not stat.S_ISREG(st.st_mode): raise CX("destination not a file: %s" % fn) fd = os.open(fn, os.O_RDWR | os.O_CREAT, 0644) # log_error("fd=%r" %fd) try: if offset == 0 or (offset == -1 and size == len(contents)): #truncate file fcntl.lockf(fd, fcntl.LOCK_EX|fcntl.LOCK_NB) try: os.ftruncate(fd, 0) # log_error("truncating fd %r to 0" %fd) finally: fcntl.lockf(fd, fcntl.LOCK_UN) if offset == -1: os.lseek(fd,0,2) else: os.lseek(fd,offset,0) #write contents fcntl.lockf(fd, fcntl.LOCK_EX|fcntl.LOCK_NB, len(contents), 0, 2) try: os.write(fd, contents) # log_error("wrote contents") finally: fcntl.lockf(fd, fcntl.LOCK_UN, len(contents), 0, 2) if offset == -1: if size is not None: #truncate file fcntl.lockf(fd, fcntl.LOCK_EX|fcntl.LOCK_NB) try: os.ftruncate(fd, size) # log_error("truncating fd %r to size %r" % (fd,size)) finally: fcntl.lockf(fd, fcntl.LOCK_UN) finally: os.close(fd) return True def run_install_triggers(self,mode,objtype,name,ip,token=None,**rest): """ This is a feature used to run the pre/post install triggers. See CobblerTriggers on Wiki for details """ self._log("run_install_triggers",token=token) if mode != "pre" and mode != "post" and mode != "firstboot": return False if objtype != "system" and objtype !="profile": return False # the trigger script is called with name,mac, and ip as arguments 1,2, and 3 # we do not do API lookups here because they are rather expensive at install # time if reinstalling all of a cluster all at once. # we can do that at "cobbler check" time. utils.run_triggers(self.api, None, "/var/lib/cobbler/triggers/install/%s/*" % mode, additional=[objtype,name,ip],logger=self.logger) return True def version(self,token=None,**rest): """ Return the cobbler version for compatibility testing with remote applications. See api.py for documentation. """ self._log("version",token=token) return self.api.version() def extended_version(self,token=None,**rest): """ Returns the full dictionary of version information. See api.py for documentation. """ self._log("version",token=token) return self.api.version(extended=True) def get_distros_since(self,mtime): """ Return all of the distro objects that have been modified after mtime. 
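        A hypothetical client-side call (the URL is illustrative) that a
        client could use to poll for recently changed distros:

            import time, xmlrpclib
            server = xmlrpclib.Server("http://127.0.0.1/cobbler_api")
            changed = server.get_distros_since(time.time() - 3600)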
""" data = self.api.get_distros_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_profiles_since(self,mtime): """ See documentation for get_distros_since """ data = self.api.get_profiles_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_systems_since(self,mtime): """ See documentation for get_distros_since """ data = self.api.get_systems_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_repos_since(self,mtime): """ See documentation for get_distros_since """ data = self.api.get_repos_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_images_since(self,mtime): """ See documentation for get_distros_since """ data = self.api.get_images_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_mgmtclasses_since(self,mtime): """ See documentation for get_distros_since """ data = self.api.get_mgmtclasses_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_packages_since(self,mtime): """ See documentation for get_distros_since """ data = self.api.get_packages_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_files_since(self,mtime): """ See documentation for get_distros_since """ data = self.api.get_files_since(mtime, collapse=True) return self.xmlrpc_hacks(data) def get_repos_compatible_with_profile(self,profile=None,token=None,**rest): """ Get repos that can be used with a given profile name """ self._log("get_repos_compatible_with_profile",token=token) profile = self.api.find_profile(profile) if profile is None: return -1 results = [] distro = profile.get_conceptual_parent() repos = self.get_repos() for r in repos: # there be dragons! # accept all repos that are src/noarch # but otherwise filter what repos are compatible # with the profile based on the arch of the distro. if r["arch"] is None or r["arch"] in [ "", "noarch", "src" ]: results.append(r) else: # some backwards compatibility fuzz # repo.arch is mostly a text field # distro.arch is i386/x86_64/ia64/s390x/etc if r["arch"] in [ "i386", "x86", "i686" ]: if distro.arch in [ "i386", "x86" ]: results.append(r) elif r["arch"] in [ "x86_64" ]: if distro.arch in [ "x86_64" ]: results.append(r) elif r["arch"].startswith("s390"): if distro.arch in [ "s390x" ]: results.append(r) else: if distro.arch == r["arch"]: results.append(r) return results # this is used by the puppet external nodes feature def find_system_by_dns_name(self,dns_name): # FIXME: implement using api.py's find API # and expose generic finds for other methods # WARNING: this function is /not/ expected to stay in cobbler long term systems = self.get_systems() for x in systems: for y in x["interfaces"]: if x["interfaces"][y]["dns_name"] == dns_name: name = x["name"] return self.get_system_for_koan(name) return {} def get_distro_as_rendered(self,name,token=None,**rest): """ Return the distribution as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. """ return self.get_distro_for_koan(self,name) def get_distro_for_koan(self,name,token=None,**rest): """ Same as get_distro_as_rendered. """ self._log("get_distro_as_rendered",name=name,token=token) obj = self.api.find_distro(name=name) if obj is not None: return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) return self.xmlrpc_hacks({}) def get_profile_as_rendered(self,name,token=None,**rest): """ Return the profile as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. 
""" return self.get_profile_for_koan(name,token) def get_profile_for_koan(self,name,token=None,**rest): """ Same as get_profile_as_rendered """ self._log("get_profile_as_rendered", name=name, token=token) obj = self.api.find_profile(name=name) if obj is not None: return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) return self.xmlrpc_hacks({}) def get_system_as_rendered(self,name,token=None,**rest): """ Return the system as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. """ return self.get_system_for_koan(name) def get_system_for_koan(self,name,token=None,**rest): """ Same as get_system_as_rendered. """ self._log("get_system_as_rendered",name=name,token=token) obj = self.api.find_system(name=name) if obj is not None: hash = utils.blender(self.api,True,obj) # Generate a pxelinux.cfg? image_based = False profile = obj.get_conceptual_parent() distro = profile.get_conceptual_parent() arch = distro.arch # the management classes stored in the system are just a list # of names, so we need to turn it into a full list of hashes # (right now we just use the params field) mcs = hash["mgmt_classes"] hash["mgmt_classes"] = {} for m in mcs: c = self.api.find_mgmtclass(name=m) if c: hash["mgmt_classes"][m] = c.to_datastruct() if distro is None and profile.COLLECTION_TYPE == "image": image_based = True arch = profile.arch else: arch = distro.arch if obj.is_management_supported(): if not image_based: hash["pxelinux.cfg"] = self.pxegen.write_pxe_file( None, obj, profile, distro, arch) else: hash["pxelinux.cfg"] = self.pxegen.write_pxe_file( None, obj,None,None,arch,image=profile) return self.xmlrpc_hacks(hash) return self.xmlrpc_hacks({}) def get_repo_as_rendered(self,name,token=None,**rest): """ Return the repo as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. """ return self.get_repo_for_koan(self,name) def get_repo_for_koan(self,name,token=None,**rest): """ Same as get_repo_as_rendered. """ self._log("get_repo_as_rendered",name=name,token=token) obj = self.api.find_repo(name=name) if obj is not None: return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) return self.xmlrpc_hacks({}) def get_image_as_rendered(self,name,token=None,**rest): """ Return the image as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. """ return self.get_image_for_koan(self,name) def get_image_for_koan(self,name,token=None,**rest): """ Same as get_image_as_rendered. """ self._log("get_image_as_rendered",name=name,token=token) obj = self.api.find_image(name=name) if obj is not None: return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) return self.xmlrpc_hacks({}) def get_mgmtclass_as_rendered(self,name,token=None,**rest): """ Return the mgmtclass as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. """ return self.get_mgmtclass_for_koan(self,name) def get_mgmtclass_for_koan(self,name,token=None,**rest): """ Same as get_mgmtclass_as_rendered. """ self._log("get_mgmtclass_as_rendered",name=name,token=token) obj = self.api.find_mgmtclass(name=name) if obj is not None: return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) return self.xmlrpc_hacks({}) def get_package_as_rendered(self,name,token=None,**rest): """ Return the package as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. 
""" return self.get_package_for_koan(self,name) def get_package_for_koan(self,name,token=None,**rest): """ Same as get_package_as_rendered. """ self._log("get_package_as_rendered",name=name,token=token) obj = self.api.find_package(name=name) if obj is not None: return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) return self.xmlrpc_hacks({}) def get_file_as_rendered(self,name,token=None,**rest): """ Return the file as passed through cobbler's inheritance/graph engine. Shows what would be installed, not the input data. """ return self.get_file_for_koan(self,name) def get_file_for_koan(self,name,token=None,**rest): """ Same as get_file_as_rendered. """ self._log("get_file_as_rendered",name=name,token=token) obj = self.api.find_file(name=name) if obj is not None: return self.xmlrpc_hacks(utils.blender(self.api, True, obj)) return self.xmlrpc_hacks({}) def get_random_mac(self,virt_type="xenpv",token=None,**rest): """ Wrapper for utils.get_random_mac Used in the webui """ self._log("get_random_mac",token=None) return utils.get_random_mac(self.api,virt_type) def xmlrpc_hacks(self,data): """ Convert None in XMLRPC to just '~' to make extra sure a client that can't allow_none can deal with this. ALSO: a weird hack ensuring that when dicts with integer keys (or other types) are transmitted with string keys. """ return utils.strip_none(data) def get_status(self,mode="normal",token=None,**rest): """ Returns the same information as `cobbler status` While a read-only operation, this requires a token because it's potentially a fair amount of I/O """ self.check_access(token,"sync") return self.api.status(mode=mode) ###### # READ WRITE METHODS REQUIRE A TOKEN, use login() # TO OBTAIN ONE ###### def __get_random(self,length): urandom = open("/dev/urandom") b64 = base64.encodestring(urandom.read(length)) urandom.close() b64 = b64.replace("\n","") return b64 def __make_token(self,user): """ Returns a new random token. """ b64 = self.__get_random(25) self.token_cache[b64] = (time.time(), user) return b64 def __invalidate_expired_tokens(self): """ Deletes any login tokens that might have expired. Also removes expired events """ timenow = time.time() for token in self.token_cache.keys(): (tokentime, user) = self.token_cache[token] if (timenow > tokentime + self.api.settings().auth_token_expiration): self._log("expiring token",token=token,debug=True) del self.token_cache[token] # and also expired objects for oid in self.object_cache.keys(): (tokentime, entry) = self.object_cache[oid] if (timenow > tokentime + CACHE_TIMEOUT): del self.object_cache[oid] for tid in self.events.keys(): (eventtime, name, status, who) = self.events[tid] if (timenow > eventtime + EVENT_TIMEOUT): del self.events[tid] # logfile cleanup should be dealt w/ by logrotate def __validate_user(self,input_user,input_password): """ Returns whether this user/pass combo should be given access to the cobbler read-write API. For the system user, this answer is always "yes", but it is only valid for the socket interface. FIXME: currently looks for users in /etc/cobbler/auth.conf Would be very nice to allow for PAM and/or just Kerberos. """ return self.api.authenticate(input_user,input_password) def __validate_token(self,token): """ Checks to see if an API method can be called when the given token is passed in. Updates the timestamp of the token automatically to prevent the need to repeatedly call login(). Any method that needs access control should call this before doing anything else. 
""" self.__invalidate_expired_tokens() if self.token_cache.has_key(token): user = self.get_user_from_token(token) if user == "": # system token is only valid over Unix socket return False self.token_cache[token] = (time.time(), user) # update to prevent timeout return True else: self._log("invalid token",token=token) return False def __name_to_object(self,resource,name): if resource.find("distro") != -1: return self.api.find_distro(name) if resource.find("profile") != -1: return self.api.find_profile(name) if resource.find("system") != -1: return self.api.find_system(name) if resource.find("repo") != -1: return self.api.find_repo(name) if resource.find("mgmtclass") != -1: return self.api.find_mgmtclass(name) if resource.find("package") != -1: return self.api.find_package(name) if resource.find("file") != -1: return self.api.find_file(name) return None def check_access_no_fail(self,token,resource,arg1=None,arg2=None): """ This is called by the WUI to decide whether an element is editable or not. It differs form check_access in that it is supposed to /not/ log the access checks (TBA) and does not raise exceptions. """ need_remap = False for x in [ "distro", "profile", "system", "repo", "image", "mgmtclass", "package", "file" ]: if arg1 is not None and resource.find(x) != -1: need_remap = True break if need_remap: # we're called with an object name, but need an object arg1 = self.__name_to_object(resource,arg1) try: self.check_access(token,resource,arg1,arg2) return True except: utils.log_exc(self.logger) return False def check_access(self,token,resource,arg1=None,arg2=None): validated = self.__validate_token(token) user = self.get_user_from_token(token) if user == "": self._log("CLI Authorized", debug=True) return True rc = self.api.authorize(user,resource,arg1,arg2) self._log("%s authorization result: %s" % (user,rc),debug=True) if not rc: raise CX("authorization failure for user %s" % user) return rc def get_authn_module_name(self, token): validated = self.__validate_token(token) user = self.get_user_from_token(token) if user != "": raise CX("authorization failure for user %s attempting to access authn module name" %user) return self.api.get_module_name_from_file("authentication", "module") def login(self,login_user,login_password): """ Takes a username and password, validates it, and if successful returns a random login token which must be used on subsequent method calls. The token will time out after a set interval if not used. Re-logging in permitted. """ # if shared secret access is requested, don't bother hitting the auth # plugin if login_user == "": if login_password == self.shared_secret: return self.__make_token("") else: utils.die(self.logger, "login failed") # this should not log to disk OR make events as we're going to # call it like crazy in CobblerWeb. Just failed attempts. if self.__validate_user(login_user,login_password): token = self.__make_token(login_user) return token else: utils.die(self.logger, "login failed (%s)" % login_user) def logout(self,token): """ Retires a token ahead of the timeout. """ self._log("logout", token=token) if self.token_cache.has_key(token): del self.token_cache[token] return True return False def token_check(self,token): """ Checks to make sure a token is valid or not """ return self.__validate_token(token) def sync_dhcp(self,token): """ Run sync code, which should complete before XMLRPC timeout. We can't do reposync this way. Would be nice to send output over AJAX/other later. 
""" self._log("sync_dhcp",token=token) self.check_access(token,"sync") return self.api.sync_dhcp() def sync(self,token): """ Run sync code, which should complete before XMLRPC timeout. We can't do reposync this way. Would be nice to send output over AJAX/other later. """ # FIXME: performance self._log("sync",token=token) self.check_access(token,"sync") return self.api.sync() def read_or_write_kickstart_template(self,kickstart_file,is_read,new_data,token): """ Allows the web app to be used as a kickstart file editor. For security reasons we will only allow kickstart files to be edited if they reside in /var/lib/cobbler/kickstarts/ or /etc/cobbler. This limits the damage doable by Evil who has a cobbler password but not a system password. Also if living in /etc/cobbler the file must be a kickstart file. """ if is_read: what = "read_kickstart_template" else: what = "write_kickstart_template" self._log(what,name=kickstart_file,token=token) self.check_access(token,what,kickstart_file,is_read) if kickstart_file.find("..") != -1 or not kickstart_file.startswith("/"): utils.die(self.logger,"tainted file location") if not kickstart_file.startswith("/etc/cobbler/") and not kickstart_file.startswith("/var/lib/cobbler/kickstarts"): utils.die(self.logger, "unable to view or edit kickstart in this location") if kickstart_file.startswith("/etc/cobbler/"): if not kickstart_file.endswith(".ks") and not kickstart_file.endswith(".cfg"): # take care to not allow config files to be altered. utils.die(self.logger, "this does not seem to be a kickstart file") if not is_read and not os.path.exists(kickstart_file): utils.die(self.logger, "new files must go in /var/lib/cobbler/kickstarts") if is_read: fileh = open(kickstart_file,"r") data = fileh.read() fileh.close() return data else: if new_data == -1: # delete requested if not self.is_kickstart_in_use(kickstart_file,token): os.remove(kickstart_file) else: utils.die(self.logger, "attempt to delete in-use file") else: fileh = open(kickstart_file,"w+") fileh.write(new_data) fileh.close() return True def read_or_write_snippet(self,snippet_file,is_read,new_data,token): """ Allows the WebUI to be used as a snippet file editor. For security reasons we will only allow snippet files to be edited if they reside in /var/lib/cobbler/snippets. """ # FIXME: duplicate code with kickstart view/edit # FIXME: need to move to API level functions if is_read: what = "read_snippet" else: what = "write_snippet" self._log(what,name=snippet_file,token=token) self.check_access(token,what,snippet_file,is_read) if snippet_file.find("..") != -1 or not snippet_file.startswith("/"): utils.die(self.logger, "tainted file location") # FIXME: shouldn't we get snippetdir from the settings? if not snippet_file.startswith("/var/lib/cobbler/snippets"): utils.die(self.logger, "unable to view or edit snippet in this location") if is_read: fileh = open(snippet_file,"r") data = fileh.read() fileh.close() return data else: if new_data == -1: # FIXME: no way to check if something is using it os.remove(snippet_file) else: # path_part(a,b) checks for the path b to be inside path a. It is # guaranteed to return either an empty string (meaning b is NOT inside # a), or a path starting with '/'. If the path ends with '/' the sub-path # is a directory so we don't write to it. # FIXME: shouldn't we get snippetdir from the settings? 
path_part = utils.path_tail("/var/lib/cobbler/snippets",snippet_file) if path_part != "" and path_part[-1] != "/": try: utils.mkdir(os.path.dirname(snippet_file)) except: utils.die(self.logger, "unable to create directory for snippet file: '%s'" % snippet_file) fileh = open(snippet_file,"w+") fileh.write(new_data) fileh.close() else: utils.die(self.logger, "invalid snippet file specified: '%s'" % snippet_file) return True def power_system(self,object_id,power=None,token=None,logger=None): """ Internal implementation used by background_power, do not call directly if possible. Allows poweron/poweroff/powerstatus/reboot of a system specified by object_id. """ obj = self.__get_object(object_id) self.check_access(token, "power_system", obj) if power=="on": rc=self.api.power_on(obj, user=None, password=None, logger=logger) elif power=="off": rc=self.api.power_off(obj, user=None, password=None, logger=logger) elif power=="status": rc=self.api.power_status(obj, user=None, password=None, logger=logger) elif power=="reboot": rc=self.api.reboot(obj, user=None, password=None, logger=logger) else: utils.die(self.logger, "invalid power mode '%s', expected on/off/status/reboot" % power) return rc def get_config_data(self,hostname): """ Generate configuration data for the system specified by hostname. """ self._log("get_config_data for %s" % hostname) obj = configgen.ConfigGen(hostname) return obj.gen_config_data_for_koan() def clear_system_logs(self, object_id, token=None, logger=None): """ clears console logs of a system """ obj = self.__get_object(object_id) self.check_access(token, "clear_system_logs", obj) rc=self.api.clear_logs(obj, logger=logger) return rc # ********************************************************************************* # ********************************************************************************* class CobblerXMLRPCServer(ThreadingMixIn, SimpleXMLRPCServer.SimpleXMLRPCServer): def __init__(self, args): self.allow_reuse_address = True SimpleXMLRPCServer.SimpleXMLRPCServer.__init__(self,args) # ********************************************************************************* # ********************************************************************************* class ProxiedXMLRPCInterface: def __init__(self,api,proxy_class): self.proxied = proxy_class(api) self.logger = self.proxied.api.logger def _dispatch(self, method, params, **rest): if not hasattr(self.proxied, method): raise CX("unknown remote method") method_handle = getattr(self.proxied, method) # FIXME: see if this works without extra boilerplate try: return method_handle(*params) except Exception, e: utils.log_exc(self.logger) raise e # ********************************************************************* # ********************************************************************* def _test_setup_modules(authn="authn_testing",authz="authz_allowall",pxe_once=1): # rewrite modules.conf so we know we can use the testing module # for xmlrpc rw testing (Makefile will put the user value back) import yaml import Cheetah.Template as Template MODULES_TEMPLATE = "installer_templates/modules.conf.template" DEFAULTS = "installer_templates/defaults" fh = open(DEFAULTS) data = yaml.safe_load(fh.read()) fh.close() data["authn_module"] = authn data["authz_module"] = authz data["pxe_once"] = pxe_once t = Template.Template(file=MODULES_TEMPLATE, searchList=[data]) open("/etc/cobbler/modules.conf","w+").write(t.respond()) def _test_setup_settings(pxe_once=1): # rewrite modules.conf so we know we can use the testing module # for xmlrpc rw 
testing (Makefile will put the user value back) import yaml import Cheetah.Template as Template MODULES_TEMPLATE = "installer_templates/settings.template" DEFAULTS = "installer_templates/defaults" fh = open(DEFAULTS) data = yaml.safe_load(fh.read()) fh.close() data["pxe_once"] = pxe_once t = Template.Template(file=MODULES_TEMPLATE, searchList=[data]) open("/etc/cobbler/settings","w+").write(t.respond()) def _test_bootstrap_restart(): rc1 = utils.subprocess_call(None,"/sbin/service cobblerd restart",shell=False) assert rc1 == 0 rc2 = utils.subprocess_call(None,"/sbin/service httpd restart",shell=False) assert rc2 == 0 time.sleep(5) _test_remove_objects() def _test_remove_objects(): api = cobbler_api.BootAPI() # local handle # from ro tests d0 = api.find_distro("distro0") i0 = api.find_image("image0") r0 = api.find_repo("repo0") # from rw tests d1 = api.find_distro("distro1") i1 = api.find_image("image1") r1 = api.find_repo("repo1") if d0 is not None: api.remove_distro(d0, recursive = True) if i0 is not None: api.remove_image(i0) if r0 is not None: api.remove_repo(r0) if d1 is not None: api.remove_distro(d1, recursive = True) if i1 is not None: api.remove_image(i1) if r1 is not None: api.remove_repo(r1) def test_xmlrpc_ro(): _test_bootstrap_restart() server = xmlrpclib.Server("http://127.0.0.1/cobbler_api") time.sleep(2) # delete all distributions distros = server.get_distros() profiles = server.get_profiles() systems = server.get_systems() repos = server.get_repos() images = server.get_images() settings = server.get_settings() assert type(distros) == type([]) assert type(profiles) == type([]) assert type(systems) == type([]) assert type(repos) == type([]) assert type(images) == type([]) assert type(settings) == type({}) # now populate with something more useful # using the non-remote API api = cobbler_api.BootAPI() # local handle before_distros = len(api.distros()) before_profiles = len(api.profiles()) before_systems = len(api.systems()) before_repos = len(api.repos()) before_images = len(api.images()) fake = open("/tmp/cobbler.fake","w+") fake.write("") fake.close() distro = api.new_distro() distro.set_name("distro0") distro.set_kernel("/tmp/cobbler.fake") distro.set_initrd("/tmp/cobbler.fake") api.add_distro(distro) repo = api.new_repo() repo.set_name("repo0") if not os.path.exists("/tmp/empty"): os.mkdir("/tmp/empty",0770) repo.set_mirror("/tmp/empty") files = glob.glob("rpm-build/*.rpm") if len(files) == 0: raise Exception("Tests must be run from the cobbler checkout directory.") rc = utils.subprocess_call(None,"cp rpm-build/*.rpm /tmp/empty",shell=True) api.add_repo(repo) profile = api.new_profile() profile.set_name("profile0") profile.set_distro("distro0") profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.ks") profile.set_repos(["repo0"]) api.add_profile(profile) system = api.new_system() system.set_name("system0") system.set_hostname("hostname0") system.set_gateway("192.168.1.1") system.set_profile("profile0") system.set_dns_name("hostname0","eth0") api.add_system(system) image = api.new_image() image.set_name("image0") image.set_file("/tmp/cobbler.fake") api.add_image(image) # reposync is required in order to create the repo config files api.reposync(name="repo0") # FIXME: the following tests do not yet look to see that all elements # retrieved match what they were created with, but we presume this # all works. It is not a high priority item to test but do not assume # this is a complete test of access functions. 
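    # A sketch of what such a field-by-field check could look like (the
    # assertions below are illustrative, not part of the existing tests):
    #   d = server.get_distro("distro0")
    #   assert d["kernel"] == "/tmp/cobbler.fake"
    #   assert d["initrd"] == "/tmp/cobbler.fake"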
def comb(haystack, needle): for x in haystack: if x["name"] == needle: return True return False distros = server.get_distros() assert len(distros) == before_distros + 1 assert comb(distros, "distro0") profiles = server.get_profiles() print "BEFORE: %s" % before_profiles print "CURRENT: %s" % len(profiles) for p in profiles: print " PROFILES: %s" % p["name"] for p in api.profiles(): print " API : %s" % p.name assert len(profiles) == before_profiles + 1 assert comb(profiles, "profile0") systems = server.get_systems() # assert len(systems) == before_systems + 1 assert comb(systems, "system0") repos = server.get_repos() # FIXME: disable temporarily # assert len(repos) == before_repos + 1 assert comb(repos, "repo0") images = server.get_images() # assert len(images) == before_images + 1 assert comb(images, "image0") # now test specific gets distro = server.get_distro("distro0") assert distro["name"] == "distro0" assert type(distro["kernel_options"] == type({})) profile = server.get_profile("profile0") assert profile["name"] == "profile0" assert type(profile["kernel_options"] == type({})) system = server.get_system("system0") assert system["name"] == "system0" assert type(system["kernel_options"] == type({})) repo = server.get_repo("repo0") assert repo["name"] == "repo0" image = server.get_image("image0") assert image["name"] == "image0" # now test the calls koan uses # the difference is that koan's object types are flattened somewhat # and also that they are passed through utils.blender() so they represent # not the object but the evaluation of the object tree at that object. server.update() # should be unneeded distro = server.get_distro_for_koan("distro0") assert distro["name"] == "distro0" assert type(distro["kernel_options"] == type("")) profile = server.get_profile_for_koan("profile0") assert profile["name"] == "profile0" assert type(profile["kernel_options"] == type("")) system = server.get_system_for_koan("system0") assert system["name"] == "system0" assert type(system["kernel_options"] == type("")) repo = server.get_repo_for_koan("repo0") assert repo["name"] == "repo0" image = server.get_image_for_koan("image0") assert image["name"] == "image0" # now test some of the additional webui calls # compatible profiles, etc assert server.ping() == True assert server.get_size("distros") == 1 assert server.get_size("profiles") == 1 assert server.get_size("systems") == 1 assert server.get_size("repos") == 1 assert server.get_size("images") == 1 templates = server.get_kickstart_templates("???") assert "/var/lib/cobbler/kickstarts/sample.ks" in templates assert server.is_kickstart_in_use("/var/lib/cobbler/kickstarts/sample.ks","???") == True assert server.is_kickstart_in_use("/var/lib/cobbler/kickstarts/legacy.ks","???") == False generated = server.generate_kickstart("profile0") assert type(generated) == type("") assert generated.find("ERROR") == -1 assert generated.find("url") != -1 assert generated.find("network") != -1 yumcfg = server.get_repo_config_for_profile("profile0") assert type(yumcfg) == type("") assert yumcfg.find("ERROR") == -1 assert yumcfg.find("http://") != -1 yumcfg = server.get_repo_config_for_system("system0") assert type(yumcfg) == type("") assert yumcfg.find("ERROR") == -1 assert yumcfg.find("http://") != -1 server.register_mac("CC:EE:FF:GG:AA:AA","profile0") systems = server.get_systems() found = False for s in systems: if s["name"] == "CC:EE:FF:GG:AA:AA": for iname in s["interfaces"]: if s["interfaces"]["iname"].get("mac_address") == "CC:EE:FF:GG:AA:AA": found = True break if 
found: break # FIXME: mac registration test code needs a correct settings file in order to # be enabled. # assert found == True # FIXME: the following tests don't work if pxe_just_once is disabled in settings so we need # to account for this by turning it on... # basically we need to rewrite the settings file # system = server.get_system("system0") # assert system["netboot_enabled"] == "True" # rc = server.disable_netboot("system0") # assert rc == True # ne = server.get_system("system0")["netboot_enabled"] # assert ne == False # FIXME: tests for new built-in configuration management feature # require that --template-files attributes be set. These do not # retrieve the kickstarts but rather config files (see Wiki topics). # This is probably better tested at the URL level with urlgrabber, one layer # up, in a different set of tests.. # FIXME: tests for rendered kickstart retrieval, same as above assert server.run_install_triggers("pre","profile","profile0","127.0.0.1") assert server.run_install_triggers("post","profile","profile0","127.0.0.1") assert server.run_install_triggers("pre","system","system0","127.0.0.1") assert server.run_install_triggers("post","system","system0","127.0.0.1") ver = server.version() assert (str(ver)[0] == "?" or str(ver).find(".") != -1) # do removals via the API since the read-only API can't do them # and the read-write tests are seperate _test_remove_objects() # this last bit mainly tests the tests, to ensure we've left nothing behind # not XMLRPC. Tests polluting the user config is not desirable even though # we do save/restore it. # assert (len(api.distros()) == before_distros) # assert (len(api.profiles()) == before_profiles) # assert (len(api.systems()) == before_systems) # assert (len(api.images()) == before_images) # assert (len(api.repos()) == before_repos) def test_xmlrpc_rw(): # ideally we need tests for the various auth modes, not just one # and the ownership module, though this will provide decent coverage. _test_setup_modules(authn="authn_testing",authz="authz_allowall") _test_bootstrap_restart() server = xmlrpclib.Server("http://127.0.0.1/cobbler_api") # remote api = cobbler_api.BootAPI() # local instance, /DO/ ping cobblerd # note if authn_testing is not engaged this will not work # test getting token, will raise remote exception on fail token = server.login("testing","testing") # create distro did = server.new_distro(token) server.modify_distro(did, "name", "distro1", token) server.modify_distro(did, "kernel", "/tmp/cobbler.fake", token) server.modify_distro(did, "initrd", "/tmp/cobbler.fake", token) server.modify_distro(did, "kopts", { "dog" : "fido", "cat" : "fluffy" }, token) # hash or string server.modify_distro(did, "ksmeta", "good=sg1 evil=gould", token) # hash or string server.modify_distro(did, "breed", "redhat", token) server.modify_distro(did, "os-version", "rhel5", token) server.modify_distro(did, "owners", "sam dave", token) # array or string server.modify_distro(did, "mgmt-classes", "blip", token) # list or string server.modify_distro(did, "template-files", "/tmp/cobbler.fake=/tmp/a /etc/fstab=/tmp/b",token) # hash or string server.modify_distro(did, "comment", "...", token) server.modify_distro(did, "redhat_management_key", "ALPHA", token) server.modify_distro(did, "redhat_management_server", "rhn.example.com", token) server.save_distro(did, token) # use the non-XMLRPC API to check that it's added seeing we tested XMLRPC RW APIs above # this makes extra sure it's been committed to disk. 
api.deserialize() assert api.find_distro("distro1") != None pid = server.new_profile(token) server.modify_profile(pid, "name", "profile1", token) server.modify_profile(pid, "distro", "distro1", token) server.modify_profile(pid, "enable-menu", True, token) server.modify_profile(pid, "kickstart", "/var/lib/cobbler/kickstarts/sample.ks", token) server.modify_profile(pid, "kopts", { "level" : "11" }, token) server.modify_profile(pid, "kopts_post", "noapic", token) server.modify_profile(pid, "virt_auto_boot", 0, token) server.modify_profile(pid, "virt_file_size", 20, token) server.modify_profile(pid, "virt_disk_driver", "raw", token) server.modify_profile(pid, "virt_ram", 2048, token) server.modify_profile(pid, "repos", [], token) server.modify_profile(pid, "template-files", {}, token) server.modify_profile(pid, "virt_path", "VolGroup00", token) server.modify_profile(pid, "virt_bridge", "virbr1", token) server.modify_profile(pid, "virt_cpus", 2, token) server.modify_profile(pid, "owners", [ "sam", "dave" ], token) server.modify_profile(pid, "mgmt_classes", "one two three", token) server.modify_profile(pid, "comment", "...", token) server.modify_profile(pid, "name_servers", ["one","two"], token) server.modify_profile(pid, "name_servers_search", ["one","two"], token) server.modify_profile(pid, "redhat_management_key", "BETA", token) server.modify_distro(did, "redhat_management_server", "sat.example.com", token) server.save_profile(pid, token) api.deserialize() assert api.find_profile("profile1") != None sid = server.new_system(token) server.modify_system(sid, 'name', 'system1', token) server.modify_system(sid, 'hostname', 'system1', token) server.modify_system(sid, 'gateway', '127.0.0.1', token) server.modify_system(sid, 'profile', 'profile1', token) server.modify_system(sid, 'kopts', { "dog" : "fido" }, token) server.modify_system(sid, 'kopts_post', { "cat" : "fluffy" }, token) server.modify_system(sid, 'kickstart', '/var/lib/cobbler/kickstarts/sample.ks', token) server.modify_system(sid, 'netboot_enabled', True, token) server.modify_system(sid, 'virt_path', "/opt/images", token) server.modify_system(sid, 'virt_type', 'qemu', token) server.modify_system(sid, 'name_servers', 'one two three four', token) server.modify_system(sid, 'name_servers_search', 'one two three four', token) server.modify_system(sid, 'modify_interface', { "macaddress-eth0" : "AA:BB:CC:EE:EE:EE", "ipaddress-eth0" : "192.168.10.50", "gateway-eth0" : "192.168.10.1", "virtbridge-eth0" : "virbr0", "dnsname-eth0" : "foo.example.com", "static-eth0" : False, "dhcptag-eth0" : "section2", "staticroutes-eth0" : "a:b:c d:e:f" }, token) server.modify_system(sid, 'modify_interface', { "static-eth1" : False, "staticroutes-eth1" : [ "g:h:i", "j:k:l" ] }, token) server.modify_system(sid, "mgmt_classes", [ "one", "two", "three"], token) server.modify_system(sid, "template_files", {}, token) server.modify_system(sid, "boot_files", {}, token) server.modify_system(sid, "fetchable_files", {}, token) server.modify_system(sid, "comment", "...", token) server.modify_system(sid, "power_address", "power.example.org", token) server.modify_system(sid, "power_type", "ipmitool", token) server.modify_system(sid, "power_user", "Admin", token) server.modify_system(sid, "power_pass", "magic", token) server.modify_system(sid, "power_id", "7", token) server.modify_system(sid, "redhat_management_key", "GAMMA", token) server.modify_distro(did, "redhat_management_server", "spacewalk.example.com", token) server.save_system(sid,token) api.deserialize() assert 
api.find_system("system1") != None # FIXME: add some checks on object contents iid = server.new_image(token) server.modify_image(iid, "name", "image1", token) server.modify_image(iid, "image_type", "iso", token) server.modify_image(iid, "breed", "redhat", token) server.modify_image(iid, "os_version", "rhel5", token) server.modify_image(iid, "arch", "x86_64", token) server.modify_image(iid, "file", "nfs://server/path/to/x.iso", token) server.modify_image(iid, "owners", [ "alex", "michael" ], token) server.modify_image(iid, "virt_auto_boot", 0, token) server.modify_image(iid, "virt_cpus", 1, token) server.modify_image(iid, "virt_file_size", 5, token) server.modify_image(iid, "virt_disk_driver", "raw", token) server.modify_image(iid, "virt_bridge", "virbr0", token) server.modify_image(iid, "virt_path", "VolGroup01", token) server.modify_image(iid, "virt_ram", 1024, token) server.modify_image(iid, "virt_type", "xenpv", token) server.modify_image(iid, "comment", "...", token) server.save_image(iid, token) api.deserialize() assert api.find_image("image1") != None # FIXME: add some checks on object contents # FIXME: repo adds rid = server.new_repo(token) server.modify_repo(rid, "name", "repo1", token) server.modify_repo(rid, "arch", "x86_64", token) server.modify_repo(rid, "mirror", "http://example.org/foo/x86_64", token) server.modify_repo(rid, "keep_updated", True, token) server.modify_repo(rid, "priority", "50", token) server.modify_repo(rid, "rpm_list", [], token) server.modify_repo(rid, "createrepo_flags", "--verbose", token) server.modify_repo(rid, "yumopts", {}, token) server.modify_repo(rid, "owners", [ "slash", "axl" ], token) server.modify_repo(rid, "mirror_locally", True, token) server.modify_repo(rid, "environment", {}, token) server.modify_repo(rid, "comment", "...", token) server.save_repo(rid, token) api.deserialize() assert api.find_repo("repo1") != None # FIXME: add some checks on object contents # test handle lookup did = server.get_distro_handle("distro1", token) assert did != None rid = server.get_repo_handle("repo1", token) assert rid != None iid = server.get_image_handle("image1", token) assert iid != None # test renames rc = server.rename_distro(did, "distro2", token) assert rc == True # object has changed due to parent rename, get a new handle pid = server.get_profile_handle("profile1", token) assert pid != None rc = server.rename_profile(pid, "profile2", token) assert rc == True # object has changed due to parent rename, get a new handle sid = server.get_system_handle("system1", token) assert sid != None rc = server.rename_system(sid, "system2", token) assert rc == True rc = server.rename_repo(rid, "repo2", token) assert rc == True rc = server.rename_image(iid, "image2", token) assert rc == True # FIXME: make the following code unneccessary api.clear() api.deserialize() assert api.find_distro("distro2") != None assert api.find_profile("profile2") != None assert api.find_repo("repo2") != None assert api.find_image("image2") != None assert api.find_system("system2") != None # BOOKMARK: currently here in terms of test testing. 
for d in api.distros(): print "FOUND DISTRO: %s" % d.name assert api.find_distro("distro1") == None assert api.find_profile("profile1") == None assert api.find_repo("repo1") == None assert api.find_image("image1") == None assert api.find_system("system1") == None did = server.get_distro_handle("distro2", token) assert did != None pid = server.get_profile_handle("profile2", token) assert pid != None rid = server.get_repo_handle("repo2", token) assert rid != None sid = server.get_system_handle("system2", token) assert sid != None iid = server.get_image_handle("image2", token) assert iid != None # test copies server.copy_distro(did, "distro1", token) server.copy_profile(pid, "profile1", token) server.copy_repo(rid, "repo1", token) server.copy_image(iid, "image1", token) server.copy_system(sid, "system1", token) api.deserialize() assert api.find_distro("distro2") != None assert api.find_profile("profile2") != None assert api.find_repo("repo2") != None assert api.find_image("image2") != None assert api.find_system("system2") != None assert api.find_distro("distro1") != None assert api.find_profile("profile1") != None assert api.find_repo("repo1") != None assert api.find_image("image1") != None assert api.find_system("system1") != None assert server.last_modified_time() > 0 print server.get_distros_since(2) assert len(server.get_distros_since(2)) > 0 assert len(server.get_profiles_since(2)) > 0 assert len(server.get_systems_since(2)) > 0 assert len(server.get_images_since(2)) > 0 assert len(server.get_repos_since(2)) > 0 assert len(server.get_distros_since(2)) > 0 now = time.time() the_future = time.time() + 99999 assert len(server.get_distros_since(the_future)) == 0 # it would be cleaner to do this from the distro down # and the server.update calls would then be unneeded. server.remove_system("system1", token) server.update() server.remove_profile("profile1", token) server.update() server.remove_distro("distro1", token) server.remove_repo("repo1", token) server.remove_image("image1", token) server.remove_system("system2", token) # again, calls are needed because we're deleting in the wrong # order. A fix is probably warranted for this. server.update() server.remove_profile("profile2", token) server.update() server.remove_distro("distro2", token) server.remove_repo("repo2", token) server.remove_image("image2", token) # have to update the API as it has changed api.update() d1 = api.find_distro("distro1") assert d1 is None assert api.find_profile("profile1") is None assert api.find_repo("repo1") is None assert api.find_image("image1") is None assert api.find_system("system1") is None for x in api.distros(): print "DISTRO REMAINING: %s" % x.name assert api.find_distro("distro2") is None assert api.find_profile("profile2") is None assert api.find_repo("repo2") is None assert api.find_image("image2") is None assert api.find_system("system2") is None # FIXME: should not need cleanup as we've done it above _test_remove_objects() cobbler-2.4.1/cobbler/resource.py000066400000000000000000000041041227367477500167620ustar00rootroot00000000000000""" An Resource is a serializable thing that can appear in a Collection Copyright 2006-2009, Red Hat, Inc and Others Kelsey Hightower This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
""" import item import utils from cexceptions import CX class Resource(item.Item): """ Base Class for management resources. """ def set_action(self,action): """ All management resources have an action. Action determine weather a most resources should be created or removed, and if packages should be installed or un-installed. """ action = action.lower() valid_actions = ['create','remove'] if action not in valid_actions: raise CX('%s is not a valid action' % action) self.action = action return True def set_group(self,group): """ Unix group ownership of a file or directory. """ self.group = group return True def set_mode(self,mode): """ Unix file permission mode ie: '0644' assigned to file and directory resources. """ self.mode = mode return True def set_owner(self,owner): """ Unix owner of a file or directory """ self.owner = owner return True def set_path(self,path): """ File path used by file and directory resources. Normally a absolute path of the file or directory to create or manage. """ self.path = path return True def set_template(self,template): """ Path to cheetah template on cobbler's local file system. Used to generate file data shipped to koan via json. All templates have access to flatten ksmeta data. """ self.template = template return Truecobbler-2.4.1/cobbler/serializer.py000066400000000000000000000115031227367477500173050ustar00rootroot00000000000000""" Serializer code for cobbler Now adapted to support different storage backends Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import errno import os from utils import _ import fcntl import traceback import sys import signal import time from cexceptions import * import api as cobbler_api LOCK_ENABLED = True LOCK_HANDLE = None def handler(num,frame): print >> sys.stderr, "Ctrl-C not allowed during writes. Please wait." return True def __grab_lock(): """ Dual purpose locking: (A) flock to avoid multiple process access (B) block signal handler to avoid ctrl+c while writing YAML """ try: if LOCK_ENABLED: if not os.path.exists("/var/lib/cobbler/lock"): fd = open("/var/lib/cobbler/lock","w+") fd.close() LOCK_HANDLE = open("/var/lib/cobbler/lock","r") fcntl.flock(LOCK_HANDLE.fileno(), fcntl.LOCK_EX) return True except: # this is pretty much FATAL, avoid corruption and quit now. traceback.print_exc() sys.exit(7) def __release_lock(with_changes=False): if with_changes: # this file is used to know when the last config change # was made -- allowing the API to work more smoothly without # a lot of unneccessary reloads. fd = os.open("/var/lib/cobbler/.mtime", os.O_CREAT|os.O_RDWR, 0200) os.write(fd, "%f" % time.time()) os.close(fd) if LOCK_ENABLED: LOCK_HANDLE = open("/var/lib/cobbler/lock","r") fcntl.flock(LOCK_HANDLE.fileno(), fcntl.LOCK_UN) LOCK_HANDLE.close() return True def serialize(obj): """ Save a collection to disk or other storage. 
""" __grab_lock() storage_module = __get_storage_module(obj.collection_type()) storage_module.serialize(obj) __release_lock() return True def serialize_item(collection, item): """ Save an item. """ __grab_lock() storage_module = __get_storage_module(collection.collection_type()) save_fn = getattr(storage_module, "serialize_item", None) if save_fn is None: rc = storage_module.serialize(collection) else: rc = save_fn(collection,item) __release_lock(with_changes=True) return rc def serialize_delete(collection, item): """ Delete an object from a saved state. """ __grab_lock() storage_module = __get_storage_module(collection.collection_type()) delete_fn = getattr(storage_module, "serialize_delete", None) if delete_fn is None: rc = storage_module.serialize(collection) else: rc = delete_fn(collection,item) __release_lock(with_changes=True) return rc def deserialize(obj,topological=True): """ Fill in an empty collection from disk or other storage """ __grab_lock() storage_module = __get_storage_module(obj.collection_type()) rc = storage_module.deserialize(obj,topological) __release_lock() return rc def deserialize_raw(collection_type): """ Return the datastructure corresponding to the serialized disk state, without going through the Cobbler object system. Much faster, when you don't need the objects. """ __grab_lock() storage_module = __get_storage_module(collection_type) rc = storage_module.deserialize_raw(collection_type) __release_lock() return rc def deserialize_item(collection_type, item_name): """ Get a specific record. """ __grab_lock() storage_module = __get_storage_module(collection_type) rc = storage_module.deserialize_item(collection_type, item_name) __release_lock() return rc def deserialize_item_raw(collection_type, item_name): __grab_lock() storage_module = __get_storage_module(collection_type) rc = storage_module.deserialize_item_raw(collection_type, item_name) __release_lock() return rc def __get_storage_module(collection_type): """ Look up serializer in /etc/cobbler/modules.conf """ capi = cobbler_api.BootAPI() return capi.get_module_from_file("serializers",collection_type,"serializer_catalog") if __name__ == "__main__": __grab_lock() __release_lock() cobbler-2.4.1/cobbler/services.py000066400000000000000000000423021227367477500167600ustar00rootroot00000000000000""" Mod Python service functions for Cobbler's public interface (aka cool stuff that works with wget) based on code copyright 2007 Albert P. Tobey additions: 2007-2009 Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import exceptions import xmlrpclib import os import traceback import string import sys import time import urlgrabber import yaml # PyYAML import config # the following imports are largely for the test code import remote import glob import api as cobbler_api import utils import os import os.path import simplejson class CobblerSvc(object): """ Interesting mod python functions are all keyed off the parameter mode, which defaults to index. All options are passed as parameters into the function. """ def __init__(self, server=None, req=None): self.server = server self.remote = None self.req = req self.config = config.Config(self) def __xmlrpc_setup(self): """ Sets up the connection to the Cobbler XMLRPC server. This is the version that does not require logins. """ if self.remote is None: self.remote = xmlrpclib.Server(self.server, allow_none=True) def index(self,**args): return "no mode specified" def debug(self,profile=None,**rest): # the purpose of this method could change at any time # and is intented for temporary test code only, don't rely on it self.__xmlrpc_setup() return self.remote.get_repos_compatible_with_profile(profile) def ks(self,profile=None,system=None,REMOTE_ADDR=None,REMOTE_MAC=None,**rest): """ Generate kickstart files... """ self.__xmlrpc_setup() data = self.remote.generate_kickstart(profile,system,REMOTE_ADDR,REMOTE_MAC) return u"%s" % data def gpxe(self,profile=None,system=None,**rest): """ Generate a gPXE config """ self.__xmlrpc_setup() data = self.remote.generate_gpxe(profile,system) return u"%s" % data def bootcfg(self,profile=None,system=None,**rest): """ Generate a boot.cfg config file. Used primarily for VMware ESXi. """ self.__xmlrpc_setup() data = self.remote.generate_bootcfg(profile,system) return u"%s" % data def script(self,profile=None,system=None,**rest): """ Generate a script based on snippets. Useful for post or late-action scripts where it's difficult to embed the script in the response file. """ self.__xmlrpc_setup() data = self.remote.generate_script(profile,system,rest['query_string']['script'][0]) return u"%s" % data def events(self,user="",**rest): self.__xmlrpc_setup() if user == "": data = self.remote.get_events("") else: data = self.remote.get_events(user) # sort it... 
it looks like { timestamp : [ array of details ] } keylist = data.keys() keylist.sort() results = [] for k in keylist: etime = int(data[k][0]) nowtime = time.time() if ((nowtime - etime) < 30): results.append([k, data[k][0], data[k][1], data[k][2]]) return simplejson.dumps(results) def template(self,profile=None,system=None,path=None,**rest): """ Generate a templated file for the system """ self.__xmlrpc_setup() if path is not None: path = path.replace("_","/") path = path.replace("//","_") else: return "# must specify a template path" if profile is not None: data = self.remote.get_template_file_for_profile(profile,path) elif system is not None: data = self.remote.get_template_file_for_system(system,path) else: data = "# must specify profile or system name" return data def yum(self,profile=None,system=None,**rest): self.__xmlrpc_setup() if profile is not None: data = self.remote.get_repo_config_for_profile(profile) elif system is not None: data = self.remote.get_repo_config_for_system(system) else: data = "# must specify profile or system name" return data def trig(self,mode="?",profile=None,system=None,REMOTE_ADDR=None,**rest): """ Hook to call install triggers. """ self.__xmlrpc_setup() ip = REMOTE_ADDR if profile: rc = self.remote.run_install_triggers(mode,"profile",profile,ip) else: rc = self.remote.run_install_triggers(mode,"system",system,ip) return str(rc) def nopxe(self,system=None,**rest): self.__xmlrpc_setup() return str(self.remote.disable_netboot(system)) def list(self,what="systems",**rest): self.__xmlrpc_setup() buf = "" if what == "systems": listing = self.remote.get_systems() elif what == "profiles": listing = self.remote.get_profiles() elif what == "distros": listing = self.remote.get_distros() elif what == "images": listing = self.remote.get_images() elif what == "repos": listing = self.remote.get_repos() elif what == "mgmtclasses": listing = self.remote.get_mgmtclasses() elif what == "packages": listing = self.remote.get_packages() elif what == "files": listing = self.remote.get_files() else: return "?" for x in listing: buf = buf + "%s\n" % x["name"] return buf def autodetect(self,**rest): self.__xmlrpc_setup() systems = self.remote.get_systems() # if kssendmac was in the kernel options line, see # if a system can be found matching the MAC address. This # is more specific than an IP match. macinput = rest["REMOTE_MAC"] if macinput is not None: # FIXME: will not key off other NICs, problem? mac = macinput.split()[1].strip() else: mac = "None" ip = rest["REMOTE_ADDR"] candidates = [] for x in systems: for y in x["interfaces"]: if x["interfaces"][y]["mac_address"].lower() == mac.lower(): candidates.append(x) if len(candidates) == 0: for x in systems: for y in x["interfaces"]: if x["interfaces"][y]["ip_address"] == ip: candidates.append(x) if len(candidates) == 0: return "FAILED: no match (%s,%s)" % (ip, macinput) elif len(candidates) > 1: return "FAILED: multiple matches" elif len(candidates) == 1: return candidates[0]["name"] def look(self,**rest): # debug only return repr(rest) def findks(self,system=None,profile=None,**rest): self.__xmlrpc_setup() serverseg = "http//%s" % self.config._settings.server name = "?" 
type = "system" if system is not None: url = "%s/cblr/svc/op/ks/system/%s" % (serverseg, name) elif profile is not None: url = "%s/cblr/svc/op/ks/profile/%s" % (serverseg, name) else: name = self.autodetect(**rest) if name.startswith("FAILED"): return "# autodetection %s" % name url = "%s/cblr/svc/op/ks/system/%s" % (serverseg, name) try: return urlgrabber.urlread(url) except: return "# kickstart retrieval failed (%s)" % url def puppet(self,hostname=None,**rest): self.__xmlrpc_setup() if hostname is None: return "hostname is required" settings = self.remote.get_settings() results = self.remote.find_system_by_dns_name(hostname) classes = results.get("mgmt_classes", {}) params = results.get("mgmt_parameters",{}) environ = results.get("status","production") if settings.get("puppet_parameterized_classes",False): for ckey in classes.keys(): tmp = {} class_name = classes[ckey].get("class_name","") if class_name in (None,""): class_name = ckey if classes[ckey].get("is_definition",False): def_tmp = {} def_name = classes[ckey]["params"].get("name","") del classes[ckey]["params"]["name"] if def_name != "": for pkey in classes[ckey]["params"].keys(): def_tmp[pkey] = classes[ckey]["params"][pkey] tmp["instances"] = {def_name:def_tmp} else: # FIXME: log an error here? # skip silently... continue else: for pkey in classes[ckey]["params"].keys(): tmp[pkey] = classes[ckey]["params"][pkey] del classes[ckey] classes[class_name] = tmp else: classes = classes.keys() newdata = { "classes" : classes, "parameters" : params, "environment": environ, } return yaml.dump(newdata,default_flow_style=False) def __test_setup(): # this contains some code from remote.py that has been modified # slightly to add in some extra parameters for these checks. # it can probably be combined into something like a test_utils # module later. api = cobbler_api.BootAPI() fake = open("/tmp/cobbler.fake","w+") fake.write("") fake.close() distro = api.new_distro() distro.set_name("distro0") distro.set_kernel("/tmp/cobbler.fake") distro.set_initrd("/tmp/cobbler.fake") api.add_distro(distro) repo = api.new_repo() repo.set_name("repo0") if not os.path.exists("/tmp/empty"): os.mkdir("/tmp/empty",770) repo.set_mirror("/tmp/empty") files = glob.glob("rpm-build/*.rpm") if len(files) == 0: raise Exception("Tests must be run from the cobbler checkout directory.") rc = utils.subprocess_call(None,"cp rpm-build/*.rpm /tmp/empty",shell=True) api.add_repo(repo) fd = open("/tmp/cobbler_t1","w+") fd.write("$profile_name") fd.close() fd = open("/tmp/cobbler_t2","w+") fd.write("$system_name") fd.close() profile = api.new_profile() profile.set_name("profile0") profile.set_distro("distro0") profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.ks") profile.set_repos(["repo0"]) profile.set_mgmt_classes(["alpha","beta"]) profile.set_ksmeta({"tree":"look_for_this1","gamma":3}) profile.set_template_files("/tmp/cobbler_t1=/tmp/t1-rendered") api.add_profile(profile) system = api.new_system() system.set_name("system0") system.set_hostname("hostname0") system.set_gateway("192.168.1.1") system.set_profile("profile0") system.set_dns_name("hostname0","eth0") system.set_ksmeta({"tree":"look_for_this2"}) system.set_template_files({"/tmp/cobbler_t2":"/tmp/t2-rendered"}) api.add_system(system) image = api.new_image() image.set_name("image0") image.set_file("/tmp/cobbler.fake") api.add_image(image) # perhaps an artifact of the test process? # FIXME: get path (at least webdir) from settings? 
if os.path.exists("/var/www/cobbler/repo_mirror/"): utils.os_system("rm -rf /var/www/cobbler/repo_mirror/repo0") elif os.path.exists("/srv/www/cobbler/repo_mirror/"): utils.os_system("rm -rf /srv/www/cobbler/repo_mirror/repo0") api.reposync(name="repo0") def test_services_access(): import remote remote._test_setup_settings(pxe_once=1) remote._test_bootstrap_restart() remote._test_remove_objects() __test_setup() time.sleep(5) api = cobbler_api.BootAPI() # test mod_python service URLs -- more to be added here templates = [ "sample.ks", "sample_end.ks", "legacy.ks" ] for template in templates: ks = "/var/lib/cobbler/kickstarts/%s" % template p = api.find_profile("profile0") assert p is not None p.set_kickstart(ks) api.add_profile(p) url = "http://127.0.0.1/cblr/svc/op/ks/profile/profile0" data = urlgrabber.urlread(url) assert data.find("look_for_this1") != -1 url = "http://127.0.0.1/cblr/svc/op/ks/system/system0" data = urlgrabber.urlread(url) assert data.find("look_for_this2") != -1 # see if we can pull up the yum configs url = "http://127.0.0.1/cblr/svc/op/yum/profile/profile0" data = urlgrabber.urlread(url) print "D1=%s" % data assert data.find("repo0") != -1 url = "http://127.0.0.1/cblr/svc/op/yum/system/system0" data = urlgrabber.urlread(url) print "D2=%s" % data assert data.find("repo0") != -1 for a in [ "pre", "post" ]: filename = "/var/lib/cobbler/triggers/install/%s/unit_testing" % a fd = open(filename, "w+") fd.write("#!/bin/bash\n") fd.write("echo \"TESTING %s type ($1) name ($2) ip ($3)\" >> /var/log/cobbler/kicklog/cobbler_trigger_test\n" % a) fd.write("exit 0\n") fd.close() utils.os_system("chmod +x %s" % filename) urls = [ "http://127.0.0.1/cblr/svc/op/trig/mode/pre/profile/profile0" "http://127.0.0.1/cblr/svc/op/trig/mode/post/profile/profile0" "http://127.0.0.1/cblr/svc/op/trig/mode/pre/system/system0" "http://127.0.0.1/cblr/svc/op/trig/mode/post/system/system0" ] for x in urls: print "reading: %s" % url data = urlgrabber.urlread(x) print "read: %s" % data time.sleep(5) assert os.path.exists("/var/log/cobbler/kicklog/cobbler_trigger_test") os.unlink("/var/log/cobbler/kicklog/cobbler_trigger_test") os.unlink("/var/lib/cobbler/triggers/install/pre/unit_testing") os.unlink("/var/lib/cobbler/triggers/install/post/unit_testing") # trigger testing complete # now let's test the nopxe URL (Boot loop prevention) sys = api.find_system("system0") sys.set_netboot_enabled(True) api.add_system(sys) # save the system to ensure it's set True url = "http://127.0.0.1/cblr/svc/op/nopxe/system/system0" data = urlgrabber.urlread(url) time.sleep(2) sys = api.find_system("system0") assert str(sys.netboot_enabled).lower() not in [ "1", "true", "yes" ] # now let's test the listing URLs since we document # them even know I don't know of anything relying on them. 
url = "http://127.0.0.1/cblr/svc/op/list/what/distros" assert urlgrabber.urlread(url).find("distro0") != -1 url = "http://127.0.0.1/cblr/svc/op/list/what/profiles" assert urlgrabber.urlread(url).find("profile0") != -1 url = "http://127.0.0.1/cblr/svc/op/list/what/systems" assert urlgrabber.urlread(url).find("system0") != -1 url = "http://127.0.0.1/cblr/svc/op/list/what/repos" assert urlgrabber.urlread(url).find("repo0") != -1 url = "http://127.0.0.1/cblr/svc/op/list/what/images" assert urlgrabber.urlread(url).find("image0") != -1 # the following modes are implemented by external apps # and are not concerned part of cobbler's core, so testing # is less of a priority: # autodetect # findks # these features may be removed in a later release # of cobbler but really aren't hurting anything so there # is no pressing need. # now let's test the puppet external nodes support # and just see if we get valid YAML back without # doing much more url = "http://127.0.0.1/cblr/svc/op/puppet/hostname/hostname0" data = urlgrabber.urlread(url) assert data.find("alpha") != -1 assert data.find("beta") != -1 assert data.find("gamma") != -1 assert data.find("3") != -1 data = yaml.safe_load(data) assert data.has_key("classes") assert data.has_key("parameters") # now let's test the template file serving # which is used by the snippet download_config_files # and also by koan's --update-files url = "http://127.0.0.1/cblr/svc/op/template/profile/profile0/path/_tmp_t1-rendered" data = urlgrabber.urlread(url) assert data.find("profile0") != -1 assert data.find("$profile_name") == -1 url = "http://127.0.0.1/cblr/svc/op/template/system/system0/path/_tmp_t2-rendered" data = urlgrabber.urlread(url) assert data.find("system0") != -1 assert data.find("$system_name") == -1 os.unlink("/tmp/cobbler_t1") os.unlink("/tmp/cobbler_t2") remote._test_remove_objects() cobbler-2.4.1/cobbler/settings.py000066400000000000000000000263331227367477500170030ustar00rootroot00000000000000""" Cobbler app-wide settings Copyright 2006-2008, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils from utils import _ import os.path import glob import re import sys TESTMODE = False # defaults is to be used if the config file doesn't contain the value # we need. 
DEFAULTS = { "anamon_enabled" : [0,"bool"], "allow_duplicate_hostnames" : [0,"bool"], "allow_duplicate_macs" : [0,"bool"], "allow_duplicate_ips" : [0,"bool"], "allow_dynamic_settings" : [0,"bool"], "auth_token_expiration" : [3600,"int"], "bind_chroot_path" : ["","str"], "bind_master" : ["127.0.0.1","str"], "build_reporting_enabled" : [0,"bool"], "build_reporting_to_address" : ["","str"], "build_reporting_sender" : ["","str"], "build_reporting_subject" : ["","str"], "build_reporting_smtp_server" : ["localhost","str"], "build_reporting_ignorelist" : ["","str"], "buildisodir" : ["/var/cache/cobbler/buildiso","str"], "cheetah_import_whitelist" : [["re", "random", "time"],"list"], "client_use_localhost" : ["","str"], "cobbler_master" : ["","str"], "consoles" : ["/var/consoles","str"], "createrepo_flags" : ["-c cache -s sha","str"], "default_deployment_method" : ["ssh","str"], "default_kickstart" : ["/var/lib/cobbler/kickstarts/default.ks","str"], "default_name_servers" : [[],"list"], "default_name_servers_search" : [[],"list"], "default_password_crypted" : ["\$1\$mF86/UHC\$WvcIcX2t6crBz2onWxyac.","str"], "default_template_type" : ["cheetah","str"], "default_virt_bridge" : ["xenbr0","str"], "default_virt_type" : ["auto","str"], "default_virt_file_size" : [5,"int"], "default_virt_disk_driver" : ["raw","str"], "default_virt_ram" : [512,"int"], "default_ownership" : [["admin"],"list"], "enable_gpxe" : [0,"bool"], "enable_menu" : [1,"bool"], "func_master" : ["overlord.example.org","str"], "func_auto_setup" : [0,"bool"], "http_port" : [80,"int"], "isc_set_host_name" : [0,"bool"], "iso_template_dir" : ["/etc/cobbler/iso","str"], "kerberos_realm" : ["EXAMPLE.COM","str"], "kernel_options" : [{"lang":" ", "text":None, "ksdevice":"eth0"},"dict"], "kernel_options_s390x" : [{},"dict"], "ldap_server" : ["grimlock.devel.redhat.com","str"], "ldap_base_dn" : ["DC=devel,DC=redhat,DC=com","str"], "ldap_port" : [389,"int"], "ldap_tls" : ["on","str"], "ldap_anonymous_bind" : [1,"bool"], "ldap_management_default_type" : ["authconfig","str"], "ldap_search_bind_dn" : ["","str"], "ldap_search_passwd" : ["","str"], "ldap_search_prefix" : ['uid=',"str"], "ldap_tls_cacertfile" : ["", "str"], "ldap_tls_keyfile" : ["", "str"], "ldap_tls_certfile" : ["", "str"], "manage_dhcp" : [0,"bool"], "manage_dns" : [0,"bool"], "manage_tftp" : [1,"bool"], "manage_tftpd" : [1,"bool"], "manage_rsync" : [0,"bool"], "manage_forward_zones" : [[],"list"], "manage_reverse_zones" : [[],"list"], "mgmt_classes" : [[],"list"], "mgmt_parameters" : [{},"dict"], "next_server" : ["127.0.0.1","str"], "power_management_default_type" : ["ipmitool","str"], "power_template_dir" : ["/etc/cobbler/power","str"], "puppet_auto_setup" : [0,"bool"], "puppet_parameterized_classes" : [1,"bool"], "puppet_server" : ["puppet","str"], "puppet_version" : [2,"int"], "pxe_just_once" : [0,"bool"], "pxe_template_dir" : ["/etc/cobbler/pxe","str"], "redhat_management_permissive" : [0,"bool"], "redhat_management_type" : ["off","str"], "redhat_management_key" : ["","str"], "redhat_management_server" : ["xmlrpc.rhn.redhat.com","str"], "register_new_installs" : [0,"bool"], "remove_old_puppet_certs_automatically": [0,"bool"], "replicate_rsync_options" : ["-avzH", "str"], "replicate_repo_rsync_options" : ["-avzH", "str"], "reposync_flags" : ["-l -m -d","str"], "restart_dns" : [1,"bool"], "restart_dhcp" : [1,"bool"], "restart_xinetd" : [1,"bool"], "run_install_triggers" : [1,"bool"], "scm_track_enabled" : [0,"bool"], "scm_track_mode" : ["git","str"], "serializer_pretty_json" : 
[0,"bool"], "server" : ["127.0.0.1","str"], "sign_puppet_certs_automatically" : [0,"bool"], "signature_path" : ["/var/lib/cobbler/distro_signatures.json","str"], "signature_url" : ["http://www.cobblerd.org/signatures/latest.json","str"], "snippetsdir" : ["/var/lib/cobbler/snippets","str"], "template_remote_kickstarts" : [0,"bool"], "virt_auto_boot" : [0,"bool"], "webdir" : ["/var/www/cobbler","str"], "xmlrpc_port" : [25151,"int"], "yum_post_install_mirror" : [1,"bool"], "yum_distro_priority" : [1,"int"], "yumdownloader_flags" : ["--resolve","str"], } FIELDS = [ ["name","","","Name",True,"Ex: server",0,"str"], ["value","","","Value",True,"Ex: 127.0.0.1",0,"str"], ] if os.path.exists("/srv/www/"): DEFAULTS["webdir"] = "/srv/www/cobbler" # Autodetect bind chroot configuration # RHEL/Fedora if os.path.exists("/etc/sysconfig/named"): bind_config_filename = "/etc/sysconfig/named" # Debian else: bind_config_filename = None bind_config_files = glob.glob("/etc/default/bind*") for filename in bind_config_files: if os.path.exists(filename): bind_config_filename = filename # Parse the config file if bind_config_filename: bind_config = {} # When running as a webapp we can't access this, but don't need it try: bind_config_file = open(bind_config_filename,"r") except (IOError, OSError): pass else: for line in bind_config_file: if re.match("[a-zA-Z]+=", line): (name, value) = line.rstrip().split("=") bind_config[name] = value.strip('"') # RHEL, SysV Fedora if "ROOTDIR" in bind_config: DEFAULTS["bind_chroot_path"] = bind_config["ROOTDIR"] # Debian, Systemd Fedora if "OPTIONS" in bind_config: rootdirmatch = re.search("-t ([/\w]+)", bind_config["OPTIONS"]) if rootdirmatch is not None: DEFAULTS["bind_chroot_path"] = rootdirmatch.group(1) class Settings: def collection_type(self): return "settings" def __init__(self): """ Constructor. """ self.clear() def clear(self): """ Reset this object to reasonable default values. """ self.__dict__ = {} for key in DEFAULTS.keys(): self.__dict__[key] = DEFAULTS[key][0] def set(self,name,value): return self.__setattr__(name,value) def printable(self): buf = "" buf = buf + _("defaults\n") buf = buf + _("kernel options : %s\n") % self.__dict__['kernel_options'] return buf def to_datastruct(self): """ Return an easily serializable representation of the config. """ return self.__dict__ def from_datastruct(self,datastruct): """ Modify this object to load values in datastruct. """ if datastruct is None: print _("warning: not loading empty structure for %s") % self.filename() return self.clear() self.__dict__.update(datastruct) return self def __setattr__(self,name,value): if DEFAULTS.has_key(name): try: if DEFAULTS[name][1] == "str": value = str(value) elif DEFAULTS[name][1] == "int": value = int(value) elif DEFAULTS[name][1] == "bool": if utils.input_boolean(value): value = 1 else: value = 0 elif DEFAULTS[name][1] == "float": value = float(value) elif DEFAULTS[name][1] == "list": value = utils.input_string_or_list(value) elif DEFAULTS[name][1] == "dict": value = utils.input_string_or_hash(value)[1] except: raise AttributeError, "failed to set %s to %s" % (name,str(value)) self.__dict__[name] = value if not utils.update_settings_file(self.to_datastruct()): raise AttributeError, "failed to save the settings file!" 
return 0 else: raise AttributeError, name def __getattr__(self,name): try: if name == "kernel_options": # backwards compatibility -- convert possible string value to hash (success, result) = utils.input_string_or_hash(self.__dict__[name], " ",allow_multiples=False) self.__dict__[name] = result return result return self.__dict__[name] except: if DEFAULTS.has_key(name): lookup = DEFAULTS[name][0] self.__dict__[name] = lookup return lookup else: raise AttributeError, name cobbler-2.4.1/cobbler/sub_process.py000066400000000000000000001157731227367477500175010ustar00rootroot00000000000000# subprocess - Subprocesses with accessible I/O streams # # For more information about this module, see PEP 324. # # This module should remain compatible with Python 2.2, see PEP 291. # # Copyright (c) 2003-2005 by Peter Astrand # # Licensed to PSF under a Contributor Agreement. # See http://www.python.org/2.4/license for licensing details. r"""subprocess - Subprocesses with accessible I/O streams This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, like: os.system os.spawn* os.popen* popen2.* commands.* Information about how the subprocess module can be used to replace these modules and functions can be found below. Using the subprocess module =========================== This module defines one class called Popen: class Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0): Arguments are: args should be a string, or a sequence of program arguments. The program to execute is normally the first item in the args sequence or string, but can be explicitly set by using the executable argument. On UNIX, with shell=False (default): In this case, the Popen class uses os.execvp() to execute the child program. args should normally be a sequence. A string will be treated as a sequence with the string as the only item (the program to execute). On UNIX, with shell=True: If args is a string, it specifies the command string to execute through the shell. If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments. On Windows: the Popen class uses CreateProcess() to execute the child program, which operates on strings. If args is a sequence, it will be converted to a string using the list2cmdline method. Please note that not all MS Windows applications interpret the command line the same way: The list2cmdline is designed for applications using the same rules as the MS C runtime. bufsize, if given, has the same meaning as the corresponding argument to the built-in open() function: 0 means unbuffered, 1 means line buffered, any other positive value means use a buffer of (approximately) that size. A negative bufsize means to use the system default, which usually means fully buffered. The default value for bufsize is 0 (unbuffered). stdin, stdout and stderr specify the executed programs' standard input, standard output and standard error file handles, respectively. Valid values are PIPE, an existing file descriptor (a positive integer), an existing file object, and None. PIPE indicates that a new pipe to the child should be created. With None, no redirection will occur; the child's file handles will be inherited from the parent. 
Additionally, stderr can be STDOUT, which indicates that the stderr data from the applications should be captured into the same file handle as for stdout. If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. if shell is true, the specified command will be executed through the shell. If cwd is not None, the current directory will be changed to cwd before the child is executed. If env is not None, it defines the environment variables for the new process. If universal_newlines is true, the file objects stdout and stderr are opened as a text files, but lines may be terminated by any of '\n', the Unix end-of-line convention, '\r', the Macintosh convention or '\r\n', the Windows convention. All of these external representations are seen as '\n' by the Python program. Note: This feature is only available if Python is built with universal newline support (the default). Also, the newlines attribute of the file objects stdout, stdin and stderr are not updated by the communicate() method. The startupinfo and creationflags, if given, will be passed to the underlying CreateProcess() function. They can specify things such as appearance of the main window and priority for the new process. (Windows only) This module also defines two shortcut functions: call(*args, **kwargs): Run command with arguments. Wait for command to complete, then return the returncode attribute. The arguments are the same as for the Popen constructor. Example: retcode = call(["ls", "-l"]) Exceptions ---------- Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent. Additionally, the exception object will have one extra attribute called 'child_traceback', which is a string containing traceback information from the childs point of view. The most common exception raised is OSError. This occurs, for example, when trying to execute a non-existent file. Applications should prepare for OSErrors. A ValueError will be raised if Popen is called with invalid arguments. Security -------- Unlike some other popen functions, this implementation will never call /bin/sh implicitly. This means that all characters, including shell metacharacters, can safely be passed to child processes. Popen objects ============= Instances of the Popen class have the following methods: poll() Check if child process has terminated. Returns returncode attribute. wait() Wait for child process to terminate. Returns returncode attribute. communicate(input=None) Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional stdin argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr). Note: The data read is buffered in memory, so do not use this method if the data size is large or unlimited. The following attributes are also available: stdin If the stdin argument is PIPE, this attribute is a file object that provides input to the child process. Otherwise, it is None. stdout If the stdout argument is PIPE, this attribute is a file object that provides output from the child process. Otherwise, it is None. stderr If the stderr argument is PIPE, this attribute is file object that provides error output from the child process. 
Otherwise, it is None. pid The process ID of the child process. returncode The child return code. A None value indicates that the process hasn't terminated yet. A negative value -N indicates that the child was terminated by signal N (UNIX only). Replacing older functions with the subprocess module ==================================================== In this section, "a ==> b" means that b can be used as a replacement for a. Note: All functions in this section fail (more or less) silently if the executed program cannot be found; this module raises an OSError exception. In the following examples, we assume that the subprocess module is imported with "from subprocess import *". Replacing /bin/sh shell backquote --------------------------------- output=`mycmd myarg` ==> output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0] Replacing shell pipe line ------------------------- output=`dmesg | grep hda` ==> p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) output = p2.communicate()[0] Replacing os.system() --------------------- sts = os.system("mycmd" + " myarg") ==> p = Popen("mycmd" + " myarg", shell=True) sts = os.waitpid(p.pid, 0) Note: * Calling the program through the shell is usually not required. * It's easier to look at the returncode attribute than the exitstatus. A more real-world example would look like this: try: retcode = call("mycmd" + " myarg", shell=True) if retcode < 0: print >>sys.stderr, "Child was terminated by signal", -retcode else: print >>sys.stderr, "Child returned", retcode except OSError, e: print >>sys.stderr, "Execution failed:", e Replacing os.spawn* ------------------- P_NOWAIT example: pid = os.spawnlp(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg") ==> pid = Popen(["/bin/mycmd", "myarg"]).pid P_WAIT example: retcode = os.spawnlp(os.P_WAIT, "/bin/mycmd", "mycmd", "myarg") ==> retcode = call(["/bin/mycmd", "myarg"]) Vector example: os.spawnvp(os.P_NOWAIT, path, args) ==> Popen([path] + args[1:]) Environment example: os.spawnlpe(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg", env) ==> Popen(["/bin/mycmd", "myarg"], env={"PATH": "/usr/bin"}) Replacing os.popen* ------------------- pipe = os.popen(cmd, mode='r', bufsize) ==> pipe = Popen(cmd, shell=True, bufsize=bufsize, stdout=PIPE).stdout pipe = os.popen(cmd, mode='w', bufsize) ==> pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin (child_stdin, child_stdout) = os.popen2(cmd, mode, bufsize) ==> p = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, close_fds=True) (child_stdin, child_stdout) = (p.stdin, p.stdout) (child_stdin, child_stdout, child_stderr) = os.popen3(cmd, mode, bufsize) ==> p = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True) (child_stdin, child_stdout, child_stderr) = (p.stdin, p.stdout, p.stderr) (child_stdin, child_stdout_and_stderr) = os.popen4(cmd, mode, bufsize) ==> p = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True) (child_stdin, child_stdout_and_stderr) = (p.stdin, p.stdout) Replacing popen2.* ------------------ Note: If the cmd argument to popen2 functions is a string, the command is executed through /bin/sh. If it is a list, the command is directly executed. 
(child_stdout, child_stdin) = popen2.popen2("somestring", bufsize, mode) ==> p = Popen(["somestring"], shell=True, bufsize=bufsize stdin=PIPE, stdout=PIPE, close_fds=True) (child_stdout, child_stdin) = (p.stdout, p.stdin) (child_stdout, child_stdin) = popen2.popen2(["mycmd", "myarg"], bufsize, mode) ==> p = Popen(["mycmd", "myarg"], bufsize=bufsize, stdin=PIPE, stdout=PIPE, close_fds=True) (child_stdout, child_stdin) = (p.stdout, p.stdin) The popen2.Popen3 and popen3.Popen4 basically works as subprocess.Popen, except that: * subprocess.Popen raises an exception if the execution fails * the capturestderr argument is replaced with the stderr argument. * stdin=PIPE and stdout=PIPE must be specified. * popen2 closes all filedescriptors by default, but you have to specify close_fds=True with subprocess.Popen. """ import sys mswindows = (sys.platform == "win32") import os import types import traceback if mswindows: import threading import msvcrt if 0: # <-- change this to use pywin32 instead of the _subprocess driver import pywintypes from win32api import GetStdHandle, STD_INPUT_HANDLE, \ STD_OUTPUT_HANDLE, STD_ERROR_HANDLE from win32api import GetCurrentProcess, DuplicateHandle, \ GetModuleFileName, GetVersion from win32con import DUPLICATE_SAME_ACCESS, SW_HIDE from win32pipe import CreatePipe from win32process import CreateProcess, STARTUPINFO, \ GetExitCodeProcess, STARTF_USESTDHANDLES, \ STARTF_USESHOWWINDOW, CREATE_NEW_CONSOLE from win32event import WaitForSingleObject, INFINITE, WAIT_OBJECT_0 else: from _subprocess import * class STARTUPINFO: dwFlags = 0 hStdInput = None hStdOutput = None hStdError = None class pywintypes: error = IOError else: import select import errno import fcntl import pickle __all__ = ["Popen", "PIPE", "STDOUT", "call"] try: MAXFD = os.sysconf("SC_OPEN_MAX") except: MAXFD = 256 # True/False does not exist on 2.2.0 try: False except NameError: False = 0 True = 1 _active = [] def _cleanup(): for inst in _active[:]: inst.poll() PIPE = -1 STDOUT = -2 def call(*args, **kwargs): """Run command with arguments. Wait for command to complete, then return the returncode attribute. The arguments are the same as for the Popen constructor. Example: retcode = call(["ls", "-l"]) """ return Popen(*args, **kwargs).wait() def list2cmdline(seq): """ Translate a sequence of arguments into a command line string, using the same rules as the MS C runtime: 1) Arguments are delimited by white space, which is either a space or a tab. 2) A string surrounded by double quotation marks is interpreted as a single argument, regardless of white space contained within. A quoted string can be embedded in an argument. 3) A double quotation mark preceded by a backslash is interpreted as a literal double quotation mark. 4) Backslashes are interpreted literally, unless they immediately precede a double quotation mark. 5) If backslashes immediately precede a double quotation mark, every pair of backslashes is interpreted as a literal backslash. If the number of backslashes is odd, the last backslash escapes the next double quotation mark as described in rule 3. """ # See # http://msdn.microsoft.com/library/en-us/vccelng/htm/progs_12.asp result = [] needquote = False for arg in seq: bs_buf = [] # Add a space to separate this argument from the others if result: result.append(' ') needquote = (" " in arg) or ("\t" in arg) if needquote: result.append('"') for c in arg: if c == '\\': # Don't know if we need to double yet. bs_buf.append(c) elif c == '"': # Double backspaces. 
result.append('\\' * len(bs_buf)*2) bs_buf = [] result.append('\\"') else: # Normal char if bs_buf: result.extend(bs_buf) bs_buf = [] result.append(c) # Add remaining backspaces, if any. if bs_buf: result.extend(bs_buf) if needquote: result.extend(bs_buf) result.append('"') return ''.join(result) class Popen(object): def __init__(self, args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0): """Create new Popen instance.""" _cleanup() if not isinstance(bufsize, (int, long)): raise TypeError("bufsize must be an integer") if mswindows: if preexec_fn is not None: raise ValueError("preexec_fn is not supported on Windows " "platforms") if close_fds: raise ValueError("close_fds is not supported on Windows " "platforms") else: # POSIX if startupinfo is not None: raise ValueError("startupinfo is only supported on Windows " "platforms") if creationflags != 0: raise ValueError("creationflags is only supported on Windows " "platforms") self.stdin = None self.stdout = None self.stderr = None self.pid = None self.returncode = None self.universal_newlines = universal_newlines # Input and output objects. The general principle is like # this: # # Parent Child # ------ ----- # p2cwrite ---stdin---> p2cread # c2pread <--stdout--- c2pwrite # errread <--stderr--- errwrite # # On POSIX, the child objects are file descriptors. On # Windows, these are Windows file handles. The parent objects # are file descriptors on both platforms. The parent objects # are None when not using PIPEs. The child objects are None # when not redirecting. (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) = self._get_handles(stdin, stdout, stderr) self._execute_child(args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) if p2cwrite: self.stdin = os.fdopen(p2cwrite, 'wb', bufsize) if c2pread: if universal_newlines: self.stdout = os.fdopen(c2pread, 'rU', bufsize) else: self.stdout = os.fdopen(c2pread, 'rb', bufsize) if errread: if universal_newlines: self.stderr = os.fdopen(errread, 'rU', bufsize) else: self.stderr = os.fdopen(errread, 'rb', bufsize) _active.append(self) def _translate_newlines(self, data): data = data.replace("\r\n", "\n") data = data.replace("\r", "\n") return data if mswindows: # # Windows methods # def _get_handles(self, stdin, stdout, stderr): """Construct and return tupel with IO objects: p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite """ if stdin == None and stdout == None and stderr == None: return (None, None, None, None, None, None) p2cread, p2cwrite = None, None c2pread, c2pwrite = None, None errread, errwrite = None, None if stdin == None: p2cread = GetStdHandle(STD_INPUT_HANDLE) elif stdin == PIPE: p2cread, p2cwrite = CreatePipe(None, 0) # Detach and turn into fd p2cwrite = p2cwrite.Detach() p2cwrite = msvcrt.open_osfhandle(p2cwrite, 0) elif type(stdin) == types.IntType: p2cread = msvcrt.get_osfhandle(stdin) else: # Assuming file-like object p2cread = msvcrt.get_osfhandle(stdin.fileno()) p2cread = self._make_inheritable(p2cread) if stdout == None: c2pwrite = GetStdHandle(STD_OUTPUT_HANDLE) elif stdout == PIPE: c2pread, c2pwrite = CreatePipe(None, 0) # Detach and turn into fd c2pread = c2pread.Detach() c2pread = msvcrt.open_osfhandle(c2pread, 0) elif type(stdout) == types.IntType: c2pwrite = msvcrt.get_osfhandle(stdout) else: # Assuming 
file-like object c2pwrite = msvcrt.get_osfhandle(stdout.fileno()) c2pwrite = self._make_inheritable(c2pwrite) if stderr == None: errwrite = GetStdHandle(STD_ERROR_HANDLE) elif stderr == PIPE: errread, errwrite = CreatePipe(None, 0) # Detach and turn into fd errread = errread.Detach() errread = msvcrt.open_osfhandle(errread, 0) elif stderr == STDOUT: errwrite = c2pwrite elif type(stderr) == types.IntType: errwrite = msvcrt.get_osfhandle(stderr) else: # Assuming file-like object errwrite = msvcrt.get_osfhandle(stderr.fileno()) errwrite = self._make_inheritable(errwrite) return (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) def _make_inheritable(self, handle): """Return a duplicate of handle, which is inheritable""" return DuplicateHandle(GetCurrentProcess(), handle, GetCurrentProcess(), 0, 1, DUPLICATE_SAME_ACCESS) def _find_w9xpopen(self): """Find and return absolut path to w9xpopen.exe""" w9xpopen = os.path.join(os.path.dirname(GetModuleFileName(0)), "w9xpopen.exe") if not os.path.exists(w9xpopen): # Eeek - file-not-found - possibly an embedding # situation - see if we can locate it in sys.exec_prefix w9xpopen = os.path.join(os.path.dirname(sys.exec_prefix), "w9xpopen.exe") if not os.path.exists(w9xpopen): raise RuntimeError("Cannot locate w9xpopen.exe, which is " "needed for Popen to work with your " "shell or platform.") return w9xpopen def _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite): """Execute program (MS Windows version)""" if not isinstance(args, types.StringTypes): args = list2cmdline(args) # Process startup details default_startupinfo = STARTUPINFO() if startupinfo == None: startupinfo = default_startupinfo if not None in (p2cread, c2pwrite, errwrite): startupinfo.dwFlags |= STARTF_USESTDHANDLES startupinfo.hStdInput = p2cread startupinfo.hStdOutput = c2pwrite startupinfo.hStdError = errwrite if shell: default_startupinfo.dwFlags |= STARTF_USESHOWWINDOW default_startupinfo.wShowWindow = SW_HIDE comspec = os.environ.get("COMSPEC", "cmd.exe") args = comspec + " /c " + args if (GetVersion() >= 0x80000000L or os.path.basename(comspec).lower() == "command.com"): # Win9x, or using command.com on NT. We need to # use the w9xpopen intermediate program. For more # information, see KB Q150956 # (http://web.archive.org/web/20011105084002/http://support.microsoft.com/support/kb/articles/Q150/9/56.asp) w9xpopen = self._find_w9xpopen() args = '"%s" %s' % (w9xpopen, args) # Not passing CREATE_NEW_CONSOLE has been known to # cause random failures on win9x. Specifically a # dialog: "Your program accessed mem currently in # use at xxx" and a hopeful warning about the # stability of your system. Cost is Ctrl+C wont # kill children. creationflags |= CREATE_NEW_CONSOLE # Start the process try: hp, ht, pid, tid = CreateProcess(executable, args, # no special security None, None, # must inherit handles to pass std # handles 1, creationflags, env, cwd, startupinfo) except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really # translate errno using _sys_errlist (or simliar), but # how can this be done from Python? raise WindowsError(*e.args) # Retain the process handle, but close the thread handle self._handle = hp self.pid = pid ht.Close() # Child is launched. Close the parent's copy of those pipe # handles that only the child should have open. 
You need # to make sure that no handles to the write end of the # output pipe are maintained in this process or else the # pipe will not close when the child process exits and the # ReadFile will hang. if p2cread != None: p2cread.Close() if c2pwrite != None: c2pwrite.Close() if errwrite != None: errwrite.Close() def poll(self): """Check if child process has terminated. Returns returncode attribute.""" if self.returncode == None: if WaitForSingleObject(self._handle, 0) == WAIT_OBJECT_0: self.returncode = GetExitCodeProcess(self._handle) _active.remove(self) return self.returncode def wait(self): """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode == None: obj = WaitForSingleObject(self._handle, INFINITE) self.returncode = GetExitCodeProcess(self._handle) _active.remove(self) return self.returncode def _readerthread(self, fh, buffer): buffer.append(fh.read()) def communicate(self, input=None): """Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr).""" stdout = None # Return stderr = None # Return if self.stdout: stdout = [] stdout_thread = threading.Thread(target=self._readerthread, args=(self.stdout, stdout)) stdout_thread.setDaemon(True) stdout_thread.start() if self.stderr: stderr = [] stderr_thread = threading.Thread(target=self._readerthread, args=(self.stderr, stderr)) stderr_thread.setDaemon(True) stderr_thread.start() if self.stdin: if input != None: self.stdin.write(input) self.stdin.close() if self.stdout: stdout_thread.join() if self.stderr: stderr_thread.join() # All data exchanged. Translate lists into strings. if stdout != None: stdout = stdout[0] if stderr != None: stderr = stderr[0] # Translate newlines, if requested. We cannot let the file # object do the translation: It is based on stdio, which is # impossible to combine with select (unless forcing no # buffering). 
if self.universal_newlines and hasattr(open, 'newlines'): if stdout: stdout = self._translate_newlines(stdout) if stderr: stderr = self._translate_newlines(stderr) self.wait() return (stdout, stderr) else: # # POSIX methods # def _get_handles(self, stdin, stdout, stderr): """Construct and return tupel with IO objects: p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite """ p2cread, p2cwrite = None, None c2pread, c2pwrite = None, None errread, errwrite = None, None if stdin == None: pass elif stdin == PIPE: p2cread, p2cwrite = os.pipe() elif type(stdin) == types.IntType: p2cread = stdin else: # Assuming file-like object p2cread = stdin.fileno() if stdout == None: pass elif stdout == PIPE: c2pread, c2pwrite = os.pipe() elif type(stdout) == types.IntType: c2pwrite = stdout else: # Assuming file-like object c2pwrite = stdout.fileno() if stderr == None: pass elif stderr == PIPE: errread, errwrite = os.pipe() elif stderr == STDOUT: errwrite = c2pwrite elif type(stderr) == types.IntType: errwrite = stderr else: # Assuming file-like object errwrite = stderr.fileno() return (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) def _set_cloexec_flag(self, fd): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) def _close_fds(self, but): for i in range(3, MAXFD): if i == but: continue try: os.close(i) except: pass def _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite): """Execute program (POSIX version)""" if isinstance(args, types.StringTypes): args = [args] if shell: args = ["/bin/sh", "-c"] + args if executable == None: executable = args[0] # For transferring possible exec failure from child to parent # The first char specifies the exception type: 0 means # OSError, 1 means some other error. errpipe_read, errpipe_write = os.pipe() self._set_cloexec_flag(errpipe_write) self.pid = os.fork() if self.pid == 0: # Child try: # Close parent's pipe ends if p2cwrite: os.close(p2cwrite) if c2pread: os.close(c2pread) if errread: os.close(errread) os.close(errpipe_read) # Dup fds for child if p2cread: os.dup2(p2cread, 0) if c2pwrite: os.dup2(c2pwrite, 1) if errwrite: os.dup2(errwrite, 2) # Close pipe fds. Make sure we doesn't close the same # fd more than once. if p2cread: os.close(p2cread) if c2pwrite and c2pwrite not in (p2cread,): os.close(c2pwrite) if errwrite and errwrite not in (p2cread, c2pwrite): os.close(errwrite) # Close all other fds, if asked for if close_fds: self._close_fds(but=errpipe_write) if cwd != None: os.chdir(cwd) if preexec_fn: apply(preexec_fn) if env == None: os.execvp(executable, args) else: os.execvpe(executable, args, env) except: exc_type, exc_value, tb = sys.exc_info() # Save the traceback and attach it to the exception object exc_lines = traceback.format_exception(exc_type, exc_value, tb) exc_value.child_traceback = ''.join(exc_lines) os.write(errpipe_write, pickle.dumps(exc_value)) # This exitcode won't be reported to applications, so it # really doesn't matter what we return. 
os._exit(255) # Parent os.close(errpipe_write) if p2cread and p2cwrite: os.close(p2cread) if c2pwrite and c2pread: os.close(c2pwrite) if errwrite and errread: os.close(errwrite) # Wait for exec to fail or succeed; possibly raising exception data = os.read(errpipe_read, 1048576) # Exceptions limited to 1 MB os.close(errpipe_read) if data != "": os.waitpid(self.pid, 0) child_exception = pickle.loads(data) raise child_exception def _handle_exitstatus(self, sts): if os.WIFSIGNALED(sts): self.returncode = -os.WTERMSIG(sts) elif os.WIFEXITED(sts): self.returncode = os.WEXITSTATUS(sts) else: # Should never happen raise RuntimeError("Unknown child exit status!") _active.remove(self) def poll(self): """Check if child process has terminated. Returns returncode attribute.""" if self.returncode == None: try: pid, sts = os.waitpid(self.pid, os.WNOHANG) if pid == self.pid: self._handle_exitstatus(sts) except os.error: pass return self.returncode def wait(self): """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode == None: pid, sts = os.waitpid(self.pid, 0) self._handle_exitstatus(sts) return self.returncode def communicate(self, input=None): """Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr).""" read_set = [] write_set = [] stdout = None # Return stderr = None # Return if self.stdin: # Flush stdio buffer. This might block, if the user has # been writing to .stdin in an uncontrolled fashion. self.stdin.flush() if input: write_set.append(self.stdin) else: self.stdin.close() if self.stdout: read_set.append(self.stdout) stdout = [] if self.stderr: read_set.append(self.stderr) stderr = [] while read_set or write_set: rlist, wlist, xlist = select.select(read_set, write_set, []) if self.stdin in wlist: # When select has indicated that the file is writable, # we can write up to PIPE_BUF bytes without risk # blocking. POSIX defines PIPE_BUF >= 512 bytes_written = os.write(self.stdin.fileno(), input[:512]) input = input[bytes_written:] if not input: self.stdin.close() write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) if data == "": self.stdout.close() read_set.remove(self.stdout) stdout.append(data) if self.stderr in rlist: data = os.read(self.stderr.fileno(), 1024) if data == "": self.stderr.close() read_set.remove(self.stderr) stderr.append(data) # All data exchanged. Translate lists into strings. if stdout != None: stdout = ''.join(stdout) if stderr != None: stderr = ''.join(stderr) # Translate newlines, if requested. We cannot let the file # object do the translation: It is based on stdio, which is # impossible to combine with select (unless forcing no # buffering). if self.universal_newlines and hasattr(open, 'newlines'): if stdout: stdout = self._translate_newlines(stdout) if stderr: stderr = self._translate_newlines(stderr) self.wait() return (stdout, stderr) def _demo_posix(): # # Example 1: Simple redirection: Get process list # plist = Popen(["ps"], stdout=PIPE).communicate()[0] print "Process list:" print plist # # Example 2: Change uid before executing child # if os.getuid() == 0: p = Popen(["id"], preexec_fn=lambda: os.setuid(100)) p.wait() # # Example 3: Connecting several subprocesses # print "Looking for 'hda'..." 
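# --- Illustrative aside (not part of the original module) -----------------
# _execute_child() above reports exec failures through a close-on-exec pipe:
# if execvp() succeeds the write end closes automatically and the parent
# reads nothing; if it fails the child writes a pickled exception, which the
# parent unpickles and re-raises.  The same pattern in miniature
# (run_or_raise is a hypothetical name, not part of this module):
#
#   def run_or_raise(argv):
#       r, w = os.pipe()
#       old = fcntl.fcntl(w, fcntl.F_GETFD)
#       fcntl.fcntl(w, fcntl.F_SETFD, old | fcntl.FD_CLOEXEC)
#       pid = os.fork()
#       if pid == 0:                          # child
#           try:
#               os.close(r)
#               os.execvp(argv[0], argv)      # on success, w closes via CLOEXEC
#           except:
#               os.write(w, pickle.dumps(sys.exc_info()[1]))
#           os._exit(255)
#       os.close(w)                           # parent
#       data = os.read(r, 1048576)
#       os.close(r)
#       if data != "":
#           os.waitpid(pid, 0)
#           raise pickle.loads(data)
#       return pid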
p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) print repr(p2.communicate()[0]) # # Example 4: Catch execution error # print print "Trying a weird file..." try: print Popen(["/this/path/does/not/exist"]).communicate() except OSError, e: if e.errno == errno.ENOENT: print "The file didn't exist. I thought so..." print "Child traceback:" print e.child_traceback else: print "Error", e.errno else: print >>sys.stderr, "Gosh. No error." def _demo_windows(): # # Example 1: Connecting several subprocesses # print "Looking for 'PROMPT' in set output..." p1 = Popen("set", stdout=PIPE, shell=True) p2 = Popen('find "PROMPT"', stdin=p1.stdout, stdout=PIPE) print repr(p2.communicate()[0]) # # Example 2: Simple execution of program # print "Executing calc..." p = Popen("calc") p.wait() if __name__ == "__main__": if mswindows: _demo_windows() else: _demo_posix() cobbler-2.4.1/cobbler/templar.py000066400000000000000000000214731227367477500166070ustar00rootroot00000000000000""" Cobbler uses Cheetah templates for lots of stuff, but there's some additional magic around that to deal with snippets/etc. (And it's not spelled wrong!) Copyright 2008-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import glob import pprint from cexceptions import * from template_api import Template from utils import * import utils import clogger import Cheetah major, minor, release = Cheetah.Version.split('.')[0:3] fix_cheetah_class = int(major) >= 2 and int(minor) >=4 and int(release) >= 2 jinja2_available = False try: import jinja2 jinja2_available = True except: """ FIXME: log a message here """ pass try: import functools except: functools = None class Templar: def __init__(self,config,logger=None): """ Constructor """ self.config = None self.settings = None if config: self.config = config self.settings = config.settings() self.last_errors = [] if logger is None: logger = clogger.Logger() self.logger = logger def check_for_invalid_imports(self,data): """ Ensure that Cheetah code is not importing Python modules that may allow for advanced priveledges by ensuring we whitelist the imports that we allow """ lines = data.split("\n") for line in lines: if line.find("#import") != -1: rest=line.replace("#import","").replace(" ","").strip() if self.settings and rest not in self.settings.cheetah_import_whitelist: raise CX("potentially insecure import in template: %s" % rest) def render(self, data_input, search_table, out_path, subject=None, template_type=None): """ Render data_input back into a file. 
data_input is either a string or a filename search_table is a hash of metadata keys and values out_path if not-none writes the results to a file (though results are always returned) subject is a profile or system object, if available (for snippet eval) """ if not isinstance(data_input, basestring): raw_data = data_input.read() else: raw_data = data_input lines = raw_data.split('\n') if not template_type: # Assume we're using the default template type, if set in # the settinigs file or use cheetah as the last resort if self.settings and self.settings.default_template_type: template_type = self.settings.default_template_type else: template_type = "cheetah" if len(lines) > 0 and lines[0].find("#template=") == 0: # pull the template type out of the first line and then drop # it and rejoin them to pass to the template language template_type = lines[0].split("=")[1].strip().lower() del lines[0] raw_data = string.join(lines,"\n") if template_type == "cheetah": data_out = self.render_cheetah(raw_data, search_table, subject) elif template_type == "jinja2": if jinja2_available: data_out = self.render_jinja2(raw_data, search_table, subject) else: return "# ERROR: JINJA2 NOT AVAILABLE. Maybe you need to install python-jinja2?\n" else: return "# ERROR: UNSUPPORTED TEMPLATE TYPE (%s)" % str(template_type) # now apply some magic post-filtering that is used by cobbler import and some # other places. Forcing folks to double escape things would be very unwelcome. hp = search_table.get("http_port","80") server = search_table.get("server","server.example.org") if hp not in (80, '80'): repstr = "%s:%s" % (server, hp) else: repstr = server search_table["http_server"] = repstr for x in search_table.keys(): if type(x) == str: data_out = data_out.replace("@@%s@@" % str(x), str(search_table[str(x)])) # remove leading newlines which apparently breaks AutoYAST ? if data_out.startswith("\n"): data_out = data_out.lstrip() # if requested, write the data out to a file if out_path is not None: utils.mkdir(os.path.dirname(out_path)) fd = open(out_path, "w+") fd.write(data_out) fd.close() return data_out def render_cheetah(self, raw_data, search_table, subject=None): """ Render data_input back into a file. data_input is either a string or a filename search_table is a hash of metadata keys and values (though results are always returned) subject is a profile or system object, if available (for snippet eval) """ self.check_for_invalid_imports(raw_data) # backward support for Cobbler's legacy (and slightly more readable) # template syntax. raw_data = raw_data.replace("TEMPLATE::","$") # HACK: the ksmeta field may contain nfs://server:/mount in which # case this is likely WRONG for kickstart, which needs the NFS # directive instead. Do this to make the templates work. 
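# --- Illustrative aside (not part of templar.py) ---------------------------
# Two small behaviours of render() above, shown standalone: an optional
# "#template=" hint on the first line selects the template engine, and
# "@@var@@" markers are replaced from the search table after rendering.
# The template text and search_table values below are made up.
#
#   raw = "#template=jinja2\nhostname: {{ system_name }}"
#   lines = raw.split("\n")
#   template_type = lines[0].split("=")[1].strip().lower()   # -> "jinja2"
#   raw = "\n".join(lines[1:])                                # hint line dropped
#
#   search_table = {"http_server": "cobbler.example.org"}
#   data_out = "url --url=http://@@http_server@@/cblr/links/distro"
#   for key in search_table.keys():
#       if type(key) == str:
#           data_out = data_out.replace("@@%s@@" % key, str(search_table[key]))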
newdata = "" if search_table.has_key("tree") and search_table["tree"].startswith("nfs://"): for line in raw_data.split("\n"): if line.find("--url") != -1 and line.find("url ") != -1: rest = search_table["tree"][6:] # strip off "nfs://" part try: (server, dir) = rest.split(":",2) except: raise CX("Invalid syntax for NFS path given during import: %s" % search_table["tree"]) line = "nfs --server %s --dir %s" % (server,dir) # but put the URL part back in so koan can still see # what the original value was line = line + "\n" + "#url --url=%s" % search_table["tree"] newdata = newdata + line + "\n" raw_data = newdata # tell Cheetah not to blow up if it can't find a symbol for something raw_data = "#errorCatcher ListErrors\n" + raw_data table_copy = search_table.copy() # for various reasons we may want to call a module inside a template and pass # it all of the template variables. The variable "template_universe" serves # this purpose to make it easier to iterate through all of the variables without # using internal Cheetah variables search_table.update({ "template_universe" : table_copy }) # now do full templating scan, where we will also templatify the snippet insertions t = Template(source=raw_data, searchList=[search_table], compilerSettings={'useStackFrame':False}) if fix_cheetah_class and functools is not None: t.SNIPPET = functools.partial(t.SNIPPET, t) t.read_snippet = functools.partial(t.read_snippet, t) try: data_out = t.respond() self.last_errors = t.errorCatcher().listErrors() if self.last_errors: self.logger.warning("errors were encountered rendering the template") self.logger.warning("\n" + pprint.pformat(self.last_errors)) except Exception, e: self.logger.error(utils.cheetah_exc(e)) raise CX("Error templating file, check cobbler.log for more details") return data_out def render_jinja2(self, raw_data, search_table, subject=None): """ Render data_input back into a file. data_input is either a string or a filename search_table is a hash of metadata keys and values out_path if not-none writes the results to a file (though results are always returned) subject is a profile or system object, if available (for snippet eval) """ try: template = jinja2.Template(raw_data) data_out = template.render(search_table) except: data_out = "# EXCEPTION OCCURRED DURING JINJA2 TEMPLATE PROCESSING\n" return data_out cobbler-2.4.1/cobbler/template_api.py000066400000000000000000000204371227367477500176060ustar00rootroot00000000000000""" Cobbler provides builtin methods for use in Cheetah templates. $SNIPPET is one such function and is now used to implement Cobbler's SNIPPET:: syntax. Written by Daniel Guernsey Contributions by Michael DeHaan US Government work; No explicit copyright attached to this file. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import Cheetah.Template import os.path import re import utils from cexceptions import * CHEETAH_MACROS_FILE = '/etc/cobbler/cheetah_macros' # This class is defined using the Cheetah language. Using the 'compile' function # we can compile the source directly into a python class. This class will allow # us to define the cheetah builtins. BuiltinTemplate = Cheetah.Template.Template.compile(source="\n".join([ # This part (see 'Template' below # for the other part) handles the actual inclusion of the file contents. We # still need to make the snippet's namespace (searchList) available to the # template calling SNIPPET (done in the other part). # Moved the other functions into /etc/cobbler/cheetah_macros # Left SNIPPET here since it is very important. # This function can be used in two ways: # Cheetah syntax: # # $SNIPPET('my_snippet') # # SNIPPET syntax: # # SNIPPET::my_snippet # # This follows all of the rules of snippets and advanced snippets. First it # searches for a per-system snippet, then a per-profile snippet, then a # general snippet. If none is found, a comment explaining the error is # substituted. "#def SNIPPET($file)", "#set $snippet = $read_snippet($file)", "#if $snippet", "#include source=$snippet", "#else", "# Error: no snippet data for $file", "#end if", "#end def", ]) + "\n") MacrosTemplate = Cheetah.Template.Template.compile(file=CHEETAH_MACROS_FILE) class Template(BuiltinTemplate, MacrosTemplate): """ This class will allow us to include any pure python builtin functions. It derives from the cheetah-compiled class above. This way, we can include both types (cheetah and pure python) of builtins in the same base template. We don't need to override __init__ """ # OK, so this function gets called by Cheetah.Template.Template.__init__ to # compile the template into a class. This is probably a kludge, but it # add a baseclass argument to the standard compile (see Cheetah's compile # docstring) and returns the resulting class. This argument, of course, # points to this class. Now any methods entered here (or in the base class # above) will be accessible to all cheetah templates compiled by cobbler. def compile(klass, *args, **kwargs): """ Compile a cheetah template with cobbler modifications. Modifications include SNIPPET:: syntax replacement and inclusion of cobbler builtin methods. """ def replacer(match): return "$SNIPPET('%s')" % match.group(1) def preprocess(source, file): # Normally, the cheetah compiler worries about this, but we need to # preprocess the actual source if source is None: if isinstance(file, basestring): if os.path.exists(file): f = open(file) source = "#errorCatcher Echo\n" + f.read() f.close() else: source = "# Unable to read %s\n" % file elif hasattr(file, 'read'): source = file.read() file = None # Stop Cheetah from throwing a fit. 
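# --- Illustrative aside (not part of template_api.py) ----------------------
# The preprocessor below rewrites Cobbler's legacy "SNIPPET::name" syntax
# into a Cheetah method call before compilation, e.g.:
#
#   rx = re.compile(r'SNIPPET::([A-Za-z0-9_\-\/\.]+)')
#   rx.sub(lambda m: "$SNIPPET('%s')" % m.group(1),
#          "SNIPPET::network_config")
#   # -> "$SNIPPET('network_config')"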
rx = re.compile(r'SNIPPET::([A-Za-z0-9_\-\/\.]+)') results = rx.sub(replacer, source) return (results, file) preprocessors = [preprocess] if kwargs.has_key('preprocessors'): preprocessors.extend(kwargs['preprocessors']) kwargs['preprocessors'] = preprocessors # Instruct Cheetah to use this class as the base for all cheetah templates if not kwargs.has_key('baseclass'): kwargs['baseclass'] = Template # Now let Cheetah do the actual compilation return Cheetah.Template.Template.compile(*args, **kwargs) compile = classmethod(compile) def read_snippet(self, file): """ Locate the appropriate snippet for the current system and profile and read it's contents. This file could be located in a remote location. This will first check for a per-system snippet, a per-profile snippet, a distro snippet, and a general snippet. If no snippet is located, it returns None. """ for snipclass in ('system', 'profile', 'distro'): if self.varExists('%s_name' % snipclass): fullpath = '%s/per_%s/%s/%s' % (self.getVar('snippetsdir'), snipclass, file, self.getVar('%s_name' % snipclass)) try: contents = utils.read_file_contents(fullpath, fetch_if_remote=True) return contents except FileNotFoundException: pass try: return "#errorCatcher ListErrors\n" + utils.read_file_contents('%s/%s' % (self.getVar('snippetsdir'), file), fetch_if_remote=True) except FileNotFoundException: return None def SNIPPET(self, file): """ Include the contents of the named snippet here. This is equivalent to the #include directive in Cheetah, except that it searches for system and profile specific snippets, and it includes the snippet's namespace. This may be a little frobby, but it's really cool. This is a pure python portion of SNIPPET that appends the snippet's searchList to the caller's searchList. This makes any #defs within a given snippet available to the template that included the snippet. """ # First, do the actual inclusion. Cheetah (when processing #include) # will track the inclusion in self._CHEETAH__cheetahIncludes result = BuiltinTemplate.SNIPPET(self, file) # Now do our dirty work: locate the new include, and append its # searchList to ours. # We have to compute the full path again? Eww. # This weird method is getting even weirder, the cheetah includes keys # are no longer filenames but actual contents of snippets. Regardless # this seems to work and hopefully it will be ok. snippet_contents = self.read_snippet(file); if snippet_contents: # Only include what we don't already have. Because Cheetah # passes our searchList into included templates, the snippet's # searchList will include this templates searchList. We need to # avoid duplicating entries. childList = self._CHEETAH__cheetahIncludes[snippet_contents].searchList() myList = self.searchList() for childElem in childList: if not childElem in myList: myList.append(childElem) return result # This function is used by several cheetah methods in cheetah_macros. # It can be used by the end user as well. # Ex: Replace all instances of '/etc/banner' with a value stored in # $new_banner # # sed 's/$sedesc("/etc/banner")/$sedesc($new_banner)/' # def sedesc(self, value): """ Escape a string for use in sed. 
""" def escchar(c): if c in '/^.[]$()|*+?{}\\': return '\\' + c else: return c return ''.join([escchar(c) for c in value]) cobbler-2.4.1/cobbler/test_basic.py000066400000000000000000001155251227367477500172650ustar00rootroot00000000000000# Test cases for Cobbler # # Michael DeHaan import sys import unittest import os import tempfile import shutil import traceback from cexceptions import * import modules.authz_ownership as authz_module import api import config import utils utils.TEST_MODE = True FAKE_INITRD="initrd-2.6.15-1.2054_FAKE.img" FAKE_INITRD2="initrd-2.5.16-2.2055_FAKE.img" FAKE_INITRD3="initrd-1.8.18-3.9999_FAKE.img" FAKE_KERNEL="vmlinuz-2.6.15-1.2054_FAKE" FAKE_KERNEL2="vmlinuz-2.5.16-2.2055_FAKE" FAKE_KERNEL3="vmlinuz-1.8.18-3.9999_FAKE" cleanup_dirs = [] class BootTest(unittest.TestCase): def setUp(self): # Create temp dir self.topdir = "/tmp/cobbler_test" try: os.makedirs(self.topdir) except: pass self.fk_initrd = os.path.join(self.topdir, FAKE_INITRD) self.fk_initrd2 = os.path.join(self.topdir, FAKE_INITRD2) self.fk_initrd3 = os.path.join(self.topdir, FAKE_INITRD3) self.fk_kernel = os.path.join(self.topdir, FAKE_KERNEL) self.fk_kernel2 = os.path.join(self.topdir, FAKE_KERNEL2) self.fk_kernel3 = os.path.join(self.topdir, FAKE_KERNEL3) self.api = api.BootAPI() create = [ self.fk_initrd, self.fk_initrd2, self.fk_initrd3, self.fk_kernel, self.fk_kernel2, self.fk_kernel3 ] for fn in create: f = open(fn,"w+") f.close() self.__make_basic_config() def tearDown(self): for x in self.api.distros(): self.api.remove_distro(x,recursive=True) for y in self.api.repos(): self.api.remove_repo(y) for z in self.api.images(): self.api.remove_image(z) shutil.rmtree(self.topdir,ignore_errors=True) self.api = None def __make_basic_config(self): distro = self.api.new_distro() self.assertTrue(distro.set_name("testdistro0")) self.assertTrue(distro.set_kernel(self.fk_kernel)) self.assertTrue(distro.set_initrd(self.fk_initrd)) self.assertTrue(self.api.add_distro(distro)) self.assertTrue(self.api.find_distro(name="testdistro0")) profile = self.api.new_profile() self.assertTrue(profile.set_name("testprofile0")) self.assertTrue(profile.set_distro("testdistro0")) self.assertTrue(profile.set_kickstart("/var/lib/cobbler/kickstarts/sample_end.ks")) self.assertTrue(self.api.add_profile(profile)) self.assertTrue(self.api.find_profile(name="testprofile0")) system = self.api.new_system() self.assertTrue(system.set_name("testsystem0")) self.assertTrue(system.set_mac_address("BB:EE:EE:EE:EE:FF","eth0")) self.assertTrue(system.set_ip_address("192.51.51.50","eth0")) self.assertTrue(system.set_profile("testprofile0")) self.assertTrue(self.api.add_system(system)) self.assertTrue(self.api.find_system(name="testsystem0")) repo = self.api.new_repo() try: os.makedirs("/tmp/test_example_cobbler_repo") except: pass fd = open("/tmp/test_example_cobbler_repo/test.file", "w+") fd.write("hello!") fd.close() self.assertTrue(repo.set_name("testrepo0")) self.assertTrue(repo.set_mirror("/tmp/test_example_cobbler_repo")) self.assertTrue(self.api.add_repo(repo)) image = self.api.new_image() self.assertTrue(image.set_name("testimage0")) self.assertTrue(image.set_file(self.fk_initrd)) # meaningless path self.assertTrue(self.api.add_image(image)) class RenameTest(BootTest): def __tester(self, finder, renamer, name1, name2): x = finder(name1) assert x is not None renamer(x, name2) x = finder(name1) y = finder(name2) assert x is None assert y is not None renamer(y, name1) x = finder(name1) y = finder(name2) assert x is not None assert y is 
None def test_renames(self): self.__tester(self.api.find_distro, self.api.rename_distro, "testdistro0", "testdistro1") self.__tester(self.api.find_profile, self.api.rename_profile, "testprofile0", "testprofile1") self.__tester(self.api.find_system, self.api.rename_system, "testsystem0", "testsystem1") self.__tester(self.api.find_repo, self.api.rename_repo, "testrepo0", "testrepo1") self.__tester(self.api.find_image, self.api.rename_image, "testimage0", "testimage1") class DuplicateNamesAndIpPrevention(BootTest): """ The command line (and WebUI) have checks to prevent new system additions from conflicting with existing systems and overwriting them inadvertantly. This class tests that code. NOTE: General API users will /not/ encounter these checks. """ def test_duplicate_prevention(self): # find things we are going to test with distro1 = self.api.find_distro(name="testdistro0") profile1 = self.api.find_profile(name="testprofile0") system1 = self.api.find_system(name="testsystem0") repo1 = self.api.find_repo(name="testrepo0") # make sure we can't overwrite a previous distro with # the equivalent of an "add" (not an edit) on the # command line. distro2 = self.api.new_distro() self.assertTrue(distro2.set_name("testdistro0")) self.assertTrue(distro2.set_kernel(self.fk_kernel)) self.assertTrue(distro2.set_initrd(self.fk_initrd)) self.assertTrue(distro2.set_owners("canary")) # this should fail try: self.api.add_distro(distro2,check_for_duplicate_names=True) self.assertTrue(1==2,"distro add should fail") except CobblerException: pass except: self.assertTrue(1==2,"exception type") # we caught the exception but make doubly sure there was no write distro_check = self.api.find_distro(name="testdistro0") self.assertTrue("canary" not in distro_check.owners) # repeat the check for profiles profile2 = self.api.new_profile() self.assertTrue(profile2.set_name("testprofile0")) self.assertTrue(profile2.set_distro("testdistro0")) # this should fail try: self.api.add_profile(profile2,check_for_duplicate_names=True) self.assertTrue(1==2,"profile add should fail") except CobblerException: pass except: traceback.print_exc() self.assertTrue(1==2,"exception type") # repeat the check for systems (just names this time) system2 = self.api.new_system() self.assertTrue(system2.set_name("testsystem0")) self.assertTrue(system2.set_profile("testprofile0")) # this should fail try: self.api.add_system(system2,check_for_duplicate_names=True) self.assertTrue(1==2,"system add should fail") except CobblerException: pass except: traceback.print_exc() self.assertTrue(1==2,"exception type") # repeat the check for repos repo2 = self.api.new_repo() self.assertTrue(repo2.set_name("testrepo0")) self.assertTrue(repo2.set_mirror("http://imaginary")) # self.failUnlessRaises(CobblerException,self.api.add_repo,[repo,check_for_duplicate_names=True]) try: self.api.add_repo(repo2,check_for_duplicate_names=True) self.assertTrue(1==2,"repo add should fail") except CobblerException: pass except: self.assertTrue(1==2,"exception type") # now one more check to verify we can't add a system # of a different name but duplicate netinfo. system3 = self.api.new_system() self.assertTrue(system3.set_name("unused_name")) self.assertTrue(system3.set_profile("testprofile0")) # MAC is initially accepted self.assertTrue(system3.set_mac_address("BB:EE:EE:EE:EE:FF","eth3")) # can't add as this MAC already exists! 
#self.failUnlessRaises(CobblerException,self.api.add_system,[system3,check_for_duplicate_names=True,check_for_duplicate_netinfo=True) try: self.api.add_system(system3,check_for_duplicate_names=True,check_for_duplicate_netinfo=True) except CobblerException: pass except: traceback.print_exc() self.assertTrue(1==2,"wrong exception type") # set the MAC to a different value and try again self.assertTrue(system3.set_mac_address("FF:EE:EE:EE:EE:DD","eth3")) # it should work self.assertTrue(self.api.add_system(system3,check_for_duplicate_names=False,check_for_duplicate_netinfo=True)) # now set the IP so that collides self.assertTrue(system3.set_ip_address("192.51.51.50","eth6")) # this should also fail # self.failUnlessRaises(CobblerException,self.api.add_system,[system3,check_for_duplicate_names=True,check_for_duplicate_netinfo=True) try: self.api.add_system(system3,check_for_duplicate_names=True,check_for_duplicate_netinfo=True) self.assertTrue(1==2,"system add should fail") except CobblerException: pass except: self.assertTrue(1==2,"wrong exception type") # fix the IP and Mac back self.assertTrue(system3.set_ip_address("192.86.75.30","eth6")) self.assertTrue(system3.set_mac_address("AE:BE:DE:CE:AE:EE","eth3")) # now it works again # note that we will not check for duplicate names as we want # to test this as an 'edit' operation. self.assertTrue(self.api.add_system(system3,check_for_duplicate_names=False,check_for_duplicate_netinfo=True)) # FIXME: note -- how netinfo is handled when doing renames/copies/edits # is more involved and we probably should add tests for that also. class Ownership(BootTest): def test_ownership_params(self): fd = open("/tmp/test_cobbler_kickstart","w+") fd.write("") fd.close() # find things we are going to test with distro = self.api.find_distro(name="testdistro0") profile = self.api.find_profile(name="testprofile0") system = self.api.find_system(name="testsystem0") repo = self.api.find_repo(name="testrepo0") # as we didn't specify an owner for objects, the default # ownership should be as specified in settings default_owner = self.api.settings().default_ownership for obj in [ distro, profile, system, repo ]: self.assertTrue(obj is not None) self.assertEquals(obj.owners, default_owner, "default owner for %s" % obj) # verify we can test things self.assertTrue(distro.set_owners(["superlab","basement1"])) self.assertTrue(profile.set_owners(["superlab","basement1"])) self.assertTrue(profile.set_kickstart("/tmp/test_cobbler_kickstart")) self.assertTrue(system.set_owners(["superlab","basement1","basement3"])) self.assertTrue(repo.set_owners([])) self.api.add_distro(distro) self.api.add_profile(profile) self.api.add_system(system) self.api.add_repo(repo) # now edit the groups file. 
We won't test the full XMLRPC # auth stack here, but just the module in question def authorize(api, user, resource, arg1=None, arg2=None): return authz_module.authorize(api, user,resource,arg1,arg2) # if the users.conf file exists, back it up for the tests if os.path.exists("/etc/cobbler/users.conf"): shutil.copyfile("/etc/cobbler/users.conf","/tmp/cobbler_ubak") fd = open("/etc/cobbler/users.conf","w+") fd.write("\n") fd.write("[admins]\n") fd.write("admin1 = 1\n") fd.write("\n") fd.write("[superlab]\n") fd.write("superlab1 = 1\n") fd.write("superlab2 = 1\n") fd.write("\n") fd.write("[basement]\n") fd.write("basement1 = 1\n") fd.write("basement2 = 1\n") fd.write("basement3 = 1\n") fd.close() xo = self.api.find_distro("testdistro0") xn = "testdistro0" ro = self.api.find_repo("testrepo0") rn = "testrepo0" # WARNING: complex test explanation follows! # we must ensure those who can edit the kickstart are only those # who can edit all objects that depend on the said kickstart # in this test, superlab & basement1 can edit test_profile0 # superlab & basement1/3 can edit test_system0 # the systems share a common kickstart record (in this case # explicitly set, which is a bit arbitrary as they are parent/child # nodes, but the concept is not limited to this). # Therefore the correct result is that the following users can edit: # admin1, superlab1, superlab2 # And these folks can't # basement1, basement2 # Basement2 is rejected because the kickstart is shared by something # basmeent2 can not edit. for user in [ "admin1", "superlab1", "superlab2", "basement1" ]: self.assertTrue(authorize(self.api, user, "write_kickstart", "/tmp/test_cobbler_kickstart"), "%s can modify_kickstart" % user) for user in [ "basement2", "dne" ]: self.assertFalse(authorize(self.api, user, "write_kickstart", "/tmp/test_cobbler_kickstart"), "%s can modify_kickstart" % user) # ensure admin1 can edit (he's an admin) and do other tasks # same applies to basement1 who is explicitly added as a user # and superlab1 who is in a group in the ownership list for user in ["admin1","superlab1","basement1"]: self.assertTrue(authorize(self.api, user, "save_distro", xo),"%s can save_distro" % user) self.assertTrue(authorize(self.api, user, "modify_distro", xo),"%s can modify_distro" % user) self.assertTrue(authorize(self.api, user, "copy_distro", xo),"%s can copy_distro" % user) self.assertTrue(authorize(self.api, user, "remove_distro", xn),"%s can remove_distro" % user) # ensure all users in the file can sync for user in [ "admin1", "superlab1", "basement1", "basement2" ]: self.assertTrue(authorize(self.api, user, "sync")) # make sure basement2 can't edit (not in group) # and same goes for "dne" (does not exist in users.conf) for user in [ "basement2", "dne" ]: self.assertFalse(authorize(self.api, user, "save_distro", xo), "user %s cannot save_distro" % user) self.assertFalse(authorize(self.api, user, "modify_distro", xo), "user %s cannot modify_distro" % user) self.assertFalse(authorize(self.api, user, "remove_distro", xn), "user %s cannot remove_distro" % user) # basement2 is in the file so he can still copy self.assertTrue(authorize(self.api, "basement2", "copy_distro", xo), "basement2 can copy_distro") # dne can not copy or sync either (not in the users.conf) self.assertFalse(authorize(self.api, "dne", "copy_distro", xo), "dne cannot copy_distro") self.assertFalse(authorize(self.api, "dne", "sync"), "dne cannot sync") # unlike the distro testdistro0, testrepo0 is unowned # so any user in the file will be able to edit it. 
for user in [ "admin1", "superlab1", "basement1", "basement2" ]: self.assertTrue(authorize(self.api, user, "save_repo", ro), "user %s can save_repo" % user) # though dne is still not listed and will be denied self.assertFalse(authorize(self.api, "dne", "save_repo", ro), "dne cannot save_repo") # if we survive, restore the users file as module testing is done if os.path.exists("/tmp/cobbler_ubak"): shutil.copyfile("/etc/cobbler/users.conf","/tmp/cobbler_ubak") class MultiNIC(BootTest): def test_multi_nic_support(self): system = self.api.new_system() self.assertTrue(system.set_name("nictest")) self.assertTrue(system.set_profile("testprofile0")) self.assertTrue(system.set_dns_name("zero","eth0")) self.assertTrue(system.set_mac_address("EE:FF:DD:CC:DD:CC","eth1")) self.assertTrue(system.set_ip_address("127.0.0.5","eth2")) self.assertTrue(system.set_dhcp_tag("zero","eth3")) self.assertTrue(system.set_virt_bridge("zero","eth4")) self.assertTrue(system.set_gateway("192.168.1.25")) # is global self.assertTrue(system.set_name_servers("a.example.org b.example.org")) # is global self.assertTrue(system.set_mac_address("AA:AA:BB:BB:CC:CC","eth4")) self.assertTrue(system.set_dns_name("fooserver","eth4")) self.assertTrue(system.set_dhcp_tag("red","eth4")) self.assertTrue(system.set_ip_address("192.168.1.26","eth4")) self.assertTrue(system.set_netmask("255.255.255.0","eth4")) self.assertTrue(system.set_dhcp_tag("tag2","eth5")) self.assertTrue(self.api.add_system(system)) self.assertTrue(self.api.find_system(dns_name="fooserver")) self.assertTrue(self.api.find_system(mac_address="EE:FF:DD:CC:DD:CC")) self.assertTrue(self.api.find_system(ip_address="127.0.0.5")) self.assertTrue(self.api.find_system(virt_bridge="zero")) self.assertTrue(self.api.find_system(gateway="192.168.1.25")) self.assertTrue(self.api.find_system(netmask="255.255.255.0")) self.assertTrue(self.api.find_system(dhcp_tag="tag2")) self.assertTrue(self.api.find_system(dhcp_tag="zero")) # verify that systems has exactly 5 interfaces self.assertTrue(len(system.interfaces.keys()) == 6) # now check one interface to make sure it's exactly right # and we didn't accidentally fill in any other fields elsewhere self.assertTrue(system.interfaces.has_key("eth4")) self.assertTrue(system.gateway == "192.168.1.25") for (name,intf) in system.interfaces.iteritems(): if name == "eth4": # xmlrpc dicts must have string keys, so we must also self.assertTrue(intf["virt_bridge"] == "zero") self.assertTrue(intf["netmask"] == "255.255.255.0") self.assertTrue(intf["mac_address"] == "AA:AA:BB:BB:CC:CC") self.assertTrue(intf["ip_address"] == "192.168.1.26") self.assertTrue(intf["dns_name"] == "fooserver") self.assertTrue(intf["dhcp_tag"] == "red") class Utilities(BootTest): def _expeq(self, expected, actual): try: self.failUnlessEqual(expected, actual, "Expected: %s; actual: %s" % (expected, actual)) except: self.fail("exception during failUnlessEqual") def test_kernel_scan(self): self.assertTrue(utils.find_kernel(self.fk_kernel)) self.assertFalse(utils.find_kernel("filedoesnotexist")) self._expeq(self.fk_kernel, utils.find_kernel(self.topdir)) def test_initrd_scan(self): self.assertTrue(utils.find_initrd(self.fk_initrd)) self.assertFalse(utils.find_initrd("filedoesnotexist")) self._expeq(self.fk_initrd, utils.find_initrd(self.topdir)) def test_kickstart_scan(self): # we don't check to see if kickstart files look like anything # so this will pass self.assertTrue(utils.find_kickstart("filedoesnotexist") is None) self.assertTrue(utils.find_kickstart(self.topdir) == None) 
self.assertTrue(utils.find_kickstart("http://bar")) self.assertTrue(utils.find_kickstart("ftp://bar")) self.assertTrue(utils.find_kickstart("nfs://bar")) self.assertFalse(utils.find_kickstart("gopher://bar")) def test_matching(self): self.assertTrue(utils.is_mac("00:C0:B7:7E:55:50")) self.assertTrue(utils.is_mac("00:c0:b7:7E:55:50")) self.assertFalse(utils.is_mac("00:c0:b:7E:55:50")) self.assertFalse(utils.is_mac("00:c0:b7:7E:55")) self.assertFalse(utils.is_mac("00:c0:b7:7E:55:50:0F")) self.assertFalse(utils.is_mac("00:c0:bZ:7E:55:50")) self.assertFalse(utils.is_mac("x00:c0:b7:7E:55:50")) self.assertFalse(utils.is_mac("00:c0:b7:7E:55:50x")) self.assertFalse(utils.is_mac("00.D0.B7.7E.55.50")) self.assertFalse(utils.is_mac("testsystem0")) self.assertTrue(utils.is_ip("127.0.0.1")) self.assertTrue(utils.is_ip("192.168.1.1")) self.assertFalse(utils.is_ip("00:C0:B7:7E:55:50")) self.assertFalse(utils.is_ip("testsystem0")) def test_some_random_find_commands(self): # initial setup... self.test_system_name_is_a_MAC() # search for a parameter that isn't real, want an error self.failUnlessRaises(CobblerException,self.api.systems().find, pond="mcelligots") # verify that even though we have several different NICs search still works # FIMXE: temprorarily disabled # self.assertTrue(self.api.find_system(name="nictest") is not None) # search for a parameter with a bad value, want None self.assertFalse(self.api.systems().find(name="horton")) # one valid parameter another invalid is still an error self.failUnlessRaises(CobblerException,self.api.systems().find, name="onefish",pond="mcelligots") # searching with no args is ALSO an error self.failUnlessRaises(CobblerException, self.api.systems().find) # searching for a list returns a list of correct length self.assertTrue(len(self.api.systems().find(mac_address="00:16:41:14:B7:71",return_list=True))==1) # make sure we can still search without an explicit keyword arg self.assertTrue(len(self.api.systems().find("00:16:41:14:B7:71",return_list=True))==1) self.assertTrue(self.api.systems().find("00:16:41:14:B7:71")) def test_invalid_distro_non_referenced_kernel(self): distro = self.api.new_distro() self.assertTrue(distro.set_name("testdistro2")) self.failUnlessRaises(CobblerException,distro.set_kernel,"filedoesntexist") self.assertTrue(distro.set_initrd(self.fk_initrd)) self.failUnlessRaises(CobblerException, self.api.add_distro, distro) self.assertFalse(self.api.distros().find(name="testdistro2")) def test_invalid_distro_non_referenced_initrd(self): distro = self.api.new_distro() self.assertTrue(distro.set_name("testdistro3")) self.assertTrue(distro.set_kernel(self.fk_kernel)) self.failUnlessRaises(CobblerException, distro.set_initrd, "filedoesntexist") self.failUnlessRaises(CobblerException, self.api.add_distro, distro) self.assertFalse(self.api.distros().find(name="testdistro3")) def test_invalid_profile_non_referenced_distro(self): profile = self.api.new_profile() self.assertTrue(profile.set_name("testprofile11")) self.failUnlessRaises(CobblerException, profile.set_distro, "distrodoesntexist") self.assertTrue(profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.ks")) self.failUnlessRaises(CobblerException, self.api.add_profile, profile) self.assertFalse(self.api.profiles().find(name="testprofile2")) def test_invalid_profile_kickstart_not_url(self): profile = self.api.new_profile() self.assertTrue(profile.set_name("testprofile12")) self.assertTrue(profile.set_distro("testdistro0")) self.failUnlessRaises(CobblerException, profile.set_kickstart, 
"kickstartdoesntexist") # since kickstarts are optional, you can still add it self.assertTrue(self.api.add_profile(profile)) self.assertTrue(self.api.profiles().find(name="testprofile12")) # now verify the other kickstart forms would still work self.assertTrue(profile.set_kickstart("http://bar")) self.assertTrue(profile.set_kickstart("ftp://bar")) self.assertTrue(profile.set_kickstart("nfs://bar")) def test_profile_virt_parameter_checking(self): profile = self.api.new_profile() self.assertTrue(profile.set_name("testprofile12b")) self.assertTrue(profile.set_distro("testdistro0")) self.assertTrue(profile.set_kickstart("http://127.0.0.1/foo")) self.assertTrue(profile.set_virt_bridge("xenbr1")) self.assertTrue(profile.set_disk_driver("qcow")) # sizes must be integers self.assertTrue(profile.set_virt_file_size("54321")) self.failUnlessRaises(Exception, profile.set_virt_file_size, "huge") self.failUnlessRaises(Exception, profile.set_virt_file_size, "54.321") # cpus must be integers self.assertTrue(profile.set_virt_cpus("2")) self.failUnlessRaises(Exception, profile.set_virt_cpus, "3.14") self.failUnlessRaises(Exception, profile.set_virt_cpus, "6.02*10^23") self.assertTrue(self.api.add_profile(profile)) def test_inheritance_and_variable_propogation(self): # STEP ONE: verify that non-inherited objects behave # correctly with ks_meta (we picked this attribute # because it's a hash and it's a bit harder to handle # than strings). It should be passed down the render # tree to all subnodes repo = self.api.new_repo() try: os.makedirs("/tmp/test_cobbler_repo") except: pass fd = open("/tmp/test_cobbler_repo/test.file", "w+") fd.write("hello!") fd.close() self.assertTrue(repo.set_name("testrepo")) self.assertTrue(repo.set_mirror("/tmp/test_cobbler_repo")) self.assertTrue(self.api.add_repo(repo)) profile = self.api.new_profile() self.assertTrue(profile.set_name("testprofile12b2")) self.assertTrue(profile.set_distro("testdistro0")) self.assertTrue(profile.set_kickstart("http://127.0.0.1/foo")) self.assertTrue(profile.set_repos(["testrepo"])) self.assertTrue(profile.set_name_servers(["asdf"])) self.assertTrue(self.api.add_profile(profile)) # disable this test as it's not a valid repo yet # self.api.reposync() self.api.sync() system = self.api.new_system() self.assertTrue(system.set_name("foo")) self.assertTrue(system.set_profile("testprofile12b2")) self.assertTrue(system.set_ksmeta({"asdf" : "jkl" })) self.assertTrue(self.api.add_system(system)) profile = self.api.profiles().find("testprofile12b2") ksmeta = profile.ks_meta self.assertFalse(ksmeta.has_key("asdf")) # FIXME: do the same for inherited profiles # now verify the same for an inherited profile # and this time walk up the tree to verify it wasn't # applied to any other object except the base. 
profile2 = self.api.new_profile(is_subobject=True) profile2.set_name("testprofile12b3") profile2.set_parent("testprofile12b2") self.api.add_profile(profile2) # disable this test as syncing an invalid repo will fail # self.api.reposync() self.api.sync() # FIXME: now add a system to the inherited profile # and set a attribute on it that we will later check for system2 = self.api.new_system() self.assertTrue(system2.set_name("foo2")) self.assertTrue(system2.set_profile("testprofile12b3")) self.assertTrue(system2.set_ksmeta({"narf" : "troz"})) self.assertTrue(self.api.add_system(system2)) # disable this test as invalid repos don't sync # self.api.reposync() self.api.sync() # FIXME: now evaluate the system object and make sure # that it has inherited the repos value from the superprofile # above it's actual profile. This should NOT be present in the # actual object, which we have not modified yet. data = utils.blender(self.api, False, system2) self.assertTrue(data["repos"] == ["testrepo"]) self.assertTrue(self.api.profiles().find(system2.profile).repos == "<>") # now if we set the repos object of the system to an additional # repo we should verify it now contains two repos. # (FIXME) repo2 = self.api.new_repo() try: os.makedirs("/tmp/cobbler_test/repo0") except: pass fd = open("/tmp/cobbler_test/repo0/file.test","w+") fd.write("Hi!") fd.close() self.assertTrue(repo2.set_name("testrepo2")) self.assertTrue(repo2.set_mirror("/tmp/cobbler_test/repo0")) self.assertTrue(self.api.add_repo(repo2)) profile2 = self.api.profiles().find("testprofile12b3") # note: side check to make sure we can also set to string values profile2.set_repos("testrepo2") self.api.add_profile(profile2) # save it # random bug testing: run sync several times and ensure cardinality doesn't change #self.api.reposync() self.api.sync() self.api.sync() self.api.sync() data = utils.blender(self.api, False, system2) self.assertTrue("testrepo" in data["repos"]) self.assertTrue("testrepo2" in data["repos"]) self.assertTrue(len(data["repos"]) == 2) self.assertTrue(self.api.profiles().find(system2.profile).repos == ["testrepo2"]) # now double check that the parent profile still only has one repo in it. # this is part of our test against upward propogation profile = self.api.profiles().find("testprofile12b2") self.assertTrue(len(profile.repos) == 1) self.assertTrue(profile.repos == ["testrepo"]) # now see if the subprofile does NOT have the ksmeta attribute # this is part of our test against upward propogation profile2 = self.api.profiles().find("testprofile12b3") self.assertTrue(type(profile2.ks_meta) == type("")) self.assertTrue(profile2.ks_meta == "<>") # now see if the profile above this profile still doesn't have it profile = self.api.profiles().find("testprofile12b2") self.assertTrue(type(profile.ks_meta) == type({})) # self.api.reposync() self.api.sync() self.assertFalse(profile.ks_meta.has_key("narf"), "profile does not have the system ksmeta") #self.api.reposync() self.api.sync() # verify that the distro did not acquire the property # we just set on the leaf system distro = self.api.distros().find("testdistro0") self.assertTrue(type(distro.ks_meta) == type({})) self.assertFalse(distro.ks_meta.has_key("narf"), "distro does not have the system ksmeta") # STEP THREE: verify that inheritance appears to work # by setting ks_meta on the subprofile and seeing # if it appears on the leaf system ... 
must use # blender functions profile2 = self.api.profiles().find("testprofile12b3") profile2.set_ksmeta({"canyouseethis" : "yes" }) self.assertTrue(self.api.add_profile(profile2)) system2 = self.api.systems().find("foo2") data = utils.blender(self.api, False, system2) self.assertTrue(data.has_key("ks_meta")) self.assertTrue(data["ks_meta"].has_key("canyouseethis")) # STEP FOUR: do the same on the superprofile and see # if that propogates profile = self.api.profiles().find("testprofile12b2") profile.set_ksmeta({"canyouseethisalso" : "yes" }) self.assertTrue(self.api.add_profile(profile)) system2 = self.api.systems().find("foo2") data = utils.blender(self.api, False, system2) self.assertTrue(data.has_key("ks_meta")) self.assertTrue(data["ks_meta"].has_key("canyouseethisalso")) # STEP FIVE: see if distro attributes propogate distro = self.api.distros().find("testdistro0") distro.set_ksmeta({"alsoalsowik" : "moose" }) self.assertTrue(self.api.add_distro(distro)) system2 = self.api.find_system("foo2") data = utils.blender(self.api, False, system2) self.assertTrue(data.has_key("ks_meta")) self.assertTrue(data["ks_meta"].has_key("alsoalsowik")) # STEP SEVEN: see if settings changes also propogate # TBA def test_system_name_is_a_MAC(self): system = self.api.new_system() name = "00:16:41:14:B7:71" self.assertTrue(system.set_name(name)) self.assertTrue(system.set_profile("testprofile0")) self.assertTrue(self.api.add_system(system)) self.assertTrue(self.api.find_system(name=name)) self.assertTrue(self.api.find_system(mac_address="00:16:41:14:B7:71")) self.assertFalse(self.api.find_system(mac_address="thisisnotamac")) def test_system_name_is_an_IP(self): system = self.api.new_system() name = "192.168.1.54" self.assertTrue(system.set_name(name)) self.assertTrue(system.set_profile("testprofile0")) self.assertTrue(self.api.add_system(system)) self.assertTrue(self.api.find_system(name=name)) def test_invalid_system_non_referenced_profile(self): system = self.api.new_system() self.assertTrue(system.set_name("testsystem0")) self.failUnlessRaises(CobblerException, system.set_profile, "profiledoesntexist") self.failUnlessRaises(CobblerException, self.api.add_system, system) class SyncContents(BootTest): def test_blender_cache_works(self): distro = self.api.new_distro() self.assertTrue(distro.set_name("D1")) self.assertTrue(distro.set_kernel(self.fk_kernel)) self.assertTrue(distro.set_initrd(self.fk_initrd)) self.assertTrue(self.api.add_distro(distro)) self.assertTrue(self.api.find_distro(name="D1")) profile = self.api.new_profile() self.assertTrue(profile.set_name("P1")) self.assertTrue(profile.set_distro("D1")) self.assertTrue(profile.set_kickstart("/var/lib/cobbler/kickstarts/sample.ks")) self.assertTrue(self.api.add_profile(profile)) assert self.api.find_profile(name="P1") != None system = self.api.new_system() self.assertTrue(system.set_name("S1")) self.assertTrue(system.set_mac_address("BB:EE:EE:EE:EE:FF","eth0")) self.assertTrue(system.set_profile("P1")) self.assertTrue(self.api.add_system(system)) assert self.api.find_system(name="S1") != None # ensure that the system after being added has the right template data # in /tftpboot converted="01-bb-ee-ee-ee-ee-ff" if os.path.exists("/var/lib/tftpboot"): fh = open("/var/lib/tftpboot/pxelinux.cfg/%s" % converted) else: fh = open("/tftpboot/pxelinux.cfg/%s" % converted) data = fh.read() self.assertTrue(data.find("/op/ks/") != -1) fh.close() # ensure that after sync is applied, the blender cache still allows # the system data to persist over the profile data 
in /tftpboot # (which was an error we had in 0.6.3) self.api.sync() if os.path.exists("/var/lib/tftpboot"): fh = open("/var/lib/tftpboot/pxelinux.cfg/%s" % converted) else: fh = open("/tftpboot/pxelinux.cfg/%s" % converted) data = fh.read() print "DEBUG DATA: %s" % data self.assertTrue(data.find("/op/ks/") != -1) fh.close() class Deletions(BootTest): #def test_invalid_delete_profile_doesnt_exist(self): # self.failUnlessRaises(CobblerException, self.api.profiles().remove, "doesnotexist") def test_invalid_delete_profile_would_orphan_systems(self): self.failUnlessRaises(CobblerException, self.api.profiles().remove, "testprofile0") #def test_invalid_delete_system_doesnt_exist(self): # self.failUnlessRaises(CobblerException, self.api.systems().remove, "doesnotexist") #def test_invalid_delete_distro_doesnt_exist(self): # self.failUnlessRaises(CobblerException, self.api.distros().remove, "doesnotexist") def test_invalid_delete_distro_would_orphan_profile(self): self.failUnlessRaises(CobblerException, self.api.distros().remove, "testdistro0") #def test_working_deletes(self): # self.api.clear() # # self.make_basic_config() # #self.assertTrue(self.api.systems().remove("testsystem0")) # self.api.serialize() # self.assertTrue(self.api.remove_profile("testprofile0")) # self.assertTrue(self.api.remove_distro("testdistro0")) # #self.assertFalse(self.api.find_system(name="testsystem0")) # self.assertFalse(self.api.find_profile(name="testprofile0")) # self.assertFalse(self.api.find_distro(name="testdistro0")) class TestCheck(BootTest): def test_check(self): # we can't know if it's supposed to fail in advance # (ain't that the halting problem), but it shouldn't ever # throw exceptions. self.api.check() class TestSync(BootTest): def test_real_run(self): # syncing a real test run in an automated environment would # break a valid cobbler configuration, so we're not going to # test this here. pass class TestListings(BootTest): def test_listings(self): # check to see if the collection listings output something. # this is a minimal check, mainly for coverage, not validity # self.make_basic_config() self.assertTrue(len(self.api.systems().printable()) > 0) self.assertTrue(len(self.api.profiles().printable()) > 0) self.assertTrue(len(self.api.distros().printable()) > 0) class TestImage(BootTest): def test_image_file(self): # ensure that only valid names are accepted and invalid ones are rejected image = self.api.new_image() self.assertTrue(image.set_file("nfs://hostname/path/to/filename.iso")) self.assertTrue(image.set_file("nfs://mcpierce@hostname:/path/to/filename.iso")) self.assertTrue(image.set_file("nfs://hostname:/path/to/filename.iso")) self.assertTrue(image.set_file("nfs://hostname/filename.iso")) self.assertTrue(image.set_file("hostname:/path/to/the/filename.iso")) self.failUnlessRaises(CX, image.set_file, "hostname:filename.iso") self.failUnlessRaises(CX, image.set_file, "path/to/filename.iso") self.failUnlessRaises(CX, image.set_file, "hostname:") # port is not allowed self.failUnlessRaises(CX, image.set_file, "nfs://hostname:1234/path/to/the/filename.iso") #class TestCLIBasic(BootTest): # # def test_cli(self): # # just invoke the CLI to increase coverage and ensure # # nothing major is broke at top level. Full CLI command testing # # is not included (yet) since the API tests hit that fairly throughly # # and it would easily double the length of the tests. 
# app = "/usr/bin/python" # self.assertTrue(subprocess.call([app,"cobbler/cobbler.py","list"]) == 0) if __name__ == "__main__": if not os.path.exists("setup.py"): print "tests: must invoke from top level directory" sys.exit(1) loader = unittest.defaultTestLoader test_module = __import__("tests") # self import considered harmful? tests = loader.loadTestsFromModule(test_module) runner = unittest.TextTestRunner() runner.run(tests) sys.exit(0) cobbler-2.4.1/cobbler/utils.py000066400000000000000000002231011227367477500162730ustar00rootroot00000000000000""" Misc heavy lifting functions for cobbler Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import sys import os import re import copy import socket import glob import random try: import subprocess as sub_process except: import sub_process import shutil import string import traceback import errno import logging import shutil import tempfile import signal from cexceptions import * import codes import time import netaddr import shlex import field_info import clogger import yaml import urllib2 import simplejson try: import hashlib as fiver def md5(key): return fiver.md5(key) except ImportError: # for Python < 2.5 import md5 as fiver def md5(key): return fiver.md5(key) # python-netaddr 0.7 broke backward compatability, try to use the old IP # classes, and fallback on the newer if there's an import error. NETADDR_PRE_0_7 = True try: # Try importing the old (pre-0.7) netaddr IP class: from netaddr import IP except ImportError: NETADDR_PRE_0_7 = False CHEETAH_ERROR_DISCLAIMER=""" # *** ERROR *** # # There is a templating error preventing this file from rendering correctly. # # This is most likely not due to a bug in Cobbler and is something you can fix. # # Look at the message below to see what things are causing problems. # # (1) Does the template file reference a $variable that is not defined? # (2) is there a formatting error in a Cheetah directive? # (3) Should dollar signs ($) be escaped that are not being escaped? # # Try fixing the problem and then investigate to see if this message goes # away or changes. 
# """ # From http://code.activestate.com/recipes/303342/ class Translator: allchars = string.maketrans('','') def __init__(self, frm='', to='', delete='', keep=None): if len(to) == 1: to = to * len(frm) self.trans = string.maketrans(frm, to) if keep is None: self.delete = delete else: self.delete = self.allchars.translate(self.allchars, keep.translate(self.allchars, delete)) def __call__(self, s): return s.translate(self.trans, self.delete) #placeholder for translation def _(foo): return foo MODULE_CACHE = {} SIGNATURE_CACHE = {} _re_kernel = re.compile(r'(vmlinu[xz]|kernel.img)') _re_initrd = re.compile(r'(initrd(.*).img|ramdisk.image.gz)') _re_is_mac = re.compile(':'.join(('[0-9A-Fa-f][0-9A-Fa-f]',)*6) + '$') _re_is_ibmac = re.compile(':'.join(('[0-9A-Fa-f][0-9A-Fa-f]',)*20) + '$') # all logging from utils.die goes to the main log even if there # is another log. main_logger = None # the logger will be lazy loaded later def die(logger, msg): global main_logger if main_logger is None: main_logger = clogger.Logger() # log the exception once in the per-task log or the main # log if this is not a background op. try: raise CX(msg) except: if logger is not None: log_exc(logger) else: log_exc(main_logger) # now re-raise it so the error can fail the operation raise CX(msg) def log_exc(logger): """ Log an exception. """ (t, v, tb) = sys.exc_info() logger.info("Exception occured: %s" % t ) logger.info("Exception value: %s" % v) logger.info("Exception Info:\n%s" % string.join(traceback.format_list(traceback.extract_tb(tb)))) def get_exc(exc,full=True): (t, v, tb) = sys.exc_info() buf = "" try: getattr(exc, "from_cobbler") buf = str(exc)[1:-1] + "\n" except: if not full: buf = buf + str(t) buf = "%s\n%s" % (buf,v) if full: buf = buf + "\n" + "\n".join(traceback.format_list(traceback.extract_tb(tb))) return buf def cheetah_exc(exc,full=False): lines = get_exc(exc).split("\n") buf = "" for l in lines: buf = buf + "# %s\n" % l return CHEETAH_ERROR_DISCLAIMER + buf def trace_me(): x = traceback.extract_stack() bar = string.join(traceback.format_list(x)) return bar def pretty_hex(ip, length=8): """ Pads an IP object with leading zeroes so that the result is _length_ hex digits. Also do an upper(). """ hexval = "%x" % ip.value if len(hexval) < length: hexval = '0' * (length - len(hexval)) + hexval return hexval.upper() def get_host_ip(ip, shorten=True): """ Return the IP encoding needed for the TFTP boot tree. """ cidr = None if NETADDR_PRE_0_7: ip = netaddr.IP(ip) cidr = ip.cidr() else: ip = netaddr.ip.IPAddress(ip) cidr = netaddr.ip.IPNetwork(ip) if len(cidr) == 1: # Just an IP, e.g. a /32 return pretty_hex(ip) else: pretty = pretty_hex(cidr[0]) if not shorten or len(cidr) <= 8: # not enough to make the last nibble insignificant return pretty else: cutoff = (32 - cidr.prefixlen) / 4 return pretty[0:-cutoff] def _IP(ip): """ Returns a netaddr.IP object representing ip. If ip is already an netaddr.IP instance just return it. Else return a new instance """ ip_class = None if NETADDR_PRE_0_7: ip_class = netaddr.IP else: ip_class = netaddr.ip.IPAddress if isinstance(ip, ip_class) or ip == "": return ip else: return ip_class(ip) def get_config_filename(sys,interface): """ The configuration file for each system pxe uses is either a form of the MAC address of the hex version of the IP. If none of that is available, just use the given name, though the name given will be unsuitable for PXE configuration (For this, check system.is_management_supported()). 
This same file is used to store system config information in the Apache tree, so it's still relevant. """ interface = str(interface) if not sys.interfaces.has_key(interface): return None if sys.name == "default": return "default" mac = sys.get_mac_address(interface) ip = sys.get_ip_address(interface) if mac is not None and mac != "": return "01-" + "-".join(mac.split(":")).lower() elif ip is not None and ip != "": return get_host_ip(ip) else: return sys.name def is_ip(strdata): """ Return whether the argument is an IP address. """ try: _IP(strdata) except: return False return True def is_mac(strdata): """ Return whether the argument is a mac address. """ if strdata is None: return False return bool(_re_is_mac.match(strdata) or _re_is_ibmac.match(strdata)) def get_random_mac(api_handle,virt_type="xenpv"): """ Generate a random MAC address. from xend/server/netif.py return: MAC address string """ if virt_type.startswith("vmware"): mac = [ 0x00, 0x50, 0x56, random.randint(0x00, 0x3f), random.randint(0x00, 0xff), random.randint(0x00, 0xff) ] elif virt_type.startswith("xen") or virt_type.startswith("qemu") or virt_type.startswith("kvm"): mac = [ 0x00, 0x16, 0x3e, random.randint(0x00, 0x7f), random.randint(0x00, 0xff), random.randint(0x00, 0xff) ] else: raise CX("virt mac assignment not yet supported") mac = ':'.join(map(lambda x: "%02x" % x, mac)) systems = api_handle.systems() while ( systems.find(mac_address=mac) ): mac = get_random_mac(api_handle) return mac def resolve_ip(strdata): """ Resolve the IP address and handle errors... """ try: return socket.gethostbyname(strdata) except: return None def find_matching_files(directory,regex): """ Find all files in a given directory that match a given regex. Can't use glob directly as glob doesn't take regexen. """ files = glob.glob(os.path.join(directory,"*")) results = [] for f in files: if regex.match(os.path.basename(f)): results.append(f) return results def find_highest_files(directory,unversioned,regex): """ Find the highest numbered file (kernel or initrd numbering scheme) in a given directory that matches a given pattern. Used for auto-booting the latest kernel in a directory. """ files = find_matching_files(directory, regex) get_numbers = re.compile(r'(\d+).(\d+).(\d+)') def max2(a, b): """Returns the larger of the two values""" av = get_numbers.search(os.path.basename(a)).groups() bv = get_numbers.search(os.path.basename(b)).groups() ret = cmp(av[0], bv[0]) or cmp(av[1], bv[1]) or cmp(av[2], bv[2]) if ret < 0: return b return a if len(files) > 0: return reduce(max2, files) # couldn't find a highest numbered file, but maybe there # is just a 'vmlinuz' or an 'initrd.img' in this directory? last_chance = os.path.join(directory,unversioned) if os.path.exists(last_chance): return last_chance return None def find_kernel(path): """ Given a directory or a filename, find if the path can be made to resolve into a kernel, and return that full path if possible. """ if path is None: return None if os.path.isfile(path): #filename = os.path.basename(path) #if _re_kernel.match(filename): # return path #elif filename == "vmlinuz": # return path return path elif os.path.isdir(path): return find_highest_files(path,"vmlinuz",_re_kernel) # For remote URLs we expect an absolute path, and will not # do any searching for the latest: elif file_is_remote(path) and remote_file_exists(path): return path return None def remove_yum_olddata(path,logger=None): """ Delete .olddata files that might be present from a failed run of createrepo. 
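    For example, a leftover <path>/.olddata directory is removed if it
    exists; the full list of candidate paths checked is in the code below.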
# FIXME: verify this is still being used """ trythese = [ ".olddata", ".repodata/.olddata", "repodata/.oldata", "repodata/repodata" ] for pathseg in trythese: olddata = os.path.join(path, pathseg) if os.path.exists(olddata): if logger is not None: logger.info("removing: %s" % olddata) shutil.rmtree(olddata, ignore_errors=False, onerror=None) def find_initrd(path): """ Given a directory or a filename, see if the path can be made to resolve into an intird, return that full path if possible. """ # FUTURE: try to match kernel/initrd pairs? if path is None: return None if os.path.isfile(path): #filename = os.path.basename(path) #if _re_initrd.match(filename): # return path #if filename == "initrd.img" or filename == "initrd": # return path return path elif os.path.isdir(path): return find_highest_files(path,"initrd.img",_re_initrd) # For remote URLs we expect an absolute path, and will not # do any searching for the latest: elif file_is_remote(path) and remote_file_exists(path): return path return None def find_kickstart(url): """ Check if a kickstart url looks like an http, ftp, nfs or local path. If a local path is used, cobbler will copy the kickstart and serve it over http. Return None if the url format does not look valid. """ if url is None: return None x = url.lstrip() for y in ["http://", "nfs://", "ftp://", "/"]: # make sure we get a lower-case protocol without # affecting the rest of the string x = re.sub(r"(?i)%s" % y, y, x, count=1) if x.startswith(y): if x.startswith("/") and not os.path.isfile(x): return None return x return None def read_file_contents(file_location, logger=None, fetch_if_remote=False): """ Reads the contents of a file, which could be referenced locally or as a URI. Returns None if file is remote and templating of remote files is disabled. Throws a FileNotFoundException if the file does not exist at the specified location. """ # Local files: if file_location.startswith("/"): if not os.path.exists(file_location): if logger: logger.warning("File does not exist: %s" % file_location) raise FileNotFoundException("%s: %s" % (_("File not found"), file_location)) try: f = open(file_location) data = f.read() f.close() return data except: if logger: log_exc(logger) raise # Remote files: if not fetch_if_remote: return None if file_is_remote(file_location): try: handler = urllib2.urlopen(file_location) data = handler.read() handler.close() return data except urllib2.HTTPError: # File likely doesn't exist if logger: logger.warning("File does not exist: %s" % file_location) raise FileNotFoundException("%s: %s" % (_("File not found"), file_location)) def remote_file_exists(file_url): """ Return True if the remote file exists. """ try: handler = urllib2.urlopen(file_url) handler.close() return True except urllib2.HTTPError: # File likely doesn't exist return False def file_is_remote(file_location): """ Returns true if the file is remote and referenced via a protocol we support. """ # TODO: nfs and ftp ok too? file_loc_lc = file_location.lower() for prefix in ["http://"]: if file_loc_lc.startswith(prefix): return True return False def input_string_or_list(options): """ Accepts a delimited list of stuff or a list, but always returns a list. 
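    Examples (illustrative):
        "a b c"        ->  ["a", "b", "c"]
        ["a", "b"]     ->  ["a", "b"]
        None or ""     ->  []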
""" if options == "<>": return "<>" if options is None or options == "" or options == "delete": return [] elif isinstance(options,list): return options elif isinstance(options,basestring): tokens = shlex.split(options) return tokens else: raise CX(_("invalid input type")) def input_string_or_hash(options,allow_multiples=True): """ Older cobbler files stored configurations in a flat way, such that all values for strings. Newer versions of cobbler allow dictionaries. This function is used to allow loading of older value formats so new users of cobbler aren't broken in an upgrade. """ if options == "<>": options = {} if options is None or options == "delete": return (True, {}) elif isinstance(options, list): raise CX(_("No idea what to do with list: %s") % options) elif isinstance(options, basestring): new_dict = {} tokens = shlex.split(options) for t in tokens: tokens2 = t.split("=",1) if len(tokens2) == 1: # this is a singleton option, no value key = tokens2[0] value = None else: key = tokens2[0] value = tokens2[1] # if we're allowing multiple values for the same key, # check to see if this token has already been # inserted into the dictionary of values already if key in new_dict.keys() and allow_multiples: # if so, check to see if there is already a list of values # otherwise convert the dictionary value to an array, and add # the new value to the end of the list if isinstance(new_dict[key], list): new_dict[key].append(value) else: new_dict[key] = [new_dict[key], value] else: new_dict[key] = value # make sure we have no empty entries new_dict.pop('', None) return (True, new_dict) elif isinstance(options, dict): options.pop('',None) return (True, options) else: raise CX(_("invalid input type")) def input_boolean(value): value = str(value) if value.lower() in [ "true", "1", "on", "yes", "y" ]: return True else: return False def update_settings_file(data): if 1:#try: #clogger.Logger().debug("in update_settings_file(): value is: %s" % str(value)) settings_file = file("/etc/cobbler/settings","w") yaml.safe_dump(data,settings_file) settings_file.close() return True #except: # return False def grab_tree(api_handle, obj): """ Climb the tree and get every node. """ settings = api_handle.settings() results = [ obj ] parent = obj.get_parent() while parent is not None: results.append(parent) parent = parent.get_parent() results.append(settings) return results def blender(api_handle,remove_hashes, root_obj): """ Combine all of the data in an object tree from the perspective of that point on the tree, and produce a merged hash containing consolidated data. 
""" settings = api_handle.settings() tree = grab_tree(api_handle, root_obj) tree.reverse() # start with top of tree, override going down results = {} for node in tree: __consolidate(node,results) # hack -- s390 nodes get additional default kernel options arch = results.get("arch","?") if arch.startswith("s390"): keyz = settings.kernel_options_s390x.keys() for k in keyz: if not results.has_key(k): results["kernel_options"][k] = settings.kernel_options_s390x[k] # Get topmost object to determine which breed we're dealing with parent = root_obj.get_parent() if parent is None: parent = root_obj while parent.COLLECTION_TYPE is "profile" or parent.COLLECTION_TYPE is "system": parent = parent.get_parent() breed = parent.breed if breed == "redhat": # determine if we have room to add kssendmac to the kernel options line kernel_txt = hash_to_string(results["kernel_options"]) if len(kernel_txt) < 244: results["kernel_options"]["kssendmac"] = None # convert post kernel options to string if results.has_key("kernel_options_post"): results["kernel_options_post"] = hash_to_string(results["kernel_options_post"]) # make interfaces accessible without Cheetah-voodoo in the templates # EXAMPLE: $ip == $ip0, $ip1, $ip2 and so on. if root_obj.COLLECTION_TYPE == "system": for (name,interface) in root_obj.interfaces.iteritems(): for key in interface.keys(): results["%s_%s" % (key,name)] = interface[key] # just to keep templates backwards compatibile if name == "intf0": # prevent stomping on profile variables, which really only happens # with the way we check for virt_bridge, which is a profile setting # and an interface setting if not results.has_key(key): results[key] = interface[key] # if the root object is a profile or system, add in all # repo data for repos that belong to the object chain if root_obj.COLLECTION_TYPE in ("profile","system"): repo_data = [] for r in results.get("repos",[]): repo = api_handle.find_repo(name=r) if repo: repo_data.append(repo.to_datastruct()) # FIXME: sort the repos in the array based on the # repo priority field so that lower priority # repos come first in the array results["repo_data"] = repo_data http_port = results.get("http_port",80) if http_port not in (80, "80"): results["http_server"] = "%s:%s" % (results["server"] , http_port) else: results["http_server"] = results["server"] mgmt_parameters = results.get("mgmt_parameters",{}) mgmt_parameters.update(results.get("ks_meta", {})) results["mgmt_parameters"] = mgmt_parameters # sanitize output for koan and kernel option lines, etc if remove_hashes: results = flatten(results) # the password field is inputed as escaped strings but Cheetah # does weird things when expanding it due to multiple dollar signs # so this is the workaround if results.has_key("default_password_crypted"): results["default_password_crypted"] = results["default_password_crypted"].replace("\$","$") # add in some variables for easier templating # as these variables change based on object type if results.has_key("interfaces"): # is a system object results["system_name"] = results["name"] results["profile_name"] = results["profile"] if results.has_key("distro"): results["distro_name"] = results["distro"] elif results.has_key("image"): results["distro_name"] = "N/A" results["image_name"] = results["image"] elif results.has_key("distro"): # is a profile or subprofile object results["profile_name"] = results["name"] results["distro_name"] = results["distro"] elif results.has_key("kernel"): # is a distro object results["distro_name"] = results["name"] elif 
results.has_key("file"): # is an image object results["distro_name"] = "N/A" results["image_name"] = results["name"] return results def flatten(data): # convert certain nested hashes to strings. # this is only really done for the ones koan needs as strings # this should not be done for everything if data is None: return None if data.has_key("environment"): data["environment"] = hash_to_string(data["environment"]) if data.has_key("kernel_options"): data["kernel_options"] = hash_to_string(data["kernel_options"]) if data.has_key("kernel_options_post"): data["kernel_options_post"] = hash_to_string(data["kernel_options_post"]) if data.has_key("yumopts"): data["yumopts"] = hash_to_string(data["yumopts"]) if data.has_key("ks_meta"): data["ks_meta"] = hash_to_string(data["ks_meta"]) if data.has_key("template_files"): data["template_files"] = hash_to_string(data["template_files"]) if data.has_key("boot_files"): data["boot_files"] = hash_to_string(data["boot_files"]) if data.has_key("fetchable_files"): data["fetchable_files"] = hash_to_string(data["fetchable_files"]) if data.has_key("repos") and isinstance(data["repos"], list): data["repos"] = " ".join(data["repos"]) if data.has_key("rpm_list") and isinstance(data["rpm_list"], list): data["rpm_list"] = " ".join(data["rpm_list"]) # note -- we do not need to flatten "interfaces" as koan does not expect # it to be a string, nor do we use it on a kernel options line, etc... return data def uniquify(seq, idfun=None): # credit: http://www.peterbe.com/plog/uniqifiers-benchmark # FIXME: if this is actually slower than some other way, overhaul it if idfun is None: def idfun(x): return x seen = {} result = [] for item in seq: marker = idfun(item) if marker in seen: continue seen[marker] = 1 result.append(item) return result def __consolidate(node,results): """ Merge data from a given node with the aggregate of all data from past scanned nodes. Hashes and arrays are treated specially. """ node_data = node.to_datastruct() # if the node has any data items labelled <> we need to expunge them. # so that they do not override the supernodes. node_data_copy = {} for key in node_data: value = node_data[key] if value != "<>": if isinstance(value, dict): node_data_copy[key] = value.copy() elif isinstance(value, list): node_data_copy[key] = value[:] else: node_data_copy[key] = value for field in node_data_copy: data_item = node_data_copy[field] if results.has_key(field): # now merge data types seperately depending on whether they are hash, list, # or scalar. fielddata = results[field] if isinstance(fielddata, dict): # interweave hash results results[field].update(data_item.copy()) elif isinstance(fielddata, list) or isinstance(fielddata, tuple): # add to lists (cobbler doesn't have many lists) # FIXME: should probably uniqueify list after doing this results[field].extend(data_item) results[field] = uniquify(results[field]) else: # distro field gets special handling, since we don't # want to overwrite it ever. # FIXME: should the parent's field too? It will be over- # written if there are multiple sub-profiles in # the chain of inheritance if field != "distro": results[field] = data_item else: results[field] = data_item # now if we have any "!foo" results in the list, delete corresponding # key entry "foo", and also the entry "!foo", allowing for removal # of kernel options set in a distro later in a profile, etc. 
hash_removals(results,"kernel_options") hash_removals(results,"kernel_options_post") hash_removals(results,"ks_meta") hash_removals(results,"template_files") hash_removals(results,"boot_files") hash_removals(results,"fetchable_files") def hash_removals(results,subkey): if not results.has_key(subkey): return scan = results[subkey].keys() for k in scan: if str(k).startswith("!") and k != "!": remove_me = k[1:] if results[subkey].has_key(remove_me): del results[subkey][remove_me] del results[subkey][k] def hash_to_string(hash): """ Convert a hash to a printable string. used primarily in the kernel options string and for some legacy stuff where koan expects strings (though this last part should be changed to hashes) """ buffer = "" if not isinstance(hash, dict): return hash for key in hash: value = hash[key] if not value: buffer = buffer + str(key) + " " elif isinstance(value, list): # this value is an array, so we print out every # key=value for item in value: buffer = buffer + str(key) + "=" + str(item) + " " else: buffer = buffer + str(key) + "=" + str(value) + " " return buffer def rsync_files(src, dst, args, logger=None, quiet=True): """ Sync files from src to dst. The extra arguments specified by args are appended to the command """ if args == None: args = '' RSYNC_CMD = "rsync -a %%s '%%s' %%s %s --exclude-from=/etc/cobbler/rsync.exclude" % args if quiet: RSYNC_CMD += " --quiet" else: RSYNC_CMD += " --progress" # Make sure we put a "/" on the end of the source # and destination to make sure we don't cause any # rsync weirdness if not dst.endswith("/"): dst = "%s/" % dst if not src.endswith("/"): src = "%s/" % src spacer = "" if not src.startswith("rsync://") and not src.startswith("/"): spacer = ' -e "ssh" ' rsync_cmd = RSYNC_CMD % (spacer,src,dst) try: res = subprocess_call(logger, rsync_cmd) if res != 0: die(logger, "Failed to run the rsync command: '%s'" % rsync_cmd) except: return False return True def run_this(cmd, args, logger): """ A simple wrapper around subprocess calls. """ my_cmd = cmd % args rc = subprocess_call(logger,my_cmd,shell=True) if rc != 0: die(logger,"Command failed") def run_triggers(api,ref,globber,additional=[],logger=None): """ Runs all the trigger scripts in a given directory. ref can be a cobbler object, if not None, the name will be passed to the script. If ref is None, the script will be called with no argumenets. Globber is a wildcard expression indicating which triggers to run. Example: "/var/lib/cobbler/triggers/blah/*" As of Cobbler 1.5.X, this also runs cobbler modules that match the globbing paths. 
""" # Python triggers first, before shell if logger is not None: logger.debug("running python triggers from %s" % globber) modules = api.get_modules_in_category(globber) for m in modules: arglist = [] if ref: arglist.append(ref.name) for x in additional: arglist.append(x) if logger is not None: logger.debug("running python trigger %s" % m.__name__) rc = m.run(api, arglist, logger) if rc != 0: raise CX("cobbler trigger failed: %s" % m.__name__) # now do the old shell triggers, which are usually going to be slower, but are easier to write # and support any language if logger is not None: logger.debug("running shell triggers from %s" % globber) triggers = glob.glob(globber) triggers.sort() for file in triggers: try: if file.startswith(".") or file.find(".rpm") != -1: # skip dotfiles or .rpmnew files that may have been installed # in the triggers directory continue arglist = [ file ] if ref: arglist.append(ref.name) for x in additional: if x: arglist.append(x) if logger is not None: logger.debug("running shell trigger %s" % file) rc = subprocess_call(logger, arglist, shell=False) # close_fds=True) except: if logger is not None: logger.warning("failed to execute trigger: %s" % file) continue if rc != 0: raise CX(_("cobbler trigger failed: %(file)s returns %(code)d") % { "file" : file, "code" : rc }) def fix_mod_python_select_submission(repos): """ WARNING: this is a heinous hack to convert mod_python submitted form data to something usable. Ultimately we need to fix the root cause of this which doesn't seem to happen on all versions of python/mp. """ # should be nice regex, but this is readable :) repos = str(repos) repos = repos.replace("'repos'","") repos = repos.replace("'","") repos = repos.replace("[","") repos = repos.replace("]","") repos = repos.replace("Field(","") repos = repos.replace(")","") repos = repos.replace(",","") repos = repos.replace('"',"") repos = repos.lstrip().rstrip() return repos def check_dist(): """ Determines what distro we're running under. 
""" import platform try: return platform.linux_distribution()[0].lower().strip() except AttributeError: return platform.dist()[0].lower().strip() def os_release(): if check_dist() in ("redhat","fedora","centos","scientific linux"): fh = open("/etc/redhat-release") data = fh.read().lower() if data.find("fedora") != -1: make = "fedora" elif data.find("centos") != -1: make = "centos" else: make = "redhat" release_index = data.find("release") rest = data[release_index+7:-1] tokens = rest.split(" ") for t in tokens: try: return (make,float(t)) except ValueError, ve: pass raise CX("failed to detect local OS version from /etc/redhat-release") elif check_dist() == "debian": import lsb_release release = lsb_release.get_distro_information()['RELEASE'] return ("debian", release) elif check_dist() == "ubuntu": version = sub_process.check_output(("lsb_release","--release","--short")).rstrip() make = "ubuntu" return (make, float(version)) elif check_dist() == "suse": fd = open("/etc/SuSE-release") for line in fd.read().split("\n"): if line.find("VERSION") != -1: version = line.replace("VERSION = ","") if line.find("PATCHLEVEL") != -1: rest = line.replace("PATCHLEVEL = ","") make = "suse" return (make, float(version)) else: return ("unknown",0) def tftpboot_location(): """ Guesses the location of the tftpboot directory, based on the distro on which cobblerd is running """ (make,version) = os_release() if make == "fedora" and version >= 9: return "/var/lib/tftpboot" elif make in ("redhat","centos") and version >= 6: return "/var/lib/tftpboot" elif make == "suse": return "/srv/tftpboot" # As of Ubuntu 12.04, while they seem to have settled on sticking with # /var/lib/tftpboot, they haven't scrubbed all of the packages that came # from Debian that use /srv/tftp by default. elif make == "ubuntu" and os.path.exists("/var/lib/tftpboot"): return "/var/lib/tftpboot" elif make == "ubuntu" and os.path.exists("/srv/tftp"): return "/srv/tftp" elif make == "debian" and int(version.split('.')[0]) < 6: return "/var/lib/tftpboot" elif make == "debian" and int(version.split('.')[0]) >= 6: return "/srv/tftp" else: return "/tftpboot" def can_do_public_content(api): """ Returns whether we can use public_content_t which greatly simplifies SELinux usage. """ (dist, ver) = api.get_os_details() if dist == "redhat" and ver <= 4: return False return True def is_safe_to_hardlink(src,dst,api): (dev1, path1) = get_file_device_path(src) (dev2, path2) = get_file_device_path(dst) if dev1 != dev2: return False if dev1.find(":") != -1: # is remoted return False # note: this is very cobbler implementation specific! if not api.is_selinux_enabled(): return True if _re_initrd.match(os.path.basename(path1)): return True if _re_kernel.match(os.path.basename(path1)): return True # we're dealing with SELinux and files that are not safe to chcon return False def hashfile(fn, lcache=None, logger=None): """ Returns the sha1sum of the file """ db = {} try: dbfile = os.path.join(lcache,'link_cache.json') if os.path.exists(dbfile): db = simplejson.load(open(dbfile, 'r')) except: pass mtime = os.stat(fn).st_mtime if db.has_key(fn): if db[fn][0] >= mtime: return db[fn][1] if os.path.exists(fn): cmd = '/usr/bin/sha1sum %s'%fn key = subprocess_get(logger,cmd).split(' ')[0] if lcache is not None: db[fn] = (mtime,key) simplejson.dump(db, open(dbfile,'w')) return key else: return None def cachefile(src, dst, api=None, logger=None): """ Copy a file into a cache and link it into place. 
Use this with caution, otherwise you could end up copying data twice if the cache is not on the same device as the destination """ lcache = os.path.join(os.path.dirname(os.path.dirname(dst)),'.link_cache') if not os.path.isdir(lcache): os.mkdir(lcache) key = hashfile(src, lcache=lcache, logger=logger) cachefile = os.path.join(lcache, key) if not os.path.exists(cachefile): logger.info("trying to create cache file %s"%cachefile) copyfile(src,cachefile,api=api,logger=logger) logger.debug("trying cachelink %s -> %s -> %s"%(src,cachefile,dst)) rc = os.link(cachefile,dst) return rc def linkfile(src, dst, symlink_ok=False, cache=True, api=None, logger=None): """ Attempt to create a link dst that points to src. Because file systems suck we attempt several different methods or bail to copyfile() """ if api is None: # FIXME: this really should not be a keyword # arg raise "Internal error: API handle is required" is_remote = is_remote_file(src) if os.path.exists(dst): # if the destination exists, is it right in terms of accuracy # and context? if os.path.samefile(src, dst): if not is_safe_to_hardlink(src,dst,api): # may have to remove old hardlinks for SELinux reasons # as previous implementations were not complete if logger is not None: logger.info("removing: %s" % dst) os.remove(dst) else: return True elif os.path.islink(dst): # existing path exists and is a symlink, update the symlink if logger is not None: logger.info("removing: %s" % dst) os.remove(dst) if is_safe_to_hardlink(src,dst,api): # we can try a hardlink if the destination isn't to NFS or Samba # this will help save space and sync time. try: if logger is not None: logger.info("trying hardlink %s -> %s" % (src,dst)) rc = os.link(src, dst) return rc except (IOError, OSError): # hardlink across devices, or link already exists # we'll just symlink it if we can # or otherwise copy it pass if symlink_ok: # we can symlink anywhere except for /tftpboot because # that is run chroot, so if we can symlink now, try it. try: if logger is not None: logger.info("trying symlink %s -> %s" % (src,dst)) rc = os.symlink(src, dst) return rc except (IOError, OSError): pass if cache: try: return cachefile(src,dst,api=api,logger=logger) except (IOError, OSError): pass # we couldn't hardlink and we couldn't symlink so we must copy return copyfile(src, dst, api=api, logger=logger) def copyfile(src,dst,api=None,logger=None): try: if logger is not None: logger.info("copying: %s -> %s" % (src,dst)) rc = shutil.copyfile(src,dst) return rc except: if not os.access(src,os.R_OK): raise CX(_("Cannot read: %s") % src) if not os.path.samefile(src,dst): # accomodate for the possibility that we already copied # the file as a symlink/hardlink raise # traceback.print_exc() # raise CX(_("Error copying %(src)s to %(dst)s") % { "src" : src, "dst" : dst}) def check_openfiles(src): """ Used to check for open files on a mounted partition. 
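    Returns the number of lines of "lsof +D <src>" output (piped through
    wc -l), so 0 is the expected result when nothing under the mount point
    is currently open; treat it as a rough indicator, not an exact file
    count.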
""" try: if not os.path.isdir(src): raise CX(_("Error in check_openfiles: the source (%s) must be a directory") % src) cmd = [ "/usr/sbin/lsof", "+D", src, "-Fn", "|", "wc", "-l" ] handle = sub_process.Popen(cmd, shell=True, stdout=sub_process.PIPE, close_fds=True) out = handle.stdout results = out.read() return int(results) except: if not os.access(src,os.R_OK): raise CX(_("Cannot read: %s") % src) if not os.path.samefile(src,dst): # accomodate for the possibility that we already copied # the file as a symlink/hardlink raise def copyfile_pattern(pattern,dst,require_match=True,symlink_ok=False,cache=True,api=None,logger=None): files = glob.glob(pattern) if require_match and not len(files) > 0: raise CX(_("Could not find files matching %s") % pattern) for file in files: base = os.path.basename(file) dst1 = os.path.join(dst,os.path.basename(file)) linkfile(file,dst1,symlink_ok=symlink_ok,cache=cache,api=api,logger=logger) def rmfile(path,logger=None): try: if logger is not None: logger.info("removing: %s" % path) os.unlink(path) return True except OSError, ioe: if not ioe.errno == errno.ENOENT: # doesn't exist if logger is not None: log_exc(logger) raise CX(_("Error deleting %s") % path) return True def rmtree_contents(path,logger=None): what_to_delete = glob.glob("%s/*" % path) for x in what_to_delete: rmtree(x,logger=logger) def rmtree(path,logger=None): try: if os.path.isfile(path): return rmfile(path,logger=logger) else: if logger is not None: logger.info("removing: %s" % path) return shutil.rmtree(path,ignore_errors=True) except OSError, ioe: if logger is not None: log_exc(logger) if not ioe.errno == errno.ENOENT: # doesn't exist raise CX(_("Error deleting %s") % path) return True def mkdir(path,mode=0755,logger=None): try: if logger is not None: logger.info("mkdir: %s" % path) return os.makedirs(path,mode) except OSError, oe: if not oe.errno == 17: # already exists (no constant for 17?) 
if logger is not None: log_exc(logger) raise CX(_("Error creating %s") % path) def path_tail(apath, bpath): """ Given two paths (B is longer than A), find the part in B not in A """ position = bpath.find(apath) if position != 0: return "" rposition = position + len(apath) result = bpath[rposition:] if not result.startswith("/"): result = "/" + result return result def set_redhat_management_key(self,key): self.redhat_management_key = key return True def set_redhat_management_server(self,server): self.redhat_management_server = server return True def set_arch(self,arch,repo=False): if arch is None or arch == "" or arch == "standard" or arch == "x86": arch = "i386" if repo: valids = [ "i386", "x86_64", "ia64", "ppc", "ppc64", "s390", "s390x", "noarch", "src", "arm" ] else: valids = [ "i386", "x86_64", "ia64", "ppc", "ppc64", "s390", "s390x", "arm" ] if arch in valids: self.arch = arch return True raise CX("arch choices include: %s" % ", ".join(valids)) def set_os_version(self,os_version): if os_version == "" or os_version is None: self.os_version = "" return True self.os_version = os_version.lower() if self.breed is None or self.breed == "": raise CX(_("cannot set --os-version without setting --breed first")) if not self.breed in get_valid_breeds(): raise CX(_("fix --breed first before applying this setting")) matched = SIGNATURE_CACHE["breeds"][self.breed] if not os_version in matched: nicer = ", ".join(matched) raise CX(_("--os-version for breed %s must be one of %s, given was %s") % (self.breed, nicer, os_version)) self.os_version = os_version return True def set_breed(self,breed): valid_breeds = get_valid_breeds() if breed is not None and breed.lower() in valid_breeds: self.breed = breed.lower() return True nicer = ", ".join(valid_breeds) raise CX(_("invalid value for --breed (%s), must be one of %s, different breeds have different levels of support") % (breed, nicer)) def set_repo_os_version(self,os_version): if os_version == "" or os_version is None: self.os_version = "" return True self.os_version = os_version.lower() if self.breed is None or self.breed == "": raise CX(_("cannot set --os-version without setting --breed first")) if not self.breed in codes.VALID_REPO_BREEDS: raise CX(_("fix --breed first before applying this setting")) self.os_version = os_version return True def set_repo_breed(self,breed): valid_breeds = codes.VALID_REPO_BREEDS if breed is not None and breed.lower() in valid_breeds: self.breed = breed.lower() return True nicer = ", ".join(valid_breeds) raise CX(_("invalid value for --breed (%s), must be one of %s, different breeds have different levels of support") % (breed, nicer)) def set_repos(self,repos,bypass_check=False): # WARNING: hack # repos = fix_mod_python_select_submission(repos) # allow the magic inherit string to persist if repos == "<>": self.repos = "<>" return True # store as an array regardless of input type if repos is None: self.repos = [] else: self.repos = input_string_or_list(repos) if bypass_check: return True for r in self.repos: if self.config.repos().find(name=r) is None: raise CX(_("repo %s is not defined") % r) return True def set_virt_file_size(self,num): """ For Virt only. Specifies the size of the virt image in gigabytes. 
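    Accepts a single size or a comma separated list for multiple disks,
    e.g. "5" or "5,10,20" (values shown are only examples).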
Older versions of koan (x<0.6.3) interpret 0 as "don't care" Newer versions (x>=0.6.4) interpret 0 as "no disks" """ # num is a non-negative integer (0 means default) # can also be a comma seperated list -- for usage with multiple disks if num is None or num == "": self.virt_file_size = 0 return True if num == "<>": self.virt_file_size = "<>" return True if isinstance(num, basestring) and num.find(",") != -1: tokens = num.split(",") for t in tokens: # hack to run validation on each self.set_virt_file_size(t) # if no exceptions raised, good enough self.virt_file_size = num return True try: inum = int(num) if inum != float(num): raise CX(_("invalid virt file size (%s)" % num)) if inum >= 0: self.virt_file_size = inum return True raise CX(_("invalid virt file size (%s)" % num)) except: raise CX(_("invalid virt file size (%s)" % num)) return True def set_virt_disk_driver(self,driver): """ For Virt only. Specifies the on-disk format for the virtualized disk """ # FIXME: we should probably check the driver type # here against the libvirt/virtinst list of # drivers, but this makes things more flexible # meaning we don't have to manage this list # and it's up to the user not to enter an # unsupported disk format self.virt_disk_driver = driver return True def set_virt_auto_boot(self,num): """ For Virt only. Specifies whether the VM should automatically boot upon host reboot 0 tells Koan not to auto_boot virtuals """ if num == "<>": self.virt_auto_boot = "<>" return True # num is a non-negative integer (0 means default) try: inum = int(num) if (inum == 0) or (inum == 1): self.virt_auto_boot = inum return True raise CX(_("invalid virt_auto_boot value (%s): value must be either '0' (disabled) or '1' (enabled)" % inum)) except: raise CX(_("invalid virt_auto_boot value (%s): value must be either '0' (disabled) or '1' (enabled)" % num)) return True def set_virt_pxe_boot(self,num): """ For Virt only. Specifies whether the VM should use PXE for booting 0 tells Koan not to PXE boot virtuals """ # num is a non-negative integer (0 means default) try: inum = int(num) if (inum == 0) or (inum == 1): self.virt_pxe_boot = inum return True raise CX(_("invalid virt_pxe_boot value (%s): value must be either '0' (disabled) or '1' (enabled)" % inum)) except: raise CX(_("invalid virt_pxe_boot value (%s): value must be either '0' (disabled) or '1' (enabled)" % num)) return True def set_virt_ram(self,num): """ For Virt only. Specifies the size of the Virt RAM in MB. 0 tells Koan to just choose a reasonable default. """ if num == "<>": self.virt_ram = "<>" return True # num is a non-negative integer (0 means default) try: inum = int(num) if inum != float(num): raise CX(_("invalid virt ram size (%s)" % num)) if inum >= 0: self.virt_ram = inum return True raise CX(_("invalid virt ram size (%s)" % num)) except: raise CX(_("invalid virt ram size (%s)" % num)) return True def set_virt_type(self,vtype): """ Virtualization preference, can be overridden by koan. """ if vtype == "<>": self.virt_type = "<>" return True if vtype.lower() not in [ "qemu", "kvm", "xenpv", "xenfv", "vmware", "vmwarew", "openvz", "auto" ]: raise CX(_("invalid virt type (%s)" % vtype)) self.virt_type = vtype return True def set_virt_bridge(self,vbridge): """ The default bridge for all virtual interfaces under this profile. """ if vbridge is None or vbridge == "": vbridge = self.settings.default_virt_bridge self.virt_bridge = vbridge return True def set_virt_path(self,path,for_system=False): """ Virtual storage location suggestion, can be overriden by koan. 
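    Illustrative values (per koan's handling of virt paths): a directory
    such as /var/lib/libvirt/images, an existing partition, or an LVM
    volume group name. On a system object an empty value is turned into
    the special inherit marker (see below).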
""" if path is None: path = "" if for_system: if path == "": path = "<>" self.virt_path = path return True def set_virt_cpus(self,num): """ For Virt only. Set the number of virtual CPUs to give to the virtual machine. This is fed to virtinst RAW, so cobbler will not yelp if you try to feed it 9999 CPUs. No formatting like 9,999 please :) """ if num == "" or num is None: self.virt_cpus = 1 return True if num == "<>": self.virt_cpus = "<>" return True try: num = int(str(num)) except: raise CX(_("invalid number of virtual CPUs (%s)" % num)) self.virt_cpus = num return True def get_kickstart_templates(api): files = {} for x in api.profiles(): if x.kickstart is not None and x.kickstart != "" and x.kickstart != "<>": if os.path.exists(x.kickstart): files[x.kickstart] = 1 for x in api.systems(): if x.kickstart is not None and x.kickstart != "" and x.kickstart != "<>": if os.path.exists(x.kickstart): files[x.kickstart] = 1 for x in glob.glob("/var/lib/cobbler/kickstarts/*"): if os.path.isfile(x): files[x] = 1 for x in glob.glob("/etc/cobbler/*.ks"): if os.path.isfile(x): files[x] = 1 results = files.keys() results.sort() return results def safe_filter(var): if var is None: return if var.find("..") != -1 or var.find(";") != -1: raise CX("Invalid characters found in input") def is_selinux_enabled(): if not os.path.exists("/usr/sbin/selinuxenabled"): return False args = "/usr/sbin/selinuxenabled" selinuxenabled = sub_process.call(args,close_fds=True) if selinuxenabled == 0: return True else: return False import os import sys import random # We cache the contents of /etc/mtab ... the following variables are used # to keep our cache in sync mtab_mtime = None mtab_map = [] class MntEntObj(object): mnt_fsname = None #* name of mounted file system */ mnt_dir = None #* file system path prefix */ mnt_type = None #* mount type (see mntent.h) */ mnt_opts = None #* mount options (see mntent.h) */ mnt_freq = 0 #* dump frequency in days */ mnt_passno = 0 #* pass number on parallel fsck */ def __init__(self,input=None): if input and isinstance(input, basestring): (self.mnt_fsname, self.mnt_dir, self.mnt_type, self.mnt_opts, \ self.mnt_freq, self.mnt_passno) = input.split() def __dict__(self): return {"mnt_fsname": self.mnt_fsname, "mnt_dir": self.mnt_dir, \ "mnt_type": self.mnt_type, "mnt_opts": self.mnt_opts, \ "mnt_freq": self.mnt_freq, "mnt_passno": self.mnt_passno} def __str__(self): return "%s %s %s %s %s %s" % (self.mnt_fsname, self.mnt_dir, self.mnt_type, \ self.mnt_opts, self.mnt_freq, self.mnt_passno) def get_mtab(mtab="/etc/mtab", vfstype=None): global mtab_mtime, mtab_map mtab_stat = os.stat(mtab) if mtab_stat.st_mtime != mtab_mtime: '''cache is stale ... refresh''' mtab_mtime = mtab_stat.st_mtime mtab_map = __cache_mtab__(mtab) # was a specific fstype requested? if vfstype: mtab_type_map = [] for ent in mtab_map: if ent.mnt_type == "nfs": mtab_type_map.append(ent) return mtab_type_map return mtab_map def __cache_mtab__(mtab="/etc/mtab"): global mtab_mtime f = open(mtab) mtab = [MntEntObj(line) for line in f.read().split('\n') if len(line) > 0] f.close() return mtab def get_file_device_path(fname): '''What this function attempts to do is take a file and return: - the device the file is on - the path of the file relative to the device. 
For example: /boot/vmlinuz -> (/dev/sda3, /vmlinuz) /boot/efi/efi/redhat/elilo.conf -> (/dev/cciss0, /elilo.conf) /etc/fstab -> (/dev/sda4, /etc/fstab) ''' # resolve any symlinks fname = os.path.realpath(fname) # convert mtab to a dict mtab_dict = {} for ent in get_mtab(): mtab_dict[ent.mnt_dir] = ent.mnt_fsname # find a best match fdir = os.path.dirname(fname) match = mtab_dict.has_key(fdir) while not match: fdir = os.path.realpath(os.path.join(fdir, os.path.pardir)) match = mtab_dict.has_key(fdir) # construct file path relative to device if fdir != os.path.sep: fname = fname[len(fdir):] return (mtab_dict[fdir], fname) def is_remote_file(file): (dev, path) = get_file_device_path(file) if dev.find(":") != -1: return True else: return False def subprocess_sp(logger, cmd, shell=True, input=None): if logger is not None: logger.info("running: %s" % cmd) stdin = None if input: stdin = sub_process.PIPE try: sp = sub_process.Popen(cmd, shell=shell, stdin=stdin, stdout=sub_process.PIPE, stderr=sub_process.PIPE, close_fds=True) except OSError: if logger is not None: log_exc(logger) die(logger, "OS Error, command not found? While running: %s" % cmd) (out,err) = sp.communicate(input) rc = sp.returncode if logger is not None: logger.info("received on stdout: %s" % out) logger.debug("received on stderr: %s" % err) return out, rc def subprocess_call(logger, cmd, shell=True, input=None): data, rc = subprocess_sp(logger, cmd, shell=shell, input=input) return rc def subprocess_get(logger, cmd, shell=True, input=None): data, rc = subprocess_sp(logger, cmd, shell=shell, input=input) return data def popen2(args, **kwargs): """ Leftovers from borrowing some bits from Snake, replace this function with just the subprocess call. """ p = sub_process.Popen(args, stdout=sub_process.PIPE, stdin=sub_process.PIPE, **kwargs) return (p.stdout, p.stdin) def ram_consumption_of_guests(host, api): guest_names = host.virt_guests ttl_ram = 0 if len(guest_names) == 0: # a system with no virt hosts already is our best # candidate return 0 for g in guest_names: host_obj = api.find_system(g) if host_obj is None: # guest object was deleted but host was not updated continue host_data = blender(api,False,host_obj) ram = host_data.get("virt_ram", 512) ttl_ram = ttl_ram + host_data["virt_ram"] return ttl_ram def choose_virt_host(systems, api): """ From a list of systems, choose a system that can best host a virtual machine. This initial engine is not as optimal as it could be, but works by determining the system with the least amount of VM RAM deployed as defined by the amount of virtual ram on each guest for each guest that the hosts hosts. Hop on pop. This does assume hosts are reasonably homogenous. In the future this heuristic should be pluggable and be able to tap into other external data sources and maybe basic usage stats. """ if len(systems) == 0: raise CX("empty candidate systems list") by_name = {} least_host = systems[0] least_host_ct = -1 for s in systems: ct = ram_consumption_of_guests(s, api) if (ct < least_host_ct) or (least_host_ct == -1): least_host = s least_host_ct = ct return least_host.name def os_system(cmd): """ os.system doesn't close file descriptors, so this is a wrapper to ensure we never use it. """ rc = sub_process.call(cmd, shell=True, close_fds=True) return rc def clear_from_fields(obj, fields, is_subobject=False): """ Used by various item_*.py classes for automating datastructure boilerplate. """ for elems in fields: # if elems startswith * it's an interface field and we do not operate on it. 
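# (as used throughout this module, each elems entry is a field tuple:
#  [0] name, [1] default, [2] subobject default, [3] nice name,
#  [4] editable flag, [5] tooltip, [6] choices)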
if elems[0].startswith("*") or elems[0].find("widget") != -1: continue if is_subobject: val = elems[2] else: val = elems[1] if isinstance(val,basestring): if val.startswith("SETTINGS:"): setkey = val.split(":")[-1] val = getattr(obj.settings, setkey) setattr(obj, elems[0], val) if obj.COLLECTION_TYPE == "system": obj.interfaces = {} def from_datastruct_from_fields(obj, seed_data, fields): int_fields = [] for elems in fields: # we don't have to load interface fields here if elems[0].startswith("*") or elems[0].find("widget") != -1: if elems[0].startswith("*"): int_fields.append(elems) continue src_k = dst_k = elems[0] # deprecated field switcheroo if field_info.DEPRECATED_FIELDS.has_key(src_k): dst_k = field_info.DEPRECATED_FIELDS[src_k] if seed_data.has_key(src_k): setattr(obj, dst_k, seed_data[src_k]) if obj.uid == '': obj.uid = obj.config.generate_uid() # special handling for interfaces if obj.COLLECTION_TYPE == "system": obj.interfaces = copy.deepcopy(seed_data["interfaces"]) # deprecated field switcheroo for interfaces for interface in obj.interfaces.keys(): for k in obj.interfaces[interface].keys(): if field_info.DEPRECATED_FIELDS.has_key(k): if not obj.interfaces[interface].has_key(field_info.DEPRECATED_FIELDS[k]) or \ obj.interfaces[interface][field_info.DEPRECATED_FIELDS[k]] == "": obj.interfaces[interface][field_info.DEPRECATED_FIELDS[k]] = obj.interfaces[interface][k] # populate fields that might be missing for int_field in int_fields: if not obj.interfaces[interface].has_key(int_field[0][1:]): obj.interfaces[interface][int_field[0][1:]] = int_field[1] return obj def get_methods_from_fields(obj, fields): ds = {} for elem in fields: k = elem[0] # modify interfaces is handled differently, and need not work this way if k.startswith("*") or k.find("widget") != -1: continue setfn = getattr(obj, "set_%s" % k) ds[k] = setfn return ds def to_datastruct_from_fields(obj, fields): ds = {} for elem in fields: k = elem[0] if k.startswith("*") or k.find("widget") != -1: continue data = getattr(obj, k) ds[k] = data # interfaces on systems require somewhat special handling # they are the only exception in Cobbler. if obj.COLLECTION_TYPE == "system": ds["interfaces"] = copy.deepcopy(obj.interfaces) #for interface in ds["interfaces"].keys(): # for k in ds["interfaces"][interface].keys(): # if field_info.DEPRECATED_FIELDS.has_key(k): # ds["interfaces"][interface][field_info.DEPRECATED_FIELDS[k]] = ds["interfaces"][interface][k] return ds def printable_from_fields(obj, fields): """ Obj is a hash datastructure, fields is something like item_distro.FIELDS """ buf = "" keys = [] for elem in fields: keys.append((elem[0], elem[3], elem[4])) keys.sort() buf = buf + "%-30s : %s\n" % ("Name", obj["name"]) for (k, nicename, editable) in keys: # FIXME: supress fields users don't need to see? # FIXME: interfaces should be sorted # FIXME: print ctime, mtime nicely if k.startswith("*") or not editable or k.find("widget") != -1: continue if k != "name": # FIXME: move examples one field over, use description here. buf = buf + "%-30s : %s\n" % (nicename, obj[k]) # somewhat brain-melting special handling to print the hashes # inside of the interfaces more neatly. 
if obj.has_key("interfaces"): for iname in obj["interfaces"].keys(): # FIXME: inames possibly not sorted buf = buf + "%-30s : %s\n" % ("Interface ===== ",iname) for (k, nicename, editable) in keys: nkey = k.replace("*","") if k.startswith("*") and editable: buf = buf + "%-30s : %s\n" % (nicename, obj["interfaces"][iname].get(nkey,"")) return buf def matches_args(args, list_of): """ Used to simplify some code around which arguments to add when. """ for x in args: if x in list_of: return True return False def add_options_from_fields(object_type, parser, fields, object_action): if object_action in ["add","edit","find","copy","rename"]: for elem in fields: k = elem[0] if k.find("widget") != -1: continue # scrub interface tags so all fields get added correctly. k = k.replace("*","") default = elem[1] nicename = elem[3] tooltip = elem[5] choices = elem[6] if field_info.ALTERNATE_OPTIONS.has_key(k): niceopt = field_info.ALTERNATE_OPTIONS[k] else: niceopt = "--%s" % k.replace("_","-") desc = nicename if tooltip != "": desc = nicename + " (%s)" % tooltip aliasopt = [] for deprecated_field in field_info.DEPRECATED_FIELDS.keys(): if field_info.DEPRECATED_FIELDS[deprecated_field] == k: aliasopt.append("--%s" % deprecated_field) if isinstance(choices, list) and len(choices) != 0: if default not in choices: choices.append(default) desc = desc + " (valid options: %s)" % ",".join(choices) parser.add_option(niceopt, dest=k, help=desc, choices=choices) for alias in aliasopt: parser.add_option(alias, dest=k, help=desc, choices=choices) else: parser.add_option(niceopt, dest=k, help=desc) for alias in aliasopt: parser.add_option(alias, dest=k, help=desc) if object_type == "system": # system object parser.add_option("--interface", dest="interface", help="the interface to operate on (can only be specified once per command line)") if object_action in ["add","edit"]: parser.add_option("--delete-interface", dest="delete_interface", action="store_true") parser.add_option("--rename-interface", dest="rename_interface") if object_action in ["copy","rename"]: parser.add_option("--newname", help="new object name") if object_action not in ["find",] and object_type != "setting": parser.add_option("--clobber", dest="clobber", help="allow add to overwrite existing objects", action="store_true") parser.add_option("--in-place", action="store_true", default=False, dest="in_place", help="edit items in kopts or ksmeta without clearing the other items") elif object_action == "remove": parser.add_option("--name", help="%s name to remove" % object_type) parser.add_option("--recursive", action="store_true", dest="recursive", help="also delete child objects") # FIXME: not supported in 2.0 ? #if not object_action in ["dumpvars","find","remove","report","list"]: # parser.add_option("--no-sync", action="store_true", dest="nosync", help="suppress sync for speed") # FIXME: not supported in 2.0 ? # if not matches_args(args,["dumpvars","report","list"]): # parser.add_option("--no-triggers", action="store_true", dest="notriggers", help="suppress trigger execution") def get_remote_methods_from_fields(obj,fields): """ Return the name of set functions for all fields, keyed by the field name. 
""" ds = {} for elem in fields: name = elem[0].replace("*","") if name.find("widget") == -1: ds[name] = getattr(obj,"set_%s" % name) if obj.COLLECTION_TYPE == "system": ds["modify_interface"] = getattr(obj,"modify_interface") ds["delete_interface"] = getattr(obj,"delete_interface") ds["rename_interface"] = getattr(obj,"rename_interface") return ds def get_power_types(): """ Return all possible power types """ power_types = [] power_template = re.compile(r'fence_(.*)') fence_files = glob.glob("/usr/sbin/fence_*") + glob.glob("/sbin/fence_*") for x in fence_files: power_types.append(power_template.search(x).group(1)) power_types.sort() return power_types def get_power(powertype=None): """ Return power command for type """ if powertype: # try /sbin, then /usr/sbin powerpath1 = "/sbin/fence_%s" % powertype powerpath2 = "/usr/sbin/fence_%s" % powertype for powerpath in (powerpath1,powerpath2): if os.path.isfile(powerpath) and os.access(powerpath, os.X_OK): return powerpath return None def get_power_template(powertype=None): """ Return power template for type """ if powertype: powertemplate = "/etc/cobbler/power/fence_%s.template" % powertype if os.path.isfile(powertemplate): f = open(powertemplate) template = f.read() f.close() return template # return a generic template if a specific one wasn't found return "action=$power_mode\nlogin=$power_user\npasswd=$power_pass\nipaddr=$power_address\nport=$power_id" def load_signatures(filename,cache=True): """ Loads the import signatures for distros """ global SIGNATURE_CACHE try: f = open(filename,"r") sigjson = f.read() f.close() sigdata = simplejson.loads(sigjson) if cache: SIGNATURE_CACHE = sigdata return True except: return False def get_valid_breeds(): """ Return a list of valid breeds found in the import signatures """ if SIGNATURE_CACHE.has_key("breeds"): return SIGNATURE_CACHE["breeds"].keys() else: return [] def get_valid_os_versions_for_breed(breed): """ Return a list of valid os-versions for the given breed """ os_versions = [] if breed in get_valid_breeds(): os_versions = SIGNATURE_CACHE["breeds"][breed].keys() return os_versions def get_valid_os_versions(): """ Return a list of valid os-versions found in the import signatures """ os_versions = [] try: for breed in get_valid_breeds(): os_versions += SIGNATURE_CACHE["breeds"][breed].keys() except: pass return uniquify(os_versions) def get_shared_secret(): """ The 'web.ss' file is regenerated each time cobblerd restarts and is used to agree on shared secret interchange between mod_python and cobblerd, and also the CLI and cobblerd, when username/password access is not required. For the CLI, this enables root users to avoid entering username/pass if on the cobbler server. 
""" try: fd = open("/var/lib/cobbler/web.ss") data = fd.read() except: return -1 return str(data).strip() def local_get_cobbler_api_url(): # Load server and http port try: fh = open("/etc/cobbler/settings") data = yaml.safe_load(fh.read()) fh.close() except: traceback.print_exc() raise CX("/etc/cobbler/settings is not a valid YAML file") ip = data.get("server","127.0.0.1") if data.get("client_use_localhost", False): # this overrides the server setting ip = "127.0.0.1" port = data.get("http_port","80") protocol = "http" if data.get("client_use_https", False): protocol = "https" return "%s://%s:%s/cobbler_api" % (protocol,ip,port) def get_ldap_template(ldaptype=None): """ Return ldap command for type """ if ldaptype: ldappath = "/etc/cobbler/ldap/ldap_%s.template" % ldaptype if os.path.isfile(ldappath): return ldappath return None def local_get_cobbler_xmlrpc_url(): # Load xmlrpc port try: fh = open("/etc/cobbler/settings") data = yaml.safe_load(fh.read()) fh.close() except: traceback.print_exc() raise CX("/etc/cobbler/settings is not a valid YAML file") return "http://%s:%s" % ("127.0.0.1",data.get("xmlrpc_port","25151")) def strip_none(data, omit_none=False): """ Remove "none" entries from datastructures. Used prior to communicating with XMLRPC. """ if data is None: data = '~' elif isinstance(data, list): data2 = [] for x in data: if omit_none and x is None: pass else: data2.append(strip_none(x)) return data2 elif isinstance(data, dict): data2 = {} for key in data.keys(): keydata = data[key] if omit_none and data[key] is None: pass else: data2[str(key)] = strip_none(data[key]) return data2 return data def cli_find_via_xmlrpc(remote, otype, options): """ Given an options object and a remote handle, find options matching the criteria given. """ criteria = strip_none2(options.__dict__) return remote.find_items(otype,criteria,"name",False) # ------------------------------------------------------- def loh_to_hoh(datastruct, indexkey): """ things like get_distros() returns a list of a hashes convert this to a hash of hashes keyed off of an arbitrary field EX: [ { "a" : 2 }, { "a : 3 } ] -> { "2" : { "a" : 2 }, "3" : { "a" : "3" } """ results = {} for item in datastruct: results[item[indexkey]] = item return results # ------------------------------------------------------- def loh_sort_by_key(datastruct, indexkey): """ Sorts a list of hashes by a given key in the hashes note: this is a destructive operation """ datastruct.sort(lambda a, b: a[indexkey] < b[indexkey]) return datastruct def dhcpconf_location(api): version = api.os_version (dist, ver) = api.get_os_details() if version[0] in [ "redhat", "centos" ] and version[1] < 6: return "/etc/dhcpd.conf" elif version[0] in [ "fedora" ] and version[1] < 11: return "/etc/dhcpd.conf" elif dist == "suse": return "/etc/dhcpd.conf" elif dist == "debian" and int(version[1].split('.')[0]) < 6: return "/etc/dhcp3/dhcpd.conf" elif dist == "ubuntu" and version[1] < 11.10: return "/etc/dhcp3/dhcpd.conf" else: return "/etc/dhcp/dhcpd.conf" def namedconf_location(api): (dist, ver) = api.os_version if dist == "debian" or dist == "ubuntu": return "/etc/bind/named.conf" else: return "/etc/named.conf" def zonefile_base(api): (dist, version) = api.os_version if dist == "debian" or dist == "ubuntu": return "/etc/bind/db." 
else: return "/var/named/" def dhcp_service_name(api): (dist, version) = api.os_version if dist == "debian" and int(version.split('.')[0]) < 6: return "dhcp3-server" elif dist == "debian" and int(version.split('.')[0]) >= 6: return "isc-dhcp-server" elif dist == "ubuntu" and version < 11.10: return "dhcp3-server" elif dist == "ubuntu" and version >= 11.10: return "isc-dhcp-server" else: return "dhcpd" def named_service_name(api): (dist, ver) = api.os_version if dist == "debian" or dist == "ubuntu": return "bind9" else: return "named" def link_distro(settings, distro): # find the tree location base = find_distro_path(settings, distro) if not base: return dest_link = os.path.join(settings.webdir, "links", distro.name) # create the links directory only if we are mirroring because with # SELinux Apache can't symlink to NFS (without some doing) if not os.path.lexists(dest_link): try: os.symlink(base, dest_link) except: # this shouldn't happen but I've seen it ... debug ... print _("- symlink creation failed: %(base)s, %(dest)s") % { "base" : base, "dest" : dest_link } def find_distro_path(settings, distro): possible_dirs = glob.glob(settings.webdir+"/ks_mirror/*") for dir in possible_dirs: if os.path.dirname(distro.kernel).find(dir) != -1: return os.path.join(settings.webdir, "ks_mirror", dir) # non-standard directory, assume it's the same as the # directory in which the given distro's kernel is return os.path.dirname(distro.kernel) if __name__ == "__main__": print os_release() # returns 2, not 3 cobbler-2.4.1/cobbler/yumgen.py000066400000000000000000000070641227367477500164470ustar00rootroot00000000000000""" Builds out filesystem trees/data based on the object tree. This is the code behind 'cobbler sync'. Copyright 2006-2009, Red Hat, Inc and Others Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import os.path import shutil import time import sys import glob import traceback import errno import utils from cexceptions import * import templar import item_distro import item_profile import item_repo import item_system from utils import _ class YumGen: def __init__(self,config): """ Constructor """ self.config = config self.api = config.api self.distros = config.distros() self.profiles = config.profiles() self.systems = config.systems() self.settings = config.settings() self.repos = config.repos() self.templar = templar.Templar(config) def get_yum_config(self,obj,is_profile): """ Return one large yum repo config blob suitable for use by any target system that requests it. """ totalbuf = "" blended = utils.blender(self.api, False, obj) input_files = [] # chance old versions from upgrade do not have a source_repos # workaround for user bug if not blended.has_key("source_repos"): blended["source_repos"] = [] # tack on all the install source repos IF there is more than one. 
# this is basically to support things like RHEL5 split trees # if there is only one, then there is no need to do this. included = {} for r in blended["source_repos"]: filename = self.settings.webdir + "/" + "/".join(r[0].split("/")[4:]) if not included.has_key(filename): input_files.append(filename) included[filename] = 1 for repo in blended["repos"]: path = os.path.join(self.settings.webdir, "repo_mirror", repo, "config.repo") if not included.has_key(path): input_files.append(path) included[path] = 1 for infile in input_files: if infile.find("ks_mirror") == -1: dispname = infile.split("/")[-2] else: dispname = infile.split("/")[-1].replace(".repo","") try: infile_h = open(infile) except: # file does not exist and the user needs to run reposync # before we will use this, cobbler check will mention # this problem totalbuf = totalbuf + "\n# error: could not read repo source: %s\n\n" % infile continue infile_data = infile_h.read() infile_h.close() outfile = None # disk output only totalbuf = totalbuf + self.templar.render(infile_data, blended, outfile, None) totalbuf = totalbuf + "\n\n" return totalbuf cobbler-2.4.1/config/000077500000000000000000000000001227367477500144175ustar00rootroot00000000000000cobbler-2.4.1/config/auth.conf000066400000000000000000000000501227367477500162220ustar00rootroot00000000000000[xmlrpc_service_users] admin = DISABLED cobbler-2.4.1/config/cheetah_macros000066400000000000000000000001131227367477500173020ustar00rootroot00000000000000## define Cheetah functions here and reuse them throughout your templates cobbler-2.4.1/config/cobbler.conf000066400000000000000000000017161227367477500167030ustar00rootroot00000000000000# This configuration file allows cobbler data # to be accessed over HTTP. AliasMatch ^/cblr(?!/svc/)(.*)?$ "/var/www/cobbler$1" AliasMatch ^/cobbler_track(.*)?$ "/var/www/cobbler$1" #AliasMatch ^/cobbler(.*)?$ "/var/www/cobbler$1" Alias /cobbler /var/www/cobbler Alias /cobbler_webui_content /var/www/cobbler_webui_content WSGIScriptAliasMatch ^/cblr/svc/([^/]*) /var/www/cobbler/svc/services.py Options Indexes FollowSymLinks Order allow,deny Allow from all ProxyRequests off ProxyPass /cobbler_api http://localhost:25151/ ProxyPassReverse /cobbler_api http://localhost:25151/ BrowserMatch "MSIE" AuthDigestEnableQueryStringHack=On # the webui is now part of the "cobbler-web" package # and is visited at http://.../cobbler_web not this URL. # this is only a pointer to the new page. Options Indexes FollowSymLinks Order allow,deny Allow from all cobbler-2.4.1/config/cobbler_bash000066400000000000000000000045511227367477500167540ustar00rootroot00000000000000# emacs: -*- sh -*- # # bash completion file for cobbler # # Copyright 2008 John L. Villalovos # # This program is free software; you can redistribute it and/or modify it under # the terms of the GNU General Public License as published by the Free Software # Foundation; either version 2 of the License, or (at your option) any later # version. # # This program is distributed in the hope that it will be useful, but WITHOUT # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more # details. # # You should have received a copy of the GNU General Public License along with # this program; if not, write to the Free Software Foundation, Inc., 51 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
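#
# Usage sketch (the install path here is an assumption, not part of the original
# header): drop this file into the bash completion directory or source it in a
# shell, e.g.
#
#   . /etc/bash_completion.d/cobbler_bash
#   cobbler dist<TAB>        # offers "distro" and any other matching subcommands
#
# Completion candidates are produced by the "cobbler-completion" and
# "cobbler --helpbash" helpers that the _cobbler() function below shells out to.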
# # Version History: # 0.1: Initial version to do some basic command line completion _cobbler() { local cur prev special specialsub SPECIAL_CMDS SPECIALSUB_CMDS COMPREPLY=() cur=${COMP_WORDS[COMP_CWORD]} prev=${COMP_WORDS[COMP_CWORD-1]} # Let's only run it once SPECIAL_CMDS=`cobbler-completion | sed "s/ /|/g"` for (( i=0; i < ${#COMP_WORDS[@]}-1; i++ )); do if [[ ${COMP_WORDS[i]} == @(${SPECIAL_CMDS}) ]]; then special=${COMP_WORDS[i]} break fi done if [ -n "$special" ] then # Take care of sub commands SPECIALSUB_CMDS=`cobbler-completion $special | sed "s/ /|/g"` if [ -n "${SPECIALSUB_CMDS}" ] ; then for (( i=0; i < ${#COMP_WORDS[@]}-1; i++ )); do if [[ ${COMP_WORDS[i]} == @(${SPECIALSUB_CMDS}) ]]; then specialsub=${COMP_WORDS[i]} break fi done else # This command has no subcommands COMPREPLY=( $( compgen -f -W '$( cobbler-completion $special )' -- $cur ) ) return 0 fi if [ -n "$specialsub" ] then COMPREPLY=( $( compgen -f -W '$( cobbler-completion $special $specialsub )' -- $cur ) ) return 0 else COMPREPLY=( $( compgen -W '$( cobbler-completion $special )' -- $cur ) ) return 0 fi fi case $cur in --*) COMPREPLY=( $( compgen -W 'help' -- $cur ) ) return 0 ;; -*) COMPREPLY=( $( compgen -W '-h' -- $cur ) ) return 0 ;; esac _count_args case $args in 1) COMPREPLY=( $( compgen -W '$( cobbler --helpbash )' -- $cur ) ) ;; esac } complete -F _cobbler cobbler cobbler-2.4.1/config/cobbler_web.conf000066400000000000000000000011231227367477500175300ustar00rootroot00000000000000# This configuration file enables the cobbler web # interface (django version) # Force everything to go to https RewriteEngine on RewriteCond %{HTTPS} off RewriteCond %{REQUEST_URI} ^/cobbler_web RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} # Use separate process group for wsgi WSGISocketPrefix /var/run/wsgi WSGIScriptAlias /cobbler_web /usr/share/cobbler/web/cobbler.wsgi WSGIDaemonProcess cobbler_web display-name=%{GROUP} WSGIProcessGroup cobbler_web WSGIPassAuthorization On = 2.4> Require all granted cobbler-2.4.1/config/cobblerd000077500000000000000000000065771227367477500161400ustar00rootroot00000000000000#!/bin/sh # # cobblerd Cobbler helper daemon ################################### # LSB header ### BEGIN INIT INFO # Provides: cobblerd # Required-Start: $network $xinetd $httpd # Default-Start: 3 4 5 # Default-Stop: 0 1 2 6 # Short-Description: daemon for libvirt virtualization API # Description: This is a daemon that a provides remote cobbler API # and status tracking ### END INIT INFO # chkconfig header # chkconfig: 345 99 99 # description: This is a daemon that provides a remote cobbler API # and status tracking # # processname: /usr/bin/cobblerd # Sanity checks. [ -x /usr/bin/cobblerd ] || exit 0 DEBIAN_VERSION=/etc/debian_version SUSE_RELEASE=/etc/SuSE-release # Source function library. if [ -f $DEBIAN_VERSION ]; then . /lib/lsb/init-functions elif [ -f $SUSE_RELEASE -a -r /etc/rc.status ]; then . /etc/rc.status else . 
/etc/rc.d/init.d/functions fi SERVICE=cobblerd PROCESS=cobblerd CONFIG_ARGS=" " if [ -f $DEBIAN_VERSION ]; then LOCKFILE=/var/lock/$SERVICE else LOCKFILE=/var/lock/subsys/$SERVICE fi WSGI=/usr/share/cobbler/web/cobbler.wsgi RETVAL=0 start() { echo -n "Starting cobbler daemon: " if [ -f $SUSE_RELEASE ]; then startproc -f -p /var/run/$SERVICE.pid /usr/bin/cobblerd $CONFIG_ARGS rc_status -v elif [ -e $DEBIAN_VERSION ]; then if [ -f $LOCKFILE ]; then echo -n "already started, lock file found" RETVAL=1 elif /usr/bin/python /usr/bin/cobblerd; then echo -n "OK" RETVAL=0 fi else daemon --check $SERVICE $PROCESS --daemonize $CONFIG_ARGS fi RETVAL=$? echo [ $RETVAL -eq 0 ] && touch $LOCKFILE [ -f $WSGI ] && touch $WSGI return $RETVAL } stop() { echo -n "Stopping cobbler daemon: " if [ -f $SUSE_RELEASE ]; then killproc -TERM /usr/bin/cobblerd rc_status -v elif [ -f $DEBIAN_VERSION ]; then # Added this since Debian's start-stop-daemon doesn't support spawned processes, will remove # when cobblerd supports stopping or PID files. if ps -ef | grep "/usr/bin/python /usr/bin/cobblerd" | grep -v grep | awk '{print $2}' | xargs kill &> /dev/null; then echo -n "OK" RETVAL=0 else echo -n "Daemon is not started" RETVAL=1 fi else killproc $PROCESS fi RETVAL=$? echo if [ $RETVAL -eq 0 ]; then rm -f $LOCKFILE rm -f /var/run/$SERVICE.pid fi } restart() { stop start } # See how we were called. case "$1" in start|stop|restart) $1 ;; status) if [ -f $SUSE_RELEASE ]; then echo -n "Checking for service cobblerd " checkproc /usr/bin/cobblerd rc_status -v elif [ -f $DEBIAN_VERSION ]; then if [ -f $LOCKFILE ]; then RETVAL=0 echo "cobblerd is running." else RETVAL=1 echo "cobblerd is stopped." fi else status $PROCESS RETVAL=$? fi ;; condrestart) [ -f $LOCKFILE ] && restart || : ;; reload) echo "can't reload configuration, you have to restart it" RETVAL=$? 
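        # cobblerd cannot re-read its configuration in place; use "restart"
        # (handled above) to pick up changes to /etc/cobbler/settings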
;; *) echo "Usage: $0 {start|stop|status|restart|condrestart|reload}" exit 1 ;; esac exit $RETVAL cobbler-2.4.1/config/cobblerd.service000066400000000000000000000003771227367477500175640ustar00rootroot00000000000000[Unit] Description=Cobbler Helper Daemon After=syslog.target network.target [Service] ExecStart=/usr/bin/cobblerd -F ExecStartPost=-/usr/bin/touch /usr/share/cobbler/web/cobbler.wsgi PrivateTmp=yes KillMode=process [Install] WantedBy=multi-user.target cobbler-2.4.1/config/cobblerd_rotate000066400000000000000000000006431227367477500174770ustar00rootroot00000000000000/var/log/cobbler/cobbler.log { missingok notifempty rotate 4 weekly postrotate if [ -f /var/lock/subsys/cobblerd ]; then /etc/init.d/cobblerd condrestart > /dev/null fi endscript } /var/log/cobbler/tasks/*.log { weekly rotate 0 missingok ifempty nocompress nocreate nomail } /var/log/cobbler/install.log { missingok notifempty rotate 4 weekly } cobbler-2.4.1/config/completions000066400000000000000000000265641227367477500167130ustar00rootroot00000000000000--- aclsetup: "''addgroup": {} "''adduser": {} "''removegroup": {} "''removeuser": {} buildiso: "''iso": {} "''profiles": {} "''systems": {} "''tempdir": {} check: {} distro: add: "''arch": {} "''breed": {} "''clobber": {} "''in'place": {} "''initrd": {} "''kernel": {} "''kopts": {} "''ksmeta": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} copy: "''arch": {} "''breed": {} "''in'place": {} "''initrd": {} "''kernel": {} "''kopts": {} "''ksmeta": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} dumpvars: "''name": {} edit: "''arch": {} "''breed": {} "''in'place": {} "''initrd": {} "''kernel": {} "''kopts": {} "''ksmeta": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} find: "''arch": {} "''breed": {} "''initrd": {} "''kernel": {} "''kopts": {} "''ksmeta": {} "''name": {} "''no'triggers": {} "''owners": {} list: "''name": {} remove: "''name": {} "''no'triggers": {} "''recursive": {} rename: "''arch": {} "''breed": {} "''in'place": {} "''initrd": {} "''kernel": {} "''kopts": {} "''ksmeta": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} report: {} image: add: "''clobber": {} "''file": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} "''xml'file": {} copy: "''file": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} "''xml'file": {} dumpvars: "''name": {} edit: "''file": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} "''xml'file": {} find: "''file": {} "''name": {} "''owners": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} "''xml'file": {} list: "''name": {} remove: "''name": {} "''no'triggers": {} rename: "''file": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} "''xml'file": {} report: {} import: "''arch": {} "''available'as": {} "''kickstart": {} "''mirror": {} "''name": {} "''path": {} "''rsync'flags": {} list: "''what": {} profile: add: 
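# (structure note) each top-level key in this file is a cobbler subcommand, the
# second level is its action (add/edit/copy/...), and the quoted keys are the
# long options offered to bash completion -- the "'" character appears to stand
# in for "-", so "''no'sync" corresponds to --no-sync.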
"''clobber": {} "''dhcp'tag": {} "''distro": {} "''in'place": {} "''inherit": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''repos": {} "''server'override": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} copy: "''dhcp'tag": {} "''distro": {} "''in'place": {} "''inherit": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''repos": {} "''server'override": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} dumpvars: "''name": {} edit: "''dhcp'tag": {} "''distro": {} "''in'place": {} "''inherit": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''repos": {} "''server'override": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} find: "''dhcp'tag": {} "''distro": {} "''inherit": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''name": {} "''owners": {} "''repos": {} "''server'override": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} list: "''name": {} remove: "''name": {} "''no'triggers": {} "''owners": {} "''recursive": {} rename: "''dhcp'tag": {} "''distro": {} "''in'place": {} "''inherit": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''repos": {} "''server'override": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} report: {} replicate: "''full'data'sync": {} "''include'systems": {} "''master": {} "''sync'kickstarts": {} "''sync'repos": {} "''sync'trees": {} "''sync'triggers": {} repo: add: "''arch": {} "''clobber": {} "''createrepo'flags": {} "''in'place": {} "''keep'updated": {} "''mirror": {} "''mirror'locally": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''priority": {} "''rpm'list": {} "''yumopts": {} copy: "''arch": {} "''createrepo'flags": {} "''in'place": {} "''keep'updated": {} "''mirror": {} "''mirror'locally": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''priority": {} "''rpm'list": {} "''yumopts": {} dumpvars: "''name": {} edit: "''arch": {} "''createrepo'flags": {} "''in'place": {} "''keep'updated": {} "''mirror": {} "''mirror'locally": {} "''name": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''priority": {} "''rpm'list": {} "''yumopts": {} find: "''arch": {} "''createrepo'flags": {} "''keep'updated": {} "''mirror": {} "''mirror'locally": {} "''name": {} "''owners": {} "''priority": {} "''rpm'list": {} "''yumopts": {} list: "''name": {} remove: "''name": {} "''no'triggers": {} rename: "''arch": {} "''createrepo'flags": {} "''in'place": {} "''keep'updated": {} "''mirror": {} "''mirror'locally": {} "''name": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''priority": {} "''rpm'list": {} "''yumopts": {} report: {} report: "''name": {} "''what": {} reposync: "''only": {} reserialize: {} status: {} sync: {} system: add: "''clobber": {} "''dhcp'tag": {} "''gateway": {} "''hostname": {} "''in'place": {} "''interface": {} "''ip": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''mac": {} "''name": {} "''netboot'enabled": {} "''no'sync": {} 
"''no'triggers": {} "''owners": {} "''profile": {} "''server'override": {} "''netmask": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} copy: "''dhcp'tag": {} "''gateway": {} "''hostname": {} "''in'place": {} "''interface": {} "''ip": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''mac": {} "''name": {} "''netboot'enabled": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''profile": {} "''server'override": {} "''netmask": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} dumpvars: "''name": {} edit: "''dhcp'tag": {} "''gateway": {} "''hostname": {} "''in'place": {} "''interface": {} "''ip": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''mac": {} "''name": {} "''netboot'enabled": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''profile": {} "''server'override": {} "''netmask": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} find: "''dhcp'tag": {} "''gateway": {} "''hostname": {} "''ip": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''mac": {} "''name": {} "''netboot'enabled": {} "''owners": {} "''profile": {} "''server'override": {} "''netmask": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} list: "''name": {} remove: "''name": {} "''no'triggers": {} rename: "''dhcp'tag": {} "''gateway": {} "''hostname": {} "''in'place": {} "''interface": {} "''ip": {} "''kickstart": {} "''kopts": {} "''ksmeta": {} "''mac": {} "''name": {} "''netboot'enabled": {} "''newname": {} "''no'sync": {} "''no'triggers": {} "''owners": {} "''profile": {} "''server'override": {} "''netmask": {} "''virt'bridge": {} "''virt'cpus": {} "''virt'file'size": {} "''virt'path": {} "''virt'ram": {} "''virt'type": {} report: {} validateks: {} cobbler-2.4.1/config/distro_signatures.json000066400000000000000000000547131227367477500210740ustar00rootroot00000000000000{"breeds": { "redhat": { "rhel4": { "signatures":["RedHat/RPMS","CentOS/RPMS"], "version_file":"(redhat|sl|centos)-release-4(AS|WS|ES)[\\.-]+(.*)\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*).rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.ks", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "rhel5": { "signatures":["RedHat","Server","CentOS","Client"], "version_file":"(redhat|sl|centos)-release-5([^\\.][\\w]*)?[\\.-]+(.*)\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*).rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.ks", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "rhel6": { "signatures":["Packages"], "version_file":"(redhat|sl|centos)-release-(?!notes)([\\w]*-)*6[\\.-]+(.*)\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*).rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", 
"isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_end.ks", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "rhel7": { "signatures":["Packages"], "version_file":"(redhat|sl|centos)-release-(?!notes)([\\w]*-)7\\.(.*)\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*).rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_end.ks", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "fedora16": { "signatures":["Packages"], "version_file":"(fedora)-release-16-(.*)\\.noarch\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_end.ks", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "fedora17": { "signatures":["Packages"], "version_file":"(fedora)-release-17-(.*)\\.noarch\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_end.ks", "kernel_options":"repo=$tree", "kernel_options_post":"", "boot_files":[] }, "fedora18": { "signatures":["Packages"], "version_file":"(fedora)-release-18-(.*)\\.noarch\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_end.ks", "kernel_options":"repo=$tree", "kernel_options_post":"", "boot_files":[] }, "fedora19": { "signatures":["Packages"], "version_file":"(fedora)-release-19-(.*)\\.noarch\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_end.ks", "kernel_options":"repo=$tree", "kernel_options_post":"", "boot_files":[] }, "fedora20": { "signatures":["Packages"], "version_file":"(fedora)-release-20-(.*)\\.noarch\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","x86_64","ppc","ppc64"], "supported_repo_breeds":["rsync", "rhn", "yum"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.img", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_end.ks", "kernel_options":"repo=$tree", "kernel_options_post":"", "boot_files":[] } }, "debian": { "squeeze": { "signatures":["dists"], "version_file":"Release", "version_file_regex":"Codename: squeeze", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.gz", 
"isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "wheezy": { "signatures":["dists"], "version_file":"Release", "version_file_regex":"Codename: wheezy", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"vmlinuz(.*)", "initrd_file":"initrd(.*)\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] } }, "ubuntu": { "lucid": { "signatures":["dists", ".disk"], "version_file":"Release|mini-info", "version_file_regex":"Codename: lucid|Ubuntu 10.04", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"linux(.*)", "initrd_file":"initrd(.*)\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "oneiric": { "signatures":["dists", ".disk"], "version_file":"Release|mini-info", "version_file_regex":"Codename: oneiric|Ubuntu 11.10", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"linux(.*)", "initrd_file":"initrd(.*)\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "precise": { "signatures":["dists", ".disk"], "version_file":"Release|mini-info", "version_file_regex":"Codename: precise|Ubuntu 12.04", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"linux(.*)", "initrd_file":"initrd(.*)\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "quantal": { "signatures":["dists", ".disk"], "version_file":"Release|mini-info", "version_file_regex":"Codename: quantal|Ubuntu 12.10", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"linux(.*)", "initrd_file":"initrd(.*)\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "raring": { "signatures":["dists", ".disk"], "version_file":"Release|mini-info", "version_file_regex":"Codename: raring|Ubuntu 13.04", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"linux(.*)", "initrd_file":"initrd(.*)\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "saucy": { "signatures":["dists", ".disk"], "version_file":"Release|mini-info", "version_file_regex":"Codename: saucy|Ubuntu 13.10", "kernel_arch":"linux-headers-(.*)\\.deb", "kernel_arch_regex":null, "supported_arches":["i386","amd64"], "supported_repo_breeds":["apt"], "kernel_file":"linux(.*)", "initrd_file":"initrd(.*)\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample.seed", "kernel_options":"", "kernel_options_post":"", "boot_files":[] } }, "suse": { "opensuse11.2": { 
"signatures":["suse"], "version_file":"openSUSE-release-11.2-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"(linux|vmlinuz(.*))", "initrd_file":"initrd(.*)", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "opensuse11.3": { "signatures":["suse"], "version_file":"openSUSE-release-11.3-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"(linux|vmlinuz(.*))", "initrd_file":"initrd(.*)", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "opensuse11.4": { "signatures":["suse"], "version_file":"openSUSE-release-11.4-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"(linux|vmlinuz(.*))", "initrd_file":"initrd(.*)", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "opensuse12.1": { "signatures":["suse"], "version_file":"openSUSE-release-12.1-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"(linux|vmlinuz(.*))", "initrd_file":"initrd(.*)", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "opensuse12.2": { "signatures":["suse"], "version_file":"openSUSE-release-12.2-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"(linux|vmlinuz(.*))", "initrd_file":"initrd(.*)", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "opensuse12.3": { "signatures":["suse"], "version_file":"openSUSE-release-12.3-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"(linux|vmlinuz(.*))", "initrd_file":"initrd(.*)", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "opensuse13.1": { "signatures":["suse"], "version_file":"openSUSE-release-13.1-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"(linux|vmlinuz(.*))", "initrd_file":"initrd(.*)", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "sles11": { "signatures":["suse"], "version_file":"sles-release-11-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", 
"kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64","ppc64"], "supported_repo_breeds":["yum"], "kernel_file":"linux[64.gz]?", "initrd_file":"initrd[64]?", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "sles11sp1": { "signatures":["suse"], "version_file":"sles-release-11.1-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64","ppc64"], "supported_repo_breeds":["yum"], "kernel_file":"linux[64.gz]?", "initrd_file":"initrd[64]?", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "sles11sp2": { "signatures":["suse"], "version_file":"sles-release-11.2-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64","ppc64"], "supported_repo_breeds":["yum"], "kernel_file":"linux[64.gz]?", "initrd_file":"initrd[64]?", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"install=$tree", "kernel_options_post":"", "boot_files":[] }, "sles11sp3": { "signatures":["suse"], "version_file":"sles-release-11.3-(.*).rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.rpm", "kernel_arch_regex":null, "supported_arches":["i386","i586","x86_64","ppc64"], "supported_repo_breeds":["yum"], "kernel_file":"linux[64.gz]?", "initrd_file":"initrd[64]?", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_autoyast.xml", "kernel_options":"", "kernel_options_post":"", "boot_files":[] } }, "vmware": { "esx4": { "signatures":["VMware/RPMS"], "version_file":"vmware-esx-vmware-release-(.*)\\.rpm", "version_file_regex":null, "kernel_arch":"kernel-(.*)\\.x86_64\\.rpm", "kernel_arch_regex":null, "supported_arches":["x86_64"], "supported_repo_breeds":["yum"], "kernel_file":"vmlinuz", "initrd_file":"initrd\\.img", "isolinux_ok":true, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_esx4.ks", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "esxi4": { "signatures":["imagedd.bz2"], "version_file":"vmkernel\\.gz", "version_file_regex":"^.*ESXi 4.1\\.(\\d)+ \\[Releasebuild-([\\d]+)\\].*$", "kernel_arch":"vmkernel\\.gz", "kernel_arch_regex":"^.*SystemVsiCpuArch.*(X86_64).*$", "supported_arches":["x86_64"], "supported_repo_breeds":[], "kernel_file":"mboot\\.c32", "initrd_file":"vmkboot\\.gz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_esxi4.ks", "kernel_options":"", "kernel_options_post":"", "boot_files":["vmkernel.gz","sys.vgz","cim.vgz","ienviron.vgz","install.vgz"] }, "esxi5": { "signatures":["tboot.b00"], "version_file":"s\\.v00", "version_file_regex":"^.*ESXi 5\\.0\\.(.*)build-([\\d]+).*$", "kernel_arch":"tools\\.t00", "kernel_arch_regex":"^.*(x86_64).*$", "supported_arches":["x86_64"], "supported_repo_breeds":[], "kernel_file":"mboot\\.c32", "initrd_file":"imgpayld\\.tgz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_esxi5.ks", "kernel_options":"", "kernel_options_post":"", "template_files":"/etc/cobbler/pxe/bootcfg_esxi5.template=$local_img_path/cobbler-boot.cfg", "boot_files":["*.*"] }, "esxi51": { "signatures":["tboot.b00"], "version_file":"s\\.v00", "version_file_regex":"^.*ESXi 5\\.1\\.(.*)build-([\\d]+).*$", 
"kernel_arch":"tools\\.t00", "kernel_arch_regex":"^.*(x86_64).*$", "supported_arches":["x86_64"], "supported_repo_breeds":[], "kernel_file":"mboot\\.c32", "initrd_file":"imgpayld\\.tgz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_esxi5.ks", "kernel_options":"", "kernel_options_post":"", "template_files":"/etc/cobbler/pxe/bootcfg_esxi51.template=$local_img_path/cobbler-boot.cfg", "boot_files":["*.*"] }, "esxi55": { "signatures":["tboot.b00"], "version_file":"s\\.v00", "version_file_regex":"^.*ESXi 5\\.5\\.(.*)build-([\\d]+).*$", "kernel_arch":"tools\\.t00", "kernel_arch_regex":"^.*(x86_64).*$", "supported_arches":["x86_64"], "supported_repo_breeds":[], "kernel_file":"mboot\\.c32", "initrd_file":"imgpayld\\.tgz", "isolinux_ok":false, "default_kickstart":"/var/lib/cobbler/kickstarts/sample_esxi5.ks", "kernel_options":"", "kernel_options_post":"", "template_files":"/etc/cobbler/pxe/bootcfg_esxi55.template=$local_img_path/cobbler-boot.cfg", "boot_files":["*.*"] } }, "freebsd": { "8.2": { "signatures":["boot"], "version_file":"8\\.2-RELEASE", "version_file_regex":null, "kernel_arch":"device\\.hints", "kernel_arch_regex":"^.*FreeBSD: src/sys/(.*)/conf/GENERIC\\.hints.*$", "supported_arches":["i386","amd64"], "supported_repo_breeds":[], "kernel_file":"pxeboot(.*)", "initrd_file":"mfsroot\\.gz", "isolinux_ok":false, "default_kickstart":"", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "8.3": { "signatures":["boot"], "version_file":"8\\.3-RELEASE", "version_file_regex":null, "kernel_arch":"device\\.hints", "kernel_arch_regex":"^.*FreeBSD: src/sys/(.*)/conf/GENERIC\\.hints.*$", "supported_arches":["i386","amd64"], "supported_repo_breeds":[], "kernel_file":"pxeboot(.*)", "initrd_file":"mfsroot\\.gz", "isolinux_ok":false, "default_kickstart":"", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "8.4": { "signatures":["boot"], "version_file":"8\\.4-RELEASE", "version_file_regex":null, "kernel_arch":"device\\.hints", "kernel_arch_regex":"^.*FreeBSD: src/sys/(.*)/conf/GENERIC\\.hints.*$", "supported_arches":["i386","amd64"], "supported_repo_breeds":[], "kernel_file":"pxeboot(.*)", "initrd_file":"mfsroot\\.gz", "isolinux_ok":false, "default_kickstart":"", "kernel_options":"", "kernel_options_post":"", "boot_files":[] }, "9.0": { "signatures":["boot"], "version_file":"device\\.hints", "version_file_regex":"^.*FreeBSD: release/9.0(.*)/sys/(.*)/conf/GENERIC.hints.*$", "kernel_arch":"device\\.hints", "kernel_arch_regex":"^.*FreeBSD: release/9.0(.*)/sys/(.*)/conf/GENERIC.hints.*$", "supported_arches":["i386","amd64"], "supported_repo_breeds":[], "kernel_file":"pxeboot(.*)", "initrd_file":"mbr", "isolinux_ok":false, "default_kickstart":"", "kernel_options":"", "kernel_options_post":"", "boot_files":[] } }, "xen": { "xcp16": { "signatures":["packages.main"], "version_file":"^XS-REPOSITORY$", "version_file_regex":"^.*product=\"XCP\" version=\"1\\.6\\.([0-9]+)\".*$", "kernel_arch":"xen\\.gz", "kernel_arch_regex":"^.*(x86_64).*$", "supported_arches":["x86_64"], "supported_repo_breeds":[], "kernel_file":"vmlinuz", "initrd_file":"xen\\.gz", "isolinux_ok":false, "default_kickstart":"", "kernel_options":"dom0_max_vcpus=1-2 dom0_mem=max:752M,752M", "kernel_options_post":"", "boot_files":["install.img"] }, "xenserver620": { "signatures":["packages.xenserver"], "version_file":"^XS-REPOSITORY$", "version_file_regex":"^.*product=\"XenServer\" version=\"6\\.2\\.([0-9]+)\".*$", "kernel_arch":"xen\\.gz", "kernel_arch_regex":"^.*(x86_64).*$", 
"supported_arches":["x86_64"], "supported_repo_breeds":[], "kernel_file":"mboot\\.c32", "initrd_file":"xen\\.gz", "isolinux_ok":false, "default_kickstart":"", "kernel_options":"", "kernel_options_post":"", "boot_files":["install.img"] } }, "unix": { }, "windows": { }, "generic": { "generic26": { "signatures":[], "version_file":"", "version_file_regex":"", "kernel_arch":"", "kernel_arch_regex":"", "supported_arches":["i386","x86_64"], "supported_repo_breeds":[], "kernel_file":"", "initrd_file":"", "isolinux_ok":false, "default_kickstart":"", "kernel_options":"", "kernel_options_post":"", "boot_files":[] } } } } cobbler-2.4.1/config/import_rsync_whitelist000066400000000000000000000035361227367477500211750ustar00rootroot00000000000000#----------------------------------------------- # RHEL/CentOS/SciLinux # Ex: cobbler import \ # --name=fedora-16-remote \ # --path=rsync://mirrors.kernel.org/mirrors/fedora/releases/16/Fedora/x86_64/os/ \ # --available-as=http://mirrors.kernel.org/fedora/releases/16/Fedora/x86_64/os/ #----------------------------------------------- + RedHat/ + RedHat/RPMS/ + RedHat/rpms/ + RedHat/Base/ + Fedora/ + Fedora/RPMS/ + Fedora/rpms/ + CentOS/ + CentOS/RPMS/ + CentOS/rpms/ + CentOS/ + Packages/ + Packages/*/ + Server/ + Client/ + SL/ + images/ + images/pxeboot/ + images/pxeboot/* + isolinux/ + isolinux/* + */*-release* - */kernel-debug*.rpm - */kernel-devel*.rpm - */kernel-doc*.rpm - */kernel-headers*.rpm + */kernel-*.rpm #----------------------------------------------- # Debian/Ubuntu #----------------------------------------------- + pool/ + dists/ + dists/*/ + main/ + main/debian-installer/ + main/installer*/ + main/installer*/current/ + main/installer*/current/images/ # all of these should be under the current/images directory... + netboot/ + netboot/ubuntu-installer/ + netboot/ubuntu-installer/amd64/ + netboot/ubuntu-installer/i386/ + netboot/ubuntu-installer/*/initrd* + netboot/ubuntu-installer/*/linu* #----------------------------------------------- # SuSE # Ex: cobbler import \ # --name=suse-11.4-remote \ # --path=rsync://mirrors.kernel.org/mirrors/opensuse/distribution/11.4/repo/oss/ \ # --available-as=http://mirrors.kernel.org/opensuse/distribution/11.4/repo/oss/ #----------------------------------------------- + boot/ + boot/i386/ + boot/i386/loader/ + boot/i386/loader/initrd + boot/i386/loader/linux + boot/x86_64/ + boot/x86_64/loader/ + boot/x86_64/loader/initrd + boot/x86_64/loader/linux + suse/ #----------------------------------------------- # Exclude everything else #----------------------------------------------- - * cobbler-2.4.1/config/modules.conf000066400000000000000000000060011227367477500167330ustar00rootroot00000000000000# cobbler module configuration file # ================================= # authentication: # what users can log into the WebUI and Read-Write XMLRPC? # choices: # authn_denyall -- no one (default) # authn_configfile -- use /etc/cobbler/users.digest (for basic setups) # authn_passthru -- ask Apache to handle it (used for kerberos) # authn_ldap -- authenticate against LDAP # authn_spacewalk -- ask Spacewalk/Satellite (experimental) # authn_pam -- use PAM facilities # authn_testing -- username/password is always testing/testing (debug) # (user supplied) -- you may write your own module # WARNING: this is a security setting, do not choose an option blindly. 
# for more information: # https://github.com/cobbler/cobbler/wiki/Cobbler-web-interface # https://github.com/cobbler/cobbler/wiki/Security-overview # https://github.com/cobbler/cobbler/wiki/Kerberos # https://github.com/cobbler/cobbler/wiki/Ldap [authentication] module = authn_denyall # authorization: # once a user has been cleared by the WebUI/XMLRPC, what can they do? # choices: # authz_allowall -- full access for all authneticated users (default) # authz_ownership -- use users.conf, but add object ownership semantics # (user supplied) -- you may write your own module # WARNING: this is a security setting, do not choose an option blindly. # If you want to further restrict cobbler with ACLs for various groups, # pick authz_ownership. authz_allowall does not support ACLs. configfile # does but does not support object ownership which is useful as an additional # layer of control. # for more information: # https://github.com/cobbler/cobbler/wiki/Cobbler-web-interface # https://github.com/cobbler/cobbler/wiki/Security-overview # https://github.com/cobbler/cobbler/wiki/Web-authorization [authorization] module = authz_allowall # dns: # chooses the DNS management engine if manage_dns is enabled # in /etc/cobbler/settings, which is off by default. # choices: # manage_bind -- default, uses BIND/named # manage_dnsmasq -- uses dnsmasq, also must select dnsmasq for dhcp below # NOTE: more configuration is still required in /etc/cobbler # for more information: # https://github.com/cobbler/cobbler/wiki/Dns-management [dns] module = manage_bind # dhcp: # chooses the DHCP management engine if manage_dhcp is enabled # in /etc/cobbler/settings, which is off by default. # choices: # manage_isc -- default, uses ISC dhcpd # manage_dnsmasq -- uses dnsmasq, also must select dnsmasq for dns above # NOTE: more configuration is still required in /etc/cobbler # for more information: # https://github.com/cobbler/cobbler/wiki/Dhcp-management [dhcp] module = manage_isc # tftpd: # chooses the TFTP management engine if manage_tftp is enabled # in /etc/cobbler/settings, which is ON by default. # # choices: # manage_in_tftpd -- default, uses the system's tftp server # manage_tftpd_py -- uses cobbler's tftp server # [tftpd] module = manage_in_tftpd #-------------------------------------------------- cobbler-2.4.1/config/mongodb.conf000066400000000000000000000000531227367477500167110ustar00rootroot00000000000000[connection] host = localhost port = 27017 cobbler-2.4.1/config/rsync.exclude000066400000000000000000000005601227367477500171310ustar00rootroot00000000000000### files to exclude from "cobbler import" commands that use ### rsync mirrors. by default ISOs, PPC code, and debug ### RPM's are not transferred. Some users may want to ### re-enable debug RPM's. **/debug/** **/alpha/** **/source/** **/SRPMS/** **/*.iso **/kde-i18n** pool/**/*.dsc pool/**/*.gz ### Avoid deleting the local cache created by createrepo **/cache/** cobbler-2.4.1/config/settings000066400000000000000000000455341227367477500162150ustar00rootroot00000000000000--- # cobbler settings file # restart cobblerd and run "cobbler sync" after making changes # This config file is in YAML 1.0 format # see http://yaml.org # ========================================================== # if 1, cobbler will allow insertions of system records that duplicate # the --dns-name information of other system records. In general, # this is undesirable and should be left 0. 
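# e.g. (illustrative) with this left at 0, a command such as
#   cobbler system add --name=web01-new --dns-name=web01.example.com ...
# is rejected when another system record already claims web01.example.com.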
allow_duplicate_hostnames: 0 # if 1, cobbler will allow insertions of system records that duplicate # the ip address information of other system records. In general, # this is undesirable and should be left 0. allow_duplicate_ips: 0 # if 1, cobbler will allow insertions of system records that duplicate # the mac address information of other system records. In general, # this is undesirable. allow_duplicate_macs: 0 # if 1, cobbler will allow settings to be changed dynamically without # a restart of the cobblerd daemon. You can only change this variable # by manually editing the settings file, and you MUST restart cobblerd # after changing it. allow_dynamic_settings: 0 # by default, installs are *not* set to send installation logs to the cobbler # # # server. With 'anamon_enabled', kickstart templates may use the pre_anamon # # # snippet to allow remote live monitoring of their installations from the # # # cobbler server. Installation logs will be stored under # # # /var/log/cobbler/anamon/. NOTE: This does allow an xmlrpc call to send logs # # # to this directory, without authentication, so enable only if you are # # # ok with this limitation. anamon_enabled: 0 # If using authn_pam in the modules.conf, this can be configured # to change the PAM service authentication will be tested against. # The default value is "login". authn_pam_service: "login" # How long the authentication token is valid for, in seconds auth_token_expiration: 3600 # Email out a report when cobbler finishes installing a system. # enabled: set to 1 to turn this feature on # sender: optional # email: which addresses to email # smtp_server: used to specify another server for an MTA # subject: use the default subject unless overridden build_reporting_enabled: 0 build_reporting_sender: "" build_reporting_email: [ 'root@localhost' ] build_reporting_smtp_server: "localhost" build_reporting_subject: "" build_reporting_ignorelist: [ "" ] # Cheetah-language kickstart templates can import Python modules. # while this is a useful feature, it is not safe to allow them to # import anything they want. This whitelists which modules can be # imported through Cheetah. Users can expand this as needed but # should never allow modules such as subprocess or those that # allow access to the filesystem as Cheetah templates are evaluated # by cobblerd as code. cheetah_import_whitelist: - "random" - "re" - "time" # Default createrepo_flags to use for new repositories. If you have # createrepo >= 0.4.10, consider "-c cache --update -C", which can # dramatically improve your "cobbler reposync" time. "-s sha" # enables working with Fedora repos from F11/F12 from EL-4 or # EL-5 without python-hashlib installed (which is not available # on EL-4) createrepo_flags: "-c cache -s sha" # if no kickstart is specified to profile add, use this template default_kickstart: /var/lib/cobbler/kickstarts/default.ks # configure all installed systems to use these nameservers by default # unless defined differently in the profile. For DHCP configurations # you probably do /not/ want to supply this. default_name_servers: [] # if using the authz_ownership module (see the Wiki), objects # created without specifying an owner are assigned to this # owner and/or group. Can be a comma seperated list. default_ownership: - "admin" # cobbler has various sample kickstart templates stored # in /var/lib/cobbler/kickstarts/. This controls # what install (root) password is set up for those # systems that reference this variable. 
The factory # default is "cobbler" and cobbler check will warn if # this is not changed. # The simplest way to change the password is to run # openssl passwd -1 # and put the output between the "" below. default_password_crypted: "$1$mF86/UHC$WvcIcX2t6crBz2onWxyac." # the default template type to use in the absence of any # other detected template. If you do not specify the template # with '#template=' on the first line of your # templates/snippets, cobbler will assume try to use the # following template engine to parse the templates. # # Current valid values are: cheetah, jinja2 default_template_type: "cheetah" # for libvirt based installs in koan, if no virt bridge # is specified, which bridge do we try? For EL 4/5 hosts # this should be xenbr0, for all versions of Fedora, try # "virbr0". This can be overriden on a per-profile # basis or at the koan command line though this saves # typing to just set it here to the most common option. default_virt_bridge: xenbr0 # use this as the default disk size for virt guests (GB) default_virt_file_size: 5 # use this as the default memory size for virt guests (MB) default_virt_ram: 512 # if koan is invoked without --virt-type and no virt-type # is set on the profile/system, what virtualization type # should be assumed? Values: xenpv, xenfv, qemu, vmware # (NOTE: this does not change what virt_type is chosen by import) default_virt_type: xenpv # enable gPXE booting? Enabling this option will cause cobbler # to copy the undionly.kpxe file to the tftp root directory, # and if a profile/system is configured to boot via gpxe it will # chain load off pxelinux.0. # Default: 0 enable_gpxe: 0 # controls whether cobbler will add each new profile entry to the default # PXE boot menu. This can be over-ridden on a per-profile # basis when adding/editing profiles with --enable-menu=0/1. Users # should ordinarily leave this setting enabled unless they are concerned # with accidental reinstalls from users who select an entry at the PXE # boot menu. Adding a password to the boot menus templates # may also be a good solution to prevent unwanted reinstallations enable_menu: 1 # enable Func-integration? This makes sure each installed machine is set up # to use func out of the box, which is a powerful way to script and control # remote machines. # Func lives at http://fedorahosted.org/func # read more at https://github.com/cobbler/cobbler/wiki/Func-integration # you will need to mirror Fedora/EPEL packages for this feature, so see # https://github.com/cobbler/cobbler/wiki/Manage-yum-repos if you want cobbler # to help you with this func_auto_setup: 0 func_master: overlord.example.org # change this port if Apache is not running plaintext on port # 80. Most people can leave this alone. http_port: 80 # kernel options that should be present in every cobbler installation. # kernel options can also be applied at the distro/profile/system # level. kernel_options: ksdevice: bootif lang: ' ' text: ~ # s390 systems require additional kernel options in addition to the # above defaults kernel_options_s390x: RUNKS: 1 ramdisk_size: 40000 root: /dev/ram0 ro: ~ ip: off vnc: ~ # configuration options if using the authn_ldap module. See the # the Wiki for details. This can be ignored if you are not using # LDAP for WebUI/XMLRPC authentication. 
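# e.g. (illustrative sketch only -- these values are not shipped defaults) an
# Active Directory style setup might look like:
#   ldap_server: "ad01.example.com"
#   ldap_base_dn: "OU=people,DC=example,DC=com"
#   ldap_tls: 1
#   ldap_anonymous_bind: 0
#   ldap_search_bind_dn: 'CN=cobbler,OU=service accounts,DC=example,DC=com'
#   ldap_search_passwd: 'changeme'
#   ldap_search_prefix: 'sAMAccountName='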
ldap_server: "ldap.example.com" ldap_base_dn: "DC=example,DC=com" ldap_port: 389 ldap_tls: 1 ldap_anonymous_bind: 1 ldap_search_bind_dn: '' ldap_search_passwd: '' ldap_search_prefix: 'uid=' ldap_tls_cacertfile: '' ldap_tls_keyfile: '' ldap_tls_certfile: '' # cobbler has a feature that allows for integration with config management # systems such as Puppet. The following parameters work in conjunction with # --mgmt-classes and are described in furhter detail at: # https://github.com/cobbler/cobbler/wiki/Using-cobbler-with-a-configuration-management-system mgmt_classes: [] mgmt_parameters: from_cobbler: 1 # if enabled, this setting ensures that puppet is installed during # machine provision, a client certificate is generated and a # certificate signing request is made with the puppet master server puppet_auto_setup: 0 # when puppet starts on a system after installation it needs to have # its certificate signed by the puppet master server. Enabling the # following feature will ensure that the puppet server signs the # certificate after installation if the puppet master server is # running on the same machine as cobbler. This requires # puppet_auto_setup above to be enabled sign_puppet_certs_automatically: 0 # location of the puppet executable, used for revoking certificates puppetca_path: "/usr/bin/puppet" # when a puppet managed machine is reinstalled it is necessary to # remove the puppet certificate from the puppet master server before a # new certificate is signed (see above). Enabling the following # feature will ensure that the certificate for the machine to be # installed is removed from the puppet master server if the puppet # master server is running on the same machine as cobbler. This # requires puppet_auto_setup above to be enabled remove_old_puppet_certs_automatically: 0 # choose a --server argument when running puppetd/puppet agent during kickstart #puppet_server: 'puppet' # let cobbler know that you're using a newer version of puppet # choose version 3 to use: 'puppet agent'; version 2 uses status quo: 'puppetd' #puppet_version: 2 # choose whether to enable puppet parameterized classes or not. # puppet versions prior to 2.6.5 do not support parameters #puppet_parameterized_classes: 1 # set to 1 to enable Cobbler's DHCP management features. # the choice of DHCP management engine is in /etc/cobbler/modules.conf manage_dhcp: 0 # set to 1 to enable Cobbler's DNS management features. # the choice of DNS mangement engine is in /etc/cobbler/modules.conf manage_dns: 0 # set to path of bind chroot to create bind-chroot compatible bind # configuration files. This should be automatically detected. bind_chroot_path: "" # set to the ip address of the master bind DNS server for creating secondary # bind configuration files bind_master: 127.0.0.1 # set to 1 to enable Cobbler's TFTP management features. # the choice of TFTP mangement engine is in /etc/cobbler/modules.conf manage_tftpd: 1 # set to 1 to enable Cobbler's RSYNC management features. manage_rsync: 0 # if using BIND (named) for DNS management in /etc/cobbler/modules.conf # and manage_dns is enabled (above), this lists which zones are managed # See the Wiki (https://github.com/cobbler/cobbler/wiki/Dns-management) for more info manage_forward_zones: [] manage_reverse_zones: [] # if using cobbler with manage_dhcp, put the IP address # of the cobbler server here so that PXE booting guests can find it # if you do not set this correctly, this will be manifested in TFTP open timeouts. 
next_server: 127.0.0.1 # settings for power management features. optional. # see https://github.com/cobbler/cobbler/wiki/Power-management to learn more # choices (refer to codes.py): # apc_snmp bladecenter bullpap drac ether_wake ilo integrity # ipmilan ipmitool lpar rsa virsh wti power_management_default_type: 'ipmitool' # the commands used by the power management module are sourced # from what directory? power_template_dir: "/etc/cobbler/power" # if this setting is set to 1, cobbler systems that pxe boot # will request at the end of their installation to toggle the # --netboot-enabled record in the cobbler system record. This eliminates # the potential for a PXE boot loop if the system is set to PXE # first in it's BIOS order. Enable this if PXE is first in your BIOS # boot order, otherwise leave this disabled. See the manpage # for --netboot-enabled. pxe_just_once: 0 # the templates used for PXE config generation are sourced # from what directory? pxe_template_dir: "/etc/cobbler/pxe" # Path to where system consoles are consoles: "/var/consoles" # Are you using a Red Hat management platform in addition to Cobbler? # Cobbler can help you register to it. Choose one of the following: # "off" : I'm not using Red Hat Network, Satellite, or Spacewalk # "hosted" : I'm using Red Hat Network # "site" : I'm using Red Hat Satellite Server or Spacewalk # You will also want to read: https://github.com/cobbler/cobbler/wiki/Tips-for-RHN redhat_management_type: "off" # if redhat_management_type is enabled, choose your server # "management.example.org" : For Satellite or Spacewalk # "xmlrpc.rhn.redhat.com" : For Red Hat Network # This setting is also used by the code that supports using Spacewalk/Satellite users/passwords # within Cobbler Web and Cobbler XMLRPC. Using RHN Hosted for this is not supported. # This feature can be used even if redhat_management_type is off, you just have # to have authn_spacewalk selected in modules.conf redhat_management_server: "xmlrpc.rhn.redhat.com" # specify the default Red Hat authorization key to use to register # system. If left blank, no registration will be attempted. Similarly # you can set the --redhat-management-key to blank on any system to # keep it from trying to register. redhat_management_key: "" # if using authn_spacewalk in modules.conf to let cobbler authenticate # against Satellite/Spacewalk's auth system, by default it will not allow per user # access into Cobbler Web and Cobbler XMLRPC. # in order to permit this, the following setting must be enabled HOWEVER # doing so will permit all Spacewalk/Satellite users of certain types to edit all # of cobbler's configuration. # these roles are: config_admin and org_admin # users should turn this on only if they want this behavior and # do not have a cross-multi-org seperation concern. If you have # a single org in your satellite, it's probably safe to turn this # on and then you can use CobblerWeb alongside a Satellite install. redhat_management_permissive: 0 # if set to 1, allows /usr/bin/cobbler-register (part of the koan package) # to be used to remotely add new cobbler system records to cobbler. # this effectively allows for registration of new hardware from system # records. register_new_installs: 0 # Flags to use for yum's reposync. If your version of yum reposync # does not support -l, you may need to remove that option. reposync_flags: "-l -n -d" # when DHCP and DNS management are enabled, cobbler sync can automatically # restart those services to apply changes. 
# The exception for this is
# if using ISC for DHCP, then omapi eliminates the need for a restart.
# omapi, however, is experimental and not recommended for most configurations.
# If DHCP and DNS are going to be managed, but hosted on a box that
# is not on this server, disable restarts here and write some other
# script to ensure that the config files get copied/rsynced to the destination
# box. This can be done by modifying the restart services trigger.
# Note that if manage_dhcp and manage_dns are disabled, the respective
# parameter will have no effect. Most users should not need to change
# this.
restart_dns: 1
restart_dhcp: 1

# install triggers are scripts in /var/lib/cobbler/triggers/install
# that are triggered in kickstart pre and post sections. Any
# executable script in those directories is run. They can be used
# to send email or perform other actions. They are currently
# run as root, so if you do not need this functionality you can
# disable it, though this will also disable "cobbler status" which
# uses a logging trigger to audit install progress.
run_install_triggers: 1

# enables a trigger which version controls all changes to /var/lib/cobbler
# when add, edit, or sync events are performed. This can be used
# to revert to previous database versions, generate RSS feeds, or for
# other auditing or backup purposes. "git" and "hg" are currently supported,
# but git is the recommended SCM for use with this feature.
scm_track_enabled: 0
scm_track_mode: "git"

# this is the address of the cobbler server -- as it is used
# by systems during the install process, it must be the address
# or hostname of the system as those systems can see the server.
# if you have a server that appears differently to different subnets
# (dual homed, etc), you need to read the --server-override section
# of the manpage for how that works.
server: 127.0.0.1

# If set to 1, all commands will be forced to use the localhost address
# instead of using the above value, which can force commands like
# cobbler sync to open a connection to a remote address if one is in the
# configuration and would traceback.
client_use_localhost: 0

# If set to 1, all commands to the API (not directly to the XMLRPC
# server) will go over HTTPS instead of plaintext. Be sure to change
# the http_port setting to the correct value for the web server
client_use_https: 0

# this is a directory of files that cobbler uses to make
# templating easier. See the Wiki for more information. Changing
# this directory should not be required.
snippetsdir: /var/lib/cobbler/snippets

# Normally if a kickstart is specified at a remote location, this
# URL will be passed directly to the kickstarting system, thus bypassing
# the usual snippet templating Cobbler does for local kickstart files. If
# this option is enabled, Cobbler will fetch the file contents internally
# and serve a templated version of the file to the client.
template_remote_kickstarts: 0

# should new profiles for virtual machines default to auto booting with the physical host when the physical host reboots?
# this can be overridden on each profile or system object.
virt_auto_boot: 1

# cobbler's web directory. Don't change this setting -- see the
# Wiki on "relocating your cobbler install" if your /var partition
# is not large enough.
webdir: /var/www/cobbler

# cobbler's public XMLRPC listens on this port. Change this only
# if absolutely needed, as you'll have to start supplying a new
# port option to koan if it is not the default.
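# (the matching koan option is --port, e.g.
#  "koan --server=cobbler.example.org --port=25151 --list=profiles";
#  the server name above is only illustrative)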
xmlrpc_port: 25151 # "cobbler repo add" commands set cobbler up with repository # information that can be used during kickstart and is automatically # set up in the cobbler kickstart templates. By default, these # are only available at install time. To make these repositories # usable on installed systems (since cobbler makes a very convient) # mirror, set this to 1. Most users can safely set this to 1. Users # who have a dual homed cobbler server, or are installing laptops that # will not always have access to the cobbler server may wish to leave # this as 0. In that case, the cobbler mirrored yum repos are still # accessable at http://cobbler.example.org/cblr/repo_mirror and yum # configuration can still be done manually. This is just a shortcut. yum_post_install_mirror: 1 # the default yum priority for all the distros. This is only used # if yum-priorities plugin is used. 1=maximum. Tweak with caution. yum_distro_priority: 1 # Flags to use for yumdownloader. Not all versions may support # --resolve. yumdownloader_flags: "--resolve" # sort and indent JSON output to make it more human-readable serializer_pretty_json: 0 # replication rsync options for distros, kickstarts, snippets set to override default value of "-avzH" replicate_rsync_options: "-avzH" # replication rsync options for repos set to override default value of "-avzH" replicate_repo_rsync_options: "-avzH" cobbler-2.4.1/config/users.conf000066400000000000000000000015201227367477500164250ustar00rootroot00000000000000# Cobbler WebUI / Web Services authorization config file # # NOTICE: # this file is only used when /etc/cobbler/modules.conf # specifies an authorization mode of either: # # (A) authz_configfile # (B) authz_ownership # # For (A), any user in this file, in any group, are allowed # full access to any object in cobbler configuration. # # For (B), users in the "admins" group are allowed full access # to any object, otherwise users can only edit an object if # their username/group is listed as an owner of that object. If a # user is not listed in this file they will have no access. # # cobbler command line example: # # cobbler system edit --name=server1 --owner=dbas,mac,pete,jack # # NOTE: yes, you do need the equal sign after the names. # don't remove that part. It's reserved for future use. [admins] admin = "" cobbler = "" cobbler-2.4.1/config/users.digest000066400000000000000000000000611227367477500167560ustar00rootroot00000000000000cobbler:Cobbler:a2d6bae81669d707b72c0bd9806e01f3 cobbler-2.4.1/contrib/000077500000000000000000000000001227367477500146125ustar00rootroot00000000000000cobbler-2.4.1/contrib/cheetah_macros000066400000000000000000000202201227367477500174760ustar00rootroot00000000000000 ## Comment every line containing the $pattern given ## Ex: preserve a record of an old value before changing it. ## ## $comment_lines('/etc/resolv.conf', 'nameserver') ## echo "nameserver 192.168.0.1" >> /etc/resolv.conf ## #def comment_lines($filename, $pattern, $commentchar='#') perl -npe 's/^(.*${pattern}.*)$/${commentchar}\${1}/' -i '$filename' #end def ## Comments every line which contains only the exact pattern. ## This one works like comment_lines(), except that a line cannot contain any ## additional text. 
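## Ex (illustrative): comment out only lines that read exactly 'PermitRootLogin yes'
##
## $comment_lines_exact('/etc/ssh/sshd_config', 'PermitRootLogin yes')
##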
#def comment_lines_exact($filename, $pattern, $commentchar='#') perl -npe 's/^(${pattern})$/${commentchar}\${1}/' -f '$filename' #end def ## Uncomments every (commented) line containing the pattern ## Patterns should not contain the # ## Ex: enable all the suggested values in the Samba configuration ## (This isn't the greatest example, but it makes a point) ## ## $uncomment_lines('/etc/samba/smb.conf', ';') ## #def uncomment_lines($filename, $pattern, $commentchar='#') perl -npe 's/^[ \t]*${commentchar}(.*${pattern}.*)$/\${1}/' -i '$filename' #end def ## Nullify (by changing to 'true') all instances of a given sh command. This ## does understand lines with multiple commands (separated by ';') and also ## knows to ignore comments. Consider other options before using this ## method. ## Ex: remove 'exit 0' commands from a shell script, so that we can append the ## script and be relatively certain that the new parts will be executed. ## ## $delete_command('etc/cron.daily/some_script.sh', 'exit[ \t]*0') ## echo '# More scipt' >> /etc/cron.daily/some_script.sh ## #def delete_command($filename, $pattern) sed -nr ' h s/^([^#]*)(#?.*)$/\1/ s/((^|;)[ \t]*)${pattern}([ \t]*($|;))/\1true\3/g s/((^|;)[ \t]*)${pattern}([ \t]*($|;))/\1true\3/g x s/^([^#]*)(#?.*)$/\2/ H x s/\n// p ' -i '$filename' #end def ## Replace a configuration parameter value, or add it if it doesn't exist. ## Assumes format is [param_name] [value] ## Ex: Change the maximum password age to 30 days ## ## $set_config_value('/etc/login.defs', 'PASS_MAX_DAYS', '30') ## #def set_config_value($filename, $param_name, $value) if [ -n \"\$(grep -Ee '^[ \t]*${param_name}[ \t]+' '$filename')\" ] then perl -npe 's/^([ \t]*${param_name}[ \t]+)[\x21-\x7E]*([ \t]*(#.*)?)$/\${1}${sedesc($value)}\${2}/' -i '$filename' else echo '$param_name $value' >> '$filename' fi #end def ## Replace a configuration parameter value, or add it if it doesn't exist. ## Assues format is [param_name] [delimiter] [value], where [delimiter] is ## usually '='. ## This works the same way as set_config_value(), except that this version ## is used if a character separates a parameter from its value. #def set_config_value_delim($filename, $param_name, $delim, $value) if [ -n \"\$(grep -Ee '^[ \t]*${param_name}[ \t]*${delim}[ \t]*' '$filename')\" ] then perl -npe 's/^([ \t]*${param_name}[ \t]*${delim}[ \t]*)[\x21-\x7E]*([ \t]*(#.*)?)$/${1}${sedesc($value)}${2}/' -i '$filename' else echo '$param_name$delim$value' >> '$filename' fi #end def ## Copy a file from the server to the client. ## Ex: Copy a template for samba configuration ## ## (once at the top of the kickstart template) ## #set files = $snippetsdir + '/files/' ## (when you need to copy a file) ## $copy_over_file('etc/samba/smb.conf', '/etc/samba/smb.conf') ## ## Additionally, copied files can be templated: ## ---------etc/samba/smb.conf------------- ## ... ## [global] ## server string = $profile_name ## ... ## ---------------------------------------- #def copy_over_file($serverfile, $clientfile) cat << 'EOF' > '$clientfile' #include $files + $serverfile EOF #end def ## Copy a file from the server and append the contents to a file on the ## client. ## This works the same as copy_over_file(), except it appends the file rather ## than replacing the file. #def copy_append_file($serverfile, $clientfile) cat << 'EOF' >> '$clientfile' #include $files + $serverfile EOF #end def ## Convenience function: Copy/append several files at once. This accepts a ## list of tuples. 
The first element indicates whether to overwrite ('w') or ## append ('a'). The second element is the file name on both the server and ## the client (a '/' is prepended on the client side). ## Ex: copy a template for samba and audit configuration ## ## $copy_files([ ## ('w', 'etc/samba/smb.conf'), ## ('w', 'etc/audit.rules'), ## ]) ## #def copy_files($filelist) #for $thisfile in $filelist #if $thisfile[0] == 'a' $copy_append_file($thisfile[1], '/' + $thisfile[1]) #else $copy_over_file($thisfile[1], '/' + $thisfile[1]) #end if #end for #end def ## Append some content to the todo file. NOTE: $todofile must be defined ## before using this (unless you want unexpected results). Be sure to end ## the content with 'EOF' ## Ex: Instruct the admin to set an appropriate nameserver. ## ## (once at the top of the kickstart template) ## #set global $todofile = '/root/kstodo' ## (as needed) ## $TODO() ## Edit /etc/resolv.conf to configure your local nameserver ## EOF ## ## This will prevent inconsistency and accidents. You should avoid using: ## ## echo "Edit /etc/resolv.conf..." >> /root/kstodo ## ## It's easy to forget to use >> to append instead of >, which will clobber all ## previous todo notices. It's also easy to forget the filename, was it kstodo ## or ks-todo? #def TODO() cat << 'EOF' >> '$todofile' #end def ## Set the owner, group, and permissions for several files. Assignment can ## be plain ('p') or recursive. If recursive you can assign everything ('r') ## or just files ('f'). This method takes a list of tuples. The first element ## of each indicates which style. The remaining elements are owner, group, ## and mode respectively. If 'f' is used, an additional element is a find ## pattern that can further restrict assignments (use '*' if no additional ## restrict is desired). ## NOTE: I used the word 'plain' instead of 'single', because wildcards can ## still be used in 'plain' mode. ## Ex: correct the permissions of serveral important files and directories: ## ## $set_permissions([ ## ('p', 'root', 'root', '700', '/root'), ## ('f', 'root', 'root', '600', '/root', '*'), ## ('r', 'root', 'root', '/etc/cron.*'), ## ('p', 'root', 'root', '644', '/etc/samba/smb.conf'), ## ]) ## #def set_permissions($filelist) #for $file in $filelist #if $file[0] == 'p' #if $file[1] != '' and $file[2] != '' chown '$file[1]:$file[2]' '$file[4]' #else #if $file[1] != '' chown '$file[1]' '$file[4]' #end if #if $file[2] != '' chgrp '$file[2]' '$file[4]' #end if #end if #if $file[3] != '' chmod '$file[3]' '$file[4]' #end if #elif $file[0] == 'r' #if $file[1] != '' and $file[2] != '' chown -R '$file[1]:$file[2]' '$file[4]' #else #if $file[1] != '' chown -R '$file[1]' '$file[4]' #end if #if $file[2] != '' chgrp -R '$file[2]' '$file[4]' #end if #end if #if $file[3] != '' chmod -R '$file[3]' '$file[4]' #end if #elif $file[0] == 'f' #if $file[1] != '' and $file[2] != '' find $file[4] -name '$file[5]' -type f -exec chown -R '$file[1]:$file[2]' {} \; #else #if $file[1] != '' find $file[4] -name '$file[5]' -type f -exec chown -R '$file[1]' {} \; #end if #if $file[2] != '' find $file[4] -name '$file[5]' -type f -exec chgrp -R '$file[2]' {} \; #end if #end if #if $file[3] != '' find $file[4] -name '$file[5]' -type f -exec chmod -R '$file[3]' {} \; #end if #end if #end for #end def ## Cheeseball an entire directory. ## This will include (in sequence) all file in a given directory into a ## kickstart template. 
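## Note: files are pulled in via os.listdir(), so the include order is
## arbitrary -- do not rely on any particular ordering between the files.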
## Ex: include a 'misc' directory of templates ## ## $includeall('misc') ## ## Now in cobbler/snippets/misc: ## ---------------avinstall----------------- ## wget http://some.server.com/some-av-package.tar.gz ## tar -xzf some-av-package.tar.gz ## ./some-av-package/install.sh ## rm some-av-package.tar.gz ## rm -rf some-av-package ## ----------------------------------------- ## ---------------fwinstall----------------- ## wget http://some.server.com/fw-linux-installer.sh ## chmod +x fw-linux-installer.sh ## ./fw-linux-installer.sh ## rm fw-linux-installer.sh ## ----------------------------------------- ## #def includeall($dir) #import os #for $file in $os.listdir($snippetsdir + '/' + $dir) #include $snippetsdir + '/' + $dir + '/' + $file #end for #end def cobbler-2.4.1/debian/000077500000000000000000000000001227367477500143745ustar00rootroot00000000000000cobbler-2.4.1/debian/README.Debian000066400000000000000000000002611227367477500164340ustar00rootroot00000000000000cobbler for Debian ------------------ -- root Wed, 11 Feb 2009 22:50:37 +0100 cobbler-2.4.1/debian/changelog000066400000000000000000000004151227367477500162460ustar00rootroot00000000000000cobbler (1.5.0-1) unstable; urgency=low * Release bump -- Jasper Capel Tue, 24 Feb 2009 19:25:21 +0100 cobbler (1.4.1-1) unstable; urgency=low * Initial release -- Jasper Capel Wed, 11 Feb 2009 22:50:37 +0100 cobbler-2.4.1/debian/cobbler.default000066400000000000000000000003531227367477500173530ustar00rootroot00000000000000# Defaults for cobbler initscript # sourced by /etc/init.d/cobbler # installed at /etc/default/cobbler by the maintainer scripts # # This is a POSIX shell fragment # # Additional options that are passed to the Daemon. DAEMON_OPTS="" cobbler-2.4.1/debian/compat000066400000000000000000000000021227367477500155720ustar00rootroot000000000000007 cobbler-2.4.1/debian/control000066400000000000000000000020061227367477500157750ustar00rootroot00000000000000Source: cobbler Section: unknown Priority: optional Maintainer: Jasper Poppe Build-Depends: debhelper (>= 7), python-cheetah, python-yaml, git-core, python Standards-Version: 3.8.0 Homepage: https://fedoraproject.org/cobbler Package: cobbler Architecture: all Depends: ${python:Depends}, python, apache2, libapache2-mod-python, python-support, python-yaml, python-netaddr, python-cheetah, syslinux | syslinux-common Suggests: rsync, tftpd-hpa, mkisofs, tftpd-hpa, dhcp3-server, createrepo Description: Install server Cobbler is a network install server. Cobbler supports PXE, virtualized installs, and reinstalling existing Linux machines. The last two modes use a helper tool, 'koan', that integrates with cobbler. Cobbler's advanced features include importing distributions from DVDs and rsync mirrors, kickstart templating, integrated yum mirroring, and built-in DHCP/DNS Management. Cobbler has a Python and XMLRPC API for integration with other applications. There is also a web interface. cobbler-2.4.1/debian/copyright000066400000000000000000000007011227367477500163250ustar00rootroot00000000000000This package was debianized by Jasper Capel on Wed, 11 Feb 2009 22:50:37 +0100. It was downloaded from http://www.cobblerd.org/ Upstream Author(s): Michael DeHaan Copyright: Copyright (C) 2009 Red Hat, Inc and Others License: GPLv2 The Debian packaging is (C) 2009, Jasper Capel and is licensed under the GPL, see `/usr/share/common-licenses/GPL'. 
cobbler-2.4.1/debian/dirs000066400000000000000000000000211227367477500152510ustar00rootroot00000000000000usr/bin usr/sbin cobbler-2.4.1/debian/docs000066400000000000000000000000071227367477500152440ustar00rootroot00000000000000README cobbler-2.4.1/debian/init.d.ex000066400000000000000000000077701227367477500161320ustar00rootroot00000000000000#! /bin/sh # # skeleton example file to build /etc/init.d/ scripts. # This file should be used to construct scripts for /etc/init.d. # # Written by Miquel van Smoorenburg . # Modified for Debian # by Ian Murdock . # Further changes by Javier Fernandez-Sanguino # # Version: @(#)skeleton 1.9 26-Feb-2001 miquels@cistron.nl # PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin DAEMON=/usr/sbin/cobbler NAME=cobbler DESC=cobbler test -x $DAEMON || exit 0 LOGDIR=/var/log/cobbler PIDFILE=/var/run/$NAME.pid DODTIME=1 # Time to wait for the server to die, in seconds # If this value is set too low you might not # let some servers to die gracefully and # 'restart' will not work # Include cobbler defaults if available if [ -f /etc/default/cobbler ] ; then . /etc/default/cobbler fi set -e running_pid() { # Check if a given process pid's cmdline matches a given name pid=$1 name=$2 [ -z "$pid" ] && return 1 [ ! -d /proc/$pid ] && return 1 cmd=`cat /proc/$pid/cmdline | tr "\000" "\n"|head -n 1 |cut -d : -f 1` # Is this the expected child? [ "$cmd" != "$name" ] && return 1 return 0 } running() { # Check if the process is running looking at /proc # (works for all users) # No pidfile, probably no daemon present [ ! -f "$PIDFILE" ] && return 1 # Obtain the pid and check it against the binary name pid=`cat $PIDFILE` running_pid $pid $DAEMON || return 1 return 0 } force_stop() { # Forcefully kill the process [ ! -f "$PIDFILE" ] && return if running ; then kill -15 $pid # Is it really dead? [ -n "$DODTIME" ] && sleep "$DODTIME"s if running ; then kill -9 $pid [ -n "$DODTIME" ] && sleep "$DODTIME"s if running ; then echo "Cannot kill $LABEL (pid=$pid)!" exit 1 fi fi fi rm -f $PIDFILE return 0 } case "$1" in start) echo -n "Starting $DESC: " start-stop-daemon --start --quiet --pidfile $PIDFILE \ --exec $DAEMON -- $DAEMON_OPTS if running ; then echo "$NAME." else echo " ERROR." fi ;; stop) echo -n "Stopping $DESC: " start-stop-daemon --stop --quiet --pidfile $PIDFILE \ --exec $DAEMON echo "$NAME." ;; force-stop) echo -n "Forcefully stopping $DESC: " force_stop if ! running ; then echo "$NAME." else echo " ERROR." fi ;; #reload) # # If the daemon can reload its config files on the fly # for example by sending it SIGHUP, do it here. # # If the daemon responds to changes in its config file # directly anyway, make this a do-nothing entry. # # echo "Reloading $DESC configuration files." # start-stop-daemon --stop --signal 1 --quiet --pidfile \ # /var/run/$NAME.pid --exec $DAEMON #;; force-reload) # # If the "reload" option is implemented, move the "force-reload" # option to the "reload" entry above. If not, "force-reload" is # just the same as "restart" except that it does nothing if the # daemon isn't already running. # check wether $DAEMON is running. If so, restart start-stop-daemon --stop --test --quiet --pidfile \ /var/run/$NAME.pid --exec $DAEMON \ && $0 restart \ || exit 0 ;; restart) echo -n "Restarting $DESC: " start-stop-daemon --stop --quiet --pidfile \ /var/run/$NAME.pid --exec $DAEMON [ -n "$DODTIME" ] && sleep $DODTIME start-stop-daemon --start --quiet --pidfile \ /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS echo "$NAME." 
;; status) echo -n "$LABEL is " if running ; then echo "running" else echo " not running." exit 1 fi ;; *) N=/etc/init.d/$NAME # echo "Usage: $N {start|stop|restart|reload|force-reload}" >&2 echo "Usage: $N {start|stop|restart|force-reload|status|force-stop}" >&2 exit 1 ;; esac exit 0 cobbler-2.4.1/debian/init.d.lsb000066400000000000000000000220431227367477500162640ustar00rootroot00000000000000#!/bin/sh # # Example init.d script with LSB support. # # Please read this init.d carefully and modify the sections to # adjust it to the program you want to run. # # Copyright (c) 2007 Javier Fernandez-Sanguino # # This is free software; you may redistribute it and/or modify # it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2, # or (at your option) any later version. # # This is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License with # the Debian operating system, in /usr/share/common-licenses/GPL; if # not, write to the Free Software Foundation, Inc., 59 Temple Place, # Suite 330, Boston, MA 02111-1307 USA # ### BEGIN INIT INFO # Provides: cobbler # Required-Start: $network $local_fs # Required-Stop: # Should-Start: $named # Should-Stop: # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: # Description: # <...> # <...> ### END INIT INFO PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin DAEMON=/usr/bin/cobblerd # Introduce the server's location here NAME=#PACKAGE # Introduce the short server's name here DESC=#PACKAGE # Introduce a short description here LOGDIR=/var/log/cobbler # Log directory to use PIDFILE=/var/run/$NAME.pid test -x $DAEMON || exit 0 . /lib/lsb/init-functions # Default options, these can be overriden by the information # at /etc/default/$NAME DAEMON_OPTS="" # Additional options given to the server DIETIME=10 # Time to wait for the server to die, in seconds # If this value is set too low you might not # let some servers to die gracefully and # 'restart' will not work #STARTTIME=2 # Time to wait for the server to start, in seconds # If this value is set each time the server is # started (on start or restart) the script will # stall to try to determine if it is running # If it is not set and the server takes time # to setup a pid file the log message might # be a false positive (says it did not start # when it actually did) LOGFILE=$LOGDIR/$NAME.log # Server logfile #DAEMONUSER=cobbler # Users to run the daemons as. If this value # is set start-stop-daemon will chuid the server # Include defaults if available if [ -f /etc/default/$NAME ] ; then . /etc/default/$NAME fi # Use this if you want the user to explicitly set 'RUN' in # /etc/default/ #if [ "x$RUN" != "xyes" ] ; then # log_failure_msg "$NAME disabled, please adjust the configuration to your needs " # log_failure_msg "and then set RUN to 'yes' in /etc/default/$NAME to enable it." # exit 1 #fi # Check that the user exists (if we set a user) # Does the user exist? if [ -n "$DAEMONUSER" ] ; then if getent passwd | grep -q "^$DAEMONUSER:"; then # Obtain the uid and gid DAEMONUID=`getent passwd |grep "^$DAEMONUSER:" | awk -F : '{print $3}'` DAEMONGID=`getent passwd |grep "^$DAEMONUSER:" | awk -F : '{print $4}'` else log_failure_msg "The user $DAEMONUSER, required to run $NAME does not exist." 
exit 1 fi fi set -e running_pid() { # Check if a given process pid's cmdline matches a given name pid=$1 name=$2 [ -z "$pid" ] && return 1 [ ! -d /proc/$pid ] && return 1 cmd=`cat /proc/$pid/cmdline | tr "\000" "\n"|head -n 1 |cut -d : -f 1` # Is this the expected server [ "$cmd" != "$name" ] && return 1 return 0 } running() { # Check if the process is running looking at /proc # (works for all users) # No pidfile, probably no daemon present [ ! -f "$PIDFILE" ] && return 1 pid=`cat $PIDFILE` running_pid $pid $DAEMON || return 1 return 0 } start_server() { # Start the process using the wrapper if [ -z "$DAEMONUSER" ] ; then start_daemon -p $PIDFILE $DAEMON -- $DAEMON_OPTS errcode=$? else # if we are using a daemonuser then change the user id start-stop-daemon --start --quiet --pidfile $PIDFILE \ --chuid $DAEMONUSER \ --exec $DAEMON -- $DAEMON_OPTS errcode=$? fi return $errcode } stop_server() { # Stop the process using the wrapper if [ -z "$DAEMONUSER" ] ; then killproc -p $PIDFILE $DAEMON errcode=$? else # if we are using a daemonuser then look for process that match start-stop-daemon --stop --quiet --pidfile $PIDFILE \ --user $DAEMONUSER \ --exec $DAEMON errcode=$? fi return $errcode } reload_server() { [ ! -f "$PIDFILE" ] && return 1 pid=pidofproc $PIDFILE # This is the daemon's pid # Send a SIGHUP kill -1 $pid return $? } force_stop() { # Force the process to die killing it manually [ ! -e "$PIDFILE" ] && return if running ; then kill -15 $pid # Is it really dead? sleep "$DIETIME"s if running ; then kill -9 $pid sleep "$DIETIME"s if running ; then echo "Cannot kill $NAME (pid=$pid)!" exit 1 fi fi fi rm -f $PIDFILE } case "$1" in start) log_daemon_msg "Starting $DESC " "$NAME" # Check if it's running first if running ; then log_progress_msg "apparently already running" log_end_msg 0 exit 0 fi if start_server ; then # NOTE: Some servers might die some time after they start, # this code will detect this issue if STARTTIME is set # to a reasonable value [ -n "$STARTTIME" ] && sleep $STARTTIME # Wait some time if running ; then # It's ok, the server started and is running log_end_msg 0 else # It is not running after we did start log_end_msg 1 fi else # Either we could not start it log_end_msg 1 fi ;; stop) log_daemon_msg "Stopping $DESC" "$NAME" if running ; then # Only stop the server if we see it running errcode=0 stop_server || errcode=$? log_end_msg $errcode else # If it's not running don't do anything log_progress_msg "apparently not running" log_end_msg 0 exit 0 fi ;; force-stop) # First try to stop gracefully the program $0 stop if running; then # If it's still running try to kill it more forcefully log_daemon_msg "Stopping (force) $DESC" "$NAME" errcode=0 force_stop || errcode=$? log_end_msg $errcode fi ;; restart|force-reload) log_daemon_msg "Restarting $DESC" "$NAME" errcode=0 stop_server || errcode=$? # Wait some sensible amount, some server need this [ -n "$DIETIME" ] && sleep $DIETIME start_server || errcode=$? [ -n "$STARTTIME" ] && sleep $STARTTIME running || errcode=$? log_end_msg $errcode ;; status) log_daemon_msg "Checking status of $DESC" "$NAME" if running ; then log_progress_msg "running" log_end_msg 0 else log_progress_msg "apparently not running" log_end_msg 1 exit 1 fi ;; # Use this if the daemon cannot reload reload) log_warning_msg "Reloading $NAME daemon: not implemented, as the daemon" log_warning_msg "cannot re-read the config file (use restart)." 
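        # cobblerd only reads /etc/cobbler/settings at startup, so a full
        # restart is required for configuration changes to take effect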
;; # And this if it cann #reload) # # If the daemon can reload its config files on the fly # for example by sending it SIGHUP, do it here. # # If the daemon responds to changes in its config file # directly anyway, make this a do-nothing entry. # # log_daemon_msg "Reloading $DESC configuration files" "$NAME" # if running ; then # reload_server # if ! running ; then # Process died after we tried to reload # log_progress_msg "died on reload" # log_end_msg 1 # exit 1 # fi # else # log_progress_msg "server is not running" # log_end_msg 1 # exit 1 # fi #;; *) N=/etc/init.d/$NAME echo "Usage: $N {start|stop|force-stop|restart|force-reload|status}" >&2 exit 1 ;; esac exit 0 cobbler-2.4.1/debian/rules000077500000000000000000000035331227367477500154600ustar00rootroot00000000000000#!/usr/bin/make -f # -*- makefile -*- # Sample debian/rules that uses debhelper. # This file was originally written by Joey Hess and Craig Small. # As a special exception, when this file is copied by dh-make into a # dh-make output file, you may use that output file without restriction. # This special exception was added by Craig Small in version 0.37 of dh-make. # Uncomment this to turn on verbose mode. #export DH_VERBOSE=1 configure: configure-stamp configure-stamp: dh_testdir # Add here commands to configure the package. touch configure-stamp build: build-stamp build-stamp: configure-stamp dh_testdir # Add here commands to compile the package. $(MAKE) #docbook-to-man debian/cobbler.sgml > cobbler.1 touch $@ clean: dh_testdir dh_testroot rm -f build-stamp configure-stamp # Add here commands to clean up after the build process. $(MAKE) clean dh_clean install: build dh_testdir dh_testroot dh_clean -k dh_installdirs # Add here commands to install the package into debian/cobbler. $(MAKE) debinstall DESTDIR=$(CURDIR)/debian/cobbler # Build architecture-independent files here. binary-indep: build install # We have nothing to do by default. dh_testdir dh_testroot dh_installchangelogs CHANGELOG dh_installdocs dh_installexamples # dh_install # dh_installmenu # dh_installdebconf # dh_installlogrotate # dh_installemacsen # dh_installpam # dh_installmime # dh_pysupport FIXME should make this working instead of using dh_python dh_python dh_installinit # dh_installcron # dh_installinfo dh_installman dh_link dh_strip dh_compress dh_fixperms # dh_perl # dh_makeshlibs dh_installdeb dh_shlibdeps dh_gencontrol dh_md5sums dh_builddeb # Build architecture-dependent files here. binary-arch: build install binary: binary-indep binary-arch .PHONY: build clean binary-indep binary-arch binary install configure cobbler-2.4.1/debian/watch.ex000066400000000000000000000014151227367477500160410ustar00rootroot00000000000000# Example watch control file for uscan # Rename this file to "watch" and then you can run the "uscan" command # to check for upstream updates and more. 
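# (for example, "uscan --verbose --report" run from the unpacked source tree
#  only reports what it finds without downloading anything)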
# See uscan(1) for format # Compulsory line, this is a version 3 file version=3 # Uncomment to examine a Webpage # #http://www.example.com/downloads.php cobbler-(.*)\.tar\.gz # Uncomment to examine a Webserver directory #http://www.example.com/pub/cobbler-(.*)\.tar\.gz # Uncommment to examine a FTP server #ftp://ftp.example.com/pub/cobbler-(.*)\.tar\.gz debian uupdate # Uncomment to find new files on sourceforge, for devscripts >= 2.9 # http://sf.net/cobbler/cobbler-(.*)\.tar\.gz # Uncomment to find new files on GooglePages # http://example.googlepages.com/foo.html cobbler-(.*)\.tar\.gz cobbler-2.4.1/docs/000077500000000000000000000000001227367477500141025ustar00rootroot00000000000000cobbler-2.4.1/docs/cobbler-register.pod000066400000000000000000000040141227367477500200370ustar00rootroot00000000000000=head1 NAME cobbler-register - create a cobbler system record =head1 SYNOPSIS cobbler-register [--server=] --profile= [--fqdn=] =head1 DESCRIPTION Running cobbler-register on a system will create a cobbler system record for that system on a remote cobbler server. No changes will be made on the system itself. =head1 DETAILS When installing new machines into a cobbler managed datacenter/lab, it helps to not have to manually enter in the network information for those systems. Using cobbler register either from a kickstart or a live environment (or even SSH) can help seed the cobbler database. Network information is discovered automatically for all physical interfaces. cobbler-register will attempt to discover the hostname, though if 'localhost.localdomain' is found, it will have to use some other data for the cobbler system record. This is probably not what you want, so specify --fqdn in this instance to override that registration value. For this to work, the "register_new_installs" setting must be enabled on the remote cobbler server. When the remote system record is created, for safety reasons, it will be set in Cobbler to be "netboot disabled". Use "cobbler system edit --name=foo --netboot-enabled=1" to set the machine to reinstall, where "foo" is the name of the new system record. =head1 ENVIRONMENT VARIABLES cobbler-register respects the COBBLER_SERVER variable to specify the cobbler server to use. This is a convenient way to avoid using the --server option. This variable is set automatically on systems installed via cobbler, assuming standard kickstart templates are used. If you need to change this on an installed system, edit /etc/profile.d/cobbler.{csh,sh}. =head1 ADDITIONAL Reading the cobbler manpage, the koan manpage, and www.cobblerd.org is highly recommended. The mailing list is cobbler@lists.fedorahosted.org. Subscribe at https://fedorahosted.org/mailman/listinfo/cobbler =head1 AUTHOR Michael DeHaan cobbler-2.4.1/docs/cobbler.pod000066400000000000000000001673061227367477500162330ustar00rootroot00000000000000=head1 NAME cobbler - a provisioning and update server cobbler is a provisioning (installation) and update server. It supports deployments via PXE (network booting), virtualization (Xen, QEMU/KVM, or VMware), and re-installs of existing Linux systems. The latter two features are enabled by usage of 'koan' on the remote system. Update server features include yum mirroring and integration of those mirrors with kickstart. Cobbler has a command line interface, Web UI, and extensive Python and XMLRPC APIs for integration with external scripts and applications. 
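As a small, read-only illustration of the XMLRPC API (the hostname below is
an example, and only a couple of the available calls are shown; see the online
API documentation for the full method list):

    # Python 2, as used by cobbler 2.x itself
    import xmlrpclib
    server = xmlrpclib.Server("http://cobbler.example.org/cobbler_api")
    print server.version()
    for distro in server.get_distros():
        print distro["name"]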
=head1 SYNOPSIS cobbler command [subcommand] [--arg1=value1] [--arg2=value2] =head1 DESCRIPTION Cobbler manages provisioning using a tiered concept of Distributions, Profiles, Systems, and (optionally) Images and Repositories. Distributions contain information about what kernel and initrd are used, plus metadata (required kernel parameters, etc). Profiles associate a Distribution with a kickstart file and optionally customize the metadata further. Systems associate a MAC, IP, and other networking details with a profile and optionally customize the metadata further. Repositories contain yum mirror information. Using cobbler to mirror repositories is an optional feature, though provisioning and package management share a lot in common. Images are a catch-all concept for things that do not play nicely in the "distribution" category. Most users will not need these records initially and these are described later in the document. The main advantage of cobbler is that it glues together many disjoint technologies and concepts and abstracts the user from the need to understand them. It allows the systems administrator to concentrate on what he needs to do, and not how it is done. This manpage will focus on the cobbler command line tool for use in configuring cobbler. There is also mention of the Cobbler WebUI which is usable for day-to-day operation of Cobbler once installed/configured. Docs on the API and XMLRPC components are available online at http://www.cobblerd.org. Most users will be interested in the Web UI and should set it up, though the command line is needed for initial configuration -- in particular "cobbler check" and "cobbler import", as well as the repo mirroring features. All of these are described later in the documentation. =head1 SEE ALSO For help in building kickstarts, try using the "system-config-kickstart" tool, or install a new system and look at the /root/anaconda-ks.cfg file left over from the installer. General kickstart questions can also be asked at kickstart-list@redhat.com. Cobbler ships some kickstart templates in /etc/cobbler that may also prove helpful. Also see the aforementioned webpages for additional documentation, user contributed tips, and so on. =head1 COBBLER USAGE =head2 SETUP After installing, run "cobbler check" to verify that cobbler's ecosystem is configured correctly. Cobbler check will direct you on how to modify it's config files using a text editor. Any problems detected should be corrected, with the potential exception of DHCP related warnings where you will need to use your judgement as to whether they apply to your environment. Run "cobbler sync" after making any changes to the configuration files to ensure those changes are applied to the environment. It is especially important that the server name field be accurate in /etc/cobbler/settings, without this field being correct, kickstart trees will not be found, and automated installations will fail. For PXE, if DHCP is to be run from the cobbler server, the dhcp configuration file should be changed as suggested by "cobbler check". If DHCP is not run locally, the "next-server" field on the DHCP server should at minimum point to the cobbler server's IP and the filename should be set to "pxelinux.0". Alternatively, cobbler can also generate your dhcp configuration file if you want to run dhcp locally -- this is covered in a later section. 
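As a rough sketch, if DHCP stays on another server, the relevant fragment of
that server's ISC dhcpd.conf might look like the following (all addresses are
illustrative, assuming the cobbler server is 192.168.1.1):

    subnet 192.168.1.0 netmask 255.255.255.0 {
        # ... pool, router and DNS options ...
        next-server 192.168.1.1;   # IP of the cobbler server
        filename "pxelinux.0";
    }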
If you don't already have a DHCP setup managed by some other tool, allowing cobbler to manage your DHCP environment will prove to be useful as it can manage DHCP reservations and other data. If you already have a DHCP setup, moving an existing setup to be managed from within cobbler is relatively painless -- though usage of the DHCP management feature is entirely optional. If you are not interested in network booting via PXE and just want to use koan to install virtual systems or replace existing ones, DHCP configuration can be totally ignored. Koan also has a live CD (see koan's manpage) capability that can be used to simulate PXE environments. =head2 DISTRIBUTIONS This first step towards configuring what you want to install is to add a distribution record to cobbler's configuration. If there is an rsync mirror, DVD, NFS, or filesystem tree available that you would rather import instead, skip down to the documentation about the "import" command. It's really a lot easier to follow the import workflow -- it only requires waiting for the mirror content to be copied and/or scanned. Imported mirrors also save time during install since they don't have to hit external install sources. If you want to be explicit with distribution definition, however, here's how it works: B =over =item name a string identifying the distribution, this should be something like "rhel4". =item kernel An absolute filesystem path to a kernel image =item initrd An absolute filesystem path to a initrd image =item kopts Sets kernel command-line arguments that the distro, and profiles/systems depending on it, will use. To remove a kernel argument that may be added by a higher cobbler object (or in the global settings), you can prefix it with a "!". Example: --kopts="foo=bar baz=3 asdf !gulp" This example passes the arguments "foo=bar baz=3 asdf" but will make sure "gulp" is not passed even if it was requested at a level higher up in the cobbler configuration. =item kopts-post This is just like --kopts, though it governs kernel options on the installed OS, as opposed to kernel options fed to the installer. The syntax is exactly the same. This requires some special snippets to be found in your kickstart template in order for this to work. Kickstart templating is described later on in this document. Example: "noapic" =item arch Sets the architecture for the PXE bootloader and also controls how koan's --replace-self option will operate. The default setting ('standard') will use pxelinux. Set to 'ia64' to use elilo. 'ppc' and 'ppc64' use yaboot. 's390x' is not PXEable, but koan supports it for reinstalls. 'x86' and 'x86_64' effectively do the same thing as standard. If you perform a cobbler import, the arch field will be auto-assigned. =item ksmeta This is an advanced feature that sets kickstart variables to substitute, thus enabling kickstart files to be treated as templates. Templates are powered using Cheetah and are described further along in this manpage as well as on the Cobbler Wiki. Example: --ksmeta="foo=bar baz=3 asdf" See the section on "Kickstart Templating" for further information. =item breed Controls how various physical and virtual parameters, including kernel arguments for automatic installation, are to be treated. Defaults to "redhat", which is a suitable value for Fedora and CentOS as well. It means anything redhat based. There is limited experimental support for specifying "debian", "ubuntu", or "suse", which treats the kickstart file as a different format and changes the kernel arguments appropriately. 
Support for other types of distributions is possible in the future. See the Wiki for the latest information about support for these distributions. The file used for the answer file, regardless of the breed setting, is the value used for --kickstart when creating the profile. In other words, if another distro calls their answer file something other than a "kickstart", the kickstart parameter still governs where that answer file is. =item os-version Generally this field can be ignored. It is intended to alter some hardware setup for virtualized instances when provisioning guests with koan. The valid options for --os-version vary depending on what is specified for --breed. If you specify an invalid option, the error message will contain a list of valid os versions that can be used. If you do not know the os version or it does not appear in the list, omitting this argument or using "other" should be perfectly fine. Largely this is needed to support older distributions in virtualized settings, such as "rhel2.1", one of the OS choices if the breed is set to "redhat". If you do not encounter any problems with virtualized instances, this option can be safely ignored. =item owners Users with small sites and a limited number of admins can probably ignore this option. All cobbler objects (distros, profiles, systems, and repos) can take a --owners parameter to specify what cobbler users can edit particular objects. This only applies to the Cobbler WebUI and XMLRPC interface, not the "cobbler" command line tool run from the shell. Furthermore, this is only respected by the "authz_ownership" module which must be enabled in /etc/cobbler/modules.conf. The value for --owners is a space separated list of users and groups as specified in /etc/cobbler/users.conf. For more information see the users.conf file as well as the Cobbler Wiki. In the default Cobbler configuration, this value is completely ignored, as is users.conf. =item template-files This feature allows cobbler to be used as a configuration management system. The argument is a space delimited string of key=value pairs. Each key is the path to a template file, each value is the path to install the file on the system. This is described in further detail on the Cobbler Wiki and is implemented using special code in the post install. Koan also can retrieve these files from a cobbler server on demand, effectively allowing cobbler to function as a lightweight templated configuration management system. =item redhat-management-key If you're using Red Hat Network, Red Hat Satellite Server, or Spacewalk, you can store your authentication keys here and Cobbler can add the neccessary authentication code to your kickstart where the snippet named "redhat_register" is included. Read more about setup in /etc/cobbler/settings. =back =head2 PROFILES A profile associates a distribution to additional specialized options, such as a kickstart automation file. Profiles are the core unit of provisioning and at least one profile must exist for every distribution to be provisioned. A profile might represent, for instance, a web server or desktop configuration. In this way, profiles define a role to be performed. B Arguments are the same as listed for distributions, save for the removal of "arch" and "breed", and with the additions listed below: =over =item name A descriptive name. This could be something like "rhel5webservers" or "f9desktops". =item distro The name of a previously defined cobbler distribution. This value is required. 
=item kickstart Local filesystem path to a kickstart file. http:// URLs (even CGI's) are also accepted, but a local file path is recommended, so that the kickstart templating engine can be taken advantage of. If this parameter is not provided, the kickstart file will default to /var/lib/cobbler/kickstarts/default.ks. This file is initially blank, meaning default kickstarts are not automated "out of the box". Admins can change the default.ks if they desire. When using kickstart files, they can be placed anywhere on the filesystem, but the recommended path is /var/lib/cobbler/kickstarts. If using the webapp to create new kickstarts, this is where the web application will put them. =item nameservers If your nameservers are not provided by DHCP, you can specify a space separated list of addresses here to configure each of the installed nodes to use them (provided the kickstarts used are installed on a per-system basis). Users with DHCP setups should not need to use this option. This is available to set in profiles to avoid having to set it repeatedly for each system record. =item virt-file-size (Virt-only) How large the disk image should be in Gigabytes. The default is "5". This can be a comma separated list (ex: "5,6,7") to allow for multiple disks of different sizes depending on what is given to --virt-path. This should be input as a integer or decimal value without units. =item virt-ram (Virt-only) How many megabytes of RAM to consume. The default is 512 MB. This should be input as an integer without units. =item virt-type (Virt-only) Koan can install images using either Xen paravirt ("xenpv") or QEMU/KVM ("qemu"). Choose one or the other strings to specify, or values will default to attempting to find a compatible installation type on the client system ("auto"). See the "koan" manpage for more documentation. The default virt-type can be configured in the cobbler settings file such that this parameter does not have to be provided. Other virtualization types are supported, for information on those options (such as VMware), see the Cobbler Wiki. =item virt-cpus (Virt-only) How many virtual CPUs should koan give the virtual machine? The default is 1. This is an integer. =item virt-path (Virt-only) Where to store the virtual image on the host system. Except for advanced cases, this parameter can usually be omitted. For disk images, the value is usually an absolute path to an existing directory with an optional file name component. There is support for specifying partitions "/dev/sda4" or volume groups "VolGroup00", etc. For multiple disks, separate the values with commas such as "VolGroup00,VolGroup00" or "/dev/sda4,/dev/sda5". Both those examples would create two disks for the VM. =item virt-bridge (Virt-only) This specifies the default bridge to use for all systems defined under this profile. If not specified, it will assume the default value in the cobbler settings file, which as shipped in the RPM is 'xenbr0'. If using KVM, this is most likely not correct. You may want to override this setting in the system object. Bridge settings are important as they define how outside networking will reach the guest. For more information on bridge setup, see the Cobbler Wiki, where there is a section describing koan usage. =item repos This is a space delimited list of all the repos (created with "cobbler repo add" and updated with "cobbler reposync") that this profile can make use of during kickstart installation. 
For example, an example might be --repos="fc6i386updates fc6i386extras" if the profile wants to access these two mirrors that are already mirrored on the cobbler server. Repo management is described in greater depth later in the manpage. =item parent This is an advanced feature. Profiles may inherit from other profiles in lieu of specifying --distro. Inherited profiles will override any settings specified in their parent, with the exception of --ksmeta (templating) and --kopts (kernel options), which will be blended together. Example: If profile A has --kopts="x=7 y=2", B inherits from A, and B has --kopts="x=9 z=2", the actual kernel options that will be used for B are "x=9 y=2 z=2". Example: If profile B has --virt-ram=256 and A has --virt-ram of 512, profile B will use the value 256. Example: If profile A has a --virt-file-size of 5 and B does not specify a size, B will use the value from A. =item server-override This parameter should be useful only in select circumstances. If machines are on a subnet that cannot access the cobbler server using the name/IP as configured in the cobbler settings file, use this parameter to override that server name. See also --dhcp-tag for configuring the next server and DHCP information of the system if you are also using Cobbler to help manage your DHCP configuration. =back =head2 SYSTEMS System records map a piece of hardware (or a virtual machine) with the cobbler profile to be assigned to run on it. This may be thought of as choosing a role for a specific system. Note that if provisioning via koan and PXE menus alone, it is not required to create system records in cobbler, though they are useful when system specific customizations are required. One such customization would be defining the MAC address. If there is a specific role intended for a given machine, system records should be created for it. System commands have a wider variety of control offered over network details. In order to use these to the fullest possible extent, the kickstart template used by cobbler must contain certain kickstart snippets (sections of code specifically written for Cobbler to make these values become reality). Compare your kickstart templates with the stock ones in /var/lib/cobbler/kickstarts if you have upgraded, to make sure you can take advantage of all options to their fullest potential. If you are a new cobbler user, base your kickstarts off of these templates. Non-kickstart based distributions, while supported by Cobbler, may not be able to use all of these features. Read more about networking setup at: https://github.com/cobbler/cobbler/wiki/Advanced-networking B Adds a cobbler System to the configuration. Arguments are specified as per "profile add" with the following changes: =over =item name The system name works like the name option for other commands. If the name looks like a MAC address or an IP, the name will implicitly be used for either --mac or --ip of the first interface, respectively. However, it's usually better to give a descriptive name -- don't rely on this behavior. A system created with name "default" has special semantics. If a default system object exists, it sets all undefined systems to PXE to a specific profile. Without a "default" system name created, PXE will fall through to local boot for unconfigured systems. When using "default" name, don't specify any other arguments than --profile ... they won't be used. 
=item --mac Specifying a mac address via --mac allows the system object to boot directly to a specific profile via PXE, bypassing cobbler's PXE menu. If the name of the cobbler system already looks like a mac address, this is inferred from the system name and does not need to be specified. MAC addresses have the format AA:BB:CC:DD:EE:FF. It's highly recommended to register your MAC-addresses in Cobbler if you're using static addressing with multiple interfaces, or if you are using any of the advanced networking features like bonding, bridges or VLANs. Cobbler does contain a feature (enabled in /etc/cobbler/settings) that can automatically add new system records when it finds profiles being provisioned on hardware it has seen before. This may help if you do not have a report of all the MAC addresses in your datacenter/lab configuration. =item --ip-address If cobbler is configured to generate a DHCP configuration (see advanced section), use this setting to define a specific IP for this system in DHCP. Leaving off this parameter will result in no DHCP management for this particular system. Example: --ip-address=192.168.1.50 Note for Itanium users: this setting is always required for IA64 regardless of whether DHCP management is enabled. If DHCP management is disabled and the interface is labelled --static=1, this setting will be used for static IP configuration. Special feature: To control the default PXE behavior for an entire subnet, this field can also be passed in using CIDR notation. If --ip is CIDR, do not specify any other arguments other than --name and --profile. When using the CIDR notation trick, don't specify any arguments other than --name and --profile... they won't be used. =item --dns-name If using the DNS management feature (see advanced section -- cobbler supports auto-setup of BIND and dnsmasq), use this to define a hostname for the system to receive from DNS. Example: --dns-name=mycomputer.example.com This is a per-interface parameter. If you have multiple interfaces, it may be different for each interface, for example, assume a DMZ / dual-homed setup. =item --gateway and --subnet If you are using static IP configurations and the interface is flagged --static=1, these will be applied. Subnet is a per-interface parameter. Because of the way gateway is stored on the installed OS, gateway is a global parameter. You may use --static-routes for per-interface customizations if required. =item --hostname This field corresponds to the hostname set in a systems /etc/sysconfig/network file. This has no bearing on DNS, even when manage_dns is enabled. Use --dns-name instead for that feature. This parameter is assigned once per system, it is not a per-interface setting. =item --power-address, --power-type, --power-user, --power-pass, --power-id Cobbler contains features that enable integration with power management for easier installation, reinstallation, and management of machines in a datacenter environment. These parameters are described online at https://github.com/cobbler/cobbler/wiki/Power-management. If you have a power-managed datacenter/lab setup, usage of these features may be something you are interested in. =item --static Indicates that this interface is statically configured. Many fields (such as gateway/subnet) will not be used unless this field is enabled. This is a per-interface setting. =item --static-routes This is a space delimited list of ip/mask:gateway routing information in that format. Most systems will not need this information. 
This is a per-interface setting. =item --virt-bridge (Virt-only) While --virt-bridge is present in the profile object (see above), here it works on an interface by interface basis. For instance it would be possible to have --virt-bridge0=xenbr0 and --virt-bridge1=xenbr1. If not specified in cobbler for each interface, koan will use the value as specified in the profile for each interface, which may not always be what is intended, but will be sufficient in most cases. This is a per-interface setting. =item --kickstart While it is recommended that the --kickstart parameter is only used within for the "profile add" command, there are limited scenarios when an install base switching to cobbler may have legacy kickstarts created on a per-system basis (one kickstart for each system, nothing shared) and may not want to immediately make use of the cobbler templating system. This allows specifying a kickstart for use on a per-system basis. Creation of a parent profile is still required. If the kickstart is a filesystem location, it will still be treated as a cobbler template. =item --netboot-enabled If set false, the system will be provisionable through koan but not through standard PXE. This will allow the system to fall back to default PXE boot behavior without deleting the cobbler system object. The default value allows PXE. Cobbler contains a PXE boot loop prevention feature (pxe_just_once, can be enabled in /etc/cobbler/settings) that can automatically trip off this value after a system gets done installing. This can prevent installs from appearing in an endless loop when the system is set to PXE first in the BIOS order. =item --ldap-enabled, --ldap-type Cobbler contains features that enable ldap management for easier configuration after system provisioning. If set true, koan will run the ldap command as defined by the systems ldap_type. The default value is false. =item --monit-enabled If set true, koan will reload monit after each configuration run. The default value is false. =item --repos-enabled If set true, koan can reconfigure repositories after installation. This is described further on the Cobbler Wiki, https://github.com/cobbler/cobbler/wiki/Manage-yum-repos. =item --dhcp-tag If you are setting up a PXE environment with multiple subnets/gateways, and are using cobbler to manage a DHCP configuration, you will probably want to use this option. If not, it can be ignored. By default, the dhcp tag for all systems is "default" and means that in the DHCP template files the systems will expand out where $insert_cobbler_systems_definitions is found in the DHCP template. However, you may want certain systems to expand out in other places in the DHCP config file. Setting --dhcp-tag=subnet2 for instance, will cause that system to expand out where $insert_cobbler_system_definitions_subnet2 is found, allowing you to insert directives to specify different subnets (or other parameters) before the DHCP configuration entries for those particular systems. This is described further on the Cobbler Wiki. =item --interface By default flags like --ip, --mac, --dhcp-tag, --dns-name, --subnet, --virt-bridge, and --static-routes operate on the first network interface defined for a system (eth0). However, cobbler supports an arbitrary number of interfaces. Using --interface=eth1 for instance, will allow creating and editing of a second interface. 
Interface naming notes: Additional interfaces can be specified (for example: eth1, or any name you like, as long as it does not conflict with any reserved names such as kernel module names) for use with the edit command. Defining VLANs this way is also supported; if you want to add VLAN 5 on interface eth0, simply name your interface eth0:5. Example: cobbler system edit --name=foo --ip-address=192.168.1.50 --mac=AA:BB:CC:DD:EE:A0 cobbler system edit --name=foo --interface=eth0 --ip-address=192.168.1.51 --mac=AA:BB:CC:DD:EE:A1 cobbler system report foo Interfaces can be deleted using the --delete-interface option. Example: cobbler system edit --name=foo --interface=eth2 --delete-interface =item --interface-type, --interface-master and --bonding-opts/--bridge-opts One of the other advanced networking features supported by Cobbler is NIC bonding and bridging. You can use this to bond multiple physical network interfaces to one single logical interface to reduce single points of failure in your network, or to create bridged interfaces for things like tunnels and virtual machine networks. Supported values for the --interface-type parameter are "bond", "bond_slave", "bridge", "bridge_slave" and "bonded_bridge_slave". If one of the "_slave" options is specified, you also need to define the master-interface for this bond using --interface-master=INTERFACE. Bonding and bridge options for the master-interface may be specified using --bonding-opts="foo=1 bar=2" or --bridge-opts="foo=1 bar=2", respectively. Note: The options "master" and "slave" are deprecated, and are assumed to be "bond" and "bond_slave" when encountered. When a system object is saved, the deprecated values will be overwritten with the new, correct values. Example: cobbler system edit --name=foo --interface=eth0 --mac=AA:BB:CC:DD:EE:00 --interface-type=bond_slave --interface-master=bond0 cobbler system edit --name=foo --interface=eth1 --mac=AA:BB:CC:DD:EE:01 --interface-type=bond_slave --interface-master=bond0 cobbler system edit --name=foo --interface=bond0 --interface-type=bond --bonding-opts="mode=active-backup miimon=100" --ip-address=192.168.0.63 --subnet=255.255.255.0 --gateway=192.168.0.1 --static=1 More information about networking setup is available at https://github.com/cobbler/cobbler/wiki/Advanced-networking To review what networking configuration you have for any object, run "cobbler system report" at any time: Example: cobbler system report --name=foo =back =head2 REPOSITORIES Repository mirroring allows cobbler to mirror not only install trees ("cobbler import" does this for you) but also optional packages, 3rd party content, and even updates. Mirroring all of this content locally on your network will result in faster, more up-to-date installations and faster updates. If you are only provisioning a home setup, this will probably be overkill, though it can be very useful for larger setups (labs, datacenters, etc). B =over =item mirror The address of the yum mirror. This can be an rsync:// URL, an ssh location, or an http:// or ftp:// mirror location. Filesystem paths also work. The mirror address should specify an exact repository to mirror -- just one architecture and just one distribution. If you have a separate repo to mirror for a different arch, add that repo separately.
Here are some examples of well-formed mirror URLs: rsync://yourmirror.example.com/fedora-linux-core/updates/6/i386 (for rsync protocol) http://mirrors.kernel.org/fedora/extras/6/i386/ (for http://) user@yourmirror.example.com/fedora-linux-core/updates/6/i386 (for SSH) Experimental support is also provided for mirroring RHN content when you need a fast local mirror. The mirror syntax for this is --mirror=rhn://channel-name and you must have entitlements for this to work. This requires the cobbler server to be installed on RHEL5 or later. You will also need a version of yum-utils equal to or greater than 1.0.4. =item name This name is used as the save location for the mirror. If the mirror represented, say, Fedora Core 6 i386 updates, a good name would be "fc6i386updates". Again, be specific. This name corresponds with values given to the --repos parameter of "cobbler profile add". If a profile has a --repos value that matches the name given here, that repo can be automatically set up during provisioning (when supported) and installed systems will also use the boot server as a mirror (unless "yum_post_install_mirror" is disabled in the settings file). By default the provisioning server will act as a mirror to systems it installs, which may not be desirable for laptop configurations, etc. Distros that can make use of yum repositories during kickstart include FC6 and later, RHEL 5 and later, and derivative distributions. See the documentation on "cobbler profile add" for more information. =item rpm-list By specifying a space-delimited list of package names for --rpm-list, one can decide to mirror only a part of a repo (the list of packages given, plus dependencies). This may be helpful in conserving time/space/bandwidth. For instance, when mirroring FC6 Extras, it may be desired to mirror just cobbler and koan, and skip all of the game packages. To do this, use --rpm-list="cobbler koan". This option only works for http:// and ftp:// repositories (as it is powered by yumdownloader). It will be ignored for other mirror types, such as local paths and rsync:// mirrors. =item createrepo-flags Specifies optional flags to feed into the createrepo tool, which is called when "cobbler reposync" is run for the given repository. The defaults are '-c cache'. =item keep-updated When set to 0, specifies that the named repository should not be updated during a normal "cobbler reposync". The repo may still be updated by name. The repo should be synced at least once before disabling this feature. See "cobbler reposync" below. =item mirror-locally When set to "N", specifies that this yum repo is to be referenced directly via kickstarts and not mirrored locally on the cobbler server. Only http:// and ftp:// mirror urls are supported when using --mirror-locally=N; you cannot use filesystem URLs. =item priority Specifies the priority of the repository (the lower the number, the higher the priority), which applies to installed machines using the repositories that also have the yum priorities plugin installed. The default priority for the plugin is 99, as is that of all cobbler mirrored repositories. =item arch Specifies what architecture the repository should use. By default the current system arch (of the server) is used, which may not be desirable. Using this to override the default arch allows mirroring of source repositories (using --arch=src). =item yumopts Sets values for additional yum options that the repo should use on installed systems.
For instance, if a yum plugin takes certain parameters "alpha" and "beta", use something like --yumopts="alpha=2 beta=3". =item breed Ordinarily cobbler's repo system will understand what you mean without supplying this parameter, though you can set it explicitly if needed. =back =head2 MANAGEMENT CLASSES Management classes allow cobbler to function as a configuration management system. Cobbler currently supports the following resource types: =over =item 1. Packages =item 2. Files =back Resources are executed in the order listed above. B =over =item name The name of the mgmtclass. Use this name when adding a management class to a system, profile, or distro. To add a mgmtclass to an existing system use something like (cobbler system edit --name="madhatter" --mgmt-classes="http mysql"). =item comment A comment that describes the functions of the management class. =item packages Specifies a list of package resources required by the management class. =item files Specifies a list of file resources required by the management class. =back =head1 MANAGEMENT RESOURCES Resources are the lego blocks of configuration management. Resources are grouped together via Management Classes, which are then linked to a system. Cobbler supports two (2) resource types. Resources are configured in the order listed below. =over =item 1. Packages =item 2. Files =back =head2 PACKAGE RESOURCES Package resources are managed using "cobbler package add". =head3 Actions =over =item install Install the package. [Default] =item uninstall Uninstall the package. =back =head3 Attributes =over =item installer Which package manager to use; valid options are [rpm|yum]. =item version Which version of the package to install. =back =head3 Examples B =head2 FILE RESOURCES =head3 Actions =over =item create Create the file. [Default] =item remove Remove the file. =back =head3 Attributes =over =item mode Permission mode (as in chmod). =item group The group owner of the file. =item user The user for the file. =item path The path for the file. =item template The template for the file. =back =head3 Examples B =head2 DISPLAYING CONFIGURATION ENTRIES The following commands are usable regardless of how you are using cobbler. "report" gives detailed configuration info. "list" just lists the names of items in the configuration. Run these commands to check how you have cobbler configured. B B B B Alternatively, you could look at the configuration files in /var/lib/cobbler to see the same information. =head2 DELETING CONFIGURATION ENTRIES If you want to remove a specific object, use the remove command with the name that was used to add it. B B B B B B B B =head2 EDITING If you want to change a particular setting without doing an "add" again, use the "edit" command, using the same name you gave when you added the item. Anything supplied in the parameter list will overwrite the corresponding settings in the existing object; settings not mentioned are preserved. B =head2 COPYING Objects can also be copied: B =head2 RENAMING Objects can also be renamed, as long as other objects don't reference them. B =head2 REPLICATING Cobbler can replicate configurations from a master cobbler server. Each cobbler server is still expected to have a locally relevant /etc/cobbler/cobbler.conf and modules.conf, as these files are not synced. This feature is intended for load-balancing, disaster-recovery, backup, or multiple geography support. B Cobbler can replicate data from a central server.
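For illustration only (the hostname and patterns here are placeholders, and the exact flag names should be verified against the replicate help output for your cobbler version), a replication run that pulls a subset of objects from a master might look something like:

cobbler replicate --master=cobbler-master.example.org --profiles="webservers*" --systems="*.example.org" --omit-data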
Objects that need to be replicated should be specified with a pattern, such as --profiles="webservers* dbservers*" or --systems="*.example.org". All objects matched by the pattern, and all dependencies of those objects matched by the pattern (recursively) will be transferred from the remote server to the local server. That is to say, if you intend to transfer "*.example.org" and the definitions of the systems have not changed, but a profile above them has changed, the changes to that profile will also be transferred. In the case where objects are more recent on the local server, those changes will not be overridden locally. Common data locations will be rsync'ed from the master server unless --omit-data is specified. To delete objects that are no longer present on the master server, use --prune. Warning: this will delete all object types not present on the remote server from the local server, and is recursive. If you use prune, it is best to manage cobbler centrally and not expect changes made on the slave servers to be preserved. It is not currently possible to just prune objects of a specific type. =head2 REBUILDING CONFIGURATIONS B Cobbler sync is used to repair or rebuild the contents of /tftpboot or /var/www/cobbler when something has changed behind the scenes. It brings the filesystem up to date with the configuration as understood by cobbler. Sync should be run whenever files in /var/lib/cobbler are manually edited (which is not recommended except for the settings file) or when making changes to kickstart files. In practice, this should not happen often, though running sync too many times does not cause any adverse effects. If using cobbler to manage a DHCP and/or DNS server (see the advanced section of this manpage), sync does need to be run after systems are added to regenerate and reload the DHCP/DNS configurations. The sync process can also be kicked off from the web interface. =head1 EXAMPLES =head2 IMPORT WORKFLOW Import is a very useful command that makes starting out with cobbler very quick and easy. This example shows how to create a provisioning infrastructure from a distribution mirror or DVD ISO. Then a default PXE configuration is created, so that by default systems will PXE boot into a fully automated install process for that distribution. You can use a network rsync mirror, a mounted DVD location, or a tree you have available via a network filesystem. Import knows how to autodetect the architecture of what is being imported, though to make sure things are named correctly, it's always a good idea to specify --arch. For instance, if you import a distribution named "fedora8" from an ISO, and it's an x86_64 ISO, specify --arch=x86_64 and the distro will be named "fedora8-x86_64" automatically, and the right architecture field will also be set on the distribution object. If you are batch importing an entire mirror (containing multiple distributions and arches), you don't have to do this, as cobbler will set the names for things based on the paths it finds. B B # OR B # OR (using an external NAS box without mirroring) B # wait for mirror to rsync... B B B B =head2 NON-IMPORT (MANUAL) WORKFLOW The following example uses a local kernel and initrd file (already downloaded), and shows how profiles would be created using two different kickstarts -- one for a web server configuration and one for a database server. Then, a machine is assigned to each profile.
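As a rough sketch of this manual workflow (every path, name, and MAC address below is just a placeholder, not a required value), the commands might look something like:

cobbler distro add --name=rhel5 --kernel=/root/vmlinuz --initrd=/root/initrd.img
cobbler profile add --name=rhel5-webserver --distro=rhel5 --kickstart=/var/lib/cobbler/kickstarts/webserver.ks
cobbler profile add --name=rhel5-dbserver --distro=rhel5 --kickstart=/var/lib/cobbler/kickstarts/dbserver.ks
cobbler system add --name=web01 --profile=rhel5-webserver --mac=AA:BB:CC:DD:EE:F0
cobbler system add --name=db01 --profile=rhel5-dbserver --mac=AA:BB:CC:DD:EE:F1
cobbler sync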
B B B B B B B B =head2 REPOSITORY MIRRORING WORKFLOW The following example shows how to set up a repo mirror for two repositories, and create a profile that will auto-install those repository configurations on provisioned systems using that profile. B # set up your cobbler distros here. B B B B =head2 VIRTUALIZATION For Virt, be sure the distro uses the correct kernel (if paravirt) and follow similar steps as above, adding additional parameters as desired: B Specify reasonable values for the Virt image size (in GB) and RAM requirements (in MB): B Define systems if desired. koan can also provision based on the profile name. B If you have just installed cobbler, be sure that the "cobblerd" service is running and that port 25151 is unblocked. See the manpage for koan for the client-side steps. =head1 ADVANCED TOPICS =head2 PXE MENUS Cobbler will automatically generate PXE menus for all profiles it has defined. Running "cobbler sync" is required to generate and update these menus. To access the menus, type "menu" at the "boot:" prompt while a system is PXE booting. If nothing is typed, the network boot will default to a local boot. If "menu" is typed, the user can then choose and provision any cobbler profile the system knows about. If the association between a system (MAC address) and a profile is already known, it may be more useful to just use "system add" commands and declare that relationship in cobbler; however, many use cases will prefer having a PXE menu, especially when provisioning is done at the same time as installing new physical machines. If this behavior is not desired, run "cobbler system add --name=default --profile=plugh" to default all PXE booting machines to get a new copy of the profile "plugh". To go back to the menu system, run "cobbler system remove --name=default" and then "cobbler sync" to regenerate the menus. When using PXE menu deployment exclusively, it is not necessary to make cobbler system records, although the two can easily be mixed. Additionally, note that all files generated for the pxe menu configurations are templatable, so if you wish to change the color scheme or equivalent, see the files in /etc/cobbler. =head2 KICKSTART TEMPLATING The --ksmeta options above require more explanation. If and only if --kickstart options reference filesystem URLs, --ksmeta allows for templating of the kickstart files to achieve advanced functions. If the --ksmeta option for a profile read --ksmeta="foo=7 bar=llama", anywhere in the kickstart file where the string "$bar" appeared would be replaced with the string "llama". To apply these changes, "cobbler sync" must be run to generate custom kickstarts for each profile/system. For NFS and HTTP kickstart URLs, the "--ksmeta" options will have no effect. This is a good reason to let cobbler manage your kickstart files, though the URL functionality is provided for integration with legacy infrastructure, possibly including web apps that already generate kickstarts. Templated kickstart files are processed by the templating program/package Cheetah, so anything you can do in a Cheetah template can be done to a kickstart template. Learn more at http://www.cheetahtemplate.org/learn.html. When working with Cheetah, be sure to escape any shell macros that look like "$(this)" with something like "\$(this)" or errors may show up during the sync process. The Cobbler Wiki also contains numerous Cheetah examples that should prove useful in using this feature.
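To make the templating behavior concrete, here is a small illustrative sketch (the variable name and value are arbitrary examples, not required settings): editing a profile with

cobbler profile edit --name=webserver --ksmeta="timezone=Europe/Amsterdam"

would cause a kickstart template line such as "timezone $timezone" to render as "timezone Europe/Amsterdam" in the generated kickstart once "cobbler sync" is run.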
=head2 KICKSTART SNIPPETS Anywhere a kickstart template mentions SNIPPET::snippet_name, the file named /var/lib/cobbler/snippets/snippet_name (if present) will be included automatically in the kickstart template. This serves as a way to recycle frequently used kickstart snippets without duplication. Snippets can contain templating variables, and the variables will be evaluated according to the profile and/or system as one would expect. Snippets can also be overridden for specific profile names or system names. This is described on the Cobbler Wiki. =head2 KICKSTART VALIDATION To check for potential errors in kickstarts, prior to installation, use "cobbler validateks". This function will check all profile and system kickstarts for detectable errors. Since pykickstart is not future-Anaconda-version aware, there may be some false positives. It should be noted that "cobbler validateks" runs on the rendered kickstart output, not kickstart templates themselves. =head2 DHCP MANAGEMENT Cobbler can optionally help you manage a DHCP server. This feature is off by default. Choose either "management = isc_and_bind" or "management = dnsmasq" in /etc/cobbler/modules.conf. Then set "manage_dhcp" to 1 in /etc/cobbler/settings. This allows DHCP to be managed via "cobbler system add" commands, when you specify the mac address and IP address for systems you add into cobbler. Depending on your choice, cobbler will use /etc/cobbler/dhcpd.template or /etc/cobbler/dnsmasq.template as a starting point. This file must be user edited for the user's particular networking environment. Read the file and understand how the particular app (ISC dhcpd or dnsmasq) works before proceeding. If you already have DHCP configuration data that you would like to preserve (say DHCP was manually configured earlier), insert the relevant portions of it into the template file, as running "cobbler sync" will overwrite your previous configuration. NOTE: Itanium system names also need to be assigned to a distro that was created with the "--arch=ia64" parameter. If you have Itanium systems, you must (for now) choose 'dhcp_isc' in /etc/cobbler/modules.conf and enable manage_dhcp in the /etc/cobbler/settings file, and you are required to use --ip when creating the system object in order for those systems to PXE. This is due to an elilo limitation. By default, the DHCP configuration file will be updated each time "cobbler sync" is run, and not until then, so it is important to remember to use "cobbler sync" when using this feature. If omapi_enabled is set to 1 in /etc/cobbler/settings, the need to sync when adding new system records can be eliminated. However, the omapi feature is experimental and is not recommended for most users. =head2 DNS CONFIGURATION MANAGEMENT Cobbler can optionally manage DNS configuration using BIND or dnsmasq. Choose either "management = isc_and_bind" or "management = dnsmasq" in /etc/cobbler/modules.conf and then enable manage_dns in /etc/cobbler/settings. This feature is off by default. If using BIND, you must define the zones to be managed with the options 'manage_forward_zones' and 'manage_reverse_zones'. (See the Wiki for more information on this). If using BIND, Cobbler will use /etc/cobbler/named.template and /etc/cobbler/zone.template as a starting point for the named.conf and individual zone files, respectively. You may drop zone-specific template files in /etc/cobbler/zone_templates/name-of-zone which will override the default.
These files must be user edited for the user's particular networking environment. Read the file and understand how BIND works before proceeding. If using dnsmasq, the template is /etc/cobbler/dnsmasq.template. Read this file and understand how dnsmasq works before proceeding. All managed files (whether zone files and named.conf for BIND, or dnsmasq.conf for dnsmasq) will be updated each time ``cobbler sync'' is run, and not until then, so it is important to remember to use ``cobbler sync'' when using this feature. =head2 SERVICE DISCOVERY (AVAHI) If the avahi-tools package is installed, cobblerd will broadcast its presence on the network, allowing it to be discovered by koan with the koan --server=DISCOVER parameter. =head2 IMPORTING TREES Cobbler can auto-add distributions and profiles from remote sources, whether this is a filesystem path or an rsync mirror. This can save a lot of time when setting up a new provisioning environment. Import is a feature that many users will want to take advantage of, and is very simple to use. After an import is run, cobbler will try to detect the distribution type and automatically assign kickstarts. By default, it will provision the system by erasing the hard drive, setting up eth0 for dhcp, and using a default password of "cobbler". If this is undesirable, edit the kickstart files in /etc/cobbler to do something else or change the kickstart setting after cobbler creates the profile. Mirrored content is saved automatically in /var/www/cobbler/ks_mirror. Example: B Example2: B Example3: B Example4: B Example5: B Once imported, run a "cobbler list" or "cobbler report" to see what you've added. By default, the rsync operations will exclude content of certain architectures, debug RPMs, and ISO images -- to change what is excluded during an import, see /etc/cobbler/rsync.exclude. Note that all of the import commands will mirror install tree content into /var/www/cobbler unless a network accessible location is given with --available-as. --available-as will be primarily used when importing distros stored on an external NAS box, or potentially on another partition on the same machine that is already accessible via http:// or ftp://. For import methods using rsync, additional flags can be passed to rsync with the option --rsync-flags. Should you want to force the usage of a specific cobbler kickstart template for all profiles created by an import, you can feed the option --kickstart to import, to bypass the built-in kickstart auto-detection. =head2 DEFAULT PXE BOOT BEHAVIOR What happens when PXE booting a system when cobbler has no record of the system being booted? By default, cobbler will configure PXE to boot to the contents of /etc/cobbler/default.pxe, which (if unmodified) will just fall through to the local boot process. Administrators can modify this file if they would like to change that behavior. An easy way to specify a default cobbler profile to PXE boot is to create a system named "default". This will cause /etc/cobbler/default.pxe to be ignored. To restore the previous behavior, do a "cobbler system remove" on the "default" system. B B As mentioned in earlier sections, it is also possible to control the default behavior for a specific network: B =head2 REPO MANAGEMENT This has already been covered a good bit in the command reference section. Yum repository management is an optional feature, and is not required to provision through cobbler.
However, if cobbler is configured to mirror certain repositories, it can then be used to associate profiles with those repositories. Systems installed under those profiles will then be autoconfigured to use these repository mirrors in /etc/yum.repos.d, and if supported (Fedora Core 6 and later) these repositories can be leveraged even within Anaconda. This can be useful if (A) you have a large install base, (B) you want fast installation and upgrades for your systems, or (C) have some extra software not in a standard repository but want provisioned systems to know about that repository. Make sure there is plenty of space in cobbler's webdir, which defaults to /var/www/cobbler. B Cobbler reposync is the command to use to update repos as configured with "cobbler repo add". Mirroring can take a long time, and "cobbler reposync" must be run before provisioning to ensure systems have the files they need to actually use the mirrored repositories. If you just add repos and never run "cobbler reposync", the repos will never be mirrored. This is probably a command you would want to put on a crontab, though the frequency of that crontab and where the output goes is left up to the systems administrator. For those familiar with yum's reposync, cobbler's reposync is (in most uses) a wrapper around the yum command. Please use "cobbler reposync" to update cobbler mirrors, as yum's reposync does not perform all required steps. Also, cobbler adds support for rsync and SSH locations, whereas yum's reposync only supports what yum supports (http/ftp). If you ever want to update a certain repository, you can run: B When updating repos by name, a repo will be updated even if it is set to be not updated during a regular reposync operation (ex: cobbler repo edit --name=reponame1 --keep-updated=0). Note that if a cobbler import provides enough information to use the boot server as a yum mirror for core packages, cobbler can set up kickstarts to use the cobbler server as a mirror instead of the outside world. If this feature is desirable, it can be turned on by setting yum_post_install_mirror to 1 in /etc/cobbler/settings (and running "cobbler sync"). You should not use this feature if machines are provisioned on a different VLAN/network than production, or if you are provisioning laptops that will want to acquire updates on multiple networks. The flags --tries=N (for example, --tries=3) and --no-fail should likely be used when putting reposync on a crontab. They ensure network glitches in one repo can be retried and also that a failure to synchronize one repo does not stop other repositories from being synchronized. =head2 PXE BOOT LOOP PREVENTION If you have your machines set to PXE first in the boot order (ahead of hard drives), change the "pxe_just_once" flag in /etc/cobbler/settings to 1. This will set the machines to not PXE on successive boots once they complete one install. To re-enable PXE for a specific system, run the following command: B =head2 KICKSTART TRACKING Cobbler knows how to keep track of the status of kickstarting machines. B Using the status command will show when cobbler thinks a machine started kickstarting and when it finished, provided the proper snippets are found in the kickstart template. This is a good way to track machines that may have gone interactive (or stalled/crashed) during kickstarts. =head2 IMAGES Cobbler can help with booting images physically and virtually, though the usage of these commands varies substantially by the type of image.
Non-image based deployments are generally easier to work with and lead to more sustainable infrastructure. Some manual use of other commands beyond what is typically required of cobbler may be needed to prepare images for use with this feature. =head2 TRIGGERS Triggers provide a way to integrate cobbler with arbitrary 3rd party software without modifying cobbler's code. When adding a distro, profile, system, or repo, all scripts in /var/lib/cobbler/triggers/add are executed for the particular object type. Each particular file must be executable and it is executed with the name of the item being added as a parameter. Deletions work similarly -- delete triggers live in /var/lib/cobbler/triggers/delete. Order of execution is arbitrary, and cobbler does not ship with any triggers by default. There are also other kinds of triggers -- these are described on the Cobbler Wiki. For larger configurations, triggers should be written in Python -- in which case they are installed differently. This is also documented on the Wiki. =head2 API Cobbler also makes itself available as an XMLRPC API for use by higher-level management software. Learn more at http://www.cobblerd.org. =head2 WEB USER INTERFACE Most of the day-to-day actions in cobbler's command line can be performed in Cobbler's Web UI. To enable and access the WebUI, see the following documentation: https://github.com/cobbler/cobbler/wiki/Cobbler-web-interface =head2 BOOT CD Cobbler can build all of its profiles into a bootable CD image using the "cobbler buildiso" command. This allows for PXE-menu-like bringup of bare metal in environments where PXE is not possible. Another more advanced method is described in the koan manpage, though this method is easier and sufficient for most applications. =head2 POWER MANAGEMENT Cobbler contains a power management feature that allows the user to associate system records in cobbler with the power management configuration attached to them. This can ease installation by making it easy to reassign systems to new operating systems and then reboot those systems. Read more about this feature at https://github.com/cobbler/cobbler/wiki/Power-management =head2 CONFIG MANAGEMENT INTEGRATION Cobbler contains features for integrating an installation environment with a configuration management system, which handles the configuration of the system after it is installed by allowing changes to configuration files and settings. You can read more about this feature at https://github.com/cobbler/cobbler/wiki/Built-in-configuration-management and https://github.com/cobbler/cobbler/wiki/Using-cobbler-with-a-configuration-management-system. Both features may be considered experimental as of the time of the 1.4 release. =head1 EXIT_STATUS cobbler's command line returns a zero for success and non-zero for failure. =head1 ADDITIONAL RESOURCES Cobbler has a mailing list for user and development-related questions/comments at cobbler@lists.fedorahosted.org.
To subscribe, visit https://fedorahosted.org/mailman/listinfo/cobbler IRC channel: irc.freenode.net (#cobbler) Official web site, bug tracker, and Wiki: http://www.cobblerd.org =head1 AUTHOR Michael DeHaan cobbler-2.4.1/docs/koan.pod000066400000000000000000000110401227367477500155320ustar00rootroot00000000000000=head1 NAME koan - kickstart over a network, client side helper for cobbler =head1 SYNOPSIS koan --server=hostname [--list=type] [--virt|--replace-self|--display] [--profile=name] [--system=name] [--image=name] [--add-reinstall-entry] [--virt-name=name] [--virt-path=path] [--virt-type=type] [--nogfx] [--static-interface=name] [--kexec] =head1 DESCRIPTION Koan stands for "kickstart-over-a-network" and is a client-side helper program for use with Cobbler. koan allows for both network provisioning of new virtualized guests (Xen, QEMU/KVM, VMware) and re-installation of an existing system. When invoked, koan requests install information from a remote cobbler boot server, it then kicks off installations based on what is retrieved from cobbler and fed in on the koan command line. The examples below show the various use cases. =head1 LISTING REMOTE COBBLER OBJECTS To browse remote objects on a cobbler server and see what you can install using koan, run one of the following commands: koan --server=cobbler.example.org --list=profiles koan --server=cobbler.example.org --list=systems koan --server=cobbler.example.org --list=images =head1 LEARNING MORE ABOUT REMOTE COBBLER OBJECTS To learn more about what you are about to install, run one of the following commands: koan --server=cobbler.example.org --display --profile=name koan --server=cobbler.example.org --display --system=name koan --server=cobbler.example.org --display --image=name =head1 REINSTALLING EXISTING SYSTEMS Using --replace-self will reinstall the existing system the next time you reboot. koan --server=cobbler.example.org --replace-self --profile=name koan --server=cobbler.example.org --replace-self --system=name Additionally, adding the flag --add-reinstall-entry will make it add the entry to grub for reinstallation but will not make it automatically pick that option on the next boot. Also the flag --kexec can be appended, which will launch the installer without needing to reboot. Not all kernels support this option. =head1 INSTALLING VIRTUALIZED SYSTEMS Using --virt will install virtual machines as defined by Cobbler. There are various overrides you can use if not everything in cobbler is defined as you like it. koan --server=cobbler.example.org --virt --profile=name koan --server=cobbler.example.org --virt --system=name koan --server=cobbler.example.org --virt --image=name Some of the overrides that can be used with --virt are: Flag Explanation Example --virt-name name of virtual machine to create testmachine --virt-type forces usage of qemu/xen/vmware qemu --virt-bridge name of bridge device virbr0 --virt-path overwrite this disk partition /dev/sda4 --virt-path use this directory /opt/myimages --virt-path use this existing LVM volume VolGroup00 --nogfx do not use VNC graphics (Xen only) (does not take options) Nearly all of these variables can also be defined and centrally managed by the Cobbler server. If installing virtual machines in environments without DHCP, use of --system instead of --profile is required. Additionally use --static-interface=eth0 to supply which interface to use to supply network information. The installer will boot from this virtual interface. 
Leaving off --static-interface will result in an unsuccessful network installation. =head1 CONFIGURATION MANAGEMENT Using --update-config will update a system configuration as defined by Cobbler. koan --server=cobbler.example.org --update-config Additionally, adding the flag --summary will print configuration run stats. Koan passes in the system's FQDN in the background during the configuration request. Cobbler will match this FQDN to a configured system defined by Cobbler. The FQDN (Fully Qualified Domain Name) maps to the system's hostname field. =head1 ENVIRONMENT VARIABLES Koan respects the COBBLER_SERVER variable to specify the cobbler server to use. This is a convenient way to avoid using the --server option for each command. This variable is set automatically on systems installed via cobbler, assuming standard kickstart templates are used. If you need to change this on an installed system, edit /etc/profile.d/cobbler.{csh,sh}. =head1 ADDITIONAL Reading the cobbler manpage and www.cobblerd.org is highly recommended. The mailing list is cobbler@lists.fedorahosted.org. Subscribe at https://fedorahosted.org/mailman/listinfo/cobbler =head1 AUTHOR Michael DeHaan cobbler-2.4.1/kickstarts/000077500000000000000000000000001227367477500153345ustar00rootroot00000000000000cobbler-2.4.1/kickstarts/default.ks000066400000000000000000000001631227367477500173170ustar00rootroot00000000000000# this file intentionally left blank # admins: edit it as you like, or leave it blank for non-interactive install cobbler-2.4.1/kickstarts/esxi4-ks.cfg000066400000000000000000000000261227367477500174620ustar00rootroot00000000000000# Test ESXi 5 ks file cobbler-2.4.1/kickstarts/esxi5-ks.cfg000066400000000000000000000000261227367477500174630ustar00rootroot00000000000000# Test ESXi 5 ks file cobbler-2.4.1/kickstarts/legacy.ks000066400000000000000000000025301227367477500171370ustar00rootroot00000000000000#platform=x86, AMD64, or Intel EM64T # System authorization information auth --useshadow --enablemd5 # System bootloader configuration bootloader --location=mbr # Partition clearing information clearpart --all --initlabel # Use text mode install text # Firewall configuration firewall --enabled # Run the Setup Agent on first boot firstboot --disable # System keyboard keyboard us # System language lang en_US # Use network installation url --url=$tree # Network information $SNIPPET('network_config') # Reboot after installation reboot #Root password rootpw --iscrypted $default_password_crypted # SELinux configuration selinux --disabled # Do not configure the X Window System skipx # System timezone timezone America/New_York # Install OS instead of upgrade install # Clear the Master Boot Record zerombr # Allow anaconda to partition the system as needed autopart %pre $SNIPPET('log_ks_pre') $SNIPPET('kickstart_start') $SNIPPET('pre_install_network_config') $SNIPPET('pre_anamon') %packages %post $SNIPPET('log_ks_post') # Begin yum configuration $yum_config_stanza # End yum configuration $SNIPPET('post_install_kernel_options') $SNIPPET('post_install_network_config') $SNIPPET('download_config_files') $SNIPPET('koan_environment') $SNIPPET('redhat_register') $SNIPPET('cobbler_register') # Begin final steps $SNIPPET('kickstart_done') # End final steps cobbler-2.4.1/kickstarts/pxerescue.ks000066400000000000000000000004441227367477500177000ustar00rootroot00000000000000# Rescue Boot Template # Set the language and language support lang en_US # uncomment for legacy system (e.g. 
RHEL4) # langsupport en_US # Set the keyboard keyboard "us" # Network kickstart network --bootproto dhcp # Rescue method (only NFS/FTP/HTTP currently supported) url --url=$tree cobbler-2.4.1/kickstarts/sample.ks000066400000000000000000000033001227367477500171500ustar00rootroot00000000000000#platform=x86, AMD64, or Intel EM64T # System authorization information auth --useshadow --enablemd5 # System bootloader configuration bootloader --location=mbr # Partition clearing information clearpart --all --initlabel # Use text mode install text # Firewall configuration firewall --enabled # Run the Setup Agent on first boot firstboot --disable # System keyboard keyboard us # System language lang en_US # Use network installation url --url=$tree # If any cobbler repo definitions were referenced in the kickstart profile, include them here. $yum_repo_stanza # Network information $SNIPPET('network_config') # Reboot after installation reboot #Root password rootpw --iscrypted $default_password_crypted # SELinux configuration selinux --disabled # Do not configure the X Window System skipx # System timezone timezone America/New_York # Install OS instead of upgrade install # Clear the Master Boot Record zerombr # Allow anaconda to partition the system as needed autopart %pre $SNIPPET('log_ks_pre') $SNIPPET('kickstart_start') $SNIPPET('pre_install_network_config') # Enable installation monitoring $SNIPPET('pre_anamon') %packages $SNIPPET('func_install_if_enabled') $SNIPPET('puppet_install_if_enabled') %post $SNIPPET('log_ks_post') # Start yum configuration $yum_config_stanza # End yum configuration $SNIPPET('post_install_kernel_options') $SNIPPET('post_install_network_config') $SNIPPET('func_register_if_enabled') $SNIPPET('puppet_register_if_enabled') $SNIPPET('download_config_files') $SNIPPET('koan_environment') $SNIPPET('redhat_register') $SNIPPET('cobbler_register') # Enable post-install boot notification $SNIPPET('post_anamon') # Start final steps $SNIPPET('kickstart_done') # End final steps cobbler-2.4.1/kickstarts/sample.seed000066400000000000000000000133671227367477500174710ustar00rootroot00000000000000# Mostly based on the Ubuntu installation guide # https://help.ubuntu.com/12.04/installation-guide/ # Preseeding only locale sets language, country and locale. d-i debian-installer/locale string en_US # Keyboard selection. # Disable automatic (interactive) keymap detection. d-i console-setup/ask_detect boolean false d-i keyboard-configuration/layoutcode string us d-i keyboard-configuration/variantcode string # netcfg will choose an interface that has link if possible. This makes it # skip displaying a list if there is more than one interface. #set $myhostname = $getVar('hostname',$getVar('name','cobbler')).replace("_","-") d-i netcfg/choose_interface select auto d-i netcfg/get_hostname string $myhostname # If non-free firmware is needed for the network or other hardware, you can # configure the installer to always try to load it, without prompting. Or # change to false to disable asking. 
# d-i hw-detect/load_firmware boolean true # NTP/Time Setup d-i time/zone string US/Eastern d-i clock-setup/utc boolean true d-i clock-setup/ntp boolean true d-i clock-setup/ntp-server string ntp.ubuntu.com # Setup the installation source d-i mirror/country string manual d-i mirror/http/hostname string $http_server d-i mirror/http/directory string $install_source_directory d-i mirror/http/proxy string #set $os_v = $getVar('os_version','') #if $os_v and $os_v.lower()[0] > 'p' # Required at least for 12.10+ d-i live-installer/net-image string http://$http_server/cobbler/links/$distro_name/install/filesystem.squashfs #end if # Suite to install. # d-i mirror/suite string precise # d-i mirror/udeb/suite string precise # Components to use for loading installer components (optional). #d-i mirror/udeb/components multiselect main, restricted # Disk Partitioning # Use LVM, and wipe out anything that already exists d-i partman/choose_partition select finish d-i partman/confirm boolean true d-i partman/confirm_nooverwrite boolean true d-i partman-auto/method string lvm d-i partman-lvm/device_remove_lvm boolean true d-i partman-lvm/confirm boolean true d-i partman-lvm/confirm_nooverwrite boolean true d-i partman-md/device_remove_md boolean true d-i partman-partitioning/confirm_write_new_label boolean true # You can choose one of the three predefined partitioning recipes: # - atomic: all files in one partition # - home: separate /home partition # - multi: separate /home, /usr, /var, and /tmp partitions d-i partman-auto/choose_recipe select atomic # If you just want to change the default filesystem from ext3 to something # else, you can do that without providing a full recipe. # d-i partman/default_filesystem string ext4 # root account and password d-i passwd/root-login boolean true d-i passwd/root-password-crypted password $default_password_crypted # skip creation of a normal user account. d-i passwd/make-user boolean false # You can choose to install restricted and universe software, or to install # software from the backports repository. # d-i apt-setup/restricted boolean true # d-i apt-setup/universe boolean true # d-i apt-setup/backports boolean true # Uncomment this if you don't want to use a network mirror. # d-i apt-setup/use_mirror boolean false # Select which update services to use; define the mirrors to be used. # Values shown below are the normal defaults. # d-i apt-setup/services-select multiselect security # d-i apt-setup/security_host string security.ubuntu.com # d-i apt-setup/security_path string /ubuntu $SNIPPET('preseed_apt_repo_config') # Enable deb-src lines # d-i apt-setup/local0/source boolean true # URL to the public key of the local repository; you must provide a key or # apt will complain about the unauthenticated repository and so the # sources.list line will be left commented out # d-i apt-setup/local0/key string http://local.server/key # By default the installer requires that repositories be authenticated # using a known gpg key. This setting can be used to disable that # authentication. Warning: Insecure, not recommended. # d-i debian-installer/allow_unauthenticated boolean true # Individual additional packages to install # wget is REQUIRED otherwise quite a few things won't work # later in the build (like late-command scripts) d-i pkgsel/include string ntp ssh wget # Use the following option to add additional boot parameters for the # installed system (if supported by the bootloader installer). # Note: options passed to the installer will be added automatically. 
d-i debian-installer/add-kernel-opts string $kernel_options_post # Avoid that last message about the install being complete. d-i finish-install/reboot_in_progress note ## Figure out if we're kickstarting a system or a profile #if $getVar('system_name','') != '' #set $what = "system" #else #set $what = "profile" #end if # This first command is run as early as possible, just after preseeding is read. # d-i preseed/early_command string [command] d-i preseed/early_command string wget -O- \ http://$http_server/cblr/svc/op/script/$what/$name/?script=preseed_early_default | \ /bin/sh -s # This command is run immediately before the partitioner starts. It may be # useful to apply dynamic partitioner preseeding that depends on the state # of the disks (which may not be visible when preseed/early_command runs). # d-i partman/early_command \ # string debconf-set partman-auto/disk "\$(list-devices disk | head -n1)" # This command is run just before the install finishes, but when there is # still a usable /target directory. You can chroot to /target and use it # directly, or use the apt-install and in-target commands to easily install # packages and run commands in the target system. # d-i preseed/late_command string [command] d-i preseed/late_command string wget -O- \ http://$http_server/cblr/svc/op/script/$what/$name/?script=preseed_late_default | \ chroot /target /bin/sh -s cobbler-2.4.1/kickstarts/sample_autoyast.xml000066400000000000000000000061451227367477500212760ustar00rootroot00000000000000 false ## without the next 6 lines autoyast will ask for confirmation bevore installation false true $SNIPPET('hosts.xml') $SNIPPET('kdump.xml') english en_US $SNIPPET('networking.xml') 3 true root 0 /root /bin/bash 0 $default_password_crypted root ## we have to include the pre-scripts tag to get kickstart_start included ## SuSE has an annoying habit on ppc64 of changing the system ## boot order after installation. This makes it non-trivial to ## automatically re-install future OS. If you want to workaround ## this, un-comment out the following two lines and the two ## lines in the post-scripts section: #set global $wrappedscript = 'save_boot_device' ## $SNIPPET('suse_scriptwrapper.xml') ## This plugin wrapper provides the flexibility to call pure shell ## snippets which can be used directly on kickstart and with with ## wrapper on SuSE. ## ## To use it ## - exchange name_of_pure_shell_snippet with the name of this shell snippet ## - and remove the '##' in front of the line with suse_scriptwrapper.xml ## #set global $wrappedscript = 'name_of_pure_shell_snippet' ## $SNIPPET('suse_scriptwrapper.xml') ## SuSE has an annoying habit on ppc64 of changing the system ## boot order after installation. This makes it non-trivial to ## automatically re-install future OS. If you want to workaround ## this, un-comment out the following two lines and the two ## lines in the pre-scripts section: #set global $wrappedscript = 'restore_boot_device' ## $SNIPPET('suse_scriptwrapper.xml') cobbler-2.4.1/kickstarts/sample_end.ks000066400000000000000000000033511227367477500200040ustar00rootroot00000000000000# kickstart template for Fedora 8 and later. 
# (includes %end blocks) # do not use with earlier distros #platform=x86, AMD64, or Intel EM64T # System authorization information auth --useshadow --enablemd5 # System bootloader configuration bootloader --location=mbr # Partition clearing information clearpart --all --initlabel # Use text mode install text # Firewall configuration firewall --enabled # Run the Setup Agent on first boot firstboot --disable # System keyboard keyboard us # System language lang en_US # Use network installation url --url=$tree # If any cobbler repo definitions were referenced in the kickstart profile, include them here. $yum_repo_stanza # Network information $SNIPPET('network_config') # Reboot after installation reboot #Root password rootpw --iscrypted $default_password_crypted # SELinux configuration selinux --disabled # Do not configure the X Window System skipx # System timezone timezone America/New_York # Install OS instead of upgrade install # Clear the Master Boot Record zerombr # Allow anaconda to partition the system as needed autopart %pre $SNIPPET('log_ks_pre') $SNIPPET('kickstart_start') $SNIPPET('pre_install_network_config') # Enable installation monitoring $SNIPPET('pre_anamon') %end %packages $SNIPPET('func_install_if_enabled') %end %post $SNIPPET('log_ks_post') # Start yum configuration $yum_config_stanza # End yum configuration $SNIPPET('post_install_kernel_options') $SNIPPET('post_install_network_config') $SNIPPET('func_register_if_enabled') $SNIPPET('download_config_files') $SNIPPET('koan_environment') $SNIPPET('redhat_register') $SNIPPET('cobbler_register') # Enable post-install boot notification $SNIPPET('post_anamon') # Start final steps $SNIPPET('kickstart_done') # End final steps %end cobbler-2.4.1/kickstarts/sample_esx4.ks000066400000000000000000000000001227367477500201050ustar00rootroot00000000000000cobbler-2.4.1/kickstarts/sample_esxi4.ks000066400000000000000000000005041227367477500202670ustar00rootroot00000000000000# sample Kickstart for ESXi install url $tree rootpw --iscrypted $default_password_crypted accepteula reboot autopart --firstdisk --overwritevmfs $SNIPPET('network_config_esxi') %pre --unsupported --interpreter=busybox $SNIPPET('kickstart_start') %post --unsupported --interpreter=busybox $SNIPPET('kickstart_done') cobbler-2.4.1/kickstarts/sample_esxi5.ks000066400000000000000000000006021227367477500202670ustar00rootroot00000000000000# # Sample scripted installation file # for ESXi 5+ # vmaccepteula reboot --noeject rootpw --iscrypted $default_password_crypted install --firstdisk --overwritevmfs clearpart --firstdisk --overwritevmfs $SNIPPET('network_config') %pre --interpreter=busybox $SNIPPET('kickstart_start') $SNIPPET('pre_install_network_config') %post --interpreter=busybox $SNIPPET('kickstart_done') cobbler-2.4.1/kickstarts/sample_old.seed000066400000000000000000000065331227367477500203240ustar00rootroot00000000000000#platform=x86, AMD64, or Intel EM64T # System authorization information # System bootloader configuration d-i grub-installer/only_debian boolean true #grub-installer grub-installer/bootdev string hd0 d-i grub-installer/bootdev string hd0 ### add kernel postinst options (--kopts-post) d-i debian-installer/add-kernel-opts string $kernel_options_post # Partition clearing information ### Partitioning available methods are: "regular", "lvm" and "crypto" d-i partman-auto/disk string /dev/sda d-i partman-auto/method string lvm d-i partman-auto/purge_lvm_from_device boolean true d-i partman-lvm/device_remove_lvm boolean true d-i partman-lvm/confirm boolean true 
#d-i partman-auto/init_automatically_partition \\ # select Guided - use entire disk and set up LVM #d-i partman-auto/expert_recipe_file string /recipe d-i partman-auto/choose_recipe select atomic d-i partman/confirm_write_new_label boolean true d-i partman/choose_partition select finish d-i partman/confirm boolean true # Use text mode install # Firewall configuration # Run the Setup Agent on first boot # System keyboard d-i console-setup/dont_ask_layout note d-i console-keymaps-at/keymap select us # System language # Use network installation # NOTE : The suite seems to be hardcoded on installer d-i mirror/suite string $suite d-i mirror/country string enter information manually d-i mirror/http/hostname string $hostname d-i mirror/http/directory string $directory d-i mirror/http/proxy string # If any cobbler repo definitions were referenced in the kickstart profile, include them here. #apt-setup-udeb apt-setup/services-select multiselect security d-i apt-setup/services-select multiselect security d-i apt-setup/security_host string $hostname$directory-security d-i apt-setup/volatile_host string $hostname$directory-volatile # Network information # NOTE : this questions are asked before downloading preseed #d-i netcfg/get_hostname string unassigned-hostname #d-i netcfg/get_domain string unassigned-hostname # Reboot after installation finish-install finish-install/reboot_in_progress note #Root password d-i passwd/root-password-crypted password \$1\$mF86/UHC\$WvcIcX2t6crBz2onWxyac. user-setup-udeb passwd/root-login boolean true user-setup-udeb passwd/make-user boolean false # SELinux configuration # Do not configure the X Window System # System timezone clock-setup clock-setup/utc boolean false tzsetup-udeb time/zone select America/New_York # Install OS instead of upgrade # Clear the Master Boot Record # Select individual packages and groups for install d-i pkgsel/include string openssh-server tasksel tasksel/first multiselect standard, desktop # Disable automatic updates d-i pkgsel/update-policy select none # Debian specific configuration # See http://www.debian.org/releases/stable/i386/apbs04.html.en & preseed documentation # By default the installer requires that repositories be authenticated # using a known gpg key. This setting can be used to disable that # authentication. Warning: Insecure, not recommended. d-i debian-installer/allow_unauthenticated string true # Some versions of the installer can report back on what software you have # installed, and what software you use. The default is not to report back, # but sending reports helps the project determine what software is most # popular and include it on CDs. popularity-contest popularity-contest/participate boolean false cobbler-2.4.1/koan/000077500000000000000000000000001227367477500141025ustar00rootroot00000000000000cobbler-2.4.1/koan/__init__.py000066400000000000000000000000001227367477500162010ustar00rootroot00000000000000cobbler-2.4.1/koan/app.py000077500000000000000000002271311227367477500152450ustar00rootroot00000000000000""" koan = kickstart over a network a tool for network provisioning of virtualization (xen,kvm/qemu,vmware) and network re-provisioning of existing Linux systems. used with 'cobbler'. see manpage for usage. Copyright 2006-2008 Red Hat, Inc and Others. Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. 
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import random import os import traceback import tempfile import shlex ANCIENT_PYTHON = 0 try: try: from optparse import OptionParser except: from opt_parse import OptionParser # importing this for backwards compat with 2.2 try: import subprocess as sub_process except: import sub_process except: # the main "replace-self" codepath of koan must support # Python 1.5. Other sections may use 2.3 features (nothing newer) # provided they are conditionally imported. This is to support # EL 2.1. -- mpd ANCIENT_PYTHON = 1 True = 1 False = 0 import exceptions import time import shutil import errno import re import sys import xmlrpclib import string import glob import socket import utils import time import configurator COBBLER_REQUIRED = 1.300 """ koan --virt [--profile=webserver|--system=name] --server=hostname koan --replace-self --profile=foo --server=hostname [--kexec] """ DISPLAY_PARAMS = [ "name", "distro","profile", "kickstart","ks_meta", "install_tree","kernel","initrd", "netboot_enabled", "kernel_options", "repos", "virt_ram", "virt_disk", "virt_disk_driver", "virt_type", "virt_path", "virt_auto_boot", "virt_pxe_boot", ] def main(): """ Command line stuff... """ try: utils.setupLogging("koan") except: # most likely running RHEL3, where we don't need virt logging anyway pass if ANCIENT_PYTHON: print "- command line usage on this version of python is unsupported" print "- usage via spacewalk APIs only. 
Python x>=2.3 required" return p = OptionParser() p.add_option("-k", "--kopts", dest="kopts_override", help="append additional kernel options") p.add_option("-l", "--list", dest="list_items", help="lists remote items (EX: profiles, systems, or images)") p.add_option("-v", "--virt", dest="is_virt", action="store_true", help="install new virtual guest") p.add_option("-u", "--update-files", dest="is_update_files", action="store_true", help="update templated files from cobbler config management") p.add_option("-c", "--update-config", dest="is_update_config", action="store_true", help="update system configuration from cobbler config management") p.add_option("", "--summary", dest="summary", action="store_true", help="print configuration run stats") p.add_option("-V", "--virt-name", dest="virt_name", help="use this name for the virtual guest") p.add_option("-r", "--replace-self", dest="is_replace", action="store_true", help="reinstall this host at next reboot") p.add_option("-D", "--display", dest="is_display", action="store_true", help="display the configuration stored in cobbler for the given object") p.add_option("-p", "--profile", dest="profile", help="use this cobbler profile") p.add_option("-y", "--system", dest="system", help="use this cobbler system") p.add_option("-i", "--image", dest="image", help="use this cobbler image") p.add_option("-s", "--server", dest="server", default=os.environ.get("COBBLER_SERVER",""), help="attach to this cobbler server") p.add_option("-S", "--static-interface", dest="static_interface", help="use static network configuration from this interface while installing") p.add_option("-t", "--port", dest="port", help="cobbler port (default 80)") p.add_option("-w", "--vm-poll", dest="should_poll", action="store_true", help="for xen/qemu/KVM, poll & restart the VM after the install is done") p.add_option("-P", "--virt-path", dest="virt_path", help="override virt install location") p.add_option("", "--force-path", dest="force_path", action="store_true", help="Force overwrite of virt install location") p.add_option("-T", "--virt-type", dest="virt_type", help="override virt install type") p.add_option("-B", "--virt-bridge", dest="virt_bridge", help="override virt bridge") p.add_option("-n", "--nogfx", action="store_true", dest="no_gfx", help="disable Xen graphics (xenpv,xenfv)") p.add_option("", "--virt-auto-boot", action="store_true", dest="virt_auto_boot", help="set VM for autoboot") p.add_option("", "--virt-pxe-boot", action="store_true", dest="virt_pxe_boot", help="PXE boot for installation override") p.add_option("", "--add-reinstall-entry", dest="add_reinstall_entry", action="store_true", help="when used with --replace-self, just add entry to grub, do not make it the default") p.add_option("-C", "--livecd", dest="live_cd", action="store_true", help="used by the custom livecd only, not for humans") p.add_option("", "--kexec", dest="use_kexec", action="store_true", help="Instead of writing a new bootloader config when using --replace-self, just kexec the new kernel and initrd") p.add_option("", "--no-copy-default", dest="no_copy_default", action="store_true", help="Do not copy the kernel args from the default kernel entry when using --replace-self") p.add_option("", "--embed", dest="embed_kickstart", action="store_true", help="When used with --replace-self, embed the kickstart in the initrd to overcome potential DHCP timeout issues. 
(seldom needed)") p.add_option("", "--qemu-disk-type", dest="qemu_disk_type", help="when used with --virt_type=qemu, add select of disk driver types: ide,scsi,virtio") p.add_option("", "--qemu-net-type", dest="qemu_net_type", help="when used with --virt_type=qemu, select type of network device to use: e1000, ne2k_pci, pcnet, rtl8139, virtio") p.add_option("", "--qemu-machine-type", dest="qemu_machine_type", help="when used with --virt_type=qemu, select type of machine type to emulate: pc, pc-1.0, pc-0.15") p.add_option("", "--wait", dest="wait", type='int', default=0, # default to 0 for koan backwards compatibility help="pass the --wait= argument to virt-install") p.add_option("", "--noreboot", dest="noreboot", default=False, # default to False for koan backwards compatibility action="store_true", help="pass the --noreboot argument to virt-install") p.add_option("", "--import", dest="osimport", default=False, # default to False for koan backwards compatibility action="store_true", help="pass the --import argument to virt-install") (options, args) = p.parse_args() try: k = Koan() k.list_items = options.list_items k.server = options.server k.is_virt = options.is_virt k.is_update_files = options.is_update_files k.is_update_config = options.is_update_config k.summary = options.summary k.is_replace = options.is_replace k.is_display = options.is_display k.profile = options.profile k.system = options.system k.image = options.image k.live_cd = options.live_cd k.virt_path = options.virt_path k.force_path = options.force_path k.virt_type = options.virt_type k.virt_bridge = options.virt_bridge k.no_gfx = options.no_gfx k.add_reinstall_entry = options.add_reinstall_entry k.kopts_override = options.kopts_override k.static_interface = options.static_interface k.use_kexec = options.use_kexec k.no_copy_default = options.no_copy_default k.should_poll = options.should_poll k.embed_kickstart = options.embed_kickstart k.virt_auto_boot = options.virt_auto_boot k.virt_pxe_boot = options.virt_pxe_boot k.qemu_disk_type = options.qemu_disk_type k.qemu_net_type = options.qemu_net_type k.qemu_machine_type = options.qemu_machine_type k.virtinstall_wait = options.wait k.virtinstall_noreboot= options.noreboot k.virtinstall_osimport= options.osimport if options.virt_name is not None: k.virt_name = options.virt_name if options.port is not None: k.port = options.port k.run() except Exception, e: (xa, xb, tb) = sys.exc_info() try: getattr(e,"from_koan") print str(e)[1:-1] # nice exception, no traceback needed except: print xa print xb print string.join(traceback.format_list(traceback.extract_tb(tb))) return 1 return 0 #======================================================= class InfoException(exceptions.Exception): """ Custom exception for tracking of fatal errors. """ def __init__(self,value,**args): self.value = value % args self.from_koan = 1 def __str__(self): return repr(self.value) #======================================================= class Koan: def __init__(self): """ Constructor. Arguments will be filled in by optparse... 
""" self.server = None self.system = None self.profile = None self.list_profiles = None self.list_systems = None self.is_virt = None self.is_update_files = None self.is_update_config = None self.summary = None self.is_replace = None self.port = None self.static_interface = None self.virt_name = None self.virt_type = None self.virt_path = None self.force_path = None self.qemu_disk_type = None self.qemu_net_type = None self.qemu_machine_type = None self.virt_auto_boot = None self.virt_pxe_boot = None self.virtinstall_wait = None self.virtinstall_noreboot = None self.virtinstall_osimport = None # This option adds the --copy-default argument to /sbin/grubby # which uses the default boot entry in the grub.conf # as template for the new entry being added to that file. # look at /sbin/grubby --help for more info self.no_copy_default = None #--------------------------------------------------- def run(self): """ koan's main function... """ # we can get the info we need from either the cobbler server # or a kickstart file if self.server is None: raise InfoException, "no server specified" # check to see that exclusive arguments weren't used together found = 0 for x in (self.is_virt, self.is_replace, self.is_update_files, self.is_display, self.list_items, self.is_update_config): if x: found = found+1 if found != 1: raise InfoException, "choose: --virt, --replace-self, --update-files, --list=what, or --display" # This set of options are only valid with --server if not self.server or self.server == "": if self.list_items or self.profile or self.system or self.port: raise InfoException, "--server is required" self.xmlrpc_server = utils.connect_to_server(server=self.server, port=self.port) if self.list_items: self.list(self.list_items) return if not os.getuid() == 0: if self.is_virt: print "warning: running as non root" else: print "this operation requires root access" return 3 # if both --profile and --system were ommitted, autodiscover if self.is_virt: if (self.profile is None and self.system is None and self.image is None): raise InfoException, "must specify --profile, --system, or --image" else: if (self.profile is None and self.system is None and self.image is None): self.system = self.autodetect_system(allow_interactive=self.live_cd) if self.system is None: while self.profile is None: self.profile = self.ask_profile() # if --virt-type was specified and invalid, then fail if self.virt_type is not None: self.virt_type = self.virt_type.lower() if self.virt_type not in [ "qemu", "xenpv", "xenfv", "xen", "vmware", "vmwarew", "auto", "kvm" ]: if self.virt_type == "xen": self.virt_type = "xenpv" raise InfoException, "--virt-type should be qemu, xenpv, xenfv, vmware, vmwarew, kvm, or auto" # if --qemu-disk-type was called without --virt-type=qemu, then fail if (self.qemu_disk_type is not None): self.qemu_disk_type = self.qemu_disk_type.lower() if self.virt_type not in [ "qemu", "auto", "kvm" ]: raise InfoException, "--qemu-disk-type must use with --virt-type=qemu" # if --qemu-net-type was called without --virt-type=qemu, then fail if (self.qemu_net_type is not None): self.qemu_net_type = self.qemu_net_type.lower() if self.virt_type not in [ "qemu", "auto", "kvm" ]: raise InfoException, "--qemu-net-type must use with --virt-type=qemu" # if --qemu-machine-type was called without --virt-type=qemu, then fail if (self.qemu_machine_type is not None): self.qemu_machine_type = self.qemu_machine_type.lower() if self.virt_type not in [ "qemu", "auto", "kvm" ]: raise InfoException, "--qemu-machine-type must use with 
--virt-type=qemu" # if --static-interface and --profile was called together, then fail if self.static_interface is not None and self.profile is not None: raise InfoException, "--static-interface option is incompatible with --profile option use --system instead" # perform one of three key operations if self.is_virt: self.virt() elif self.is_replace: if self.use_kexec: self.kexec_replace() else: self.replace() elif self.is_update_files: self.update_files() elif self.is_update_config: self.update_config() else: self.display() # -------------------------------------------------- def ask_profile(self): """ Used by the live CD mode, if the system can not be auto-discovered, show a list of available profiles and ask the user what they want to install. """ # FIXME: use a TUI library to make this more presentable. try: available_profiles = self.xmlrpc_server.get_profiles() except: traceback.print_exc() self.connect_fail() print "\n- which profile to install?\n" for x in available_profiles: print "%s" % x["name"] sys.stdout.write("\n?>") data = sys.stdin.readline().strip() for x in available_profiles: print "comp (%s,%s)" % (x["name"],data) if x["name"] == data: return data return None #--------------------------------------------------- def autodetect_system(self, allow_interactive=False): """ Determine the name of the cobbler system record that matches this MAC address. """ systems = self.get_data("systems") my_netinfo = utils.get_network_info() my_interfaces = my_netinfo.keys() mac_criteria = [] ip_criteria = [] for my_interface in my_interfaces: mac_criteria.append(my_netinfo[my_interface]["mac_address"].upper()) ip_criteria.append(my_netinfo[my_interface]["ip_address"]) detected_systems = [] systems = self.get_data("systems") for system in systems: obj_name = system["name"] for (obj_iname, obj_interface) in system['interfaces'].iteritems(): mac = obj_interface["mac_address"].upper() ip = obj_interface["ip_address"].upper() for my_mac in mac_criteria: if mac == my_mac: detected_systems.append(obj_name) for my_ip in ip_criteria: if ip == my_ip: detected_systems.append(obj_name) detected_systems = utils.uniqify(detected_systems) if len(detected_systems) > 1: raise InfoException, "Error: Multiple systems matched" elif len(detected_systems) == 0: if not allow_interactive: mac_criteria = utils.uniqify(mac_criteria, purge="?") ip_criteria = utils.uniqify(ip_criteria, purge="?") raise InfoException, "Error: Could not find a matching system with MACs: %s or IPs: %s" % (",".join(mac_criteria), ",".join(ip_criteria)) else: return None elif len(detected_systems) == 1: print "- Auto detected: %s" % detected_systems[0] return detected_systems[0] #--------------------------------------------------- def safe_load(self,hashv,primary_key,alternate_key=None,default=None): if hashv.has_key(primary_key): return hashv[primary_key] elif alternate_key is not None and hashv.has_key(alternate_key): return hashv[alternate_key] else: return default #--------------------------------------------------- def net_install(self,after_download): """ Actually kicks off downloads and auto-ks or virt installs """ # initialise the profile, from the server if any if self.profile: profile_data = self.get_data("profile",self.profile) elif self.system: profile_data = self.get_data("system",self.system) elif self.image: profile_data = self.get_data("image",self.image) else: # shouldn't end up here, right? 
profile_data = {} if profile_data.get("kickstart","") != "": # fix URLs if profile_data["kickstart"][0] == "/" or profile_data["template_remote_kickstarts"]: if not self.system: profile_data["kickstart"] = "http://%s/cblr/svc/op/ks/profile/%s" % (profile_data['http_server'], profile_data['name']) else: profile_data["kickstart"] = "http://%s/cblr/svc/op/ks/system/%s" % (profile_data['http_server'], profile_data['name']) # If breed is ubuntu/debian we need to source the install tree differently # as preseeds are used instead of kickstarts. if profile_data["breed"] in [ "ubuntu", "debian" ]: self.get_install_tree_for_debian_ubuntu(profile_data) else: # find_kickstart source tree in the kickstart file self.get_install_tree_from_kickstart(profile_data) # if we found an install_tree, and we don't have a kernel or initrd # use the ones in the install_tree if self.safe_load(profile_data,"install_tree"): if not self.safe_load(profile_data,"kernel"): profile_data["kernel"] = profile_data["install_tree"] + "/images/pxeboot/vmlinuz" if not self.safe_load(profile_data,"initrd"): profile_data["initrd"] = profile_data["install_tree"] + "/images/pxeboot/initrd.img" # find the correct file download location if not self.is_virt: if os.path.exists("/boot/efi/EFI/redhat/elilo.conf"): # elilo itanium support, may actually still work download = "/boot/efi/EFI/redhat" else: # whew, we have a sane bootloader download = "/boot" else: # ensure we have a good virt type choice and know where # to download the kernel/initrd if self.virt_type is None: self.virt_type = self.safe_load(profile_data,'virt_type',default=None) if self.virt_type is None or self.virt_type == "": self.virt_type = "auto" # if virt type is auto, reset it to a value we can actually use if self.virt_type == "auto": if profile_data.get("xml_file","") != "": raise InfoException("xmlfile based installations are not supported") elif profile_data.has_key("file"): print "- ISO or Image based installation, always uses --virt-type=qemu" self.virt_type = "qemu" else: # FIXME: auto never selects vmware, maybe it should if we find it? if not ANCIENT_PYTHON: cmd = sub_process.Popen("/bin/uname -r", stdout=sub_process.PIPE, shell=True) uname_str = cmd.communicate()[0] if uname_str.find("xen") != -1: self.virt_type = "xenpv" elif os.path.exists("/usr/bin/qemu-img"): self.virt_type = "qemu" else: # assume Xen, we'll check to see if virt-type is really usable later. raise InfoException, "Not running a Xen kernel and qemu is not installed" print "- no virt-type specified, auto-selecting %s" % self.virt_type # now that we've figured out our virt-type, let's see if it is really usable # rather than showing obscure error messages from Xen to the user :) if self.virt_type in [ "xenpv", "xenfv" ]: cmd = sub_process.Popen("uname -r", stdout=sub_process.PIPE, shell=True) uname_str = cmd.communicate()[0] # correct kernel on dom0? if uname_str < "2.6.37" and uname_str.find("xen") == -1: raise InfoException("kernel >= 2.6.37 or kernel-xen needs to be in use") # xend installed? if not os.path.exists("/usr/sbin/xend"): raise InfoException("xen package needs to be installed") # xend running? rc = sub_process.call("/usr/sbin/xend status", stderr=None, stdout=None, shell=True) if rc != 0: raise InfoException("xend needs to be started") # for qemu if self.virt_type in [ "qemu", "kvm" ]: # qemu package installed? if not os.path.exists("/usr/bin/qemu-img"): raise InfoException("qemu package needs to be installed") # is libvirt new enough? 
# Note: in some newer distros (like Fedora 19) the python-virtinst package has been # subsumed into virt-install. If we don't have one check to see if we have the other. rc, version_str = utils.subprocess_get_response(shlex.split('rpm -q virt-install'), True) if rc != 0: rc, version_str = utils.subprocess_get_response(shlex.split('rpm -q python-virtinst'), True) if rc != 0 or version_str.find("virtinst-0.1") != -1 or version_str.find("virtinst-0.0") != -1: raise InfoException("need python-virtinst >= 0.2 or virt-install package to do installs for qemu/kvm (depending on your OS)") # for vmware if self.virt_type == "vmware" or self.virt_type == "vmwarew": # FIXME: if any vmware specific checks are required (for deps) do them here. pass if self.virt_type == "virt-image": if not os.path.exists("/usr/bin/virt-image"): raise InfoException("virt-image not present, downlevel virt-install package?") # for both virt types if os.path.exists("/etc/rc.d/init.d/libvirtd"): rc = sub_process.call("/sbin/service libvirtd status", stdout=None, shell=True) if rc != 0: # libvirt running? raise InfoException("libvirtd needs to be running") if self.virt_type in [ "xenpv" ]: # we need to fetch the kernel/initrd to do this download = "/var/lib/xen" elif self.virt_type in [ "xenfv", "vmware", "vmwarew" ] : # we are downloading sufficient metadata to initiate PXE, no D/L needed download = None else: # qemu # fullvirt, can use set_location in virtinst library, no D/L needed yet download = None # download required files if not self.is_display and download is not None: self.get_distro_files(profile_data, download) # perform specified action after_download(self, profile_data) #--------------------------------------------------- def get_install_tree_from_kickstart(self,profile_data): """ Scan the kickstart configuration for either a "url" or "nfs" command take the install_tree url from that """ if profile_data["breed"] == "suse": kopts = profile_data["kernel_options"] options = kopts.split(" ") profile_data["install_tree"] = "" for opt in options: if opt.startswith("install="): profile_data["install_tree"] = opt.replace("install=","") break else: try: if profile_data["kickstart"][:4] == "http": if not self.system: url_fmt = "http://%s/cblr/svc/op/ks/profile/%s" else: url_fmt = "http://%s/cblr/svc/op/ks/system/%s" url = url_fmt % (self.server, profile_data['name']) else: url = profile_data["kickstart"] raw = utils.urlread(url) lines = raw.splitlines() method_re = re.compile('(?P\s*url\s.*)|(?P\s*nfs\s.*)') url_parser = OptionParser() url_parser.add_option("--url", dest="url") url_parser.add_option("--proxy", dest="proxy") nfs_parser = OptionParser() nfs_parser.add_option("--dir", dest="dir") nfs_parser.add_option("--server", dest="server") for line in lines: match = method_re.match(line) if match: cmd = match.group("urlcmd") if cmd: (options,args) = url_parser.parse_args(shlex.split(cmd)[1:]) profile_data["install_tree"] = options.url break cmd = match.group("nfscmd") if cmd: (options,args) = nfs_parser.parse_args(shlex.split(cmd)[1:]) profile_data["install_tree"] = "nfs://%s:%s" % (options.server,options.dir) break if self.safe_load(profile_data,"install_tree"): print "install_tree:", profile_data["install_tree"] else: print "warning: kickstart found but no install_tree found" except: # unstable to download the kickstart, however this might not # be an error. For instance, xen FV installations of non # kickstart OS's... 
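# Illustrative sketch (not part of koan): how the "url"/"nfs" commands of a
# kickstart can be turned into an install tree location, as the method above
# does. Note that the pattern must define named groups called urlcmd and
# nfscmd for the match.group("urlcmd") / match.group("nfscmd") lookups to
# work. Function and variable names below are illustrative only.
import re
import shlex
from optparse import OptionParser

METHOD_RE = re.compile(r'(?P<urlcmd>\s*url\s.*)|(?P<nfscmd>\s*nfs\s.*)')

def parse_install_source(kickstart_text):
    # Returns "http://..." / "ftp://..." from a url command, an
    # "nfs://host:/path" string from an nfs command, or None.
    url_parser = OptionParser()
    url_parser.add_option("--url", dest="url")
    url_parser.add_option("--proxy", dest="proxy")
    nfs_parser = OptionParser()
    nfs_parser.add_option("--dir", dest="dir")
    nfs_parser.add_option("--server", dest="server")
    for line in kickstart_text.splitlines():
        match = METHOD_RE.match(line)
        if not match:
            continue
        if match.group("urlcmd"):
            (opts, _args) = url_parser.parse_args(shlex.split(match.group("urlcmd"))[1:])
            return opts.url
        if match.group("nfscmd"):
            (opts, _args) = nfs_parser.parse_args(shlex.split(match.group("nfscmd"))[1:])
            return "nfs://%s:%s" % (opts.server, opts.dir)
    return None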
pass #--------------------------------------------------- def get_install_tree_for_debian_ubuntu(self, profile_data): """ Split ks_meta to obtain the tree path. Generate the install_tree using the http_server and the tree obtained from splitting ks_meta """ try: tree = profile_data["ks_meta"].split() # Ensure we only take the tree in case ks_meta args are passed # First check for tree= in ks_meta arguments meta_re=re.compile('tree=') tree_found='' for entry in tree: if meta_re.match(entry): tree_found=entry.split("=")[-1] break if tree_found=='': # assume tree information as first argument tree = tree.split()[0] else: tree=tree_found tree_re = re.compile ('(http|ftp|nfs):') # Next check for installation tree on remote server if tree_re.match(tree): tree = tree.replace("@@http_server@@", profile_data["http_server"]) profile_data["install_tree"] = tree else: # Now take the first parameter as the local path profile_data["install_tree"] = "http://" + profile_data["http_server"] + tree if self.safe_load(profile_data,"install_tree"): print "install_tree:", profile_data["install_tree"] else: print "warning: kickstart found but no install_tree found" except: pass #--------------------------------------------------- def list(self,what): if what not in [ "images", "profiles", "systems", "distros", "repos" ]: raise InfoException("koan does not know how to list that") data = self.get_data(what) for x in data: if x.has_key("name"): print x["name"] return True #--------------------------------------------------- def display(self): def after_download(self, profile_data): for x in DISPLAY_PARAMS: if profile_data.has_key(x): value = profile_data[x] if x == 'kernel_options': value = self.calc_kernel_args(profile_data) print "%20s : %s" % (x, value) return self.net_install(after_download) #--------------------------------------------------- def virt(self): """ Handle virt provisioning. """ def after_download(self, profile_data): self.virt_net_install(profile_data) return self.net_install(after_download) #--------------------------------------------------- def update_files(self): """ Contact the cobbler server and wget any config-management files in cobbler that we are providing to nodes. Basically this turns cobbler into a lighweight configuration management system for folks who are not needing a more complex CMS. Read more at: https://github.com/cobbler/cobbler/wiki/Built-in-configuration-management """ # FIXME: make this a utils.py function if self.profile: profile_data = self.get_data("profile",self.profile) elif self.system: profile_data = self.get_data("system",self.system) elif self.image: profile_data = self.get_data("image",self.image) else: # shouldn't end up here, right? profile_data = {} # BOOKMARK template_files = profile_data["template_files"] template_files = utils.input_string_or_hash(template_files) template_keys = template_files.keys() print "- template map: %s" % template_files print "- processing for files to download..." 
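# Illustrative sketch (not part of koan): how the download URL for one
# templated file is built in the loop that follows. Underscores are doubled
# before slashes become underscores, so the server can unescape the path
# unambiguously. The hostname in the usage note is made up.
def template_fetch_url(http_server, object_type, object_name, dest_path):
    # object_type is "system" when the object has interfaces, otherwise
    # "profile", mirroring the has_key("interfaces") test used below.
    escaped = dest_path.replace("_", "__").replace("/", "_")
    return "http://%s/cblr/svc/op/template/%s/%s/path/%s" % (
        http_server, object_type, object_name, escaped)

# template_fetch_url("cobbler.example.org", "system", "node1", "/etc/motd")
# -> "http://cobbler.example.org/cblr/svc/op/template/system/node1/path/_etc_motd"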
for src in template_keys: dest = template_files[src] save_as = dest dest = dest.replace("_","__") dest = dest.replace("/","_") if not save_as.startswith("/"): # this is a file in the template system that is not to be downloaded continue print "- file: %s" % save_as pattern = "http://%s/cblr/svc/op/template/%s/%s/path/%s" if profile_data.has_key("interfaces"): url = pattern % (profile_data["http_server"],"system",profile_data["name"],dest) else: url = pattern % (profile_data["http_server"],"profile",profile_data["name"],dest) if not os.path.exists(os.path.dirname(save_as)): os.makedirs(os.path.dirname(save_as)) cmd = [ "/usr/bin/wget", url, "--output-document", save_as ] utils.subprocess_call(cmd) return True #--------------------------------------------------- def update_config(self): """ Contact the cobbler server and update the system configuration using cobbler's built-in configuration management. Configs are based on a combination of mgmt-classes assigned to the system, profile, and distro. """ # FIXME get hostname from utils? hostname = socket.gethostname() server = self.xmlrpc_server try: config = server.get_config_data(hostname) except: traceback.print_exc() self.connect_fail() # FIXME should we version this, maybe append a timestamp? node_config_data = "/var/lib/koan/config/localconfig.json" f = open(node_config_data, 'w') f.write(config) f.close() print "- Starting configuration run for %s" % (hostname) runtime_start = time.time() configure = configurator.KoanConfigure(config) stats = configure.run() runtime_end = time.time() if self.summary: pstats = (stats["pkg"]['nsync'],stats["pkg"]['osync'],stats["pkg"]['fail'],stats["pkg"]['runtime']) dstats = (stats["dir"]['nsync'],stats["dir"]['osync'],stats["dir"]['fail'],stats["dir"]['runtime']) fstats = (stats["files"]['nsync'],stats["files"]['osync'],stats["files"]['fail'],stats["files"]['runtime']) nsync = pstats[0] + dstats[0] + fstats[0] osync = pstats[1] + dstats[1] + fstats[1] fail = pstats[2] + dstats[2] + fstats[2] total_resources = (nsync + osync + fail) total_runtime = (runtime_end - runtime_start) print print "\tResource Report" print "\t-------------------------" print "\t In Sync: %d" % nsync print "\tOut of Sync: %d" % osync print "\t Fail: %d" % fail print "\t-------------------------" print "\tTotal Resources: %d" % total_resources print "\t Total Runtime: %.02f" % total_runtime for status in ["repos_status", "ldap_status", "monit_status"]: if status in stats: print print "\t%s" % status print "\t-------------------------" print "\t%s" % stats[status] print "\t-------------------------" print print "\tResource |In Sync|OO Sync|Failed|Runtime" print "\t----------------------------------------" print "\t Packages: %d %d %d %.02f" % pstats print "\t Directories: %d %d %d %.02f" % dstats print "\t Files: %d %d %d %.02f" % fstats print #--------------------------------------------------- def kexec_replace(self): """ Prepare to morph existing system by downloading new kernel and initrd and preparing kexec to execute them. Allow caller to do final 'kexec -e' invocation; this allows modules such as network drivers to be unloaded (for cases where an immediate kexec would leave the driver in an invalid state. 
""" def after_download(self, profile_data): k_args = self.calc_kernel_args(profile_data) kickstart = self.safe_load(profile_data,'kickstart') arch = self.safe_load(profile_data,'arch') (make, version) = utils.os_release() if (make == "centos" and version < 7) or (make == "redhat" and version < 7) or (make == "fedora" and version < 10): # embed the initrd in the kickstart file because of libdhcp and/or pump # needing the help due to some DHCP timeout potential in some certain # network configs. if self.embed_kickstart: self.build_initrd( self.safe_load(profile_data,'initrd_local'), kickstart, profile_data ) # Validate kernel argument length (limit depends on architecture -- # see asm-*/setup.h). For example: # asm-i386/setup.h:#define COMMAND_LINE_SIZE 256 # asm-ia64/setup.h:#define COMMAND_LINE_SIZE 512 # asm-powerpc/setup.h:#define COMMAND_LINE_SIZE 512 # asm-s390/setup.h:#define COMMAND_LINE_SIZE 896 # asm-x86_64/setup.h:#define COMMAND_LINE_SIZE 256 # arch/x86/include/asm/setup.h:#define COMMAND_LINE_SIZE 2048 if arch.startswith("ppc") or arch.startswith("ia64"): if len(k_args) > 511: raise InfoException, "Kernel options are too long, 512 chars exceeded: %s" % k_args elif arch.startswith("s390"): if len(k_args) > 895: raise InfoException, "Kernel options are too long, 896 chars exceeded: %s" % k_args elif len(k_args) > 2048: raise InfoException, "Kernel options are too long, 2048 chars exceeded: %s" % k_args utils.subprocess_call([ 'kexec', '--load', '--initrd=%s' % (self.safe_load(profile_data,'initrd_local'),), '--command-line=%s' % (k_args,), self.safe_load(profile_data,'kernel_local') ]) print "Kernel loaded; run 'kexec -e' to execute" return self.net_install(after_download) #--------------------------------------------------- def get_boot_loader_info(self): if ANCIENT_PYTHON: # FIXME: implement this to work w/o subprocess if os.path.exists("/etc/grub.conf"): return (0, "grub") else: return (0, "lilo") cmd = [ "/sbin/grubby", "--bootloader-probe" ] probe_process = sub_process.Popen(cmd, stdout=sub_process.PIPE) which_loader = probe_process.communicate()[0] return probe_process.returncode, which_loader def replace(self): """ Handle morphing an existing system through downloading new kernel, new initrd, and installing a kickstart in the initrd, then manipulating grub. """ try: shutil.rmtree("/var/spool/koan") except OSError, (err, msg): if err != errno.ENOENT: raise try: os.makedirs("/var/spool/koan") except OSError, (err, msg): if err != errno.EEXIST: raise def after_download(self, profile_data): use_grubby = False use_grub2 = False (make, version) = utils.os_release() if make in ['ubuntu', 'debian']: if not os.path.exists("/usr/sbin/update-grub"): raise InfoException, "grub2 is not installed" use_grub2 = True else: if not os.path.exists("/sbin/grubby"): raise InfoException, "grubby is not installed" use_grubby = True k_args = self.calc_kernel_args(profile_data,replace_self=1) kickstart = self.safe_load(profile_data,'kickstart') if (make == "centos" and version < 7) or (make == "redhat" and version < 7) or (make == "fedora" and version < 10): # embed the initrd in the kickstart file because of libdhcp and/or pump # needing the help due to some DHCP timeout potential in some certain # network configs. 
if self.embed_kickstart: self.build_initrd( self.safe_load(profile_data,'initrd_local'), kickstart, profile_data ) if not ANCIENT_PYTHON: arch_cmd = sub_process.Popen("/bin/uname -m", stdout=sub_process.PIPE, shell=True) arch = arch_cmd.communicate()[0] else: arch = "i386" # Validate kernel argument length (limit depends on architecture -- # see asm-*/setup.h). For example: # asm-i386/setup.h:#define COMMAND_LINE_SIZE 256 # asm-ia64/setup.h:#define COMMAND_LINE_SIZE 512 # asm-powerpc/setup.h:#define COMMAND_LINE_SIZE 512 # asm-s390/setup.h:#define COMMAND_LINE_SIZE 896 # asm-x86_64/setup.h:#define COMMAND_LINE_SIZE 256 # arch/x86/include/asm/setup.h:#define COMMAND_LINE_SIZE 2048 if not ANCIENT_PYTHON: if arch.startswith("ppc") or arch.startswith("ia64"): if len(k_args) > 511: raise InfoException, "Kernel options are too long, 512 chars exceeded: %s" % k_args elif arch.startswith("s390"): if len(k_args) > 895: raise InfoException, "Kernel options are too long, 896 chars exceeded: %s" % k_args elif len(k_args) > 2048: raise InfoException, "Kernel options are too long, 2048 chars exceeded: %s" % k_args if use_grubby: cmd = [ "/sbin/grubby", "--add-kernel", self.safe_load(profile_data,'kernel_local'), "--initrd", self.safe_load(profile_data,'initrd_local'), "--args", "\"%s\"" % k_args ] if not self.no_copy_default: cmd.append("--copy-default") boot_probe_ret_code, probe_output = self.get_boot_loader_info() if boot_probe_ret_code == 0 and string.find(probe_output, "lilo") >= 0: cmd.append("--lilo") if self.add_reinstall_entry: cmd.append("--title=Reinstall") else: cmd.append("--make-default") cmd.append("--title=kick%s" % int(time.time())) if self.live_cd: cmd.append("--bad-image-okay") cmd.append("--boot-filesystem=/") cmd.append("--config-file=/tmp/boot/boot/grub/grub.conf") # Are we running on ppc? if not ANCIENT_PYTHON: if arch.startswith("ppc"): cmd.append("--yaboot") elif arch.startswith("s390"): cmd.append("--zipl") utils.subprocess_call(cmd) # Any post-grubby processing required (e.g. ybin, zipl, lilo)? 
if not ANCIENT_PYTHON and arch.startswith("ppc"): # FIXME - CHRP hardware uses a 'PPC PReP Boot' partition and doesn't require running ybin print "- applying ybin changes" cmd = [ "/sbin/ybin" ] utils.subprocess_call(cmd) elif not ANCIENT_PYTHON and arch.startswith("s390"): print "- applying zipl changes" cmd = [ "/sbin/zipl" ] utils.subprocess_call(cmd) else: # if grubby --bootloader-probe returns lilo, # apply lilo changes if boot_probe_ret_code == 0 and string.find(probe_output, "lilo") != -1: print "- applying lilo changes" cmd = [ "/sbin/lilo" ] utils.subprocess_call(cmd) elif use_grub2: # Use grub2 for --replace-self kernel_local = self.safe_load(profile_data,'kernel_local') initrd_local = self.safe_load(profile_data,'initrd_local') # Set name for grub2 menuentry if self.add_reinstall_entry: name = "Reinstall: %s" % profile_data['name'] else: name = "%s" % profile_data['name'] # Set paths for Ubuntu/Debian # TODO: Add support for other distros when they ship grub2 if make in ['ubuntu', 'debian']: grub_file = "/etc/grub.d/42_koan" grub_default_file = "/etc/default/grub" cmd = ["update-grub"] default_cmd = ['sed', '-i', 's/^GRUB_DEFAULT\=.*$/GRUB_DEFAULT="%s"/g' % name, grub_default_file] # Create grub2 menuentry grub_entry = """ cat < initrd.tmp || xz -dc %s > initrd.tmp if mount -o loop -t ext2 initrd.tmp initrd >&/dev/null ; then cp ks.cfg initrd/ ln initrd/ks.cfg initrd/tmp/ks.cfg umount initrd gzip -c initrd.tmp > initrd_final else echo "mount failed; treating initrd as a cpio archive..." cd initrd cpio -id <../initrd.tmp cp /var/spool/koan/ks.cfg . ln ks.cfg tmp/ks.cfg find . | cpio -o -H newc | gzip -9 > ../initrd_final echo "...done" fi """ % initrd #--------------------------------------------------- def build_initrd(self,initrd,kickstart,data): """ Crack open an initrd and install the kickstart file. """ # save kickstart to file ksdata = utils.urlread(kickstart) fd = open("/var/spool/koan/ks.cfg","w+") if ksdata is not None: fd.write(ksdata) fd.close() # handle insertion of kickstart based on type of initrd fd = open("/var/spool/koan/insert.sh","w+") fd.write(self.get_insert_script(initrd)) fd.close() utils.subprocess_call([ "/bin/bash", "/var/spool/koan/insert.sh" ]) shutil.copyfile("/var/spool/koan/initrd_final", initrd) #--------------------------------------------------- def connect_fail(self): raise InfoException, "Could not communicate with %s:%s" % (self.server, self.port) #--------------------------------------------------- def get_data(self,what,name=None): try: if what[-1] == "s": data = getattr(self.xmlrpc_server, "get_%s" % what)() else: data = getattr(self.xmlrpc_server, "get_%s_for_koan" % what)(name) except: traceback.print_exc() self.connect_fail() if data == {}: raise InfoException("No entry/entries found") return data #--------------------------------------------------- def get_ips(self,strdata): """ Return a list of IP address strings found in argument. warning: not IPv6 friendly """ return re.findall(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}',strdata) #--------------------------------------------------- def get_macs(self,strdata): """ Return a list of MAC address strings found in argument. """ return re.findall(r'[A-F0-9]{2}:[A-F0-9]{2}:[A-F0-9]{2}:[A-F0-9]{2}:[A-F:0-9]{2}:[A-F:0-9]{2}', strdata.upper()) #--------------------------------------------------- def is_ip(self,strdata): """ Is strdata an IP? 
warning: not IPv6 friendly """ return self.get_ips(strdata) and True or False #--------------------------------------------------- def is_mac(self,strdata): """ Return whether the argument is a mac address. """ return self.get_macs(strdata) and True or False #--------------------------------------------------- def get_distro_files(self,profile_data, download_root): """ Using distro data (fetched from bootconf tree), determine what kernel and initrd to download, and save them locally. """ os.chdir(download_root) distro = self.safe_load(profile_data,'distro') kernel = self.safe_load(profile_data,'kernel') initrd = self.safe_load(profile_data,'initrd') kernel_short = os.path.basename(kernel) initrd_short = os.path.basename(initrd) kernel_save = "%s/%s_koan" % (download_root, kernel_short) initrd_save = "%s/%s_koan" % (download_root, initrd_short) if self.server: if kernel[0] == "/": kernel = "http://%s/cobbler/images/%s/%s" % (profile_data["http_server"], distro, kernel_short) if initrd[0] == "/": initrd = "http://%s/cobbler/images/%s/%s" % (profile_data["http_server"], distro, initrd_short) try: print "downloading initrd %s to %s" % (initrd_short, initrd_save) print "url=%s" % initrd utils.urlgrab(initrd,initrd_save) print "downloading kernel %s to %s" % (kernel_short, kernel_save) print "url=%s" % kernel utils.urlgrab(kernel,kernel_save) except: traceback.print_exc() raise InfoException, "error downloading files" profile_data['kernel_local'] = kernel_save profile_data['initrd_local'] = initrd_save #--------------------------------------------------- def calc_kernel_args(self, pd, replace_self=0): kickstart = self.safe_load(pd,'kickstart') options = self.safe_load(pd,'kernel_options',default='') breed = self.safe_load(pd,'breed') kextra = "" if kickstart is not None and kickstart != "": if breed is not None and breed == "suse": kextra = "autoyast=" + kickstart elif breed is not None and breed == "debian" or breed =="ubuntu": kextra = "auto-install/enable=true priority=critical url=" + kickstart else: kextra = "ks=" + kickstart if options !="": kextra = kextra + " " + options # parser issues? lang needs a trailing = and somehow doesn't have it. 
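# Illustrative sketch (not part of koan): the override merge performed below.
# Cobbler-supplied kernel options are parsed into a hash, any --kopts
# overrides are layered on top, and the result is flattened back to a string.
# utils.input_string_or_hash / utils.hash_to_string do the real work; this
# stand-in only handles simple space-separated key=value tokens.
def merge_kernel_options(base_options, overrides):
    def to_hash(optstring):
        result = {}
        for token in optstring.split():
            if "=" in token:
                key, value = token.split("=", 1)
            else:
                key, value = token, None
            result[key] = value
        return result

    merged = to_hash(base_options)
    merged.update(to_hash(overrides))
    parts = []
    for key, value in merged.items():
        parts.append(key if value is None else "%s=%s" % (key, value))
    return " ".join(parts)

# merge_kernel_options("ks=http://srv/ks.cfg ksdevice=eth0", "ksdevice=link")
# keeps ks=... and replaces ksdevice's value with "link"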
# convert the from-cobbler options back to a hash # so that we can override it in a way that works as intended hashv = utils.input_string_or_hash(kextra) if self.static_interface is not None and (breed == "redhat" or breed == "suse" or breed == "debian" or breed == "ubuntu"): interface_name = self.static_interface interfaces = self.safe_load(pd, "interfaces") if interface_name.startswith("eth"): alt_interface_name = interface_name.replace("eth", "intf") interface_data = self.safe_load(interfaces, interface_name, alt_interface_name) else: interface_data = self.safe_load(interfaces, interface_name) ip = self.safe_load(interface_data, "ip_address") netmask = self.safe_load(interface_data, "netmask") gateway = self.safe_load(pd, "gateway") dns = self.safe_load(pd, "name_servers") if breed == "debian" or breed == "ubuntu": hostname = self.safe_load(pd, "hostname") name = self.safe_load(pd, "name") if hostname != "" or name != "": if hostname != "": # if this is a FQDN, grab the first bit my_hostname = hostname.split(".")[0] _domain = hostname.split(".")[1:] if _domain: my_domain = ".".join(_domain) else: my_hostname = name.split(".")[0] _domain = name.split(".")[1:] if _domain: my_domain = ".".join(_domain) hashv["hostname"] = my_hostname hashv["domain"] = my_domain if breed == "suse": hashv["netdevice"] = self.static_interface else: hashv["ksdevice"] = self.static_interface if ip is not None: if breed == "suse": hashv["hostip"] = ip elif breed == "debian" or breed == "ubuntu": hashv["netcfg/get_ipaddress"] = ip else: hashv["ip"] = ip if netmask is not None: if breed == "debian" or breed == "ubuntu": hashv["netcfg/get_netmask"] = netmask else: hashv["netmask"] = netmask if gateway is not None: if breed == "debian" or breed == "ubuntu": hashv["netcfg/get_gateway"] = gateway else: hashv["gateway"] = gateway if dns is not None: if breed == "suse": hashv["nameserver"] = dns[0] elif breed == "debian" or breed == "ubuntu": hashv["netcfg/get_nameservers"] = " ".join(dns) else: hashv["dns"] = ",".join(dns) if replace_self and self.embed_kickstart: hashv["ks"] = "file:ks.cfg" if self.kopts_override is not None: hash2 = utils.input_string_or_hash(self.kopts_override) hashv.update(hash2) options = utils.hash_to_string(hashv) options = string.replace(options, "lang ","lang= ") # if using ksdevice=bootif that only works for PXE so replace # it with something that will work options = string.replace(options, "ksdevice=bootif","ksdevice=link") return options #--------------------------------------------------- def virt_net_install(self,profile_data): """ Invoke virt guest-install (or tweaked copy thereof) """ pd = profile_data self.load_virt_modules() arch = self.safe_load(pd,'arch','x86') kextra = self.calc_kernel_args(pd) (uuid, create_func, fullvirt, can_poll) = self.virt_choose(pd) virtname = self.calc_virt_name(pd) ram = self.calc_virt_ram(pd) vcpus = self.calc_virt_cpus(pd) path_list = self.calc_virt_path(pd, virtname) size_list = self.calc_virt_filesize(pd) driver_list = self.calc_virt_drivers(pd) if self.virt_type == 'openvz': disks = None else: disks = self.merge_disk_data(path_list,size_list,driver_list) virt_auto_boot = self.calc_virt_autoboot(pd, self.virt_auto_boot) virt_pxe_boot = self.calc_virt_pxeboot(pd, self.virt_pxe_boot) results = create_func( name = virtname, ram = ram, disks = disks, uuid = uuid, extra = kextra, vcpus = vcpus, profile_data = profile_data, arch = arch, no_gfx = self.no_gfx, fullvirt = fullvirt, bridge = self.virt_bridge, virt_type = self.virt_type, virt_auto_boot = 
virt_auto_boot, virt_pxe_boot = virt_pxe_boot, qemu_driver_type = self.qemu_disk_type, qemu_net_type = self.qemu_net_type, qemu_machine_type = self.qemu_machine_type, wait = self.virtinstall_wait, noreboot = self.virtinstall_noreboot, osimport = self.virtinstall_osimport, ) #print results if can_poll is not None and self.should_poll: import libvirt print "- polling for virt completion" conn = None if can_poll == "xen": conn = libvirt.open(None) elif can_poll == "qemu": conn = libvirt.open("qemu:///system") else: raise InfoException("Don't know how to poll this virt-type") ct = 0 while True: time.sleep(3) state = utils.get_vm_state(conn, virtname) if state == "running": print "- install is still running, sleeping for 1 minute (%s)" % ct ct = ct + 1 time.sleep(60) elif state == "crashed": print "- the install seems to have crashed." return "failed" elif state == "shutdown": print "- shutdown VM detected, is the install done? Restarting!" utils.find_vm(conn, virtname).create() return results else: raise InfoException("internal error, bad virt state") if virt_auto_boot: if self.virt_type in [ "xenpv", "xenfv" ]: if not utils.create_xendomains_symlink(virtname): print "- warning: failed to setup autoboot for %s, it will have to be configured manually" % virtname elif self.virt_type in [ "qemu", "kvm" ]: utils.libvirt_enable_autostart(virtname) elif self.virt_type in [ "openvz" ]: pass else: print "- warning: don't know how to autoboot this virt type yet" # else... return results #--------------------------------------------------- def load_virt_modules(self): try: import xencreate import qcreate import imagecreate except: traceback.print_exc() raise InfoException("no virtualization support available, install python-virtinst or virt-install?") #--------------------------------------------------- def virt_choose(self, pd): fullvirt = False can_poll = None if (self.image is not None) and (pd["image_type"] == "virt-clone"): fullvirt = True uuid = None import imagecreate creator = imagecreate.start_install elif self.virt_type in [ "xenpv", "xenfv" ]: uuid = self.get_uuid(self.calc_virt_uuid(pd)) import xencreate creator = xencreate.start_install if self.virt_type == "xenfv": fullvirt = True can_poll = "xen" elif self.virt_type in [ "qemu", "kvm" ] : fullvirt = True uuid = None import qcreate creator = qcreate.start_install can_poll = "qemu" elif self.virt_type == "vmware": import vmwcreate uuid = None creator = vmwcreate.start_install elif self.virt_type == "vmwarew": import vmwwcreate uuid = None creator = vmwwcreate.start_install elif self.virt_type == "openvz": import openvzcreate uuid = None creator = openvzcreate.start_install else: raise InfoException, "Unspecified virt type: %s" % self.virt_type return (uuid, creator, fullvirt, can_poll) #--------------------------------------------------- def merge_disk_data(self, paths, sizes, drivers): counter = 0 disks = [] for p in paths: path = paths[counter] if counter >= len(sizes): size = sizes[-1] else: size = sizes[counter] if counter >= len(drivers): driver = drivers[-1] else: driver = drivers[counter] disks.append([path,size,driver]) counter = counter + 1 if len(disks) == 0: print "paths: ", paths print "sizes: ", sizes print "drivers: ", drivers raise InfoException, "Disk configuration not resolvable!" 
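# Illustrative sketch (not part of koan): the shape of the disk list built by
# merge_disk_data above -- each path paired with a size and driver, reusing
# the last size/driver when fewer of them were supplied than paths. Names
# below are illustrative only.
def pair_disk_data(paths, sizes, drivers):
    disks = []
    for index, path in enumerate(paths):
        # reuse the last size/driver when those lists are shorter than paths
        size = sizes[index] if index < len(sizes) else sizes[-1]
        driver = drivers[index] if index < len(drivers) else drivers[-1]
        disks.append([path, size, driver])
    return disks

# pair_disk_data(["/vm/a-disk0", "/vm/a-disk1"], [10], ["virtio"])
# -> [["/vm/a-disk0", 10, "virtio"], ["/vm/a-disk1", 10, "virtio"]]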
return disks #--------------------------------------------------- def calc_virt_name(self,profile_data): if self.virt_name is not None: # explicit override name = self.virt_name elif profile_data.has_key("interfaces"): # this is a system object, just use the name name = profile_data["name"] else: # just use the time, we used to use the MAC # but that's not really reliable when there are more # than one. name = time.ctime(time.time()) # keep libvirt happy with the names return name.replace(":","_").replace(" ","_") #-------------------------------------------------- def calc_virt_autoboot(self,data,override_autoboot=False): if override_autoboot: return True autoboot = self.safe_load(data,'virt_auto_boot',0) autoboot = str(autoboot).lower() if autoboot in [ "1", "true", "y", "yes" ]: return True return False #-------------------------------------------------- def calc_virt_pxeboot(self,data,override_pxeboot=False): if override_pxeboot: return True pxeboot = self.safe_load(data,'virt_pxe_boot',0) pxeboot = str(pxeboot).lower() if pxeboot in [ "1", "true", "y", "yes" ]: return True return False #-------------------------------------------------- def calc_virt_filesize(self,data,default_filesize=0): # MAJOR FIXME: are there overrides? size = self.safe_load(data,'virt_file_size','xen_file_size',0) tokens = str(size).split(",") accum = [] for t in tokens: accum.append(self.calc_virt_filesize2(data,size=t)) return accum #--------------------------------------------------- def calc_virt_filesize2(self,data,default_filesize=1,size=0): """ Assign a virt filesize if none is given in the profile. """ err = False try: int(size) except: err = True if size is None or size == '': err = True if err: print "invalid file size specified, using defaults" return default_filesize return int(size) #--------------------------------------------------- def calc_virt_drivers(self,data): driver = self.safe_load(data,'virt_disk_driver',default='raw') tokens = driver.split(",") accum = [] for t in tokens: # FIXME: this list should be pulled out of # the virtinst VirtualDisk class, but # not all versions of virtinst have a # nice list to use if t in ('raw', 'qcow', 'qcow2', 'aio', 'vmdk', 'qed'): accum.append(t) else: print "invalid disk driver specified, defaulting to 'raw'" accum.append('raw') return accum #--------------------------------------------------- def calc_virt_ram(self,data,default_ram=64): """ Assign a virt ram size if none is given in the profile. """ size = self.safe_load(data,'virt_ram','xen_ram',0) err = False try: int(size) except: err = True if size is None or size == '' or int(size) < default_ram: err = True if err: print "invalid RAM size specified, using defaults." return default_ram return int(size) #--------------------------------------------------- def calc_virt_cpus(self,data,default_cpus=1): """ Assign virtual CPUs if none is given in the profile. """ size = self.safe_load(data,'virt_cpus',default=default_cpus) try: isize = int(size) except: traceback.print_exc() return default_cpus return isize #--------------------------------------------------- def calc_virt_mac(self,data): if not self.is_virt: return None # irrelevant if self.is_mac(self.system): return self.system.upper() return self.random_mac() #--------------------------------------------------- def calc_virt_uuid(self,data): # TODO: eventually we may want to allow some koan CLI # option (or cobbler system option) for passing in the UUID. # Until then, it's random. return None """ Assign a UUID if none/invalid is given in the profile. 
""" my_id = self.safe_load(data,'virt_uuid','xen_uuid',0) uuid_re = re.compile('[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}') err = False try: str(my_id) except: err = True if my_id is None or my_id == '' or not uuid_re.match(id): err = True if err and my_id is not None: print "invalid UUID specified. randomizing..." return None return my_id #---------------------------------------------------- def calc_virt_path(self,pd,name): # input is either a single item or a string list # it's not in the arguments to this function .. it's from one of many # potential sources location = self.virt_path if location is None: # no explicit CLI override, what did the cobbler server say? location = self.safe_load(pd, 'virt_path', default=None) if location is None or location == "": # not set in cobbler either? then assume reasonable defaults if self.virt_type in [ "xenpv", "xenfv" ]: prefix = "/var/lib/xen/images/" elif self.virt_type in [ "qemu", "kvm" ]: prefix = "/var/lib/libvirt/images/" elif self.virt_type == "vmwarew": prefix = "/var/lib/vmware/%s/" % name else: prefix = "/var/lib/vmware/images/" if not os.path.exists(prefix): print "- creating: %s" % prefix os.makedirs(prefix) return [ "%s/%s-disk0" % (prefix, name) ] # ok, so now we have a user that either through cobbler or some other # source *did* specify a location. It might be a list. virt_sizes = self.calc_virt_filesize(pd) path_splitted = location.split(",") paths = [] count = -1 for x in path_splitted: count = count + 1 path = self.calc_virt_path2(pd,name,offset=count,location=x,sizes=virt_sizes) paths.append(path) return paths #--------------------------------------------------- def calc_virt_path2(self,pd,name,offset=0,location=None,sizes=[]): # Parse the command line to determine if this is a # path, a partition, or a volume group parameter # file Ex: /foo # partition Ex: /dev/foo # volume-group Ex: vg-name(:lv-name) # # chosing the disk image name (if applicable) is somewhat # complicated ... # use default location for the virt type if not location.startswith("/dev/") and location.startswith("/"): # filesystem path if os.path.isdir(location): return "%s/%s-disk%s" % (location, name, offset) elif not os.path.exists(location) and os.path.isdir(os.path.dirname(location)): return location else: if self.force_path: return location else: raise InfoException, "The location %s is an existing file. Consider '--force-path' to overwrite it." % location elif location.startswith("/dev/"): # partition if os.path.exists(location): return location else: raise InfoException, "virt path is not a valid block device" else: # it's a volume group, verify that it exists if location.find(':') == -1: vgname = location lvname = "%s-disk%s" % (name,offset) else: vgname, lvname = location.split(':')[:2] args = "vgs -o vg_name" print "%s" % args vgnames = sub_process.Popen(args, shell=True, stdout=sub_process.PIPE).communicate()[0] print vgnames if vgnames.find(vgname) == -1: raise InfoException, "The volume group [%s] does not exist." 
% vgname # check free space args = "LANG=C vgs --noheadings -o vg_free --units g %s" % vgname print args cmd = sub_process.Popen(args, stdout=sub_process.PIPE, shell=True) freespace_str = cmd.communicate()[0] freespace_str = freespace_str.split("\n")[0].strip() freespace_str = freespace_str.lower().replace("g","").replace(',', '.') # remove gigabytes print "(%s)" % freespace_str freespace = int(float(freespace_str)) virt_size = self.calc_virt_filesize(pd) if len(virt_size) > offset: virt_size = sizes[offset] else: return sizes[-1] if freespace >= int(virt_size): # look for LVM partition named foo, create if doesn't exist args = "lvs --noheadings -o lv_name %s" % vgname print "%s" % args lvs_str=sub_process.Popen(args, stdout=sub_process.PIPE, shell=True).communicate()[0] print lvs_str # have to create it? found_lvs = False for lvs in lvs_str.split("\n"): if lvs.strip() == lvname: found_lvs = True break if not found_lvs: args = "lvcreate -L %sG -n %s %s" % (virt_size, lvname, vgname) print "%s" % args lv_create = sub_process.call(args, shell=True) if lv_create != 0: raise InfoException, "LVM creation failed" # partition location partition_location = "/dev/mapper/%s-%s" % (vgname.replace('-','--'),lvname.replace('-','--')) # check whether we have SELinux enabled system args = "/usr/sbin/selinuxenabled" if os.path.exists(args) and sub_process.call(args) == 0: # required context type context_type = "virt_image_t" # change security context type to required one args = "/usr/bin/chcon -t %s %s" % (context_type, partition_location) print "%s" % args change_context = sub_process.call(args, close_fds=True, shell=True) # modify SELinux policy in order to preserve security context # between reboots args = "/usr/sbin/semanage fcontext -a -t %s %s" % (context_type, partition_location) print "%s" % args change_context |= sub_process.call(args, close_fds=True, shell=True) if change_context != 0: raise InfoException, "SELinux security context setting to LVM partition failed" # return partition location return partition_location else: raise InfoException, "volume group needs %s GB free space." % virt_size def randomUUID(self): """ Generate a random UUID. Copied from xend/uuid.py """ rc = [] for x in range(0, 16): rc.append(random.randint(0,255)) return rc def uuidToString(self, u): """ return uuid as a string """ return "-".join(["%02x" * 4, "%02x" * 2, "%02x" * 2, "%02x" * 2, "%02x" * 6]) % tuple(u) def get_uuid(self,uuid): """ return the passed-in uuid, or a random one if it's not set. """ if uuid: return uuid return self.uuidToString(self.randomUUID()) if __name__ == "__main__": main() cobbler-2.4.1/koan/configurator.py000066400000000000000000000276571227367477500171770ustar00rootroot00000000000000""" Configuration class. Copyright 2010 Kelsey Hightower Kelsey Hightower This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA module for configuring repos, ldap, packages, files, and monit """ import filecmp import shutil import utils import subprocess import tempfile import stat import os.path import sys import time import pwd import grp import simplejson as json try: import yum sys.path.append('/usr/share/yum-cli') import cli yum_available = True except: yum_available = False class KoanConfigure: """ Used for all configuration methods, used by koan to configure repos, ldap, files, packages, and monit. """ def __init__(self, config): """Constructor. Requires json config object.""" self.config = json.JSONDecoder().decode(config) self.stats = {} self.dist = utils.check_dist() #---------------------------------------------------------------------- def configure_repos(self): # Enables the possibility to use different types of repos if yum_available and self.dist == "redhat": self.configure_yum_repos() def configure_yum_repos(self): """Configure YUM repositories.""" print "- Configuring Repos" old_repo = '/etc/yum.repos.d/config.repo' # Stage a tempfile to hold new file contents _tempfile = tempfile.NamedTemporaryFile() _tempfile.write(self.config['repo_data']) _tempfile.flush() new_repo = _tempfile.name # Check if repo resource exist, create if missing if os.path.isfile(old_repo): if not filecmp.cmp(old_repo, new_repo): utils.sync_file(old_repo, new_repo, 0, 0, 644) self.stats['repos_status'] = "Success: Repos in sync" else: self.stats['repos_status'] = "Success: Repos in sync" else: print " %s not found, creating..." % (old_repo) open(old_repo,'w').close() utils.sync_file(old_repo, new_repo, 0, 0, 644) self.stats['repos_status'] = "Success: Repos in sync" _tempfile.close() #---------------------------------------------------------------------- def configure_ldap(self): """Configure LDAP by running the specified LDAP command.""" print "- Configuring LDAP" rc = subprocess.call(self.config['ldap_data'],shell="True") if rc == 0: self.stats['ldap_status'] = "Success: LDAP has been configured" else: self.stats['ldap_status'] = "ERROR: configuring LDAP failed" #---------------------------------------------------------------------- def configure_monit(self): """Start or reload Monit""" print "- Configuring Monit" ret = subprocess.call(['/sbin/service', 'monit', 'status']) if ret == 0: _ret = subprocess.call(['/usr/bin/monit', 'reload']) if _ret == 0: self.stats['monit_status'] = "Running: Monit has been reloaded" else: _ret = subprocess.call(['/sbin/service', 'monit', 'start']) if _ret == 0: self.stats['monit_status'] = "Running: Monit has been started" else: self.stats['monit_status'] = "Stopped: Failed to start monit" #---------------------------------------------------------------------- def configure_packages(self): # Enables the possibility to use different types of package configurators if yum_available and self.dist == "redhat": self.configure_yum_packages() def configure_yum_packages(self): """Configure package resources.""" print "- Configuring Packages" runtime_start = time.time() nsync = 0 osync = 0 fail = 0 packages = self.config['packages'] yb = yum.YumBase() yb.preconf.debuglevel = 0 yb.preconf.errorlevel = 0 yb.doTsSetup() yb.doRpmDBSetup() ybc = cli.YumBaseCli() ybc.preconf.debuglevel = 0 ybc.preconf.errorlevel = 0 ybc.conf.assumeyes = True ybc.doTsSetup() ybc.doRpmDBSetup() create_pkg_list = [] 
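# Illustrative sketch (not part of koan): the create/remove bookkeeping done
# in the package loop below, with yum's isPackageInstalled() abstracted behind
# a callable so the decision table is easy to see. Names are illustrative.
def classify_packages(packages, is_installed):
    # packages: {name: {"action": "create"|"remove", ...}} as decoded from the
    # config JSON; is_installed: callable(name) -> bool.
    to_install = []
    to_remove = []
    in_sync = 0
    for name, data in packages.items():
        if data['action'] == 'create':
            if is_installed(name):
                in_sync += 1
            else:
                to_install.append(name)
        elif data['action'] == 'remove':
            if is_installed(name):
                to_remove.append(name)
            else:
                in_sync += 1
    return to_install, to_remove, in_sync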
remove_pkg_list = [] for package in packages: action = packages[package]['action'] # In the near future, will use install_name vs package # as it includes a more specific package name: "package-version" install_name = packages[package]['install_name'] if yb.isPackageInstalled(package): if action == 'create': nsync += 1 if action == 'remove': remove_pkg_list.append(package) if not yb.isPackageInstalled(package): if action == 'create': create_pkg_list.append(package) if action == 'remove': nsync += 1 # Don't waste time with YUM if there is nothing to do. doTransaction = False if create_pkg_list: print " Packages out of sync: %s" % create_pkg_list ybc.installPkgs(create_pkg_list) osync += len(create_pkg_list) doTransaction = True if remove_pkg_list: print " Packages out of sync: %s" % remove_pkg_list ybc.erasePkgs(remove_pkg_list) osync += len(remove_pkg_list) doTransaction = True if doTransaction: ybc.buildTransaction() ybc.doTransaction() runtime_end = time.time() runtime = (runtime_end - runtime_start) self.stats['pkg'] = {'runtime': runtime,'nsync': nsync,'osync': osync,'fail' : fail} #---------------------------------------------------------------------- def configure_directories(self): """ Configure directory resources.""" print "- Configuring Directories" runtime_start = time.time() nsync = 0 osync = 0 fail = 0 files = self.config['files'] # Split out directories _dirs = [d for d in files if files[d]['is_dir']] # Configure directories first for dir in _dirs: action = files[dir]['action'] odir = files[dir]['path'] protected_dirs = ['/','/bin','/boot','/dev','/etc','/lib','/lib64','/proc','/sbin','/sys','/usr','/var'] if os.path.isdir(odir): if os.path.realpath(odir) in protected_dirs: print " %s is a protected directory, skipping..." % os.path.realpath(odir) fail += 1 continue if action == 'create': nmode = int(files[dir]['mode'],8) nuid = pwd.getpwnam(files[dir]['owner'])[2] ngid = grp.getgrnam(files[dir]['group'])[2] # Compare old and new directories, sync if permissions mismatch if os.path.isdir(odir): dstat = os.stat(odir) omode = stat.S_IMODE(dstat.st_mode) ouid = pwd.getpwuid(dstat.st_uid)[2] ogid = grp.getgrgid(dstat.st_gid)[2] if omode != nmode or ouid != nuid or ogid != ngid: os.chmod(odir,nmode) os.chown(odir,nuid,ngid) osync += 1 else: nsync += 1 else: print " Directory out of sync, creating %s" % odir os.makedirs(odir,nmode) os.chown(odir,nuid,ngid) osync += 1 elif action == 'remove': if os.path.isdir(odir): print " Directory out of sync, removing %s" % odir shutil.rmtree(odir) osync += 1 else: nsync += 1 else: pass runtime_end = time.time() runtime = (runtime_end - runtime_start) self.stats['dir'] = {'runtime': runtime,'nsync': nsync,'osync': osync,'fail': fail} #---------------------------------------------------------------------- def configure_files(self): """ Configure file resources.""" print "- Configuring Files" runtime_start = time.time() nsync = 0 osync = 0 fail = 0 files = self.config['files'] # Split out files _files = [f for f in files if files[f]['is_dir'] is False] for file in _files: action = files[file]['action'] ofile = files[file]['path'] if action == 'create': nmode = int(files[file]['mode'],8) nuid = pwd.getpwnam(files[file]['owner'])[2] ngid = grp.getgrnam(files[file]['group'])[2] # Stage a tempfile to hold new file contents _tempfile = tempfile.NamedTemporaryFile() _tempfile.write(files[file]['content']) _tempfile.flush() nfile = _tempfile.name # Compare new and old files, sync if permissions or contents mismatch if os.path.isfile(ofile): fstat = 
os.stat(ofile) omode = stat.S_IMODE(fstat.st_mode) ouid = pwd.getpwuid(fstat.st_uid)[2] ogid = grp.getgrgid(fstat.st_gid)[2] if not filecmp.cmp(ofile, nfile) or omode != nmode or ogid != ngid or ouid != nuid: utils.sync_file(ofile, nfile, nuid, ngid, nmode) osync += 1 else: nsync += 1 elif os.path.dirname(ofile): # Create the file only if the base directory exists open(ofile,'w').close() utils.sync_file(ofile, nfile, nuid, ngid, nmode) osync += 1 else: print " Base directory not found, %s required." % (os.path.dirname(ofile)) fail += 1 _tempfile.close() elif action == 'remove': if os.path.isfile(file): os.remove(ofile) osync += 1 else: nsync += 1 else: pass runtime_end = time.time() runtime = (runtime_end - runtime_start) self.stats['files'] = {'runtime': runtime,'nsync': nsync,'osync': osync,'fail': fail} #---------------------------------------------------------------------- def run(self): # Configure resources in a specific order: repos, ldap, packages, directories, files, monit if self.config['repos_enabled']: self.configure_repos() if self.config['ldap_enabled']: self.configure_ldap() self.configure_packages() self.configure_directories() self.configure_files() if self.config['monit_enabled']: self.configure_monit() return self.statscobbler-2.4.1/koan/imagecreate.py000066400000000000000000000020631227367477500167230ustar00rootroot00000000000000""" Virtualization installation functions for image based deployment Copyright 2008 Red Hat, Inc and Others. Bryan Kearney Original version based on virt-image David Lutterkort This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import virtinstall def start_install(*args, **kwargs): cmd = virtinstall.build_commandline("import", *args, **kwargs) utils.subprocess_call(cmd) cobbler-2.4.1/koan/live/000077500000000000000000000000001227367477500150415ustar00rootroot00000000000000cobbler-2.4.1/koan/live/base.cfg000066400000000000000000000067341227367477500164460ustar00rootroot00000000000000# live CD script for koan live CD # see documentation here: lang en_US.UTF-8 keyboard us timezone US/Eastern auth --useshadow --enablemd5 selinux --disabled firewall --disabled # the password will NOT be needed for usage of this live CD rootpw --iscrypted \$1\$mF86/UHC\$WvcIcX2t6crBz2onWxyac. services --disable sshd # TODO: how to replace i386 with $basearch # TODO: apparently calling it fedora-dev instead of a-dev makes things # not work. Perhaps it has something to do with the default repos in # /etc/yum.repos.d not getting properly disabled? 
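# One possible way to avoid hard-coding the arch (untested assumption): some
# anaconda versions expand yum-style variables in repo --baseurl values, which
# would allow something like:
#   repo --name=todos --baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/10/Everything/$basearch/os/
# If the installer in use does not honor that substitution, the arch has to
# stay hard-coded as in the lines below.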
repo --name=todos --baseurl=http://download.fedora.redhat.com/pub/fedora/linux/releases/10/Everything/i386/os/ repo --name=updatez --baseurl=http://download.fedora.redhat.com/pub/fedora/linux/updates/10/i386/ repo --name=newkoan --baseurl=file:///tmp/newkoan/ text bootloader --location=mbr install zerombr part / --fstype ext3 --size=1024 --grow --ondisk=/dev/sda --asprimary part swap --size=1027 --ondisk=/dev/sda --asprimary %packages @base #@core @hardware-support file syslinux kernel bash util-linux koan avahi-tools #aspell-* -m17n-db-* -man-pages-* # gimp help is huge -gimp-help # lose the compat stuff -compat* # space sucks -gnome-user-docs -specspo -esc -samba-client -a2ps -vino -redhat-lsb -sox # smartcards won't really work on the livecd. and we _need_ space -coolkey -ccid # duplicate functionality -tomboy -pinfo -wget # scanning takes quite a bit of space :/ -xsane -xsane-gimp # while hplip requires pyqt, it has to go -hplip #-*debuginfo # error kernel bash koan policycoreutils grub eject tree %post cat > /etc/rc.d/init.d/fedora-live << EOF #!/bin/bash # # live: Init script for live image # # chkconfig: 345 99 99 # description: Init script for live image. #if ! strstr "\`cat /proc/cmdline\`" liveimg || [ "\$1" != "start" ] || [ -e /.liveimg-configured ] ; then # exit 0 #fi exists() { which \$1 >/dev/null 2>&1 || return \$* } touch /.liveimg-configured echo "RUN_FIRSTBOOT=NO" > /etc/sysconfig/firstboot useradd -c "Fedora Live" fedora passwd -d fedora > /dev/null # don't start cron/at as they tend to spawn things which are # disk intensive that are painful on a live image chkconfig --level 345 crond off chkconfig --level 345 atd off chkconfig --level 345 anacron off chkconfig --level 345 readahead_early off chkconfig --level 345 readahead_later off # Stopgap fix for RH #217966; should be fixed in HAL instead touch /media/.hal-mtab # take over a drive to use as temporary space sfdisk /dev/sda -uM << ESFDISK ,1000 ; ESFDISK mkfs -t ext3 /dev/sda1 # fix fstab/mtab cat >> /etc/fstab << EFSTAB /dev/sda1 /tmp/boot ext3 defaults,noatime 0 0 EFSTAB cat >> /etc/mtab << EMTAB /dev/sda1 /tmp/boot ext3 rw,noatime 0 0 EMTAB # make a boot directory on the filesystem so grub can be happy mkdir /tmp/boot mount /dev/sda1 /tmp/boot mkdir -p /tmp/boot/boot # install grub mknod /dev/mapper/livecd-rw b 8 0 grub-install --root-directory=/tmp/boot/ --no-floppy /dev/sda # need a grub.conf file to run grubby from within koan cat > /tmp/boot/boot/grub/grub.conf << EGRUB # grub.conf default=0 timeout=5 #splashimage=(hd0,0)/boot/grub/splash.xpm.gz hiddenmenu title spacer root (hd0,0) kernel /boot/vmlinuz initrd /boot/initrd.img EGRUB # now we're ready to do it for real INSERT_KOAN_ARGS --livecd # once through debugging # eject # reboot EOF chmod 755 /etc/rc.d/init.d/fedora-live /sbin/restorecon /etc/rc.d/init.d/fedora-live /sbin/chkconfig --add fedora-live # save a little bit of space at least... rm -f /boot/initrd* cobbler-2.4.1/koan/live/build.py000066400000000000000000000055251227367477500165210ustar00rootroot00000000000000""" Validates whether the system is reasonably well configured for serving up content. This is the code behind 'cobbler check'. Copyright 2007-2008, Red Hat, Inc and Others Michael DeHaan This software may be freely redistributed under the terms of the GNU general public license. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 
""" # usage: --server=bootserver.example.com --koan="--profile=FOO" # requires latest git://git.fedoraproject.org/git/hosted/livecd import optparse import subprocess import sys import os # this configuration is the kickstart for the live CD, not the install system # tweak at your own risk basef = open("./base.cfg") base_config = basef.read() basef.close() # packages to put on the LiveCD #packages = [ # "kernel", "bash", "koan", "policycoreutils", "grub", "eject", "tree" #] #======= def main(args): p = optparse.OptionParser() p.add_option("-k","--koan",action="store",help="additional koan arguments, if any") p.add_option("-s","--server",action="store",help="cobbler server address") (options,args) = p.parse_args() if options.server is None: print >>sys.stderr, "error: --server is required" sys.exit(1) if options.koan is None: options.koan = "--replace-self --server=%s" % options.server if options.koan.find("--server") == -1 and options.koan.find("-s") == -1: options.koan = options.koan + " --server=%s" % options.server if options.koan.find("--replace-self") == -1: options.koan = options.koan + " --replace-self" if not os.path.exists("/usr/bin/livecd-creator"): print "livecd-tools needs to be installed" sys.exit(1) if not os.path.exists("/usr/bin/createrepo"): print "createrepo needs to be installed" sys.exit(1) if not os.path.exists("/sbin/mksquashfs"): print "squashfs-tools needs to be installed" sys.exit(1) # create the local repo so we can have the latest koan # even if it's not in Fedora yet subprocess.call("createrepo ../../rpm-build",shell=True) subprocess.call("mkdir -p /tmp/newkoan", shell=True) subprocess.call("cp -r ../../rpm-build/* /tmp/newkoan/",shell=True) # write config file cfg = open("/tmp/koanlive.cfg","w+") cfg.write(base_config.replace("INSERT_KOAN_ARGS", "/usr/bin/koan %s" % options.koan)) cfg.close() # ====== cmd = "livecd-creator" cmd = cmd + " --fslabel=koan-live-cd" cmd = cmd + " --config=/tmp/koanlive.cfg" #for x in packages: # cmd = cmd + " --package=%s" % x print "running: %s" % cmd try: os.remove("koan-live-cd.iso") except: print "existing file not removed" subprocess.call(cmd, shell=True) if __name__ == "__main__": main(sys.argv) cobbler-2.4.1/koan/openvzcreate.py000066400000000000000000000125561227367477500171720ustar00rootroot00000000000000""" OpenVZ container-type virtualization installation functions. Copyright 2012 Artem Kanarev , Sergey Podushkin This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import exceptions class OVZCreateException(exceptions.Exception): pass def start_install(*args, **kwargs): # check for Openvz tools presence # can be this apps installed in some other place? vzcfgvalidate = '/usr/sbin/vzcfgvalidate' vzctl = '/usr/sbin/vzctl' if not os.path.exists(vzcfgvalidate) or not os.path.exists(vzctl): raise OVZCreateException("Cannot find %s and/or %s! Are OpenVZ tools installed?" 
% (vzcfgvalidate, vzctl)) # params, that can be defined/redefined through ks_meta keys_for_meta = [ 'KMEMSIZE', # "14372700:14790164", 'LOCKEDPAGES', # "2048:2048", 'PRIVVMPAGES', # "65536:69632", 'SHMPAGES', # "21504:21504", 'NUMPROC', # "240:240", 'VMGUARPAGES', # "33792:unlimited", 'OOMGUARPAGES', # "26112:unlimited", 'NUMTCPSOCK', # "360:360", 'NUMFLOCK', # "188:206", 'NUMPTY', # "16:16", 'NUMSIGINFO', # "256:256", 'TCPSNDBUF', # "1720320:2703360", 'TCPRCVBUF', # "1720320:2703360", 'OTHERSOCKBUF', # "1126080:2097152", 'DGRAMRCVBUF', # "262144:262144", 'NUMOTHERSOCK', # "120", 'DCACHESIZE', # "3409920:3624960", 'NUMFILE', # "9312:9312", 'AVNUMPROC', # "180:180", 'NUMIPTENT', # "128:128", 'DISKINODES', # "200000:220000", 'QUOTATIME', # "0", 'VE_ROOT', # "/vz/root/$VEID", 'VE_PRIVATE', # "/vz/private/$VEID", 'SWAPPAGES', # "0:1G", 'ONBOOT', # "yes" ] sysname = kwargs['name'] kickstart = kwargs['profile_data']['kickstart'] template = kwargs['profile_data']['breed'] # we use it for --ostemplate parameter hostname = kwargs['profile_data']['hostname'] ipadd = kwargs['profile_data']['ip_address_eth0'] nameserver = kwargs['profile_data']['name_servers'][0] diskspace = kwargs['profile_data']['virt_file_size'] physpages = kwargs['profile_data']['virt_ram'] cpus = kwargs['profile_data']['virt_cpus'] onboot = kwargs['profile_data']['virt_auto_boot'] # we get [0,1] ot [False,True] and have to map it to [no,yes] onboot = 'yes' if onboot == '1' or onboot == True else 'no' CTID = None vz_meta = {} # get all vz_ parameters from ks_meta for item in kwargs['profile_data']['ks_meta'].split(): var = item.split('=') if var[0].startswith('vz_'): vz_meta[var[0].replace('vz_','').upper()] = var[1] if vz_meta.has_key('CTID') and vz_meta['CTID']: try: CTID = int(vz_meta['CTID']) del vz_meta['CTID'] except ValueError: print "Invalid CTID in ks_meta. Exiting..." return 1 else: raise OVZCreateException('Mandatory "vz_ctid" parameter not found in ks_meta!') confiname ='/etc/vz/conf/%d.conf' % CTID # this is the minimal config. 
we can define additional parameters or override some of them in ks_meta min_config = { 'PHYSPAGES':"0:%sM" % physpages, 'SWAPPAGES':"0:1G", 'DISKSPACE':"%sG:%sG" % (diskspace, diskspace), 'DISKINODES':"200000:220000", 'QUOTATIME':"0", 'CPUUNITS':"1000", 'CPUS':cpus, 'VE_ROOT':"/vz/root/$VEID", 'VE_PRIVATE':"/vz/private/$VEID", 'OSTEMPLATE':template, 'NAME':sysname, 'HOSTNAME':hostname, 'IP_ADDRESS':ipadd, 'NAMESERVER':nameserver, } # merge with override full_config = dict([(k, vz_meta[k] if vz_meta.has_key(k) and k in keys_for_meta else min_config[k]) for k in set(min_config.keys() + vz_meta.keys())]) # write config file for container f = open(confiname, 'w+') for key, val in full_config.items(): f.write('%s="%s"\n' % (key,val)) f.close() # validate the config file cmd = '%s %s' % (vzcfgvalidate, confiname) if not os.system(cmd.strip()): # now install the container tree cmd = '/usr/bin/ovz-install %s %s %s' % (sysname, kickstart, full_config['VE_PRIVATE'].replace('$VEID','%d' % CTID)) if not os.system(cmd.strip()): # if everything fine, start the container cmd = '%s start %s' % (vzctl, CTID) if os.system(cmd.strip()): raise OVZCreateException("Start container %s failed" % CTID) else: raise OVZCreateException("Container creation %s failed" % CTID) else: raise OVZCreateException("Container %s config file is not valid" % CTID) cobbler-2.4.1/koan/opt_parse.py000066400000000000000000001656161227367477500164670ustar00rootroot00000000000000"""optparse - a powerful, extensible, and easy-to-use option parser. By Greg Ward Originally distributed as Optik; see http://optik.sourceforge.net/ . If you have problems with this module, please do not file bugs, patches, or feature requests with Python; instead, use Optik's SourceForge project page: http://sourceforge.net/projects/optik For support, use the optik-users@lists.sourceforge.net mailing list (http://lists.sourceforge.net/lists/listinfo/optik-users). """ # Python developers: please do not make changes to this file, since # it is automatically generated from the Optik source code. __version__ = "1.5.3" __all__ = ['Option', 'SUPPRESS_HELP', 'SUPPRESS_USAGE', 'Values', 'OptionContainer', 'OptionGroup', 'OptionParser', 'HelpFormatter', 'IndentedHelpFormatter', 'TitledHelpFormatter', 'OptParseError', 'OptionError', 'OptionConflictError', 'OptionValueError', 'BadOptionError'] __copyright__ = """ Copyright (c) 2001-2006 Gregory P. Ward. All rights reserved. Copyright (c) 2002-2006 Python Software Foundation. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the author nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """ import sys, os import types import text_wrap as textwrap # repackaged here because not in RHEL3 def _repr(self): return "<%s at 0x%x: %s>" % (self.__class__.__name__, id(self), self) # This file was generated from: # Id: option_parser.py 527 2006-07-23 15:21:30Z greg # Id: option.py 522 2006-06-11 16:22:03Z gward # Id: help.py 527 2006-07-23 15:21:30Z greg # Id: errors.py 509 2006-04-20 00:58:24Z gward try: from gettext import gettext except ImportError: def gettext(message): return message _ = gettext class OptParseError (Exception): def __init__(self, msg): self.msg = msg def __str__(self): return self.msg class OptionError (OptParseError): """ Raised if an Option instance is created with invalid or inconsistent arguments. """ def __init__(self, msg, option): self.msg = msg self.option_id = str(option) def __str__(self): if self.option_id: return "option %s: %s" % (self.option_id, self.msg) else: return self.msg class OptionConflictError (OptionError): """ Raised if conflicting options are added to an OptionParser. """ class OptionValueError (OptParseError): """ Raised if an invalid option value is encountered on the command line. """ class BadOptionError (OptParseError): """ Raised if an invalid option is seen on the command line. """ def __init__(self, opt_str): self.opt_str = opt_str def __str__(self): return _("no such option: %s") % self.opt_str class AmbiguousOptionError (BadOptionError): """ Raised if an ambiguous option is seen on the command line. """ def __init__(self, opt_str, possibilities): BadOptionError.__init__(self, opt_str) self.possibilities = possibilities def __str__(self): return (_("ambiguous option: %s (%s?)") % (self.opt_str, ", ".join(self.possibilities))) class HelpFormatter: """ Abstract base class for formatting option help. OptionParser instances should use one of the HelpFormatter subclasses for formatting help; by default IndentedHelpFormatter is used. Instance attributes: parser : OptionParser the controlling OptionParser instance indent_increment : int the number of columns to indent per nesting level max_help_position : int the maximum starting column for option help text help_position : int the calculated starting column for option help text; initially the same as the maximum width : int total number of columns for output (pass None to constructor for this value to be taken from the $COLUMNS environment variable) level : int current indentation level current_indent : int current indentation level (in columns) help_width : int number of columns available for option help text (calculated) default_tag : str text to replace with each option's default value, "%default" by default. Set to false value to disable default value expansion. option_strings : { Option : str } maps Option instances to the snippet of help text explaining the syntax of that option, e.g. "-h, --help" or "-fFILE, --file=FILE" _short_opt_fmt : str format string controlling how short options with values are printed in help text. 
Must be either "%s%s" ("-fFILE") or "%s %s" ("-f FILE"), because those are the two syntaxes that Optik supports. _long_opt_fmt : str similar but for long options; must be either "%s %s" ("--file FILE") or "%s=%s" ("--file=FILE"). """ NO_DEFAULT_VALUE = "none" def __init__(self, indent_increment, max_help_position, width, short_first): self.parser = None self.indent_increment = indent_increment self.help_position = self.max_help_position = max_help_position if width is None: try: width = int(os.environ['COLUMNS']) except (KeyError, ValueError): width = 80 width -= 2 self.width = width self.current_indent = 0 self.level = 0 self.help_width = None # computed later self.short_first = short_first self.default_tag = "%default" self.option_strings = {} self._short_opt_fmt = "%s %s" self._long_opt_fmt = "%s=%s" def set_parser(self, parser): self.parser = parser def set_short_opt_delimiter(self, delim): if delim not in ("", " "): raise ValueError( "invalid metavar delimiter for short options: %r" % delim) self._short_opt_fmt = "%s" + delim + "%s" def set_long_opt_delimiter(self, delim): if delim not in ("=", " "): raise ValueError( "invalid metavar delimiter for long options: %r" % delim) self._long_opt_fmt = "%s" + delim + "%s" def indent(self): self.current_indent += self.indent_increment self.level += 1 def dedent(self): self.current_indent -= self.indent_increment assert self.current_indent >= 0, "Indent decreased below 0." self.level -= 1 def format_usage(self, usage): raise NotImplementedError, "subclasses must implement" def format_heading(self, heading): raise NotImplementedError, "subclasses must implement" def _format_text(self, text): """ Format a paragraph of free-form text for inclusion in the help output at the current indentation level. """ text_width = self.width - self.current_indent indent = " "*self.current_indent return textwrap.fill(text, text_width, initial_indent=indent, subsequent_indent=indent) def format_description(self, description): if description: return self._format_text(description) + "\n" else: return "" def format_epilog(self, epilog): if epilog: return "\n" + self._format_text(epilog) + "\n" else: return "" def expand_default(self, option): if self.parser is None or not self.default_tag: return option.help default_value = self.parser.defaults.get(option.dest) if default_value is NO_DEFAULT or default_value is None: default_value = self.NO_DEFAULT_VALUE return option.help.replace(self.default_tag, str(default_value)) def format_option(self, option): # The help for each option consists of two parts: # * the opt strings and metavars # eg. ("-x", or "-fFILENAME, --file=FILENAME") # * the user-supplied help string # eg. ("turn on expert mode", "read data from FILENAME") # # If possible, we write both of these on the same line: # -x turn on expert mode # # But if the opt string list is too long, we put the help # string on a second line, indented to the same column it would # start in if it fit on the first line. 
# -fFILENAME, --file=FILENAME # read data from FILENAME result = [] opts = self.option_strings[option] opt_width = self.help_position - self.current_indent - 2 if len(opts) > opt_width: opts = "%*s%s\n" % (self.current_indent, "", opts) indent_first = self.help_position else: # start help on same line as opts opts = "%*s%-*s " % (self.current_indent, "", opt_width, opts) indent_first = 0 result.append(opts) if option.help: help_text = self.expand_default(option) help_lines = textwrap.wrap(help_text, self.help_width) result.append("%*s%s\n" % (indent_first, "", help_lines[0])) result.extend(["%*s%s\n" % (self.help_position, "", line) for line in help_lines[1:]]) elif opts[-1] != "\n": result.append("\n") return "".join(result) def store_option_strings(self, parser): self.indent() max_len = 0 for opt in parser.option_list: strings = self.format_option_strings(opt) self.option_strings[opt] = strings max_len = max(max_len, len(strings) + self.current_indent) self.indent() for group in parser.option_groups: for opt in group.option_list: strings = self.format_option_strings(opt) self.option_strings[opt] = strings max_len = max(max_len, len(strings) + self.current_indent) self.dedent() self.dedent() self.help_position = min(max_len + 2, self.max_help_position) self.help_width = self.width - self.help_position def format_option_strings(self, option): """Return a comma-separated list of option strings & metavariables.""" if option.takes_value(): metavar = option.metavar or option.dest.upper() short_opts = [self._short_opt_fmt % (sopt, metavar) for sopt in option._short_opts] long_opts = [self._long_opt_fmt % (lopt, metavar) for lopt in option._long_opts] else: short_opts = option._short_opts long_opts = option._long_opts if self.short_first: opts = short_opts + long_opts else: opts = long_opts + short_opts return ", ".join(opts) class IndentedHelpFormatter (HelpFormatter): """Format help with indented section bodies. """ def __init__(self, indent_increment=2, max_help_position=24, width=None, short_first=1): HelpFormatter.__init__( self, indent_increment, max_help_position, width, short_first) def format_usage(self, usage): return _("Usage: %s\n") % usage def format_heading(self, heading): return "%*s%s:\n" % (self.current_indent, "", heading) class TitledHelpFormatter (HelpFormatter): """Format help with underlined section headers. 
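    Section headings come out underlined, e.g. (illustrative output only):

        Options
        =======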
""" def __init__(self, indent_increment=0, max_help_position=24, width=None, short_first=0): HelpFormatter.__init__ ( self, indent_increment, max_help_position, width, short_first) def format_usage(self, usage): return "%s %s\n" % (self.format_heading(_("Usage")), usage) def format_heading(self, heading): return "%s\n%s\n" % (heading, "=-"[self.level] * len(heading)) def _parse_num(val, type): if val[:2].lower() == "0x": # hexadecimal radix = 16 elif val[:2].lower() == "0b": # binary radix = 2 val = val[2:] or "0" # have to remove "0b" prefix elif val[:1] == "0": # octal radix = 8 else: # decimal radix = 10 return type(val, radix) def _parse_int(val): return _parse_num(val, int) def _parse_long(val): return _parse_num(val, long) _builtin_cvt = { "int" : (_parse_int, _("integer")), "long" : (_parse_long, _("long integer")), "float" : (float, _("floating-point")), "complex" : (complex, _("complex")) } def check_builtin(option, opt, value): (cvt, what) = _builtin_cvt[option.type] try: return cvt(value) except ValueError: raise OptionValueError( _("option %s: invalid %s value: %r") % (opt, what, value)) def check_choice(option, opt, value): if value in option.choices: return value else: choices = ", ".join(map(repr, option.choices)) raise OptionValueError( _("option %s: invalid choice: %r (choose from %s)") % (opt, value, choices)) # Not supplying a default is different from a default of None, # so we need an explicit "not supplied" value. NO_DEFAULT = ("NO", "DEFAULT") class Option: """ Instance attributes: _short_opts : [string] _long_opts : [string] action : string type : string dest : string default : any nargs : int const : any choices : [string] callback : function callback_args : (any*) callback_kwargs : { string : any } help : string metavar : string """ # The list of instance attributes that may be set through # keyword args to the constructor. ATTRS = ['action', 'type', 'dest', 'default', 'nargs', 'const', 'choices', 'callback', 'callback_args', 'callback_kwargs', 'help', 'metavar'] # The set of actions allowed by option parsers. Explicitly listed # here so the constructor can validate its arguments. ACTIONS = ("store", "store_const", "store_true", "store_false", "append", "append_const", "count", "callback", "help", "version") # The set of actions that involve storing a value somewhere; # also listed just for constructor argument validation. (If # the action is one of these, there must be a destination.) STORE_ACTIONS = ("store", "store_const", "store_true", "store_false", "append", "append_const", "count") # The set of actions for which it makes sense to supply a value # type, ie. which may consume an argument from the command line. TYPED_ACTIONS = ("store", "append", "callback") # The set of actions which *require* a value type, ie. that # always consume an argument from the command line. ALWAYS_TYPED_ACTIONS = ("store", "append") # The set of actions which take a 'const' attribute. CONST_ACTIONS = ("store_const", "append_const") # The set of known types for option parsers. Again, listed here for # constructor argument validation. TYPES = ("string", "int", "long", "float", "complex", "choice") # Dictionary of argument checking functions, which convert and # validate option arguments according to the option type. # # Signature of checking functions is: # check(option : Option, opt : string, value : string) -> any # where # option is the Option instance calling the checker # opt is the actual option seen on the command-line # (eg. 
"-a", "--file") # value is the option argument seen on the command-line # # The return value should be in the appropriate Python type # for option.type -- eg. an integer if option.type == "int". # # If no checker is defined for a type, arguments will be # unchecked and remain strings. TYPE_CHECKER = { "int" : check_builtin, "long" : check_builtin, "float" : check_builtin, "complex": check_builtin, "choice" : check_choice, } # CHECK_METHODS is a list of unbound method objects; they are called # by the constructor, in order, after all attributes are # initialized. The list is created and filled in later, after all # the methods are actually defined. (I just put it here because I # like to define and document all class attributes in the same # place.) Subclasses that add another _check_*() method should # define their own CHECK_METHODS list that adds their check method # to those from this class. CHECK_METHODS = None # -- Constructor/initialization methods ---------------------------- def __init__(self, *opts, **attrs): # Set _short_opts, _long_opts attrs from 'opts' tuple. # Have to be set now, in case no option strings are supplied. self._short_opts = [] self._long_opts = [] opts = self._check_opt_strings(opts) self._set_opt_strings(opts) # Set all other attrs (action, type, etc.) from 'attrs' dict self._set_attrs(attrs) # Check all the attributes we just set. There are lots of # complicated interdependencies, but luckily they can be farmed # out to the _check_*() methods listed in CHECK_METHODS -- which # could be handy for subclasses! The one thing these all share # is that they raise OptionError if they discover a problem. for checker in self.CHECK_METHODS: checker(self) def _check_opt_strings(self, opts): # Filter out None because early versions of Optik had exactly # one short option and one long option, either of which # could be None. opts = filter(None, opts) if not opts: raise TypeError("at least one option string must be supplied") return opts def _set_opt_strings(self, opts): for opt in opts: if len(opt) < 2: raise OptionError( "invalid option string %r: " "must be at least two characters long" % opt, self) elif len(opt) == 2: if not (opt[0] == "-" and opt[1] != "-"): raise OptionError( "invalid short option string %r: " "must be of the form -x, (x any non-dash char)" % opt, self) self._short_opts.append(opt) else: if not (opt[0:2] == "--" and opt[2] != "-"): raise OptionError( "invalid long option string %r: " "must start with --, followed by non-dash" % opt, self) self._long_opts.append(opt) def _set_attrs(self, attrs): for attr in self.ATTRS: if attrs.has_key(attr): setattr(self, attr, attrs[attr]) del attrs[attr] else: if attr == 'default': setattr(self, attr, NO_DEFAULT) else: setattr(self, attr, None) if attrs: attrs = attrs.keys() attrs.sort() raise OptionError( "invalid keyword arguments: %s" % ", ".join(attrs), self) # -- Constructor validation methods -------------------------------- def _check_action(self): if self.action is None: self.action = "store" elif self.action not in self.ACTIONS: raise OptionError("invalid action: %r" % self.action, self) def _check_type(self): if self.type is None: if self.action in self.ALWAYS_TYPED_ACTIONS: if self.choices is not None: # The "choices" attribute implies "choice" type. self.type = "choice" else: # No type given? "string" is the most sensible default. self.type = "string" else: # Allow type objects or builtin type conversion functions # (int, str, etc.) as an alternative to their names. 
(The # complicated check of __builtin__ is only necessary for # Python 2.1 and earlier, and is short-circuited by the # first check on modern Pythons.) import __builtin__ if ( type(self.type) is types.TypeType or (hasattr(self.type, "__name__") and getattr(__builtin__, self.type.__name__, None) is self.type) ): self.type = self.type.__name__ if self.type == "str": self.type = "string" if self.type not in self.TYPES: raise OptionError("invalid option type: %r" % self.type, self) if self.action not in self.TYPED_ACTIONS: raise OptionError( "must not supply a type for action %r" % self.action, self) def _check_choice(self): if self.type == "choice": if self.choices is None: raise OptionError( "must supply a list of choices for type 'choice'", self) elif type(self.choices) not in (types.TupleType, types.ListType): raise OptionError( "choices must be a list of strings ('%s' supplied)" % str(type(self.choices)).split("'")[1], self) elif self.choices is not None: raise OptionError( "must not supply choices for type %r" % self.type, self) def _check_dest(self): # No destination given, and we need one for this action. The # self.type check is for callbacks that take a value. takes_value = (self.action in self.STORE_ACTIONS or self.type is not None) if self.dest is None and takes_value: # Glean a destination from the first long option string, # or from the first short option string if no long options. if self._long_opts: # eg. "--foo-bar" -> "foo_bar" self.dest = self._long_opts[0][2:].replace('-', '_') else: self.dest = self._short_opts[0][1] def _check_const(self): if self.action not in self.CONST_ACTIONS and self.const is not None: raise OptionError( "'const' must not be supplied for action %r" % self.action, self) def _check_nargs(self): if self.action in self.TYPED_ACTIONS: if self.nargs is None: self.nargs = 1 elif self.nargs is not None: raise OptionError( "'nargs' must not be supplied for action %r" % self.action, self) def _check_callback(self): if self.action == "callback": if not callable(self.callback): raise OptionError( "callback not callable: %r" % self.callback, self) if (self.callback_args is not None and type(self.callback_args) is not types.TupleType): raise OptionError( "callback_args, if supplied, must be a tuple: not %r" % self.callback_args, self) if (self.callback_kwargs is not None and type(self.callback_kwargs) is not types.DictType): raise OptionError( "callback_kwargs, if supplied, must be a dict: not %r" % self.callback_kwargs, self) else: if self.callback is not None: raise OptionError( "callback supplied (%r) for non-callback option" % self.callback, self) if self.callback_args is not None: raise OptionError( "callback_args supplied for non-callback option", self) if self.callback_kwargs is not None: raise OptionError( "callback_kwargs supplied for non-callback option", self) CHECK_METHODS = [_check_action, _check_type, _check_choice, _check_dest, _check_const, _check_nargs, _check_callback] # -- Miscellaneous methods ----------------------------------------- def __str__(self): return "/".join(self._short_opts + self._long_opts) __repr__ = _repr def takes_value(self): return self.type is not None def get_opt_string(self): if self._long_opts: return self._long_opts[0] else: return self._short_opts[0] # -- Processing methods -------------------------------------------- def check_value(self, opt, value): checker = self.TYPE_CHECKER.get(self.type) if checker is None: return value else: return checker(self, opt, value) def convert_value(self, opt, value): if value is not 
None: if self.nargs == 1: return self.check_value(opt, value) else: return tuple([self.check_value(opt, v) for v in value]) def process(self, opt, value, values, parser): # First, convert the value(s) to the right type. Howl if any # value(s) are bogus. value = self.convert_value(opt, value) # And then take whatever action is expected of us. # This is a separate method to make life easier for # subclasses to add new actions. return self.take_action( self.action, self.dest, opt, value, values, parser) def take_action(self, action, dest, opt, value, values, parser): if action == "store": setattr(values, dest, value) elif action == "store_const": setattr(values, dest, self.const) elif action == "store_true": setattr(values, dest, True) elif action == "store_false": setattr(values, dest, False) elif action == "append": values.ensure_value(dest, []).append(value) elif action == "append_const": values.ensure_value(dest, []).append(self.const) elif action == "count": setattr(values, dest, values.ensure_value(dest, 0) + 1) elif action == "callback": args = self.callback_args or () kwargs = self.callback_kwargs or {} self.callback(self, opt, value, parser, *args, **kwargs) elif action == "help": parser.print_help() parser.exit() elif action == "version": parser.print_version() parser.exit() else: raise RuntimeError, "unknown action %r" % self.action return 1 # class Option SUPPRESS_HELP = "SUPPRESS"+"HELP" SUPPRESS_USAGE = "SUPPRESS"+"USAGE" # For compatibility with Python 2.2 try: True, False except NameError: (True, False) = (1, 0) def isbasestring(x): return isinstance(x, types.StringType) or isinstance(x, types.UnicodeType) class Values: def __init__(self, defaults=None): if defaults: for (attr, val) in defaults.items(): setattr(self, attr, val) def __str__(self): return str(self.__dict__) __repr__ = _repr def __cmp__(self, other): if isinstance(other, Values): return cmp(self.__dict__, other.__dict__) elif isinstance(other, types.DictType): return cmp(self.__dict__, other) else: return -1 def _update_careful(self, dict): """ Update the option values from an arbitrary dictionary, but only use keys from dict that already have a corresponding attribute in self. Any keys in dict without a corresponding attribute are silently ignored. """ for attr in dir(self): if dict.has_key(attr): dval = dict[attr] if dval is not None: setattr(self, attr, dval) def _update_loose(self, dict): """ Update the option values from an arbitrary dictionary, using all keys from the dictionary regardless of whether they have a corresponding attribute in self or not. """ self.__dict__.update(dict) def _update(self, dict, mode): if mode == "careful": self._update_careful(dict) elif mode == "loose": self._update_loose(dict) else: raise ValueError, "invalid update mode: %r" % mode def read_module(self, modname, mode="careful"): __import__(modname) mod = sys.modules[modname] self._update(vars(mod), mode) def read_file(self, filename, mode="careful"): vars = {} execfile(filename, vars) self._update(vars, mode) def ensure_value(self, attr, value): if not hasattr(self, attr) or getattr(self, attr) is None: setattr(self, attr, value) return getattr(self, attr) class OptionContainer: """ Abstract base class. Class attributes: standard_option_list : [Option] list of standard options that will be accepted by all instances of this parser class (intended to be overridden by subclasses). 
Instance attributes: option_list : [Option] the list of Option objects contained by this OptionContainer _short_opt : { string : Option } dictionary mapping short option strings, eg. "-f" or "-X", to the Option instances that implement them. If an Option has multiple short option strings, it will appears in this dictionary multiple times. [1] _long_opt : { string : Option } dictionary mapping long option strings, eg. "--file" or "--exclude", to the Option instances that implement them. Again, a given Option can occur multiple times in this dictionary. [1] defaults : { string : any } dictionary mapping option destination names to default values for each destination [1] [1] These mappings are common to (shared by) all components of the controlling OptionParser, where they are initially created. """ def __init__(self, option_class, conflict_handler, description): # Initialize the option list and related data structures. # This method must be provided by subclasses, and it must # initialize at least the following instance attributes: # option_list, _short_opt, _long_opt, defaults. self._create_option_list() self.option_class = option_class self.set_conflict_handler(conflict_handler) self.set_description(description) def _create_option_mappings(self): # For use by OptionParser constructor -- create the master # option mappings used by this OptionParser and all # OptionGroups that it owns. self._short_opt = {} # single letter -> Option instance self._long_opt = {} # long option -> Option instance self.defaults = {} # maps option dest -> default value def _share_option_mappings(self, parser): # For use by OptionGroup constructor -- use shared option # mappings from the OptionParser that owns this OptionGroup. self._short_opt = parser._short_opt self._long_opt = parser._long_opt self.defaults = parser.defaults def set_conflict_handler(self, handler): if handler not in ("error", "resolve"): raise ValueError, "invalid conflict_resolution value %r" % handler self.conflict_handler = handler def set_description(self, description): self.description = description def get_description(self): return self.description def destroy(self): """see OptionParser.destroy().""" del self._short_opt del self._long_opt del self.defaults # -- Option-adding methods ----------------------------------------- def _check_conflict(self, option): conflict_opts = [] for opt in option._short_opts: if self._short_opt.has_key(opt): conflict_opts.append((opt, self._short_opt[opt])) for opt in option._long_opts: if self._long_opt.has_key(opt): conflict_opts.append((opt, self._long_opt[opt])) if conflict_opts: handler = self.conflict_handler if handler == "error": raise OptionConflictError( "conflicting option string(s): %s" % ", ".join([co[0] for co in conflict_opts]), option) elif handler == "resolve": for (opt, c_option) in conflict_opts: if opt.startswith("--"): c_option._long_opts.remove(opt) del self._long_opt[opt] else: c_option._short_opts.remove(opt) del self._short_opt[opt] if not (c_option._short_opts or c_option._long_opts): c_option.container.option_list.remove(c_option) def add_option(self, *args, **kwargs): """add_option(Option) add_option(opt_str, ..., kwarg=val, ...) 
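        A typical use of the second form, for illustration only (the option
        names, metavar and help text here are made up, not taken from this
        code base):

            parser.add_option("-f", "--file", dest="filename",
                              metavar="FILE", help="write output to FILE")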
""" if type(args[0]) is types.StringType: option = self.option_class(*args, **kwargs) elif len(args) == 1 and not kwargs: option = args[0] if not isinstance(option, Option): raise TypeError, "not an Option instance: %r" % option else: raise TypeError, "invalid arguments" self._check_conflict(option) self.option_list.append(option) option.container = self for opt in option._short_opts: self._short_opt[opt] = option for opt in option._long_opts: self._long_opt[opt] = option if option.dest is not None: # option has a dest, we need a default if option.default is not NO_DEFAULT: self.defaults[option.dest] = option.default elif not self.defaults.has_key(option.dest): self.defaults[option.dest] = None return option def add_options(self, option_list): for option in option_list: self.add_option(option) # -- Option query/removal methods ---------------------------------- def get_option(self, opt_str): return (self._short_opt.get(opt_str) or self._long_opt.get(opt_str)) def has_option(self, opt_str): return (self._short_opt.has_key(opt_str) or self._long_opt.has_key(opt_str)) def remove_option(self, opt_str): option = self._short_opt.get(opt_str) if option is None: option = self._long_opt.get(opt_str) if option is None: raise ValueError("no such option %r" % opt_str) for opt in option._short_opts: del self._short_opt[opt] for opt in option._long_opts: del self._long_opt[opt] option.container.option_list.remove(option) # -- Help-formatting methods --------------------------------------- def format_option_help(self, formatter): if not self.option_list: return "" result = [] for option in self.option_list: if not option.help is SUPPRESS_HELP: result.append(formatter.format_option(option)) return "".join(result) def format_description(self, formatter): return formatter.format_description(self.get_description()) def format_help(self, formatter): result = [] if self.description: result.append(self.format_description(formatter)) if self.option_list: result.append(self.format_option_help(formatter)) return "\n".join(result) class OptionGroup (OptionContainer): def __init__(self, parser, title, description=None): self.parser = parser OptionContainer.__init__( self, parser.option_class, parser.conflict_handler, description) self.title = title def _create_option_list(self): self.option_list = [] self._share_option_mappings(self.parser) def set_title(self, title): self.title = title def destroy(self): """see OptionParser.destroy().""" OptionContainer.destroy(self) del self.option_list # -- Help-formatting methods --------------------------------------- def format_help(self, formatter): result = formatter.format_heading(self.title) formatter.indent() result += OptionContainer.format_help(self, formatter) formatter.dedent() return result class OptionParser (OptionContainer): """ Class attributes: standard_option_list : [Option] list of standard options that will be accepted by all instances of this parser class (intended to be overridden by subclasses). Instance attributes: usage : string a usage string for your program. Before it is displayed to the user, "%prog" will be expanded to the name of your program (self.prog or os.path.basename(sys.argv[0])). prog : string the name of the current program (to override os.path.basename(sys.argv[0])). 
epilog : string paragraph of help text to print after option help option_groups : [OptionGroup] list of option groups in this parser (option groups are irrelevant for parsing the command-line, but very useful for generating help) allow_interspersed_args : bool = true if true, positional arguments may be interspersed with options. Assuming -a and -b each take a single argument, the command-line -ablah foo bar -bboo baz will be interpreted the same as -ablah -bboo -- foo bar baz If this flag were false, that command line would be interpreted as -ablah -- foo bar -bboo baz -- ie. we stop processing options as soon as we see the first non-option argument. (This is the tradition followed by Python's getopt module, Perl's Getopt::Std, and other argument- parsing libraries, but it is generally annoying to users.) process_default_values : bool = true if true, option default values are processed similarly to option values from the command line: that is, they are passed to the type-checking function for the option's type (as long as the default value is a string). (This really only matters if you have defined custom types; see SF bug #955889.) Set it to false to restore the behaviour of Optik 1.4.1 and earlier. rargs : [string] the argument list currently being parsed. Only set when parse_args() is active, and continually trimmed down as we consume arguments. Mainly there for the benefit of callback options. largs : [string] the list of leftover arguments that we have skipped while parsing options. If allow_interspersed_args is false, this list is always empty. values : Values the set of option values currently being accumulated. Only set when parse_args() is active. Also mainly for callbacks. Because of the 'rargs', 'largs', and 'values' attributes, OptionParser is not thread-safe. If, for some perverse reason, you need to parse command-line arguments simultaneously in different threads, use different OptionParser instances. """ standard_option_list = [] def __init__(self, usage=None, option_list=None, option_class=Option, version=None, conflict_handler="error", description=None, formatter=None, add_help_option=True, prog=None, epilog=None): OptionContainer.__init__( self, option_class, conflict_handler, description) self.set_usage(usage) self.prog = prog self.version = version self.allow_interspersed_args = True self.process_default_values = True if formatter is None: formatter = IndentedHelpFormatter() self.formatter = formatter self.formatter.set_parser(self) self.epilog = epilog # Populate the option list; initial sources are the # standard_option_list class attribute, the 'option_list' # argument, and (if applicable) the _add_version_option() and # _add_help_option() methods. self._populate_option_list(option_list, add_help=add_help_option) self._init_parsing_state() def destroy(self): """ Declare that you are done with this OptionParser. This cleans up reference cycles so the OptionParser (and all objects referenced by it) can be garbage-collected promptly. After calling destroy(), the OptionParser is unusable. 
""" OptionContainer.destroy(self) for group in self.option_groups: group.destroy() del self.option_list del self.option_groups del self.formatter # -- Private methods ----------------------------------------------- # (used by our or OptionContainer's constructor) def _create_option_list(self): self.option_list = [] self.option_groups = [] self._create_option_mappings() def _add_help_option(self): self.add_option("-h", "--help", action="help", help=_("show this help message and exit")) def _add_version_option(self): self.add_option("--version", action="version", help=_("show program's version number and exit")) def _populate_option_list(self, option_list, add_help=True): if self.standard_option_list: self.add_options(self.standard_option_list) if option_list: self.add_options(option_list) if self.version: self._add_version_option() if add_help: self._add_help_option() def _init_parsing_state(self): # These are set in parse_args() for the convenience of callbacks. self.rargs = None self.largs = None self.values = None # -- Simple modifier methods --------------------------------------- def set_usage(self, usage): if usage is None: self.usage = _("%prog [options]") elif usage is SUPPRESS_USAGE: self.usage = None # For backwards compatibility with Optik 1.3 and earlier. elif usage.lower().startswith("usage: "): self.usage = usage[7:] else: self.usage = usage def enable_interspersed_args(self): self.allow_interspersed_args = True def disable_interspersed_args(self): self.allow_interspersed_args = False def set_process_default_values(self, process): self.process_default_values = process def set_default(self, dest, value): self.defaults[dest] = value def set_defaults(self, **kwargs): self.defaults.update(kwargs) def _get_all_options(self): options = self.option_list[:] for group in self.option_groups: options.extend(group.option_list) return options def get_default_values(self): if not self.process_default_values: # Old, pre-Optik 1.5 behaviour. return Values(self.defaults) defaults = self.defaults.copy() for option in self._get_all_options(): default = defaults.get(option.dest) if isbasestring(default): opt_str = option.get_opt_string() defaults[option.dest] = option.check_value(opt_str, default) return Values(defaults) # -- OptionGroup methods ------------------------------------------- def add_option_group(self, *args, **kwargs): # XXX lots of overlap with OptionContainer.add_option() if type(args[0]) is types.StringType: group = OptionGroup(self, *args, **kwargs) elif len(args) == 1 and not kwargs: group = args[0] if not isinstance(group, OptionGroup): raise TypeError, "not an OptionGroup instance: %r" % group if group.parser is not self: raise ValueError, "invalid OptionGroup (wrong parser)" else: raise TypeError, "invalid arguments" self.option_groups.append(group) return group def get_option_group(self, opt_str): option = (self._short_opt.get(opt_str) or self._long_opt.get(opt_str)) if option and option.container is not self: return option.container return None # -- Option-parsing methods ---------------------------------------- def _get_args(self, args): if args is None: return sys.argv[1:] else: return args[:] # don't modify caller's list def parse_args(self, args=None, values=None): """ parse_args(args : [string] = sys.argv[1:], values : Values = None) -> (values : Values, args : [string]) Parse the command-line options found in 'args' (default: sys.argv[1:]). 
Any errors result in a call to 'error()', which by default prints the usage message to stderr and calls sys.exit() with an error message. On success returns a pair (values, args) where 'values' is an Values instance (with all your option values) and 'args' is the list of arguments left over after parsing options. """ rargs = self._get_args(args) if values is None: values = self.get_default_values() # Store the halves of the argument list as attributes for the # convenience of callbacks: # rargs # the rest of the command-line (the "r" stands for # "remaining" or "right-hand") # largs # the leftover arguments -- ie. what's left after removing # options and their arguments (the "l" stands for "leftover" # or "left-hand") self.rargs = rargs self.largs = largs = [] self.values = values try: stop = self._process_args(largs, rargs, values) except (BadOptionError, OptionValueError), err: self.error(str(err)) args = largs + rargs return self.check_values(values, args) def check_values(self, values, args): """ check_values(values : Values, args : [string]) -> (values : Values, args : [string]) Check that the supplied option values and leftover arguments are valid. Returns the option values and leftover arguments (possibly adjusted, possibly completely new -- whatever you like). Default implementation just returns the passed-in values; subclasses may override as desired. """ return (values, args) def _process_args(self, largs, rargs, values): """_process_args(largs : [string], rargs : [string], values : Values) Process command-line arguments and populate 'values', consuming options and arguments from 'rargs'. If 'allow_interspersed_args' is false, stop at the first non-option argument. If true, accumulate any interspersed non-option arguments in 'largs'. """ while rargs: arg = rargs[0] # We handle bare "--" explicitly, and bare "-" is handled by the # standard arg handler since the short arg case ensures that the # len of the opt string is greater than 1. if arg == "--": del rargs[0] return elif arg[0:2] == "--": # process a single long option (possibly with value(s)) self._process_long_opt(rargs, values) elif arg[:1] == "-" and len(arg) > 1: # process a cluster of short options (possibly with # value(s) for the last one only) self._process_short_opts(rargs, values) elif self.allow_interspersed_args: largs.append(arg) del rargs[0] else: return # stop now, leave this arg in rargs # Say this is the original argument list: # [arg0, arg1, ..., arg(i-1), arg(i), arg(i+1), ..., arg(N-1)] # ^ # (we are about to process arg(i)). # # Then rargs is [arg(i), ..., arg(N-1)] and largs is a *subset* of # [arg0, ..., arg(i-1)] (any options and their arguments will have # been removed from largs). # # The while loop will usually consume 1 or more arguments per pass. # If it consumes 1 (eg. arg is an option that takes no arguments), # then after _process_arg() is done the situation is: # # largs = subset of [arg0, ..., arg(i)] # rargs = [arg(i+1), ..., arg(N-1)] # # If allow_interspersed_args is false, largs will always be # *empty* -- still a subset of [arg0, ..., arg(i-1)], but # not a very interesting subset! def _match_long_opt(self, opt): """_match_long_opt(opt : string) -> string Determine which long option string 'opt' matches, ie. which one it is an unambiguous abbrevation for. Raises BadOptionError if 'opt' doesn't unambiguously match any long option string. """ return _match_abbrev(opt, self._long_opt) def _process_long_opt(self, rargs, values): arg = rargs.pop(0) # Value explicitly attached to arg? 
Pretend it's the next # argument. if "=" in arg: (opt, next_arg) = arg.split("=", 1) rargs.insert(0, next_arg) had_explicit_value = True else: opt = arg had_explicit_value = False opt = self._match_long_opt(opt) option = self._long_opt[opt] if option.takes_value(): nargs = option.nargs if len(rargs) < nargs: if nargs == 1: self.error(_("%s option requires an argument") % opt) else: self.error(_("%s option requires %d arguments") % (opt, nargs)) elif nargs == 1: value = rargs.pop(0) else: value = tuple(rargs[0:nargs]) del rargs[0:nargs] elif had_explicit_value: self.error(_("%s option does not take a value") % opt) else: value = None option.process(opt, value, values, self) def _process_short_opts(self, rargs, values): arg = rargs.pop(0) stop = False i = 1 for ch in arg[1:]: opt = "-" + ch option = self._short_opt.get(opt) i += 1 # we have consumed a character if not option: raise BadOptionError(opt) if option.takes_value(): # Any characters left in arg? Pretend they're the # next arg, and stop consuming characters of arg. if i < len(arg): rargs.insert(0, arg[i:]) stop = True nargs = option.nargs if len(rargs) < nargs: if nargs == 1: self.error(_("%s option requires an argument") % opt) else: self.error(_("%s option requires %d arguments") % (opt, nargs)) elif nargs == 1: value = rargs.pop(0) else: value = tuple(rargs[0:nargs]) del rargs[0:nargs] else: # option doesn't take a value value = None option.process(opt, value, values, self) if stop: break # -- Feedback methods ---------------------------------------------- def get_prog_name(self): if self.prog is None: return os.path.basename(sys.argv[0]) else: return self.prog def expand_prog_name(self, s): return s.replace("%prog", self.get_prog_name()) def get_description(self): return self.expand_prog_name(self.description) def exit(self, status=0, msg=None): if msg: sys.stderr.write(msg) sys.exit(status) def error(self, msg): """error(msg : string) Print a usage message incorporating 'msg' to stderr and exit. If you override this in a subclass, it should not return -- it should either exit or raise an exception. """ self.print_usage(sys.stderr) self.exit(2, "%s: error: %s\n" % (self.get_prog_name(), msg)) def get_usage(self): if self.usage: return self.formatter.format_usage( self.expand_prog_name(self.usage)) else: return "" def print_usage(self, file=None): """print_usage(file : file = stdout) Print the usage message for the current program (self.usage) to 'file' (default stdout). Any occurence of the string "%prog" in self.usage is replaced with the name of the current program (basename of sys.argv[0]). Does nothing if self.usage is empty or not defined. """ if self.usage: print >>file, self.get_usage() def get_version(self): if self.version: return self.expand_prog_name(self.version) else: return "" def print_version(self, file=None): """print_version(file : file = stdout) Print the version message for this program (self.version) to 'file' (default stdout). As with print_usage(), any occurence of "%prog" in self.version is replaced by the current program's name. Does nothing if self.version is empty or undefined. 
""" if self.version: print >>file, self.get_version() def format_option_help(self, formatter=None): if formatter is None: formatter = self.formatter formatter.store_option_strings(self) result = [] result.append(formatter.format_heading(_("Options"))) formatter.indent() if self.option_list: result.append(OptionContainer.format_option_help(self, formatter)) result.append("\n") for group in self.option_groups: result.append(group.format_help(formatter)) result.append("\n") formatter.dedent() # Drop the last "\n", or the header if no options or option groups: return "".join(result[:-1]) def format_epilog(self, formatter): return formatter.format_epilog(self.epilog) def format_help(self, formatter=None): if formatter is None: formatter = self.formatter result = [] if self.usage: result.append(self.get_usage() + "\n") if self.description: result.append(self.format_description(formatter) + "\n") result.append(self.format_option_help(formatter)) result.append(self.format_epilog(formatter)) return "".join(result) # used by test suite def _get_encoding(self, file): encoding = getattr(file, "encoding", None) if not encoding: encoding = sys.getdefaultencoding() return encoding def print_help(self, file=None): """print_help(file : file = stdout) Print an extended help message, listing all options and any help text provided with them, to 'file' (default stdout). """ if file is None: file = sys.stdout encoding = self._get_encoding(file) file.write(self.format_help().encode(encoding, "replace")) # class OptionParser def _match_abbrev(s, wordmap): """_match_abbrev(s : string, wordmap : {string : Option}) -> string Return the string key in 'wordmap' for which 's' is an unambiguous abbreviation. If 's' is found to be ambiguous or doesn't match any of 'words', raise BadOptionError. """ # Is there an exact match? if wordmap.has_key(s): return s else: # Isolate all words with s as a prefix. possibilities = [word for word in wordmap.keys() if word.startswith(s)] # No exact match, so there had better be just one possibility. if len(possibilities) == 1: return possibilities[0] elif not possibilities: raise BadOptionError(s) else: # More than one possible completion: ambiguous prefix. possibilities.sort() raise AmbiguousOptionError(s, possibilities) # Some day, there might be many Option classes. As of Optik 1.3, the # preferred way to instantiate Options is indirectly, via make_option(), # which will become a factory function when there are many Option # classes. make_option = Option cobbler-2.4.1/koan/qcreate.py000077500000000000000000000022151227367477500161030ustar00rootroot00000000000000""" Virtualization installation functions. Copyright 2007-2008 Red Hat, Inc and Others. Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA module for creating fullvirt guests via KVM/kqemu/qemu requires python-virtinst-0.200 (or virt-install in later distros). 
""" import utils import virtinstall def start_install(*args, **kwargs): virtinstall.create_image_file(*args, **kwargs) cmd = virtinstall.build_commandline("qemu:///system", *args, **kwargs) utils.subprocess_call(cmd) cobbler-2.4.1/koan/register.py000077500000000000000000000137401227367477500163100ustar00rootroot00000000000000""" registration tool for cobbler. Copyright 2009 Red Hat, Inc and Others. Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import random import os import traceback try: from optparse import OptionParser except: from opt_parse import OptionParser # importing this for backwards compat with 2.2 import exceptions try: import subprocess as sub_process except: import sub_process import time import errno import sys import xmlrpclib import glob import socket import utils import string import pprint # usage: cobbler-register [--server=server] [--fqdn=hostname] --profile=foo def main(): """ Command line stuff... """ p = OptionParser() p.add_option("-s", "--server", dest="server", default=os.environ.get("COBBLER_SERVER",""), help="attach to this cobbler server") p.add_option("-f", "--fqdn", dest="hostname", default="", help="override the discovered hostname") p.add_option("-p", "--port", dest="port", default="80", help="cobbler port (default 80)") p.add_option("-P", "--profile", dest="profile", default="", help="assign this profile to this system") p.add_option("-b", "--batch", dest="batch", action="store_true", help="indicates this is being run from a script") (options, args) = p.parse_args() #if not os.getuid() == 0: # print "koan requires root access" # return 3 try: k = Register() k.server = options.server k.port = options.port k.profile = options.profile k.hostname = options.hostname k.batch = options.batch k.run() except Exception, e: (xa, xb, tb) = sys.exc_info() try: getattr(e,"from_koan") print str(e)[1:-1] # nice exception, no traceback needed except: print xa print xb print string.join(traceback.format_list(traceback.extract_tb(tb))) return 1 return 0 #======================================================= class InfoException(exceptions.Exception): """ Custom exception for tracking of fatal errors. """ def __init__(self,value,**args): self.value = value % args self.from_koan = 1 def __str__(self): return repr(self.value) #======================================================= class Register: def __init__(self): """ Constructor. Arguments will be filled in by optparse... """ self.server = "" self.port = "" self.profile = "" self.hostname = "" self.batch = "" #--------------------------------------------------- def run(self): """ Commence with the registration already. """ # not really required, but probably best that ordinary users don't try # to run this not knowing what it does. 
if os.getuid() != 0: raise InfoException("root access is required to register") print "- preparing to koan home" self.conn = utils.connect_to_server(self.server, self.port) reg_info = {} print "- gathering network info" netinfo = utils.get_network_info() reg_info["interfaces"] = netinfo print "- checking hostname" sysname = "" if self.hostname != "" and self.hostname != "*AUTO*": hostname = self.hostname sysname = self.hostname else: hostname = socket.getfqdn() if hostname == "localhost.localdomain": if self.hostname == '*AUTO*': hostname = "" sysname = str(time.time()) else: raise InfoException("must specify --fqdn, could not discover") if sysname == "": sysname = hostname if self.profile == "": raise InfoException("must specify --profile") # we'll do a profile check here just to avoid some log noise on the remote end. # network duplication checks and profile checks also happen on the remote end. avail_profiles = self.conn.get_profiles() matched_profile = False for x in avail_profiles: if x.get("name","") == self.profile: matched_profile=True break reg_info['name'] = sysname reg_info['profile'] = self.profile reg_info['hostname'] = hostname if not matched_profile: raise InfoException("no such remote profile, see 'koan --list-profiles'") if not self.batch: self.conn.register_new_system(reg_info) print "- registration successful, new system name: %s" % sysname else: try: self.conn.register_new_system(reg_info) print "- registration successful, new system name: %s" % sysname except: traceback.print_exc() print "- registration failed, ignoring because of --batch" return if __name__ == "__main__": main() cobbler-2.4.1/koan/sub_process.py000066400000000000000000001157731227367477500170210ustar00rootroot00000000000000# subprocess - Subprocesses with accessible I/O streams # # For more information about this module, see PEP 324. # # This module should remain compatible with Python 2.2, see PEP 291. # # Copyright (c) 2003-2005 by Peter Astrand # # Licensed to PSF under a Contributor Agreement. # See http://www.python.org/2.4/license for licensing details. r"""subprocess - Subprocesses with accessible I/O streams This module allows you to spawn processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, like: os.system os.spawn* os.popen* popen2.* commands.* Information about how the subprocess module can be used to replace these modules and functions can be found below. Using the subprocess module =========================== This module defines one class called Popen: class Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0): Arguments are: args should be a string, or a sequence of program arguments. The program to execute is normally the first item in the args sequence or string, but can be explicitly set by using the executable argument. On UNIX, with shell=False (default): In this case, the Popen class uses os.execvp() to execute the child program. args should normally be a sequence. A string will be treated as a sequence with the string as the only item (the program to execute). On UNIX, with shell=True: If args is a string, it specifies the command string to execute through the shell. If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional shell arguments. 
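    For illustration, an added example (not part of the original module
    documentation) of the two UNIX cases described above; the commands and
    the /tmp path are arbitrary:

        from subprocess import Popen, PIPE

        # shell=False (the default): args is a sequence, exec'd directly
        p = Popen(["ls", "-l", "/tmp"], stdout=PIPE)
        print p.communicate()[0]

        # shell=True: args is a single string handed to /bin/sh -c
        p = Popen("ls -l /tmp | wc -l", shell=True, stdout=PIPE)
        print p.communicate()[0]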
On Windows: the Popen class uses CreateProcess() to execute the child program, which operates on strings. If args is a sequence, it will be converted to a string using the list2cmdline method. Please note that not all MS Windows applications interpret the command line the same way: The list2cmdline is designed for applications using the same rules as the MS C runtime. bufsize, if given, has the same meaning as the corresponding argument to the built-in open() function: 0 means unbuffered, 1 means line buffered, any other positive value means use a buffer of (approximately) that size. A negative bufsize means to use the system default, which usually means fully buffered. The default value for bufsize is 0 (unbuffered). stdin, stdout and stderr specify the executed programs' standard input, standard output and standard error file handles, respectively. Valid values are PIPE, an existing file descriptor (a positive integer), an existing file object, and None. PIPE indicates that a new pipe to the child should be created. With None, no redirection will occur; the child's file handles will be inherited from the parent. Additionally, stderr can be STDOUT, which indicates that the stderr data from the applications should be captured into the same file handle as for stdout. If preexec_fn is set to a callable object, this object will be called in the child process just before the child is executed. If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed. if shell is true, the specified command will be executed through the shell. If cwd is not None, the current directory will be changed to cwd before the child is executed. If env is not None, it defines the environment variables for the new process. If universal_newlines is true, the file objects stdout and stderr are opened as a text files, but lines may be terminated by any of '\n', the Unix end-of-line convention, '\r', the Macintosh convention or '\r\n', the Windows convention. All of these external representations are seen as '\n' by the Python program. Note: This feature is only available if Python is built with universal newline support (the default). Also, the newlines attribute of the file objects stdout, stdin and stderr are not updated by the communicate() method. The startupinfo and creationflags, if given, will be passed to the underlying CreateProcess() function. They can specify things such as appearance of the main window and priority for the new process. (Windows only) This module also defines two shortcut functions: call(*args, **kwargs): Run command with arguments. Wait for command to complete, then return the returncode attribute. The arguments are the same as for the Popen constructor. Example: retcode = call(["ls", "-l"]) Exceptions ---------- Exceptions raised in the child process, before the new program has started to execute, will be re-raised in the parent. Additionally, the exception object will have one extra attribute called 'child_traceback', which is a string containing traceback information from the childs point of view. The most common exception raised is OSError. This occurs, for example, when trying to execute a non-existent file. Applications should prepare for OSErrors. A ValueError will be raised if Popen is called with invalid arguments. Security -------- Unlike some other popen functions, this implementation will never call /bin/sh implicitly. This means that all characters, including shell metacharacters, can safely be passed to child processes. 
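    For illustration, an added example (not part of the original module
    documentation) of the security note above; the filename is a made-up
    piece of untrusted input:

        from subprocess import call

        filename = "report; rm -rf data.txt"
        # passed as a list element, the semicolon and spaces are just part
        # of a single argument; no shell ever sees the string, so it cannot
        # be interpreted as a command separator
        call(["touch", filename])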
Popen objects ============= Instances of the Popen class have the following methods: poll() Check if child process has terminated. Returns returncode attribute. wait() Wait for child process to terminate. Returns returncode attribute. communicate(input=None) Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional stdin argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr). Note: The data read is buffered in memory, so do not use this method if the data size is large or unlimited. The following attributes are also available: stdin If the stdin argument is PIPE, this attribute is a file object that provides input to the child process. Otherwise, it is None. stdout If the stdout argument is PIPE, this attribute is a file object that provides output from the child process. Otherwise, it is None. stderr If the stderr argument is PIPE, this attribute is file object that provides error output from the child process. Otherwise, it is None. pid The process ID of the child process. returncode The child return code. A None value indicates that the process hasn't terminated yet. A negative value -N indicates that the child was terminated by signal N (UNIX only). Replacing older functions with the subprocess module ==================================================== In this section, "a ==> b" means that b can be used as a replacement for a. Note: All functions in this section fail (more or less) silently if the executed program cannot be found; this module raises an OSError exception. In the following examples, we assume that the subprocess module is imported with "from subprocess import *". Replacing /bin/sh shell backquote --------------------------------- output=`mycmd myarg` ==> output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0] Replacing shell pipe line ------------------------- output=`dmesg | grep hda` ==> p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) output = p2.communicate()[0] Replacing os.system() --------------------- sts = os.system("mycmd" + " myarg") ==> p = Popen("mycmd" + " myarg", shell=True) sts = os.waitpid(p.pid, 0) Note: * Calling the program through the shell is usually not required. * It's easier to look at the returncode attribute than the exitstatus. 
A more real-world example would look like this: try: retcode = call("mycmd" + " myarg", shell=True) if retcode < 0: print >>sys.stderr, "Child was terminated by signal", -retcode else: print >>sys.stderr, "Child returned", retcode except OSError, e: print >>sys.stderr, "Execution failed:", e Replacing os.spawn* ------------------- P_NOWAIT example: pid = os.spawnlp(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg") ==> pid = Popen(["/bin/mycmd", "myarg"]).pid P_WAIT example: retcode = os.spawnlp(os.P_WAIT, "/bin/mycmd", "mycmd", "myarg") ==> retcode = call(["/bin/mycmd", "myarg"]) Vector example: os.spawnvp(os.P_NOWAIT, path, args) ==> Popen([path] + args[1:]) Environment example: os.spawnlpe(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg", env) ==> Popen(["/bin/mycmd", "myarg"], env={"PATH": "/usr/bin"}) Replacing os.popen* ------------------- pipe = os.popen(cmd, mode='r', bufsize) ==> pipe = Popen(cmd, shell=True, bufsize=bufsize, stdout=PIPE).stdout pipe = os.popen(cmd, mode='w', bufsize) ==> pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin (child_stdin, child_stdout) = os.popen2(cmd, mode, bufsize) ==> p = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, close_fds=True) (child_stdin, child_stdout) = (p.stdin, p.stdout) (child_stdin, child_stdout, child_stderr) = os.popen3(cmd, mode, bufsize) ==> p = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True) (child_stdin, child_stdout, child_stderr) = (p.stdin, p.stdout, p.stderr) (child_stdin, child_stdout_and_stderr) = os.popen4(cmd, mode, bufsize) ==> p = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True) (child_stdin, child_stdout_and_stderr) = (p.stdin, p.stdout) Replacing popen2.* ------------------ Note: If the cmd argument to popen2 functions is a string, the command is executed through /bin/sh. If it is a list, the command is directly executed. (child_stdout, child_stdin) = popen2.popen2("somestring", bufsize, mode) ==> p = Popen(["somestring"], shell=True, bufsize=bufsize stdin=PIPE, stdout=PIPE, close_fds=True) (child_stdout, child_stdin) = (p.stdout, p.stdin) (child_stdout, child_stdin) = popen2.popen2(["mycmd", "myarg"], bufsize, mode) ==> p = Popen(["mycmd", "myarg"], bufsize=bufsize, stdin=PIPE, stdout=PIPE, close_fds=True) (child_stdout, child_stdin) = (p.stdout, p.stdin) The popen2.Popen3 and popen3.Popen4 basically works as subprocess.Popen, except that: * subprocess.Popen raises an exception if the execution fails * the capturestderr argument is replaced with the stderr argument. * stdin=PIPE and stdout=PIPE must be specified. * popen2 closes all filedescriptors by default, but you have to specify close_fds=True with subprocess.Popen. 
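    For illustration, an added example (not part of the original module
    documentation) of the first difference listed above -- a missing
    executable raises OSError in the parent rather than failing silently;
    the path is deliberately bogus:

        import subprocess

        try:
            subprocess.Popen(["/this/path/does/not/exist"])
        except OSError, e:
            print "exec failed:", e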
""" import sys mswindows = (sys.platform == "win32") import os import types import traceback if mswindows: import threading import msvcrt if 0: # <-- change this to use pywin32 instead of the _subprocess driver import pywintypes from win32api import GetStdHandle, STD_INPUT_HANDLE, \ STD_OUTPUT_HANDLE, STD_ERROR_HANDLE from win32api import GetCurrentProcess, DuplicateHandle, \ GetModuleFileName, GetVersion from win32con import DUPLICATE_SAME_ACCESS, SW_HIDE from win32pipe import CreatePipe from win32process import CreateProcess, STARTUPINFO, \ GetExitCodeProcess, STARTF_USESTDHANDLES, \ STARTF_USESHOWWINDOW, CREATE_NEW_CONSOLE from win32event import WaitForSingleObject, INFINITE, WAIT_OBJECT_0 else: from _subprocess import * class STARTUPINFO: dwFlags = 0 hStdInput = None hStdOutput = None hStdError = None class pywintypes: error = IOError else: import select import errno import fcntl import pickle __all__ = ["Popen", "PIPE", "STDOUT", "call"] try: MAXFD = os.sysconf("SC_OPEN_MAX") except: MAXFD = 256 # True/False does not exist on 2.2.0 try: False except NameError: False = 0 True = 1 _active = [] def _cleanup(): for inst in _active[:]: inst.poll() PIPE = -1 STDOUT = -2 def call(*args, **kwargs): """Run command with arguments. Wait for command to complete, then return the returncode attribute. The arguments are the same as for the Popen constructor. Example: retcode = call(["ls", "-l"]) """ return Popen(*args, **kwargs).wait() def list2cmdline(seq): """ Translate a sequence of arguments into a command line string, using the same rules as the MS C runtime: 1) Arguments are delimited by white space, which is either a space or a tab. 2) A string surrounded by double quotation marks is interpreted as a single argument, regardless of white space contained within. A quoted string can be embedded in an argument. 3) A double quotation mark preceded by a backslash is interpreted as a literal double quotation mark. 4) Backslashes are interpreted literally, unless they immediately precede a double quotation mark. 5) If backslashes immediately precede a double quotation mark, every pair of backslashes is interpreted as a literal backslash. If the number of backslashes is odd, the last backslash escapes the next double quotation mark as described in rule 3. """ # See # http://msdn.microsoft.com/library/en-us/vccelng/htm/progs_12.asp result = [] needquote = False for arg in seq: bs_buf = [] # Add a space to separate this argument from the others if result: result.append(' ') needquote = (" " in arg) or ("\t" in arg) if needquote: result.append('"') for c in arg: if c == '\\': # Don't know if we need to double yet. bs_buf.append(c) elif c == '"': # Double backspaces. result.append('\\' * len(bs_buf)*2) bs_buf = [] result.append('\\"') else: # Normal char if bs_buf: result.extend(bs_buf) bs_buf = [] result.append(c) # Add remaining backspaces, if any. 
if bs_buf: result.extend(bs_buf) if needquote: result.extend(bs_buf) result.append('"') return ''.join(result) class Popen(object): def __init__(self, args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0): """Create new Popen instance.""" _cleanup() if not isinstance(bufsize, (int, long)): raise TypeError("bufsize must be an integer") if mswindows: if preexec_fn is not None: raise ValueError("preexec_fn is not supported on Windows " "platforms") if close_fds: raise ValueError("close_fds is not supported on Windows " "platforms") else: # POSIX if startupinfo is not None: raise ValueError("startupinfo is only supported on Windows " "platforms") if creationflags != 0: raise ValueError("creationflags is only supported on Windows " "platforms") self.stdin = None self.stdout = None self.stderr = None self.pid = None self.returncode = None self.universal_newlines = universal_newlines # Input and output objects. The general principle is like # this: # # Parent Child # ------ ----- # p2cwrite ---stdin---> p2cread # c2pread <--stdout--- c2pwrite # errread <--stderr--- errwrite # # On POSIX, the child objects are file descriptors. On # Windows, these are Windows file handles. The parent objects # are file descriptors on both platforms. The parent objects # are None when not using PIPEs. The child objects are None # when not redirecting. (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) = self._get_handles(stdin, stdout, stderr) self._execute_child(args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) if p2cwrite: self.stdin = os.fdopen(p2cwrite, 'wb', bufsize) if c2pread: if universal_newlines: self.stdout = os.fdopen(c2pread, 'rU', bufsize) else: self.stdout = os.fdopen(c2pread, 'rb', bufsize) if errread: if universal_newlines: self.stderr = os.fdopen(errread, 'rU', bufsize) else: self.stderr = os.fdopen(errread, 'rb', bufsize) _active.append(self) def _translate_newlines(self, data): data = data.replace("\r\n", "\n") data = data.replace("\r", "\n") return data if mswindows: # # Windows methods # def _get_handles(self, stdin, stdout, stderr): """Construct and return tupel with IO objects: p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite """ if stdin == None and stdout == None and stderr == None: return (None, None, None, None, None, None) p2cread, p2cwrite = None, None c2pread, c2pwrite = None, None errread, errwrite = None, None if stdin == None: p2cread = GetStdHandle(STD_INPUT_HANDLE) elif stdin == PIPE: p2cread, p2cwrite = CreatePipe(None, 0) # Detach and turn into fd p2cwrite = p2cwrite.Detach() p2cwrite = msvcrt.open_osfhandle(p2cwrite, 0) elif type(stdin) == types.IntType: p2cread = msvcrt.get_osfhandle(stdin) else: # Assuming file-like object p2cread = msvcrt.get_osfhandle(stdin.fileno()) p2cread = self._make_inheritable(p2cread) if stdout == None: c2pwrite = GetStdHandle(STD_OUTPUT_HANDLE) elif stdout == PIPE: c2pread, c2pwrite = CreatePipe(None, 0) # Detach and turn into fd c2pread = c2pread.Detach() c2pread = msvcrt.open_osfhandle(c2pread, 0) elif type(stdout) == types.IntType: c2pwrite = msvcrt.get_osfhandle(stdout) else: # Assuming file-like object c2pwrite = msvcrt.get_osfhandle(stdout.fileno()) c2pwrite = self._make_inheritable(c2pwrite) if stderr == None: errwrite = GetStdHandle(STD_ERROR_HANDLE) elif stderr == PIPE: 
errread, errwrite = CreatePipe(None, 0) # Detach and turn into fd errread = errread.Detach() errread = msvcrt.open_osfhandle(errread, 0) elif stderr == STDOUT: errwrite = c2pwrite elif type(stderr) == types.IntType: errwrite = msvcrt.get_osfhandle(stderr) else: # Assuming file-like object errwrite = msvcrt.get_osfhandle(stderr.fileno()) errwrite = self._make_inheritable(errwrite) return (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) def _make_inheritable(self, handle): """Return a duplicate of handle, which is inheritable""" return DuplicateHandle(GetCurrentProcess(), handle, GetCurrentProcess(), 0, 1, DUPLICATE_SAME_ACCESS) def _find_w9xpopen(self): """Find and return absolut path to w9xpopen.exe""" w9xpopen = os.path.join(os.path.dirname(GetModuleFileName(0)), "w9xpopen.exe") if not os.path.exists(w9xpopen): # Eeek - file-not-found - possibly an embedding # situation - see if we can locate it in sys.exec_prefix w9xpopen = os.path.join(os.path.dirname(sys.exec_prefix), "w9xpopen.exe") if not os.path.exists(w9xpopen): raise RuntimeError("Cannot locate w9xpopen.exe, which is " "needed for Popen to work with your " "shell or platform.") return w9xpopen def _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite): """Execute program (MS Windows version)""" if not isinstance(args, types.StringTypes): args = list2cmdline(args) # Process startup details default_startupinfo = STARTUPINFO() if startupinfo == None: startupinfo = default_startupinfo if not None in (p2cread, c2pwrite, errwrite): startupinfo.dwFlags |= STARTF_USESTDHANDLES startupinfo.hStdInput = p2cread startupinfo.hStdOutput = c2pwrite startupinfo.hStdError = errwrite if shell: default_startupinfo.dwFlags |= STARTF_USESHOWWINDOW default_startupinfo.wShowWindow = SW_HIDE comspec = os.environ.get("COMSPEC", "cmd.exe") args = comspec + " /c " + args if (GetVersion() >= 0x80000000L or os.path.basename(comspec).lower() == "command.com"): # Win9x, or using command.com on NT. We need to # use the w9xpopen intermediate program. For more # information, see KB Q150956 # (http://web.archive.org/web/20011105084002/http://support.microsoft.com/support/kb/articles/Q150/9/56.asp) w9xpopen = self._find_w9xpopen() args = '"%s" %s' % (w9xpopen, args) # Not passing CREATE_NEW_CONSOLE has been known to # cause random failures on win9x. Specifically a # dialog: "Your program accessed mem currently in # use at xxx" and a hopeful warning about the # stability of your system. Cost is Ctrl+C wont # kill children. creationflags |= CREATE_NEW_CONSOLE # Start the process try: hp, ht, pid, tid = CreateProcess(executable, args, # no special security None, None, # must inherit handles to pass std # handles 1, creationflags, env, cwd, startupinfo) except pywintypes.error, e: # Translate pywintypes.error to WindowsError, which is # a subclass of OSError. FIXME: We should really # translate errno using _sys_errlist (or simliar), but # how can this be done from Python? raise WindowsError(*e.args) # Retain the process handle, but close the thread handle self._handle = hp self.pid = pid ht.Close() # Child is launched. Close the parent's copy of those pipe # handles that only the child should have open. You need # to make sure that no handles to the write end of the # output pipe are maintained in this process or else the # pipe will not close when the child process exits and the # ReadFile will hang. 
if p2cread != None: p2cread.Close() if c2pwrite != None: c2pwrite.Close() if errwrite != None: errwrite.Close() def poll(self): """Check if child process has terminated. Returns returncode attribute.""" if self.returncode == None: if WaitForSingleObject(self._handle, 0) == WAIT_OBJECT_0: self.returncode = GetExitCodeProcess(self._handle) _active.remove(self) return self.returncode def wait(self): """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode == None: obj = WaitForSingleObject(self._handle, INFINITE) self.returncode = GetExitCodeProcess(self._handle) _active.remove(self) return self.returncode def _readerthread(self, fh, buffer): buffer.append(fh.read()) def communicate(self, input=None): """Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr).""" stdout = None # Return stderr = None # Return if self.stdout: stdout = [] stdout_thread = threading.Thread(target=self._readerthread, args=(self.stdout, stdout)) stdout_thread.setDaemon(True) stdout_thread.start() if self.stderr: stderr = [] stderr_thread = threading.Thread(target=self._readerthread, args=(self.stderr, stderr)) stderr_thread.setDaemon(True) stderr_thread.start() if self.stdin: if input != None: self.stdin.write(input) self.stdin.close() if self.stdout: stdout_thread.join() if self.stderr: stderr_thread.join() # All data exchanged. Translate lists into strings. if stdout != None: stdout = stdout[0] if stderr != None: stderr = stderr[0] # Translate newlines, if requested. We cannot let the file # object do the translation: It is based on stdio, which is # impossible to combine with select (unless forcing no # buffering). 
if self.universal_newlines and hasattr(open, 'newlines'): if stdout: stdout = self._translate_newlines(stdout) if stderr: stderr = self._translate_newlines(stderr) self.wait() return (stdout, stderr) else: # # POSIX methods # def _get_handles(self, stdin, stdout, stderr): """Construct and return tupel with IO objects: p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite """ p2cread, p2cwrite = None, None c2pread, c2pwrite = None, None errread, errwrite = None, None if stdin == None: pass elif stdin == PIPE: p2cread, p2cwrite = os.pipe() elif type(stdin) == types.IntType: p2cread = stdin else: # Assuming file-like object p2cread = stdin.fileno() if stdout == None: pass elif stdout == PIPE: c2pread, c2pwrite = os.pipe() elif type(stdout) == types.IntType: c2pwrite = stdout else: # Assuming file-like object c2pwrite = stdout.fileno() if stderr == None: pass elif stderr == PIPE: errread, errwrite = os.pipe() elif stderr == STDOUT: errwrite = c2pwrite elif type(stderr) == types.IntType: errwrite = stderr else: # Assuming file-like object errwrite = stderr.fileno() return (p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite) def _set_cloexec_flag(self, fd): try: cloexec_flag = fcntl.FD_CLOEXEC except AttributeError: cloexec_flag = 1 old = fcntl.fcntl(fd, fcntl.F_GETFD) fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag) def _close_fds(self, but): for i in range(3, MAXFD): if i == but: continue try: os.close(i) except: pass def _execute_child(self, args, executable, preexec_fn, close_fds, cwd, env, universal_newlines, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite): """Execute program (POSIX version)""" if isinstance(args, types.StringTypes): args = [args] if shell: args = ["/bin/sh", "-c"] + args if executable == None: executable = args[0] # For transferring possible exec failure from child to parent # The first char specifies the exception type: 0 means # OSError, 1 means some other error. errpipe_read, errpipe_write = os.pipe() self._set_cloexec_flag(errpipe_write) self.pid = os.fork() if self.pid == 0: # Child try: # Close parent's pipe ends if p2cwrite: os.close(p2cwrite) if c2pread: os.close(c2pread) if errread: os.close(errread) os.close(errpipe_read) # Dup fds for child if p2cread: os.dup2(p2cread, 0) if c2pwrite: os.dup2(c2pwrite, 1) if errwrite: os.dup2(errwrite, 2) # Close pipe fds. Make sure we doesn't close the same # fd more than once. if p2cread: os.close(p2cread) if c2pwrite and c2pwrite not in (p2cread,): os.close(c2pwrite) if errwrite and errwrite not in (p2cread, c2pwrite): os.close(errwrite) # Close all other fds, if asked for if close_fds: self._close_fds(but=errpipe_write) if cwd != None: os.chdir(cwd) if preexec_fn: apply(preexec_fn) if env == None: os.execvp(executable, args) else: os.execvpe(executable, args, env) except: exc_type, exc_value, tb = sys.exc_info() # Save the traceback and attach it to the exception object exc_lines = traceback.format_exception(exc_type, exc_value, tb) exc_value.child_traceback = ''.join(exc_lines) os.write(errpipe_write, pickle.dumps(exc_value)) # This exitcode won't be reported to applications, so it # really doesn't matter what we return. 
os._exit(255) # Parent os.close(errpipe_write) if p2cread and p2cwrite: os.close(p2cread) if c2pwrite and c2pread: os.close(c2pwrite) if errwrite and errread: os.close(errwrite) # Wait for exec to fail or succeed; possibly raising exception data = os.read(errpipe_read, 1048576) # Exceptions limited to 1 MB os.close(errpipe_read) if data != "": os.waitpid(self.pid, 0) child_exception = pickle.loads(data) raise child_exception def _handle_exitstatus(self, sts): if os.WIFSIGNALED(sts): self.returncode = -os.WTERMSIG(sts) elif os.WIFEXITED(sts): self.returncode = os.WEXITSTATUS(sts) else: # Should never happen raise RuntimeError("Unknown child exit status!") _active.remove(self) def poll(self): """Check if child process has terminated. Returns returncode attribute.""" if self.returncode == None: try: pid, sts = os.waitpid(self.pid, os.WNOHANG) if pid == self.pid: self._handle_exitstatus(sts) except os.error: pass return self.returncode def wait(self): """Wait for child process to terminate. Returns returncode attribute.""" if self.returncode == None: pid, sts = os.waitpid(self.pid, 0) self._handle_exitstatus(sts) return self.returncode def communicate(self, input=None): """Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child. communicate() returns a tuple (stdout, stderr).""" read_set = [] write_set = [] stdout = None # Return stderr = None # Return if self.stdin: # Flush stdio buffer. This might block, if the user has # been writing to .stdin in an uncontrolled fashion. self.stdin.flush() if input: write_set.append(self.stdin) else: self.stdin.close() if self.stdout: read_set.append(self.stdout) stdout = [] if self.stderr: read_set.append(self.stderr) stderr = [] while read_set or write_set: rlist, wlist, xlist = select.select(read_set, write_set, []) if self.stdin in wlist: # When select has indicated that the file is writable, # we can write up to PIPE_BUF bytes without risk # blocking. POSIX defines PIPE_BUF >= 512 bytes_written = os.write(self.stdin.fileno(), input[:512]) input = input[bytes_written:] if not input: self.stdin.close() write_set.remove(self.stdin) if self.stdout in rlist: data = os.read(self.stdout.fileno(), 1024) if data == "": self.stdout.close() read_set.remove(self.stdout) stdout.append(data) if self.stderr in rlist: data = os.read(self.stderr.fileno(), 1024) if data == "": self.stderr.close() read_set.remove(self.stderr) stderr.append(data) # All data exchanged. Translate lists into strings. if stdout != None: stdout = ''.join(stdout) if stderr != None: stderr = ''.join(stderr) # Translate newlines, if requested. We cannot let the file # object do the translation: It is based on stdio, which is # impossible to combine with select (unless forcing no # buffering). if self.universal_newlines and hasattr(open, 'newlines'): if stdout: stdout = self._translate_newlines(stdout) if stderr: stderr = self._translate_newlines(stderr) self.wait() return (stdout, stderr) def _demo_posix(): # # Example 1: Simple redirection: Get process list # plist = Popen(["ps"], stdout=PIPE).communicate()[0] print "Process list:" print plist # # Example 2: Change uid before executing child # if os.getuid() == 0: p = Popen(["id"], preexec_fn=lambda: os.setuid(100)) p.wait() # # Example 3: Connecting several subprocesses # print "Looking for 'hda'..." 
p1 = Popen(["dmesg"], stdout=PIPE) p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE) print repr(p2.communicate()[0]) # # Example 4: Catch execution error # print print "Trying a weird file..." try: print Popen(["/this/path/does/not/exist"]).communicate() except OSError, e: if e.errno == errno.ENOENT: print "The file didn't exist. I thought so..." print "Child traceback:" print e.child_traceback else: print "Error", e.errno else: print >>sys.stderr, "Gosh. No error." def _demo_windows(): # # Example 1: Connecting several subprocesses # print "Looking for 'PROMPT' in set output..." p1 = Popen("set", stdout=PIPE, shell=True) p2 = Popen('find "PROMPT"', stdin=p1.stdout, stdout=PIPE) print repr(p2.communicate()[0]) # # Example 2: Simple execution of program # print "Executing calc..." p = Popen("calc") p.wait() if __name__ == "__main__": if mswindows: _demo_windows() else: _demo_posix() cobbler-2.4.1/koan/text_wrap.py000066400000000000000000000330531227367477500164750ustar00rootroot00000000000000"""Text wrapping and filling. (note: repackaged in koan because it's not present in RHEL3) """ # Copyright (C) 1999-2001 Gregory P. Ward. # Copyright (C) 2002, 2003 Python Software Foundation. # Written by Greg Ward __revision__ = "$Id: textwrap.py,v 1.32.8.2 2004/05/13 01:48:15 gward Exp $" import string, re # Do the right thing with boolean values for all known Python versions # (so this module can be copied to projects that don't depend on Python # 2.3, e.g. Optik and Docutils). try: True, False except NameError: (True, False) = (1, 0) __all__ = ['TextWrapper', 'wrap', 'fill'] # Hardcode the recognized whitespace characters to the US-ASCII # whitespace characters. The main reason for doing this is that in # ISO-8859-1, 0xa0 is non-breaking whitespace, so in certain locales # that character winds up in string.whitespace. Respecting # string.whitespace in those cases would 1) make textwrap treat 0xa0 the # same as any other whitespace char, which is clearly wrong (it's a # *non-breaking* space), 2) possibly cause problems with Unicode, # since 0xa0 is not in range(128). _whitespace = '\t\n\x0b\x0c\r ' class TextWrapper: """ Object for wrapping/filling text. The public interface consists of the wrap() and fill() methods; the other methods are just there for subclasses to override in order to tweak the default behaviour. If you want to completely replace the main wrapping algorithm, you'll probably have to override _wrap_chunks(). Several instance attributes control various aspects of wrapping: width (default: 70) the maximum width of wrapped lines (unless break_long_words is false) initial_indent (default: "") string that will be prepended to the first line of wrapped output. Counts towards the line's width. subsequent_indent (default: "") string that will be prepended to all lines save the first of wrapped output; also counts towards each line's width. expand_tabs (default: true) Expand tabs in input text to spaces before further processing. Each tab will become 1 .. 8 spaces, depending on its position in its line. If false, each tab is treated as a single character. replace_whitespace (default: true) Replace all whitespace characters in the input text by spaces after tab expansion. Note that if expand_tabs is false and replace_whitespace is true, every tab will be converted to a single space! fix_sentence_endings (default: false) Ensure that sentence-ending punctuation is always followed by two spaces. Off by default because the algorithm is (unavoidably) imperfect. 
break_long_words (default: true) Break words longer than 'width'. If false, those words will not be broken, and some lines might be longer than 'width'. """ whitespace_trans = string.maketrans(_whitespace, ' ' * len(_whitespace)) unicode_whitespace_trans = {} uspace = ord(u' ') for x in map(ord, _whitespace): unicode_whitespace_trans[x] = uspace # This funky little regex is just the trick for splitting # text up into word-wrappable chunks. E.g. # "Hello there -- you goof-ball, use the -b option!" # splits into # Hello/ /there/ /--/ /you/ /goof-/ball,/ /use/ /the/ /-b/ /option! # (after stripping out empty strings). wordsep_re = re.compile(r'(\s+|' # any whitespace r'-*\w{2,}-(?=\w{2,})|' # hyphenated words r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))') # em-dash # XXX will there be a locale-or-charset-aware version of # string.lowercase in 2.3? sentence_end_re = re.compile(r'[%s]' # lowercase letter r'[\.\!\?]' # sentence-ending punct. r'[\"\']?' # optional end-of-quote % string.lowercase) def __init__(self, width=70, initial_indent="", subsequent_indent="", expand_tabs=True, replace_whitespace=True, fix_sentence_endings=False, break_long_words=True): self.width = width self.initial_indent = initial_indent self.subsequent_indent = subsequent_indent self.expand_tabs = expand_tabs self.replace_whitespace = replace_whitespace self.fix_sentence_endings = fix_sentence_endings self.break_long_words = break_long_words # -- Private methods ----------------------------------------------- # (possibly useful for subclasses to override) def _munge_whitespace(self, text): """_munge_whitespace(text : string) -> string Munge whitespace in text: expand tabs and convert all other whitespace characters to spaces. Eg. " foo\tbar\n\nbaz" becomes " foo bar baz". """ if self.expand_tabs: text = text.expandtabs() if self.replace_whitespace: if isinstance(text, str): text = text.translate(self.whitespace_trans) elif isinstance(text, unicode): text = text.translate(self.unicode_whitespace_trans) return text def _split(self, text): """_split(text : string) -> [string] Split the text to wrap into indivisible chunks. Chunks are not quite the same as words; see wrap_chunks() for full details. As an example, the text Look, goof-ball -- use the -b option! breaks into the following chunks: 'Look,', ' ', 'goof-', 'ball', ' ', '--', ' ', 'use', ' ', 'the', ' ', '-b', ' ', 'option!' """ chunks = self.wordsep_re.split(text) chunks = filter(None, chunks) return chunks def _fix_sentence_endings(self, chunks): """_fix_sentence_endings(chunks : [string]) Correct for sentence endings buried in 'chunks'. Eg. when the original text contains "... foo.\nBar ...", munge_whitespace() and split() will convert that to [..., "foo.", " ", "Bar", ...] which has one too few spaces; this method simply changes the one space to two. """ i = 0 pat = self.sentence_end_re while i < len(chunks)-1: if chunks[i+1] == " " and pat.search(chunks[i]): chunks[i+1] = " " i += 2 else: i += 1 def _handle_long_word(self, chunks, cur_line, cur_len, width): """_handle_long_word(chunks : [string], cur_line : [string], cur_len : int, width : int) Handle a chunk of text (most likely a word, not whitespace) that is too long to fit in any line. """ space_left = max(width - cur_len, 1) # If we're allowed to break long words, then do so: put as much # of the next chunk onto the current line as will fit. if self.break_long_words: cur_line.append(chunks[0][0:space_left]) chunks[0] = chunks[0][space_left:] # Otherwise, we have to preserve the long word intact. 
Only add # it to the current line if there's nothing already there -- # that minimizes how much we violate the width constraint. elif not cur_line: cur_line.append(chunks.pop(0)) # If we're not allowed to break long words, and there's already # text on the current line, do nothing. Next time through the # main loop of _wrap_chunks(), we'll wind up here again, but # cur_len will be zero, so the next line will be entirely # devoted to the long word that we can't handle right now. def _wrap_chunks(self, chunks): """_wrap_chunks(chunks : [string]) -> [string] Wrap a sequence of text chunks and return a list of lines of length 'self.width' or less. (If 'break_long_words' is false, some lines may be longer than this.) Chunks correspond roughly to words and the whitespace between them: each chunk is indivisible (modulo 'break_long_words'), but a line break can come between any two chunks. Chunks should not have internal whitespace; ie. a chunk is either all whitespace or a "word". Whitespace chunks will be removed from the beginning and end of lines, but apart from that whitespace is preserved. """ lines = [] if self.width <= 0: raise ValueError("invalid width %r (must be > 0)" % self.width) while chunks: # Start the list of chunks that will make up the current line. # cur_len is just the length of all the chunks in cur_line. cur_line = [] cur_len = 0 # Figure out which static string will prefix this line. if lines: indent = self.subsequent_indent else: indent = self.initial_indent # Maximum width for this line. width = self.width - len(indent) # First chunk on line is whitespace -- drop it, unless this # is the very beginning of the text (ie. no lines started yet). if chunks[0].strip() == '' and lines: del chunks[0] while chunks: l = len(chunks[0]) # Can at least squeeze this chunk onto the current line. if cur_len + l <= width: cur_line.append(chunks.pop(0)) cur_len += l # Nope, this line is full. else: break # The current line is full, and the next chunk is too big to # fit on *any* line (not just this one). if chunks and len(chunks[0]) > width: self._handle_long_word(chunks, cur_line, cur_len, width) # If the last chunk on this line is all whitespace, drop it. if cur_line and cur_line[-1].strip() == '': del cur_line[-1] # Convert current line back to a string and store it in list # of all lines (return value). if cur_line: lines.append(indent + ''.join(cur_line)) return lines # -- Public interface ---------------------------------------------- def wrap(self, text): """wrap(text : string) -> [string] Reformat the single paragraph in 'text' so it fits in lines of no more than 'self.width' columns, and return a list of wrapped lines. Tabs in 'text' are expanded with string.expandtabs(), and all other whitespace characters (including newline) are converted to space. """ text = self._munge_whitespace(text) indent = self.initial_indent chunks = self._split(text) if self.fix_sentence_endings: self._fix_sentence_endings(chunks) return self._wrap_chunks(chunks) def fill(self, text): """fill(text : string) -> string Reformat the single paragraph in 'text' to fit in lines of no more than 'self.width' columns, and return a new string containing the entire wrapped paragraph. """ return "\n".join(self.wrap(text)) # -- Convenience interface --------------------------------------------- def wrap(text, width=70, **kwargs): """Wrap a single paragraph of text, returning a list of wrapped lines. 
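    For illustration, an added example (not in the original docstring); the
    input string and width are arbitrary, and the module is imported under
    the name koan ships it as:

        from text_wrap import wrap

        print wrap("The quick brown fox jumps over the lazy dog", width=10)
        # -> ['The quick', 'brown fox', 'jumps over', 'the lazy', 'dog']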
Reformat the single paragraph in 'text' so it fits in lines of no more than 'width' columns, and return a list of wrapped lines. By default, tabs in 'text' are expanded with string.expandtabs(), and all other whitespace characters (including newline) are converted to space. See TextWrapper class for available keyword args to customize wrapping behaviour. """ w = TextWrapper(width=width, **kwargs) return w.wrap(text) def fill(text, width=70, **kwargs): """Fill a single paragraph of text, returning a new string. Reformat the single paragraph in 'text' to fit in lines of no more than 'width' columns, and return a new string containing the entire wrapped paragraph. As with wrap(), tabs are expanded and other whitespace characters converted to space. See TextWrapper class for available keyword args to customize wrapping behaviour. """ w = TextWrapper(width=width, **kwargs) return w.fill(text) # -- Loosely related functionality ------------------------------------- def dedent(text): """dedent(text : string) -> string Remove any whitespace than can be uniformly removed from the left of every line in `text`. This can be used e.g. to make triple-quoted strings line up with the left edge of screen/whatever, while still presenting it in the source code in indented form. For example: def test(): # end first line with \ to avoid the empty line! s = '''\ hello world ''' print repr(s) # prints ' hello\n world\n ' print repr(dedent(s)) # prints 'hello\n world\n' """ lines = text.expandtabs().split('\n') margin = None for line in lines: content = line.lstrip() if not content: continue indent = len(line) - len(content) if margin is None: margin = indent else: margin = min(margin, indent) if margin is not None and margin > 0: for i in range(len(lines)): lines[i] = lines[i][margin:] return '\n'.join(lines) cobbler-2.4.1/koan/utils.py000066400000000000000000000422231227367477500156170ustar00rootroot00000000000000""" koan = kickstart over a network general usage functions Copyright 2006-2008 Red Hat, Inc and Others. Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import traceback import tempfile import exceptions ANCIENT_PYTHON = 0 try: try: import subprocess as sub_process except: import sub_process import urllib2 except: ANCIENT_PYTHON = 1 import shutil import errno import re import sys import xmlrpclib import string import urlgrabber VIRT_STATE_NAME_MAP = { 0 : "running", 1 : "running", 2 : "running", 3 : "paused", 4 : "shutdown", 5 : "shutdown", 6 : "crashed" } VALID_DRIVER_TYPES = ['raw', 'qcow', 'qcow2', 'vmdk', 'qed'] class InfoException(exceptions.Exception): """ Custom exception for tracking of fatal errors. """ def __init__(self,value,**args): self.value = value % args self.from_koan = 1 def __str__(self): return repr(self.value) def setupLogging(appname): """ set up logging ... 
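    For illustration only, an added usage sketch (not part of the original
    docstring); "koan" is just an example application name and the
    /var/log/koan directory is assumed to exist:

        import logging
        import utils

        utils.setupLogging("koan")
        logging.info("hello")   # written to /var/log/koan/koan.log and stderr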
code borrowed/adapted from virt-manager """ import logging import logging.handlers dateFormat = "%a, %d %b %Y %H:%M:%S" fileFormat = "[%(asctime)s " + appname + " %(process)d] %(levelname)s (%(module)s:%(lineno)d) %(message)s" streamFormat = "%(asctime)s %(levelname)-8s %(message)s" filename = "/var/log/koan/koan.log" rootLogger = logging.getLogger() rootLogger.setLevel(logging.DEBUG) fileHandler = logging.handlers.RotatingFileHandler(filename, "a", 1024*1024, 5) fileHandler.setFormatter(logging.Formatter(fileFormat, dateFormat)) rootLogger.addHandler(fileHandler) streamHandler = logging.StreamHandler(sys.stderr) streamHandler.setFormatter(logging.Formatter(streamFormat, dateFormat)) streamHandler.setLevel(logging.DEBUG) rootLogger.addHandler(streamHandler) def urlread(url): """ to support more distributions, implement (roughly) some parts of urlread and urlgrab from urlgrabber, in ways that are less cool and less efficient. """ print "- reading URL: %s" % url if url is None or url == "": raise InfoException, "invalid URL: %s" % url elif url[0:3] == "nfs": try: ndir = os.path.dirname(url[6:]) nfile = os.path.basename(url[6:]) nfsdir = tempfile.mkdtemp(prefix="koan_nfs",dir="/tmp") nfsfile = os.path.join(nfsdir,nfile) cmd = ["mount","-t","nfs","-o","ro", ndir, nfsdir] subprocess_call(cmd) fd = open(nfsfile) data = fd.read() fd.close() cmd = ["umount",nfsdir] subprocess_call(cmd) return data except: traceback.print_exc() raise InfoException, "Couldn't mount and read URL: %s" % url elif url[0:4] == "http": try: fd = urllib2.urlopen(url) data = fd.read() fd.close() return data except: if ANCIENT_PYTHON: # this logic is to support python 1.5 and EL 2 import urllib fd = urllib.urlopen(url) data = fd.read() fd.close() return data traceback.print_exc() raise InfoException, "Couldn't download: %s" % url elif url[0:4] == "file": try: fd = open(url[5:]) data = fd.read() fd.close() return data except: raise InfoException, "Couldn't read file from URL: %s" % url else: raise InfoException, "Unhandled URL protocol: %s" % url def urlgrab(url,saveto): """ like urlread, but saves contents to disk. see comments for urlread as to why it's this way. """ data = urlread(url) fd = open(saveto, "w+") fd.write(data) fd.close() def subprocess_call(cmd,ignore_rc=0): """ Wrapper around subprocess.call(...) """ print "- %s" % cmd if not ANCIENT_PYTHON: rc = sub_process.call(cmd) else: cmd = string.join(cmd, " ") print "cmdstr=(%s)" % cmd rc = os.system(cmd) if rc != 0 and not ignore_rc: raise InfoException, "command failed (%s)" % rc return rc def subprocess_get_response(cmd, ignore_rc=False): """ Wrapper around subprocess.check_output(...) """ print "- %s" % cmd rc = 0 result = "" if not ANCIENT_PYTHON: p = sub_process.Popen(cmd, stdout=sub_process.PIPE) result = p.communicate()[0] rc = p.wait() else: cmd = string.join(cmd, " ") print "cmdstr=(%s)" % cmd rc = os.system(cmd) if not ignore_rc and rc != 0: raise InfoException, "command failed (%s)" % rc return rc, result def input_string_or_hash(options,delim=None,allow_multiples=True): """ Older cobbler files stored configurations in a flat way, such that all values for strings. Newer versions of cobbler allow dictionaries. This function is used to allow loading of older value formats so new users of cobbler aren't broken in an upgrade. 
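    For illustration, an added example (not part of the original docstring);
    the option string is an arbitrary kernel-options style value:

        input_string_or_hash("console=ttyS0 ksdevice=eth0 noipv6", delim=" ")
        # -> {'console': 'ttyS0', 'ksdevice': 'eth0', 'noipv6': None}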
""" if options is None: return {} elif type(options) == list: raise InfoException("No idea what to do with list: %s" % options) elif type(options) == type(""): new_dict = {} tokens = string.split(options, delim) for t in tokens: tokens2 = string.split(t,"=") if len(tokens2) == 1: # this is a singleton option, no value key = tokens2[0] value = None else: key = tokens2[0] value = tokens2[1] # if we're allowing multiple values for the same key, # check to see if this token has already been # inserted into the dictionary of values already if key in new_dict.keys() and allow_multiples: # if so, check to see if there is already a list of values # otherwise convert the dictionary value to an array, and add # the new value to the end of the list if type(new_dict[key]) == list: new_dict[key].append(value) else: new_dict[key] = [new_dict[key], value] else: new_dict[key] = value # dict.pop is not avail in 2.2 if new_dict.has_key(""): del new_dict[""] return new_dict elif type(options) == type({}): options.pop('',None) return options else: raise InfoException("invalid input type: %s" % type(options)) def hash_to_string(hash): """ Convert a hash to a printable string. used primarily in the kernel options string and for some legacy stuff where koan expects strings (though this last part should be changed to hashes) """ buffer = "" if type(hash) != dict: return hash for key in hash: value = hash[key] if value is None: buffer = buffer + str(key) + " " elif type(value) == list: # this value is an array, so we print out every # key=value for item in value: buffer = buffer + str(key) + "=" + str(item) + " " else: buffer = buffer + str(key) + "=" + str(value) + " " return buffer def nfsmount(input_path): # input: [user@]server:/foo/bar/x.img as string # output: (dirname where mounted, last part of filename) as 2-element tuple # FIXME: move this function to util.py so other modules can use it # we have to mount it first filename = input_path.split("/")[-1] dirpath = string.join(input_path.split("/")[:-1],"/") tempdir = tempfile.mkdtemp(suffix='.mnt', prefix='koan_', dir='/tmp') mount_cmd = [ "/bin/mount", "-t", "nfs", "-o", "ro", dirpath, tempdir ] print "- running: %s" % mount_cmd rc = sub_process.call(mount_cmd) if not rc == 0: shutil.rmtree(tempdir, ignore_errors=True) raise koan.InfoException("nfs mount failed: %s" % dirpath) # NOTE: option for a blocking install might be nice, so we could do this # automatically, if supported by virt-install print "after install completes, you may unmount and delete %s" % tempdir return (tempdir, filename) def find_vm(conn, vmid): """ Extra bonus feature: vmid = -1 returns a list of everything This function from Func: fedorahosted.org/func """ vms = [] # this block of code borrowed from virt-manager: # get working domain's name ids = conn.listDomainsID(); for id in ids: vm = conn.lookupByID(id) vms.append(vm) # get defined domain names = conn.listDefinedDomains() for name in names: vm = conn.lookupByName(name) vms.append(vm) if vmid == -1: return vms for vm in vms: if vm.name() == vmid: return vm raise InfoException("koan could not find the VM to watch: %s" % vmid) def get_vm_state(conn, vmid): """ Returns the state of a libvirt VM, by name. From Func: fedorahosted.org/func """ state = find_vm(conn, vmid).info()[0] return VIRT_STATE_NAME_MAP.get(state,"unknown") def check_dist(): """ Determines what distro we're running under. 
""" if os.path.exists("/etc/debian_version"): import lsb_release return lsb_release.get_distro_information()['ID'].lower() elif os.path.exists("/etc/SuSE-release"): return "suse" else: # valid for Fedora and all Red Hat / Fedora derivatives return "redhat" def os_release(): """ This code is borrowed from Cobbler and really shouldn't be repeated. """ if ANCIENT_PYTHON: return ("unknown", 0) if check_dist() == "redhat": fh = open("/etc/redhat-release") data = fh.read().lower() if data.find("fedora") != -1: make = "fedora" elif data.find("centos") != -1: make = "centos" else: make = "redhat" release_index = data.find("release") rest = data[release_index+7:-1] tokens = rest.split(" ") for t in tokens: try: return (make,float(t)) except ValueError, ve: pass raise CX("failed to detect local OS version from /etc/redhat-release") elif check_dist() == "debian": import lsb_release release = lsb_release.get_distro_information()['RELEASE'] return ("debian", release) elif check_dist() == "ubuntu": version = sub_process.check_output(("lsb_release","--release","--short")).rstrip() make = "ubuntu" return (make, float(version)) elif check_dist() == "suse": fd = open("/etc/SuSE-release") for line in fd.read().split("\n"): if line.find("VERSION") != -1: version = line.replace("VERSION = ","") if line.find("PATCHLEVEL") != -1: rest = line.replace("PATCHLEVEL = ","") make = "suse" return (make, float(version)) else: return ("unknown",0) def uniqify(lst, purge=None): temp = {} for x in lst: temp[x] = 1 if purge is not None: temp2 = {} for x in temp.keys(): if x != purge: temp2[x] = 1 temp = temp2 return temp.keys() def get_network_info(): try: import ethtool except: try: import rhpl.ethtool ethtool = rhpl.ethtool except: raise InfoException("the rhpl or ethtool module is required to use this feature (is your OS>=EL3?)") interfaces = {} # get names inames = ethtool.get_devices() for iname in inames: mac = ethtool.get_hwaddr(iname) if mac == "00:00:00:00:00:00": mac = "?" try: ip = ethtool.get_ipaddr(iname) if ip == "127.0.0.1": ip = "?" except: ip = "?" bridge = 0 module = "" try: nm = ethtool.get_netmask(iname) except: nm = "?" interfaces[iname] = { "ip_address" : ip, "mac_address" : mac, "netmask" : nm, "bridge" : bridge, "module" : module } # print interfaces return interfaces def connect_to_server(server=None,port=None): if server is None: server = os.environ.get("COBBLER_SERVER","") if server == "": raise InfoException("--server must be specified") if port is None: port = 80 connect_ok = 0 try_urls = [ "http://%s:%s/cobbler_api" % (server,port), "https://%s:%s/cobbler_api" % (server,port), ] for url in try_urls: print "- looking for Cobbler at %s" % url server = __try_connect(url) if server is not None: return server raise InfoException ("Could not find Cobbler.") def create_xendomains_symlink(name): """ Create an /etc/xen/auto/ symlink for use with "xendomains"-style VM boot upon dom0 reboot. """ src = "/etc/xen/%s" % name dst = "/etc/xen/auto/%s" % name # Make sure symlink does not already exist. if os.path.exists(dst): print "Could not create %s symlink. File already exists in this location." % dst return False # Verify that the destination is writable if not os.access(os.path.dirname(dst), os.W_OK): print "Could not create %s symlink. Please check write permissions and ownership." % dst return False # check that xen config file exists and create symlink if os.path.exists(src): os.symlink(src, dst) return True else: print "Could not create %s symlink, source file %s is missing." 
% (dst, src) return False def libvirt_enable_autostart(domain_name): import libvirt try: conn = libvirt.open("qemu:///system") conn.listDefinedDomains() domain = conn.lookupByName(domain_name) domain.setAutostart(1) except: raise InfoException("libvirt could not find domain %s" % domain_name) if not domain.autostart: raise InfoException("Could not enable autostart on domain %s." % domain_name) def make_floppy(kickstart): (fd, floppy_path) = tempfile.mkstemp(suffix='.floppy', prefix='tmp', dir="/tmp") print "- creating floppy image at %s" % floppy_path # create the floppy image file cmd = "dd if=/dev/zero of=%s bs=1440 count=1024" % floppy_path print "- %s" % cmd rc = os.system(cmd) if not rc == 0: raise InfoException("dd failed") # vfatify cmd = "mkdosfs %s" % floppy_path print "- %s" % cmd rc = os.system(cmd) if not rc == 0: raise InfoException("mkdosfs failed") # mount the floppy mount_path = tempfile.mkdtemp(suffix=".mnt", prefix='tmp', dir="/tmp") cmd = "mount -o loop -t vfat %s %s" % (floppy_path, mount_path) print "- %s" % cmd rc = os.system(cmd) if not rc == 0: raise InfoException("mount failed") # download the kickstart file onto the mounted floppy print "- downloading %s" % kickstart save_file = os.path.join(mount_path, "unattended.txt") urlgrabber.urlgrab(kickstart,filename=save_file) # umount cmd = "umount %s" % mount_path print "- %s" % cmd rc = os.system(cmd) if not rc == 0: raise InfoException("umount failed") # return the path to the completed disk image to pass to virt-install return floppy_path def sync_file(ofile, nfile, uid, gid, mode): sub_process.call(['/usr/bin/diff', ofile, nfile]) shutil.copy(nfile, ofile) os.chmod(ofile,mode) os.chown(ofile,uid,gid) #class ServerProxy(xmlrpclib.ServerProxy): # # def __init__(self, url=None): # try: # xmlrpclib.ServerProxy.__init__(self, url, allow_none=True) # except: # # for RHEL3's xmlrpclib -- cobblerd should strip Nones anyway # xmlrpclib.ServerProxy.__init__(self, url) def __try_connect(url): try: xmlrpc_server = xmlrpclib.Server(url) xmlrpc_server.ping() return xmlrpc_server except: traceback.print_exc() return None def create_qemu_image_file(path, size, driver_type): if driver_type not in VALID_DRIVER_TYPES: raise InfoException, "Invalid QEMU image type: %s" % driver_type cmd = ["qemu-img", "create", "-f", driver_type, path, "%sG" % size] try: subprocess_call(cmd) except: traceback.print_exc() raise InfoException, "Image file create failed: %s" % string.join(cmd, " ") cobbler-2.4.1/koan/virtinstall.py000066400000000000000000000330661227367477500170370ustar00rootroot00000000000000""" Virtualization installation functions. Currently somewhat Xen/paravirt specific, will evolve later. Copyright 2006-2008 Red Hat, Inc. Michael DeHaan Original version based on virtguest-install Jeremy Katz Option handling added by Andrew Puch Simplified for use as library by koan, Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import re import shlex import app as koan import utils # The virtinst module will no longer be availabe to import in some # distros. We need to get all the info we need from the virt-install # command line tool. This should work on both old and new variants, # as the virt-install command line tool has always been provided by # python-virtinst (and now the new virt-install rpm). rc, response = utils.subprocess_get_response( shlex.split('virt-install --version'), True) if rc == 0: virtinst_version = response else: virtinst_version = None # This one's trickier. We need a list of supported os varients, but # the man page explicitly says not to parse the result of this command. # But we need it, and there's no other way to get it. I spoke with the # virt-install maintainers and they said the point of that message # is that you can't absolutely depend on the output not changing, but # at the moment it's the only option for us. Long term plans are for # virt-install to switch to libosinfo for OS metadata tracking, which # provides a library and tools for querying valid OS values. Until # that's available and pervasive the best we can do is to use the # module if it's availabe and if not parse the command output. supported_variants = set() try: from virtinst import osdict for ostype in osdict.OS_TYPES.keys(): for variant in osdict.OS_TYPES[ostype]["variants"].keys(): supported_variants.add(variant) except: try: rc, response = utils.subprocess_get_response( shlex.split('virt-install --os-variant list')) variants = response.split('\n') for variant in variants: supported_variants.add(variant.split()[0]) except: pass # No problem, we'll just use generic def _sanitize_disks(disks): ret = [] for d in disks: driver_type = None if len(d) > 2: driver_type = d[2] if d[1] != 0 or d[0].startswith("/dev"): ret.append((d[0], d[1], driver_type)) else: raise koan.InfoException("this virtualization type does not work without a disk image, set virt-size in Cobbler to non-zero") return ret def _sanitize_nics(nics, bridge, profile_bridge, network_count): ret = [] if network_count is not None and not nics: # Fill in some stub nics so we can take advantage of the loop logic nics = {} for i in range(int(network_count)): nics["foo%s" % i] = { "interface_type" : "na", "mac_address": None, "virt_bridge": None, } if not nics: return ret interfaces = nics.keys() interfaces.sort() counter = -1 vlanpattern = re.compile("[a-zA-Z0-9]+\.[0-9]+") for iname in interfaces: counter = counter + 1 intf = nics[iname] if (intf["interface_type"] in ("master","bond","bridge","bonded_bridge_slave") or vlanpattern.match(iname) or iname.find(":") != -1): continue mac = intf["mac_address"] if not bridge: intf_bridge = intf["virt_bridge"] if intf_bridge == "": if profile_bridge == "": raise koan.InfoException("virt-bridge setting is not defined in cobbler") intf_bridge = profile_bridge else: if bridge.find(",") == -1: intf_bridge = bridge else: bridges = bridge.split(",") intf_bridge = bridges[counter] ret.append((intf_bridge, mac)) return ret def create_image_file(disks=None, **kwargs): disks = _sanitize_disks(disks) for path, size, driver_type in disks: if driver_type is None: continue if os.path.isdir(path) or os.path.exists(path): continue if str(size) == "0": continue utils.create_qemu_image_file(path, size, driver_type) def 
build_commandline(uri, name=None, ram=None, disks=None, uuid=None, extra=None, vcpus=None, profile_data=None, arch=None, no_gfx=False, fullvirt=False, bridge=None, virt_type=None, virt_auto_boot=False, virt_pxe_boot=False, qemu_driver_type=None, qemu_net_type=None, qemu_machine_type=None, wait=0, noreboot=False, osimport=False): # Set flags for CLI arguments based on the virtinst_version # tuple above. Older versions of python-virtinst don't have # a version easily accessible, so it will be None and we can # easily disable features based on that (RHEL5 and older usually) disable_autostart = False disable_virt_type = False disable_boot_opt = False disable_driver_type = False disable_net_model = False disable_machine_type = False oldstyle_macs = False oldstyle_accelerate = False if not virtinst_version: print ("- warning: old virt-install detected, a lot of features will be disabled") disable_autostart = True disable_boot_opt = True disable_virt_type = True disable_driver_type = True disable_net_model = True disable_machine_type = True oldstyle_macs = True oldstyle_accelerate = True import_exists = False # avoid duplicating --import parameter disable_extra = False # disable --extra-args on --import if osimport: disable_extra = True is_import = uri.startswith("import") if is_import: # We use the special value 'import' for imagecreate.py. Since # it is connection agnostic, just let virt-install choose the # best hypervisor. uri = "" fullvirt = None is_xen = uri.startswith("xen") is_qemu = uri.startswith("qemu") if is_qemu: if virt_type != "kvm": fullvirt = True else: fullvirt = None floppy = None cdrom = None location = None importpath = None if is_import: importpath = profile_data.get("file") if not importpath: raise koan.InfoException("Profile 'file' required for image " "install") elif profile_data.has_key("file"): if is_xen: raise koan.InfoException("Xen does not work with --image yet") # this is an image based installation input_path = profile_data["file"] print "- using image location %s" % input_path if input_path.find(":") == -1: # this is not an NFS path cdrom = input_path else: (tempdir, filename) = utils.nfsmount(input_path) cdrom = os.path.join(tempdir, filename) kickstart = profile_data.get("kickstart","") if kickstart != "": # we have a (windows?) answer file we have to provide # to the ISO. print "I want to make a floppy for %s" % kickstart floppy = utils.make_floppy(kickstart) elif is_qemu or is_xen: # images don't need to source this if not profile_data.has_key("install_tree"): raise koan.InfoException("Cannot find install source in kickstart file, aborting.") if not profile_data["install_tree"].endswith("/"): profile_data["install_tree"] = profile_data["install_tree"] + "/" location = profile_data["install_tree"] disks = _sanitize_disks(disks) nics = _sanitize_nics(profile_data.get("interfaces"), bridge, profile_data.get("virt_bridge"), profile_data.get("network_count")) if not nics: # for --profile you get one NIC, go define a system if you want more. # FIXME: can mac still be sent on command line in this case? 
if bridge is None: bridge = profile_data["virt_bridge"] if bridge == "": raise koan.InfoException("virt-bridge setting is not defined in cobbler") nics = [(bridge, None)] kernel = profile_data.get("kernel_local") initrd = profile_data.get("initrd_local") breed = profile_data.get("breed") os_version = profile_data.get("os_version") if os_version and breed == "ubuntu": os_version = "ubuntu%s" % os_version if os_version and breed == "debian": os_version = "debian%s" % os_version net_model = None disk_bus = None machine_type = None if is_qemu: net_model = qemu_net_type disk_bus = qemu_driver_type machine_type = qemu_machine_type if machine_type is None: machine_type = "pc" cmd = "virt-install " if uri: cmd += "--connect %s " % uri cmd += "--name %s " % name cmd += "--ram %s " % ram cmd += "--vcpus %s " % vcpus if uuid: cmd += "--uuid %s " % uuid if virt_auto_boot and not disable_autostart: cmd += "--autostart " if no_gfx: cmd += "--nographics " else: cmd += "--vnc " if is_qemu and virt_type: if not disable_virt_type: cmd += "--virt-type %s " % virt_type if is_qemu and machine_type and not disable_machine_type: cmd += "--machine %s " % machine_type if fullvirt or is_qemu or is_import: if fullvirt is not None: cmd += "--hvm " elif oldstyle_accelerate: cmd += "--accelerate " if is_qemu and extra and not(virt_pxe_boot) and not(disable_extra): cmd += ("--extra-args=\"%s\" " % (extra)) if virt_pxe_boot or is_xen: cmd += "--pxe " elif cdrom: cmd += "--cdrom %s " % cdrom elif location: cmd += "--location %s " % location elif importpath: cmd += "--import " import_exists = True if arch: cmd += "--arch %s " % arch else: cmd += "--paravirt " if not disable_boot_opt: cmd += ("--boot kernel=%s,initrd=%s,kernel_args=\"%s\" " % (kernel, initrd, extra)) else: if location: cmd += "--location %s " % location if extra: cmd += "--extra-args=\"%s\" " % extra if breed and breed != "other": if os_version and os_version != "other": if breed == "suse": suse_version_re = re.compile("^(opensuse[0-9]+)\.([0-9]+)$") if suse_version_re.match(os_version): os_version = suse_version_re.match(os_version).groups()[0] # make sure virt-install knows about our os_version, # otherwise default it to generic26 found = False if os_version in supported_variants: cmd += "--os-variant %s " % os_version else: print ("- warning: virt-install doesn't know this os_version, defaulting to generic26") cmd += "--os-variant generic26 " else: distro = "unix" if breed in [ "debian", "suse", "redhat" ]: distro = "linux" elif breed in [ "windows" ]: distro = "windows" cmd += "--os-type %s " % distro if importpath: # This needs to be the first disk for import to work cmd += "--disk path=%s " % importpath for path, size, driver_type in disks: print ("- adding disk: %s of size %s (driver type=%s)" % (path, size, driver_type)) cmd += "--disk path=%s" % (path) if str(size) != "0": cmd += ",size=%s" % size if disk_bus: cmd += ",bus=%s" % disk_bus if driver_type and not disable_driver_type: cmd += ",format=%s" % driver_type cmd += " " if floppy: cmd += "--disk path=%s,device=floppy " % floppy for bridge, mac in nics: cmd += "--network bridge=%s" % bridge if net_model and not disable_net_model: cmd += ",model=%s" % net_model if mac: if oldstyle_macs: cmd += " --mac=%s" % mac else: cmd += ",mac=%s" % mac cmd += " " cmd += "--wait %d " % int(wait) if noreboot: cmd += "--noreboot " if osimport and not(import_exists): cmd += "--import " cmd += "--noautoconsole " return shlex.split(cmd.strip()) 
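# ----------------------------------------------------------------------
# Illustrative sketch only (not used by koan itself): the profile values
# below are invented for demonstration; real data is normally fetched
# from the Cobbler server by app.py.  Run on a host that has
# virt-install available, it prints the command line that
# build_commandline() would hand to utils.subprocess_call().

if __name__ == "__main__":
    example_profile = {
        "interfaces": {},
        "virt_bridge": "virbr0",
        "network_count": None,
        "kernel_local": "/tmp/vmlinuz",
        "initrd_local": "/tmp/initrd.img",
        "breed": "redhat",
        "os_version": "generic26",
        "install_tree": "http://cobbler.example.org/cblr/links/example-distro",
    }
    example_cmd = build_commandline(
        "qemu:///system",
        name="example-vm",
        ram=1024,
        vcpus=1,
        disks=[("/var/lib/libvirt/images/example-vm.img", 5)],
        profile_data=example_profile,
        virt_type="kvm",
        bridge="virbr0")
    # show the resulting virt-install invocation
    print " ".join(example_cmd)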
cobbler-2.4.1/koan/vmwcreate.py000077500000000000000000000117251227367477500164620ustar00rootroot00000000000000""" Virtualization installation functions. Copyright 2007-2008 Red Hat, Inc and Others. Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import os import random import exceptions IMAGE_DIR = "/var/lib/vmware/images" VMX_DIR = "/var/lib/vmware/vmx" # FIXME: what to put for guestOS # FIXME: are other settings ok? TEMPLATE = """ #!/usr/bin/vmware config.version = "8" virtualHW.version = "4" numvcpus = "2" scsi0.present = "TRUE" scsi0.virtualDev = "lsilogic" scsi0:0.present = "TRUE" scsi0:0.writeThrough = "TRUE" ide1:0.present = "TRUE" ide1:0.deviceType = "cdrom-image" Ethernet0.present = "TRUE" Ethernet0.AddressType = "static" Ethernet0.Address = "%(MAC_ADDRESS)s" Ethernet0.virtualDev = "e1000" guestOS = "linux" priority.grabbed = "normal" priority.ungrabbed = "normal" powerType.powerOff = "hard" powerType.powerOn = "hard" powerType.suspend = "hard" powerType.reset = "hard" floppy0.present = "FALSE" scsi0:0.filename = "%(VMDK_IMAGE)s" displayName = "%(IMAGE_NAME)s" memsize = "%(MEMORY)s" """ #ide1:0.filename = "%(PATH_TO_ISO)s" class VirtCreateException(exceptions.Exception): pass def random_mac(): """ from xend/server/netif.py Generate a random MAC address. Uses OUI 00-50-56, allocated to VMWare. Last 3 fields are random. 
return: MAC address string """ mac = [ 0x00, 0x50, 0x56, random.randint(0x00, 0x3f), random.randint(0x00, 0xff), random.randint(0x00, 0xff) ] return ':'.join(map(lambda x: "%02x" % x, mac)) def make_disk(disksize,image): cmd = "vmware-vdiskmanager -c -a lsilogic -s %sGb -t 0 %s" % (disksize, image) print "- %s" % cmd rc = os.system(cmd) if rc != 0: raise VirtCreateException("command failed") def make_vmx(path,vmdk_image,image_name,mac_address,memory): template_params = { "VMDK_IMAGE" : vmdk_image, "IMAGE_NAME" : image_name, "MAC_ADDRESS" : mac_address.lower(), "MEMORY" : memory } templated = TEMPLATE % template_params fd = open(path,"w+") fd.write(templated) fd.close() def register_vmx(vmx_file): cmd = "vmware-cmd -s register %s" % vmx_file print "- %s" % cmd rc = os.system(cmd) if rc!=0: raise VirtCreateException("vmware registration failed") def start_vm(vmx_file): os.chmod(vmx_file,0755) cmd = "vmware-cmd %s start" % vmx_file print "- %s" % cmd rc = os.system(cmd) if rc != 0: raise VirtCreateException("vm start failed") def start_install(name=None, ram=None, disks=None, mac=None, uuid=None, extra=None, vcpus=None, profile_data=None, arch=None, no_gfx=False, fullvirt=True, bridge=None, virt_type=None, virt_auto_boot=False, qemu_driver_type=None, qemu_net_type=None): if profile_data.has_key("file"): raise koan.InfoException("vmware does not work with --image yet") mac = None if not profile_data.has_key("interfaces"): print "- vmware installation requires a system, not a profile" return 1 for iname in profile_data["interfaces"]: intf = profile_data["interfaces"][iname] mac = intf["mac_address"] if mac is None: print "- no MAC information available in this record, cannot install" return 1 print "DEBUG: name=%s" % name print "DEBUG: ram=%s" % ram print "DEBUG: mac=%s" % mac print "DEBUG: disks=%s" % disks # starts vmware using PXE. disk/mem info come from Cobbler # rest of the data comes from PXE which is also intended # to be managed by Cobbler. if not os.path.exists(IMAGE_DIR): os.makedirs(IMAGE_DIR) if not os.path.exists(VMX_DIR): os.makedirs(VMX_DIR) if len(disks) != 1: raise VirtCreateException("vmware support is limited to 1 virtual disk") diskname = disks[0][0] disksize = disks[0][1] image = "%s/%s" % (IMAGE_DIR, name) print "- saving virt disk image as %s" % image make_disk(disksize,image) vmx = "%s/%s" % (VMX_DIR, name) print "- saving vmx file as %s" % vmx make_vmx(vmx,image,name,mac,ram) register_vmx(vmx) start_vm(vmx) cobbler-2.4.1/koan/xencreate.py000077500000000000000000000023511227367477500164360ustar00rootroot00000000000000""" Virtualization installation functions. Currently somewhat Xen/paravirt specific, will evolve later. Copyright 2006-2008 Red Hat, Inc and Others. Michael DeHaan Original version based on virtguest-install Jeremy Katz Option handling added by Andrew Puch Simplified for use as library by koan, Michael DeHaan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import utils import virtinstall def start_install(*args, **kwargs): cmd = virtinstall.build_commandline("xen:///", *args, **kwargs) utils.subprocess_call(cmd) cobbler-2.4.1/mysql.schema000066400000000000000000000014401227367477500155000ustar00rootroot00000000000000CREATE DATABASE cobbler; GRANT ALL PRIVILEGES ON cobbler.* TO 'cobbler'@'%' IDENTIFIED BY 'testing123'; CREATE TABLE distro (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; CREATE TABLE profile (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; CREATE TABLE system (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; CREATE TABLE image (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; CREATE TABLE repo (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; CREATE TABLE mgmtclass (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; CREATE TABLE file (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; CREATE TABLE package (name VARCHAR(100) NOT NULL PRIMARY KEY, data TEXT) ENGINE=innodb; cobbler-2.4.1/newtests/000077500000000000000000000000001227367477500150265ustar00rootroot00000000000000cobbler-2.4.1/newtests/__init__.py000066400000000000000000000000001227367477500171250ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/000077500000000000000000000000001227367477500155755ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/__init__.py000066400000000000000000000000001227367477500176740ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/direct/000077500000000000000000000000001227367477500170475ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/direct/__init__.py000066400000000000000000000000001227367477500211460ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/direct/cobbler_cli_test.py000066400000000000000000000060321227367477500227200ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils FAKE_INITRD="initrd-2.6.15-1.2054_FAKE.img" FAKE_INITRD2="initrd-2.5.16-2.2055_FAKE.img" FAKE_INITRD3="initrd-1.8.18-3.9999_FAKE.img" FAKE_KERNEL="vmlinuz-2.6.15-1.2054_FAKE" FAKE_KERNEL2="vmlinuz-2.5.16-2.2055_FAKE" FAKE_KERNEL3="vmlinuz-1.8.18-3.9999_FAKE" cleanup_dirs = [] class CobblerCLITest(unittest.TestCase): def setUp(self): """ Set up """ return def tearDown(self): """ Cleanup here """ return class Test_Direct(CobblerCLITest): """ Tests cobbler direct commands """ def test_00_cobbler_version(self): """Runs 'cobbler version'""" (data,rc) = utils.subprocess_sp(None,["cobbler","version"],shell=False) self.assertEqual(rc,0) def test_01_cobbler_status(self): """Runs 'cobbler status'""" (data,rc) = utils.subprocess_sp(None,["cobbler","status"],shell=False) self.assertEqual(rc,0) def test_02_cobbler_sync(self): """Runs 'cobbler sync'""" (data,rc) = utils.subprocess_sp(None,["cobbler","sync"],shell=False) self.assertEqual(rc,0) def test_03_cobbler_signature_report(self): """Runs 'cobbler signature report'""" (data,rc) = utils.subprocess_sp(None,["cobbler","signature","report"],shell=False) self.assertEqual(rc,0) def test_04_cobbler_signature_update(self): """Runs 'cobbler signature update'""" (data,rc) = utils.subprocess_sp(None,["cobbler","signature","update"],shell=False) self.assertEqual(rc,0) def test_05_cobbler_acl_adduser(self): """Runs 'cobbler aclsetup --adduser'""" 
(data,rc) = utils.subprocess_sp(None,["cobbler","aclsetup","--adduser=cobbler"],shell=False) self.assertEqual(rc,0) # TODO: verify user acl exists on directories def test_06_cobbler_acl_addgroup(self): """Runs 'cobbler aclsetup --addgroup'""" (data,rc) = utils.subprocess_sp(None,["cobbler","aclsetup","--addgroup=cobbler"],shell=False) self.assertEqual(rc,0) # TODO: verify group acl exists on directories def test_07_cobbler_acl_removeuser(self): """Runs 'cobbler aclsetup --removeuser'""" (data,rc) = utils.subprocess_sp(None,["cobbler","aclsetup","--removeuser=cobbler"],shell=False) self.assertEqual(rc,0) # TODO: verify user acl no longer exists on directories def test_08_cobbler_acl_removegroup(self): """Runs 'cobbler aclsetup --removegroup'""" (data,rc) = utils.subprocess_sp(None,["cobbler","aclsetup","--removegroup=cobbler"],shell=False) self.assertEqual(rc,0) # TODO: verify group acl no longer exists on directories def test_09_cobbler_reposync(self): """Runs 'cobbler reposync'""" (data,rc) = utils.subprocess_sp(None,["cobbler","reposync"],shell=False) self.assertEqual(rc,0) (data,rc) = utils.subprocess_sp(None,["cobbler","reposync","--tries=3"],shell=False) self.assertEqual(rc,0) (data,rc) = utils.subprocess_sp(None,["cobbler","reposync","--no-fail"],shell=False) self.assertEqual(rc,0) cobbler-2.4.1/newtests/cli/imports/000077500000000000000000000000001227367477500172725ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/__init__.py000066400000000000000000000000001227367477500213710ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/import_base.py000066400000000000000000000030651227367477500221540ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils class CobblerImportTest(unittest.TestCase): imported_distros = [] def setUp(self): """ Set up, mounts NFS share """ def tearDown(self): """ Cleanup here """ for d in self.imported_distros: try: (data,rc) = utils.subprocess_sp(None,["cobbler","distro","remove","--recursive","--name=%s" % d],shell=False) except: print "Failed to remove distro '%s' during cleanup" % d def create_import_func(data): name = data["name"] desc = data["desc"] path = data["path"] def do_import(self): print "doing import, name=%s, desc=%s, path=%s" % (name,desc,path) (data,rc) = utils.subprocess_sp(None,["cobbler","import","--name=test-%s" % name,"--path=%s" % path],shell=False) print data self.assertEqual(rc,0) # TODO: scan output of import to build list of imported distros/profiles # and compare to expected list. 
Then use that list to run reports # and for later cleanup (data,rc) = utils.subprocess_sp(None,["cobbler","distro","report","--name=test-%s" % name],shell=False) print data self.assertEqual(rc,0) (data,rc) = utils.subprocess_sp(None,["cobbler","profile","report","--name=test-%s" % name],shell=False) print data self.assertEqual(rc,0) (data,rc) = utils.subprocess_sp(None,["cobbler","distro","remove","--recursive","--name=test-%s" % name],shell=False) print data self.assertEqual(rc,0) return do_import cobbler-2.4.1/newtests/cli/imports/test_debian/000077500000000000000000000000001227367477500215535ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_debian/__init__.py000066400000000000000000000000001227367477500236520ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_debian/debian_import_test.py000066400000000000000000000013231227367477500257770ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils from newtests.cli.imports.import_base import CobblerImportTest from newtests.cli.imports.import_base import create_import_func class Test_Debian_Imports(CobblerImportTest): """ Tests imports of various distros """ pass distros = [ {"name":"debian_6.0.5-x86_64", "desc":"Debian Sarge (6.0.5) amd64", "path":"/vagrant/distros/debian_6.0.5_amd64"}, ] for i in range(0,len(distros)): test_func = create_import_func(distros[i]) test_func.__name__ = 'test_debian_%02d_import_%s' % (i,distros[i]["name"]) test_func.__doc__ = "Import of %s" % distros[i]["desc"] setattr(Test_Debian_Imports, test_func.__name__, test_func) del test_func cobbler-2.4.1/newtests/cli/imports/test_freebsd/000077500000000000000000000000001227367477500217435ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_freebsd/__init__.py000066400000000000000000000000001227367477500240420ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_freebsd/freebsd_import_test.py000066400000000000000000000017721227367477500263670ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils from newtests.cli.imports.import_base import CobblerImportTest from newtests.cli.imports.import_base import create_import_func class Test_FreeBSD_Imports(CobblerImportTest): """ Tests imports of various distros """ pass distros = [ {"name":"freebsd8.2-x86_64", "desc":"FreeBSD 8.2 amd64", "path":"/vagrant/distros/freebsd8.2_amd64"}, {"name":"freebsd8.3-x86_64", "desc":"FreeBSD 8.3 amd64", "path":"/vagrant/distros/freebsd8.3_amd64"}, {"name":"freebsd9.0-i386", "desc":"FreeBSD 9.0 i386", "path":"/vagrant/distros/freebsd9.0_i386"}, {"name":"freebsd9.0-x86_64", "desc":"FreeBSD 9.0 amd64", "path":"/vagrant/distros/freebsd9.0_amd64"}, ] for i in range(0,len(distros)): test_func = create_import_func(distros[i]) test_func.__name__ = 'test_freebsd_%02d_import_%s' % (i,distros[i]["name"]) test_func.__doc__ = "Import of %s" % distros[i]["desc"] setattr(Test_FreeBSD_Imports, test_func.__name__, test_func) del test_func cobbler-2.4.1/newtests/cli/imports/test_redhat/000077500000000000000000000000001227367477500216005ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_redhat/__init__.py000066400000000000000000000000001227367477500236770ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_redhat/redhat_import_test.py000066400000000000000000000023601227367477500260530ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils from newtests.cli.imports.import_base import CobblerImportTest from 
newtests.cli.imports.import_base import create_import_func class Test_RedHat_Imports(CobblerImportTest): """ Tests imports of various distros """ pass distros = [ {"name":"rhel58-x86_64", "desc":"RHEL 5.8 x86_64", "path":"/vagrant/distros/rhel58_x86_64"}, {"name":"rhel63-x86_64", "desc":"RHEL 6.3 x86_64", "path":"/vagrant/distros/rhel63_x86_64"}, {"name":"centos63-x86_64", "desc":"CentOS 6.3 x86_64", "path":"/vagrant/distros/centos63_x86_64"}, {"name":"sl62-x86_64", "desc":"Scientific Linux 6.2 x86_64", "path":"/vagrant/distros/sl62_x86_64"}, {"name":"f16-x86_64", "desc":"Fedora 16 x86_64", "path":"/vagrant/distros/f16_x86_64"}, {"name":"f17-x86_64", "desc":"Fedora 17 x86_64", "path":"/vagrant/distros/f17_x86_64"}, {"name":"f18-x86_64", "desc":"Fedora 18 x86_64", "path":"/vagrant/distros/f18_x86_64"}, ] for i in range(0,len(distros)): test_func = create_import_func(distros[i]) test_func.__name__ = 'test_redhat_%02d_import_%s' % (i,distros[i]["name"]) test_func.__doc__ = "Import of %s" % distros[i]["desc"] setattr(Test_RedHat_Imports, test_func.__name__, test_func) del test_func cobbler-2.4.1/newtests/cli/imports/test_suse/000077500000000000000000000000001227367477500213105ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_suse/__init__.py000066400000000000000000000000001227367477500234070ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_suse/suse_import_test.py000066400000000000000000000026521227367477500252770ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils from newtests.cli.imports.import_base import CobblerImportTest from newtests.cli.imports.import_base import create_import_func class Test_Suse_Imports(CobblerImportTest): """ Tests imports of various distros """ pass distros = [ {"name":"opensuse11.3-i386", "desc":"OpenSuSE 11.3 i586", "path":"/vagrant/distros/opensuse11.3_i586"}, {"name":"opensuse11.4-x86_64", "desc":"OpenSuSE 11.4 x86_64", "path":"/vagrant/distros/opensuse11.4_x86_64"}, {"name":"opensuse12.1-x86_64", "desc":"OpenSuSE 12.1 x86_64", "path":"/vagrant/distros/opensuse12.1_x86_64"}, {"name":"opensuse12.2-i386", "desc":"OpenSuSE 12.2 i586", "path":"/vagrant/distros/opensuse12.2_i586"}, {"name":"opensuse12.2-x86_64", "desc":"OpenSuSE 12.2 x86_64", "path":"/vagrant/distros/opensuse12.2_x86_64"}, {"name":"sles11_sp2-i386", "desc":"SLES 11 SP2 i586", "path":"/vagrant/distros/sles11_sp2_i586"}, {"name":"sles11_sp2-x86_64", "desc":"SLES 11 SP2 x86_64", "path":"/vagrant/distros/sles11_sp2_x86_64"}, {"name":"sles11_sp2-ppc64", "desc":"SLES 11 SP2 ppc64", "path":"/vagrant/distros/sles11_sp2_ppc64"}, ] for i in range(0,len(distros)): test_func = create_import_func(distros[i]) test_func.__name__ = 'test_suse_%02d_import_%s' % (i,distros[i]["name"]) test_func.__doc__ = "Import of %s" % distros[i]["desc"] setattr(Test_Suse_Imports, test_func.__name__, test_func) del test_func cobbler-2.4.1/newtests/cli/imports/test_ubuntu/000077500000000000000000000000001227367477500216535ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_ubuntu/__init__.py000066400000000000000000000000001227367477500237520ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_ubuntu/ubuntu_import_test.py000066400000000000000000000022011227367477500261730ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils from newtests.cli.imports.import_base import CobblerImportTest from newtests.cli.imports.import_base import create_import_func class 
Test_Ubuntu_Imports(CobblerImportTest): """ Tests imports of various distros """ pass distros = [ {"name":"ubuntu12.04-server-x86_64", "desc":"Ubuntu Precise (12.04) Server amd64", "path":"/vagrant/distros/ubuntu_1204_server_amd64"}, {"name":"ubuntu12.04.1-server-i386", "desc":"Ubuntu Precise (12.04.1) Server i386", "path":"/vagrant/distros/ubuntu_1204_1_server_i386"}, {"name":"ubuntu12.10-server-x86_64", "desc":"Ubuntu Quantal (12.10) Server amd64", "path":"/vagrant/distros/ubuntu_1210_server_amd64"}, {"name":"ubuntu12.10-server-i386", "desc":"Ubuntu Quantal (12.10) Server i386", "path":"/vagrant/distros/ubuntu_1210_server_i386"}, ] for i in range(0,len(distros)): test_func = create_import_func(distros[i]) test_func.__name__ = 'test_ubuntu_%02d_import_%s' % (i,distros[i]["name"]) test_func.__doc__ = "Import of %s" % distros[i]["desc"] setattr(Test_Ubuntu_Imports, test_func.__name__, test_func) del test_func cobbler-2.4.1/newtests/cli/imports/test_vmware/000077500000000000000000000000001227367477500216325ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_vmware/__init__.py000066400000000000000000000000001227367477500237310ustar00rootroot00000000000000cobbler-2.4.1/newtests/cli/imports/test_vmware/vmware_import_test.py000066400000000000000000000023021227367477500261330ustar00rootroot00000000000000import os import sys import unittest from cobbler import utils from newtests.cli.imports.import_base import CobblerImportTest from newtests.cli.imports.import_base import create_import_func class Test_VMWare_Imports(CobblerImportTest): """ Tests imports of various distros """ pass distros = [ {"name":"vmware_esx_4.0_u1-x86_64", "desc":"VMware ESX 4.0 update1", "path":"/vagrant/distros/vmware_esx_4.0_u1_208167_x86_64"}, {"name":"vmware_esx_4.0_u2-x86_64", "desc":"VMware ESX 4.0 update2", "path":"/vagrant/distros/vmware_esx_4.0_u2_261974_x86_64"}, {"name":"vmware_esxi4.1-x86_64", "desc":"VMware ESXi 4.1", "path":"/vagrant/distros/vmware_esxi4.1_348481_x86_64"}, {"name":"vmware_esxi5.0-x86_64", "desc":"VMware ESXi 5.0", "path":"/vagrant/distros/vmware_esxi5.0_469512_x86_64"}, {"name":"vmware_esxi5.1-x86_64", "desc":"VMware ESXi 5.1", "path":"/vagrant/distros/vmware_esxi5.1_799733_x86_64"}, ] for i in range(0,len(distros)): test_func = create_import_func(distros[i]) test_func.__name__ = 'test_vmware_%02d_import_%s' % (i,distros[i]["name"]) test_func.__doc__ = "Import of %s" % distros[i]["desc"] setattr(Test_VMWare_Imports, test_func.__name__, test_func) del test_func cobbler-2.4.1/newtests/xmlrpc/000077500000000000000000000000001227367477500163335ustar00rootroot00000000000000cobbler-2.4.1/newtests/xmlrpc/__init__.py000066400000000000000000000000001227367477500204320ustar00rootroot00000000000000cobbler-2.4.1/newtests/xmlrpc/cobbler_xmlrpc_test.py000066400000000000000000000503321227367477500227440ustar00rootroot00000000000000import logging import os import random import sys import time import unittest import xmlrpclib from cobbler import utils from cobbler import item_distro import cexceptions FAKE_INITRD="initrd-2.6.15-1.2054_FAKE.img" FAKE_INITRD2="initrd-2.5.16-2.2055_FAKE.img" FAKE_INITRD3="initrd-1.8.18-3.9999_FAKE.img" FAKE_KERNEL="vmlinuz-2.6.15-1.2054_FAKE" FAKE_KERNEL2="vmlinuz-2.5.16-2.2055_FAKE" FAKE_KERNEL3="vmlinuz-1.8.18-3.9999_FAKE" cleanup_dirs = [] class CobblerXMLRPCTest(unittest.TestCase): server = None def setUp(self): """ Sets up Cobbler API connection and logs in """ logging.basicConfig( stream=sys.stderr ) self.logger = logging.getLogger( self.__class__.__name__ ) 
self.logger.setLevel( logging.DEBUG ) self.url_api = utils.local_get_cobbler_api_url() self.url_xmlrpc = utils.local_get_cobbler_xmlrpc_url() self.remote = xmlrpclib.Server(self.url_api) self.shared_secret = utils.get_shared_secret() self.token = self.remote.login("", self.shared_secret) if not self.token: self.server.stop() sys.exit(1) # Create temp dir self.topdir = "/tmp/cobbler_test" try: os.makedirs(self.topdir) except: pass self.fk_initrd = os.path.join(self.topdir, FAKE_INITRD) self.fk_initrd2 = os.path.join(self.topdir, FAKE_INITRD2) self.fk_initrd3 = os.path.join(self.topdir, FAKE_INITRD3) self.fk_kernel = os.path.join(self.topdir, FAKE_KERNEL) self.fk_kernel2 = os.path.join(self.topdir, FAKE_KERNEL2) self.fk_kernel3 = os.path.join(self.topdir, FAKE_KERNEL3) self.redhat_kickstart = os.path.join(self.topdir, "test.ks") self.ubuntu_preseed = os.path.join(self.topdir, "test.seed") create = [ self.fk_initrd, self.fk_initrd2, self.fk_initrd3, self.fk_kernel, self.fk_kernel2, self.fk_kernel3, self.redhat_kickstart, self.ubuntu_preseed, ] for fn in create: f = open(fn,"w+") f.close() self.distro_fields = [ # TODO: fetchable files, boot files, etc. # field_name, good value(s), bad value(s) # ["",["",],["",]], ["name",["testdistro0",],[]], ["kernel",[self.fk_kernel,],["",]], ["initrd",[self.fk_initrd,],["",]], ["breed",["generic",],["badversion",]], ["os_version",["generic26",],["bados",]], ["arch",["i386","x86_64","ppc","ppc64"],["badarch",]], ["comment",["test comment",],[]], ["owners",["user1 user2 user3",],[]], ["kernel_options",["a=1 b=2 c=3 c=4 c=5 d e",],[]], ["kernel_options_post",["a=1 b=2 c=3 c=4 c=5 d e",],[]], ["ks_meta",["a=1 b=2 c=3 c=4 c=5 d e",],[]], ["mgmt_classes",["one two three",],[]], ["redhat_management_key",["abcd1234",],[]], ["redhat_management_server",["1.1.1.1",],[]], ] self.profile_fields = [ # TODO: fetchable files, boot files, etc. 
# repos, which have to exist # field_name, good value(s), bad value(s) # ["",["",],["",]], ["name",["testprofile0",],[]], ["distro",["testdistro0",],["baddistro",]], ["enable_gpxe",["yes","YES","1","0","no"],[]], ["enable_menu",["yes","YES","1","0","no"],[]], ["comment",["test comment",],[]], ["owners",["user1 user2 user3",],[]], ["kernel_options",["a=1 b=2 c=3 c=4 c=5 d e",],[]], ["kernel_options_post",["a=1 b=2 c=3 c=4 c=5 d e",],[]], ["ks_meta",["a=1 b=2 c=3 c=4 c=5 d e",],[]], ["kickstart",[self.redhat_kickstart,self.ubuntu_preseed],["/path/to/bad/kickstart",]], ["proxy",["testproxy",],[]], ["virt_auto_boot",["1","0"],["yes","no"]], ["virt_cpus",["<>","1","2"],["a",]], ["virt_file_size",["<>","5","10"],["a",]], ["virt_disk_driver",["<>","raw","qcow2","vmdk"],[]], ["virt_ram",["<>","256","1024"],["a",]], ["virt_type",["<>","xenpv","xenfv","qemu","kvm","vmware","openvz"],["bad",]], ["virt_bridge",["<>","br0","virbr0","xenbr0"],[]], ["virt_path",["<>","/path/to/test",],[]], ["dhcp_tag",["","foo"],[]], ["server",["1.1.1.1",],[]], ["name_servers",["1.1.1.1 1.1.1.2 1.1.1.3",],[]], ["name_servers_search",["example.com foo.bar.com",],[]], ["mgmt_classes",["one two three",],[]], ["mgmt_parameters",["<>",],["badyaml",]], # needs more test cases that are valid yaml ["redhat_management_key",["abcd1234",],[]], ["redhat_management_server",["1.1.1.1",],[]], ["template_remote_kickstarts",["yes","YES","1","0","no"],[]], ] def tearDown(self): """ Cleanup here """ return class Test_A_Create(CobblerXMLRPCTest): """ Tests creation of objects """ def test_00_create_distro(self): """Tests creation of a distro object""" distro = self.remote.new_distro(self.token) for field in self.distro_fields: (fname,fgood,fbad) = field for fb in fbad: try: self.remote.modify_distro(distro,fname,fb,self.token) except: pass else: self.fail("bad field (%s=%s) did not raise an exception" % (fname,fb)) for fg in fgood: try: self.assertTrue(self.remote.modify_distro(distro,fname,fg,self.token)) except: self.fail("good field (%s=%s) raised an exception" % (fname,fg)) self.assertTrue(self.remote.save_distro(distro,self.token)) def test_01_create_profile(self): """Tests creation of a profile object""" profile = self.remote.new_profile(self.token) for field in self.profile_fields: (fname,fgood,fbad) = field for fb in fbad: try: self.remote.modify_profile(profile,fname,fb,self.token) except: pass else: self.fail("bad field (%s=%s) did not raise an exception" % (fname,fb)) for fg in fgood: try: self.assertTrue(self.remote.modify_profile(profile,fname,fg,self.token)) except: self.fail("good field (%s=%s) raised an exception" % (fname,fg)) self.assertTrue(self.remote.save_profile(profile,self.token)) def test_02_create_subprofile(self): """Tests creation of a subprofile object""" subprofile = self.remote.new_subprofile(self.token) self.assertTrue(self.remote.modify_profile(subprofile,"name","testsubprofile0",self.token)) self.assertTrue(self.remote.modify_profile(subprofile,"parent","testprofile0",self.token)) # test fields #for field in item_profile.FIELDS: # (fname,def1,def2,display,editable,tooltip,values,type) = field # if fname not in ["name","distro","parent"] and editable: # if values and isinstance(values,list): # fvalue = random.choice(values) # else: # fvalue = "testing_" + fname # self.assertTrue(self.remote.modify_profile(subprofile,fname,fvalue,self.token)) self.assertTrue(self.remote.save_profile(subprofile,self.token)) def test_03_create_system(self): """Tests creation of a system object""" system = 
self.remote.new_system(self.token) self.assertTrue(self.remote.modify_system(system,"name","testsystem0",self.token)) self.assertTrue(self.remote.modify_system(system,"profile","testprofile0",self.token)) # test fields #for field in item_system.FIELDS: # (fname,def1,def2,display,editable,tooltip,values,type) = field # if fname not in ["name","profile"] and editable: # if values and isinstance(values,list): # fvalue = random.choice(values) # else: # fvalue = "testing_" + fname # self.assertTrue(self.remote.modify_system(system,fname,fvalue,self.token)) self.assertTrue(self.remote.save_system(system,self.token)) def test_04_create_repo(self): """Tests creation of a repo object""" repo = self.remote.new_repo(self.token) self.assertTrue(self.remote.modify_repo(repo,"name","testrepo0",self.token)) self.assertTrue(self.remote.modify_repo(repo,"mirror","http://www.sample.com/path/to/some/repo",self.token)) self.assertTrue(self.remote.modify_repo(repo,"mirror_locally","0",self.token)) # test fields #for field in item_repo.FIELDS: # (fname,def1,def2,display,editable,tooltip,values,type) = field # if fname not in ["name",] and editable: # if values and isinstance(values,list): # fvalue = random.choice(values) # else: # fvalue = "testing_" + fname # self.assertTrue(self.remote.modify_repo(repo,fname,fvalue,self.token)) self.assertTrue(self.remote.save_repo(repo,self.token)) def test_05_create_mgmtclass(self): """Tests creation of a mgmtclass object""" mgmtclass = self.remote.new_mgmtclass(self.token) self.assertTrue(self.remote.modify_mgmtclass(mgmtclass,"name","testmgmtclass0",self.token)) # test fields #for field in item_mgmtclass.FIELDS: # (fname,def1,def2,display,editable,tooltip,values,type) = field # if fname not in ["name",] and editable: # if values and isinstance(values,list): # fvalue = random.choice(values) # else: # fvalue = "testing_" + fname # self.assertTrue(self.remote.modify_mgmtclass(mgmtclass,fname,fvalue,self.token)) self.assertTrue(self.remote.save_mgmtclass(mgmtclass,self.token)) def test_06_create_image(self): """Tests creation of an image object""" image = self.remote.new_image(self.token) self.assertTrue(self.remote.modify_image(image,"name","testimage0",self.token)) # test fields #for field in item_image.FIELDS: # (fname,def1,def2,display,editable,tooltip,values,type) = field # if fname not in ["name",] and editable: # if values and isinstance(values,list): # fvalue = random.choice(values) # else: # fvalue = "testing_" + fname # self.assertTrue(self.remote.modify_image(image,fname,fvalue,self.token)) self.assertTrue(self.remote.save_image(image,self.token)) def test_07_create_file(self): """Tests creation of a file object""" #file = self.remote.new_file(self.token) #self.assertTrue(self.remote.modify_file(file,"name","testfile0",self.token)) # test fields #for field in item_file.FIELDS: # (fname,def1,def2,display,editable,tooltip,values,type) = field # if fname not in ["name",] and editable: # if values and isinstance(values,list): # fvalue = random.choice(values) # else: # fvalue = "testing_" + fname # self.assertTrue(self.remote.modify_file(file,fname,fvalue,self.token)) #self.assertTrue(self.remote.save_file(file,self.token)) pass def test_08_create_package(self): """Tests creation of a package object""" pass class Test_B_Get(CobblerXMLRPCTest): """ Tests the get_ calls for various objects """ def test_00_get_distro(self): """Get a distro object""" pass def test_01_get_profile(self): """Get a profile object""" pass def test_02_get_system(self): """Get a system object""" 
pass def test_03_get_repo(self): """Get a repo object""" pass def test_04_get_mgmtclass(self): """Get a mgmtclass object""" pass def test_05_get_image(self): """Get an image object""" pass def test_06_get_file(self): """Get a file object""" pass def test_07_get_package(self): """Get a package object""" pass def test_08_generic_get(self): """Get an object using the generic get_item() call""" pass class Test_C_Find(CobblerXMLRPCTest): """ Tests the remote find_ calls for various objects """ def test_00_find_distro(self): """Finding a distro object""" self.assertTrue(self.remote.find_distro({"name":"testdistro0"},self.token)) def test_01_find_profile(self): """Finding a profile object""" self.assertTrue(self.remote.find_profile({"name":"testprofile0"},self.token)) def test_02_find_system(self): """Finding a system object""" self.assertTrue(self.remote.find_system({"name":"testsystem0"},self.token)) def test_03_find_repo(self): """Finding a repo object""" self.assertTrue(self.remote.find_repo({"name":"testrepo0"},self.token)) def test_04_find_mgmtclass(self): """Finding a mgmtclass object""" self.assertTrue(self.remote.find_mgmtclass({"name":"testmgmtclass0"},self.token)) def test_05_find_image(self): """Finding an image object""" self.assertTrue(self.remote.find_image({"name":"testimage0"},self.token)) def test_06_find_file(self): """Finding a file object""" #self.assertTrue(self.remote.find_file({"name":"testfile0"},self.token)) pass def test_07_find_package(self): """Finding a package object""" pass def test_08_find_system_by_dnsname(self): """Finding a system by its dns name""" pass def test_09_generic_find(self): """Finding items using generic item_find() call""" pass class Test_D_Edit(CobblerXMLRPCTest): """ Tests the remote edit_ and save_ calls for objects """ def test_00_edit_distro(self): """Editing a distro object""" pass def test_01_edit_profile(self): """Editing a profile object""" pass def test_02_edit_system(self): """Editing a system object""" pass def test_03_edit_repo(self): """Editing a repo object""" pass def test_04_edit_mgmtclass(self): """Editing a mgmtclass object""" pass def test_05_edit_image(self): """Editing an image object""" pass def test_06_edit_file(self): """Editing a file object""" pass def test_07_edit_package(self): """Editing a package object""" pass class Test_E_Copy(CobblerXMLRPCTest): """ Tests the remote copy_ calls for various objects """ def test_00_copy_distro(self): """Copying a distro object""" distro = self.remote.get_item_handle("distro","testdistro0",self.token) self.assertTrue(self.remote.copy_distro(distro,"testdistrocopy",self.token)) def test_01_copy_profile(self): """Copying a profile object""" profile = self.remote.get_item_handle("profile","testprofile0",self.token) self.assertTrue(self.remote.copy_profile(profile,"testprofilecopy",self.token)) def test_02_copy_system(self): """Copying a system object""" system = self.remote.get_item_handle("system","testsystem0",self.token) self.assertTrue(self.remote.copy_system(system,"testsystemcopy",self.token)) def test_03_copy_repo(self): """Copying a repo object""" repo = self.remote.get_item_handle("repo","testrepo0",self.token) self.assertTrue(self.remote.copy_repo(repo,"testrepocopy",self.token)) def test_04_copy_mgmtclass(self): """Copying a mgmtclass object""" mgmtclass = self.remote.get_item_handle("mgmtclass","testmgmtclass0",self.token) self.assertTrue(self.remote.copy_mgmtclass(mgmtclass,"testmgmtclasscopy",self.token)) def test_05_copy_image(self): """Copying an image object""" image = 
self.remote.get_item_handle("image","testimage0",self.token) self.assertTrue(self.remote.copy_image(image,"testimagecopy",self.token)) def test_06_copy_file(self): """Copying a file object""" #testfile = self.remote.get_item_handle("file","testfile0",self.token) #self.assertTrue(self.remote.copy_file(testfile,"testfilecopy",self.token)) pass def test_07_copy_package(self): """Copying a package object""" pass class Test_F_Rename(CobblerXMLRPCTest): """ Tests the remote rename_ calls for various objects """ def test_00_rename_distro(self): """Renaming a distro object""" distro = self.remote.get_item_handle("distro","testdistrocopy",self.token) self.assertTrue(self.remote.rename_distro(distro,"testdistro1",self.token)) def test_01_rename_profile(self): """Renaming a profile object""" profile = self.remote.get_item_handle("profile","testprofilecopy",self.token) self.assertTrue(self.remote.rename_profile(profile,"testprofile1",self.token)) def test_02_rename_system(self): """Renaming a system object""" system = self.remote.get_item_handle("system","testsystemcopy",self.token) self.assertTrue(self.remote.rename_system(system,"testsystem1",self.token)) def test_03_rename_repo(self): """Renaming a repo object""" repo = self.remote.get_item_handle("repo","testrepocopy",self.token) self.assertTrue(self.remote.rename_repo(repo,"testrepo1",self.token)) def test_04_rename_mgmtclass(self): """Renaming a mgmtclass object""" mgmtclass = self.remote.get_item_handle("mgmtclass","testmgmtclasscopy",self.token) self.assertTrue(self.remote.rename_mgmtclass(mgmtclass,"testmgmtclass1",self.token)) def test_05_rename_image(self): """Renaming an image object""" image = self.remote.get_item_handle("image","testimagecopy",self.token) self.assertTrue(self.remote.rename_image(image,"testimage1",self.token)) def test_06_rename_file(self): """Renaming a file object""" #testfile = self.remote.get_item_handle("file","testfilecopy",self.token) #self.assertTrue(self.remote.rename_file(testfile,"testfile1",self.token)) pass def test_07_rename_package(self): """Renaming a package object""" pass class Test_G_Remove(CobblerXMLRPCTest): """ Tests the remote remove_ calls for various objects Removals happen backwards, to prevent recurisive deletes """ def test_99_remove_distro(self): """Removing a distro object""" self.assertTrue(self.remote.remove_distro("testdistro0",self.token)) self.assertTrue(self.remote.remove_distro("testdistro1",self.token)) def test_98_remove_profile(self): """Removing a profile object""" self.assertTrue(self.remote.remove_profile("testsubprofile0",self.token)) self.assertTrue(self.remote.remove_profile("testprofile0",self.token)) self.assertTrue(self.remote.remove_profile("testprofile1",self.token)) def test_97_remove_system(self): """Removing a system object""" self.assertTrue(self.remote.remove_system("testsystem0",self.token)) self.assertTrue(self.remote.remove_system("testsystem1",self.token)) def test_96_remove_image(self): """Removing an image object""" self.assertTrue(self.remote.remove_image("testimage0",self.token)) self.assertTrue(self.remote.remove_image("testimage1",self.token)) def test_95_remove_repo(self): """Removing a repo object""" self.assertTrue(self.remote.remove_repo("testrepo0",self.token)) self.assertTrue(self.remote.remove_repo("testrepo1",self.token)) def test_94_remove_mgmtclass(self): """Removing a mgmtclass object""" self.assertTrue(self.remote.remove_mgmtclass("testmgmtclass0",self.token)) self.assertTrue(self.remote.remove_mgmtclass("testmgmtclass1",self.token)) def 
test_93_remove_file(self): """Removing a file object""" #self.assertTrue(self.remote.remove_file("testfile0",self.token)) #self.assertTrue(self.remote.remove_file("testfile1",self.token)) pass def test_92_remove_package(self): """Removing a package object""" pass class Test_H_RegularCalls(CobblerXMLRPCTest): def test_00_token(self): """ Check if the authenticated token is valid """ assert self.token not in ("",None) def test_01_get_user_from_token(self): """ Gets the associated user from the token """ self.assertTrue(self.remote.get_user_from_token(self.token)) def test_02_check(self): """ Execute the remote check call """ self.assertTrue(self.remote.check(self.token)) def test_03_last_modified_time(self): """ Execute the remote last_modified_time call """ # should return a short time, if the setup # phase went ok assert self.remote.last_modified_time(self.token) != 0 cobbler-2.4.1/rel-eng/000077500000000000000000000000001227367477500145035ustar00rootroot00000000000000cobbler-2.4.1/rel-eng/packages/000077500000000000000000000000001227367477500162615ustar00rootroot00000000000000cobbler-2.4.1/rel-eng/packages/cobbler000066400000000000000000000000121227367477500176050ustar00rootroot000000000000002.4.0-1 / cobbler-2.4.1/rel-eng/tito.props000066400000000000000000000001411227367477500165430ustar00rootroot00000000000000[globalconfig] default_builder = tito.builder.Builder default_tagger = tito.tagger.VersionTagger cobbler-2.4.1/scripts/000077500000000000000000000000001227367477500146415ustar00rootroot00000000000000cobbler-2.4.1/scripts/preseed_early_default000066400000000000000000000002201227367477500211050ustar00rootroot00000000000000# Start preseed_early_default # This script is not run in the chroot /target by default $SNIPPET('kickstart_start') # End preseed_early_default cobbler-2.4.1/scripts/preseed_late_default000066400000000000000000000004171227367477500207260ustar00rootroot00000000000000# Start preseed_late_default # This script runs in the chroot /target by default $SNIPPET('post_install_network_config_deb') $SNIPPET('late_apt_repo_config') $SNIPPET('post_run_deb') $SNIPPET('download_config_files') $SNIPPET('kickstart_done') # End preseed_late_default cobbler-2.4.1/setup.cfg000066400000000000000000000006301227367477500147720ustar00rootroot00000000000000#Setup configuration for Red Hat distributions #Currently should work for Fedora 8 and up. #tab_build=.dev #tag_git_revision=true [build_ext] inplace=1 [bdist_rpm] release="2" group="Applications/System" packager="Alex Kesling" requires="cobbler Django mod_python" build_requires="python-setuptools-devel python-setuptools" [install] #install-purelib=/usr/share/cobbler/ install_data=/usr/share/cobbler/ cobbler-2.4.1/setup.py000066400000000000000000000313231227367477500146660ustar00rootroot00000000000000#!/usr/bin/env python import glob, os, sys, time, yaml from distutils.core import setup, Command from distutils.command.build_py import build_py as _build_py import unittest try: import subprocess except: import cobbler.sub_process as subprocess try: import coverage except: converage = None VERSION = "2.4.1" OUTPUT_DIR = "config" ##################################################################### ## Helper Functions ################################################# ##################################################################### ##################################################################### def explode_glob_path(path): """Take a glob and hand back the full recursive expansion, ignoring links. 
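    For example, explode_glob_path("config/*") returns the individual
    files found beneath config/, descending into sub-directories;
    symlinked directories are returned as entries rather than followed.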
""" result = [] includes = glob.glob(path) for item in includes: if os.path.isdir(item) and not os.path.islink(item): result.extend(explode_glob_path(os.path.join(item, "*"))) else: result.append(item) return result def proc_data_files(data_files): """Because data_files doesn't natively support globs... let's add them. """ result = [] for dir,files in data_files: includes = [] for item in files: includes.extend(explode_glob_path(item)) result.append((dir, includes)) return result ##################################################################### def gen_manpages(): """Generate the man pages... this is currently done through POD, possible future version may do this through some Python mechanism (maybe conversion from ReStructured Text (.rst))... """ manpages = { "cobbler": 'pod2man --center="cobbler" --release="" ./docs/cobbler.pod | gzip -c > ./docs/cobbler.1.gz', "koan": 'pod2man --center="koan" --release="" ./docs/koan.pod | gzip -c > ./docs/koan.1.gz', "cobbler-register": 'pod2man --center="cobbler-register" --release="" ./docs/cobbler-register.pod | gzip -c > ./docs/cobbler-register.1.gz', } #Actually build them for man, cmd in manpages.items(): print("building %s man page." % man) if os.system(cmd): print "Creation of %s manpage failed." % man exit(1) ##################################################################### def gen_build_version(): fd = open(os.path.join(OUTPUT_DIR, "version"),"w+") gitdate = "?" gitstamp = "?" builddate = time.asctime() if os.path.exists(".git"): # for builds coming from git, include the date of the last commit cmd = subprocess.Popen(["/usr/bin/git","log","--format=%h%n%ad","-1"],stdout=subprocess.PIPE) data = cmd.communicate()[0].strip() if cmd.returncode == 0: gitstamp, gitdate = data.split("\n") data = { "gitdate" : gitdate, "gitstamp" : gitstamp, "builddate" : builddate, "version" : VERSION, "version_tuple" : [ int(x) for x in VERSION.split(".")] } fd.write(yaml.dump(data)) fd.close() ##################################################################### ##################################################################### ## Modify Build Stage ############################################## ##################################################################### class build_py(_build_py): """Specialized Python source builder.""" def run(self): gen_manpages() gen_build_version() _build_py.run(self) ##################################################################### ## Test Command ##################################################### ##################################################################### class test_command(Command): user_options = [] def initialize_options(self): pass def finalize_options(self): pass def run(self): testfiles = [] testdirs = ["koan"] for d in testdirs: testdir = os.path.join(os.getcwd(), "tests", d) for t in glob.glob(os.path.join(testdir, '*.py')): if t.endswith('__init__.py'): continue testfile = '.'.join(['tests', d, os.path.splitext(os.path.basename(t))[0]]) testfiles.append(testfile) tests = unittest.TestLoader().loadTestsFromNames(testfiles) runner = unittest.TextTestRunner(verbosity = 1) if coverage: coverage.erase() coverage.start() result = runner.run(tests) if coverage: coverage.stop() sys.exit(int(bool(len(result.failures) > 0 or len(result.errors) > 0))) ##################################################################### ## Actual Setup.py Script ########################################### ##################################################################### if __name__ == "__main__": ## Configurable 
installation roots for various data files. # Trailing slashes on these vars is to allow for easy # later configuration of relative paths if desired. docpath = "/usr/share/man/man1" etcpath = "/etc/cobbler/" initpath = "/etc/init.d/" libpath = "/var/lib/cobbler/" logpath = "/var/log/" if os.path.exists("/etc/SuSE-release"): webconfig = "/etc/apache2/conf.d" webroot = "/srv/www/" elif os.path.exists("/etc/debian_version"): webconfig = "/etc/apache2/conf.d" webroot = "/srv/www/" else: webconfig = "/etc/httpd/conf.d" webroot = "/var/www/" webcontent = webroot + "cobbler_webui_content/" setup( cmdclass={'build_py': build_py, 'test': test_command}, name = "cobbler", version = VERSION, description = "Network Boot and Update Server", long_description = "Cobbler is a network install server. Cobbler supports PXE, virtualized installs, and reinstalling existing Linux machines. The last two modes use a helper tool, 'koan', that integrates with cobbler. There is also a web interface 'cobbler-web'. Cobbler's advanced features include importing distributions from DVDs and rsync mirrors, kickstart templating, integrated yum mirroring, and built-in DHCP/DNS Management. Cobbler has a XMLRPC API for integration with other applications.", author = "Team Cobbler", author_email = "cobbler@lists.fedorahosted.org", url = "http://www.cobblerd.org/", license = "GPLv2+", requires = [ "mod_python", "cobbler", ], packages = [ "cobbler", "cobbler/modules", "koan", ], package_dir = { "cobbler_web": "web/cobbler_web", }, scripts = [ "bin/cobbler", "bin/cobblerd", "bin/cobbler-ext-nodes", "bin/koan", "bin/ovz-install", "bin/cobbler-register", ], data_files = proc_data_files([ # tftpd, hide in /usr/sbin ("/usr/sbin", ["bin/tftpd.py"]), ("%s" % webconfig, ["config/cobbler.conf"]), ("%s" % webconfig, ["config/cobbler_web.conf"]), ("%s" % initpath, ["config/cobblerd"]), ("%s" % docpath, ["docs/*.gz"]), ("installer_templates", ["installer_templates/*"]), ("%skickstarts" % libpath, ["kickstarts/*"]), ("%ssnippets" % libpath, ["snippets/*"]), ("%sscripts" % libpath, ["scripts/*"]), ("%s" % libpath, ["config/distro_signatures.json"]), ("web", ["web/*.*"]), ("%s" % webcontent, ["web/content/*.*"]), ("web/cobbler_web", ["web/cobbler_web/*.*"]), ("web/cobbler_web/templatetags",["web/cobbler_web/templatetags/*"]), ("web/cobbler_web/templates", ["web/cobbler_web/templates/*"]), ("%swebui_sessions" % libpath, []), ("%sloaders" % libpath, []), ("%scobbler/aux" % webroot, ["aux/*"]), #Configuration ("%s" % etcpath, ["config/*"]), ("%s" % etcpath, ["templates/etc/*"]), ("%siso" % etcpath, ["templates/iso/*"]), ("%spxe" % etcpath, ["templates/pxe/*"]), ("%sreporting" % etcpath, ["templates/reporting/*"]), ("%spower" % etcpath, ["templates/power/*"]), ("%sldap" % etcpath, ["templates/ldap/*"]), #Build empty directories to hold triggers ("%striggers/add/distro/pre" % libpath, []), ("%striggers/add/distro/post" % libpath, []), ("%striggers/add/profile/pre" % libpath, []), ("%striggers/add/profile/post" % libpath, []), ("%striggers/add/system/pre" % libpath, []), ("%striggers/add/system/post" % libpath, []), ("%striggers/add/repo/pre" % libpath, []), ("%striggers/add/repo/post" % libpath, []), ("%striggers/add/mgmtclass/pre" % libpath, []), ("%striggers/add/mgmtclass/post" % libpath, []), ("%striggers/add/package/pre" % libpath, []), ("%striggers/add/package/post" % libpath, []), ("%striggers/add/file/pre" % libpath, []), ("%striggers/add/file/post" % libpath, []), ("%striggers/delete/distro/pre" % libpath, []), 
("%striggers/delete/distro/post" % libpath, []), ("%striggers/delete/profile/pre" % libpath, []), ("%striggers/delete/profile/post" % libpath, []), ("%striggers/delete/system/pre" % libpath, []), ("%striggers/delete/system/post" % libpath, []), ("%striggers/delete/repo/pre" % libpath, []), ("%striggers/delete/repo/post" % libpath, []), ("%striggers/delete/mgmtclass/pre" % libpath, []), ("%striggers/delete/mgmtclass/post" % libpath,[]), ("%striggers/delete/package/pre" % libpath, []), ("%striggers/delete/package/post" % libpath, []), ("%striggers/delete/file/pre" % libpath, []), ("%striggers/delete/file/post" % libpath, []), ("%striggers/install/pre" % libpath, []), ("%striggers/install/post" % libpath, []), ("%striggers/install/firstboot" % libpath, []), ("%striggers/sync/pre" % libpath, []), ("%striggers/sync/post" % libpath, []), ("%striggers/change" % libpath, []), #Build empty directories to hold the database ("%sconfig" % libpath, []), ("%sconfig/distros.d" % libpath, []), ("%sconfig/images.d" % libpath, []), ("%sconfig/profiles.d" % libpath, []), ("%sconfig/repos.d" % libpath, []), ("%sconfig/systems.d" % libpath, []), ("%sconfig/mgmtclasses.d" % libpath, []), ("%sconfig/packages.d" % libpath, []), ("%sconfig/files.d" % libpath, []), #Build empty directories to hold koan localconfig ("/var/lib/koan/config", []), # logfiles ("%scobbler/kicklog" % logpath, []), ("%scobbler/syslog" % logpath, []), ("%shttpd/cobbler" % logpath, []), ("%scobbler/anamon" % logpath, []), ("%skoan" % logpath, []), ("%scobbler/tasks" % logpath, []), # spoolpaths ("spool/koan", []), # web page directories that we own ("%scobbler/localmirror" % webroot, []), ("%scobbler/repo_mirror" % webroot, []), ("%scobbler/ks_mirror" % webroot, []), ("%scobbler/ks_mirror/config" % webroot, []), ("%scobbler/links" % webroot, []), ("%scobbler/aux" % webroot, []), ("%scobbler/pub" % webroot, []), ("%scobbler/rendered" % webroot, []), ("%scobbler/images" % webroot, []), #A script that isn't really data, wsgi script ("%scobbler/svc/" % webroot, ["bin/services.py"]), # zone-specific templates directory ("%szone_templates" % etcpath, []), ]), ) cobbler-2.4.1/snippets/000077500000000000000000000000001227367477500150175ustar00rootroot00000000000000cobbler-2.4.1/snippets/SuSE/000077500000000000000000000000001227367477500156365ustar00rootroot00000000000000cobbler-2.4.1/snippets/SuSE/hosts.xml000066400000000000000000000014651227367477500175260ustar00rootroot00000000000000 127.0.0.1 localhost #if $getVar("system_name","") != "" #set $ikeys = $interfaces.keys() #for $iface in $ikeys #set $idata = $interfaces[$iface] #if $idata["interface_type"].lower() in ["","na","bridge","bond"] $idata["ip_address"] #set $my_interface_hostname_short = $idata["dns_name"].split('.',1)[:1][0] $idata["dns_name"].lower() $my_interface_hostname_short.lower() #end if #end for #end if cobbler-2.4.1/snippets/SuSE/kdump.xml000066400000000000000000000024361227367477500175050ustar00rootroot00000000000000 true 256M-2G:64M,2G-:128M file:///var/crash true 64 4 compressed 31 yes 3 cobbler-2.4.1/snippets/SuSE/networking.xml000066400000000000000000000105461227367477500205550ustar00rootroot00000000000000#set $hostname = $getVar("hostname","") #if $hostname == "" #set $hostname = $getVar("system_name","cobbler") #end if #if $getVar("dns_name_eth0","") != "" #set $my_hostname = $hostname.split('.',1)[:1][0] #set $my_domainname = $dns_name_eth0.split('.',1)[1:][0] #else #set $my_hostname = $hostname #set $my_domainname = "site" #end if false false false false $my_hostname 
$my_domainname #if $getVar("name_servers_search","") != "" #for $sd in $name_servers_search $sd #end for #end if #for $ns in $name_servers $ns #end for #if $getVar("system_name","") != "" #set $ikeys = $interfaces.keys() #for $iface in $ikeys #set $idata = $interfaces[$iface] #set $mac = $idata["mac_address"] #set $ip = $idata["ip_address"] #set $netmask = $idata["netmask"] #set $iface_type = $idata["interface_type"] #set $bonding_opts = $idata["bonding_opts"] #if $iface_type.lower() == "bond" yes $bonding_opts.lower() #set $loop_ikeys = $interfaces.keys() #set $loop_counter = 0 #for $loop_iface in $loop_ikeys #set $loop_idata = $interfaces[$loop_iface] #set $loop_interface_type = $loop_idata["interface_type"] #if $loop_interface_type.lower == "bond_slave" #if $loop_idata["interface_master"] != "" #if $loop_idata["interface_master"].lower() == $iface.lower() $loop_iface #set $loop_counter += 1 #end if #end if #end if #end for static $iface $ip $netmask auto no #end if #if $iface_type.lower() in ["bond_slave","bridge_slave"] none $iface off no #end if #if $iface_type.lower() in ["","na"] static $iface $mac.lower() $ip $netmask auto no #end if #end for #end if false #if $getVar("system_name","") != "" #set $ikeys = $interfaces.keys() #for $iface in $ikeys #set $idata = $interfaces[$iface] #set $mac = $idata["mac_address"] #set $interface_type = $idata["interface_type"] #if $mac.lower() != "" #if $interface_type.lower() not in ["bond","bridge"] $iface ATTR{address} $mac.lower() #end if #end if #end for #end if false #if $getVar("system_name","") != "" ## TODO: add in static routes here default - - $gateway #end if cobbler-2.4.1/snippets/SuSE/proxy.xml000066400000000000000000000004071227367477500175420ustar00rootroot00000000000000 true $proxy localhost, 127.0.0.1 cobbler-2.4.1/snippets/SuSE/suse_scriptwrapper.xml000066400000000000000000000006661227367477500223340ustar00rootroot00000000000000 cobbler-2.4.1/snippets/cobbler_register000066400000000000000000000006571227367477500202660ustar00rootroot00000000000000# Begin cobbler registration #if $getVar('system_name','') == '' #if $str($getVar('register_new_installs','')) in [ "1", "true", "yes", "y" ] if [ -f "/usr/bin/cobbler-register" ]; then cobbler-register --server=$server --fqdn '*AUTO*' --profile=$profile_name --batch fi #else # cobbler registration is disabled in /etc/cobbler/settings #end if #else # skipping for system-based installation #end if # End cobbler registration cobbler-2.4.1/snippets/download_config_files000066400000000000000000000012161227367477500212600ustar00rootroot00000000000000# Start download cobbler managed config files (if applicable) #for $tkey, $tpath in $template_files.items() #set $orig = $tpath #set $tpath = $tpath.replace("_","__").replace("/","_") #if $getVar("system_name","") != "" #set $ttype = "system" #set $tname = $system_name #else #set $ttype = "profile" #set $tname = $profile_name #end if #set $turl = "http://"+$http_server+"/cblr/svc/op/template/"+$ttype+"/"+$tname+"/path/"+$tpath #if $orig.startswith("/") mkdir -p `dirname $orig` wget "$turl" --output-document="$orig" #end if #end for # End download cobbler managed config files (if applicable) cobbler-2.4.1/snippets/download_config_files_deb000066400000000000000000000015411227367477500220730ustar00rootroot00000000000000## Start download cobbler managed config files (if applicable) #import os #import stat #set $cmd = '\\' #for $tkey, $tpath in $template_files.items() #set $orig = $tpath #set $tpath = $tpath.replace("_","__").replace("/","_") #if 
$getVar("system_name","") != "" #set $ttype = "system" #set $tname = $system_name #else #set $ttype = "profile" #set $tname = $profile_name #end if #set $turl = "http://"+$http_server+"/cblr/svc/op/template/"+$ttype+"/"+$tname+"/path/"+$tpath #if $orig.startswith("/") #set $perms = oct(stat.S_IMODE(os.stat($tkey).st_mode))[-3:] #set $cmd = $cmd + "\n" + "mkdir -p " + "`dirname " + $orig + "`; wget -nv " + $turl + " --output-document=" + $orig + "; chmod " + $perms +" " + $orig +"; \\" #end if #end for #echo $cmd ## End download cobbler managed config files (if applicable) cobbler-2.4.1/snippets/func_install_if_enabled000066400000000000000000000000751227367477500215550ustar00rootroot00000000000000#if $str($getVar('func_auto_setup','')) == "1" func #end if cobbler-2.4.1/snippets/func_register_if_enabled000066400000000000000000000007001227367477500217260ustar00rootroot00000000000000 #if $str($getVar('func_auto_setup','')) == "1" # Start func registration section /sbin/chkconfig --level 345 funcd on cat < /etc/func/minion.conf [main] log_level = INFO acl_dir = /etc/func/minion-acl.d listen_addr = listen_port = 51234 EOFM cat < /etc/certmaster/minion.conf [main] certmaster = $func_master certmaster_port = 51235 log_level = DEBUG cert_dir = /etc/pki/certmaster EOCM # End func registration section #end if cobbler-2.4.1/snippets/keep_cfengine_keys000066400000000000000000000047121227367477500205630ustar00rootroot00000000000000#raw # Nifty trick to restore cfengine keys without using a nochroot %post echo "Saving cfengine keys..." > /dev/ttyS0 SEARCHDIR=/var/cfengine/ppkeys TEMPDIR=cfengine PATTERN=localhost keys_found=no # /var could be a separate partition SHORTDIR=${SEARCHDIR#/var} if [ $SHORTDIR = $SEARCHDIR ]; then SHORTDIR='' fi insmod /lib/jbd.o insmod /lib/ext3.o mkdir -p /tmp/$TEMPDIR function findkeys { for disk in $DISKS; do name=$(basename $disk) tmpdir=$(mktemp -d $name.XXXXXX) mkdir -p /tmp/$tmpdir mount $disk /tmp/$tmpdir if [ $? -ne 0 ]; then # Skip to the next partition if the mount fails rm -rf /tmp/$tmpdir continue fi # Copy current host keys out to be reused if [ -d /tmp/$tmpdir$SEARCHDIR ] && cp -a /tmp/$tmpdir$SEARCHDIR/${PATTERN}* /tmp/$TEMPDIR; then keys_found="yes" umount /tmp/$tmpdir rm -r /tmp/$tmpdir break elif [ -n "$SHORTDIR" ] && [ -d /tmp/$tmpdir$SHORTDIR ] && cp -a /tmp/$tmpdir$SHORTDIR/${PATTERN}* /tmp/$TEMPDIR; then keys_found="yes" umount /tmp/$tmpdir rm -r /tmp/$tmpdir break fi umount /tmp/$tmpdir rm -r /tmp/$tmpdir done } DISKS=$(awk '{if ($NF ~ "^[a-zA-Z].*[0-9]$" && $NF !~ "c[0-9]+d[0-9]+$" && $NF !~ "^loop.*") print "/dev/"$NF}' /proc/partitions) # In the awk line above we want to make list of partitions, but not devices/controllers # cciss raid controllers have partitions like /dev/cciss/cNdMpL, where N,M,L - some digits, we want to make sure 'pL' is there # No need to scan loopback niether. # Try to find the keys on ordinary partitions findkeys # Try software RAID if [ "$keys_found" = "no" ]; then if mdadm -As; then DISKS=$(awk '/md/{print "/dev/"$1}' /proc/mdstat) findkeys fi fi # Try LVM if that didn't work if [ "$keys_found" = "no" ]; then lvm lvmdiskscan vgs=$(lvm vgs | tail -n +2 | awk '{ print $1 }') for vg in $vgs; do # Activate any VG we found lvm vgchange -ay $vg done DISKS=$(lvm lvs | tail -n +2 | awk '{ print "/dev/" $2 "/" $1 }') findkeys # And clean up.. 
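The download_config_files snippets earlier in this section pull cobbler-managed files back down over HTTP by flattening each target path into a single URL component. A stand-alone sketch of that mangling, with made-up server and file names for illustration:

# Mirrors the tpath.replace("_","__").replace("/","_") mangling used by the
# download_config_files templates; all values here are placeholders.
def template_url(http_server, obj_type, obj_name, path):
    mangled = path.replace("_", "__").replace("/", "_")
    return "http://%s/cblr/svc/op/template/%s/%s/path/%s" % (
        http_server, obj_type, obj_name, mangled)

print template_url("cobbler.example.com", "system", "testsystem0", "/etc/my_app.conf")
# -> http://cobbler.example.com/cblr/svc/op/template/system/testsystem0/path/_etc_my__app.conf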
for vg in $vgs; do lvm vgchange -an $vg done fi # Loop until the corresponding rpm is installed if [ "$keys_found" = "yes" ]; then while : ; do sleep 10 if [ -d /mnt/sysimage$SEARCHDIR ] ; then cp -af /tmp/$TEMPDIR/${PATTERN}* /mnt/sysimage$SEARCHDIR logger "keys copied to newly installed system" break fi done & fi #end raw cobbler-2.4.1/snippets/keep_files000066400000000000000000000103341227367477500170510ustar00rootroot00000000000000## This snippet preserves files during re-build. ## It supersedes other similar snippets - keep_*_keys. ## Put it in %pre section of the kickstart template file ## It uses preserve_files field which should contain a list of items to preserve ## This field for now could contain any of the following: ## 'ssh', 'cfengine', 'rhn' in any order ## 'rhn' part of this snippet should NOT be used with systems subscribed ## to Red Hat Satellite Server or Spacewalk as these ## have a concept of "reactivation keys" to keep the systems ## appearing to be the same. Also do not use if changing ## base channels, i.e. RHEL4 -> RHEL5 upgrades. ## #if $getVar('$preserve_files','') != '' #set $preserve_files = $getVar('$preserve_files','') preserve_files = $preserve_files #raw # Nifty trick to restore keys without using a nochroot %post echo "Saving keys..." > /dev/ttyS0 insmod /lib/jbd.o insmod /lib/ext3.o function findkeys { for disk in $DISKS; do name=$(basename $disk) tmpdir=$(mktemp -d $name.XXXXXX) mkdir -p /tmp/$tmpdir mount $disk /tmp/$tmpdir if [ $? -ne 0 ]; then # Skip to the next partition if the mount fails rm -rf /tmp/$tmpdir continue fi # Copy current host keys out to be reused if [ -d /tmp/$tmpdir$SEARCHDIR ] && cp -a /tmp/$tmpdir$SEARCHDIR/${PATTERN}* /tmp/$TEMPDIR; then keys_found="yes" umount /tmp/$tmpdir rm -r /tmp/$tmpdir break elif [ -n "$SHORTDIR" ] && [ -d /tmp/$tmpdir$SHORTDIR ] && cp -a /tmp/$tmpdir$SHORTDIR/${PATTERN}* /tmp/$TEMPDIR; then keys_found="yes" umount /tmp/$tmpdir rm -r /tmp/$tmpdir break fi umount /tmp/$tmpdir rm -r /tmp/$tmpdir done } function search_for_keys { SEARCHDIR=$1 TEMPDIR=$2 PATTERN=$3 keys_found=no # /var could be a separate partition SHORTDIR=${SEARCHDIR#/var} if [ $SHORTDIR = $SEARCHDIR ]; then SHORTDIR='' fi mkdir -p /tmp/$TEMPDIR DISKS=$(awk '{if ($NF ~ "^[a-zA-Z].*[0-9]$" && $NF !~ "c[0-9]+d[0-9]+$" && $NF !~ "^loop.*") print "/dev/"$NF}' /proc/partitions) # In the awk line above we want to make list of partitions, but not devices/controllers # cciss raid controllers have partitions like /dev/cciss/cNdMpL, where N,M,L - some digits, we want to make sure 'pL' is there # No need to scan loopback niether. # Try to find the keys on ordinary partitions findkeys # Try software RAID if [ "$keys_found" = "no" ]; then if mdadm -As; then DISKS=$(awk '/md/{print "/dev/"$1}' /proc/mdstat) findkeys fi fi # Try LVM if that didn't work if [ "$keys_found" = "no" ]; then lvm lvmdiskscan vgs=$(lvm vgs | tail -n +2 | awk '{ print $1 }') for vg in $vgs; do # Activate any VG we found lvm vgchange -ay $vg done DISKS=$(lvm lvs | tail -n +2 | awk '{ print "/dev/" $2 "/" $1 }') findkeys # And clean up.. 
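The keep_* snippets in this section all reuse the same awk filter to turn /proc/partitions into a list of candidate block devices before hunting for keys. A rough Python equivalent of that filter, for readers who want to see what it selects (Linux-only, illustrative):

# Roughly mirrors the awk filter used by the keep_* key-preservation snippets:
# keep partition names like sda1/vda2, drop whole cciss devices (names ending
# in cNdM without a pL partition suffix) and loop devices.
import re

def candidate_partitions():
    disks = []
    for line in open("/proc/partitions"):
        fields = line.split()
        if not fields:
            continue
        name = fields[-1]
        if re.match(r"^[a-zA-Z].*[0-9]$", name) \
                and not re.search(r"c[0-9]+d[0-9]+$", name) \
                and not name.startswith("loop"):
            disks.append("/dev/" + name)
    return disks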
for vg in $vgs; do lvm vgchange -an $vg done fi } function restore_keys { SEARCHDIR=$1 TEMPDIR=$2 PATTERN=$3 # Loop until the corresponding rpm is installed if the keys are saved if [ "$keys_found" = "yes" ] && [ -f /tmp/$TEMPDIR/${PATTERN}* ]; then while : ; do sleep 10 if [ -d /mnt/sysimage$SEARCHDIR ] ; then cp -af /tmp/$TEMPDIR/${PATTERN}* /mnt/sysimage$SEARCHDIR logger "$TEMPDIR keys copied to newly installed system" break fi done & fi } for key in $preserve_files do if [ $key = 'ssh' ]; then search_for_keys '/etc/ssh' 'ssh' 'ssh_host_' elif [ $key = 'cfengine' ]; then search_for_keys '/var/cfengine/ppkeys' 'cfengine' 'localhost' elif [ $key = 'rhn' ]; then search_for_keys '/etc/sysconfig/rhn', 'rhn', '*' else echo "No keys to save!" > /dev/ttyS0 fi done # now restore keys if found for key in $preserve_files do if [ $key = 'ssh' ]; then restore_keys '/etc/ssh' 'ssh' 'ssh_host_' elif [ $key = 'cfengine' ]; then restore_keys '/var/cfengine/ppkeys' 'cfengine' 'localhost' elif [ $key = 'rhn' ]; then restore_keys '/etc/sysconfig/rhn', 'rhn', '*' else echo "Nothing to restore!" > /dev/ttyS0 fi done #end raw #end if cobbler-2.4.1/snippets/keep_rhn_keys000066400000000000000000000050611227367477500175720ustar00rootroot00000000000000#raw ## this snippet should NOT be used with systems subscribed ## to Red Hat Satellite Server or Spacewalk as these ## have a concept of "reactivation keys" to keep the systems ## appearing to be the same. Also do not use if changing ## base channels, i.e. RHEL4 -> RHEL5 upgrades. echo "Saving RHN keys..." > /dev/ttyS0 rhn_keys_found=no insmod /lib/jbd.o insmod /lib/ext3.o mkdir -p /tmp/rhn drives=$(list-harddrives | awk '{print $1}') for disk in $drives; do DISKS="$DISKS $(fdisk -l /dev/$disk | awk '/^\/dev/{print $1}')" done # Try to find the keys on ordinary partitions for disk in $DISKS; do name=$(basename $disk) mkdir -p /tmp/$name mount $disk /tmp/$name [ $? -eq 0 ] || continue # Skip to the next partition if the mount fails # Copy current RHN host keys out to be reused if [ -d /tmp/${name}/etc/sysconfig/rhn ]; then cp -a /tmp/${name}/etc/sysconfig/rhn/install-num /tmp/rhn cp -a /tmp/${name}/etc/sysconfig/rhn/systemid /tmp/rhn cp -a /tmp/${name}/etc/sysconfig/rhn/up2date /tmp/rhn rhn_keys_found="yes" umount /tmp/$name break fi umount /tmp/$name rm -r /tmp/$name done # Try LVM if that didn't work if [ "$rhn_keys_found" = "no" ]; then lvm lvmdiskscan vgs=$(lvm vgs | tail -n +2 | awk '{ print $1 }') for vg in $vgs; do # Activate any VG we found lvm vgchange -ay $vg done lvs=$(lvm lvs | tail -n +2 | awk '{ print "/dev/" $2 "/" $1 }') for lv in $lvs; do tmpdir=$(mktemp -d findkeys.XXXXXX) mkdir -p /tmp/${tmpdir} mount $lv /tmp/${tmpdir} || continue # Skip to next volume if this fails # Let's see if the keys are in there if [ -d /tmp/${tmpdir}/etc/sysconfig/rhn ]; then cp -a /tmp/${tmpdir}/etc/sysconfig/rhn/install-num* /tmp/rhn/ cp -a /tmp/${tmpdir}/etc/sysconfig/rhn/systemid* /tmp/rhn/ cp -a /tmp/${tmpdir}/etc/sysconfig/rhn/up2date /tmp/rhn/ rhn_keys_found="yes" umount /tmp/${tmpdir} break # We're done! fi umount /tmp/${tmpdir} rm -r /tmp/${tmpdir} done # And clean up.. 
for vg in $vgs; do lvm vgchange -an $vg done fi # Loop until the RHN rpm is installed if [ "$rhn_keys_found" = "yes" ]; then while : ; do sleep 10 if [ -d /mnt/sysimage/etc/sysconfig/rhn ] ; then cp -af /tmp/rhn/* /mnt/sysimage/etc/sysconfig/rhn/ logger "RHN KEY copied to newly installed system" break fi done & fi #end raw cobbler-2.4.1/snippets/keep_ssh_host_keys000066400000000000000000000061231227367477500206350ustar00rootroot00000000000000#raw # Nifty trick to restore keys without using a nochroot %post echo "Saving keys..." > /dev/ttyS0 SEARCHDIR=/etc/ssh TEMPDIR=ssh PATTERN=ssh_host_ keys_found=no # /var could be a separate partition SHORTDIR=${SEARCHDIR#/var} if [ $SHORTDIR = $SEARCHDIR ]; then SHORTDIR='' fi insmod /lib/jbd.o insmod /lib/ext3.o mkdir -p /tmp/$TEMPDIR function findkeys { for disk in $DISKS; do name=$(basename $disk) tmpdir=$(mktemp -d $name.XXXXXX) mkdir -p /tmp/$tmpdir mount $disk /tmp/$tmpdir if [ $? -ne 0 ]; then # Skip to the next partition if the mount fails rm -rf /tmp/$tmpdir continue fi # Copy current host keys out to be reused if [ -d /tmp/$tmpdir$SEARCHDIR ] && cp -a /tmp/$tmpdir$SEARCHDIR/${PATTERN}* /tmp/$TEMPDIR; then keys_found="yes" umount /tmp/$tmpdir rm -r /tmp/$tmpdir break elif [ -n "$SHORTDIR" ] && [ -d /tmp/$tmpdir$SHORTDIR ] && cp -a /tmp/$tmpdir$SHORTDIR/${PATTERN}* /tmp/$TEMPDIR; then keys_found="yes" umount /tmp/$tmpdir rm -r /tmp/$tmpdir break fi umount /tmp/$tmpdir rm -r /tmp/$tmpdir done } DISKS=$(awk '{if ($NF ~ "^[a-zA-Z].*[0-9]$" && $NF !~ "c[0-9]+d[0-9]+$" && $NF !~ "^loop.*") print "/dev/"$NF}' /proc/partitions) # In the awk line above we want to make list of partitions, but not devices/controllers # cciss raid controllers have partitions like /dev/cciss/cNdMpL, where N,M,L - some digits, we want to make sure 'pL' is there # No need to scan loopback niether. # Try to find the keys on ordinary partitions findkeys # Try software RAID if [ "$keys_found" = "no" ]; then if mdadm -As; then DISKS=$(awk '/md/{print "/dev/"$1}' /proc/mdstat) findkeys # unmount and deactivate all md for md in $DISKS ; do umount $md mdadm -S $md done fi fi # Try LVM if that didn't work if [ "$keys_found" = "no" ]; then lvm lvmdiskscan vgs=$(lvm vgs | tail -n +2 | awk '{ print $1 }') for vg in $vgs; do # Activate any VG we found lvm vgchange -ay $vg done DISKS=$(lvm lvs | tail -n +2 | awk '{ print "/dev/" $2 "/" $1 }') findkeys # And clean up.. 
for vg in $vgs; do lvm vgchange -an $vg done fi # Loop until the corresponding rpm is installed if [ "$keys_found" = "yes" ]; then if [ "$PATTERN" = "ssh_host_" ]; then while : ; do sleep 10 if [ -f /etc/ssh/ssh_host_key ] ; then cp -af /tmp/$TEMPDIR/${PATTERN}* $SEARCHDIR break fi done 1>/dev/null 2>/dev/null & fi while : ; do sleep 10 if [ -d /mnt/sysimage$SEARCHDIR ] ; then cp -af /tmp/$TEMPDIR/${PATTERN}* /mnt/sysimage$SEARCHDIR if [ -e "/sbin/restorecon"]; then /sbin/restorecon -r /etc/ssh fi logger "keys copied to newly installed system" break fi done 1>/dev/null 2>/dev/null & fi #end raw cobbler-2.4.1/snippets/kickstart_done000066400000000000000000000107031227367477500177470ustar00rootroot00000000000000#set system_name = $getVar('system_name','') #set profile_name = $getVar('profile_name','') #set breed = $getVar('breed','') #set os_version = $getVar('os_version','') #set srv = $getVar('http_server','') #set kickstart = $getVar('kickstart','') #set run_install_triggers = $str($getVar('run_install_triggers','')) #set pxe_just_once = $str($getVar('pxe_just_once','')) #set nopxe = "" #set saveks = "" #set runpost = "" #if $system_name != '' ## PXE JUST ONCE #if $pxe_just_once in [ "1", "true", "yes", "y" ] #if $breed == 'redhat' #set nopxe = "\nwget \"http://%s/cblr/svc/op/nopxe/system/%s\" -O /dev/null" % (srv, system_name) #else if $breed == 'vmware' and $os_version == 'esx4' #set nopxe = "\ncurl \"http://%s/cblr/svc/op/nopxe/system/%s\" -o /dev/null" % (srv, system_name) #else if $breed == 'vmware' #set nopxe = "\nwget \"http://%s/cblr/svc/op/nopxe/system/%s\" -O /dev/null" % (srv, system_name) #else if $breed == 'debian' or $breed == 'ubuntu' #set nopxe = "\nwget \"http://%s/cblr/svc/op/nopxe/system/%s\" -O /dev/null" % (srv, system_name) #else ## default to wget #set nopxe = "wget \"http://%s/cblr/svc/op/nopxe/system/%s\" -O /dev/null;" % (srv, system_name) #end if #end if ## SAVE KICKSTART #if $kickstart != '' #if $breed == 'redhat' #set saveks = "\nwget \"http://%s/cblr/svc/op/ks/%s/%s\" -O /root/cobbler.ks" % (srv, "system", system_name) #else if $breed == 'vmware' and $os_version == 'esx4' #set saveks = "\ncurl \"http://%s/cblr/svc/op/ks/%s/%s\" -o /root/cobbler.ks" % (srv, "system", system_name) #else if $breed == 'vmware' #set saveks = "\nwget \"http://%s/cblr/svc/op/ks/%s/%s\" -O /var/log/cobbler.ks" % (srv, "system", system_name) #else if $breed == 'debian' or $breed == 'ubuntu' #set saveks = "\nwget \"http://%s/cblr/svc/op/ks/%s/%s\" -O /var/log/cobbler.seed" % (srv, "system", system_name) #end if #end if ## RUN POST TRIGGER #if $run_install_triggers in [ "1", "true", "yes", "y" ] #if $breed == 'redhat' #set runpost = "\nwget \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -O /dev/null" % (srv, "system", system_name) #else if $breed == 'vmware' and $os_version == 'esx4' #set runpost = "\ncurl \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -o /dev/null" % (srv, "system", system_name) #else if $breed == 'vmware' #set runpost = "\nwget \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -O /dev/null" % (srv, "system", system_name) #else if $breed == 'debian' or $breed == 'ubuntu' #set runpost = "\nwget \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -O /dev/null" % (srv, "system", system_name) #end if #end if #else if $profile_name != '' ## SAVE KICKSTART #if $kickstart != '' #if $breed == 'redhat' #set saveks = "\nwget \"http://%s/cblr/svc/op/ks/%s/%s\" -O /root/cobbler.ks" % (srv, "profile", profile_name) #else if $breed == 'vmware' and $os_version == 'esx4' #set saveks = "\ncurl 
\"http://%s/cblr/svc/op/ks/%s/%s\" -o /root/cobbler.ks" % (srv, "profile", profile_name) #else if $breed == 'vmware' #set saveks = "\nwget \"http://%s/cblr/svc/op/ks/%s/%s\" -O /var/log/cobbler.ks" % (srv, "profile", profile_name) #else if $breed == 'debian' or $breed == 'ubuntu' #set saveks = "\nwget \"http://%s/cblr/svc/op/ks/%s/%s\" -O /var/log/cobbler.seed" % (srv, "profile", profile_name) #end if #end if ## RUN POST TRIGGER #if $run_install_triggers in [ "1", "true", "yes", "y" ] #if $breed == 'redhat' #set runpost = "\nwget \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -O /dev/null" % (srv, "profile", profile_name) #else if $breed == 'vmware' and $os_version == 'esx4' #set runpost = "\ncurl \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -o /dev/null" % (srv, "profile", profile_name) #else if $breed == 'vmware' #set runpost = "\nwget \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -O /dev/null" % (srv, "profile", profile_name) #else if $breed == 'debian' or $breed == 'ubuntu' #set runpost = "\nwget \"http://%s/cblr/svc/op/trig/mode/post/%s/%s\" -O /dev/null" % (srv, "profile", profile_name) #end if #end if #end if #echo $saveks #echo $runpost #echo $nopxe cobbler-2.4.1/snippets/kickstart_start000066400000000000000000000027641227367477500201670ustar00rootroot00000000000000#set system_name = $getVar('system_name','') #set profile_name = $getVar('profile_name','') #set breed = $getVar('breed','') #set srv = $getVar('http_server','') #set run_install_triggers = $str($getVar('run_install_triggers','')) #set runpre = "" #if $system_name != '' ## RUN PRE TRIGGER #if $run_install_triggers in [ "1", "true", "yes", "y" ] #if $breed == 'redhat' #set runpre = "\nwget \"http://%s/cblr/svc/op/trig/mode/pre/%s/%s\" -O /dev/null" % (srv, "system", system_name) #else if $breed == 'vmware' #set runpre = "\nwget \"http://%s/cblr/svc/op/trig/mode/pre/%s/%s\" -O /dev/null" % (srv, "system", system_name) #else if $breed == 'debian' or $breed == 'ubuntu' #set runpre = "wget \"http://%s/cblr/svc/op/trig/mode/pre/%s/%s\" -O /dev/null" % (srv, "system", system_name) #else if $breed == 'vmware' #set runpre = "wget \"http://%s/cblr/svc/op/trig/mode/pre/%s/%s\" -O /dev/null" % (srv, "system", system_name) #end if #end if #else if $profile_name != '' ## RUN PRE TRIGGER #if $run_install_triggers in [ "1", "true", "yes", "y" ] #if $breed == 'redhat' #set runpre = "\nwget \"http://%s/cblr/svc/op/trig/mode/pre/%s/%s\" -O /dev/null" % (srv, "profile", profile_name) #else if $breed == 'vmware' #set runpre = "\nwget \"http://%s/cblr/svc/op/trig/mode/pre/%s/%s\" -O /dev/null" % (srv, "profile", profile_name) #end if #end if #end if #echo $runpre cobbler-2.4.1/snippets/koan_environment000066400000000000000000000002771227367477500203240ustar00rootroot00000000000000# Start koan environment setup echo "export COBBLER_SERVER=$server" > /etc/profile.d/cobbler.sh echo "setenv COBBLER_SERVER $server" > /etc/profile.d/cobbler.csh # End koan environment setup cobbler-2.4.1/snippets/late_apt_repo_config000066400000000000000000000012041227367477500211020ustar00rootroot00000000000000# start late_apt_repo_config cat</etc/apt/sources.list deb http://$http_server/cblr/links/$distro_name $os_version main #set $repo_data = $getVar("repo_data",[]) #for $repo in $repo_data #for $dist in $repo.apt_dists #set $comps = " ".join($repo.apt_components) #if $repo.comment != "" # ${repo.comment} #end if #if $repo.arch == "x86_64" #set $rarch = "[arch=amd64]" #else #set $rarch = "[arch=%s]" % $repo.arch #end if #if $repo.mirror_locally deb ${rarch} 
http://$http_server/cblr/repo_mirror/${repo.name} $dist $comps #else deb ${rarch} ${repo.mirror} $dist $comps #end if #end for #end for EOF # end late_apt_repo_config cobbler-2.4.1/snippets/log_ks_post000066400000000000000000000000501227367477500172600ustar00rootroot00000000000000set -x -v exec 1>/root/ks-post.log 2>&1 cobbler-2.4.1/snippets/log_ks_pre000066400000000000000000000004311227367477500170640ustar00rootroot00000000000000set -x -v exec 1>/tmp/ks-pre.log 2>&1 # Once root's homedir is there, copy over the log. while : ; do sleep 10 if [ -d /mnt/sysimage/root ]; then cp /tmp/ks-pre.log /mnt/sysimage/root/ logger "Copied %pre section log to system" break fi done & cobbler-2.4.1/snippets/main_partition_select000066400000000000000000000000561227367477500213170ustar00rootroot00000000000000# partition selection %include /tmp/partinfo cobbler-2.4.1/snippets/network_config000066400000000000000000000073201227367477500177620ustar00rootroot00000000000000## start of cobbler network_config generated code #if $getVar("system_name","") != "" #set ikeys = $interfaces.keys() #import re #set $vlanpattern = $re.compile("[a-zA-Z0-9]+[\.:][0-9]+") ## ## Determine if we should use the MAC address to configure the interfaces first ## Only physical interfaces are required to have a MAC address #set $configbymac = True #for $iname in $ikeys #set $idata = $interfaces[$iname] #if $idata["mac_address"] == "" and not $vlanpattern.match($iname) and not $idata["interface_type"].lower() in ("master","bond","bridge","bonded_bridge_slave") #set $configbymac = False #end if #end for #set $i = -1 #if $configbymac # Using "new" style networking config, by matching networking information to the physical interface's # MAC-address %include /tmp/pre_install_network_config #else # Using "old" style networking config. Make sure all MAC-addresses are in cobbler to use the new-style config #set $vlanpattern = $re.compile("[a-zA-Z0-9]+[\.:][0-9]+") #for $iname in $ikeys #set $idata = $interfaces[$iname] #set $mac = $idata["mac_address"] #set $static = $idata["static"] #set $ip = $idata["ip_address"] #set $netmask = $idata["netmask"] #set $type = $idata["interface_type"] #if $vlanpattern.match($iname) or $type in ("master","bond","bridge","bonded_bridge_slave") ## If this is a VLAN interface, skip it, anaconda doesn't know ## about VLANs. #set $is_vlan = "true" #else #set $is_vlan = "false" ## Only up the counter on physical interfaces! 
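The network_config snippet above decides whether an interface name refers to a VLAN or alias device purely from its name, using the regular expression it compiles at the top. A quick stand-alone check of that pattern (the interface names are examples only):

# Same pattern the network_config templates compile; eth0.100 and eth1:2 are
# treated as VLAN/alias devices, plain eth0 and bond0 are not.
import re

vlanpattern = re.compile("[a-zA-Z0-9]+[\.:][0-9]+")
for name in ("eth0", "eth0.100", "bond0", "eth1:2"):
    print name, bool(vlanpattern.match(name))
# eth0 False, eth0.100 True, bond0 False, eth1:2 True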
#set $i = $i + 1 #end if #if $mac != "" or $ip != "" and $is_vlan == "false" #if $static == True: #if $ip != "": #set $network_str = "--bootproto=static" #set $network_str = $network_str + " --ip=" + $ip #if $netmask != "": #set $network_str = $network_str + " --netmask=" + $netmask #end if #if $gateway != "": #set $network_str = $network_str + " --gateway=" + $gateway #end if #if $name_servers and $name_servers[0] != "": ## Anaconda only allows one nameserver #set $network_str = $network_str + " --nameserver=" + $name_servers[0] #end if #else #set $network_str = "--bootproto=none" #end if #else #set $network_str = "--bootproto=dhcp" #end if #if $hostname != "" #set $network_str = $network_str + " --hostname=" + $hostname #end if #else #set $network_str = "--bootproto=dhcp" #end if ## network details are populated from the cobbler system object #if $is_vlan == "false" #if $getVar('os_version','').find("rhel3") == -1 network $network_str --device=$iname --onboot=on #else network $network_str --device=$iname #end if #end if #end for #end if #else ## profile based install so just provide one interface for starters #if $getVar('os_version','').find("rhel3") == -1 network --bootproto=dhcp --device=eth0 --onboot=on #else network $network_str --device=$iname #end if #end if ## end of cobbler network_config generated code cobbler-2.4.1/snippets/network_config_esx000066400000000000000000000035771227367477500206530ustar00rootroot00000000000000#import re #if $getVar("system_name","") != "" #set ikeys = $interfaces.keys() #set $vlanpattern = $re.compile("[a-zA-Z0-9]+[\.:][0-9]+") #for $iname in $ikeys #set $idata = $interfaces[$iname] #set $mac = $idata["mac_address"] #set $static = $idata["static"] #set $ip = $idata["ip_address"] #set $netmask = $idata["netmask"] #set $type = $idata["interface_type"] #if $vlanpattern.match($iname) or $type in ("master","bond","bridge") ## If this is a VLAN interface, skip it, anaconda doesn't know ## about VLANs. #set $is_vlan = "true" #else #set $is_vlan = "false" #end if #if $mac != "" or $ip != "" and $is_vlan == "false" #if $static == True: #set $network_str = "--bootproto=static" #if $ip != "": #set $network_str = $network_str + " --ip=" + $ip #if $netmask != "": #set $network_str = $network_str + " --netmask=" + $netmask #end if #if $gateway != "": #set $network_str = $network_str + " --gateway=" + $gateway #end if #if $name_servers and $name_servers[0] != "": ## Anaconda only allows one nameserver #set $network_str = $network_str + " --nameserver=" + $name_servers[0] #end if #end if #else #set $network_str = "--bootproto=dhcp" #end if #if $hostname != "" #set $network_str = $network_str + " --hostname=" + $hostname #end if #else #set $network_str = "--bootproto=dhcp" #end if network $network_str --device=$mac #end for #end if cobbler-2.4.1/snippets/network_config_esxi000066400000000000000000000043451227367477500210160ustar00rootroot00000000000000#import re #if $getVar("system_name","") != "" #set ikeys = $interfaces.keys() #set $vlanpattern = $re.compile("[a-zA-Z0-9]+[\.:][0-9]+") #for $iname in $ikeys #set $idata = $interfaces[$iname] #set $mac = $idata["mac_address"] #set $static = $idata["static"] #set $ip = $idata["ip_address"] #set $netmask = $idata["netmask"] #set $type = $idata["interface_type"] #set $vlanid = "" #if $vlanpattern.match($iname) or $type in ("master","bond","bridge") ## If this is a VLAN interface, skip it, anaconda doesn't know ## about VLANs. 
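These network_config templates assemble the anaconda "network" line for each interface from the system record. A simplified Python rendering of that decision logic (the field values are invented, and the real templates additionally handle VLANs, bonds and per-interface device naming):

# Simplified sketch of how the network_config* templates build the
# "network ..." kickstart line; all values below are placeholders.
def network_line(static, ip, netmask, gateway, name_servers, hostname):
    if static and ip:
        s = "--bootproto=static --ip=" + ip
        if netmask:
            s += " --netmask=" + netmask
        if gateway:
            s += " --gateway=" + gateway
        if name_servers and name_servers[0]:
            s += " --nameserver=" + name_servers[0]  # anaconda accepts only one
    else:
        s = "--bootproto=dhcp"
    if hostname:
        s += " --hostname=" + hostname
    return s

print "network %s --device=eth0" % network_line(
    True, "10.0.0.5", "255.255.255.0", "10.0.0.1",
    ["10.0.0.2"], "node1.example.com")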
#set $is_vlan = "true" #set $vlanid = " --vlanid=" + $iname.split('.')[1] #set $iname = $iname.split('.')[0] #else #set $is_vlan = "false" #end if #if $mac != "" or $ip != "" and $is_vlan == "false" #if $static == True: #set $network_str = "--bootproto=static" #if $ip != "": #set $network_str = $network_str + " --ip=" + $ip #if $netmask != "": #set $network_str = $network_str + " --netmask=" + $netmask #end if #if $gateway != "": #set $network_str = $network_str + " --gateway=" + $gateway #end if #if $name_servers and $name_servers[0] != "": #set $network_str = $network_str + " --nameserver=" + $name_servers[0] #if len($name_servers) > 1 and $name_servers[1] != "": #set $network_str += "," + $name_servers[1] #end if #end if #end if #else #set $network_str = "--bootproto=dhcp" #end if #if $hostname != "" #set $network_str = $network_str + " --hostname=" + $hostname #end if #else #set $network_str = "--bootproto=dhcp" #end if #if $vlanid != "" #set $network_str = $network_str + $vlanid #end if network $network_str --device=$iname #end for #end if cobbler-2.4.1/snippets/partition_select000066400000000000000000000014011227367477500203060ustar00rootroot00000000000000%include /tmp/partinfo %pre # Determine how many drives we have set \$(list-harddrives) let numd=\$#/2 d1=\$1 d2=\$3 # Determine architecture-specific partitioning needs EFI_PART="" PPC_PREP_PART="" BOOT_PART="" case \$(uname -m) in ia64) EFI_PART="part /boot/efi --fstype vfat --size 200 --recommended" ;; ppc*) PPC_PREP_PART="part None --fstype 'PPC PReP Boot' --size 8" BOOT_PART="part /boot --fstype ext3 --size 200 --recommended" ;; *) BOOT_PART="part /boot --fstype ext3 --size 200 --recommended" ;; esac cat << EOF > /tmp/partinfo \$EFI_PART \$PPC_PREP_PART \$BOOT_PART part / --fstype ext3 --size=1024 --grow --ondisk=\$d1 --asprimary part swap --recommended --ondisk=\$d1 --asprimary EOF cobbler-2.4.1/snippets/post_anamon000066400000000000000000000013031227367477500172550ustar00rootroot00000000000000#if $str($getVar('anamon_enabled','')) == "1" ## install anamon script wget -O /usr/local/sbin/anamon "http://$server:$http_port/cobbler/aux/anamon" ## install anamon system service wget -O /etc/rc.d/init.d/anamon "http://$server:$http_port/cobbler/aux/anamon.init" ## adjust permissions chmod 755 /etc/rc.d/init.d/anamon /usr/local/sbin/anamon test -d /selinux && restorecon /etc/rc.d/init.d/anamon /usr/local/sbin/anamon ## enable the script chkconfig --add anamon ## configure anamon service cat << __EOT__ > /etc/sysconfig/anamon COBBLER_SERVER="$server" COBBLER_PORT="$http_port" COBBLER_NAME="$name" LOGFILES="/var/log/boot.log /var/log/messages /var/log/dmesg /root/ks-post.log" __EOT__ #end if cobbler-2.4.1/snippets/post_install_kernel_options000066400000000000000000000010611227367477500225660ustar00rootroot00000000000000#if $getVar('kernel_options_post','') != '' # Start post install kernel options update if [ -f /etc/default/grub ]; then TMP_GRUB=\$(gawk 'match(\$0,/^GRUB_CMDLINE_LINUX="([^"]+)"/,a) {printf("%s\n",a[1])}' /etc/default/grub) sed -i '/^GRUB_CMDLINE_LINUX=/d' /etc/default/grub echo "GRUB_CMDLINE_LINUX=\"\$TMP_GRUB $kernel_options_post\"" >> /etc/default/grub grub2-mkconfig -o /boot/grub2/grub.cfg else /sbin/grubby --update-kernel=\$(/sbin/grubby --default-kernel) --args="$kernel_options_post" fi # End post install kernel options update #end if cobbler-2.4.1/snippets/post_install_network_config000066400000000000000000000364211227367477500225610ustar00rootroot00000000000000# Start post_install_network_config generated code 
#if $getVar("system_name","") != "" ## this is being provisioned by system records, not profile records ## so we can do the more complex stuff ## get the list of interface names #set ikeys = $interfaces.keys() #set osversion = $getVar("os_version","") #import re #set $vlanpattern = $re.compile("[a-zA-Z0-9]+[\.:][0-9]+") ## Determine if we should use the MAC address to configure the interfaces first ## Only physical interfaces are required to have a MAC address ## Also determine the number of bonding devices we have, so we can set the ## max-bonds option in modprobe.conf accordingly. -- jcapel #set $configbymac = True #set $numbondingdevs = 0 #set $enableipv6 = False ## ============================================================================= #for $iname in $ikeys ## look at the interface hash data for the specific interface #set $idata = $interfaces[$iname] ## do not configure by mac address if we don't have one AND it's not for bonding/vlans ## as opposed to a "real" physical interface #if $idata.get("mac_address", "") == "" and not $vlanpattern.match($iname) and not $idata.get("interface_type", "").lower() in ("master","bond","bridge"): ## we have to globally turn off the config by mac feature as we can't ## use it now #set $configbymac = False #end if ## count the number of bonding devices we have. #if $idata.get("interface_type", "").lower() in ("master","bond","bonded_bridge_slave") #set $numbondingdevs += 1 #end if ## enable IPv6 networking if we set an ipv6 address or turn on autoconfiguration #if $idata.get("ipv6_address", "") != "" or $ipv6_autoconfiguration == True #set $enableipv6 = True #end if #end for ## end looping through the interfaces to see which ones we need to configure. ## ============================================================================= #set $i = 0 ## setup bonding if we have to #if $numbondingdevs > 0 # we have bonded interfaces, so set max_bonds if [ -f "/etc/modprobe.conf" ]; then echo "options bonding max_bonds=$numbondingdevs" >> /etc/modprobe.conf fi #end if ## ============================================================================= ## create a staging directory to build out our network scripts into ## make sure we preserve the loopback device # create a working directory for interface scripts mkdir /etc/sysconfig/network-scripts/cobbler cp /etc/sysconfig/network-scripts/ifcfg-lo /etc/sysconfig/network-scripts/cobbler/ ## ============================================================================= ## configure the gateway if set up (this is global, not a per-interface setting) #if $gateway != "" # set the gateway in the network configuration file grep -v GATEWAY /etc/sysconfig/network > /etc/sysconfig/network.cobbler echo "GATEWAY=$gateway" >> /etc/sysconfig/network.cobbler rm -f /etc/sysconfig/network mv /etc/sysconfig/network.cobbler /etc/sysconfig/network #end if ## ============================================================================= ## Configure the system's primary hostname. This is also passed to anaconda, but ## anaconda doesn't seem to honour it in DHCP-setups. #if $hostname != "" # set the hostname in the network configuration file grep -v HOSTNAME /etc/sysconfig/network > /etc/sysconfig/network.cobbler echo "HOSTNAME=$hostname" >> /etc/sysconfig/network.cobbler rm -f /etc/sysconfig/network mv /etc/sysconfig/network.cobbler /etc/sysconfig/network # Also set the hostname now, some applications require it # (e.g.: if we're connecting to Puppet before a reboot). 
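This post-install snippet updates single KEY=value settings in /etc/sysconfig/network by filtering out the old line with grep -v and appending the new value. The same filter-and-append idea expressed in Python for clarity (the path and key are just examples):

# Filter-and-append update of a sysconfig-style file, mirroring the
# grep -v / echo idiom used by the snippet; arguments are illustrative.
def set_sysconfig_value(path, key, value):
    kept = [line for line in open(path) if not line.startswith(key + "=")]
    kept.append("%s=%s\n" % (key, value))
    open(path, "w").writelines(kept)

# e.g. set_sysconfig_value("/etc/sysconfig/network", "HOSTNAME", "node1.example.com")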
/bin/hostname $hostname #end if #if $enableipv6 == True grep -v NETWORKING_IPV6 /etc/sysconfig/network > /etc/sysconfig/network.cobbler echo "NETWORKING_IPV6=yes" >> /etc/sysconfig/network.cobbler rm -f /etc/sysconfig/network mv /etc/sysconfig/network.cobbler /etc/sysconfig/network #if $ipv6_autoconfiguration != "" grep -v IPV6_AUTOCONF /etc/sysconfig/network > /etc/sysconfig/network.cobbler #if $ipv6_autoconfiguration == True echo "IPV6_AUTOCONF=yes" >> /etc/sysconfig/network.cobbler #else echo "IPV6_AUTOCONF=no" >> /etc/sysconfig/network.cobbler #end if rm -f /etc/sysconfig/network mv /etc/sysconfig/network.cobbler /etc/sysconfig/network #end if #if $ipv6_default_device != "" grep -v IPV6_DEFAULTDEV /etc/sysconfig/network > /etc/sysconfig/network.cobbler echo "IPV6_DEFAULTDEV=$ipv6_default_device" >> /etc/sysconfig/network.cobbler rm -f /etc/sysconfig/network mv /etc/sysconfig/network.cobbler /etc/sysconfig/network #end if #end if ## ============================================================================= ## now create the config file for each interface #for $iname in $ikeys # Start configuration for $iname ## create lots of variables to use later #set $idata = $interfaces[$iname] #set $mac = $idata.get("mac_address", "").upper() #set $mtu = $idata.get("mtu", "") #set $static = $idata.get("static", "") #set $ip = $idata.get("ip_address", "") #set $netmask = $idata.get("netmask", "") #set $if_gateway = $idata.get("if_gateway", "") #set $static_routes = $idata.get("static_routes", "") #set $iface_type = $idata.get("interface_type", "").lower() #set $iface_master = $idata.get("interface_master", "") #set $bonding_opts = $idata.get("bonding_opts", "") #set $bridge_opts = $idata.get("bridge_opts", "").split(" ") #set $ipv6_address = $idata.get("ipv6_address", "") #set $ipv6_secondaries = $idata.get("ipv6_secondaries", "") #set $ipv6_mtu = $idata.get("ipv6_mtu", "") #set $ipv6_default_gateway = $idata.get("ipv6_default_gateway", "") #set $ipv6_static_routes = $idata.get("ipv6_static_routes", "") #set $devfile = "/etc/sysconfig/network-scripts/cobbler/ifcfg-" + $iname #set $routesfile = "/etc/sysconfig/network-scripts/cobbler/route-" + $iname #set $ipv6_routesfile = "/etc/sysconfig/network-scripts/cobbler/route6-" + $iname ## determine if this interface is for a VLAN #if $vlanpattern.match($iname) #set $is_vlan = "true" #else #set $is_vlan = "false" #end if ## slave interfaces are assumed to be static #if $iface_type in ("slave","bond_slave","bridge_slave","bonded_bridge_slave") #set $static = 1 #end if ## =================================================================== ## Things every interface get, no matter what ## =================================================================== echo "DEVICE=$iname" > $devfile echo "ONBOOT=yes" >> $devfile #if $mac != "" and $iface_type not in ("master","bond","bridge","bonded_bridge_slave") ## virtual interfaces don't get MACs echo "HWADDR=$mac" >> $devfile IFNAME=\$(ip -o link | grep -i '$mac' | sed -e 's/^[0-9]*: //' -e 's/:.*//') ## Rename this interface in modprobe.conf ## FIXME: if both interfaces startwith eth this is wrong if [ -f "/etc/modprobe.conf" ] && [ \$IFNAME ]; then grep \$IFNAME /etc/modprobe.conf | sed "s/\$IFNAME/$iname/" >> /etc/modprobe.conf.cobbler grep -v \$IFNAME /etc/modprobe.conf >> /etc/modprobe.conf.new rm -f /etc/modprobe.conf mv /etc/modprobe.conf.new /etc/modprobe.conf fi #end if ## =================================================================== ## Actions based on interface_type ## 
=================================================================== #if $iface_type in ("master","bond","bonded_bridge_slave") ## if this is a bonded interface, configure it in modprobe.conf #if $osversion == "rhel4" if [ -f "/etc/modprobe.conf" ]; then echo "install $iname /sbin/modprobe bonding -o $iname $bonding_opts" >> /etc/modprobe.conf.cobbler fi #else ## Add required entry to modprobe.conf if [ -f "/etc/modprobe.conf" ]; then echo "alias $iname bonding" >> /etc/modprobe.conf.cobbler fi #end if #if $bonding_opts != "" cat >> $devfile << EOF BONDING_OPTS="$bonding_opts" EOF #end if #elif $iface_type in ("slave","bond_slave") and $iface_master != "" echo "SLAVE=yes" >> $devfile echo "MASTER=$iface_master" >> $devfile echo "HOTPLUG=no" >> $devfile #end if #if $iface_type == "bridge" echo "TYPE=Bridge" >> $devfile #for $bridge_opt in $bridge_opts #if $bridge_opt.strip() != "" echo "$bridge_opt" >> $devfile #end if #end for #elif ($iface_type == "bridge_slave" or $iface_type == "bonded_bridge_slave") and $iface_master != "" echo "BRIDGE=$iface_master" >> $devfile echo "HOTPLUG=no" >> $devfile #end if #if $iface_type != "bridge" echo "TYPE=Ethernet" >> $devfile #end if ## =================================================================== ## Actions based on static/dynamic configuration ## =================================================================== #if $static #if $mac == "" and $iface_type == "" # WARNING! Configuring interfaces by their names only # is error-prone, and can cause issues if and when # the kernel gives an interface a different name # following a reboot/hardware changes. #end if echo "BOOTPROTO=none" >> $devfile #if $ip != "" and $iface_type not in ("slave","bond_slave","bridge_slave","bonded_bridge_slave") ## Only configure static networking if an IP-address is configured ## and if the interface isn't slaved to another interface (bridging or bonding) echo "IPADDR=$ip" >> $devfile #if $if_gateway != "" echo "GATEWAY=$if_gateway" >> $devfile #end if #if $netmask == "" ## Default to 255.255.255.0? 
#set $netmask = "255.255.255.0" #end if echo "NETMASK=$netmask" >> $devfile #end if #if $enableipv6 == True and $ipv6_autoconfiguration == False #if $ipv6_address != "" echo "IPV6INIT=yes" >> $devfile echo "IPV6ADDR=$ipv6_address" >> $devfile #end if #if $ipv6_secondaries != "" #set ipv6_secondaries = ' '.join(ipv6_secondaries) ## The quotes around the ipv6 ip's need to be here echo "IPV6ADDR_SECONDARIES=\"$ipv6_secondaries\"" >> $devfile #end if #if $ipv6_mtu != "" echo "IPV6MTU=$ipv6_mtu" >> $devfile #end if #if $ipv6_default_gateway != "" echo "IPV6_DEFAULTGW=$ipv6_default_gateway" >> $devfile #end if #end if #else ## this is a DHCP interface, much less work to do echo "BOOTPROTO=dhcp" >> $devfile #if $len($name_servers) > 0 echo "PEERDNS=no" >> $devfile #end if #end if ## =================================================================== ## VLAN configuration ## =================================================================== #if $is_vlan == "true" echo "VLAN=yes" >> $devfile echo "ONPARENT=yes" >> $devfile #end if ## =================================================================== ## Optional configuration stuff ## =================================================================== #if $mtu != "" echo "MTU=$mtu" >> $devfile #end if ## =================================================================== ## Non-slave DNS configuration, when applicable ## =================================================================== ## If the interface is anything but a slave then add DNSn entry #if $iface_type.lower() not in ("slave","bond_slave","bridge_slave","bonded_bridge_slave") #set $nct = 0 #for $nameserver in $name_servers #set $nct = $nct + 1 echo "DNS$nct=$nameserver" >> $devfile #end for #end if ## =================================================================== ## Interface route configuration ## =================================================================== #for $route in $static_routes #set routepattern = $re.compile("[0-9/.]+:[0-9.]+") #if $routepattern.match($route) #set $routebits = $route.split(":") #set [$network, $router] = $route.split(":") echo "$network via $router" >> $routesfile #else # Warning: invalid route "$route" #end if #end for #if $enableipv6 == True #for $route in $ipv6_static_routes #set routepattern = $re.compile("[0-9a-fA-F:/]+,[0-9a-fA-F:]+") #if $routepattern.match($route) #set $routebits = $route.split(",") #set [$network, $router] = $route.split(",") echo "$network via $router dev $iname" >> $ipv6_routesfile #else # Warning: invalid ipv6 route "$route" #end if #end for #end if ## =================================================================== ## Done with this interface ## =================================================================== #set $i = $i + 1 # End configuration for $iname #end for ## ============================================================================= ## Configure name server search path in /etc/resolv.conf #set $num_ns = $len($name_servers) #set $num_ns_search = $len($name_servers_search) #if $num_ns_search > 0 sed -i -e "/^search /d" /etc/resolv.conf echo -n "search " >>/etc/resolv.conf #for $nameserversearch in $name_servers_search echo -n "$nameserversearch " >>/etc/resolv.conf #end for echo "" >>/etc/resolv.conf #end if ## ============================================================================= ## Configure name servers in /etc/resolv.conf #if $num_ns > 0 sed -i -e "/^nameserver /d" /etc/resolv.conf #for $nameserver in $name_servers echo "nameserver $nameserver" >>/etc/resolv.conf #end for #end if ## 
Disable all eth interfaces by default before overwriting ## the old files with the new ones in the working directory ## This stops unneccesary (and time consuming) DHCP queries ## during the network initialization sed -i 's/ONBOOT=yes/ONBOOT=no/g' /etc/sysconfig/network-scripts/ifcfg-eth* ## Move all staged files to their final location #for $iname in $ikeys rm -f /etc/sysconfig/network-scripts/ifcfg-$iname #end for mv /etc/sysconfig/network-scripts/cobbler/* /etc/sysconfig/network-scripts/ rm -r /etc/sysconfig/network-scripts/cobbler if [ -f "/etc/modprobe.conf" ]; then cat /etc/modprobe.conf.cobbler >> /etc/modprobe.conf rm -f /etc/modprobe.conf.cobbler fi #end if # End post_install_network_config generated code cobbler-2.4.1/snippets/post_install_network_config_deb000066400000000000000000000263631227367477500233770ustar00rootroot00000000000000# Start post_install_network_config generated code #if $getVar("system_name","") != "" ## this is being provisioned by system records, not profile records ## so we can do the more complex stuff ## get the list of interface names #set ikeys = $interfaces.keys() #set osversion = $getVar("os_version","") #import re #set $vlanpattern = $re.compile("[a-zA-Z0-9]+[\.:][0-9]+") ## Determine if we should use the MAC address to configure the interfaces first ## Only physical interfaces are required to have a MAC address ## Also determine the number of bonding devices we have, so we can set the ## max-bonds option in modprobe.conf accordingly. -- jcapel #set $configbymac = True #set $bridge_slaves = {} #set $numbondingdevs = 0 #set $enableipv6 = False ## ============================================================================= #for $iname in $ikeys ## look at the interface hash data for the specific interface #set $idata = $interfaces[$iname] ## do not configure by mac address if we don't have one AND it's not for bonding/vlans ## as opposed to a "real" physical interface #if $idata.get("mac_address", "") == "" and not $vlanpattern.match($iname) and not $idata.get("interface_type", "").lower() in ("master","bond","bridge"): ## we have to globally turn off the config by mac feature as we can't ## use it now #set $configbymac = False #end if ## count the number of bonding devices we have. #if $idata.get("interface_type", "").lower() in ("master","bond","bonded_bridge_slave") #set $numbondingdevs += 1 #end if ## build a mapping of bridge slaves, since deb/ubuntu bridge slaves do not ## get interface entries of their own #if $idata.get("interface_type","").lower() == "bridge_slave" #set $this_master = $idata.get("interface_master", None) #if $this_master and not $bridge_slaves.has_key($this_master) #set $bridge_slaves[$this_master] = [] #end if <% bridge_slaves[this_master].append(iname) %> #end if ## enable IPv6 networking if we set an ipv6 address or turn on autoconfiguration #if $idata.get("ipv6_address", "") != "" or $ipv6_autoconfiguration == True #set $enableipv6 = True #end if #end for ## end looping through the interfaces to see which ones we need to configure. 
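Static routes attached to an interface are stored as "network/prefix:gateway" strings, and both the Red Hat and the Debian/Ubuntu post-install network snippets validate each entry against the same pattern before splitting on the colon. A quick stand-alone check of that format (the route value is made up):

# Same validation the post_install_network_config* snippets perform on each
# entry in static_routes; the example route is a placeholder.
import re

routepattern = re.compile("[0-9/.]+:[0-9.]+")
route = "10.20.0.0/16:192.168.1.254"
if routepattern.match(route):
    network, router = route.split(":")
    print "ip route add %s via %s" % (network, router)
else:
    print "# Warning: invalid route %s" % route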
## ============================================================================= ## Rewrite the interfaces file and make sure we preserve the loopback device rm -f /etc/network/interfaces touch /etc/network/interfaces echo "auto lo" >> /etc/network/interfaces echo "iface lo inet loopback" >> /etc/network/interfaces echo "" >> /etc/network/interfaces ## ============================================================================= ## now create the config file for each interface #for $iname in $ikeys ## create lots of variables to use later #set $idata = $interfaces[$iname] #set $mac = $idata.get("mac_address", "").upper() #set $mtu = $idata.get("mtu", "") #set $static = $idata.get("static", "") #set $ip = $idata.get("ip_address", "") #set $netmask = $idata.get("netmask", "") #set $if_gateway = $idata.get("if_gateway", "") #set $static_routes = $idata.get("static_routes", "") #set $iface_type = $idata.get("interface_type", "").lower() #set $iface_master = $idata.get("interface_master", "") #set $bonding_opts = $idata.get("bonding_opts", "") #set $bridge_opts = $idata.get("bridge_opts", "").split(" ") #set $ipv6_address = $idata.get("ipv6_address", "") #set $ipv6_secondaries = $idata.get("ipv6_secondaries", "") #set $ipv6_mtu = $idata.get("ipv6_mtu", "") #set $ipv6_default_gateway = $idata.get("ipv6_default_gateway", "") #set $ipv6_static_routes = $idata.get("ipv6_static_routes", "") #set $devfile = "/etc/sysconfig/network-scripts/cobbler/ifcfg-" + $iname #set $routesfile = "/etc/sysconfig/network-scripts/cobbler/route-" + $iname #set $ipv6_routesfile = "/etc/sysconfig/network-scripts/cobbler/route6-" + $iname ## determine if this interface is for a VLAN #if $vlanpattern.match($iname) #set $is_vlan = "true" #else #set $is_vlan = "false" #end if ## slave interfaces are assumed to be static #if $iface_type in ("slave","bond_slave","bridge_slave","bonded_bridge_slave") #set $static = 1 #end if ## =================================================================== ## Things every interface get, no matter what ## =================================================================== echo "auto $iname" >> /etc/network/interfaces ## =================================================================== ## Actions based on interface_type ## =================================================================== #if $iface_type in ("master","bond","bonded_bridge_slave") #pass #elif $iface_type in ("slave","bond_slave") and $iface_master != "" #pass #elif $iface_type == "bridge" #set $slave_ports = " ".join($bridge_slaves.get($iname,[])) #if $slave_ports != "" echo " bridge_ports $slave_ports" >> /etc/network/interfaces #end if #for $bridge_opt in $bridge_opts #if $bridge_opt.strip() != "" echo " $bridge_opt" >> /etc/network/interfaces #end if #end for #end if ## =================================================================== ## Actions based on static configuration ## =================================================================== #if $static #if $ip != "" and $iface_type not in ("slave","bond_slave","bridge_slave","bonded_bridge_slave") echo "iface $iname inet static" >> /etc/network/interfaces echo " hwaddress $mac" >> /etc/network/interfaces echo " address $ip" >> /etc/network/interfaces #if $netmask != "" echo " netmask $netmask" >> /etc/network/interfaces #end if #if $iface_type in ("master","bond") #set $bondslaves = "" #for $bondiname in $ikeys #set $bondidata = $interfaces[$bondiname] #set $bondiface_type = $bondidata.get("interface_type", "").lower() #set $bondiface_master = 
$bondidata.get("interface_master", "") #if $bondiface_master == $iname #set $bondslaves += $bondiname + " " #end if #end for echo " bond-slaves $bondslaves" >> /etc/network/interfaces #for $bondopts in $bonding_opts.split(" ") #set [$bondkey, $bondvalue] = $bondopts.split("=") echo " bond-$bondkey $bondvalue" >> /etc/network/interfaces #end for #end if #else echo "iface $iname inet manual" >> /etc/network/interfaces #end if #if $iface_type in ("slave","bond_slave") and $iface_master != "" echo "bond-master $iface_master" >> /etc/network/interfaces #end if #if $enableipv6 == True and $ipv6_autoconfiguration == False #if $ipv6_address != "" #pass #end if #if $ipv6_secondaries != "" #set ipv6_secondaries = ' '.join(ipv6_secondaries) #end if #if $ipv6_mtu != "" #pass #end if #if $ipv6_default_gateway != "" #pass #end if #end if #else echo "iface $iname inet dhcp" >> /etc/network/interfaces #end if ## =================================================================== ## VLAN configuration ## =================================================================== #if $is_vlan == "true" #pass #end if ## =================================================================== ## Optional configuration stuff ## =================================================================== #if $if_gateway != "" echo " gateway $if_gateway" >> /etc/network/interfaces #end if #if $mtu != "" echo " mtu $mtu" >> /etc/network/interfaces #end if ## =================================================================== ## Interface route configuration ## =================================================================== #for $route in $static_routes #set routepattern = $re.compile("[0-9/.]+:[0-9.]+") #if $routepattern.match($route) #set [$network, $router] = $route.split(":") echo " up ip route add $network via $router dev $iname || true" >> /etc/network/interfaces #else echo " # Warning: invalid route: $route" >> /etc/network/interfaces #end if #end for #if $enableipv6 == True #for $route in $ipv6_static_routes #set routepattern = $re.compile("[0-9a-fA-F:/]+,[0-9a-fA-F:]+") #if $routepattern.match($route) #set [$network, $router] = $route.split(",") echo " up ip -6 route add $network via $router dev $iname || true" >> /etc/network/interfaces #else echo " # Warning: invalid route: $route" >> /etc/network/interfaces #end if #end for #end if ## =================================================================== ## Done with this interface ## =================================================================== #end for ## ============================================================================= ## Configure the system's primary hostname. This is also passed to anaconda, but ## anaconda doesn't seem to honour it in DHCP-setups. 
#if $hostname != "" echo "$hostname" > /etc/hostname /bin/hostname $hostname #end if ## ============================================================================= ## Configure name server search path in /etc/resolv.conf #set $num_ns = $len($name_servers) #set $num_ns_search = $len($name_servers_search) #if $num_ns_search > 0 sed -i -e "/^search /d" /etc/resolv.conf echo -n "search " >>/etc/resolv.conf #for $nameserversearch in $name_servers_search echo -n "$nameserversearch " >>/etc/resolv.conf #end for echo "" >>/etc/resolv.conf #end if ## ============================================================================= ## Configure name servers in /etc/resolv.conf #if $num_ns > 0 sed -i -e "/^nameserver /d" /etc/resolv.conf #for $nameserver in $name_servers echo "nameserver $nameserver" >>/etc/resolv.conf #end for #end if #end if # End post_install_network_config generated code cobbler-2.4.1/snippets/post_koan_add_reinstall_entry000066400000000000000000000003141227367477500230430ustar00rootroot00000000000000%post #if $getVar("system_name","") != "" koan --server=$server --replace-self --add-reinstall-entry #else koan --server=$server --replace-self --profile=$profile_name --add-reinstall-entry #end if cobbler-2.4.1/snippets/post_run_deb000066400000000000000000000001051227367477500174210ustar00rootroot00000000000000# A general purpose snippet to add late-command actions for preseeds cobbler-2.4.1/snippets/post_s390_reboot000066400000000000000000000037041227367477500200630ustar00rootroot00000000000000## RHEL zVM installs do not properly reboot into the installed system. This ## issue has been resolved in RHEL-5 Update3. To get a consistent reboot ## behavior for s390* installs on all distros, this snippet can be used. The ## snippet will attempt to discover the IPL volume zipl is being installed ## to and will attempt a reipl. Be sure to set this snippet as the *last* ## snippet your kickstart template. #if $arch.startswith("s390"): %post --nochroot # Does the kickstart file request a reboot? grep -q "^reboot" /tmp/ks.cfg /ks.cfg 2>/dev/null if [ \$? -ne 0 ]; then exit 0 fi # find out the location of /boot and use it to re-ipl boot_dev="" for mountpt in /mnt/sysimage/boot /mnt/sysimage; do set -- \$(grep " \$mountpt " /proc/mounts) if [ -b "\$1" ]; then boot_dev=\$1 break fi done # lookup dasd disk if [[ \$boot_dev == *dasd* ]]; then # remove the '/dev/' (aka basename) boot_dev=\${boot_dev\#\#/[^/]*/} # strip partition number from dasd device boot_dev=\${boot_dev%%[0-9]} type="ccw" id=`basename \$(readlink /sys/block/\$boot_dev/device)` # HACK - In RHEL4 and RHEL3 ... we do it the hard way grep -q "^[34]\$" /.buildstamp 2>/dev/null if [ \$? -eq 0 ]; then cat < /mnt/sysimage/tmp/zeboot.sh \#!/bin/bash /sbin/modprobe -r vmcp rm -f "/dev/vmcp" sleep 2 [ -b "/dev/vmcp" ] || /bin/mknod /dev/vmcp c 10 61 /sbin/modprobe -a vmcp sync # Force a boot (e.g. IPL 0100) /sbin/vmcp ipl \${id\#\#*.} EOF /bin/chmod +x /mnt/sysimage/tmp/zeboot.sh /bin/chroot /mnt/sysimage /tmp/zeboot.sh # In RHEL5 ... 
lets cleanly shutdown (Update 3 and newer) else echo \$type > /sys/firmware/reipl/reipl_type echo \$id > /sys/firmware/reipl/\$type/device # Force a reboot pid=\$(cat /var/run/init.pid) [ -z "\$pid" ] && pid=\$(pidof init) kill -12 \$pid pid=\$(cat /var/run/loader.run) [ -z "\$pid" ] && pid=\$(pidof loader) kill \$pid fi fi #end if cobbler-2.4.1/snippets/pre_anamon000066400000000000000000000003021227367477500170540ustar00rootroot00000000000000#if $str($getVar('anamon_enabled','')) == "1" wget -O /tmp/anamon "http://$server:$http_port/cobbler/aux/anamon" python /tmp/anamon --name "$name" --server "$server" --port "$http_port" #end if cobbler-2.4.1/snippets/pre_install_network_config000066400000000000000000000136111227367477500223560ustar00rootroot00000000000000#if $getVar("system_name","") != "" # Start pre_install_network_config generated code #raw # generic functions to be used later for discovering NICs mac_exists() { [ -z "$1" ] && return 1 if which ip 2>/dev/null >/dev/null; then ip -o link | grep -i "$1" 2>/dev/null >/dev/null return $? elif which esxcfg-nics 2>/dev/null >/dev/null; then esxcfg-nics -l | grep -i "$1" 2>/dev/null >/dev/null return $? else ifconfig -a | grep -i "$1" 2>/dev/null >/dev/null return $? fi } get_ifname() { if which ip 2>/dev/null >/dev/null; then IFNAME=$(ip -o link | grep -i "$1" | sed -e 's/^[0-9]*: //' -e 's/:.*//') elif which esxcfg-nics 2>/dev/null >/dev/null; then IFNAME=$(esxcfg-nics -l | grep -i "$1" | cut -d " " -f 1) else IFNAME=$(ifconfig -a | grep -i "$1" | cut -d " " -f 1) if [ -z $IFNAME ]; then IFNAME=$(ifconfig -a | grep -i -B 2 "$1" | sed -n '/flags/s/:.*$//p') fi fi } #end raw #set ikeys = $interfaces.keys() #import re #set $vlanpattern = $re.compile("[a-zA-Z0-9]+[\.:][0-9]+") #set $routepattern = $re.compile("[0-9/.]+:[0-9.]+") ## ## Determine if we should use the MAC address to configure the interfaces first ## Only physical interfaces are required to have a MAC address #set $configbymac = True #for $iname in $ikeys #set $idata = $interfaces[$iname] #if $idata["mac_address"] == "" and not $vlanpattern.match($iname) and not $idata["interface_type"].lower() in ("master","bond","bridge","bonded_bridge_slave") #set $configbymac = False #end if #end for #set $i = 0 #if $configbymac ## Output diagnostic message # Start of code to match cobbler system interfaces to physical interfaces by their mac addresses #end if #for $iname in $ikeys # Start $iname #set $idata = $interfaces[$iname] #set $mac = $idata["mac_address"] #set $static = $idata["static"] #set $ip = $idata["ip_address"] #set $netmask = $idata["netmask"] #set $iface_type = $idata["interface_type"] #set $iface_master = $idata["interface_master"] #set $static_routes = $idata["static_routes"] #set $devfile = "/etc/sysconfig/network-scripts/ifcfg-" + $iname #if $vlanpattern.match($iname) ## If this is a VLAN interface, skip it, anaconda doesn't know ## about VLANs. #set $is_vlan = "true" #else #set $is_vlan = "false" #end if #if ($configbymac and $is_vlan == "false" and $iface_type.lower() not in ("slave","bond_slave","bridge_slave","bonded_bridge_slave")) or $iface_type.lower() in ("master","bond","bridge") ## This is a physical interface, hand it to anaconda. Do not ## process slave interface here. 
#if $iface_type.lower() in ("master","bond","bridge","bonded_bridge_slave") ## Find a slave for this interface #for $tiname in $ikeys #set $tidata = $interfaces[$tiname] #if $tidata["interface_type"].lower() in ("slave","bond_slave","bridge_slave") and $tidata["interface_master"].lower() == $iname #set $mac = $tidata["mac_address"] # Found a slave for this interface: $tiname ($mac) #break #else if $tidata["interface_type"].lower() == "bonded_bridge_slave" and $tidata["interface_master"].lower() == $iname ## find a slave for this slave interface... #for $stiname in $ikeys #set $stidata = $interfaces[$stiname] #if $stidata["interface_type"].lower() in ("slave","bond_slave","bridge_slave") and $stidata["interface_master"].lower() == $tiname #set $mac = $stidata["mac_address"] # Found a slave for this interface: $tiname -> $stiname ($mac) #break #end if #end for #end if #end for #end if #if $static and $ip != "" #if $netmask == "" ## Netmask not provided, default to /24. #set $netmask = "255.255.255.0" #end if #set $netinfo = "--bootproto=static --ip=%s --netmask=%s" % ($ip, $netmask) #if $gateway != "" #set $netinfo = "%s --gateway=%s" % ($netinfo, $gateway) #end if #if $len($name_servers) > 0 #set $netinfo = "%s --nameserver=%s" % ($netinfo, $name_servers[0]) #end if #else if not $static #set $netinfo = "--bootproto=dhcp" #else ## Skip this interface, it's set as static, but without ## networking info. # Skipping (no configuration)... #continue #end if #if $hostname != "" #set $netinfo = "%s --hostname=%s" % ($netinfo, $hostname) #end if # Configuring $iname ($mac) if mac_exists $mac then get_ifname $mac echo "network --device=\$IFNAME $netinfo" >> /tmp/pre_install_network_config #for $route in $static_routes #if $routepattern.match($route) #set $routebits = $route.split(":") #set [$network, $router] = $route.split(":") ip route add $network via $router dev \$IFNAME #else # Warning: invalid route "$route" #end if #end for fi #else #if $iface_type.lower() in ("slave","bond_slave","bridge_slave","bonded_bridge_slave") # Skipping (slave-interface) #else # Skipping (not a physical interface)... 
#end if #end if #end for # End pre_install_network_config generated code #end if cobbler-2.4.1/snippets/pre_partition_select000066400000000000000000000014051227367477500211600ustar00rootroot00000000000000# partition details calculation # Determine how many drives we have set \$(list-harddrives) let numd=\$#/2 d1=\$1 d2=\$3 # Determine architecture-specific partitioning needs EFI_PART="" PPC_PREP_PART="" BOOT_PART="" case \$(uname -m) in ia64) EFI_PART="part /boot/efi --fstype vfat --size 200 --recommended" ;; ppc*) PPC_PREP_PART="part None --fstype 'PPC PReP Boot' --size 8" BOOT_PART="part /boot --fstype ext3 --size 200 --recommended" ;; *) BOOT_PART="part /boot --fstype ext3 --size 200 --recommended" ;; esac cat << EOF > /tmp/partinfo \$EFI_PART \$PPC_PREP_PART \$BOOT_PART part / --fstype ext3 --size=1024 --grow --ondisk=\$d1 --asprimary part swap --recommended --ondisk=\$d1 --asprimary EOF cobbler-2.4.1/snippets/preseed_apt_repo_config000066400000000000000000000011641227367477500216110ustar00rootroot00000000000000# Additional repositories, local[0-9] available #set $cur=0 #set $repo_data = $getVar("repo_data",[]) #for $repo in $repo_data #for $dist in $repo.apt_dists #set $comps = " ".join($repo.apt_components) d-i apt-setup/local${cur}/repository string \ #if $repo.mirror_locally http://$http_server/cblr/repo_mirror/${repo.name} $dist $comps #else ${repo.mirror} $dist $comps #end if #if $repo.comment != "" d-i apt-setup/local${cur}/comment string ${repo.comment} #end if #if $repo.breed == "src" # Enable deb-src lines d-i apt-setup/local${cur}/source boolean false #end if #set $cur=$cur+1 #end for #end for cobbler-2.4.1/snippets/puppet_install_if_enabled000066400000000000000000000001011227367477500221250ustar00rootroot00000000000000#if $str($getVar('puppet_auto_setup','')) == "1" puppet #end if cobbler-2.4.1/snippets/puppet_register_if_enabled000066400000000000000000000012041227367477500223100ustar00rootroot00000000000000# start puppet registration #if $str($getVar('puppet_auto_setup','')) == "1" # generate puppet certificates and trigger a signing request, but # don't wait for signing to complete #if $int($getVar('puppet_version',2)) >= 3 /usr/bin/puppet agent --test --waitforcert 0 #echo (($str($getVar('puppet_server','')) != '') and "--server '"+$str($getVar('puppet_server',''))+"'" or '') #else /usr/sbin/puppetd --test --waitforcert 0 #echo (($str($getVar('puppet_server','')) != '') and "--server '"+$str($getVar('puppet_server',''))+"'" or '') #end if # turn puppet service on for reboot /sbin/chkconfig puppet on #end if # end puppet registration cobbler-2.4.1/snippets/redhat_register000066400000000000000000000015111227367477500201130ustar00rootroot00000000000000# begin Red Hat management server registration #if $redhat_management_type != "off" and $redhat_management_key != "" mkdir -p /usr/share/rhn/ #if $redhat_management_type == "site" #set $mycert_file = "RHN-ORG-TRUSTED-SSL-CERT" #set $mycert = "/usr/share/rhn/" + $mycert_file wget http://$redhat_management_server/pub/RHN-ORG-TRUSTED-SSL-CERT -O $mycert perl -npe 's/RHNS-CA-CERT/$mycert_file/g' -i /etc/sysconfig/rhn/* #end if #if $redhat_management_type == "hosted" #set $mycert = "/usr/share/rhn/RHNS-CA-CERT" #end if #set $endpoint = "https://%s/XMLRPC" % $redhat_management_server rhnreg_ks --serverUrl=$endpoint --sslCACert=$mycert --activationkey=$redhat_management_key #else # not configured to register to any Red Hat management server (ok) #end if # end Red Hat management server registration 
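Note: for illustration only, when redhat_management_type is "site" the redhat_register snippet above renders to roughly the following %post shell commands. The Satellite hostname, activation key and certificate path shown here are hypothetical example values, not defaults shipped with Cobbler:

mkdir -p /usr/share/rhn/
wget http://satellite.example.com/pub/RHN-ORG-TRUSTED-SSL-CERT -O /usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT
perl -npe 's/RHNS-CA-CERT/RHN-ORG-TRUSTED-SSL-CERT/g' -i /etc/sysconfig/rhn/*
rhnreg_ks --serverUrl=https://satellite.example.com/XMLRPC --sslCACert=/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT --activationkey=1-EXAMPLEKEY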
cobbler-2.4.1/snippets/restore_boot_device000066400000000000000000000003771227367477500207760ustar00rootroot00000000000000if [ "$os_version" == "sles11" ]; then nvsetenv boot-device "$(cat /root/inst-sys/boot-device.bak)" elif [ "$os_version" == "fedora17" ]; then # must be run from a %post --nochroot section nvsetenv boot-device "$(cat /tmp/boot-device.bak)" fi cobbler-2.4.1/snippets/rhn_certificate_based_register000066400000000000000000000011541227367477500231360ustar00rootroot00000000000000# begin Red Hat Network certificate-based server registration #if $redhat_management_type == "cert" and $redhat_register_user != "" and $redhat_register_password != "" # Subscribe (register) the system subscription-manager register --autosubscribe --username=$redhat_register_user --password=$redhat_register_password # Add what used to be called channels yum -y install yum-utils yum-config-manager --enable rhel-6-server-optional-rpms yum-config-manager --enable rhel-6-server-supplementary #else # not configured to use Certificate-based RHN (ok) #end if # end Red Hat Network certificate-based server registration cobbler-2.4.1/snippets/save_boot_device000066400000000000000000000003161227367477500202420ustar00rootroot00000000000000if [ "$os_version" == "sles11" ]; then nvram --print-config=boot-device > /root/boot-device.bak elif [ "$os_version" == "fedora17" ]; then nvram --print-config=boot-device > /tmp/boot-device.bak fi cobbler-2.4.1/sources000066400000000000000000000000671227367477500145630ustar00rootroot00000000000000735c29d65141fe0d7864188e08d46859 cobbler-2.4.0.tar.gz cobbler-2.4.1/templates/000077500000000000000000000000001227367477500151505ustar00rootroot00000000000000cobbler-2.4.1/templates/etc/000077500000000000000000000000001227367477500157235ustar00rootroot00000000000000cobbler-2.4.1/templates/etc/dhcp.template000066400000000000000000000056021227367477500204010ustar00rootroot00000000000000# ****************************************************************** # Cobbler managed dhcpd.conf file # # generated from cobbler dhcp.conf template ($date) # Do NOT make changes to /etc/dhcpd.conf. Instead, make your changes # in /etc/cobbler/dhcp.template, as /etc/dhcpd.conf will be # overwritten. # # ****************************************************************** ddns-update-style interim; allow booting; allow bootp; ignore client-updates; set vendorclass = option vendor-class-identifier; option pxe-system-type code 93 = unsigned integer 16; subnet 192.168.1.0 netmask 255.255.255.0 { option routers 192.168.1.5; option domain-name-servers 192.168.1.1; option subnet-mask 255.255.255.0; range dynamic-bootp 192.168.1.100 192.168.1.254; default-lease-time 21600; max-lease-time 43200; next-server $next_server; class "pxeclients" { match if substring (option vendor-class-identifier, 0, 9) = "PXEClient"; if option pxe-system-type = 00:02 { filename "ia64/elilo.efi"; } else if option pxe-system-type = 00:06 { filename "grub/grub-x86.efi"; } else if option pxe-system-type = 00:07 { filename "grub/grub-x86_64.efi"; } else { filename "pxelinux.0"; } } } #for dhcp_tag in $dhcp_tags.keys(): ## group could be subnet if your dhcp tags line up with your subnets ## or really any valid dhcpd.conf construct ... 
if you only use the ## default dhcp tag in cobbler, the group block can be deleted for a ## flat configuration # group for Cobbler DHCP tag: $dhcp_tag group { #for mac in $dhcp_tags[$dhcp_tag].keys(): #set iface = $dhcp_tags[$dhcp_tag][$mac] host $iface.name { hardware ethernet $mac; #if $iface.ip_address: fixed-address $iface.ip_address; #end if #if $iface.hostname: option host-name "$iface.hostname"; #end if #if $iface.netmask: option subnet-mask $iface.netmask; #end if #if $iface.gateway: option routers $iface.gateway; #end if #if $iface.enable_gpxe: if exists user-class and option user-class = "gPXE" { filename "http://$cobbler_server/cblr/svc/op/gpxe/system/$iface.owner"; } else if exists user-class and option user-class = "iPXE" { filename "http://$cobbler_server/cblr/svc/op/gpxe/system/$iface.owner"; } else { filename "undionly.kpxe"; } #else filename "$iface.filename"; #end if ## Cobbler defaults to $next_server, but some users ## may like to use $iface.system.server for proxied setups next-server $next_server; ## next-server $iface.next_server; } #end for } #end for cobbler-2.4.1/templates/etc/dnsmasq.template000066400000000000000000000006011227367477500211230ustar00rootroot00000000000000# Cobbler generated configuration file for dnsmasq # $date # # resolve.conf .. ? #no-poll #enable-dbus read-ethers addn-hosts = /var/lib/cobbler/cobbler_hosts dhcp-range=192.168.1.5,192.168.1.200 dhcp-option=3,$next_server dhcp-lease-max=1000 dhcp-authoritative dhcp-boot=pxelinux.0 dhcp-boot=net:normalarch,pxelinux.0 dhcp-boot=net:ia64,$elilo $insert_cobbler_system_definitions cobbler-2.4.1/templates/etc/named.template000066400000000000000000000012501227367477500205420ustar00rootroot00000000000000options { listen-on port 53 { 127.0.0.1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { localhost; }; recursion yes; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; #for $zone in $forward_zones zone "${zone}." { type master; file "$zone"; }; #end for #for $zone, $arpa in $reverse_zones zone "${arpa}." { type master; file "$zone"; }; #end for cobbler-2.4.1/templates/etc/rsync.template000066400000000000000000000020611227367477500206150ustar00rootroot00000000000000# ****************************************************************** # Cobbler managed rsyncd.conf file # # Do NOT make changes to /etc/rsyncd.conf directly. # Instead, make your changes to /etc/cobbler/rsync.template. 
# # ****************************************************************** [cobbler-distros] path = $webdir/ks_mirror comment = All Cobbler Distros [cobbler-repos] path = $webdir/repo_mirror comment = All Cobbler Distros [cobbler-kickstarts] path = /var/lib/cobbler/kickstarts comment = Cobbler Kickstarts [cobbler-snippets] path = /var/lib/cobbler/snippets comment = Cobbler Snippets [cobbler-triggers] path = /var/lib/cobbler/triggers comment = Cobbler Triggers [cobbler-scripts] path = /var/lib/cobbler/scripts comment = Cobbler Scripts #for repo in $repos: [repo-$repo] path = $webdir/repo_mirror/$repo comment = Cobbler Repo $repo #end for #for distro in $distros: [distro-$distro.name] path = $distro.path comment = Cobbler Distro $distro.name #end for cobbler-2.4.1/templates/etc/secondary.template000066400000000000000000000013621227367477500214510ustar00rootroot00000000000000options { listen-on port 53 { 127.0.0.1; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { localhost; }; recursion yes; }; logging { channel default_debug { file "data/named.run"; severity dynamic; }; }; #for $zone in $forward_zones zone "${zone}." { type slave; masters { 1.1.1.1; }; file "$zone"; }; #end for #for $zone, $arpa in $reverse_zones zone "${arpa}." { type slave; masters { 1.1.1.1; }; file "$zone"; }; #end for cobbler-2.4.1/templates/etc/tftpd.template000066400000000000000000000013441227367477500206030ustar00rootroot00000000000000# default: off # description: The tftp server serves files using the trivial file transfer \ # protocol. The tftp protocol is often used to boot diskless \ # workstations, download configuration files to network-aware printers, \ # and to start the installation process for some operating systems. service tftp { disable = no socket_type = dgram protocol = udp wait = yes user = $user server = $binary server_args = -B 1380 -v -s $args per_source = 11 cps = 100 2 flags = IPv4 } cobbler-2.4.1/templates/etc/zone.template000066400000000000000000000010121227367477500204250ustar00rootroot00000000000000\$TTL 300 @ IN SOA $cobbler_server. nobody.example.com. ( $serial ; Serial 600 ; Refresh 1800 ; Retry 604800 ; Expire 300 ; TTL ) IN NS $cobbler_server. 
$cname_record $host_record cobbler-2.4.1/templates/iso/000077500000000000000000000000001227367477500157425ustar00rootroot00000000000000cobbler-2.4.1/templates/iso/buildiso.template000066400000000000000000000003011227367477500213030ustar00rootroot00000000000000DEFAULT MENU PROMPT 0 MENU TITLE Cobbler | http://www.cobblerd.org/ TIMEOUT 200 TOTALTIMEOUT 6000 ONTIMEOUT local LABEL local MENU LABEL (local) MENU DEFAULT KERNEL chain.c32 APPEND hd0 0 cobbler-2.4.1/templates/ldap/000077500000000000000000000000001227367477500160705ustar00rootroot00000000000000cobbler-2.4.1/templates/ldap/ldap_authconfig.template000066400000000000000000000002521227367477500227530ustar00rootroot00000000000000authconfig --enableshadow --enableldap --enableldapauth --enablelocauthorize --enablecache --enablemkhomedir --updateall --ldapserver=$ldapserver --ldapbasedn=$ldapbasedncobbler-2.4.1/templates/power/000077500000000000000000000000001227367477500163045ustar00rootroot00000000000000cobbler-2.4.1/templates/power/power_apc_snmp.template000066400000000000000000000001031227367477500230470ustar00rootroot00000000000000fence_apc_snmp -a "$power_address" -n "$power_id" -o "$power_mode" cobbler-2.4.1/templates/power/power_bladecenter.template000066400000000000000000000001531227367477500235240ustar00rootroot00000000000000fence_bladecenter -x -a "$power_address" -l "$power_user" -p "$power_pass" -n "$power_id" -o "$power_mode" cobbler-2.4.1/templates/power/power_bullpap.template000066400000000000000000000001441227367477500227130ustar00rootroot00000000000000fence_bullpap -a "$power_address" -l "$power_user" -p "$power_pass" -d "$power_id" -o "$power_mode" cobbler-2.4.1/templates/power/power_drac.template000066400000000000000000000002551227367477500221700ustar00rootroot00000000000000#if $getVar("power_id","") != "" #set $power_id = "-m %s" % $power_id #end if fence_drac -a "$power_address" -l "$power_user" -p "$power_pass" $power_id -o "$power_mode" cobbler-2.4.1/templates/power/power_ether_wake.template000066400000000000000000000004571227367477500234010ustar00rootroot00000000000000if [ -x /sbin/ether-wake ]; then /sbin/ether-wake -i eth0 "$power_address" elif [ -x /usr/bin/powerwake ]; then /usr/bin/powerwake "$power_address" elif [ -x /usr/bin/wakeonlan ]; then /usr/bin/wakeonlan "$power_address" elif [ -x /usr/sbin/etherwake ]; then /usr/sbin/etherwake "$power_address" fi cobbler-2.4.1/templates/power/power_ilo.template000066400000000000000000000001221227367477500220330ustar00rootroot00000000000000fence_ilo -a "$power_address" -l "$power_user" -p "$power_pass" -o "$power_mode" cobbler-2.4.1/templates/power/power_integrity.template000066400000000000000000000001461227367477500232740ustar00rootroot00000000000000/usr/local/bin/fence_integrity -a "$power_address" -l "$power_user" -p "$power_pass" -o "$power_mode" cobbler-2.4.1/templates/power/power_ipmilan.template000066400000000000000000000002401227367477500227020ustar00rootroot00000000000000#use power_id to pass in additional options like -P, Use Lanplus fence_ipmilan -i "$power_address" $power_id -l "$power_user" -p "$power_pass" -o "$power_mode" cobbler-2.4.1/templates/power/power_ipmitool.template000066400000000000000000000001341227367477500231070ustar00rootroot00000000000000/usr/bin/ipmitool -H "$power_address" -U "$power_user" -P "$power_pass" power "$power_mode" cobbler-2.4.1/templates/power/power_lpar.template000066400000000000000000000002551227367477500222150ustar00rootroot00000000000000#set ($power_sys, $power_lpar) = $power_id.split(':') fence_lpar -a "$power_address" 
-l "$power_user" -p "$power_pass" -x -s "$power_sys" -n "$power_lpar" -o "$power_mode" cobbler-2.4.1/templates/power/power_rsa.template000066400000000000000000000001211227367477500220340ustar00rootroot00000000000000fence_rsa -a "$power_address" -l "$power_user" -p "$power_pass" -o "$power_mode" cobbler-2.4.1/templates/power/power_virsh.template000066400000000000000000000021761227367477500224160ustar00rootroot00000000000000## Set proper virsh operation #if $power_mode == "on" #set operation = "start" #else #set operation = "destroy" #end if ## Build connection URI ## driver[+transport]://[username@][hostname][:port]/[path][?extraparameters] ## Determine requested driver to use (defaults to 'qemu') #if $power_address and $power_address.count(':') > 0 #set (driver, power_address) = $power_address.split('://', 1) #else #set driver = "qemu" #end if ## Was a username requested (defaults to '')? #if $power_user #set $username = "%s@" % $power_user #else #set $username = "" #end if ## Default to localhost #if $username and $power_address is None or $power_address == "" #set $power_address = "localhost" #end if ## Perform requested action ## NOTE - may require additional setup by sys-admin to enable passwd-less operation domstate=\$(/usr/bin/virsh --connect $driver://$username$power_address/system domstate $power_id) if [ "$operation" = "destroy" -a "$domstate" = "running" -o "$operation" = "start" -a "$domstate" = "shut off" ]; then /usr/bin/virsh --connect $driver://$username$power_address/system $operation $power_id fi cobbler-2.4.1/templates/power/power_wti.template000066400000000000000000000001171227367477500220570ustar00rootroot00000000000000fence_wti -a "$power_address" -n "$power_id" -p "$power_pass" -o "$power_mode" cobbler-2.4.1/templates/pxe/000077500000000000000000000000001227367477500157445ustar00rootroot00000000000000cobbler-2.4.1/templates/pxe/bootcfg_esxi5.template000066400000000000000000000025101227367477500222370ustar00rootroot00000000000000bootstate=0 title=Cobbler - Loading ESXi installer prefix=$img_path kernel=tboot.b00 #if $getVar('system_name','') != '' #set $what = "system" #else #set $what = "profile" #end if kernelopt=ks=http://$server:$http_port/cblr/svc/op/ks/$what/$name modules=b.b00 --- useropts.gz --- k.b00 --- a.b00 --- ata-pata.v00 --- ata-pata.v01 --- ata-pata.v02 --- ata-pata.v03 --- ata-pata.v04 --- ata-pata.v05 --- ata-pata.v06 --- ata-pata.v07 --- block-cc.v00 --- ehci-ehc.v00 --- s.v00 --- weaselin.i00 --- ima-qla4.v00 --- ipmi-ipm.v00 --- ipmi-ipm.v01 --- ipmi-ipm.v02 --- misc-cni.v00 --- misc-dri.v00 --- net-be2n.v00 --- net-bnx2.v00 --- net-bnx2.v01 --- net-cnic.v00 --- net-e100.v00 --- net-e100.v01 --- net-enic.v00 --- net-forc.v00 --- net-igb.v00 --- net-ixgb.v00 --- net-nx-n.v00 --- net-r816.v00 --- net-r816.v01 --- net-s2io.v00 --- net-sky2.v00 --- net-tg3.v00 --- ohci-usb.v00 --- sata-ahc.v00 --- sata-ata.v00 --- sata-sat.v00 --- sata-sat.v01 --- sata-sat.v02 --- sata-sat.v03 --- scsi-aac.v00 --- scsi-adp.v00 --- scsi-aic.v00 --- scsi-bnx.v00 --- scsi-fni.v00 --- scsi-hps.v00 --- scsi-ips.v00 --- scsi-lpf.v00 --- scsi-meg.v00 --- scsi-meg.v01 --- scsi-meg.v02 --- scsi-mpt.v00 --- scsi-mpt.v01 --- scsi-mpt.v02 --- scsi-qla.v00 --- scsi-qla.v01 --- scsi-rst.v00 --- uhci-usb.v00 --- tools.t00 --- imgdb.tgz --- imgpayld.tgz build= updated=0 cobbler-2.4.1/templates/pxe/bootcfg_esxi51.template000066400000000000000000000026641227367477500223320ustar00rootroot00000000000000bootstate=0 title=Cobbler - Loading ESXi installer prefix=$img_path kernel=tboot.b00 
#if $getVar('system_name','') != '' #set $what = "system" #else #set $what = "profile" #end if kernelopt=ks=http://$server:$http_port/cblr/svc/op/ks/$what/$name modules=b.b00 --- useropts.gz --- k.b00 --- chardevs.b00 --- a.b00 --- user.b00 --- s.v00 --- ata_pata.v00 --- ata_pata.v01 --- ata_pata.v02 --- ata_pata.v03 --- ata_pata.v04 --- ata_pata.v05 --- ata_pata.v06 --- ata_pata.v07 --- block_cc.v00 --- ehci_ehc.v00 --- weaselin.t00 --- esx_dvfi.v00 --- xlibs.v00 --- ima_qla4.v00 --- ipmi_ipm.v00 --- ipmi_ipm.v01 --- ipmi_ipm.v02 --- misc_cni.v00 --- misc_dri.v00 --- net_be2n.v00 --- net_bnx2.v00 --- net_bnx2.v01 --- net_cnic.v00 --- net_e100.v00 --- net_e100.v01 --- net_enic.v00 --- net_forc.v00 --- net_igb.v00 --- net_ixgb.v00 --- net_nx_n.v00 --- net_r816.v00 --- net_r816.v01 --- net_s2io.v00 --- net_sky2.v00 --- net_tg3.v00 --- net_vmxn.v00 --- ohci_usb.v00 --- sata_ahc.v00 --- sata_ata.v00 --- sata_sat.v00 --- sata_sat.v01 --- sata_sat.v02 --- sata_sat.v03 --- sata_sat.v04 --- scsi_aac.v00 --- scsi_adp.v00 --- scsi_aic.v00 --- scsi_bnx.v00 --- scsi_fni.v00 --- scsi_hps.v00 --- scsi_ips.v00 --- scsi_lpf.v00 --- scsi_meg.v00 --- scsi_meg.v01 --- scsi_meg.v02 --- scsi_mpt.v00 --- scsi_mpt.v01 --- scsi_mpt.v02 --- scsi_qla.v00 --- scsi_qla.v01 --- scsi_rst.v00 --- uhci_usb.v00 --- tools.t00 --- xorg.v00 --- imgdb.tgz --- imgpayld.tgz build= updated=0 cobbler-2.4.1/templates/pxe/bootcfg_esxi55.template000066400000000000000000000030171227367477500223270ustar00rootroot00000000000000bootstate=0 title=Loading ESXi installer prefix=$img_path kernel=tboot.b00 #if $getVar('system_name','') != '' #set $what = "system" #else #set $what = "profile" #end if kernelopt=ks=http://$server:$http_port/cblr/svc/op/ks/$what/$name modules=b.b00 --- jumpstrt.gz --- useropts.gz --- k.b00 --- chardevs.b00 --- a.b00 --- user.b00 --- sb.v00 --- s.v00 --- ata_pata.v00 --- ata_pata.v01 --- ata_pata.v02 --- ata_pata.v03 --- ata_pata.v04 --- ata_pata.v05 --- ata_pata.v06 --- ata_pata.v07 --- block_cc.v00 --- ehci_ehc.v00 --- elxnet.v00 --- weaselin.t00 --- esx_dvfi.v00 --- xlibs.v00 --- ima_qla4.v00 --- ipmi_ipm.v00 --- ipmi_ipm.v01 --- ipmi_ipm.v02 --- lpfc.v00 --- lsi_mr3.v00 --- lsi_msgp.v00 --- misc_cni.v00 --- misc_dri.v00 --- mtip32xx.v00 --- net_be2n.v00 --- net_bnx2.v00 --- net_bnx2.v01 --- net_cnic.v00 --- net_e100.v00 --- net_e100.v01 --- net_enic.v00 --- net_forc.v00 --- net_igb.v00 --- net_ixgb.v00 --- net_mlx4.v00 --- net_mlx4.v01 --- net_nx_n.v00 --- net_tg3.v00 --- net_vmxn.v00 --- ohci_usb.v00 --- qlnative.v00 --- rste.v00 --- sata_ahc.v00 --- sata_ata.v00 --- sata_sat.v00 --- sata_sat.v01 --- sata_sat.v02 --- sata_sat.v03 --- sata_sat.v04 --- scsi_aac.v00 --- scsi_adp.v00 --- scsi_aic.v00 --- scsi_bnx.v00 --- scsi_bnx.v01 --- scsi_fni.v00 --- scsi_hps.v00 --- scsi_ips.v00 --- scsi_lpf.v00 --- scsi_meg.v00 --- scsi_meg.v01 --- scsi_meg.v02 --- scsi_mpt.v00 --- scsi_mpt.v01 --- scsi_mpt.v02 --- scsi_qla.v00 --- scsi_qla.v01 --- uhci_usb.v00 --- tools.t00 --- xorg.v00 --- imgdb.tgz --- imgpayld.tgz build= updated=0 cobbler-2.4.1/templates/pxe/efidefault.template000066400000000000000000000000461227367477500216110ustar00rootroot00000000000000default=0 timeout=0 $grub_menu_items cobbler-2.4.1/templates/pxe/gpxe_system_esxi5.template000066400000000000000000000003021227367477500231600ustar00rootroot00000000000000#!gpxe kernel http://$server:$http_port/cobbler/ks_mirror/$ks_mirror_name/mboot.c32 imgargs mboot.c32 -c http://$server:$http_port/cblr/svc/op/bootcfg/system/$name BOOTIF=$mac_address_eth0 boot 
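Note: as a sketch of what the gpxe_system_esxi5 template above produces, it might render to a gPXE script like the one below. The server name, HTTP port, ks_mirror directory, system name and MAC address are hypothetical placeholders:

#!gpxe
kernel http://cobbler.example.com:80/cobbler/ks_mirror/esxi5-x86_64/mboot.c32
imgargs mboot.c32 -c http://cobbler.example.com:80/cblr/svc/op/bootcfg/system/esxi-host01 BOOTIF=aa:bb:cc:dd:ee:ff
boot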
cobbler-2.4.1/templates/pxe/gpxe_system_freebsd.template000066400000000000000000000002471227367477500235450ustar00rootroot00000000000000#!gpxe kernel http://$server/cobbler/ks_mirror/$distro/boot/memdisk imgargs memdisk raw iso initrd http://$server/cobbler/ks_mirror/$distro/boot/bootonly.iso.zip boot cobbler-2.4.1/templates/pxe/gpxe_system_linux.template000066400000000000000000000002701227367477500232660ustar00rootroot00000000000000#!gpxe kernel http://$server:$http_port/cobbler/images/$distro/$kernel_name imgargs $kernel_name $append_line initrd http://$server:$http_port/cobbler/images/$distro/$initrd_name boot cobbler-2.4.1/templates/pxe/gpxe_system_local.template000066400000000000000000000001601227367477500232170ustar00rootroot00000000000000#!gpxe # Boot from local hard disk iseq ${smbios/manufacturer} HP && exit || sanboot --no-describe --drive 0x80 cobbler-2.4.1/templates/pxe/grubprofile.template000066400000000000000000000001421227367477500220160ustar00rootroot00000000000000title $profile_name root (nd) kernel $kernel_path $kernel_options initrd $initrd_path cobbler-2.4.1/templates/pxe/grubsystem.template000066400000000000000000000001421227367477500217020ustar00rootroot00000000000000title $profile_name root (nd) kernel $kernel_path $kernel_options initrd $initrd_path cobbler-2.4.1/templates/pxe/pxedefault.template000066400000000000000000000003561227367477500216460ustar00rootroot00000000000000DEFAULT menu PROMPT 0 MENU TITLE Cobbler | http://www.cobblerd.org/ TIMEOUT 200 TOTALTIMEOUT 6000 ONTIMEOUT $pxe_timeout_profile LABEL local MENU LABEL (local) MENU DEFAULT LOCALBOOT -1 $pxe_menu_items MENU end cobbler-2.4.1/templates/pxe/pxelocal.template000066400000000000000000000001431227367477500213060ustar00rootroot00000000000000DEFAULT local PROMPT 0 TIMEOUT 0 TOTALTIMEOUT 0 ONTIMEOUT local LABEL local LOCALBOOT -1 cobbler-2.4.1/templates/pxe/pxelocal_ia64.template000066400000000000000000000000501227367477500221260ustar00rootroot00000000000000default=linux image=dummy label=linux cobbler-2.4.1/templates/pxe/pxelocal_s390x.template000066400000000000000000000000061227367477500222520ustar00rootroot00000000000000local cobbler-2.4.1/templates/pxe/pxeprofile.template000066400000000000000000000001541227367477500216560ustar00rootroot00000000000000LABEL $profile_name kernel $kernel_path $menu_label $append_line ipappend 2 cobbler-2.4.1/templates/pxe/pxeprofile_arm.template000066400000000000000000000001311227367477500225100ustar00rootroot00000000000000LABEL $profile_name kernel $kernel_path $menu_label $append_line initrd $initrd_path cobbler-2.4.1/templates/pxe/pxeprofile_esxi.template000066400000000000000000000002051227367477500227030ustar00rootroot00000000000000LABEL $profile_name kernel $kernel_path $menu_label ipappend 2 append -c $img_path/cobbler-boot.cfg cobbler-2.4.1/templates/pxe/pxeprofile_s390x.template000066400000000000000000000004371227367477500226300ustar00rootroot00000000000000$kernel_path $initrd_path ## Only include kickstart URL -- all other parms in _parm and _conf files #set $start = $append_line.find("ks=") #set $end = $append_line.find(" ", $start) #if $start != -1 #if $end == -1 #set $end = $len($append_line) #end if $append_line[$start:$end] #end if cobbler-2.4.1/templates/pxe/pxesystem.template000066400000000000000000000001621227367477500215410ustar00rootroot00000000000000default linux prompt 0 timeout 1 label linux kernel $kernel_path ipappend 2 $append_line 
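Note: an entry generated from pxeprofile.template above typically looks something like the following once Cobbler substitutes its variables. The profile name, kernel/initrd paths and kickstart URL are hypothetical examples, and the real $append_line also carries whatever kernel options are set on the distro and profile:

LABEL fedora14-x86_64
        kernel /images/fedora14-x86_64/vmlinuz
        MENU LABEL fedora14-x86_64
        append initrd=/images/fedora14-x86_64/initrd.img ks=http://cobbler.example.com/cblr/svc/op/ks/profile/fedora14-x86_64
        ipappend 2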
cobbler-2.4.1/templates/pxe/pxesystem_arm.template000066400000000000000000000001451227367477500224010ustar00rootroot00000000000000default linux prompt 0 timeout 1 label linux kernel $kernel_path $append_line initrd $initrd_path cobbler-2.4.1/templates/pxe/pxesystem_esxi.template000066400000000000000000000004111227367477500225660ustar00rootroot00000000000000default linux prompt 0 timeout 1 label linux kernel $kernel_path ipappend 2 append $img_path/vmkboot.gz $append_line --- $img_path/vmkernel.gz --- $img_path/sys.vgz --- $img_path/cim.vgz --- $img_path/ienviron.vgz --- $img_path/install.vgz cobbler-2.4.1/templates/pxe/pxesystem_ia64.template000066400000000000000000000001531227367477500223640ustar00rootroot00000000000000image=$kernel_path label=netinstall append="$append_line" initrd=$initrd_path read-only root=/dev/ram cobbler-2.4.1/templates/pxe/pxesystem_ppc.template000066400000000000000000000003601227367477500224030ustar00rootroot00000000000000# yaboot.conf generated by cobbler init-message="Cobbler generated yaboot configuration.\nHit for boot options" timeout=80 delay=100 default=linux image=$kernel_path label=linux initrd=$initrd_path append="$append_line" cobbler-2.4.1/templates/pxe/pxesystem_s390x.template000066400000000000000000000004371227367477500225140ustar00rootroot00000000000000$kernel_path $initrd_path ## Only include kickstart URL -- all other parms in _parm and _conf files #set $start = $append_line.find("ks=") #set $end = $append_line.find(" ", $start) #if $start != -1 #if $end == -1 #set $end = $len($append_line) #end if $append_line[$start:$end] #end if cobbler-2.4.1/templates/pxe/s390x_conf.template000066400000000000000000000047631227367477500214060ustar00rootroot00000000000000## Stuff content into a list so we can display a condensed format later #if $isinstance($kernel_options, $list) #set $argList = $kernel_options #else #set $argList = $kernel_options.split(" ") #end if #silent $argList.append("DASD=100-101,200") #silent $argList.append("SUBCHANNELS=0.0.0600,0.0.0601,0.0.0602") #silent $argList.append("NETTYPE=qeth") #if $getVar('hostname', '') != '' #silent $argList.append("HOSTNAME=%s" % $hostname) #end if #if $getVar('name_servers_search', '') != '' #silent $argList.append("SEARCHDNS=%s" % ':'.join($name_servers_search)) #end if #if $getVar('gateway', '') != '' #silent $argList.append("GATEWAY=%s" % $gateway) #end if #if $getVar('name_servers', '') != '' #silent $argList.append("DNS=%s" % ':'.join($name_servers)) #end if #if $getVar("interfaces","") != "" and $interfaces.has_key("eth0") #set $ip=$interfaces['eth0'].get('ip_address','') #set $netmask=$interfaces['eth0'].get('netmask','') #if $ip != '' #set $tokens = $ip.split('.') #set $tokens = $tokens[0:-1] #set $broadcast = ".".join($tokens) + ".255" #else #set $broadcast = "" #end if #else #set $ip="" #set $netmask="" #set $broadcast = "" #end if #if $ip != '' #silent $argList.append("IPADDR=%s" % $ip) #end if ## Unless provided, calculate the network using netmask and broadcast #if $getVar('network', '') != '' #silent $argList.append("NETWORK=%s" % $network) #elif $netmask != '' and $ip != '' #set $ip_split = $ip.split('.') #set $nm_split = $netmask.split('.') #set $nw_split = [] #for $oct in $range($len($ip_split)) #silent $nw_split.append("%s" % ($int($nm_split[$oct]) & $int($ip_split[$oct]))) #end for #set $network=".".join($nw_split) #silent $argList.append("NETWORK=%s" % $network) #end if #if $netmask != '' #silent $argList.append("NETMASK=%s" % $netmask) #end if #if $broadcast != '' #silent 
$argList.append("BROADCAST=%s" % $broadcast) #end if #silent $argList.append("MTU=1500") #silent $argList.append("PORTNAME=UNASSIGNED") #silent $argList.append("PORTNO=0") #silent $argList.append("LAYER2=0") ## ===================================== ## Now write out data. Content cannot be longer than 80 characters in length, ## and must not exceed 11 lines ## ===================================== #set $output_str="" #for $item in $argList #if $len($output_str) + $len($item) >= 80 #echo "%s\n" % $output_str.strip() #set $output_str = "" #end if #set $output_str = "%s %s" % ($output_str, $item) #end for #if $len($output_str) > 0 #echo "%s\n" % $output_str.strip() #end if cobbler-2.4.1/templates/pxe/s390x_parm.template000066400000000000000000000012471227367477500214120ustar00rootroot00000000000000## Stuff content into a list so we can display a condensed format later #if $isinstance($kernel_options, $list) #set $argList = $kernel_options #else #set $argList = $kernel_options.split(" ") #end if ## ===================================== ## Now write out data. Content cannot be longer than 80 characters in length, ## and must not exceed 11 lines ## ===================================== #set $output_str="" #for $item in $argList #set $output_str = "%s %s" % ($output_str, $item) #if $len($output_str) >= 80 #echo "%s\n" % $output_str[0:80] #set $output_str = $output_str[80:$len($output_str)] #end if #end for #if $len($output_str) > 0 #echo "%s\n" % $output_str #end if cobbler-2.4.1/templates/reporting/000077500000000000000000000000001227367477500171615ustar00rootroot00000000000000cobbler-2.4.1/templates/reporting/build_report_email.template000066400000000000000000000017531227367477500245650ustar00rootroot00000000000000From: $from_addr To: $to_addr #if $getVar("interfaces","") != "" Subject: $subject (system: $name, ip: $boot_ip) #else Subject: $subject (profile: $name, ip: $boot_ip) #end if Cobbler build report. ======================================================= http://www.cobblerd.org/ #if $getVar("interfaces","") != "": system name : $name profile : $profile distro : $distro #for $intf in $interfaces.keys(): #set $mac = $interfaces[$intf].get("mac_address","") #set $ip = $interfaces[$intf].get("ip_address","") #set $static = $interfaces[$intf].get("static","") interface: $intf mac: $mac ip: $ip static: $static #end for #else profile : $name distro : $distro #end if kernel_options: $getVar("kernel_options","") #if $kickstart.startswith("/") or $kickstart == "": #if $getVar("interfaces","") != "" kickstart=http://$server/cblr/svc/op/ks/system/$name #else kickstart=http://$server/cblr/svc/op/ks/profile/$name #end if #else kickstart=$kickstart #end if cobbler-2.4.1/tests/000077500000000000000000000000001227367477500143145ustar00rootroot00000000000000cobbler-2.4.1/tests/README000066400000000000000000000007421227367477500151770ustar00rootroot00000000000000Before running tests, ensure you have python-nosetests installed. To run tests: - Ensure you have python-nose installed. - Copy apitests.conf.sample to apitests.conf and modify for your environment. - From the apitests/ directory execute: PYTHONPATH=./ nosetests Note that these tests require modifications to a running Cobbler server. They will attempt to clean up any test objects created, but it may be best to not execute these against a production Cobbler server. 
cobbler-2.4.1/tests/__init__.py000066400000000000000000000000001227367477500164130ustar00rootroot00000000000000cobbler-2.4.1/tests/apitests.conf000066400000000000000000000010551227367477500170200ustar00rootroot00000000000000# Cobbler test suite settings # Cobbler server to test against: cobbler_server: 127.0.0.1 cobbler_user: testing cobbler_pass: testing # Test files, these must exist on the Cobbler server's filesystem: # These do not have to be real kernel/initrd, just empty files will do: test_kernel: /tmp/cobblertest/fake-kernel test_initrd: /tmp/cobblertest/fake-initrd # Should exist by default on most Cobbler servers: test_kickstart: /var/lib/cobbler/kickstarts/sample.ks # These should be a valid cheetah template: test_template: /tmp/cobblertest/fake-template cobbler-2.4.1/tests/apitests.conf.sample000066400000000000000000000010551227367477500203000ustar00rootroot00000000000000# Cobbler test suite settings # Cobbler server to test against: cobbler_server: 127.0.0.1 cobbler_user: testing cobbler_pass: testing # Test files, these must exist on the Cobbler server's filesystem: # These do not have to be real kernel/initrd, just empty files will do: test_kernel: /tmp/cobblertest/fake-kernel test_initrd: /tmp/cobblertest/fake-initrd # Should exist by default on most Cobbler servers: test_kickstart: /var/lib/cobbler/kickstarts/sample.ks # These should be a valid cheetah template: test_template: /tmp/cobblertest/fake-template cobbler-2.4.1/tests/base.py000066400000000000000000000304531227367477500156050ustar00rootroot00000000000000""" Base.py defines a base set of helper methods for running automated Cobbler XMLRPC API tests Copyright 2009, Red Hat, Inc and Others Steve Salevan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import yaml import xmlrpclib import unittest import traceback import random import commands import urlgrabber import os.path cfg = None CONFIG_LOC = "./apitests.conf" def read_config(): global cfg f = open(CONFIG_LOC, 'r') cfg = yaml.safe_load(f) f.close() read_config() TEST_DISTRO_PREFIX = "TEST-DISTRO-" TEST_PROFILE_PREFIX = "TEST-PROFILE-" TEST_SYSTEM_PREFIX = "TEST-SYSTEM-" TEST_MGMTCLASS_PREFIX = "TEST-MGMTCLASS-" TEST_PACKAGE_PREFIX = "TEST-PACKAGE-" TEST_FILE_PREFIX = "TEST-FILE-" FAKE_KS_CONTENTS = "HELLO WORLD" FAKE_TEMPLATE_CONTENTS = "HELLO WORLD" # Files to pretend are kernel/initrd, don't point to anything real. # These will be created if they don't already exist. FAKE_KERNEL = "/tmp/cobbler-testing-fake-kernel" FAKE_INITRD = "/tmp/cobbler-testing-fake-initrd" FAKE_KICKSTART = "/tmp/cobbler-testing-kickstart" FAKE_TEMPLATE = "/tmp/cobbler-testing-template" class CobblerTest(unittest.TestCase): def __cleanUpObjects(self): """ Cleanup the test objects we created during testing. 
""" for system_name in self.cleanup_systems: try: self.api.remove_system(system_name, self.token) except Exception, e: print("ERROR: unable to delete system: %s" % system_name) print(e) pass for profile_name in self.cleanup_profiles: try: self.api.remove_profile(profile_name, self.token) except Exception, e: print("ERROR: unable to delete profile: %s" % profile_name) print(e) pass for distro_name in self.cleanup_distros: try: self.api.remove_distro(distro_name, self.token) print("Removed distro: %s" % distro_name) except Exception, e: print("ERROR: unable to delete distro: %s" % distro_name) print(e) pass for mgmtclass_name in self.cleanup_mgmtclasses: try: self.api.remove_mgmtclass(mgmtclass_name, self.token) print("Removed mgmtclass: %s" % mgmtclass_name) except Exception, e: print("ERROR: unable to delete mgmtclass: %s" % mgmtclass_name) print(e) pass for package_name in self.cleanup_packages: try: self.api.remove_package(package_name, self.token) print("Removed package: %s" % package_name) except Exception, e: print("ERROR: unable to delete package: %s" % package_name) print(e) pass for file_name in self.cleanup_files: try: self.api.remove_file(file_name, self.token) print("Removed file: %s" % file_name) except Exception, e: print("ERROR: unable to delete file: %s" % file_name) print(e) pass def setUp(self): """ Sets up Cobbler API connection and logs in """ self.api = xmlrpclib.Server("http://%s/cobbler_api" % cfg["cobbler_server"]) self.token = self.api.login(cfg["cobbler_user"], cfg["cobbler_pass"]) # Store object names to clean up in teardown. Be sure not to # store anything in here unless we're sure it was successfully # created by the tests. self.cleanup_distros = [] self.cleanup_profiles = [] self.cleanup_systems = [] self.cleanup_mgmtclasses = [] self.cleanup_packages = [] self.cleanup_files = [] # Create a fake kernel/init pair in /tmp, Cobbler doesn't care what # these files actually contain. if not os.path.exists(FAKE_KERNEL): commands.getstatusoutput("touch %s" % FAKE_KERNEL) if not os.path.exists(FAKE_INITRD): commands.getstatusoutput("touch %s" % FAKE_INITRD) if not os.path.exists(FAKE_KICKSTART): f = open(FAKE_KICKSTART, 'w') f.write(FAKE_KS_CONTENTS) f.close() if not os.path.exists(FAKE_TEMPLATE): f = open(FAKE_TEMPLATE, 'w') f.write(FAKE_TEMPLATE_CONTENTS) f.close() def tearDown(self): """ Removes any Cobbler objects created during a test """ self.__cleanUpObjects() def create_distro(self): """ Create a test distro with a random name, store it for cleanup during teardown. Returns a tuple of the objects ID and name. 
""" distro_name = "%s%s" % (TEST_DISTRO_PREFIX, random.randint(1, 1000000)) did = self.api.new_distro(self.token) self.api.modify_distro(did, "name", distro_name, self.token) self.api.modify_distro(did, "kernel", FAKE_KERNEL, self.token) self.api.modify_distro(did, "initrd", FAKE_INITRD, self.token) self.api.modify_distro(did, "kopts", { "dog" : "fido", "cat" : "fluffy" }, self.token) self.api.modify_distro(did, "ksmeta", "good=sg1 evil=gould", self.token) self.api.modify_distro(did, "breed", "redhat", self.token) self.api.modify_distro(did, "os-version", "rhel5", self.token) self.api.modify_distro(did, "owners", "sam dave", self.token) self.api.modify_distro(did, "mgmt-classes", "blip", self.token) self.api.modify_distro(did, "comment", "test distro", self.token) self.api.modify_distro(did, "redhat_management_key", "1-ABC123", self.token) self.api.modify_distro(did, "redhat_management_server", "mysatellite.example.com", self.token) self.api.save_distro(did, self.token) self.cleanup_distros.append(distro_name) url = "http://%s/cblr/svc/op/list/what/distros" % cfg['cobbler_server'] data = urlgrabber.urlread(url) self.assertNotEquals(-1, data.find(distro_name)) return (did, distro_name) def create_profile(self, distro_name): """ Create a test profile with random name associated with the given distro. Returns a tuple of profile ID and name. """ profile_name = "%s%s" % (TEST_PROFILE_PREFIX, random.randint(1, 1000000)) profile_id = self.api.new_profile(self.token) self.api.modify_profile(profile_id, "name", profile_name, self.token) self.api.modify_profile(profile_id, "distro", distro_name, self.token) self.api.modify_profile(profile_id, "kickstart", FAKE_KICKSTART, self.token) self.api.modify_profile(profile_id, "kopts", { "dog" : "fido", "cat" : "fluffy" }, self.token) self.api.modify_profile(profile_id, "kopts-post", { "phil" : "collins", "steve" : "hackett" }, self.token) self.api.modify_profile(profile_id, "ksmeta", "good=sg1 evil=gould", self.token) self.api.modify_profile(profile_id, "breed", "redhat", self.token) self.api.modify_profile(profile_id, "owners", "sam dave", self.token) self.api.modify_profile(profile_id, "mgmt-classes", "blip", self.token) self.api.modify_profile(profile_id, "comment", "test profile", self.token) self.api.modify_profile(profile_id, "redhat_management_key", "1-ABC123", self.token) self.api.modify_profile(profile_id, "redhat_management_server", "mysatellite.example.com", self.token) self.api.modify_profile(profile_id, "virt_bridge", "virbr0", self.token) self.api.modify_profile(profile_id, "virt_cpus", "2", self.token) self.api.modify_profile(profile_id, "virt_file_size", "3", self.token) self.api.modify_profile(profile_id, "virt_path", "/opt/qemu/%s" % profile_name, self.token) self.api.modify_profile(profile_id, "virt_ram", "1024", self.token) self.api.modify_profile(profile_id, "virt_type", "qemu", self.token) self.api.save_profile(profile_id, self.token) self.cleanup_profiles.append(profile_name) # Check cobbler services URLs: url = "http://%s/cblr/svc/op/ks/profile/%s" % (cfg['cobbler_server'], profile_name) data = urlgrabber.urlread(url) self.assertEquals(FAKE_KS_CONTENTS, data) url = "http://%s/cblr/svc/op/list/what/profiles" % cfg['cobbler_server'] data = urlgrabber.urlread(url) self.assertNotEquals(-1, data.find(profile_name)) return (profile_id, profile_name) def create_system(self, profile_name): """ Create a system record. 
Returns a tuple of system name """ system_name = "%s%s" % (TEST_SYSTEM_PREFIX, random.randint(1, 1000000)) system_id = self.api.new_system(self.token) self.api.modify_system(system_id, "name", system_name, self.token) self.api.modify_system(system_id, "profile", profile_name, self.token) self.api.save_system(system_id, self.token) return (system_id, system_name) def create_mgmtclass(self, package_name, file_name): """ Create a mgmtclass record. Returns a tuple of mgmtclass name """ mgmtclass_name = "%s%s" % (TEST_MGMTCLASS_PREFIX, random.randint(1, 1000000)) mgmtclass_id = self.api.new_mgmtclass(self.token) self.api.modify_mgmtclass(mgmtclass_id, "name", mgmtclass_name, self.token) self.api.modify_mgmtclass(mgmtclass_id, "packages", package_name, self.token) self.api.modify_mgmtclass(mgmtclass_id, "files", file_name, self.token) self.api.save_mgmtclass(mgmtclass_id, self.token) self.cleanup_mgmtclasses.append(mgmtclass_name) return (mgmtclass_id, mgmtclass_name) def create_package(self): """ Create a package record. Returns a tuple of package name """ package_name = "%s%s" % (TEST_PACKAGE_PREFIX, random.randint(1, 1000000)) package_id = self.api.new_package(self.token) self.api.modify_package(package_id, "name", package_name, self.token) self.api.modify_package(package_id, "action", "create", self.token) self.api.modify_package(package_id, "installer", "yum", self.token) self.api.modify_package(package_id, "version", "1.0.0", self.token) self.api.save_package(package_id, self.token) self.cleanup_packages.append(package_name) return (package_id, package_name) def create_file(self): """ Create a file record. Returns a tuple of file name """ file_name = "%s%s" % (TEST_FILE_PREFIX, random.randint(1, 1000000)) path = "/tmp/%s" % file_name file_id = self.api.new_file(self.token) self.api.modify_file(file_id, "name", file_name, self.token) self.api.modify_file(file_id, "is_directory", "False", self.token) self.api.modify_file(file_id, "action", "create", self.token) self.api.modify_file(file_id, "group", "root", self.token) self.api.modify_file(file_id, "mode", "0644", self.token) self.api.modify_file(file_id, "owner", "root", self.token) self.api.modify_file(file_id, "path", path, self.token) self.api.modify_file(file_id, "template", FAKE_TEMPLATE, self.token) self.api.save_file(file_id, self.token) self.cleanup_files.append(file_name) return (file_id, file_name) cobbler-2.4.1/tests/distro_test.py000066400000000000000000000044621227367477500172370ustar00rootroot00000000000000""" new_distro.py defines a set of methods designed for testing Cobbler's distros. Copyright 2009, Red Hat, Inc and Others Steve Salevan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. 
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ from base import * import urllib2 class DistroTests(CobblerTest): def test_new_working_distro_basic(self): """ Attempts to create a barebones Cobbler distro using information contained within config file """ name = self.create_distro()[1] distro = self.api.find_distro({'name': name}) self.assertTrue(distro != None) def test_new_working_distro_detailed(self): """ Attempts to create a Cobbler distro with a bevy of options, using information contained within config file """ did, distro_name = self.create_distro() self.assertTrue(self.api.find_distro({'name': distro_name}) != None) def test_new_nonworking_distro(self): """ Attempts to create a distro lacking required information, passes if xmlrpclib returns Fault """ did = self.api.new_distro(self.token) self.api.modify_distro(did, "name", "whatever", self.token) self.assertRaises(xmlrpclib.Fault, self.api.save_distro, did, self.token) def test_new_distro_without_token(self): """ Attempts to run new_distro method without supplying authenticated token """ self.assertRaises(xmlrpclib.Fault, self.api.new_distro) def test_ks_mirror_accessible(self): url = "http://%s/cblr/ks_mirror/" % (cfg['cobbler_server']) # Just want to be sure no 404 HTTPError is thrown: response = urllib2.urlopen(url) cobbler-2.4.1/tests/koan/000077500000000000000000000000001227367477500152445ustar00rootroot00000000000000cobbler-2.4.1/tests/koan/__init__.py000066400000000000000000000000231227367477500173500ustar00rootroot00000000000000import virtinstall cobbler-2.4.1/tests/koan/virtinstall.py000066400000000000000000000173061227367477500202000ustar00rootroot00000000000000import unittest import koan from koan.virtinstall import build_commandline def setup(): try: from virtinst import version as vi_version koan.virtinstall.virtinst_version = vi_version.__version__.split('.') except: koan.virtinstall.virtinst_version = 6 class KoanVirtInstallTest(unittest.TestCase): def testXenPVBasic(self): cmd = build_commandline("xen:///", name="foo", ram=256, uuid="ad6611b9-98e4-82c8-827f-051b6b6680d7", vcpus=1, bridge="br0", disks=[("/tmp/foo1.img", 8), ("/dev/foo1", 0)], virt_type="xenpv", qemu_driver_type="virtio", qemu_net_type="virtio", profile_data={ "kernel_local" : "kernel", "initrd_local" : "initrd", }, extra="ks=http://example.com/ks.ks") cmd = " ".join(cmd) self.assertEquals(cmd, ("virt-install --connect xen:/// --name foo --ram 256 --vcpus 1 " "--uuid ad6611b9-98e4-82c8-827f-051b6b6680d7 --vnc --paravirt " "--boot kernel=kernel,initrd=initrd,kernel_args=ks=http://example.com/ks.ks " "--disk path=/tmp/foo1.img,size=8 --disk path=/dev/foo1 " "--network bridge=br0 " "--wait 0 --noautoconsole")) def testXenFVBasic(self): cmd = build_commandline("xen:///", name="foo", ram=256, vcpus=1, disks=[("/dev/foo1", 0)], fullvirt=True, arch="x86_64", bridge="br0,br1", virt_type="xenfv", profile_data = { "breed" : "redhat", "os_version" : "fedora14", "interfaces" : { "eth0": { "interface_type": "na", "mac_address": "11:22:33:44:55:66", }, "eth1": { "interface_type": "na", "mac_address": "11:22:33:33:22:11", } } }) cmd = " ".join(cmd) self.assertEquals(cmd, ("virt-install --connect xen:/// --name foo --ram 256 --vcpus 1 " "--vnc --hvm --pxe --arch x86_64 " "--os-variant fedora14 --disk path=/dev/foo1 " "--network bridge=br0,mac=11:22:33:44:55:66 " "--network bridge=br1,mac=11:22:33:33:22:11 " 
"--wait 0 --noautoconsole")) def testQemuCDROM(self): cmd = build_commandline("qemu:///system", name="foo", ram=256, vcpus=1, disks=[("/tmp/foo1.img", 8), ("/dev/foo1", 0)], fullvirt=True, virt_type="qemu", bridge="br0", profile_data = { "breed" : "windows", "file" : "/some/cdrom/path.iso", }) cmd = " ".join(cmd) self.assertEquals(cmd, ("virt-install --connect qemu:///system --name foo --ram 256 " "--vcpus 1 --vnc --virt-type qemu --machine pc --hvm --cdrom /some/cdrom/path.iso " "--os-type windows --disk path=/tmp/foo1.img,size=8 " "--disk path=/dev/foo1 --network bridge=br0 " "--wait 0 --noautoconsole") ) def testQemuURL(self): cmd = build_commandline("qemu:///system", name="foo", ram=256, vcpus=1, disks=[("/tmp/foo1.img", 8), ("/dev/foo1", 0)], fullvirt=True, arch="i686", bridge="br0", virt_type="qemu", qemu_driver_type="virtio", qemu_net_type="virtio", profile_data = { "breed" : "ubuntu", "os_version" : "natty", "install_tree" : "http://example.com/some/install/tree", }, extra="ks=http://example.com/ks.ks text kssendmac") cmd = " ".join(cmd) self.assertEquals(cmd, ("virt-install --connect qemu:///system --name foo --ram 256 " "--vcpus 1 --vnc --virt-type qemu --machine pc --hvm " "--extra-args=ks=http://example.com/ks.ks text kssendmac " "--location http://example.com/some/install/tree/ --arch i686 " "--os-variant ubuntunatty " "--disk path=/tmp/foo1.img,size=8,bus=virtio " "--disk path=/dev/foo1,bus=virtio " "--network bridge=br0,model=virtio --wait 0 --noautoconsole") ) def testKvmURL(self): cmd = build_commandline("qemu:///system", name="foo", ram=256, vcpus=1, disks=[("/tmp/foo1.img", 8), ("/dev/foo1", 0)], fullvirt=None, arch="i686", bridge="br0", virt_type="kvm", qemu_driver_type="virtio", qemu_net_type="virtio", qemu_machine_type="pc-1.0", profile_data = { "breed" : "ubuntu", "os_version" : "natty", "install_tree" : "http://example.com/some/install/tree", }, extra="ks=http://example.com/ks.ks text kssendmac") cmd = " ".join(cmd) self.assertEquals(cmd, ("virt-install --connect qemu:///system --name foo --ram 256 " "--vcpus 1 --vnc --virt-type kvm --machine pc-1.0 " "--extra-args=ks=http://example.com/ks.ks text kssendmac " "--location http://example.com/some/install/tree/ --arch i686 " "--os-variant ubuntunatty " "--disk path=/tmp/foo1.img,size=8,bus=virtio " "--disk path=/dev/foo1,bus=virtio " "--network bridge=br0,model=virtio --wait 0 --noautoconsole") ) def testImage(self): cmd = build_commandline("import", name="foo", ram=256, vcpus=1, fullvirt=True, bridge="br0,br2", disks=[], qemu_driver_type="virtio", qemu_net_type="virtio", profile_data = { "file" : "/some/install/image.img", "network_count" : 2, }) cmd = " ".join(cmd) self.assertEquals(cmd, ("virt-install --name foo --ram 256 --vcpus 1 --vnc --import " "--disk path=/some/install/image.img --network bridge=br0 " "--network bridge=br2 --wait 0 --noautoconsole") ) @patch('koan.virtinstall.utils.subprocess_call') @patch('koan.virtinstall.utils.os.path', new_callable=OsPathMock) def test_create_qcow_file(self, mock_path, mock_subprocess): disks = [ ( '/path/to/imagedir/new_qcow_file', '30', 'qcow' ), ( '/path/to/imagedir/new_qcow2_file', '30', 'qcow2' ), ( '/path/to/imagedir/new_raw_file', '30', 'raw' ), ( '/path/to/imagedir/new_vmdk_file', '30', 'vmdk' ), ( '/path/to/imagedir/new_qcow_file', '30'), ( '/path/to/imagedir/new_qcow2_file', '0', 'qcow2' ), ( '/path/to/imagedir/existfile', '30', 'qcow2' ), ( '/path/to/imagedir', '30', 'qcow2' ), ] create_image_file(disks) res = [] for args, kargs in 
mock_subprocess.call_args_list: res.append(" ".join(args[0])) self.assertEquals(res, [ 'qemu-img create -f qcow /path/to/imagedir/new_qcow_file 30G', 'qemu-img create -f qcow2 /path/to/imagedir/new_qcow2_file 30G', 'qemu-img create -f raw /path/to/imagedir/new_raw_file 30G', 'qemu-img create -f vmdk /path/to/imagedir/new_vmdk_file 30G', ] ) cobbler-2.4.1/tests/mgmtclass_test.py000066400000000000000000000024061227367477500177210ustar00rootroot00000000000000""" Copyright 2009, Red Hat, Inc and Others This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ from base import * class MgmtclassTests(CobblerTest): def setUp(self): CobblerTest.setUp(self) (self.package_id, self.package_name) = self.create_package() (self.file_id, self.file_name) = self.create_file() def test_create_mgmtclass(self): """ Test creation of a cobbler mgmtclass. """ (mgmtclass_id, mgmtclass_name) = self.create_mgmtclass(self.package_name, self.file_name) mgmtclasses = self.api.find_mgmtclass({'name': mgmtclass_name}) self.assertTrue(len(mgmtclasses) > 0) cobbler-2.4.1/tests/profile_test.py000066400000000000000000000035221227367477500173670ustar00rootroot00000000000000""" new_profile.py defines a set of methods designed for testing Cobbler profiles. Copyright 2009, Red Hat, Inc and Others Steve Salevan This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ import urllib2 from base import * class ProfileTests(CobblerTest): def test_new_working_profile_basic(self): """ Attempt to create a Cobbler profile. """ distro_name = self.create_distro()[1] profile_name = self.create_profile(distro_name)[1] self.assertTrue(self.api.find_profile({'name': profile_name}) != []) def test_new_nonworking_profile(self): """ Attempts to create a profile lacking required information. 
""" did = self.api.new_profile(self.token) self.api.modify_profile(did, "name", "anythinggoes", self.token) self.assertRaises(xmlrpclib.Fault, self.api.save_profile, did, self.token) def test_getks_no_such_profile(self): url = "http://%s/cblr/svc/op/ks/profile/%s" % (cfg['cobbler_server'], "doesnotexist") try: response = urllib2.urlopen(url) self.fail() except urllib2.HTTPError, e: self.assertEquals(404, e.code) cobbler-2.4.1/tests/system_test.py000066400000000000000000000040041227367477500172470ustar00rootroot00000000000000""" Copyright 2009, Red Hat, Inc and Others This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA """ from base import * import time class SystemTests(CobblerTest): def setUp(self): CobblerTest.setUp(self) (self.distro_id, self.distro_name) = self.create_distro() (self.profile_id, self.profile_name) = self.create_profile(self.distro_name) def test_create_system(self): """ Test creation of a cobbler system. """ (system_id, system_name) = self.create_system(self.profile_name) systems = self.api.find_system({'name': system_name}) self.assertTrue(len(systems) > 0) # Old tests laying around indicate this should pass, but it no longer seems too? #def test_nopxe(self): # """ Test network boot loop prevention. 
""" # (system_id, system_name) = self.create_system(self.profile_name) # self.api.modify_system(system_id, 'netboot_enabled', True, self.token) # self.api.save_system(system_id, self.token) # #systems = self.api.find_system({'name': system_name}) # url = "http://%s/cblr/svc/op/nopxe/system/%s" % \ # (cfg['cobbler_server'], system_name) # data = urlgrabber.urlread(url) # time.sleep(2) # results = self.api.get_blended_data("", system_name) # print(results['netboot_enabled']) # self.assertFalse(results['netboot_enabled']) cobbler-2.4.1/web/000077500000000000000000000000001227367477500137275ustar00rootroot00000000000000cobbler-2.4.1/web/__init__.py000066400000000000000000000000001227367477500160260ustar00rootroot00000000000000cobbler-2.4.1/web/cobbler.wsgi000066400000000000000000000007501227367477500162340ustar00rootroot00000000000000import inspect import os import sys os.environ['DJANGO_SETTINGS_MODULE'] = 'settings' os.environ['PYTHON_EGG_CACHE'] = '/var/lib/cobbler/webui_cache' # chdir resilient solution script_path = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) if script_path not in sys.path: sys.path.insert(0, script_path) sys.path.insert(0, os.path.join(script_path, 'cobbler_web')) import django.core.handlers.wsgi application = django.core.handlers.wsgi.WSGIHandler() cobbler-2.4.1/web/cobbler_web/000077500000000000000000000000001227367477500161745ustar00rootroot00000000000000cobbler-2.4.1/web/cobbler_web/__init__.py000066400000000000000000000000001227367477500202730ustar00rootroot00000000000000cobbler-2.4.1/web/cobbler_web/templates/000077500000000000000000000000001227367477500201725ustar00rootroot00000000000000cobbler-2.4.1/web/cobbler_web/templates/blank.tmpl000066400000000000000000000004031227367477500221540ustar00rootroot00000000000000#extends cobbler.webui.master #block body #if $getVar('more_blank','') == '' You are now logged in to Cobbler. Main screen turn on. #else Cobbler awaits your command. #end if #end block body cobbler-2.4.1/web/cobbler_web/templates/check.tmpl000066400000000000000000000003731227367477500221500ustar00rootroot00000000000000{% extends "master.tmpl" %} {% block content %}

Things You Might Want To Fix:


    {% for entry in results %}
{{ entry }}
{% endfor %}
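# The warnings listed above come straight from cobblerd; the check() view in
# views.py (later in this archive) fetches them over XML-RPC.  Minimal sketch,
# assuming "remote" and "token" are set up as in the earlier sketch:
results = remote.check(token)      # list of human-readable configuration warnings
for entry in results:
    print entry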
After making changes, run "cobbler sync" and restart cobblerd. {% endblock %} cobbler-2.4.1/web/cobbler_web/templates/empty.tmpl000066400000000000000000000002611227367477500222250ustar00rootroot00000000000000#extends cobbler.webui.master #block body #if $getVar('search','') == '' No items found. Add some using the links on the left. #else No matches found. #end if #end block body cobbler-2.4.1/web/cobbler_web/templates/enoaccess.tmpl000066400000000000000000000004531227367477500230350ustar00rootroot00000000000000#set $myowners = ", ".join($owners)
WARNING: You do not have permission to make changes to this object. To receive access, contact your Cobbler server administrator.
#if $owners != [] The access control list for this object is: $myowners. #end if
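# enoaccess.tmpl above is rendered when the logged-in user may not modify an
# object; the owners shown come from the object's "owners" field, and the views
# later in this archive gate edits through check_access_no_fail.  Illustrative
# sketch only (object name is a placeholder; remote/token as in the earlier sketch):
sid = remote.get_system_handle("example-system", token)
remote.modify_system(sid, "owners", "sam dave", token)   # space-delimited list of allowed users
remote.save_system(sid, token)
if not remote.check_access_no_fail(token, "modify_system", "example-system"):
    print "no access: contact your Cobbler server administrator"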
cobbler-2.4.1/web/cobbler_web/templates/error_page.tmpl000066400000000000000000000005201227367477500232120ustar00rootroot00000000000000{% extends "master.tmpl" %} {% block content %}

Error

An error has occurred. Should this error not be self-explanatory, you may find more information in /var/log/cobbler/cobbler.log on the server.

{{ message }}

Go Back {% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/eventlog.tmpl000066400000000000000000000004501227367477500227120ustar00rootroot00000000000000{% extends 'master.tmpl' %} {% block content %}

{{ eventid }} : {{ eventname }} :: {{ eventstate }}



Refresh log {% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/events.tmpl000066400000000000000000000013131227367477500223720ustar00rootroot00000000000000{% extends "master.tmpl" %} {% block content %}

Events


{% if results %} {% for line in results reversed %} {% endfor %}
Start Time Name State Log
{{ line.1|safe }} {{ line.2|safe }} {{ line.3|safe }} log
{% else %}

No events

{% endif %} {% endblock %} cobbler-2.4.1/web/cobbler_web/templates/filter.tmpl000066400000000000000000000207671227367477500223710ustar00rootroot00000000000000{% if pageinfo %}
{% if filters %}
    {% for key,value in filters.items %}
{{ key }} = {{ value }}
{% endfor %}
{% endif %} {% endif %} cobbler-2.4.1/web/cobbler_web/templates/generic_edit.tmpl000066400000000000000000000564761227367477500235330ustar00rootroot00000000000000{% extends "master.tmpl" %} {% load site %} {% block content %}
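# The key = value pairs listed above are the active list filters; modify_list()
# in views.py (later in this archive) keeps them JSON-encoded in the Django
# session, roughly as below.  Sketch only, assuming a Django view where
# "request" is available; the "system" list and the profile value are examples:
import simplejson
filters = simplejson.loads(request.session.get("system_filters", "{}"))
filters["profile"] = "rhel5-x86_64"          # an "addfilter" call with value "profile:rhel5-x86_64"
request.session["system_filters"] = simplejson.dumps(filters)
request.session["system_page"] = 1           # any filter change resets the view to page 1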

{% ifequal editmode 'edit' %}Editing{% else %}Adding{% endifequal %} a {{ what|capfirst }}{%ifequal editmode 'edit' %}: {{ name }}{% endifequal %}


{% csrf_token %}
{% if editable %} {% else %} This user does not have permissions to edit this object. {% endif %}
{% for key,value in sections.items|sort %}
    {% for item in value.fields %}
  1. {% ifequal item.html_element "widget" %} {% else %} {% ifequal item.html_element "text" %} {% smart_if editmode == "edit" and item.dname == "name" %} {{ item.value }} (editing, value is read-only) {% else %} {% endsmart_if %} {% endifequal %} {% ifequal item.html_element "textarea" %} {% endifequal %} {% ifequal item.html_element "radio" %} {% for choice in item.choices %} {% ifequal item.value choice %} {{ choice }}  {% else %} {{ choice }}  {% endifequal %} {% endfor %} {% endifequal %} {% ifequal item.html_element "checkbox" %} {% ifequal item.value 1 %} {% else %} {% endifequal %} {% endifequal %} {% ifequal item.html_element "select" %} {% endifequal %} {% ifequal item.html_element "multiselect" %}


    {% endifequal %} {% smart_if editmode == "edit" and item.dname != "name" %} {% if item.tooltip %} {{ item.tooltip }} {% endif %} {% endsmart_if %}
  2. {% endifequal %} {% endfor %}
{% endfor %}
{% ifequal what "system" %} {% endifequal %}
{% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/generic_list.tmpl000066400000000000000000000156711227367477500235510ustar00rootroot00000000000000{% extends "master.tmpl" %} {% load site %} {% block content %}
{% csrf_token %}

{{ what|title }}s


{% csrf_token %} {% for value in columns %} {% endfor %} {% for item in items %} {% for value in item %} {% endfor %} {% endfor %}
{{ value.0|title }} {% ifequal value.1 "asc" %} ↓ {% endifequal %} {% ifequal value.1 "desc" %} ↑ {% endifequal %} Actions
{% ifequal value.0 "name" %} {{ value.1 }} {% endifequal %} {% ifequal value.2 "editlink" %} {% ifnotequal value.1 "~" %} {{ value.1 }} {% endifnotequal %} {% endifequal %} {% ifequal value.2 "checkbox" %} {% ifequal value.1 1 %} {% else %} {% endifequal %} {% endifequal %} {% ifequal value.2 "text" %} {{ value.1 }} {% endifequal %} Edit Copy Rename Delete {% ifequal what "system" %} View kickstart {% endifequal %} {% ifequal what "profile" %} View kickstart {% endifequal %}
{% include "filter.tmpl" %} {% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/import.tmpl000066400000000000000000000045651227367477500224140ustar00rootroot00000000000000{% extends "master.tmpl" %} {% block content %}

DVD Importer


{% csrf_token %}
  • Example: rhel5u3, fedora11 (do not include the arch name)
  • Architecture of the DVD you are importing
  • Type of OS you are importing. If yours is not listed here (ex: SUSE), you will have to add a distro manually. Other distro imports may be supported in the future. Non-Red Hat based distros may require additional instructions; see the Wiki for details.
  • Full path to the mounted DVD contents only (ex: /mnt/cdrom). No CD ISOs! Content will be copied by Cobbler to /var/www/cobbler/ks_mirror automatically, and Cobbler will then create distro and profile objects for each ISO imported. If you need more control over where files are sourced or end up, create new distro and profile objects manually.
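# Submitting this form ends up in import_run() in views.py (later in this
# archive), which forwards exactly the four fields described above to cobblerd
# as a background task.  Values below are placeholders; remote/token are
# assumed to be an XML-RPC proxy and login token as in the earlier sketch:
options = {
    "name"  : "rhel5u3",        # distro/profile base name, without the arch
    "path"  : "/mnt/cdrom",     # mounted DVD contents
    "breed" : "redhat",
    "arch"  : "x86_64",
}
remote.background_import(options, token)   # queues the import; progress shows up under Events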
{% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/index.tmpl000066400000000000000000000003051227367477500221750ustar00rootroot00000000000000{% extends "master.tmpl" %} {% block content %} Welcome to Cobbler {{ version }}.

Currently logged in as {{ username }}.

{% endblock %} cobbler-2.4.1/web/cobbler_web/templates/item.tmpl000066400000000000000000000015651227367477500220350ustar00rootroot00000000000000#extends cobbler.webui.master #block body #set $evenodd = 1 #for $key,$value in $item_data.items(): #if $evenodd % 2 == 0 #set $tr_class = "roweven" #else #set $tr_class = "rowodd" #end if #set $evenodd += 1 #end for
$caption
Key Value
$key #if isinstance($value,dict):
    #for $skey,$sval in $value.items():
  • $skey = $sval
  • #end for
#else $value #end if
#end block body cobbler-2.4.1/web/cobbler_web/templates/ksfile_edit.tmpl000066400000000000000000000042601227367477500233540ustar00rootroot00000000000000{% extends 'master.tmpl' %} {% block content %} {% if not editable %}
NOTE: You do not have permission to make changes to this kickstart template and can only read it. It is possible that other Cobbler users have secured permissions on Cobbler profiles/systems that depend on this template -- changing this template would ultimately affect those profile/system records, which you do not have access to. Alternatively, you may not have access to edit *any* kickstart templates. Contact your Cobbler server administrator if you need to resolve this.

{% else %}

{% ifequal editmode 'edit' %}Editing{% else %}Adding{% endifequal %} a Kickstart Template


{% csrf_token %}
  1. {% ifnotequal editmode 'edit' %} Example: foo.ks (to be saved in /var/lib/cobbler/kickstarts/) {% else %} {% endifnotequal %}
  2. {% if deleteable %}
  3. Check both buttons and click save to delete this object
  4. {% else %} {% ifequal editmode "edit" %}
  5. NOTE: This kickstart template is currently in-use.
  6. {% endifequal %} {% endif %} {% if editable %}
  7. {% endif %}
{% endif %} {% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/ksfile_list.tmpl000066400000000000000000000020461227367477500234020ustar00rootroot00000000000000{% extends 'master.tmpl' %} {% block content %}
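# The editor above is backed by a single XML-RPC call, used by ksfile_edit()
# and ksfile_save() in views.py (later in this archive): the second argument
# selects read (True) or write (False), and writing -1 instead of the template
# body deletes the file.  The path is a placeholder; remote/token as in the
# earlier sketch:
kspath = "/var/lib/cobbler/kickstarts/sample.ks"
ksdata = remote.read_or_write_kickstart_template(kspath, True, "", token)    # read
remote.read_or_write_kickstart_template(kspath, False, ksdata, token)        # write back
remote.read_or_write_kickstart_template(kspath, False, -1, token)            # delete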

Cobbler Kickstart Templates


{% for ksfile,shortname,option in ksfiles %} {% if option %}
File Actions
{{ shortname }} {% ifequal option "editable" %}Edit {% endifequal %} {% ifequal option "viewable" %} {% block title %}Cobbler Web Interface{% endblock %}
{% csrf_token %} {% if next %}{% endif %}
{% if message %}

{{ message }}

{% endif %}
cobbler-2.4.1/web/cobbler_web/templates/master.tmpl000066400000000000000000000100361227367477500223630ustar00rootroot00000000000000 {% block title %}Cobbler Web Interface{% endblock %}
{% block content %}

Template Failure

{% endblock %}
 
cobbler-2.4.1/web/cobbler_web/templates/paginate.tmpl000066400000000000000000000027411227367477500226640ustar00rootroot00000000000000{% if pageinfo %}
  • {% ifnotequal pageinfo.prev_page "~" %} {% else %} {% endifnotequal %} {% ifnotequal pageinfo.next_page "~" %} {% else %} {% endifnotequal %}
  • {% endif %} cobbler-2.4.1/web/cobbler_web/templates/settings.tmpl000066400000000000000000000015001227367477500227240ustar00rootroot00000000000000{% extends "master.tmpl" %} {% block content %}

    Settings


    These settings live in /etc/cobbler/settings on the server.
After making changes, sync from the web interface or run "cobbler sync" from the command line, then restart cobblerd.

    {% if settings %} {% for setting,value in settings %} {% endfor %}
    Setting Value
    {{ setting }} {% if value %}{{ value }}{% else %}None{% endif %}
    {% else %}

    No settings found.

    {% endif %} {% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/snippet_edit.tmpl000066400000000000000000000035271227367477500235660ustar00rootroot00000000000000{% extends 'master.tmpl' %} {% block content %} {% if not editable %}
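# The Setting/Value table above is read-only; it is filled from the same call
# the views use elsewhere (get_fields() in views.py resolves its "SETTINGS:"
# defaults against this same dict).  Minimal sketch, with "remote" as in the
# earlier sketch; the example values are illustrative:
settings = remote.get_settings()        # plain dict, e.g. {"server": "127.0.0.1", "manage_dhcp": 0, ...}
for name in sorted(settings.keys()):
    print "%s: %s" % (name, settings[name])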
    NOTE: You do not have permission to make changes to this snippet and can only read it. Contact your Cobbler server administrator if you need to resolve this.

    {% endif %}

    {% ifequal editmode 'edit' %}Editing{% else %}Adding{% endifequal %} a Snippet


    {% csrf_token %}
    1. {% ifnotequal editmode 'edit' %} Example: foo.ks (to be saved in /var/lib/cobbler/snippets/) {% else %} {% endifnotequal %}
    2. {% if deleteable %}
    3. Check both buttons and click save to delete this object
    4. {% else %} {% ifequal editmode "edit" %}
5. NOTE: This snippet is currently in use.
    6. {% endifequal %} {% endif %} {% if editable %}
    7. {% endif %}
    {% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/snippet_list.tmpl000066400000000000000000000016471227367477500236150ustar00rootroot00000000000000{% extends 'master.tmpl' %} {% block content %}

    Cobbler Kickstart Snippets


    {% for snippet,shortname,option in snippets %} {% if option %} {% endif %} {% endfor %}
    File Actions
    {{ shortname }} {% ifequal option "editable" %}Edit {% endifequal %}
    {% endblock content %} cobbler-2.4.1/web/cobbler_web/templates/task_created.tmpl000066400000000000000000000005041227367477500235200ustar00rootroot00000000000000{% extends "master.tmpl" %} {% block content %}

    Task Enqueued


    A background task has been created for the action you have initiated. Pop-up notifications will alert you to status updates regarding this and other tasks. You can also browse the Task Log. {% endblock %} cobbler-2.4.1/web/cobbler_web/templatetags/000077500000000000000000000000001227367477500206665ustar00rootroot00000000000000cobbler-2.4.1/web/cobbler_web/templatetags/__init__.py000066400000000000000000000000001227367477500227650ustar00rootroot00000000000000cobbler-2.4.1/web/cobbler_web/templatetags/site.py000066400000000000000000000267751227367477500222250ustar00rootroot00000000000000from django import template from django.template import Node, NodeList from django.utils.datastructures import SortedDict register = template.Library() #========================== # -*- coding: utf-8 -*- ''' A smarter {% if %} tag for django templates. While retaining current Django functionality, it also handles equality, greater than and less than operators. Some common case examples:: {% if articles|length >= 5 %}...{% endif %} {% if "ifnotequal tag" != "beautiful" %}...{% endif %} ''' import unittest from django import template register = template.Library() #=============================================================================== # Calculation objects #=============================================================================== class BaseCalc(object): def __init__(self, var1, var2=None, negate=False): self.var1 = var1 self.var2 = var2 self.negate = negate def resolve(self, context): try: var1, var2 = self.resolve_vars(context) outcome = self.calculate(var1, var2) except: outcome = False if self.negate: return not outcome return outcome def resolve_vars(self, context): var2 = self.var2 and self.var2.resolve(context) return self.var1.resolve(context), var2 def calculate(self, var1, var2): raise NotImplementedError() class Or(BaseCalc): def calculate(self, var1, var2): return var1 or var2 class And(BaseCalc): def calculate(self, var1, var2): return var1 and var2 class Equals(BaseCalc): def calculate(self, var1, var2): return var1 == var2 class Greater(BaseCalc): def calculate(self, var1, var2): return var1 > var2 class GreaterOrEqual(BaseCalc): def calculate(self, var1, var2): return var1 >= var2 class In(BaseCalc): def calculate(self, var1, var2): return var1 in var2 #=============================================================================== # Tests #=============================================================================== class TestVar(object): """ A basic self-resolvable object similar to a Django template variable. Used to assist with tests. """ def __init__(self, value): self.value = value def resolve(self, context): return self.value class SmartIfTests(unittest.TestCase): def setUp(self): self.true = TestVar(True) self.false = TestVar(False) self.high = TestVar(9000) self.low = TestVar(1) def assertCalc(self, calc, context=None): """ Test a calculation is True, also checking the inverse "negate" case. """ context = context or {} self.assert_(calc.resolve(context)) calc.negate = not calc.negate self.assertFalse(calc.resolve(context)) def assertCalcFalse(self, calc, context=None): """ Test a calculation is False, also checking the inverse "negate" case. 
""" context = context or {} self.assertFalse(calc.resolve(context)) calc.negate = not calc.negate self.assert_(calc.resolve(context)) def test_or(self): self.assertCalc(Or(self.true)) self.assertCalcFalse(Or(self.false)) self.assertCalc(Or(self.true, self.true)) self.assertCalc(Or(self.true, self.false)) self.assertCalc(Or(self.false, self.true)) self.assertCalcFalse(Or(self.false, self.false)) def test_and(self): self.assertCalc(And(self.true, self.true)) self.assertCalcFalse(And(self.true, self.false)) self.assertCalcFalse(And(self.false, self.true)) self.assertCalcFalse(And(self.false, self.false)) def test_equals(self): self.assertCalc(Equals(self.low, self.low)) self.assertCalcFalse(Equals(self.low, self.high)) def test_greater(self): self.assertCalc(Greater(self.high, self.low)) self.assertCalcFalse(Greater(self.low, self.low)) self.assertCalcFalse(Greater(self.low, self.high)) def test_greater_or_equal(self): self.assertCalc(GreaterOrEqual(self.high, self.low)) self.assertCalc(GreaterOrEqual(self.low, self.low)) self.assertCalcFalse(GreaterOrEqual(self.low, self.high)) def test_in(self): list_ = TestVar([1,2,3]) invalid_list = TestVar(None) self.assertCalc(In(self.low, list_)) self.assertCalcFalse(In(self.low, invalid_list)) def test_parse_bits(self): var = IfParser([True]).parse() self.assert_(var.resolve({})) var = IfParser([False]).parse() self.assertFalse(var.resolve({})) var = IfParser([False, 'or', True]).parse() self.assert_(var.resolve({})) var = IfParser([False, 'and', True]).parse() self.assertFalse(var.resolve({})) var = IfParser(['not', False, 'and', 'not', False]).parse() self.assert_(var.resolve({})) var = IfParser([1, '=', 1]).parse() self.assert_(var.resolve({})) var = IfParser([1, '!=', 1]).parse() self.assertFalse(var.resolve({})) var = IfParser([3, '>', 2]).parse() self.assert_(var.resolve({})) var = IfParser([1, '<', 2]).parse() self.assert_(var.resolve({})) var = IfParser([2, 'not', 'in', [2, 3]]).parse() self.assertFalse(var.resolve({})) def test_boolean(self): var = IfParser([True, 'and', True, 'and', True]).parse() self.assert_(var.resolve({})) var = IfParser([False, 'or', False, 'or', True]).parse() self.assert_(var.resolve({})) var = IfParser([True, 'and', False, 'or', True]).parse() self.assert_(var.resolve({})) var = IfParser([False, 'or', True, 'and', True]).parse() self.assert_(var.resolve({})) var = IfParser([True, 'and', True, 'and', False]).parse() self.assertFalse(var.resolve({})) var = IfParser([False, 'or', False, 'or', False]).parse() self.assertFalse(var.resolve({})) var = IfParser([False, 'or', True, 'and', False]).parse() self.assertFalse(var.resolve({})) var = IfParser([False, 'and', True, 'or', False]).parse() self.assertFalse(var.resolve({})) OPERATORS = { '=': (Equals, True), '==': (Equals, True), '!=': (Equals, False), '>': (Greater, True), '>=': (GreaterOrEqual, True), '<=': (Greater, False), '<': (GreaterOrEqual, False), 'or': (Or, True), 'and': (And, True), 'in': (In, True), } class IfParser(object): error_class = ValueError def __init__(self, tokens): self.tokens = tokens def _get_tokens(self): return self._tokens def _set_tokens(self, tokens): self._tokens = tokens self.len = len(tokens) self.pos = 0 tokens = property(_get_tokens, _set_tokens) def parse(self): if self.at_end(): raise self.error_class('No variables provided.') var1 = self.get_var() while not self.at_end(): token = self.get_token() if token == 'not': if self.at_end(): raise self.error_class('No variable provided after "not".') token = self.get_token() negate = True 
else: negate = False if token not in OPERATORS: raise self.error_class('%s is not a valid operator.' % token) if self.at_end(): raise self.error_class('No variable provided after "%s"' % token) op, true = OPERATORS[token] if not true: negate = not negate var2 = self.get_var() var1 = op(var1, var2, negate=negate) return var1 def get_token(self): token = self.tokens[self.pos] self.pos += 1 return token def at_end(self): return self.pos >= self.len def create_var(self, value): return TestVar(value) def get_var(self): token = self.get_token() if token == 'not': if self.at_end(): raise self.error_class('No variable provided after "not".') token = self.get_token() return Or(self.create_var(token), negate=True) return self.create_var(token) #=============================================================================== # Actual templatetag code. #=============================================================================== class TemplateIfParser(IfParser): error_class = template.TemplateSyntaxError def __init__(self, parser, *args, **kwargs): self.template_parser = parser return super(TemplateIfParser, self).__init__(*args, **kwargs) def create_var(self, value): return self.template_parser.compile_filter(value) class SmartIfNode(template.Node): def __init__(self, var, nodelist_true, nodelist_false=None): self.nodelist_true, self.nodelist_false = nodelist_true, nodelist_false self.var = var def render(self, context): if self.var.resolve(context): return self.nodelist_true.render(context) if self.nodelist_false: return self.nodelist_false.render(context) return '' def __repr__(self): return "" def __iter__(self): for node in self.nodelist_true: yield node if self.nodelist_false: for node in self.nodelist_false: yield node def get_nodes_by_type(self, nodetype): nodes = [] if isinstance(self, nodetype): nodes.append(self) nodes.extend(self.nodelist_true.get_nodes_by_type(nodetype)) if self.nodelist_false: nodes.extend(self.nodelist_false.get_nodes_by_type(nodetype)) return nodes #@register.tag('if') def smart_if(parser, token): ''' A smarter {% if %} tag for django templates. While retaining current Django functionality, it also handles equality, greater than and less than operators. Some common case examples:: {% if articles|length >= 5 %}...{% endif %} {% if "ifnotequal tag" != "beautiful" %}...{% endif %} Arguments and operators _must_ have a space between them, so ``{% if 1>2 %}`` is not a valid smart if tag. All supported operators are: ``or``, ``and``, ``in``, ``=`` (or ``==``), ``!=``, ``>``, ``>=``, ``<`` and ``<=``. 
''' bits = token.split_contents()[1:] var = TemplateIfParser(parser, bits).parse() nodelist_true = parser.parse(('else', 'endsmart_if')) token = parser.next_token() if token.contents == 'else': nodelist_false = parser.parse(('endsmart_if',)) parser.delete_first_token() else: nodelist_false = None return SmartIfNode(var, nodelist_true, nodelist_false) #========================== ifinlist = register.tag(smart_if) #========================== # Based on code found here: # http://stackoverflow.com/questions/2024660/django-sort-dict-in-template # # Required since dict.items|dictsort doesn't seem to work # when iterating over the keys with a for loop @register.filter(name='sort') def listsort(value): if isinstance(value, dict): new_dict = SortedDict() key_list = value.keys() key_list.sort() for key in key_list: new_dict[key] = value[key] return new_dict elif isinstance(value, list): new_list = list(value) new_list.sort() return new_list else: return value listsort.is_safe = True cobbler-2.4.1/web/cobbler_web/urls.py000066400000000000000000000037761227367477500175500ustar00rootroot00000000000000from django.conf.urls.defaults import * from views import * # Uncomment the next two lines to enable the admin: # from cobbler_web.contrib import admin # admin.autodiscover() urlpatterns = patterns('', (r'^$', index), (r'^setting/list$', setting_list), (r'^setting/edit/(?P.+)$', setting_edit), (r'^setting/save$', setting_save), (r'^ksfile/list(/(?P\d+))?$', ksfile_list), (r'^ksfile/edit$', ksfile_edit, {'editmode':'new'}), (r'^ksfile/edit/file:(?P.+)$', ksfile_edit, {'editmode':'edit'}), (r'^ksfile/save$', ksfile_save), (r'^snippet/list(/(?P\d+))?$', snippet_list), (r'^snippet/edit$', snippet_edit, {'editmode':'new'}), (r'^snippet/edit/file:(?P.+)$', snippet_edit, {'editmode':'edit'}), (r'^snippet/save$', snippet_save), (r'^(?P\w+)/list(/(?P\d+))?', genlist), (r'^(?P\w+)/modifylist/(?P[!\w]+)/(?P.+)$', modify_list), (r'^(?P\w+)/edit/(?P.+)$', generic_edit, {'editmode': 'edit'}), (r'^(?P\w+)/edit$', generic_edit, {'editmode': 'new'}), (r'^(?P\w+)/rename/(?P.+)/(?P.+)$', generic_rename), (r'^(?P\w+)/copy/(?P.+)/(?P.+)$', generic_copy), (r'^(?P\w+)/delete/(?P.+)$', generic_delete), (r'^(?P\w+)/multi/(?P.+)/(?P.+)$', generic_domulti), (r'^utils/random_mac$', random_mac), (r'^utils/random_mac/virttype/(?P.+)$', random_mac), (r'^events$', events), (r'^eventlog/(?P.+)$', eventlog), (r'^task_created$', task_created), (r'^sync$', sync), (r'^reposync$',reposync), (r'^replicate$',replicate), (r'^hardlink', hardlink), (r'^(?P\w+)/save$', generic_save), (r'^import/prompt$', import_prompt), (r'^import/run$', import_run), (r'^buildiso$', buildiso), (r'^check$', check), (r'^login$', login), (r'^do_login$', do_login), (r'^logout$', do_logout), ) cobbler-2.4.1/web/cobbler_web/views.py000066400000000000000000001422721227367477500177130ustar00rootroot00000000000000from django.template.loader import get_template from django.template import Context from django.template import RequestContext from django.http import HttpResponse from django.http import HttpResponseRedirect from django.shortcuts import render_to_response from django.views.decorators.http import require_POST try: from django.views.decorators.csrf import csrf_protect except: # Old Django, fudge the @csrf_protect decorator to be a pass-through # that does nothing. 
Django decorator shell based on this page: # http://passingcuriosity.com/2009/writing-view-decorators-for-django/ def csrf_protect(f): def _dec(view_func): def _view(request,*args,**kwargs): return view_func(request,*args,**kwargs) _view.__name__ = view_func.__name__ _view.__dict__ = view_func.__dict__ _view.__doc__ = view_func.__doc__ return _view if f is None: return _dec else: return _dec(f) import xmlrpclib import time import simplejson import string import distutils import exceptions import time import cobbler.item_distro as item_distro import cobbler.item_profile as item_profile import cobbler.item_system as item_system import cobbler.item_repo as item_repo import cobbler.item_image as item_image import cobbler.item_mgmtclass as item_mgmtclass import cobbler.item_package as item_package import cobbler.item_file as item_file import cobbler.settings as item_settings import cobbler.field_info as field_info import cobbler.utils as utils url_cobbler_api = None remote = None username = None #================================================================================== def index(request): """ This is the main greeting page for cobbler web. """ if not test_user_authenticated(request): return login(request,next="/cobbler_web", expired=True) t = get_template('index.tmpl') html = t.render(RequestContext(request,{ 'version' : remote.extended_version(request.session['token'])['version'], 'username': username, })) return HttpResponse(html) #======================================================================== def task_created(request): """ Let's the user know what to expect for event updates. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/task_created", expired=True) t = get_template("task_created.tmpl") html = t.render(RequestContext(request,{ 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username })) return HttpResponse(html) #======================================================================== def error_page(request,message): """ This page is used to explain error messages to the user. """ if not test_user_authenticated(request): return login(request,expired=True) # FIXME: test and make sure we use this rather than throwing lots of tracebacks for # field errors t = get_template('error_page.tmpl') message = message.replace(":'","Remote exception: ") message = message.replace("'\">","") html = t.render(RequestContext(request,{ 'version' : remote.extended_version(request.session['token'])['version'], 'message' : message, 'username': username })) return HttpResponse(html) #================================================================================== def get_fields(what, is_subobject, seed_item=None): """ Helper function. Retrieves the field table from the cobbler objects and formats it in a way to make it useful for Django templating. The field structure indicates what fields to display and what the default values are, etc. 
""" if what == "distro": field_data = item_distro.FIELDS if what == "profile": field_data = item_profile.FIELDS if what == "system": field_data = item_system.FIELDS if what == "repo": field_data = item_repo.FIELDS if what == "image": field_data = item_image.FIELDS if what == "mgmtclass": field_data = item_mgmtclass.FIELDS if what == "package": field_data = item_package.FIELDS if what == "file": field_data = item_file.FIELDS if what == "setting": field_data = item_settings.FIELDS settings = remote.get_settings() fields = [] for row in field_data: # if we are subprofile and see the field "distro", make it say parent # with this really sneaky hack here if is_subobject and row[0] == "distro": row[0] = "parent" row[3] = "Parent object" row[5] = "Inherit settings from this profile" row[6] = [] elem = { "name" : row[0], "dname" : row[0].replace("*",""), "value" : "?", "caption" : row[3], "editable" : row[4], "tooltip" : row[5], "choices" : row[6], "css_class" : "generic", "html_element" : "generic", } if not elem["editable"]: continue if seed_item is not None: if what == "setting": elem["value"] = seed_item[row[0]] elif row[0].startswith("*"): # system interfaces are loaded by javascript, not this elem["value"] = "" elem["name"] = row[0].replace("*","") elif row[0].find("widget") == -1: elem["value"] = seed_item[row[0]] elif is_subobject: elem["value"] = row[2] else: elem["value"] = row[1] if elem["value"] is None: elem["value"] = "" # we'll process this for display but still need to present the original to some # template logic elem["value_raw"] = elem["value"] if isinstance(elem["value"],basestring) and elem["value"].startswith("SETTINGS:"): key = elem["value"].replace("SETTINGS:","",1) elem["value"] = settings[key] # flatten hashes of all types, they can only be edited as text # as we have no HTML hash widget (yet) if type(elem["value"]) == type({}): if elem["name"] == "mgmt_parameters": #Render dictionary as YAML for Management Parameters field tokens = [] for (x,y) in elem["value"].items(): if y is not None: tokens.append("%s: %s" % (x,y)) else: tokens.append("%s: " % x) elem["value"] = "{ %s }" % ", ".join(tokens) else: tokens = [] for (x,y) in elem["value"].items(): if isinstance(y,basestring) and y.strip() != "~": y = y.replace(" ","\\ ") tokens.append("%s=%s" % (x,y)) elif isinstance(y,list): for l in y: l = l.replace(" ","\\ ") tokens.append("%s=%s" % (x,l)) elif y != None: tokens.append("%s" % x) elem["value"] = " ".join(tokens) name = row[0] if name.find("_widget") != -1: elem["html_element"] = "widget" elif name in field_info.USES_SELECT: elem["html_element"] = "select" elif name in field_info.USES_MULTI_SELECT: elem["html_element"] = "multiselect" elif name in field_info.USES_RADIO: elem["html_element"] = "radio" elif name in field_info.USES_CHECKBOX: elem["html_element"] = "checkbox" elif name in field_info.USES_TEXTAREA: elem["html_element"] = "textarea" else: elem["html_element"] = "text" elem["block_section"] = field_info.BLOCK_MAPPINGS.get(name, "General") # flatten lists for those that aren't using select boxes if type(elem["value"]) == type([]): if elem["html_element"] != "select": elem["value"] = string.join(elem["value"], sep=" ") # FIXME: need to handle interfaces special, they are prefixed with "*" fields.append(elem) return fields #================================================================================== def __tweak_field(fields,field_name,attribute,value): """ Helper function to insert extra data into the field list. """ # FIXME: eliminate this function. 
for x in fields: if x["name"] == field_name: x[attribute] = value #================================================================================== def __format_columns(column_names,sort_field): """ Format items retrieved from XMLRPC for rendering by the generic_edit template """ dataset = [] # Default is sorting on name if sort_field is not None: sort_name = sort_field else: sort_name = "name" if sort_name.startswith("!"): sort_name = sort_name[1:] sort_order = "desc" else: sort_order = "asc" for fieldname in column_names: fieldorder = "none" if fieldname == sort_name: fieldorder = sort_order dataset.append([fieldname,fieldorder]) return dataset #================================================================================== def __format_items(items, column_names): """ Format items retrieved from XMLRPC for rendering by the generic_edit template """ dataset = [] for itemhash in items: row = [] for fieldname in column_names: if fieldname == "name": html_element = "name" elif fieldname in [ "system", "repo", "distro", "profile", "image", "mgmtclass", "package", "file" ]: html_element = "editlink" elif fieldname in field_info.USES_CHECKBOX: html_element = "checkbox" else: html_element = "text" row.append([fieldname,itemhash[fieldname],html_element]) dataset.append(row) return dataset #================================================================================== def genlist(request, what, page=None): """ Lists all object types, complete with links to actions on those objects. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/list" % what, expired=True) # get details from the session if page == None: page = int(request.session.get("%s_page" % what, 1)) limit = int(request.session.get("%s_limit" % what, 50)) sort_field = request.session.get("%s_sort_field" % what, "name") filters = simplejson.loads(request.session.get("%s_filters" % what, "{}")) pageditems = remote.find_items_paged(what,utils.strip_none(filters),sort_field,page,limit) # what columns to show for each page? 
# we also setup the batch actions here since they're dependent # on what we're looking at # everythng gets batch delete batchactions = [ ["Delete","delete","delete"], ] if what == "distro": columns = [ "name" ] batchactions += [ ["Build ISO","buildiso","enable"], ] if what == "profile": columns = [ "name", "distro" ] batchactions += [ ["Build ISO","buildiso","enable"], ] if what == "system": # FIXME: also list network, once working columns = [ "name", "profile", "status", "netboot_enabled" ] batchactions += [ ["Power on","power","on"], ["Power off","power","off"], ["Reboot","power","reboot"], ["Change profile","profile",""], ["Netboot enable","netboot","enable"], ["Netboot disable","netboot","disable"], ["Build ISO","buildiso","enable"], ] if what == "repo": columns = [ "name", "mirror" ] batchactions += [ ["Reposync","reposync","go"], ] if what == "image": columns = [ "name", "file" ] if what == "network": columns = [ "name" ] if what == "mgmtclass": columns = [ "name" ] if what == "package": columns = [ "name", "installer" ] if what == "file": columns = [ "name" ] # render the list t = get_template('generic_list.tmpl') html = t.render(RequestContext(request,{ 'what' : what, 'columns' : __format_columns(columns,sort_field), 'items' : __format_items(pageditems["items"],columns), 'pageinfo' : pageditems["pageinfo"], 'filters' : filters, 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username, 'limit' : limit, 'batchactions' : batchactions, })) return HttpResponse(html) @require_POST @csrf_protect def modify_list(request, what, pref, value=None): """ This function is used in the generic list view to modify the page/column sort/number of items shown per page, and also modify the filters. This function modifies the session object to store these preferences persistently. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/modifylist/%s/%s" % (what,pref,str(value)), expired=True) # what preference are we tweaking? if pref == "sort": # FIXME: this isn't exposed in the UI. # sorting list on columns old_sort = request.session.get("%s_sort_field" % what,"name") if old_sort.startswith("!"): old_sort = old_sort[1:] old_revsort = True else: old_revsort = False # User clicked on the column already sorted on, # so reverse the sorting list if old_sort == value and not old_revsort: value = "!" 
+ value request.session["%s_sort_field" % what] = value request.session["%s_page" % what] = 1 elif pref == "limit": # number of items to show per page request.session["%s_limit" % what] = int(value) request.session["%s_page" % what] = 1 elif pref == "page": # what page are we currently on request.session["%s_page" % what] = int(value) elif pref in ("addfilter","removefilter"): # filters limit what we show in the lists # they are stored in json format for marshalling filters = simplejson.loads(request.session.get("%s_filters" % what, "{}")) if pref == "addfilter": (field_name, field_value) = value.split(":", 1) # add this filter filters[field_name] = field_value else: # remove this filter, if it exists if filters.has_key(value): del filters[value] # save session variable request.session["%s_filters" % what] = simplejson.dumps(filters) # since we changed what is viewed, reset the page request.session["%s_page" % what] = 1 else: return error_page(request, "Invalid preference change request") # redirect to the list page return HttpResponseRedirect("/cobbler_web/%s/list" % what) # ====================================================================== @require_POST @csrf_protect def generic_rename(request, what, obj_name=None, obj_newname=None): """ Renames an object. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/rename/%s/%s" % (what,obj_name,obj_newname), expired=True) if obj_name == None: return error_page(request,"You must specify a %s to rename" % what) if not remote.has_item(what,obj_name): return error_page(request,"Unknown %s specified" % what) elif not remote.check_access_no_fail(request.session['token'], "modify_%s" % what, obj_name): return error_page(request,"You do not have permission to rename this %s" % what) else: obj_id = remote.get_item_handle(what, obj_name, request.session['token']) remote.rename_item(what, obj_id, obj_newname, request.session['token']) return HttpResponseRedirect("/cobbler_web/%s/list" % what) # ====================================================================== @require_POST @csrf_protect def generic_copy(request, what, obj_name=None, obj_newname=None): """ Copies an object. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/copy/%s/%s" % (what,obj_name,obj_newname), expired=True) # FIXME: shares all but one line with rename, merge it. if obj_name == None: return error_page(request,"You must specify a %s to rename" % what) if not remote.has_item(what,obj_name): return error_page(request,"Unknown %s specified" % what) elif not remote.check_access_no_fail(request.session['token'], "modify_%s" % what, obj_name): return error_page(request,"You do not have permission to copy this %s" % what) else: obj_id = remote.get_item_handle(what, obj_name, request.session['token']) remote.copy_item(what, obj_id, obj_newname, request.session['token']) return HttpResponseRedirect("/cobbler_web/%s/list" % what) # ====================================================================== @require_POST @csrf_protect def generic_delete(request, what, obj_name=None): """ Deletes an object. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/delete/%s" % (what,obj_name), expired=True) # FIXME: consolidate code with above functions. 
if obj_name == None: return error_page(request,"You must specify a %s to delete" % what) if not remote.has_item(what,obj_name): return error_page(request,"Unknown %s specified" % what) elif not remote.check_access_no_fail(request.session['token'], "remove_%s" % what, obj_name): return error_page(request,"You do not have permission to delete this %s" % what) else: remote.remove_item(what, obj_name, request.session['token']) return HttpResponseRedirect("/cobbler_web/%s/list" % what) # ====================================================================== @require_POST @csrf_protect def generic_domulti(request, what, multi_mode=None, multi_arg=None): """ Process operations like profile reassignment, netboot toggling, and deletion which occur on all items that are checked on the list page. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/multi/%s/%s" % (what,multi_mode,multi_arg), expired=True) # FIXME: cleanup # FIXME: COMMENTS!!!11111??? names = request.POST.get('names', '').strip().split() if names == "": return error_page(request, "Need to select some systems first") if multi_mode == "delete": for obj_name in names: remote.remove_item(what,obj_name, request.session['token']) elif what == "system" and multi_mode == "netboot": netboot_enabled = multi_arg # values: enable or disable if netboot_enabled is None: return error_page(request,"Cannot modify systems without specifying netboot_enabled") if netboot_enabled == "enable": netboot_enabled = True elif netboot_enabled == "disable": netboot_enabled = False else: return error_page(request,"Invalid netboot option, expect enable or disable") for obj_name in names: obj_id = remote.get_system_handle(obj_name, request.session['token']) remote.modify_system(obj_id, "netboot_enabled", netboot_enabled, request.session['token']) remote.save_system(obj_id, request.session['token'], "edit") elif what == "system" and multi_mode == "profile": profile = multi_arg if profile is None: return error_page(request,"Cannot modify systems without specifying profile") for obj_name in names: obj_id = remote.get_system_handle(obj_name, request.session['token']) remote.modify_system(obj_id, "profile", profile, request.session['token']) remote.save_system(obj_id, request.session['token'], "edit") elif what == "system" and multi_mode == "power": # FIXME: power should not loop, but send the list of all systems # in one set. 
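# (descriptive note) every batch branch below boils down to one background_*
# XML-RPC call whose options dict is queued server-side and tracked on the
# Events page, e.g.:
#   {"systems": names, "power": "reboot"}   -> remote.background_power_system(...)
#   {"profiles": names, "systems": []}      -> remote.background_buildiso(...)
#   {"repos": names, "tries": 3}            -> remote.background_reposync(...)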
power = multi_arg if power is None: return error_page(request,"Cannot modify systems without specifying power option") options = { "systems" : names, "power" : power } remote.background_power_system(options, request.session['token']) elif what == "system" and multi_mode == "buildiso": options = { "systems" : names, "profiles" : [] } remote.background_buildiso(options, request.session['token']) elif what == "profile" and multi_mode == "buildiso": options = { "profiles" : names, "systems" : [] } remote.background_buildiso(options, request.session['token']) elif what == "distro" and multi_mode == "buildiso": if len(names) > 1: return error_page(request,"You can only select one distro at a time to build an ISO for") options = { "standalone" : True, "distro": str(names[0]) } remote.background_buildiso(options, request.session['token']) elif what == "repo" and multi_mode == "reposync": options = { "repos" : names, "tries" : 3 } remote.background_reposync(options,request.session['token']) else: return error_page(request,"Unknown batch operation on %ss: %s" % (what,str(multi_mode))) # FIXME: "operation complete" would make a lot more sense here than a redirect return HttpResponseRedirect("/cobbler_web/%s/list"%what) # ====================================================================== def import_prompt(request): if not test_user_authenticated(request): return login(request, next="/cobbler_web/import/prompt", expired=True) t = get_template('import.tmpl') html = t.render(RequestContext(request,{ 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username, })) return HttpResponse(html) # ====================================================================== def check(request): """ Shows a page with the results of 'cobbler check' """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/check", expired=True) results = remote.check(request.session['token']) t = get_template('check.tmpl') html = t.render(RequestContext(request,{ 'version': remote.extended_version(request.session['token'])['version'], 'username' : username, 'results' : results })) return HttpResponse(html) # ====================================================================== @require_POST @csrf_protect def buildiso(request): if not test_user_authenticated(request): return login(request, next="/cobbler_web/buildiso", expired=True) remote.background_buildiso({},request.session['token']) return HttpResponseRedirect('/cobbler_web/task_created') # ====================================================================== @require_POST @csrf_protect def import_run(request): if not test_user_authenticated(request): return login(request, next="/cobbler_web/import/prompt", expired=True) options = { "name" : request.POST.get("name",""), "path" : request.POST.get("path",""), "breed" : request.POST.get("breed",""), "arch" : request.POST.get("arch","") } remote.background_import(options,request.session['token']) return HttpResponseRedirect('/cobbler_web/task_created') # ====================================================================== def ksfile_list(request, page=None): """ List all kickstart templates and link to their edit pages. 
""" if not test_user_authenticated(request): return login(request, next="/cobbler_web/ksfile/list", expired=True) ksfiles = remote.get_kickstart_templates(request.session['token']) ksfile_list = [] for ksfile in ksfiles: if ksfile.startswith("/var/lib/cobbler/kickstarts") or ksfile.startswith("/etc/cobbler"): ksfile_list.append((ksfile,ksfile.replace('/var/lib/cobbler/kickstarts/',''),'editable')) elif ksfile.startswith("http://") or ksfile.startswith("ftp://"): ksfile_list.append((ksfile,ksfile,'','viewable')) else: ksfile_list.append((ksfile,ksfile,None)) t = get_template('ksfile_list.tmpl') html = t.render(RequestContext(request,{ 'what':'ksfile', 'ksfiles': ksfile_list, 'version': remote.extended_version(request.session['token'])['version'], 'username': username, 'item_count': len(ksfile_list[0]), })) return HttpResponse(html) # ====================================================================== @csrf_protect def ksfile_edit(request, ksfile_name=None, editmode='edit'): """ This is the page where a kickstart file is edited. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/ksfile/edit/file:%s" % ksfile_name, expired=True) if editmode == 'edit': editable = False else: editable = True deleteable = False ksdata = "" if not ksfile_name is None: editable = remote.check_access_no_fail(request.session['token'], "modify_kickstart", ksfile_name) deleteable = not remote.is_kickstart_in_use(ksfile_name, request.session['token']) ksdata = remote.read_or_write_kickstart_template(ksfile_name, True, "", request.session['token']) t = get_template('ksfile_edit.tmpl') html = t.render(RequestContext(request,{ 'ksfile_name' : ksfile_name, 'deleteable' : deleteable, 'ksdata' : ksdata, 'editable' : editable, 'editmode' : editmode, 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username })) return HttpResponse(html) # ====================================================================== @require_POST @csrf_protect def ksfile_save(request): """ This page processes and saves edits to a kickstart file. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/ksfile/list", expired=True) # FIXME: error checking editmode = request.POST.get('editmode', 'edit') ksfile_name = request.POST.get('ksfile_name', None) ksdata = request.POST.get('ksdata', "").replace('\r\n','\n') if ksfile_name == None: return HttpResponse("NO KSFILE NAME SPECIFIED") if editmode != 'edit': ksfile_name = "/var/lib/cobbler/kickstarts/" + ksfile_name delete1 = request.POST.get('delete1', None) delete2 = request.POST.get('delete2', None) if delete1 and delete2: remote.read_or_write_kickstart_template(ksfile_name, False, -1, request.session['token']) return HttpResponseRedirect('/cobbler_web/ksfile/list') else: remote.read_or_write_kickstart_template(ksfile_name,False,ksdata,request.session['token']) return HttpResponseRedirect('/cobbler_web/ksfile/edit/file:%s' % ksfile_name) # ====================================================================== def snippet_list(request, page=None): """ This page lists all available snippets and has links to edit them. 
""" if not test_user_authenticated(request): return login(request, next="/cobbler_web/snippet/list", expired=True) snippets = remote.get_snippets(request.session['token']) snippet_list = [] for snippet in snippets: if snippet.startswith("/var/lib/cobbler/snippets"): snippet_list.append((snippet,snippet.replace("/var/lib/cobbler/snippets/",""),'editable')) else: snippet_list.append((snippet,snippet,None)) t = get_template('snippet_list.tmpl') html = t.render(RequestContext(request,{ 'what' : 'snippet', 'snippets' : snippet_list, 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username })) return HttpResponse(html) # ====================================================================== @csrf_protect def snippet_edit(request, snippet_name=None, editmode='edit'): """ This page edits a specific snippet. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/edit/file:%s" % snippet_name, expired=True) if editmode == 'edit': editable = False else: editable = True deleteable = False snippetdata = "" if not snippet_name is None: editable = remote.check_access_no_fail(request.session['token'], "modify_snippet", snippet_name) deleteable = True snippetdata = remote.read_or_write_snippet(snippet_name, True, "", request.session['token']) t = get_template('snippet_edit.tmpl') html = t.render(RequestContext(request,{ 'snippet_name' : snippet_name, 'deleteable' : deleteable, 'snippetdata' : snippetdata, 'editable' : editable, 'editmode' : editmode, 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username })) return HttpResponse(html) # ====================================================================== @require_POST @csrf_protect def snippet_save(request): """ This snippet saves a snippet once edited. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/snippet/list", expired=True) # FIXME: error checking editmode = request.POST.get('editmode', 'edit') snippet_name = request.POST.get('snippet_name', None) snippetdata = request.POST.get('snippetdata', "").replace('\r\n','\n') if snippet_name == None: return HttpResponse("NO SNIPPET NAME SPECIFIED") if editmode != 'edit': if snippet_name.find("/var/lib/cobbler/snippets/") != 0: snippet_name = "/var/lib/cobbler/snippets/" + snippet_name delete1 = request.POST.get('delete1', None) delete2 = request.POST.get('delete2', None) if delete1 and delete2: remote.read_or_write_snippet(snippet_name, False, -1, request.session['token']) return HttpResponseRedirect('/cobbler_web/snippet/list') else: remote.read_or_write_snippet(snippet_name,False,snippetdata,request.session['token']) return HttpResponseRedirect('/cobbler_web/snippet/edit/file:%s' % snippet_name) # ====================================================================== def setting_list(request): """ This page presents a list of all the settings to the user. They are not editable. 
""" if not test_user_authenticated(request): return login(request, next="/cobbler_web/setting/list", expired=True) settings = remote.get_settings() skeys = settings.keys() skeys.sort() results = [] for k in skeys: results.append([k,settings[k]]) t = get_template('settings.tmpl') html = t.render(RequestContext(request,{ 'settings' : results, 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username, })) return HttpResponse(html) @csrf_protect def setting_edit(request, setting_name=None): if not setting_name: return HttpResponseRedirect('/cobbler_web/setting/list') if not test_user_authenticated(request): return login(request, next="/cobbler_web/setting/edit/%s" % setting_name, expired=True) settings = remote.get_settings() if not settings.has_key(setting_name): return error_page(request,"Unknown setting: %s" % setting_name) cur_setting = { 'name' : setting_name, 'value' : settings[setting_name], } fields = get_fields('setting', False, seed_item=cur_setting) sections = {} for field in fields: bmo = field_info.BLOCK_MAPPINGS_ORDER[field['block_section']] fkey = "%d_%s" % (bmo,field['block_section']) if not sections.has_key(fkey): sections[fkey] = {} sections[fkey]['name'] = field['block_section'] sections[fkey]['fields'] = [] sections[fkey]['fields'].append(field) t = get_template('generic_edit.tmpl') html = t.render(RequestContext(request,{ 'what' : 'setting', #'fields' : fields, 'sections' : sections, 'subobject' : False, 'editmode' : 'edit', 'editable' : True, 'version' : remote.version(request.session['token']), 'username' : username, 'name' : setting_name, })) return HttpResponse(html) @csrf_protect def setting_save(request): if not test_user_authenticated(request): return login(request, next="/cobbler_web/setting/list", expired=True) # load request fields and see if they are valid setting_name = request.POST.get('name', "") setting_value = request.POST.get('value', None) if setting_name == "": return error_page(request,"The setting name was not specified") settings = remote.get_settings() if not settings.has_key(setting_name): return error_page(request,"Unknown setting: %s" % setting_name) if remote.modify_setting(setting_name, setting_value, request.session['token']): return error_page(request,"There was an error saving the setting") return HttpResponseRedirect("/cobbler_web/setting/list") # ====================================================================== def events(request): """ This page presents a list of all the events and links to the event log viewer. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/events", expired=True) events = remote.get_events() events2 = [] for id in events.keys(): (ttime, name, state, read_by) = events[id] events2.append([id,time.asctime(time.localtime(ttime)),name,state]) def sorter(a,b): return cmp(a[0],b[0]) events2.sort(sorter) t = get_template('events.tmpl') html = t.render(RequestContext(request,{ 'results' : events2, 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username })) return HttpResponse(html) # ====================================================================== def eventlog(request, event=0): """ Shows the log for a given event. 
""" if not test_user_authenticated(request): return login(request, next="/cobbler_web/eventlog/%s" % str(event), expired=True) event_info = remote.get_events() if not event_info.has_key(event): return HttpResponse("event not found") data = event_info[event] eventname = data[0] eventtime = data[1] eventstate = data[2] eventlog = remote.get_event_log(event) t = get_template('eventlog.tmpl') vars = { 'eventlog' : eventlog, 'eventname' : eventname, 'eventstate' : eventstate, 'eventid' : event, 'eventtime' : eventtime, 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username } html = t.render(RequestContext(request,vars)) return HttpResponse(html) # ====================================================================== def random_mac(request, virttype="xenpv"): """ Used in an ajax call to fill in a field with a mac address. """ # FIXME: not exposed in UI currently if not test_user_authenticated(request): return login(request, expired=True) random_mac = remote.get_random_mac(virttype, request.session['token']) return HttpResponse(random_mac) # ====================================================================== @require_POST @csrf_protect def sync(request): """ Runs 'cobbler sync' from the API when the user presses the sync button. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/sync", expired=True) remote.background_sync({"verbose":"True"},request.session['token']) return HttpResponseRedirect("/cobbler_web/task_created") # ====================================================================== @require_POST @csrf_protect def reposync(request): """ Syncs all repos that are configured to be synced. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/reposync", expired=True) remote.background_reposync({ "names":"", "tries" : 3},request.session['token']) return HttpResponseRedirect("/cobbler_web/task_created") # ====================================================================== @require_POST @csrf_protect def hardlink(request): """ Hardlinks files between repos and install trees to save space. """ if not test_user_authenticated(request): return login(request, next="/cobbler_web/hardlink", expired=True) remote.background_hardlink({},request.session['token']) return HttpResponseRedirect("/cobbler_web/task_created") # ====================================================================== @require_POST @csrf_protect def replicate(request): """ Replicate configuration from the central cobbler server, configured in /etc/cobbler/settings (note: this is uni-directional!) FIXME: this is disabled because we really need a web page to provide options for this command. """ #settings = remote.get_settings() #options = settings # just load settings from file until we decide to ask user (later?) #remote.background_replicate(options, request.session['token']) if not test_user_authenticated(request): return login(request, next="/cobbler_web/replicate", expired=True) return HttpResponseRedirect("/cobbler_web/task_created") # ====================================================================== def __names_from_dicts(loh,optional=True): """ Tiny helper function. Get the names out of an array of hashes that the remote interface returns. 
""" results = [] if optional: results.append("<>") for x in loh: results.append(x["name"]) results.sort() return results # ====================================================================== @csrf_protect def generic_edit(request, what=None, obj_name=None, editmode="new"): """ Presents an editor page for any type of object. While this is generally standardized, systems are a little bit special. """ target = "" if obj_name != None: target = "/%s" % obj_name if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/edit%s" % (what,target), expired=True) obj = None child = False if what == "subprofile": what = "profile" child = True if not obj_name is None: editable = remote.check_access_no_fail(request.session['token'], "modify_%s" % what, obj_name) obj = remote.get_item(what, obj_name, False) else: editable = remote.check_access_no_fail(request.session['token'], "new_%s" % what, None) obj = None interfaces = {} if what == "system": if obj: interfaces = obj.get("interfaces",{}) else: interfaces = {} fields = get_fields(what, child, obj) # populate some select boxes # FIXME: we really want to just populate with the names, right? if what == "profile": if (obj and obj["parent"] not in (None,"")) or child: __tweak_field(fields, "parent", "choices", __names_from_dicts(remote.get_profiles())) else: __tweak_field(fields, "distro", "choices", __names_from_dicts(remote.get_distros())) __tweak_field(fields, "repos", "choices", __names_from_dicts(remote.get_repos())) elif what == "system": __tweak_field(fields, "profile", "choices", __names_from_dicts(remote.get_profiles())) __tweak_field(fields, "image", "choices", __names_from_dicts(remote.get_images(),optional=True)) elif what == "mgmtclass": __tweak_field(fields, "packages", "choices", __names_from_dicts(remote.get_packages())) __tweak_field(fields, "files", "choices", __names_from_dicts(remote.get_files())) if what in ("distro","profile","system"): __tweak_field(fields, "mgmt_classes", "choices", __names_from_dicts(remote.get_mgmtclasses(),optional=False)) __tweak_field(fields, "os_version", "choices", remote.get_valid_os_versions()) __tweak_field(fields, "breed", "choices", remote.get_valid_breeds()) # if editing save the fields in the session for comparison later if editmode == "edit": request.session['%s_%s' % (what,obj_name)] = fields sections = {} for field in fields: bmo = field_info.BLOCK_MAPPINGS_ORDER[field['block_section']] fkey = "%d_%s" % (bmo,field['block_section']) if not sections.has_key(fkey): sections[fkey] = {} sections[fkey]['name'] = field['block_section'] sections[fkey]['fields'] = [] sections[fkey]['fields'].append(field) t = get_template('generic_edit.tmpl') inames = interfaces.keys() inames.sort() html = t.render(RequestContext(request,{ 'what' : what, #'fields' : fields, 'sections' : sections, 'subobject' : child, 'editmode' : editmode, 'editable' : editable, 'interfaces' : interfaces, 'interface_names' : inames, 'interface_length': len(inames), 'version' : remote.extended_version(request.session['token'])['version'], 'username' : username, 'name' : obj_name })) return HttpResponse(html) # ====================================================================== @require_POST @csrf_protect def generic_save(request,what): """ Saves an object back using the cobbler API after clearing any 'generic_edit' page. 
""" if not test_user_authenticated(request): return login(request, next="/cobbler_web/%s/list" % what, expired=True) # load request fields and see if they are valid editmode = request.POST.get('editmode', 'edit') obj_name = request.POST.get('name', "") subobject = request.POST.get('subobject', "False") if subobject == "False": subobject = False else: subobject = True if obj_name == "": return error_page(request,"Required field name is missing") prev_fields = [] if request.session.has_key("%s_%s" % (what,obj_name)) and editmode == "edit": prev_fields = request.session["%s_%s" % (what,obj_name)] # grab the remote object handle # for edits, fail in the object cannot be found to be edited # for new objects, fail if the object already exists if editmode == "edit": if not remote.has_item(what, obj_name): return error_page(request,"Failure trying to access item %s, it may have been deleted." % (obj_name)) obj_id = remote.get_item_handle( what, obj_name, request.session['token'] ) else: if remote.has_item(what, obj_name): return error_page(request,"Could not create a new item %s, it already exists." % (obj_name)) obj_id = remote.new_item( what, request.session['token'] ) # walk through our fields list saving things we know how to save fields = get_fields(what, subobject) for field in fields: if field['name'] == 'name' and editmode == 'edit': # do not attempt renames here continue elif field['name'].startswith("*"): # interface fields will be handled below continue else: # check and see if the value exists in the fields stored in the session prev_value = None for prev_field in prev_fields: if prev_field['name'] == field['name']: prev_value = prev_field['value'] break value = request.POST.get(field['name'],None) # Checkboxes return the value of the field if checked, otherwise None # convert to True/False if field["html_element"] == "checkbox": if value==field['name']: value=True else: value=False # Multiselect fields are handled differently if field["html_element"] == "multiselect": values=request.POST.getlist(field['name']) value=[] if '<>' in values: value='<>' else: for single_value in values: if single_value != "<>": value.insert(0,single_value) if value != None: if value == "<>": value = "" if value is not None and (not subobject or field['name'] != 'distro') and value != prev_value: try: remote.modify_item(what,obj_id,field['name'],value,request.session['token']) except Exception, e: return error_page(request, str(e)) # special handling for system interface fields # which are the only objects in cobbler that will ever work this way if what == "system": interface_field_list = [] for field in fields: if field['name'].startswith("*"): field = field['name'].replace("*","") interface_field_list.append(field) interfaces = request.POST.get('interface_list', "").split(",") for interface in interfaces: if interface == "": continue ifdata = {} for item in interface_field_list: ifdata["%s-%s" % (item,interface)] = request.POST.get("%s-%s" % (item,interface), "") ifdata=utils.strip_none(ifdata) # FIXME: I think this button is missing. 
present = request.POST.get("present-%s" % interface, "") original = request.POST.get("original-%s" % interface, "") try: if present == "0" and original == "1": remote.modify_system(obj_id, 'delete_interface', interface, request.session['token']) elif present == "1": remote.modify_system(obj_id, 'modify_interface', ifdata, request.session['token']) except Exception, e: return error_page(request, str(e)) try: remote.save_item(what, obj_id, request.session['token'], editmode) except Exception, e: return error_page(request, str(e)) return HttpResponseRedirect('/cobbler_web/%s/list' % what) # ====================================================================== # Login/Logout views def test_user_authenticated(request): global remote global username global url_cobbler_api if url_cobbler_api is None: url_cobbler_api = utils.local_get_cobbler_api_url() remote = xmlrpclib.Server(url_cobbler_api, allow_none=True) # if we have a token, get the associated username from # the remote server via XMLRPC. We then compare that to # the value stored in the session. If everything matches up, # the user is considered successfully authenticated if request.session.has_key('token') and request.session['token'] != '': try: if remote.token_check(request.session['token']): token_user = remote.get_user_from_token(request.session['token']) if request.session.has_key('username') and request.session['username'] == token_user: username = request.session['username'] return True except: # just let it fall through to the 'return False' below pass return False use_passthru = -1 @csrf_protect def login(request, next=None, message=None, expired=False): global use_passthru if use_passthru < 0: token = remote.login("", utils.get_shared_secret()) auth_module = remote.get_authn_module_name(token) use_passthru = auth_module == 'authn_passthru' if use_passthru: return accept_remote_user(request, next) if expired and not message: message = "Sorry, either you need to login or your session expired." 
return render_to_response('login.tmpl', RequestContext(request,{'next':next,'message':message})) def accept_remote_user(request, nextsite): global username username = request.META['REMOTE_USER'] token = remote.login(username, utils.get_shared_secret()) request.session['username'] = username request.session['token'] = token if nextsite: return HttpResponseRedirect(nextsite) else: return HttpResponseRedirect("/cobbler_web") @require_POST @csrf_protect def do_login(request): global remote global username global url_cobbler_api username = request.POST.get('username', '').strip() password = request.POST.get('password', '') nextsite = request.POST.get('next',None) if url_cobbler_api is None: url_cobbler_api = utils.local_get_cobbler_api_url() remote = xmlrpclib.Server(url_cobbler_api, allow_none=True) try: token = remote.login(username, password) except: token = None if token: request.session['username'] = username request.session['token'] = token if nextsite: return HttpResponseRedirect(nextsite) else: return HttpResponseRedirect("/cobbler_web") else: return login(request,nextsite,message="Login failed, please try again") @require_POST @csrf_protect def do_logout(request): request.session['username'] = "" request.session['token'] = "" return HttpResponseRedirect("/cobbler_web") cobbler-2.4.1/web/content/000077500000000000000000000000001227367477500154015ustar00rootroot00000000000000cobbler-2.4.1/web/content/cobbler.js000066400000000000000000000032351227367477500173520ustar00rootroot00000000000000var js_growl = new jsGrowl('js_growl'); var run_once = 0 var now = new Date() var page_load = -1 /* show tasks not yet recorded, update task found time in hidden field */ function get_latest_task_info() { var username = document.getElementById("username").value /* FIXME: used the logged in user here instead */ /* FIXME: don't show events that are older than 40 seconds */ $.getJSON("/cblr/svc/op/events/user/" + username, function(data){$.each(data, function(i,record) { var id = record[0]; var ts = record[1]; var name = record[2]; var state = record[3]; var buf = "" var logmsg = " (log)"; if (state == "complete") { buf = "Task " + name + " is complete: " + logmsg } else if (state == "running") { buf = "Task " + name + " is running: " + logmsg } else if (state == "failed") { buf = "Task " + name + " has failed: " + logmsg } else { buf = name } window.status = buf; js_growl.addMessage({msg:buf}); }); }); } function go_go_gadget() { setInterval(get_latest_task_info, 2000) try { page_onload() } catch (error) { } } function page_onload() { var submitting = false; $(window).bind("submit", function () { submitting = true; }); $(window).bind("beforeunload", function () { if (!submitting && $("#ksdata")[0].defaultValue !== $("#ksdata")[0].value) { submitting = false; return "You have unsaved changes."; } }); } cobbler-2.4.1/web/content/favicon.png000066400000000000000000000035771227367477500175470ustar00rootroot00000000000000 [binary PNG image data omitted]
cobbler-2.4.1/web/content/index.html000066400000000000000000000004211227367477500173730ustar00rootroot00000000000000 Cobbler services Cobbler Web is powered by Django. Install the cobbler-web package and visit the management URL to access it. Authentication is set up by your cobbler administrator. cobbler-2.4.1/web/content/jquery-ui.css000066400000000000000000001021711227367477500200470ustar00rootroot00000000000000/* * jQuery UI CSS Framework 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Theming/API */ /* Layout helpers ----------------------------------*/ .ui-helper-hidden { display: none; } .ui-helper-hidden-accessible { position: absolute !important; clip: rect(1px 1px 1px 1px); clip: rect(1px,1px,1px,1px); } .ui-helper-reset { margin: 0; padding: 0; border: 0; outline: 0; line-height: 1.3; text-decoration: none; font-size: 100%; list-style: none; } .ui-helper-clearfix:before, .ui-helper-clearfix:after { content: ""; display: table; } .ui-helper-clearfix:after { clear: both; } .ui-helper-clearfix { zoom: 1; } .ui-helper-zfix { width: 100%; height: 100%; top: 0; left: 0; position: absolute; opacity: 0; filter:Alpha(Opacity=0); } /* Interaction Cues ----------------------------------*/ .ui-state-disabled { cursor: default !important; } /* Icons ----------------------------------*/ /* states and images */ .ui-icon { display: block; text-indent: -99999px; overflow: hidden; background-repeat: no-repeat; } /* Misc visuals ----------------------------------*/ /* Overlays */ .ui-widget-overlay { position: absolute; top: 0; left: 0; width: 100%; height: 100%; } /* * jQuery UI Accordion 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Accordion#theming */ /* IE/Win - Fix animation bug - #4615 */ .ui-accordion { width: 100%; } .ui-accordion .ui-accordion-header { cursor: pointer; position: relative; margin-top: 1px; zoom: 1; } .ui-accordion .ui-accordion-li-fix { display: inline; } .ui-accordion .ui-accordion-header-active { border-bottom: 0 !important; } .ui-accordion .ui-accordion-header a { display: block; font-size: 1em; padding: .5em .5em .5em .7em; } .ui-accordion-icons .ui-accordion-header a { padding-left: 2.2em; } .ui-accordion .ui-accordion-header .ui-icon { position: absolute; left: .5em; top: 50%; margin-top: -8px; } .ui-accordion .ui-accordion-content { padding: 1em 2.2em; border-top: 0; margin-top: -2px; position: relative; top: 1px; margin-bottom: 2px; overflow: auto; display: none; zoom: 1; } .ui-accordion .ui-accordion-content-active { display: block; } /* * jQuery UI Autocomplete 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Autocomplete#theming */ .ui-autocomplete { position: absolute; cursor: default; } /* workarounds */ * html .ui-autocomplete { width:1px; } /* without this, the menu expands to 100% in IE6 */ /* * jQuery UI Menu 1.8.18 * * Copyright 2010, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. 
* http://jquery.org/license * * http://docs.jquery.com/UI/Menu#theming */ .ui-menu { list-style:none; padding: 2px; margin: 0; display:block; float: left; } .ui-menu .ui-menu { margin-top: -3px; } .ui-menu .ui-menu-item { margin:0; padding: 0; zoom: 1; float: left; clear: left; width: 100%; } .ui-menu .ui-menu-item a { text-decoration:none; display:block; padding:.2em .4em; line-height:1.5; zoom:1; } .ui-menu .ui-menu-item a.ui-state-hover, .ui-menu .ui-menu-item a.ui-state-active { font-weight: normal; margin: -1px; } /* * jQuery UI Button 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Button#theming */ .ui-button { display: inline-block; position: relative; padding: 0; margin-right: .1em; text-decoration: none !important; cursor: pointer; text-align: center; zoom: 1; overflow: hidden; *overflow: visible; } /* the overflow property removes extra width in IE */ .ui-button-icon-only { width: 2.2em; } /* to make room for the icon, a width needs to be set here */ button.ui-button-icon-only { width: 2.4em; } /* button elements seem to need a little more width */ .ui-button-icons-only { width: 3.4em; } button.ui-button-icons-only { width: 3.7em; } /*button text element */ .ui-button .ui-button-text { display: block; line-height: 1.4; } .ui-button-text-only .ui-button-text { padding: .4em 1em; } .ui-button-icon-only .ui-button-text, .ui-button-icons-only .ui-button-text { padding: .4em; text-indent: -9999999px; } .ui-button-text-icon-primary .ui-button-text, .ui-button-text-icons .ui-button-text { padding: .4em 1em .4em 2.1em; } .ui-button-text-icon-secondary .ui-button-text, .ui-button-text-icons .ui-button-text { padding: .4em 2.1em .4em 1em; } .ui-button-text-icons .ui-button-text { padding-left: 2.1em; padding-right: 2.1em; } /* no icon support for input elements, provide padding by default */ input.ui-button { padding: .4em 1em; } /*button icon element(s) */ .ui-button-icon-only .ui-icon, .ui-button-text-icon-primary .ui-icon, .ui-button-text-icon-secondary .ui-icon, .ui-button-text-icons .ui-icon, .ui-button-icons-only .ui-icon { position: absolute; top: 50%; margin-top: -8px; } .ui-button-icon-only .ui-icon { left: 50%; margin-left: -8px; } .ui-button-text-icon-primary .ui-button-icon-primary, .ui-button-text-icons .ui-button-icon-primary, .ui-button-icons-only .ui-button-icon-primary { left: .5em; } .ui-button-text-icon-secondary .ui-button-icon-secondary, .ui-button-text-icons .ui-button-icon-secondary, .ui-button-icons-only .ui-button-icon-secondary { right: .5em; } .ui-button-text-icons .ui-button-icon-secondary, .ui-button-icons-only .ui-button-icon-secondary { right: .5em; } /*button sets*/ .ui-buttonset { margin-right: 7px; } .ui-buttonset .ui-button { margin-left: 0; margin-right: -.3em; } /* workarounds */ button.ui-button::-moz-focus-inner { border: 0; padding: 0; } /* reset extra padding in Firefox */ /* * jQuery UI Datepicker 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. 
* http://jquery.org/license * * http://docs.jquery.com/UI/Datepicker#theming */ .ui-datepicker { width: 17em; padding: .2em .2em 0; display: none; } .ui-datepicker .ui-datepicker-header { position:relative; padding:.2em 0; } .ui-datepicker .ui-datepicker-prev, .ui-datepicker .ui-datepicker-next { position:absolute; top: 2px; width: 1.8em; height: 1.8em; } .ui-datepicker .ui-datepicker-prev-hover, .ui-datepicker .ui-datepicker-next-hover { top: 1px; } .ui-datepicker .ui-datepicker-prev { left:2px; } .ui-datepicker .ui-datepicker-next { right:2px; } .ui-datepicker .ui-datepicker-prev-hover { left:1px; } .ui-datepicker .ui-datepicker-next-hover { right:1px; } .ui-datepicker .ui-datepicker-prev span, .ui-datepicker .ui-datepicker-next span { display: block; position: absolute; left: 50%; margin-left: -8px; top: 50%; margin-top: -8px; } .ui-datepicker .ui-datepicker-title { margin: 0 2.3em; line-height: 1.8em; text-align: center; } .ui-datepicker .ui-datepicker-title select { font-size:1em; margin:1px 0; } .ui-datepicker select.ui-datepicker-month-year {width: 100%;} .ui-datepicker select.ui-datepicker-month, .ui-datepicker select.ui-datepicker-year { width: 49%;} .ui-datepicker table {width: 100%; font-size: .9em; border-collapse: collapse; margin:0 0 .4em; } .ui-datepicker th { padding: .7em .3em; text-align: center; font-weight: bold; border: 0; } .ui-datepicker td { border: 0; padding: 1px; } .ui-datepicker td span, .ui-datepicker td a { display: block; padding: .2em; text-align: right; text-decoration: none; } .ui-datepicker .ui-datepicker-buttonpane { background-image: none; margin: .7em 0 0 0; padding:0 .2em; border-left: 0; border-right: 0; border-bottom: 0; } .ui-datepicker .ui-datepicker-buttonpane button { float: right; margin: .5em .2em .4em; cursor: pointer; padding: .2em .6em .3em .6em; width:auto; overflow:visible; } .ui-datepicker .ui-datepicker-buttonpane button.ui-datepicker-current { float:left; } /* with multiple calendars */ .ui-datepicker.ui-datepicker-multi { width:auto; } .ui-datepicker-multi .ui-datepicker-group { float:left; } .ui-datepicker-multi .ui-datepicker-group table { width:95%; margin:0 auto .4em; } .ui-datepicker-multi-2 .ui-datepicker-group { width:50%; } .ui-datepicker-multi-3 .ui-datepicker-group { width:33.3%; } .ui-datepicker-multi-4 .ui-datepicker-group { width:25%; } .ui-datepicker-multi .ui-datepicker-group-last .ui-datepicker-header { border-left-width:0; } .ui-datepicker-multi .ui-datepicker-group-middle .ui-datepicker-header { border-left-width:0; } .ui-datepicker-multi .ui-datepicker-buttonpane { clear:left; } .ui-datepicker-row-break { clear:both; width:100%; font-size:0em; } /* RTL support */ .ui-datepicker-rtl { direction: rtl; } .ui-datepicker-rtl .ui-datepicker-prev { right: 2px; left: auto; } .ui-datepicker-rtl .ui-datepicker-next { left: 2px; right: auto; } .ui-datepicker-rtl .ui-datepicker-prev:hover { right: 1px; left: auto; } .ui-datepicker-rtl .ui-datepicker-next:hover { left: 1px; right: auto; } .ui-datepicker-rtl .ui-datepicker-buttonpane { clear:right; } .ui-datepicker-rtl .ui-datepicker-buttonpane button { float: left; } .ui-datepicker-rtl .ui-datepicker-buttonpane button.ui-datepicker-current { float:right; } .ui-datepicker-rtl .ui-datepicker-group { float:right; } .ui-datepicker-rtl .ui-datepicker-group-last .ui-datepicker-header { border-right-width:0; border-left-width:1px; } .ui-datepicker-rtl .ui-datepicker-group-middle .ui-datepicker-header { border-right-width:0; border-left-width:1px; } /* IE6 IFRAME FIX (taken from 
datepicker 1.5.3 */ .ui-datepicker-cover { display: none; /*sorry for IE5*/ display/**/: block; /*sorry for IE5*/ position: absolute; /*must have*/ z-index: -1; /*must have*/ filter: mask(); /*must have*/ top: -4px; /*must have*/ left: -4px; /*must have*/ width: 200px; /*must have*/ height: 200px; /*must have*/ }/* * jQuery UI Dialog 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Dialog#theming */ .ui-dialog { position: absolute; padding: .2em; width: 300px; overflow: hidden; } .ui-dialog .ui-dialog-titlebar { padding: .4em 1em; position: relative; } .ui-dialog .ui-dialog-title { float: left; margin: .1em 16px .1em 0; } .ui-dialog .ui-dialog-titlebar-close { position: absolute; right: .3em; top: 50%; width: 19px; margin: -10px 0 0 0; padding: 1px; height: 18px; } .ui-dialog .ui-dialog-titlebar-close span { display: block; margin: 1px; } .ui-dialog .ui-dialog-titlebar-close:hover, .ui-dialog .ui-dialog-titlebar-close:focus { padding: 0; } .ui-dialog .ui-dialog-content { position: relative; border: 0; padding: .5em 1em; background: none; overflow: auto; zoom: 1; } .ui-dialog .ui-dialog-buttonpane { text-align: left; border-width: 1px 0 0 0; background-image: none; margin: .5em 0 0 0; padding: .3em 1em .5em .4em; } .ui-dialog .ui-dialog-buttonpane .ui-dialog-buttonset { float: right; } .ui-dialog .ui-dialog-buttonpane button { margin: .5em .4em .5em 0; cursor: pointer; } .ui-dialog .ui-resizable-se { width: 14px; height: 14px; right: 3px; bottom: 3px; } .ui-draggable .ui-dialog-titlebar { cursor: move; } /* * jQuery UI Progressbar 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Progressbar#theming */ .ui-progressbar { height:2em; text-align: left; overflow: hidden; } .ui-progressbar .ui-progressbar-value {margin: -1px; height:100%; }/* * jQuery UI Resizable 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Resizable#theming */ .ui-resizable { position: relative;} .ui-resizable-handle { position: absolute;font-size: 0.1px;z-index: 99999; display: block; } .ui-resizable-disabled .ui-resizable-handle, .ui-resizable-autohide .ui-resizable-handle { display: none; } .ui-resizable-n { cursor: n-resize; height: 7px; width: 100%; top: -5px; left: 0; } .ui-resizable-s { cursor: s-resize; height: 7px; width: 100%; bottom: -5px; left: 0; } .ui-resizable-e { cursor: e-resize; width: 7px; right: -5px; top: 0; height: 100%; } .ui-resizable-w { cursor: w-resize; width: 7px; left: -5px; top: 0; height: 100%; } .ui-resizable-se { cursor: se-resize; width: 12px; height: 12px; right: 1px; bottom: 1px; } .ui-resizable-sw { cursor: sw-resize; width: 9px; height: 9px; left: -5px; bottom: -5px; } .ui-resizable-nw { cursor: nw-resize; width: 9px; height: 9px; left: -5px; top: -5px; } .ui-resizable-ne { cursor: ne-resize; width: 9px; height: 9px; right: -5px; top: -5px;}/* * jQuery UI Selectable 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. 
* http://jquery.org/license * * http://docs.jquery.com/UI/Selectable#theming */ .ui-selectable-helper { position: absolute; z-index: 100; border:1px dotted black; } /* * jQuery UI Slider 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Slider#theming */ .ui-slider { position: relative; text-align: left; } .ui-slider .ui-slider-handle { position: absolute; z-index: 2; width: 1.2em; height: 1.2em; cursor: default; } .ui-slider .ui-slider-range { position: absolute; z-index: 1; font-size: .7em; display: block; border: 0; background-position: 0 0; } .ui-slider-horizontal { height: .8em; } .ui-slider-horizontal .ui-slider-handle { top: -.3em; margin-left: -.6em; } .ui-slider-horizontal .ui-slider-range { top: 0; height: 100%; } .ui-slider-horizontal .ui-slider-range-min { left: 0; } .ui-slider-horizontal .ui-slider-range-max { right: 0; } .ui-slider-vertical { width: .8em; height: 100px; } .ui-slider-vertical .ui-slider-handle { left: -.3em; margin-left: 0; margin-bottom: -.6em; } .ui-slider-vertical .ui-slider-range { left: 0; width: 100%; } .ui-slider-vertical .ui-slider-range-min { bottom: 0; } .ui-slider-vertical .ui-slider-range-max { top: 0; }/* * jQuery UI Tabs 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI/Tabs#theming */ .ui-tabs { position: relative; padding: .2em; zoom: 1; } /* position: relative prevents IE scroll bug (element with position: relative inside container with overflow: auto appear as "fixed") */ .ui-tabs .ui-tabs-nav { margin: 0; padding: .2em .2em 0; } .ui-tabs .ui-tabs-nav li { list-style: none; float: left; position: relative; top: 1px; margin: 0 .2em 1px 0; border-bottom: 0 !important; padding: 0; white-space: nowrap; } .ui-tabs .ui-tabs-nav li a { float: left; padding: .5em 1em; text-decoration: none; } .ui-tabs .ui-tabs-nav li.ui-tabs-selected { margin-bottom: 0; padding-bottom: 1px; } .ui-tabs .ui-tabs-nav li.ui-tabs-selected a, .ui-tabs .ui-tabs-nav li.ui-state-disabled a, .ui-tabs .ui-tabs-nav li.ui-state-processing a { cursor: text; } .ui-tabs .ui-tabs-nav li a, .ui-tabs.ui-tabs-collapsible .ui-tabs-nav li.ui-tabs-selected a { cursor: pointer; } /* first selector in group seems obsolete, but required to overcome bug in Opera applying cursor: text overall if defined elsewhere... */ .ui-tabs .ui-tabs-panel { display: block; border-width: 0; padding: 1em 1.4em; background: none; } .ui-tabs .ui-tabs-hide { display: none !important; } /* * jQuery UI CSS Framework 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. 
* http://jquery.org/license * * http://docs.jquery.com/UI/Theming/API * * To view and modify this theme, visit http://jqueryui.com/themeroller/ */ /* Component containers ----------------------------------*/ .ui-widget { font-family: Verdana,Arial,sans-serif/*{ffDefault}*/; font-size: 1.1em/*{fsDefault}*/; } .ui-widget .ui-widget { font-size: 1em; } .ui-widget input, .ui-widget select, .ui-widget textarea, .ui-widget button { font-family: Verdana,Arial,sans-serif/*{ffDefault}*/; font-size: 1em; } .ui-widget-content { border: 1px solid #aaaaaa/*{borderColorContent}*/; background: #ffffff/*{bgColorContent}*/ url(images/ui-bg_flat_75_ffffff_40x100.png)/*{bgImgUrlContent}*/ 50%/*{bgContentXPos}*/ 50%/*{bgContentYPos}*/ repeat-x/*{bgContentRepeat}*/; color: #222222/*{fcContent}*/; } .ui-widget-content a { color: #222222/*{fcContent}*/; } .ui-widget-header { border: 1px solid #aaaaaa/*{borderColorHeader}*/; background: #cccccc/*{bgColorHeader}*/ url(images/ui-bg_highlight-soft_75_cccccc_1x100.png)/*{bgImgUrlHeader}*/ 50%/*{bgHeaderXPos}*/ 50%/*{bgHeaderYPos}*/ repeat-x/*{bgHeaderRepeat}*/; color: #222222/*{fcHeader}*/; font-weight: bold; } .ui-widget-header a { color: #222222/*{fcHeader}*/; } /* Interaction states ----------------------------------*/ .ui-state-default, .ui-widget-content .ui-state-default, .ui-widget-header .ui-state-default { border: 1px solid #d3d3d3/*{borderColorDefault}*/; background: #e6e6e6/*{bgColorDefault}*/ url(images/ui-bg_glass_75_e6e6e6_1x400.png)/*{bgImgUrlDefault}*/ 50%/*{bgDefaultXPos}*/ 50%/*{bgDefaultYPos}*/ repeat-x/*{bgDefaultRepeat}*/; font-weight: normal/*{fwDefault}*/; color: #555555/*{fcDefault}*/; } .ui-state-default a, .ui-state-default a:link, .ui-state-default a:visited { color: #555555/*{fcDefault}*/; text-decoration: none; } .ui-state-hover, .ui-widget-content .ui-state-hover, .ui-widget-header .ui-state-hover, .ui-state-focus, .ui-widget-content .ui-state-focus, .ui-widget-header .ui-state-focus { border: 1px solid #999999/*{borderColorHover}*/; background: #dadada/*{bgColorHover}*/ url(images/ui-bg_glass_75_dadada_1x400.png)/*{bgImgUrlHover}*/ 50%/*{bgHoverXPos}*/ 50%/*{bgHoverYPos}*/ repeat-x/*{bgHoverRepeat}*/; font-weight: normal/*{fwDefault}*/; color: #212121/*{fcHover}*/; } .ui-state-hover a, .ui-state-hover a:hover { color: #212121/*{fcHover}*/; text-decoration: none; } .ui-state-active, .ui-widget-content .ui-state-active, .ui-widget-header .ui-state-active { border: 1px solid #aaaaaa/*{borderColorActive}*/; background: #ffffff/*{bgColorActive}*/ url(images/ui-bg_glass_65_ffffff_1x400.png)/*{bgImgUrlActive}*/ 50%/*{bgActiveXPos}*/ 50%/*{bgActiveYPos}*/ repeat-x/*{bgActiveRepeat}*/; font-weight: normal/*{fwDefault}*/; color: #212121/*{fcActive}*/; } .ui-state-active a, .ui-state-active a:link, .ui-state-active a:visited { color: #212121/*{fcActive}*/; text-decoration: none; } .ui-widget :active { outline: none; } /* Interaction Cues ----------------------------------*/ .ui-state-highlight, .ui-widget-content .ui-state-highlight, .ui-widget-header .ui-state-highlight {border: 1px solid #fcefa1/*{borderColorHighlight}*/; background: #fbf9ee/*{bgColorHighlight}*/ url(images/ui-bg_glass_55_fbf9ee_1x400.png)/*{bgImgUrlHighlight}*/ 50%/*{bgHighlightXPos}*/ 50%/*{bgHighlightYPos}*/ repeat-x/*{bgHighlightRepeat}*/; color: #363636/*{fcHighlight}*/; } .ui-state-highlight a, .ui-widget-content .ui-state-highlight a,.ui-widget-header .ui-state-highlight a { color: #363636/*{fcHighlight}*/; } .ui-state-error, .ui-widget-content .ui-state-error, 
.ui-widget-header .ui-state-error {border: 1px solid #cd0a0a/*{borderColorError}*/; background: #fef1ec/*{bgColorError}*/ url(images/ui-bg_glass_95_fef1ec_1x400.png)/*{bgImgUrlError}*/ 50%/*{bgErrorXPos}*/ 50%/*{bgErrorYPos}*/ repeat-x/*{bgErrorRepeat}*/; color: #cd0a0a/*{fcError}*/; } .ui-state-error a, .ui-widget-content .ui-state-error a, .ui-widget-header .ui-state-error a { color: #cd0a0a/*{fcError}*/; } .ui-state-error-text, .ui-widget-content .ui-state-error-text, .ui-widget-header .ui-state-error-text { color: #cd0a0a/*{fcError}*/; } .ui-priority-primary, .ui-widget-content .ui-priority-primary, .ui-widget-header .ui-priority-primary { font-weight: bold; } .ui-priority-secondary, .ui-widget-content .ui-priority-secondary, .ui-widget-header .ui-priority-secondary { opacity: .7; filter:Alpha(Opacity=70); font-weight: normal; } .ui-state-disabled, .ui-widget-content .ui-state-disabled, .ui-widget-header .ui-state-disabled { opacity: .35; filter:Alpha(Opacity=35); background-image: none; } /* Icons ----------------------------------*/ /* states and images */ .ui-icon { width: 16px; height: 16px; background-image: url(images/ui-icons_222222_256x240.png)/*{iconsContent}*/; } .ui-widget-content .ui-icon {background-image: url(images/ui-icons_222222_256x240.png)/*{iconsContent}*/; } .ui-widget-header .ui-icon {background-image: url(images/ui-icons_222222_256x240.png)/*{iconsHeader}*/; } .ui-state-default .ui-icon { background-image: url(images/ui-icons_888888_256x240.png)/*{iconsDefault}*/; } .ui-state-hover .ui-icon, .ui-state-focus .ui-icon {background-image: url(images/ui-icons_454545_256x240.png)/*{iconsHover}*/; } .ui-state-active .ui-icon {background-image: url(images/ui-icons_454545_256x240.png)/*{iconsActive}*/; } .ui-state-highlight .ui-icon {background-image: url(images/ui-icons_2e83ff_256x240.png)/*{iconsHighlight}*/; } .ui-state-error .ui-icon, .ui-state-error-text .ui-icon {background-image: url(images/ui-icons_cd0a0a_256x240.png)/*{iconsError}*/; } /* positioning */ .ui-icon-carat-1-n { background-position: 0 0; } .ui-icon-carat-1-ne { background-position: -16px 0; } .ui-icon-carat-1-e { background-position: -32px 0; } .ui-icon-carat-1-se { background-position: -48px 0; } .ui-icon-carat-1-s { background-position: -64px 0; } .ui-icon-carat-1-sw { background-position: -80px 0; } .ui-icon-carat-1-w { background-position: -96px 0; } .ui-icon-carat-1-nw { background-position: -112px 0; } .ui-icon-carat-2-n-s { background-position: -128px 0; } .ui-icon-carat-2-e-w { background-position: -144px 0; } .ui-icon-triangle-1-n { background-position: 0 -16px; } .ui-icon-triangle-1-ne { background-position: -16px -16px; } .ui-icon-triangle-1-e { background-position: -32px -16px; } .ui-icon-triangle-1-se { background-position: -48px -16px; } .ui-icon-triangle-1-s { background-position: -64px -16px; } .ui-icon-triangle-1-sw { background-position: -80px -16px; } .ui-icon-triangle-1-w { background-position: -96px -16px; } .ui-icon-triangle-1-nw { background-position: -112px -16px; } .ui-icon-triangle-2-n-s { background-position: -128px -16px; } .ui-icon-triangle-2-e-w { background-position: -144px -16px; } .ui-icon-arrow-1-n { background-position: 0 -32px; } .ui-icon-arrow-1-ne { background-position: -16px -32px; } .ui-icon-arrow-1-e { background-position: -32px -32px; } .ui-icon-arrow-1-se { background-position: -48px -32px; } .ui-icon-arrow-1-s { background-position: -64px -32px; } .ui-icon-arrow-1-sw { background-position: -80px -32px; } .ui-icon-arrow-1-w { background-position: -96px -32px; 
} .ui-icon-arrow-1-nw { background-position: -112px -32px; } .ui-icon-arrow-2-n-s { background-position: -128px -32px; } .ui-icon-arrow-2-ne-sw { background-position: -144px -32px; } .ui-icon-arrow-2-e-w { background-position: -160px -32px; } .ui-icon-arrow-2-se-nw { background-position: -176px -32px; } .ui-icon-arrowstop-1-n { background-position: -192px -32px; } .ui-icon-arrowstop-1-e { background-position: -208px -32px; } .ui-icon-arrowstop-1-s { background-position: -224px -32px; } .ui-icon-arrowstop-1-w { background-position: -240px -32px; } .ui-icon-arrowthick-1-n { background-position: 0 -48px; } .ui-icon-arrowthick-1-ne { background-position: -16px -48px; } .ui-icon-arrowthick-1-e { background-position: -32px -48px; } .ui-icon-arrowthick-1-se { background-position: -48px -48px; } .ui-icon-arrowthick-1-s { background-position: -64px -48px; } .ui-icon-arrowthick-1-sw { background-position: -80px -48px; } .ui-icon-arrowthick-1-w { background-position: -96px -48px; } .ui-icon-arrowthick-1-nw { background-position: -112px -48px; } .ui-icon-arrowthick-2-n-s { background-position: -128px -48px; } .ui-icon-arrowthick-2-ne-sw { background-position: -144px -48px; } .ui-icon-arrowthick-2-e-w { background-position: -160px -48px; } .ui-icon-arrowthick-2-se-nw { background-position: -176px -48px; } .ui-icon-arrowthickstop-1-n { background-position: -192px -48px; } .ui-icon-arrowthickstop-1-e { background-position: -208px -48px; } .ui-icon-arrowthickstop-1-s { background-position: -224px -48px; } .ui-icon-arrowthickstop-1-w { background-position: -240px -48px; } .ui-icon-arrowreturnthick-1-w { background-position: 0 -64px; } .ui-icon-arrowreturnthick-1-n { background-position: -16px -64px; } .ui-icon-arrowreturnthick-1-e { background-position: -32px -64px; } .ui-icon-arrowreturnthick-1-s { background-position: -48px -64px; } .ui-icon-arrowreturn-1-w { background-position: -64px -64px; } .ui-icon-arrowreturn-1-n { background-position: -80px -64px; } .ui-icon-arrowreturn-1-e { background-position: -96px -64px; } .ui-icon-arrowreturn-1-s { background-position: -112px -64px; } .ui-icon-arrowrefresh-1-w { background-position: -128px -64px; } .ui-icon-arrowrefresh-1-n { background-position: -144px -64px; } .ui-icon-arrowrefresh-1-e { background-position: -160px -64px; } .ui-icon-arrowrefresh-1-s { background-position: -176px -64px; } .ui-icon-arrow-4 { background-position: 0 -80px; } .ui-icon-arrow-4-diag { background-position: -16px -80px; } .ui-icon-extlink { background-position: -32px -80px; } .ui-icon-newwin { background-position: -48px -80px; } .ui-icon-refresh { background-position: -64px -80px; } .ui-icon-shuffle { background-position: -80px -80px; } .ui-icon-transfer-e-w { background-position: -96px -80px; } .ui-icon-transferthick-e-w { background-position: -112px -80px; } .ui-icon-folder-collapsed { background-position: 0 -96px; } .ui-icon-folder-open { background-position: -16px -96px; } .ui-icon-document { background-position: -32px -96px; } .ui-icon-document-b { background-position: -48px -96px; } .ui-icon-note { background-position: -64px -96px; } .ui-icon-mail-closed { background-position: -80px -96px; } .ui-icon-mail-open { background-position: -96px -96px; } .ui-icon-suitcase { background-position: -112px -96px; } .ui-icon-comment { background-position: -128px -96px; } .ui-icon-person { background-position: -144px -96px; } .ui-icon-print { background-position: -160px -96px; } .ui-icon-trash { background-position: -176px -96px; } .ui-icon-locked { background-position: -192px -96px; } 
.ui-icon-unlocked { background-position: -208px -96px; } .ui-icon-bookmark { background-position: -224px -96px; } .ui-icon-tag { background-position: -240px -96px; } .ui-icon-home { background-position: 0 -112px; } .ui-icon-flag { background-position: -16px -112px; } .ui-icon-calendar { background-position: -32px -112px; } .ui-icon-cart { background-position: -48px -112px; } .ui-icon-pencil { background-position: -64px -112px; } .ui-icon-clock { background-position: -80px -112px; } .ui-icon-disk { background-position: -96px -112px; } .ui-icon-calculator { background-position: -112px -112px; } .ui-icon-zoomin { background-position: -128px -112px; } .ui-icon-zoomout { background-position: -144px -112px; } .ui-icon-search { background-position: -160px -112px; } .ui-icon-wrench { background-position: -176px -112px; } .ui-icon-gear { background-position: -192px -112px; } .ui-icon-heart { background-position: -208px -112px; } .ui-icon-star { background-position: -224px -112px; } .ui-icon-link { background-position: -240px -112px; } .ui-icon-cancel { background-position: 0 -128px; } .ui-icon-plus { background-position: -16px -128px; } .ui-icon-plusthick { background-position: -32px -128px; } .ui-icon-minus { background-position: -48px -128px; } .ui-icon-minusthick { background-position: -64px -128px; } .ui-icon-close { background-position: -80px -128px; } .ui-icon-closethick { background-position: -96px -128px; } .ui-icon-key { background-position: -112px -128px; } .ui-icon-lightbulb { background-position: -128px -128px; } .ui-icon-scissors { background-position: -144px -128px; } .ui-icon-clipboard { background-position: -160px -128px; } .ui-icon-copy { background-position: -176px -128px; } .ui-icon-contact { background-position: -192px -128px; } .ui-icon-image { background-position: -208px -128px; } .ui-icon-video { background-position: -224px -128px; } .ui-icon-script { background-position: -240px -128px; } .ui-icon-alert { background-position: 0 -144px; } .ui-icon-info { background-position: -16px -144px; } .ui-icon-notice { background-position: -32px -144px; } .ui-icon-help { background-position: -48px -144px; } .ui-icon-check { background-position: -64px -144px; } .ui-icon-bullet { background-position: -80px -144px; } .ui-icon-radio-off { background-position: -96px -144px; } .ui-icon-radio-on { background-position: -112px -144px; } .ui-icon-pin-w { background-position: -128px -144px; } .ui-icon-pin-s { background-position: -144px -144px; } .ui-icon-play { background-position: 0 -160px; } .ui-icon-pause { background-position: -16px -160px; } .ui-icon-seek-next { background-position: -32px -160px; } .ui-icon-seek-prev { background-position: -48px -160px; } .ui-icon-seek-end { background-position: -64px -160px; } .ui-icon-seek-start { background-position: -80px -160px; } /* ui-icon-seek-first is deprecated, use ui-icon-seek-start instead */ .ui-icon-seek-first { background-position: -80px -160px; } .ui-icon-stop { background-position: -96px -160px; } .ui-icon-eject { background-position: -112px -160px; } .ui-icon-volume-off { background-position: -128px -160px; } .ui-icon-volume-on { background-position: -144px -160px; } .ui-icon-power { background-position: 0 -176px; } .ui-icon-signal-diag { background-position: -16px -176px; } .ui-icon-signal { background-position: -32px -176px; } .ui-icon-battery-0 { background-position: -48px -176px; } .ui-icon-battery-1 { background-position: -64px -176px; } .ui-icon-battery-2 { background-position: -80px -176px; } .ui-icon-battery-3 { background-position: 
-96px -176px; } .ui-icon-circle-plus { background-position: 0 -192px; } .ui-icon-circle-minus { background-position: -16px -192px; } .ui-icon-circle-close { background-position: -32px -192px; } .ui-icon-circle-triangle-e { background-position: -48px -192px; } .ui-icon-circle-triangle-s { background-position: -64px -192px; } .ui-icon-circle-triangle-w { background-position: -80px -192px; } .ui-icon-circle-triangle-n { background-position: -96px -192px; } .ui-icon-circle-arrow-e { background-position: -112px -192px; } .ui-icon-circle-arrow-s { background-position: -128px -192px; } .ui-icon-circle-arrow-w { background-position: -144px -192px; } .ui-icon-circle-arrow-n { background-position: -160px -192px; } .ui-icon-circle-zoomin { background-position: -176px -192px; } .ui-icon-circle-zoomout { background-position: -192px -192px; } .ui-icon-circle-check { background-position: -208px -192px; } .ui-icon-circlesmall-plus { background-position: 0 -208px; } .ui-icon-circlesmall-minus { background-position: -16px -208px; } .ui-icon-circlesmall-close { background-position: -32px -208px; } .ui-icon-squaresmall-plus { background-position: -48px -208px; } .ui-icon-squaresmall-minus { background-position: -64px -208px; } .ui-icon-squaresmall-close { background-position: -80px -208px; } .ui-icon-grip-dotted-vertical { background-position: 0 -224px; } .ui-icon-grip-dotted-horizontal { background-position: -16px -224px; } .ui-icon-grip-solid-vertical { background-position: -32px -224px; } .ui-icon-grip-solid-horizontal { background-position: -48px -224px; } .ui-icon-gripsmall-diagonal-se { background-position: -64px -224px; } .ui-icon-grip-diagonal-se { background-position: -80px -224px; } /* Misc visuals ----------------------------------*/ /* Corner radius */ .ui-corner-all, .ui-corner-top, .ui-corner-left, .ui-corner-tl { -moz-border-radius-topleft: 4px/*{cornerRadius}*/; -webkit-border-top-left-radius: 4px/*{cornerRadius}*/; -khtml-border-top-left-radius: 4px/*{cornerRadius}*/; border-top-left-radius: 4px/*{cornerRadius}*/; } .ui-corner-all, .ui-corner-top, .ui-corner-right, .ui-corner-tr { -moz-border-radius-topright: 4px/*{cornerRadius}*/; -webkit-border-top-right-radius: 4px/*{cornerRadius}*/; -khtml-border-top-right-radius: 4px/*{cornerRadius}*/; border-top-right-radius: 4px/*{cornerRadius}*/; } .ui-corner-all, .ui-corner-bottom, .ui-corner-left, .ui-corner-bl { -moz-border-radius-bottomleft: 4px/*{cornerRadius}*/; -webkit-border-bottom-left-radius: 4px/*{cornerRadius}*/; -khtml-border-bottom-left-radius: 4px/*{cornerRadius}*/; border-bottom-left-radius: 4px/*{cornerRadius}*/; } .ui-corner-all, .ui-corner-bottom, .ui-corner-right, .ui-corner-br { -moz-border-radius-bottomright: 4px/*{cornerRadius}*/; -webkit-border-bottom-right-radius: 4px/*{cornerRadius}*/; -khtml-border-bottom-right-radius: 4px/*{cornerRadius}*/; border-bottom-right-radius: 4px/*{cornerRadius}*/; } /* Overlays */ .ui-widget-overlay { background: #aaaaaa/*{bgColorOverlay}*/ url(images/ui-bg_flat_0_aaaaaa_40x100.png)/*{bgImgUrlOverlay}*/ 50%/*{bgOverlayXPos}*/ 50%/*{bgOverlayYPos}*/ repeat-x/*{bgOverlayRepeat}*/; opacity: .3;filter:Alpha(Opacity=30)/*{opacityOverlay}*/; } .ui-widget-shadow { margin: -8px/*{offsetTopShadow}*/ 0 0 -8px/*{offsetLeftShadow}*/; padding: 8px/*{thicknessShadow}*/; background: #aaaaaa/*{bgColorShadow}*/ url(images/ui-bg_flat_0_aaaaaa_40x100.png)/*{bgImgUrlShadow}*/ 50%/*{bgShadowXPos}*/ 50%/*{bgShadowYPos}*/ repeat-x/*{bgShadowRepeat}*/; opacity: .3;filter:Alpha(Opacity=30)/*{opacityShadow}*/; 
-moz-border-radius: 8px/*{cornerRadiusShadow}*/; -khtml-border-radius: 8px/*{cornerRadiusShadow}*/; -webkit-border-radius: 8px/*{cornerRadiusShadow}*/; border-radius: 8px/*{cornerRadiusShadow}*/; }cobbler-2.4.1/web/content/jquery-ui.min.js000066400000000000000000006121621227367477500204630ustar00rootroot00000000000000/*! * jQuery UI 1.8.18 * * Copyright 2011, AUTHORS.txt (http://jqueryui.com/about) * Dual licensed under the MIT or GPL Version 2 licenses. * http://jquery.org/license * * http://docs.jquery.com/UI */(function(a,b){function d(b){return!a(b).parents().andSelf().filter(function(){return a.curCSS(this,"visibility")==="hidden"||a.expr.filters.hidden(this)}).length}function c(b,c){var e=b.nodeName.toLowerCase();if("area"===e){var f=b.parentNode,g=f.name,h;if(!b.href||!g||f.nodeName.toLowerCase()!=="map")return!1;h=a("img[usemap=#"+g+"]")[0];return!!h&&d(h)}return(/input|select|textarea|button|object/.test(e)?!b.disabled:"a"==e?b.href||c:c)&&d(b)}a.ui=a.ui||{};a.ui.version||(a.extend(a.ui,{version:"1.8.18",keyCode:{ALT:18,BACKSPACE:8,CAPS_LOCK:20,COMMA:188,COMMAND:91,COMMAND_LEFT:91,COMMAND_RIGHT:93,CONTROL:17,DELETE:46,DOWN:40,END:35,ENTER:13,ESCAPE:27,HOME:36,INSERT:45,LEFT:37,MENU:93,NUMPAD_ADD:107,NUMPAD_DECIMAL:110,NUMPAD_DIVIDE:111,NUMPAD_ENTER:108,NUMPAD_MULTIPLY:106,NUMPAD_SUBTRACT:109,PAGE_DOWN:34,PAGE_UP:33,PERIOD:190,RIGHT:39,SHIFT:16,SPACE:32,TAB:9,UP:38,WINDOWS:91}}),a.fn.extend({propAttr:a.fn.prop||a.fn.attr,_focus:a.fn.focus,focus:function(b,c){return typeof b=="number"?this.each(function(){var d=this;setTimeout(function(){a(d).focus(),c&&c.call(d)},b)}):this._focus.apply(this,arguments)},scrollParent:function(){var b;a.browser.msie&&/(static|relative)/.test(this.css("position"))||/absolute/.test(this.css("position"))?b=this.parents().filter(function(){return/(relative|absolute|fixed)/.test(a.curCSS(this,"position",1))&&/(auto|scroll)/.test(a.curCSS(this,"overflow",1)+a.curCSS(this,"overflow-y",1)+a.curCSS(this,"overflow-x",1))}).eq(0):b=this.parents().filter(function(){return/(auto|scroll)/.test(a.curCSS(this,"overflow",1)+a.curCSS(this,"overflow-y",1)+a.curCSS(this,"overflow-x",1))}).eq(0);return/fixed/.test(this.css("position"))||!b.length?a(document):b},zIndex:function(c){if(c!==b)return this.css("zIndex",c);if(this.length){var d=a(this[0]),e,f;while(d.length&&d[0]!==document){e=d.css("position");if(e==="absolute"||e==="relative"||e==="fixed"){f=parseInt(d.css("zIndex"),10);if(!isNaN(f)&&f!==0)return f}d=d.parent()}}return 0},disableSelection:function(){return this.bind((a.support.selectstart?"selectstart":"mousedown")+".ui-disableSelection",function(a){a.preventDefault()})},enableSelection:function(){return this.unbind(".ui-disableSelection")}}),a.each(["Width","Height"],function(c,d){function h(b,c,d,f){a.each(e,function(){c-=parseFloat(a.curCSS(b,"padding"+this,!0))||0,d&&(c-=parseFloat(a.curCSS(b,"border"+this+"Width",!0))||0),f&&(c-=parseFloat(a.curCSS(b,"margin"+this,!0))||0)});return c}var e=d==="Width"?["Left","Right"]:["Top","Bottom"],f=d.toLowerCase(),g={innerWidth:a.fn.innerWidth,innerHeight:a.fn.innerHeight,outerWidth:a.fn.outerWidth,outerHeight:a.fn.outerHeight};a.fn["inner"+d]=function(c){if(c===b)return g["inner"+d].call(this);return this.each(function(){a(this).css(f,h(this,c)+"px")})},a.fn["outer"+d]=function(b,c){if(typeof b!="number")return g["outer"+d].call(this,b);return this.each(function(){a(this).css(f,h(this,b,!0,c)+"px")})}}),a.extend(a.expr[":"],{data:function(b,c,d){return!!a.data(b,d[3])},focusable:function(b){return 
c(b,!isNaN(a.attr(b,"tabindex")))},tabbable:function(b){var d=a.attr(b,"tabindex"),e=isNaN(d);return(e||d>=0)&&c(b,!e)}}),a(function(){var b=document.body,c=b.appendChild(c=document.createElement("div"));c.offsetHeight,a.extend(c.style,{minHeight:"100px",height:"auto",padding:0,borderWidth:0}),a.support.minHeight=c.offsetHeight===100,a.support.selectstart="onselectstart"in c,b.removeChild(c).style.display="none"}),a.extend(a.ui,{plugin:{add:function(b,c,d){var e=a.ui[b].prototype;for(var f in d)e.plugins[f]=e.plugins[f]||[],e.plugins[f].push([c,d[f]])},call:function(a,b,c){var d=a.plugins[b];if(!!d&&!!a.element[0].parentNode)for(var e=0;e0)return!0;b[d]=1,e=b[d]>0,b[d]=0;return e},isOverAxis:function(a,b,c){return a>b&&a=9)&&!b.button)return this._mouseUp(b);if(this._mouseStarted){this._mouseDrag(b);return b.preventDefault()}this._mouseDistanceMet(b)&&this._mouseDelayMet(b)&&(this._mouseStarted=this._mouseStart(this._mouseDownEvent,b)!==!1,this._mouseStarted?this._mouseDrag(b):this._mouseUp(b));return!this._mouseStarted},_mouseUp:function(b){a(document).unbind("mousemove."+this.widgetName,this._mouseMoveDelegate).unbind("mouseup."+this.widgetName,this._mouseUpDelegate),this._mouseStarted&&(this._mouseStarted=!1,b.target==this._mouseDownEvent.target&&a.data(b.target,this.widgetName+".preventClickEvent",!0),this._mouseStop(b));return!1},_mouseDistanceMet:function(a){return Math.max(Math.abs(this._mouseDownEvent.pageX-a.pageX),Math.abs(this._mouseDownEvent.pageY-a.pageY))>=this.options.distance},_mouseDelayMet:function(a){return this.mouseDelayMet},_mouseStart:function(a){},_mouseDrag:function(a){},_mouseStop:function(a){},_mouseCapture:function(a){return!0}})}(jQuery),function(a,b){a.widget("ui.draggable",a.ui.mouse,{widgetEventPrefix:"drag",options:{addClasses:!0,appendTo:"parent",axis:!1,connectToSortable:!1,containment:!1,cursor:"auto",cursorAt:!1,grid:!1,handle:!1,helper:"original",iframeFix:!1,opacity:!1,refreshPositions:!1,revert:!1,revertDuration:500,scope:"default",scroll:!0,scrollSensitivity:20,scrollSpeed:20,snap:!1,snapMode:"both",snapTolerance:20,stack:!1,zIndex:!1},_create:function(){this.options.helper=="original"&&!/^(?:r|a|f)/.test(this.element.css("position"))&&(this.element[0].style.position="relative"),this.options.addClasses&&this.element.addClass("ui-draggable"),this.options.disabled&&this.element.addClass("ui-draggable-disabled"),this._mouseInit()},destroy:function(){if(!!this.element.data("draggable")){this.element.removeData("draggable").unbind(".draggable").removeClass("ui-draggable ui-draggable-dragging ui-draggable-disabled"),this._mouseDestroy();return this}},_mouseCapture:function(b){var c=this.options;if(this.helper||c.disabled||a(b.target).is(".ui-resizable-handle"))return!1;this.handle=this._getHandle(b);if(!this.handle)return!1;c.iframeFix&&a(c.iframeFix===!0?"iframe":c.iframeFix).each(function(){a('
    ').css({width:this.offsetWidth+"px",height:this.offsetHeight+"px",position:"absolute",opacity:"0.001",zIndex:1e3}).css(a(this).offset()).appendTo("body")});return!0},_mouseStart:function(b){var c=this.options;this.helper=this._createHelper(b),this._cacheHelperProportions(),a.ui.ddmanager&&(a.ui.ddmanager.current=this),this._cacheMargins(),this.cssPosition=this.helper.css("position"),this.scrollParent=this.helper.scrollParent(),this.offset=this.positionAbs=this.element.offset(),this.offset={top:this.offset.top-this.margins.top,left:this.offset.left-this.margins.left},a.extend(this.offset,{click:{left:b.pageX-this.offset.left,top:b.pageY-this.offset.top},parent:this._getParentOffset(),relative:this._getRelativeOffset()}),this.originalPosition=this.position=this._generatePosition(b),this.originalPageX=b.pageX,this.originalPageY=b.pageY,c.cursorAt&&this._adjustOffsetFromHelper(c.cursorAt),c.containment&&this._setContainment();if(this._trigger("start",b)===!1){this._clear();return!1}this._cacheHelperProportions(),a.ui.ddmanager&&!c.dropBehaviour&&a.ui.ddmanager.prepareOffsets(this,b),this.helper.addClass("ui-draggable-dragging"),this._mouseDrag(b,!0),a.ui.ddmanager&&a.ui.ddmanager.dragStart(this,b);return!0},_mouseDrag:function(b,c){this.position=this._generatePosition(b),this.positionAbs=this._convertPositionTo("absolute");if(!c){var d=this._uiHash();if(this._trigger("drag",b,d)===!1){this._mouseUp({});return!1}this.position=d.position}if(!this.options.axis||this.options.axis!="y")this.helper[0].style.left=this.position.left+"px";if(!this.options.axis||this.options.axis!="x")this.helper[0].style.top=this.position.top+"px";a.ui.ddmanager&&a.ui.ddmanager.drag(this,b);return!1},_mouseStop:function(b){var c=!1;a.ui.ddmanager&&!this.options.dropBehaviour&&(c=a.ui.ddmanager.drop(this,b)),this.dropped&&(c=this.dropped,this.dropped=!1);if((!this.element[0]||!this.element[0].parentNode)&&this.options.helper=="original")return!1;if(this.options.revert=="invalid"&&!c||this.options.revert=="valid"&&c||this.options.revert===!0||a.isFunction(this.options.revert)&&this.options.revert.call(this.element,c)){var d=this;a(this.helper).animate(this.originalPosition,parseInt(this.options.revertDuration,10),function(){d._trigger("stop",b)!==!1&&d._clear()})}else this._trigger("stop",b)!==!1&&this._clear();return!1},_mouseUp:function(b){this.options.iframeFix===!0&&a("div.ui-draggable-iframeFix").each(function(){this.parentNode.removeChild(this)}),a.ui.ddmanager&&a.ui.ddmanager.dragStop(this,b);return a.ui.mouse.prototype._mouseUp.call(this,b)},cancel:function(){this.helper.is(".ui-draggable-dragging")?this._mouseUp({}):this._clear();return this},_getHandle:function(b){var c=!this.options.handle||!a(this.options.handle,this.element).length?!0:!1;a(this.options.handle,this.element).find("*").andSelf().each(function(){this==b.target&&(c=!0)});return c},_createHelper:function(b){var c=this.options,d=a.isFunction(c.helper)?a(c.helper.apply(this.element[0],[b])):c.helper=="clone"?this.element.clone().removeAttr("id"):this.element;d.parents("body").length||d.appendTo(c.appendTo=="parent"?this.element[0].parentNode:c.appendTo),d[0]!=this.element[0]&&!/(fixed|absolute)/.test(d.css("position"))&&d.css("position","absolute");return d},_adjustOffsetFromHelper:function(b){typeof b=="string"&&(b=b.split(" ")),a.isArray(b)&&(b={left:+b[0],top:+b[1]||0}),"left"in b&&(this.offset.click.left=b.left+this.margins.left),"right"in b&&(this.offset.click.left=this.helperProportions.width-b.right+this.margins.left),"top"in 
b&&(this.offset.click.top=b.top+this.margins.top),"bottom"in b&&(this.offset.click.top=this.helperProportions.height-b.bottom+this.margins.top)},_getParentOffset:function(){this.offsetParent=this.helper.offsetParent();var b=this.offsetParent.offset();this.cssPosition=="absolute"&&this.scrollParent[0]!=document&&a.ui.contains(this.scrollParent[0],this.offsetParent[0])&&(b.left+=this.scrollParent.scrollLeft(),b.top+=this.scrollParent.scrollTop());if(this.offsetParent[0]==document.body||this.offsetParent[0].tagName&&this.offsetParent[0].tagName.toLowerCase()=="html"&&a.browser.msie)b={top:0,left:0};return{top:b.top+(parseInt(this.offsetParent.css("borderTopWidth"),10)||0),left:b.left+(parseInt(this.offsetParent.css("borderLeftWidth"),10)||0)}},_getRelativeOffset:function(){if(this.cssPosition=="relative"){var a=this.element.position();return{top:a.top-(parseInt(this.helper.css("top"),10)||0)+this.scrollParent.scrollTop(),left:a.left-(parseInt(this.helper.css("left"),10)||0)+this.scrollParent.scrollLeft()}}return{top:0,left:0}},_cacheMargins:function(){this.margins={left:parseInt(this.element.css("marginLeft"),10)||0,top:parseInt(this.element.css("marginTop"),10)||0,right:parseInt(this.element.css("marginRight"),10)||0,bottom:parseInt(this.element.css("marginBottom"),10)||0}},_cacheHelperProportions:function(){this.helperProportions={width:this.helper.outerWidth(),height:this.helper.outerHeight()}},_setContainment:function(){var b=this.options;b.containment=="parent"&&(b.containment=this.helper[0].parentNode);if(b.containment=="document"||b.containment=="window")this.containment=[b.containment=="document"?0:a(window).scrollLeft()-this.offset.relative.left-this.offset.parent.left,b.containment=="document"?0:a(window).scrollTop()-this.offset.relative.top-this.offset.parent.top,(b.containment=="document"?0:a(window).scrollLeft())+a(b.containment=="document"?document:window).width()-this.helperProportions.width-this.margins.left,(b.containment=="document"?0:a(window).scrollTop())+(a(b.containment=="document"?document:window).height()||document.body.parentNode.scrollHeight)-this.helperProportions.height-this.margins.top];if(!/^(document|window|parent)$/.test(b.containment)&&b.containment.constructor!=Array){var c=a(b.containment),d=c[0];if(!d)return;var e=c.offset(),f=a(d).css("overflow")!="hidden";this.containment=[(parseInt(a(d).css("borderLeftWidth"),10)||0)+(parseInt(a(d).css("paddingLeft"),10)||0),(parseInt(a(d).css("borderTopWidth"),10)||0)+(parseInt(a(d).css("paddingTop"),10)||0),(f?Math.max(d.scrollWidth,d.offsetWidth):d.offsetWidth)-(parseInt(a(d).css("borderLeftWidth"),10)||0)-(parseInt(a(d).css("paddingRight"),10)||0)-this.helperProportions.width-this.margins.left-this.margins.right,(f?Math.max(d.scrollHeight,d.offsetHeight):d.offsetHeight)-(parseInt(a(d).css("borderTopWidth"),10)||0)-(parseInt(a(d).css("paddingBottom"),10)||0)-this.helperProportions.height-this.margins.top-this.margins.bottom],this.relative_container=c}else b.containment.constructor==Array&&(this.containment=b.containment)},_convertPositionTo:function(b,c){c||(c=this.position);var 
d=b=="absolute"?1:-1,e=this.options,f=this.cssPosition=="absolute"&&(this.scrollParent[0]==document||!a.ui.contains(this.scrollParent[0],this.offsetParent[0]))?this.offsetParent:this.scrollParent,g=/(html|body)/i.test(f[0].tagName);return{top:c.top+this.offset.relative.top*d+this.offset.parent.top*d-(a.browser.safari&&a.browser.version<526&&this.cssPosition=="fixed"?0:(this.cssPosition=="fixed"?-this.scrollParent.scrollTop():g?0:f.scrollTop())*d),left:c.left+this.offset.relative.left*d+this.offset.parent.left*d-(a.browser.safari&&a.browser.version<526&&this.cssPosition=="fixed"?0:(this.cssPosition=="fixed"?-this.scrollParent.scrollLeft():g?0:f.scrollLeft())*d)}},_generatePosition:function(b){var c=this.options,d=this.cssPosition=="absolute"&&(this.scrollParent[0]==document||!a.ui.contains(this.scrollParent[0],this.offsetParent[0]))?this.offsetParent:this.scrollParent,e=/(html|body)/i.test(d[0].tagName),f=b.pageX,g=b.pageY;if(this.originalPosition){var h;if(this.containment){if(this.relative_container){var i=this.relative_container.offset();h=[this.containment[0]+i.left,this.containment[1]+i.top,this.containment[2]+i.left,this.containment[3]+i.top]}else h=this.containment;b.pageX-this.offset.click.lefth[2]&&(f=h[2]+this.offset.click.left),b.pageY-this.offset.click.top>h[3]&&(g=h[3]+this.offset.click.top)}if(c.grid){var j=c.grid[1]?this.originalPageY+Math.round((g-this.originalPageY)/c.grid[1])*c.grid[1]:this.originalPageY;g=h?j-this.offset.click.toph[3]?j-this.offset.click.toph[2]?k-this.offset.click.left=0;k--){var l=d.snapElements[k].left,m=l+d.snapElements[k].width,n=d.snapElements[k].top,o=n+d.snapElements[k].height;if(!(l-f=k&&g<=l||h>=k&&h<=l||gl)&&(e>=i&&e<=j||f>=i&&f<=j||ej);default:return!1}},a.ui.ddmanager={current:null,droppables:{"default":[]},prepareOffsets:function(b,c){var d=a.ui.ddmanager.droppables[b.options.scope]||[],e=c?c.type:null,f=(b.currentItem||b.element).find(":data(droppable)").andSelf();droppablesLoop:for(var g=0;g').css({position:this.element.css("position"),width:this.element.outerWidth(),height:this.element.outerHeight(),top:this.element.css("top"),left:this.element.css("left")})),this.element=this.element.parent().data("resizable",this.element.data("resizable")),this.elementIsWrapper=!0,this.element.css({marginLeft:this.originalElement.css("marginLeft"),marginTop:this.originalElement.css("marginTop"),marginRight:this.originalElement.css("marginRight"),marginBottom:this.originalElement.css("marginBottom")}),this.originalElement.css({marginLeft:0,marginTop:0,marginRight:0,marginBottom:0}),this.originalResizeStyle=this.originalElement.css("resize"),this.originalElement.css("resize","none"),this._proportionallyResizeElements.push(this.originalElement.css({position:"static",zoom:1,display:"block"})),this.originalElement.css({margin:this.originalElement.css("margin")}),this._proportionallyResize()),this.handles=c.handles||(a(".ui-resizable-handle",this.element).length?{n:".ui-resizable-n",e:".ui-resizable-e",s:".ui-resizable-s",w:".ui-resizable-w",se:".ui-resizable-se",sw:".ui-resizable-sw",ne:".ui-resizable-ne",nw:".ui-resizable-nw"}:"e,s,se");if(this.handles.constructor==String){this.handles=="all"&&(this.handles="n,e,s,w,se,sw,ne,nw");var d=this.handles.split(",");this.handles={};for(var e=0;e');/sw|se|ne|nw/.test(f)&&h.css({zIndex:++c.zIndex}),"se"==f&&h.addClass("ui-icon ui-icon-gripsmall-diagonal-se"),this.handles[f]=".ui-resizable-"+f,this.element.append(h)}}this._renderAxis=function(b){b=b||this.element;for(var c in 
this.handles){this.handles[c].constructor==String&&(this.handles[c]=a(this.handles[c],this.element).show());if(this.elementIsWrapper&&this.originalElement[0].nodeName.match(/textarea|input|select|button/i)){var d=a(this.handles[c],this.element),e=0;e=/sw|ne|nw|se|n|s/.test(c)?d.outerHeight():d.outerWidth();var f=["padding",/ne|nw|n/.test(c)?"Top":/se|sw|s/.test(c)?"Bottom":/^e$/.test(c)?"Right":"Left"].join("");b.css(f,e),this._proportionallyResize()}if(!a(this.handles[c]).length)continue}},this._renderAxis(this.element),this._handles=a(".ui-resizable-handle",this.element).disableSelection(),this._handles.mouseover(function(){if(!b.resizing){if(this.className)var a=this.className.match(/ui-resizable-(se|sw|ne|nw|n|e|s|w)/i);b.axis=a&&a[1]?a[1]:"se"}}),c.autoHide&&(this._handles.hide(),a(this.element).addClass("ui-resizable-autohide").hover(function(){c.disabled||(a(this).removeClass("ui-resizable-autohide"),b._handles.show())},function(){c.disabled||b.resizing||(a(this).addClass("ui-resizable-autohide"),b._handles.hide())})),this._mouseInit()},destroy:function(){this._mouseDestroy();var b=function(b){a(b).removeClass("ui-resizable ui-resizable-disabled ui-resizable-resizing").removeData("resizable").unbind(".resizable").find(".ui-resizable-handle").remove()};if(this.elementIsWrapper){b(this.element);var c=this.element;c.after(this.originalElement.css({position:c.css("position"),width:c.outerWidth(),height:c.outerHeight(),top:c.css("top"),left:c.css("left")})).remove()}this.originalElement.css("resize",this.originalResizeStyle),b(this.originalElement);return this},_mouseCapture:function(b){var c=!1;for(var d in this.handles)a(this.handles[d])[0]==b.target&&(c=!0);return!this.options.disabled&&c},_mouseStart:function(b){var d=this.options,e=this.element.position(),f=this.element;this.resizing=!0,this.documentScroll={top:a(document).scrollTop(),left:a(document).scrollLeft()},(f.is(".ui-draggable")||/absolute/.test(f.css("position")))&&f.css({position:"absolute",top:e.top,left:e.left}),this._renderProxy();var g=c(this.helper.css("left")),h=c(this.helper.css("top"));d.containment&&(g+=a(d.containment).scrollLeft()||0,h+=a(d.containment).scrollTop()||0),this.offset=this.helper.offset(),this.position={left:g,top:h},this.size=this._helper?{width:f.outerWidth(),height:f.outerHeight()}:{width:f.width(),height:f.height()},this.originalSize=this._helper?{width:f.outerWidth(),height:f.outerHeight()}:{width:f.width(),height:f.height()},this.originalPosition={left:g,top:h},this.sizeDiff={width:f.outerWidth()-f.width(),height:f.outerHeight()-f.height()},this.originalMousePosition={left:b.pageX,top:b.pageY},this.aspectRatio=typeof d.aspectRatio=="number"?d.aspectRatio:this.originalSize.width/this.originalSize.height||1;var i=a(".ui-resizable-"+this.axis).css("cursor");a("body").css("cursor",i=="auto"?this.axis+"-resize":i),f.addClass("ui-resizable-resizing"),this._propagate("start",b);return!0},_mouseDrag:function(b){var c=this.helper,d=this.options,e={},f=this,g=this.originalMousePosition,h=this.axis,i=b.pageX-g.left||0,j=b.pageY-g.top||0,k=this._change[h];if(!k)return!1;var 
l=k.apply(this,[b,i,j]),m=a.browser.msie&&a.browser.version<7,n=this.sizeDiff;this._updateVirtualBoundaries(b.shiftKey);if(this._aspectRatio||b.shiftKey)l=this._updateRatio(l,b);l=this._respectSize(l,b),this._propagate("resize",b),c.css({top:this.position.top+"px",left:this.position.left+"px",width:this.size.width+"px",height:this.size.height+"px"}),!this._helper&&this._proportionallyResizeElements.length&&this._proportionallyResize(),this._updateCache(l),this._trigger("resize",b,this.ui());return!1},_mouseStop:function(b){this.resizing=!1;var c=this.options,d=this;if(this._helper){var e=this._proportionallyResizeElements,f=e.length&&/textarea/i.test(e[0].nodeName),g=f&&a.ui.hasScroll(e[0],"left")?0:d.sizeDiff.height,h=f?0:d.sizeDiff.width,i={width:d.helper.width()-h,height:d.helper.height()-g},j=parseInt(d.element.css("left"),10)+(d.position.left-d.originalPosition.left)||null,k=parseInt(d.element.css("top"),10)+(d.position.top-d.originalPosition.top)||null;c.animate||this.element.css(a.extend(i,{top:k,left:j})),d.helper.height(d.size.height),d.helper.width(d.size.width),this._helper&&!c.animate&&this._proportionallyResize()}a("body").css("cursor","auto"),this.element.removeClass("ui-resizable-resizing"),this._propagate("stop",b),this._helper&&this.helper.remove();return!1},_updateVirtualBoundaries:function(a){var b=this.options,c,e,f,g,h;h={minWidth:d(b.minWidth)?b.minWidth:0,maxWidth:d(b.maxWidth)?b.maxWidth:Infinity,minHeight:d(b.minHeight)?b.minHeight:0,maxHeight:d(b.maxHeight)?b.maxHeight:Infinity};if(this._aspectRatio||a)c=h.minHeight*this.aspectRatio,f=h.minWidth/this.aspectRatio,e=h.maxHeight*this.aspectRatio,g=h.maxWidth/this.aspectRatio,c>h.minWidth&&(h.minWidth=c),f>h.minHeight&&(h.minHeight=f),ea.width,k=d(a.height)&&e.minHeight&&e.minHeight>a.height;j&&(a.width=e.minWidth),k&&(a.height=e.minHeight),h&&(a.width=e.maxWidth),i&&(a.height=e.maxHeight);var l=this.originalPosition.left+this.originalSize.width,m=this.position.top+this.size.height,n=/sw|nw|w/.test(g),o=/nw|ne|n/.test(g);j&&n&&(a.left=l-e.minWidth),h&&n&&(a.left=l-e.maxWidth),k&&o&&(a.top=m-e.minHeight),i&&o&&(a.top=m-e.maxHeight);var p=!a.width&&!a.height;p&&!a.left&&a.top?a.top=null:p&&!a.top&&a.left&&(a.left=null);return a},_proportionallyResize:function(){var b=this.options;if(!!this._proportionallyResizeElements.length){var c=this.helper||this.element;for(var d=0;d');var d=a.browser.msie&&a.browser.version<7,e=d?1:0,f=d?2:-1;this.helper.addClass(this._helper).css({width:this.element.outerWidth()+f,height:this.element.outerHeight()+f,position:"absolute",left:this.elementOffset.left-e+"px",top:this.elementOffset.top-e+"px",zIndex:++c.zIndex}),this.helper.appendTo("body").disableSelection()}else this.helper=this.element},_change:{e:function(a,b,c){return{width:this.originalSize.width+b}},w:function(a,b,c){var d=this.options,e=this.originalSize,f=this.originalPosition;return{left:f.left+b,width:e.width-b}},n:function(a,b,c){var d=this.options,e=this.originalSize,f=this.originalPosition;return{top:f.top+c,height:e.height-c}},s:function(a,b,c){return{height:this.originalSize.height+c}},se:function(b,c,d){return a.extend(this._change.s.apply(this,arguments),this._change.e.apply(this,[b,c,d]))},sw:function(b,c,d){return a.extend(this._change.s.apply(this,arguments),this._change.w.apply(this,[b,c,d]))},ne:function(b,c,d){return a.extend(this._change.n.apply(this,arguments),this._change.e.apply(this,[b,c,d]))},nw:function(b,c,d){return 
a.extend(this._change.n.apply(this,arguments),this._change.w.apply(this,[b,c,d]))}},_propagate:function(b,c){a.ui.plugin.call(this,b,[c,this.ui()]),b!="resize"&&this._trigger(b,c,this.ui())},plugins:{},ui:function(){return{originalElement:this.originalElement,element:this.element,helper:this.helper,position:this.position,size:this.size,originalSize:this.originalSize,originalPosition:this.originalPosition}}}),a.extend(a.ui.resizable,{version:"1.8.18"}),a.ui.plugin.add("resizable","alsoResize",{start:function(b,c){var d=a(this).data("resizable"),e=d.options,f=function(b){a(b).each(function(){var b=a(this);b.data("resizable-alsoresize",{width:parseInt(b.width(),10),height:parseInt(b.height(),10),left:parseInt(b.css("left"),10),top:parseInt(b.css("top"),10)})})};typeof e.alsoResize=="object"&&!e.alsoResize.parentNode?e.alsoResize.length?(e.alsoResize=e.alsoResize[0],f(e.alsoResize)):a.each(e.alsoResize,function(a){f(a)}):f(e.alsoResize)},resize:function(b,c){var d=a(this).data("resizable"),e=d.options,f=d.originalSize,g=d.originalPosition,h={height:d.size.height-f.height||0,width:d.size.width-f.width||0,top:d.position.top-g.top||0,left:d.position.left-g.left||0},i=function(b,d){a(b).each(function(){var b=a(this),e=a(this).data("resizable-alsoresize"),f={},g=d&&d.length?d:b.parents(c.originalElement[0]).length?["width","height"]:["width","height","top","left"];a.each(g,function(a,b){var c=(e[b]||0)+(h[b]||0);c&&c>=0&&(f[b]=c||null)}),b.css(f)})};typeof e.alsoResize=="object"&&!e.alsoResize.nodeType?a.each(e.alsoResize,function(a,b){i(a,b)}):i(e.alsoResize)},stop:function(b,c){a(this).removeData("resizable-alsoresize")}}),a.ui.plugin.add("resizable","animate",{stop:function(b,c){var d=a(this).data("resizable"),e=d.options,f=d._proportionallyResizeElements,g=f.length&&/textarea/i.test(f[0].nodeName),h=g&&a.ui.hasScroll(f[0],"left")?0:d.sizeDiff.height,i=g?0:d.sizeDiff.width,j={width:d.size.width-i,height:d.size.height-h},k=parseInt(d.element.css("left"),10)+(d.position.left-d.originalPosition.left)||null,l=parseInt(d.element.css("top"),10)+(d.position.top-d.originalPosition.top)||null;d.element.animate(a.extend(j,l&&k?{top:l,left:k}:{}),{duration:e.animateDuration,easing:e.animateEasing,step:function(){var c={width:parseInt(d.element.css("width"),10),height:parseInt(d.element.css("height"),10),top:parseInt(d.element.css("top"),10),left:parseInt(d.element.css("left"),10)};f&&f.length&&a(f[0]).css({width:c.width,height:c.height}),d._updateCache(c),d._propagate("resize",b)}})}}),a.ui.plugin.add("resizable","containment",{start:function(b,d){var e=a(this).data("resizable"),f=e.options,g=e.element,h=f.containment,i=h instanceof a?h.get(0):/parent/.test(h)?g.parent().get(0):h;if(!!i){e.containerElement=a(i);if(/document/.test(h)||h==document)e.containerOffset={left:0,top:0},e.containerPosition={left:0,top:0},e.parentData={element:a(document),left:0,top:0,width:a(document).width(),height:a(document).height()||document.body.parentNode.scrollHeight};else{var j=a(i),k=[];a(["Top","Right","Left","Bottom"]).each(function(a,b){k[a]=c(j.css("padding"+b))}),e.containerOffset=j.offset(),e.containerPosition=j.position(),e.containerSize={height:j.innerHeight()-k[3],width:j.innerWidth()-k[1]};var l=e.containerOffset,m=e.containerSize.height,n=e.containerSize.width,o=a.ui.hasScroll(i,"left")?i.scrollWidth:n,p=a.ui.hasScroll(i)?i.scrollHeight:m;e.parentData={element:i,left:l.left,top:l.top,width:o,height:p}}}},resize:function(b,c){var 
d=a(this).data("resizable"),e=d.options,f=d.containerSize,g=d.containerOffset,h=d.size,i=d.position,j=d._aspectRatio||b.shiftKey,k={top:0,left:0},l=d.containerElement;l[0]!=document&&/static/.test(l.css("position"))&&(k=g),i.left<(d._helper?g.left:0)&&(d.size.width=d.size.width+(d._helper?d.position.left-g.left:d.position.left-k.left),j&&(d.size.height=d.size.width/e.aspectRatio),d.position.left=e.helper?g.left:0),i.top<(d._helper?g.top:0)&&(d.size.height=d.size.height+(d._helper?d.position.top-g.top:d.position.top),j&&(d.size.width=d.size.height*e.aspectRatio),d.position.top=d._helper?g.top:0),d.offset.left=d.parentData.left+d.position.left,d.offset.top=d.parentData.top+d.position.top;var m=Math.abs((d._helper?d.offset.left-k.left:d.offset.left-k.left)+d.sizeDiff.width),n=Math.abs((d._helper?d.offset.top-k.top:d.offset.top-g.top)+d.sizeDiff.height),o=d.containerElement.get(0)==d.element.parent().get(0),p=/relative|absolute/.test(d.containerElement.css("position"));o&&p&&(m-=d.parentData.left),m+d.size.width>=d.parentData.width&&(d.size.width=d.parentData.width-m,j&&(d.size.height=d.size.width/d.aspectRatio)),n+d.size.height>=d.parentData.height&&(d.size.height=d.parentData.height-n,j&&(d.size.width=d.size.height*d.aspectRatio))},stop:function(b,c){var d=a(this).data("resizable"),e=d.options,f=d.position,g=d.containerOffset,h=d.containerPosition,i=d.containerElement,j=a(d.helper),k=j.offset(),l=j.outerWidth()-d.sizeDiff.width,m=j.outerHeight()-d.sizeDiff.height;d._helper&&!e.animate&&/relative/.test(i.css("position"))&&a(this).css({left:k.left-h.left-g.left,width:l,height:m}),d._helper&&!e.animate&&/static/.test(i.css("position"))&&a(this).css({left:k.left-h.left-g.left,width:l,height:m})}}),a.ui.plugin.add("resizable","ghost",{start:function(b,c){var d=a(this).data("resizable"),e=d.options,f=d.size;d.ghost=d.originalElement.clone(),d.ghost.css({opacity:.25,display:"block",position:"relative",height:f.height,width:f.width,margin:0,left:0,top:0}).addClass("ui-resizable-ghost").addClass(typeof e.ghost=="string"?e.ghost:""),d.ghost.appendTo(d.helper)},resize:function(b,c){var d=a(this).data("resizable"),e=d.options;d.ghost&&d.ghost.css({position:"relative",height:d.size.height,width:d.size.width})},stop:function(b,c){var d=a(this).data("resizable"),e=d.options;d.ghost&&d.helper&&d.helper.get(0).removeChild(d.ghost.get(0))}}),a.ui.plugin.add("resizable","grid",{resize:function(b,c){var d=a(this).data("resizable"),e=d.options,f=d.size,g=d.originalSize,h=d.originalPosition,i=d.axis,j=e._aspectRatio||b.shiftKey;e.grid=typeof e.grid=="number"?[e.grid,e.grid]:e.grid;var k=Math.round((f.width-g.width)/(e.grid[0]||1))*(e.grid[0]||1),l=Math.round((f.height-g.height)/(e.grid[1]||1))*(e.grid[1]||1);/^(se|s|e)$/.test(i)?(d.size.width=g.width+k,d.size.height=g.height+l):/^(ne)$/.test(i)?(d.size.width=g.width+k,d.size.height=g.height+l,d.position.top=h.top-l):/^(sw)$/.test(i)?(d.size.width=g.width+k,d.size.height=g.height+l,d.position.left=h.left-k):(d.size.width=g.width+k,d.size.height=g.height+l,d.position.top=h.top-l,d.position.left=h.left-k)}});var c=function(a){return parseInt(a,10)||0},d=function(a){return!isNaN(parseInt(a,10))}}(jQuery),function(a,b){a.widget("ui.selectable",a.ui.mouse,{options:{appendTo:"body",autoRefresh:!0,distance:0,filter:"*",tolerance:"touch"},_create:function(){var b=this;this.element.addClass("ui-selectable"),this.dragged=!1;var c;this.refresh=function(){c=a(b.options.filter,b.element[0]),c.addClass("ui-selectee"),c.each(function(){var 
b=a(this),c=b.offset();a.data(this,"selectable-item",{element:this,$element:b,left:c.left,top:c.top,right:c.left+b.outerWidth(),bottom:c.top+b.outerHeight(),startselected:!1,selected:b.hasClass("ui-selected"),selecting:b.hasClass("ui-selecting"),unselecting:b.hasClass("ui-unselecting")})})},this.refresh(),this.selectees=c.addClass("ui-selectee"),this._mouseInit(),this.helper=a("
    ")},destroy:function(){this.selectees.removeClass("ui-selectee").removeData("selectable-item"),this.element.removeClass("ui-selectable ui-selectable-disabled").removeData("selectable").unbind(".selectable"),this._mouseDestroy();return this},_mouseStart:function(b){var c=this;this.opos=[b.pageX,b.pageY];if(!this.options.disabled){var d=this.options;this.selectees=a(d.filter,this.element[0]),this._trigger("start",b),a(d.appendTo).append(this.helper),this.helper.css({left:b.clientX,top:b.clientY,width:0,height:0}),d.autoRefresh&&this.refresh(),this.selectees.filter(".ui-selected").each(function(){var d=a.data(this,"selectable-item");d.startselected=!0,!b.metaKey&&!b.ctrlKey&&(d.$element.removeClass("ui-selected"),d.selected=!1,d.$element.addClass("ui-unselecting"),d.unselecting=!0,c._trigger("unselecting",b,{unselecting:d.element}))}),a(b.target).parents().andSelf().each(function(){var d=a.data(this,"selectable-item");if(d){var e=!b.metaKey&&!b.ctrlKey||!d.$element.hasClass("ui-selected");d.$element.removeClass(e?"ui-unselecting":"ui-selected").addClass(e?"ui-selecting":"ui-unselecting"),d.unselecting=!e,d.selecting=e,d.selected=e,e?c._trigger("selecting",b,{selecting:d.element}):c._trigger("unselecting",b,{unselecting:d.element});return!1}})}},_mouseDrag:function(b){var c=this;this.dragged=!0;if(!this.options.disabled){var d=this.options,e=this.opos[0],f=this.opos[1],g=b.pageX,h=b.pageY;if(e>g){var i=g;g=e,e=i}if(f>h){var i=h;h=f,f=i}this.helper.css({left:e,top:f,width:g-e,height:h-f}),this.selectees.each(function(){var i=a.data(this,"selectable-item");if(!!i&&i.element!=c.element[0]){var j=!1;d.tolerance=="touch"?j=!(i.left>g||i.righth||i.bottome&&i.rightf&&i.bottom *",opacity:!1,placeholder:!1,revert:!1,scroll:!0,scrollSensitivity:20,scrollSpeed:20,scope:"default",tolerance:"intersect",zIndex:1e3},_create:function(){var a=this.options;this.containerCache={},this.element.addClass("ui-sortable"),this.refresh(),this.floating=this.items.length?a.axis==="x"||/left|right/.test(this.items[0].item.css("float"))||/inline|table-cell/.test(this.items[0].item.css("display")):!1,this.offset=this.element.offset(),this._mouseInit(),this.ready=!0},destroy:function(){a.Widget.prototype.destroy.call(this),this.element.removeClass("ui-sortable ui-sortable-disabled"),this._mouseDestroy();for(var b=this.items.length-1;b>=0;b--)this.items[b].item.removeData(this.widgetName+"-item");return this},_setOption:function(b,c){b==="disabled"?(this.options[b]=c,this.widget()[c?"addClass":"removeClass"]("ui-sortable-disabled")):a.Widget.prototype._setOption.apply(this,arguments)},_mouseCapture:function(b,c){var d=this;if(this.reverting)return!1;if(this.options.disabled||this.options.type=="static")return!1;this._refreshItems(b);var e=null,f=this,g=a(b.target).parents().each(function(){if(a.data(this,d.widgetName+"-item")==f){e=a(this);return!1}});a.data(b.target,d.widgetName+"-item")==f&&(e=a(b.target));if(!e)return!1;if(this.options.handle&&!c){var h=!1;a(this.options.handle,e).find("*").andSelf().each(function(){this==b.target&&(h=!0)});if(!h)return!1}this.currentItem=e,this._removeCurrentsFromItems();return!0},_mouseStart:function(b,c,d){var 
e=this.options,f=this;this.currentContainer=this,this.refreshPositions(),this.helper=this._createHelper(b),this._cacheHelperProportions(),this._cacheMargins(),this.scrollParent=this.helper.scrollParent(),this.offset=this.currentItem.offset(),this.offset={top:this.offset.top-this.margins.top,left:this.offset.left-this.margins.left},this.helper.css("position","absolute"),this.cssPosition=this.helper.css("position"),a.extend(this.offset,{click:{left:b.pageX-this.offset.left,top:b.pageY-this.offset.top},parent:this._getParentOffset(),relative:this._getRelativeOffset()}),this.originalPosition=this._generatePosition(b),this.originalPageX=b.pageX,this.originalPageY=b.pageY,e.cursorAt&&this._adjustOffsetFromHelper(e.cursorAt),this.domPosition={prev:this.currentItem.prev()[0],parent:this.currentItem.parent()[0]},this.helper[0]!=this.currentItem[0]&&this.currentItem.hide(),this._createPlaceholder(),e.containment&&this._setContainment(),e.cursor&&(a("body").css("cursor")&&(this._storedCursor=a("body").css("cursor")),a("body").css("cursor",e.cursor)),e.opacity&&(this.helper.css("opacity")&&(this._storedOpacity=this.helper.css("opacity")),this.helper.css("opacity",e.opacity)),e.zIndex&&(this.helper.css("zIndex")&&(this._storedZIndex=this.helper.css("zIndex")),this.helper.css("zIndex",e.zIndex)),this.scrollParent[0]!=document&&this.scrollParent[0].tagName!="HTML"&&(this.overflowOffset=this.scrollParent.offset()),this._trigger("start",b,this._uiHash()),this._preserveHelperProportions||this._cacheHelperProportions();if(!d)for(var g=this.containers.length-1;g>=0;g--)this.containers[g]._trigger("activate",b,f._uiHash(this));a.ui.ddmanager&&(a.ui.ddmanager.current=this),a.ui.ddmanager&&!e.dropBehaviour&&a.ui.ddmanager.prepareOffsets(this,b),this.dragging=!0,this.helper.addClass("ui-sortable-helper"),this._mouseDrag(b);return!0},_mouseDrag:function(b){this.position=this._generatePosition(b),this.positionAbs=this._convertPositionTo("absolute"),this.lastPositionAbs||(this.lastPositionAbs=this.positionAbs);if(this.options.scroll){var c=this.options,d=!1;this.scrollParent[0]!=document&&this.scrollParent[0].tagName!="HTML"?(this.overflowOffset.top+this.scrollParent[0].offsetHeight-b.pageY=0;e--){var f=this.items[e],g=f.item[0],h=this._intersectsWithPointer(f);if(!h)continue;if(g!=this.currentItem[0]&&this.placeholder[h==1?"next":"prev"]()[0]!=g&&!a.ui.contains(this.placeholder[0],g)&&(this.options.type=="semi-dynamic"?!a.ui.contains(this.element[0],g):!0)){this.direction=h==1?"down":"up";if(this.options.tolerance=="pointer"||this._intersectsWithSides(f))this._rearrange(b,f);else break;this._trigger("change",b,this._uiHash());break}}this._contactContainers(b),a.ui.ddmanager&&a.ui.ddmanager.drag(this,b),this._trigger("sort",b,this._uiHash()),this.lastPositionAbs=this.positionAbs;return!1},_mouseStop:function(b,c){if(!!b){a.ui.ddmanager&&!this.options.dropBehaviour&&a.ui.ddmanager.drop(this,b);if(this.options.revert){var d=this,e=d.placeholder.offset();d.reverting=!0,a(this.helper).animate({left:e.left-this.offset.parent.left-d.margins.left+(this.offsetParent[0]==document.body?0:this.offsetParent[0].scrollLeft),top:e.top-this.offset.parent.top-d.margins.top+(this.offsetParent[0]==document.body?0:this.offsetParent[0].scrollTop)},parseInt(this.options.revert,10)||500,function(){d._clear(b)})}else this._clear(b,c);return!1}},cancel:function(){var 
b=this;if(this.dragging){this._mouseUp({target:null}),this.options.helper=="original"?this.currentItem.css(this._storedCSS).removeClass("ui-sortable-helper"):this.currentItem.show();for(var c=this.containers.length-1;c>=0;c--)this.containers[c]._trigger("deactivate",null,b._uiHash(this)),this.containers[c].containerCache.over&&(this.containers[c]._trigger("out",null,b._uiHash(this)),this.containers[c].containerCache.over=0)}this.placeholder&&(this.placeholder[0].parentNode&&this.placeholder[0].parentNode.removeChild(this.placeholder[0]),this.options.helper!="original"&&this.helper&&this.helper[0].parentNode&&this.helper.remove(),a.extend(this,{helper:null,dragging:!1,reverting:!1,_noFinalSort:null}),this.domPosition.prev?a(this.domPosition.prev).after(this.currentItem):a(this.domPosition.parent).prepend(this.currentItem));return this},serialize:function(b){var c=this._getItemsAsjQuery(b&&b.connected),d=[];b=b||{},a(c).each(function(){var c=(a(b.item||this).attr(b.attribute||"id")||"").match(b.expression||/(.+)[-=_](.+)/);c&&d.push((b.key||c[1]+"[]")+"="+(b.key&&b.expression?c[1]:c[2]))}),!d.length&&b.key&&d.push(b.key+"=");return d.join("&")},toArray:function(b){var c=this._getItemsAsjQuery(b&&b.connected),d=[];b=b||{},c.each(function(){d.push(a(b.item||this).attr(b.attribute||"id")||"")});return d},_intersectsWith:function(a){var b=this.positionAbs.left,c=b+this.helperProportions.width,d=this.positionAbs.top,e=d+this.helperProportions.height,f=a.left,g=f+a.width,h=a.top,i=h+a.height,j=this.offset.click.top,k=this.offset.click.left,l=d+j>h&&d+jf&&b+ka[this.floating?"width":"height"]?l:f0?"down":"up")},_getDragHorizontalDirection:function(){var a=this.positionAbs.left-this.lastPositionAbs.left;return a!=0&&(a>0?"right":"left")},refresh:function(a){this._refreshItems(a),this.refreshPositions();return this},_connectWith:function(){var a=this.options;return a.connectWith.constructor==String?[a.connectWith]:a.connectWith},_getItemsAsjQuery:function(b){var c=this,d=[],e=[],f=this._connectWith();if(f&&b)for(var g=f.length-1;g>=0;g--){var h=a(f[g]);for(var i=h.length-1;i>=0;i--){var j=a.data(h[i],this.widgetName);j&&j!=this&&!j.options.disabled&&e.push([a.isFunction(j.options.items)?j.options.items.call(j.element):a(j.options.items,j.element).not(".ui-sortable-helper").not(".ui-sortable-placeholder"),j])}}e.push ([a.isFunction(this.options.items)?this.options.items.call(this.element,null,{options:this.options,item:this.currentItem}):a(this.options.items,this.element).not(".ui-sortable-helper").not(".ui-sortable-placeholder"),this]);for(var g=e.length-1;g>=0;g--)e[g][0].each(function(){d.push(this)});return a(d)},_removeCurrentsFromItems:function(){var a=this.currentItem.find(":data("+this.widgetName+"-item)");for(var b=0;b=0;g--){var h=a(f[g]);for(var i=h.length-1;i>=0;i--){var j=a.data(h[i],this.widgetName);j&&j!=this&&!j.options.disabled&&(e.push([a.isFunction(j.options.items)?j.options.items.call(j.element[0],b,{item:this.currentItem}):a(j.options.items,j.element),j]),this.containers.push(j))}}for(var g=e.length-1;g>=0;g--){var k=e[g][1],l=e[g][0];for(var i=0,m=l.length;i=0;c--){var d=this.items[c];if(d.instance!=this.currentContainer&&this.currentContainer&&d.item[0]!=this.currentItem[0])continue;var e=this.options.toleranceElement?a(this.options.toleranceElement,d.item):d.item;b||(d.width=e.outerWidth(),d.height=e.outerHeight());var f=e.offset();d.left=f.left,d.top=f.top}if(this.options.custom&&this.options.custom.refreshContainers)this.options.custom.refreshContainers.call(this);else for(var 
c=this.containers.length-1;c>=0;c--){var f=this.containers[c].element.offset();this.containers[c].containerCache.left=f.left,this.containers[c].containerCache.top=f.top,this.containers[c].containerCache.width=this.containers[c].element.outerWidth(),this.containers[c].containerCache.height=this.containers[c].element.outerHeight()}return this},_createPlaceholder:function(b){var c=b||this,d=c.options;if(!d.placeholder||d.placeholder.constructor==String){var e=d.placeholder;d.placeholder={element:function(){var b=a(document.createElement(c.currentItem[0].nodeName)).addClass(e||c.currentItem[0].className+" ui-sortable-placeholder").removeClass("ui-sortable-helper")[0];e||(b.style.visibility="hidden");return b},update:function(a,b){if(!e||!!d.forcePlaceholderSize)b.height()||b.height(c.currentItem.innerHeight()-parseInt(c.currentItem.css("paddingTop")||0,10)-parseInt(c.currentItem.css("paddingBottom")||0,10)),b.width()||b.width(c.currentItem.innerWidth()-parseInt(c.currentItem.css("paddingLeft")||0,10)-parseInt(c.currentItem.css("paddingRight")||0,10))}}}c.placeholder=a(d.placeholder.element.call(c.element,c.currentItem)),c.currentItem.after(c.placeholder),d.placeholder.update(c,c.placeholder)},_contactContainers:function(b){var c=null,d=null;for(var e=this.containers.length-1;e>=0;e--){if(a.ui.contains(this.currentItem[0],this.containers[e].element[0]))continue;if(this._intersectsWith(this.containers[e].containerCache)){if(c&&a.ui.contains(this.containers[e].element[0],c.element[0]))continue;c=this.containers[e],d=e}else this.containers[e].containerCache.over&&(this.containers[e]._trigger("out",b,this._uiHash(this)),this.containers[e].containerCache.over=0)}if(!!c)if(this.containers.length===1)this.containers[d]._trigger("over",b,this._uiHash(this)),this.containers[d].containerCache.over=1;else if(this.currentContainer!=this.containers[d]){var f=1e4,g=null,h=this.positionAbs[this.containers[d].floating?"left":"top"];for(var i=this.items.length-1;i>=0;i--){if(!a.ui.contains(this.containers[d].element[0],this.items[i].item[0]))continue;var j=this.items[i][this.containers[d].floating?"left":"top"];Math.abs(j-h)this.containment[2]&&(f=this.containment[2]+this.offset.click.left),b.pageY-this.offset.click.top>this.containment[3]&&(g=this.containment[3]+this.offset.click.top));if(c.grid){var h=this.originalPageY+Math.round((g-this.originalPageY)/c.grid[1])*c.grid[1];g=this.containment?h-this.offset.click.topthis.containment[3]?h-this.offset.click.topthis.containment[2]?i-this.offset.click.left=0;f--)a.ui.contains(this.containers[f].element[0],this.currentItem[0])&&!c&&(d.push(function(a){return function(b){a._trigger("receive",b,this._uiHash(this))}}.call(this,this.containers[f])),d.push(function(a){return function(b){a._trigger("update",b,this._uiHash(this))}}.call(this,this.containers[f])))}for(var f=this.containers.length-1;f>=0;f--)c||d.push(function(a){return function(b){a._trigger("deactivate",b,this._uiHash(this))}}.call(this,this.containers[f])),this.containers[f].containerCache.over&&(d.push(function(a){return function(b){a._trigger("out",b,this._uiHash(this))}}.call(this,this.containers[f])),this.containers[f].containerCache.over=0);this._storedCursor&&a("body").css("cursor",this._storedCursor),this._storedOpacity&&this.helper.css("opacity",this._storedOpacity),this._storedZIndex&&this.helper.css("zIndex",this._storedZIndex=="auto"?"":this._storedZIndex),this.dragging=!1;if(this.cancelHelperRemoval){if(!c){this._trigger("beforeStop",b,this._uiHash());for(var 
f=0;f").addClass("ui-effects-wrapper").css({fontSize:"100%",background:"transparent",border:"none",margin:0,padding:0}),e=document.activeElement;b.wrap(d),(b[0]===e||a.contains(b[0],e))&&a(e).focus(),d=b.parent(),b.css("position")=="static"?(d.css({position:"relative"}),b.css({position:"relative"})):(a.extend(c,{position:b.css("position"),zIndex:b.css("z-index")}),a.each(["top","left","bottom","right"],function(a,d){c[d]=b.css(d),isNaN(parseInt(c[d],10))&&(c[d]="auto")}),b.css({position:"relative",top:0,left:0,right:"auto",bottom:"auto"}));return d.css(c).show()},removeWrapper:function(b){var c,d=document.activeElement;if(b.parent().is(".ui-effects-wrapper")){c=b.parent().replaceWith(b),(b[0]===d||a.contains(b[0],d))&&a(d).focus();return c}return b},setTransition:function(b,c,d,e){e=e||{},a.each(c,function(a,c){unit=b.cssUnit(c),unit[0]>0&&(e[c]=unit[0]*d+unit[1])});return e}}),a.fn.extend({effect:function(b,c,d,e){var f=k.apply(this,arguments),g={options:f[1],duration:f[2],callback:f[3]},h=g.options.mode,i=a.effects[b];if(a.fx.off||!i)return h?this[h](g.duration,g.callback):this.each(function(){g.callback&&g.callback.call(this)});return i.call(this,g)},_show:a.fn.show,show:function(a){if(l(a))return this._show.apply(this,arguments);var b=k.apply(this,arguments);b[1].mode="show";return this.effect.apply(this,b)},_hide:a.fn.hide,hide:function(a){if(l(a))return this._hide.apply(this,arguments);var b=k.apply(this,arguments);b[1].mode="hide";return this.effect.apply(this,b)},__toggle:a.fn.toggle,toggle:function(b){if(l(b)||typeof b=="boolean"||a.isFunction(b))return this.__toggle.apply(this,arguments);var c=k.apply(this,arguments);c[1].mode="toggle";return this.effect.apply(this,c)},cssUnit:function(b){var c=this.css(b),d=[];a.each(["em","px","%","pt"],function(a,b){c.indexOf(b)>0&&(d=[parseFloat(c),b])});return d}}),a.easing.jswing=a.easing.swing,a.extend(a.easing,{def:"easeOutQuad",swing:function(b,c,d,e,f){return a.easing[a.easing.def](b,c,d,e,f)},easeInQuad:function(a,b,c,d,e){return d*(b/=e)*b+c},easeOutQuad:function(a,b,c,d,e){return-d*(b/=e)*(b-2)+c},easeInOutQuad:function(a,b,c,d,e){if((b/=e/2)<1)return d/2*b*b+c;return-d/2*(--b*(b-2)-1)+c},easeInCubic:function(a,b,c,d,e){return d*(b/=e)*b*b+c},easeOutCubic:function(a,b,c,d,e){return d*((b=b/e-1)*b*b+1)+c},easeInOutCubic:function(a,b,c,d,e){if((b/=e/2)<1)return d/2*b*b*b+c;return d/2*((b-=2)*b*b+2)+c},easeInQuart:function(a,b,c,d,e){return d*(b/=e)*b*b*b+c},easeOutQuart:function(a,b,c,d,e){return-d*((b=b/e-1)*b*b*b-1)+c},easeInOutQuart:function(a,b,c,d,e){if((b/=e/2)<1)return d/2*b*b*b*b+c;return-d/2*((b-=2)*b*b*b-2)+c},easeInQuint:function(a,b,c,d,e){return d*(b/=e)*b*b*b*b+c},easeOutQuint:function(a,b,c,d,e){return d*((b=b/e-1)*b*b*b*b+1)+c},easeInOutQuint:function(a,b,c,d,e){if((b/=e/2)<1)return d/2*b*b*b*b*b+c;return d/2*((b-=2)*b*b*b*b+2)+c},easeInSine:function(a,b,c,d,e){return-d*Math.cos(b/e*(Math.PI/2))+d+c},easeOutSine:function(a,b,c,d,e){return d*Math.sin(b/e*(Math.PI/2))+c},easeInOutSine:function(a,b,c,d,e){return-d/2*(Math.cos(Math.PI*b/e)-1)+c},easeInExpo:function(a,b,c,d,e){return b==0?c:d*Math.pow(2,10*(b/e-1))+c},easeOutExpo:function(a,b,c,d,e){return b==e?c+d:d*(-Math.pow(2,-10*b/e)+1)+c},easeInOutExpo:function(a,b,c,d,e){if(b==0)return c;if(b==e)return c+d;if((b/=e/2)<1)return d/2*Math.pow(2,10*(b-1))+c;return d/2*(-Math.pow(2,-10*--b)+2)+c},easeInCirc:function(a,b,c,d,e){return-d*(Math.sqrt(1-(b/=e)*b)-1)+c},easeOutCirc:function(a,b,c,d,e){return 
d*Math.sqrt(1-(b=b/e-1)*b)+c},easeInOutCirc:function(a,b,c,d,e){if((b/=e/2)<1)return-d/2*(Math.sqrt(1-b*b)-1)+c;return d/2*(Math.sqrt(1-(b-=2)*b)+1)+c},easeInElastic:function(a,b,c,d,e){var f=1.70158,g=0,h=d;if(b==0)return c;if((b/=e)==1)return c+d;g||(g=e*.3);if(h").css({position:"absolute",visibility:"visible",left:-j*(g/d),top:-i*(h/c)}).parent().addClass("ui-effects-explode").css({position:"absolute",overflow:"hidden",width:g/d,height:h/c,left:f.left+j*(g/d)+(b.options.mode=="show"?(j-Math.floor(d/2))*(g/d):0),top:f.top+i*(h/c)+(b.options.mode=="show"?(i-Math.floor(c/2))*(h/c):0),opacity:b.options.mode=="show"?0:1}).animate({left:f.left+j*(g/d)+(b.options.mode=="show"?0:(j-Math.floor(d/2))*(g/d)),top:f.top+i*(h/c)+(b.options.mode=="show"?0:(i-Math.floor(c/2))*(h/c)),opacity:b.options.mode=="show"?1:0},b.duration||500);setTimeout(function(){b.options.mode=="show"?e.css({visibility:"visible"}):e.css({visibility:"visible"}).hide(),b.callback&&b.callback.apply(e[0]),e.dequeue(),a("div.ui-effects-explode").remove()},b.duration||500)})}}(jQuery),function(a,b){a.effects.fade=function(b){return this.queue(function(){var c=a(this),d=a.effects.setMode(c,b.options.mode||"hide");c.animate({opacity:d},{queue:!1,duration:b.duration,easing:b.options.easing,complete:function(){b.callback&&b.callback.apply(this,arguments),c.dequeue()}})})}}(jQuery),function(a,b){a.effects.fold=function(b){return this.queue(function(){var c=a(this),d=["position","top","bottom","left","right"],e=a.effects.setMode(c,b.options.mode||"hide"),f=b.options.size||15,g=!!b.options.horizFirst,h=b.duration?b.duration/2:a.fx.speeds._default/2;a.effects.save(c,d),c.show();var i=a.effects.createWrapper(c).css({overflow:"hidden"}),j=e=="show"!=g,k=j?["width","height"]:["height","width"],l=j?[i.width(),i.height()]:[i.height(),i.width()],m=/([0-9]+)%/.exec(f);m&&(f=parseInt(m[1],10)/100*l[e=="hide"?0:1]),e=="show"&&i.css(g?{height:0,width:f}:{height:f,width:0});var n={},p={};n[k[0]]=e=="show"?l[0]:f,p[k[1]]=e=="show"?l[1]:0,i.animate(n,h,b.options.easing).animate(p,h,b.options.easing,function(){e=="hide"&&c.hide(),a.effects.restore(c,d),a.effects.removeWrapper(c),b.callback&&b.callback.apply(c[0],arguments),c.dequeue()})})}}(jQuery),function(a,b){a.effects.highlight=function(b){return this.queue(function(){var c=a(this),d=["backgroundImage","backgroundColor","opacity"],e=a.effects.setMode(c,b.options.mode||"show"),f={backgroundColor:c.css("backgroundColor")};e=="hide"&&(f.opacity=0),a.effects.save(c,d),c.show().css({backgroundImage:"none",backgroundColor:b.options.color||"#ffff99"}).animate(f,{queue:!1,duration:b.duration,easing:b.options.easing,complete:function(){e=="hide"&&c.hide(),a.effects.restore(c,d),e=="show"&&!a.support.opacity&&this.style.removeAttribute("filter"),b.callback&&b.callback.apply(this,arguments),c.dequeue()}})})}}(jQuery),function(a,b){a.effects.pulsate=function(b){return this.queue(function(){var c=a(this),d=a.effects.setMode(c,b.options.mode||"show");times=(b.options.times||5)*2-1,duration=b.duration?b.duration/2:a.fx.speeds._default/2,isVisible=c.is(":visible"),animateTo=0,isVisible||(c.css("opacity",0).show(),animateTo=1),(d=="hide"&&isVisible||d=="show"&&!isVisible)&×--;for(var 
[minified jQuery UI 1.8.18: effects core, accordion, autocomplete, menu, button/buttonset, datepicker, dialog, position utilities, progressbar, slider, and tabs widgets — embedded HTML string literals were stripped during text extraction]
b$(){try{return new a.ActiveXObject("Microsoft.XMLHTTP")}catch(b){}}function bZ(){try{return new a.XMLHttpRequest}catch(b){}}function bY(){d(a).unload(function(){for(var a in bW)bW[a](0,1)})}function bS(a,c){a.dataFilter&&(c=a.dataFilter(c,a.dataType));var e=a.dataTypes,f={},g,h,i=e.length,j,k=e[0],l,m,n,o,p;for(g=1;g=0===c})}function P(a){return!a||!a.parentNode||a.parentNode.nodeType===11}function H(a,b){return(a&&a!=="*"?a+".":"")+b.replace(t,"`").replace(u,"&")}function G(a){var b,c,e,f,g,h,i,j,k,l,m,n,o,p=[],q=[],s=d._data(this,"events");if(a.liveFired!==this&&s&&s.live&&!a.target.disabled&&(!a.button||a.type!=="click")){a.namespace&&(n=new RegExp("(^|\\.)"+a.namespace.split(".").join("\\.(?:.*\\.)?")+"(\\.|$)")),a.liveFired=this;var t=s.live.slice(0);for(i=0;ic)break;a.currentTarget=f.elem,a.data=f.handleObj.data,a.handleObj=f.handleObj,o=f.handleObj.origHandler.apply(f.elem,arguments);if(o===!1||a.isPropagationStopped()){c=f.level,o===!1&&(b=!1);if(a.isImmediatePropagationStopped())break}}return b}}function E(a,c,e){var f=d.extend({},e[0]);f.type=a,f.originalEvent={},f.liveFired=b,d.event.handle.call(c,f),f.isDefaultPrevented()&&e[0].preventDefault()}function y(){return!0}function x(){return!1}function i(a){for(var b in a)if(b!=="toJSON")return!1;return!0}function h(a,c,e){if(e===b&&a.nodeType===1){e=a.getAttribute("data-"+c);if(typeof e==="string"){try{e=e==="true"?!0:e==="false"?!1:e==="null"?null:d.isNaN(e)?g.test(e)?d.parseJSON(e):e:parseFloat(e)}catch(f){}d.data(a,c,e)}else e=b}return e}var c=a.document,d=function(){function G(){if(!d.isReady){try{c.documentElement.doScroll("left")}catch(a){setTimeout(G,1);return}d.ready()}}var d=function(a,b){return new d.fn.init(a,b,g)},e=a.jQuery,f=a.$,g,h=/^(?:[^<]*(<[\w\W]+>)[^>]*$|#([\w\-]+)$)/,i=/\S/,j=/^\s+/,k=/\s+$/,l=/\d/,m=/^<(\w+)\s*\/?>(?:<\/\1>)?$/,n=/^[\],:{}\s]*$/,o=/\\(?:["\\\/bfnrt]|u[0-9a-fA-F]{4})/g,p=/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(?:[eE][+\-]?\d+)?/g,q=/(?:^|:|,)(?:\s*\[)+/g,r=/(webkit)[ \/]([\w.]+)/,s=/(opera)(?:.*version)?[ \/]([\w.]+)/,t=/(msie) ([\w.]+)/,u=/(mozilla)(?:.*? 
rv:([\w.]+))?/,v=navigator.userAgent,w,x,y,z=Object.prototype.toString,A=Object.prototype.hasOwnProperty,B=Array.prototype.push,C=Array.prototype.slice,D=String.prototype.trim,E=Array.prototype.indexOf,F={};d.fn=d.prototype={constructor:d,init:function(a,e,f){var g,i,j,k;if(!a)return this;if(a.nodeType){this.context=this[0]=a,this.length=1;return this}if(a==="body"&&!e&&c.body){this.context=c,this[0]=c.body,this.selector="body",this.length=1;return this}if(typeof a==="string"){g=h.exec(a);if(!g||!g[1]&&e)return!e||e.jquery?(e||f).find(a):this.constructor(e).find(a);if(g[1]){e=e instanceof d?e[0]:e,k=e?e.ownerDocument||e:c,j=m.exec(a),j?d.isPlainObject(e)?(a=[c.createElement(j[1])],d.fn.attr.call(a,e,!0)):a=[k.createElement(j[1])]:(j=d.buildFragment([g[1]],[k]),a=(j.cacheable?d.clone(j.fragment):j.fragment).childNodes);return d.merge(this,a)}i=c.getElementById(g[2]);if(i&&i.parentNode){if(i.id!==g[2])return f.find(a);this.length=1,this[0]=i}this.context=c,this.selector=a;return this}if(d.isFunction(a))return f.ready(a);a.selector!==b&&(this.selector=a.selector,this.context=a.context);return d.makeArray(a,this)},selector:"",jquery:"1.5.2",length:0,size:function(){return this.length},toArray:function(){return C.call(this,0)},get:function(a){return a==null?this.toArray():a<0?this[this.length+a]:this[a]},pushStack:function(a,b,c){var e=this.constructor();d.isArray(a)?B.apply(e,a):d.merge(e,a),e.prevObject=this,e.context=this.context,b==="find"?e.selector=this.selector+(this.selector?" ":"")+c:b&&(e.selector=this.selector+"."+b+"("+c+")");return e},each:function(a,b){return d.each(this,a,b)},ready:function(a){d.bindReady(),x.done(a);return this},eq:function(a){return a===-1?this.slice(a):this.slice(a,+a+1)},first:function(){return this.eq(0)},last:function(){return this.eq(-1)},slice:function(){return this.pushStack(C.apply(this,arguments),"slice",C.call(arguments).join(","))},map:function(a){return this.pushStack(d.map(this,function(b,c){return a.call(b,c,b)}))},end:function(){return this.prevObject||this.constructor(null)},push:B,sort:[].sort,splice:[].splice},d.fn.init.prototype=d.fn,d.extend=d.fn.extend=function(){var a,c,e,f,g,h,i=arguments[0]||{},j=1,k=arguments.length,l=!1;typeof i==="boolean"&&(l=i,i=arguments[1]||{},j=2),typeof i!=="object"&&!d.isFunction(i)&&(i={}),k===j&&(i=this,--j);for(;j0)return;x.resolveWith(c,[d]),d.fn.trigger&&d(c).trigger("ready").unbind("ready")}},bindReady:function(){if(!x){x=d._Deferred();if(c.readyState==="complete")return setTimeout(d.ready,1);if(c.addEventListener)c.addEventListener("DOMContentLoaded",y,!1),a.addEventListener("load",d.ready,!1);else if(c.attachEvent){c.attachEvent("onreadystatechange",y),a.attachEvent("onload",d.ready);var b=!1;try{b=a.frameElement==null}catch(e){}c.documentElement.doScroll&&b&&G()}}},isFunction:function(a){return d.type(a)==="function"},isArray:Array.isArray||function(a){return d.type(a)==="array"},isWindow:function(a){return a&&typeof a==="object"&&"setInterval"in a},isNaN:function(a){return a==null||!l.test(a)||isNaN(a)},type:function(a){return a==null?String(a):F[z.call(a)]||"object"},isPlainObject:function(a){if(!a||d.type(a)!=="object"||a.nodeType||d.isWindow(a))return!1;if(a.constructor&&!A.call(a,"constructor")&&!A.call(a.constructor.prototype,"isPrototypeOf"))return!1;var c;for(c in a){}return c===b||A.call(a,c)},isEmptyObject:function(a){for(var b in a)return!1;return!0},error:function(a){throw a},parseJSON:function(b){if(typeof b!=="string"||!b)return 
null;b=d.trim(b);if(n.test(b.replace(o,"@").replace(p,"]").replace(q,"")))return a.JSON&&a.JSON.parse?a.JSON.parse(b):(new Function("return "+b))();d.error("Invalid JSON: "+b)},parseXML:function(b,c,e){a.DOMParser?(e=new DOMParser,c=e.parseFromString(b,"text/xml")):(c=new ActiveXObject("Microsoft.XMLDOM"),c.async="false",c.loadXML(b)),e=c.documentElement,(!e||!e.nodeName||e.nodeName==="parsererror")&&d.error("Invalid XML: "+b);return c},noop:function(){},globalEval:function(a){if(a&&i.test(a)){var b=c.head||c.getElementsByTagName("head")[0]||c.documentElement,e=c.createElement("script");d.support.scriptEval()?e.appendChild(c.createTextNode(a)):e.text=a,b.insertBefore(e,b.firstChild),b.removeChild(e)}},nodeName:function(a,b){return a.nodeName&&a.nodeName.toUpperCase()===b.toUpperCase()},each:function(a,c,e){var f,g=0,h=a.length,i=h===b||d.isFunction(a);if(e){if(i){for(f in a)if(c.apply(a[f],e)===!1)break}else for(;g1?f.call(arguments,0):c,--g||h.resolveWith(h,f.call(b,0))}}var b=arguments,c=0,e=b.length,g=e,h=e<=1&&a&&d.isFunction(a.promise)?a:d.Deferred();if(e>1){for(;c
      a";var e=b.getElementsByTagName("*"),f=b.getElementsByTagName("a")[0],g=c.createElement("select"),h=g.appendChild(c.createElement("option")),i=b.getElementsByTagName("input")[0];if(e&&e.length&&f){d.support={leadingWhitespace:b.firstChild.nodeType===3,tbody:!b.getElementsByTagName("tbody").length,htmlSerialize:!!b.getElementsByTagName("link").length,style:/red/.test(f.getAttribute("style")),hrefNormalized:f.getAttribute("href")==="/a",opacity:/^0.55$/.test(f.style.opacity),cssFloat:!!f.style.cssFloat,checkOn:i.value==="on",optSelected:h.selected,deleteExpando:!0,optDisabled:!1,checkClone:!1,noCloneEvent:!0,noCloneChecked:!0,boxModel:null,inlineBlockNeedsLayout:!1,shrinkWrapBlocks:!1,reliableHiddenOffsets:!0,reliableMarginRight:!0},i.checked=!0,d.support.noCloneChecked=i.cloneNode(!0).checked,g.disabled=!0,d.support.optDisabled=!h.disabled;var j=null;d.support.scriptEval=function(){if(j===null){var b=c.documentElement,e=c.createElement("script"),f="script"+d.now();try{e.appendChild(c.createTextNode("window."+f+"=1;"))}catch(g){}b.insertBefore(e,b.firstChild),a[f]?(j=!0,delete a[f]):j=!1,b.removeChild(e)}return j};try{delete b.test}catch(k){d.support.deleteExpando=!1}!b.addEventListener&&b.attachEvent&&b.fireEvent&&(b.attachEvent("onclick",function l(){d.support.noCloneEvent=!1,b.detachEvent("onclick",l)}),b.cloneNode(!0).fireEvent("onclick")),b=c.createElement("div"),b.innerHTML="";var m=c.createDocumentFragment();m.appendChild(b.firstChild),d.support.checkClone=m.cloneNode(!0).cloneNode(!0).lastChild.checked,d(function(){var a=c.createElement("div"),b=c.getElementsByTagName("body")[0];if(b){a.style.width=a.style.paddingLeft="1px",b.appendChild(a),d.boxModel=d.support.boxModel=a.offsetWidth===2,"zoom"in a.style&&(a.style.display="inline",a.style.zoom=1,d.support.inlineBlockNeedsLayout=a.offsetWidth===2,a.style.display="",a.innerHTML="
      ",d.support.shrinkWrapBlocks=a.offsetWidth!==2),a.innerHTML="
      t
      ";var e=a.getElementsByTagName("td");d.support.reliableHiddenOffsets=e[0].offsetHeight===0,e[0].style.display="",e[1].style.display="none",d.support.reliableHiddenOffsets=d.support.reliableHiddenOffsets&&e[0].offsetHeight===0,a.innerHTML="",c.defaultView&&c.defaultView.getComputedStyle&&(a.style.width="1px",a.style.marginRight="0",d.support.reliableMarginRight=(parseInt(c.defaultView.getComputedStyle(a,null).marginRight,10)||0)===0),b.removeChild(a).style.display="none",a=e=null}});var n=function(a){var b=c.createElement("div");a="on"+a;if(!b.attachEvent)return!0;var d=a in b;d||(b.setAttribute(a,"return;"),d=typeof b[a]==="function");return d};d.support.submitBubbles=n("submit"),d.support.changeBubbles=n("change"),b=e=f=null}}();var g=/^(?:\{.*\}|\[.*\])$/;d.extend({cache:{},uuid:0,expando:"jQuery"+(d.fn.jquery+Math.random()).replace(/\D/g,""),noData:{embed:!0,object:"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000",applet:!0},hasData:function(a){a=a.nodeType?d.cache[a[d.expando]]:a[d.expando];return!!a&&!i(a)},data:function(a,c,e,f){if(d.acceptData(a)){var g=d.expando,h=typeof c==="string",i,j=a.nodeType,k=j?d.cache:a,l=j?a[d.expando]:a[d.expando]&&d.expando;if((!l||f&&l&&!k[l][g])&&h&&e===b)return;l||(j?a[d.expando]=l=++d.uuid:l=d.expando),k[l]||(k[l]={},j||(k[l].toJSON=d.noop));if(typeof c==="object"||typeof c==="function")f?k[l][g]=d.extend(k[l][g],c):k[l]=d.extend(k[l],c);i=k[l],f&&(i[g]||(i[g]={}),i=i[g]),e!==b&&(i[c]=e);if(c==="events"&&!i[c])return i[g]&&i[g].events;return h?i[c]:i}},removeData:function(b,c,e){if(d.acceptData(b)){var f=d.expando,g=b.nodeType,h=g?d.cache:b,j=g?b[d.expando]:d.expando;if(!h[j])return;if(c){var k=e?h[j][f]:h[j];if(k){delete k[c];if(!i(k))return}}if(e){delete h[j][f];if(!i(h[j]))return}var l=h[j][f];d.support.deleteExpando||h!=a?delete h[j]:h[j]=null,l?(h[j]={},g||(h[j].toJSON=d.noop),h[j][f]=l):g&&(d.support.deleteExpando?delete b[d.expando]:b.removeAttribute?b.removeAttribute(d.expando):b[d.expando]=null)}},_data:function(a,b,c){return d.data(a,b,c,!0)},acceptData:function(a){if(a.nodeName){var b=d.noData[a.nodeName.toLowerCase()];if(b)return b!==!0&&a.getAttribute("classid")===b}return!0}}),d.fn.extend({data:function(a,c){var e=null;if(typeof a==="undefined"){if(this.length){e=d.data(this[0]);if(this[0].nodeType===1){var f=this[0].attributes,g;for(var i=0,j=f.length;i-1)return!0;return!1},val:function(a){if(!arguments.length){var c=this[0];if(c){if(d.nodeName(c,"option")){var e=c.attributes.value;return!e||e.specified?c.value:c.text}if(d.nodeName(c,"select")){var f=c.selectedIndex,g=[],h=c.options,i=c.type==="select-one";if(f<0)return null;for(var j=i?f:0,k=i?f+1:h.length;j=0;else if(d.nodeName(this,"select")){var f=d.makeArray(e);d("option",this).each(function(){this.selected=d.inArray(d(this).val(),f)>=0}),f.length||(this.selectedIndex=-1)}else this.value=e}})}}),d.extend({attrFn:{val:!0,css:!0,html:!0,text:!0,data:!0,width:!0,height:!0,offset:!0},attr:function(a,c,e,f){if(!a||a.nodeType===3||a.nodeType===8||a.nodeType===2)return b;if(f&&c in d.attrFn)return d(a)[c](e);var g=a.nodeType!==1||!d.isXMLDoc(a),h=e!==b;c=g&&d.props[c]||c;if(a.nodeType===1){var i=m.test(c);if(c==="selected"&&!d.support.optSelected){var j=a.parentNode;j&&(j.selectedIndex,j.parentNode&&j.parentNode.selectedIndex)}if((c in a||a[c]!==b)&&g&&!i){h&&(c==="type"&&n.test(a.nodeName)&&a.parentNode&&d.error("type property can't be changed"),e===null?a.nodeType===1&&a.removeAttribute(c):a[c]=e);if(d.nodeName(a,"form")&&a.getAttributeNode(c))return 
a.getAttributeNode(c).nodeValue;if(c==="tabIndex"){var k=a.getAttributeNode("tabIndex");return k&&k.specified?k.value:o.test(a.nodeName)||p.test(a.nodeName)&&a.href?0:b}return a[c]}if(!d.support.style&&g&&c==="style"){h&&(a.style.cssText=""+e);return a.style.cssText}h&&a.setAttribute(c,""+e);if(!a.attributes[c]&&(a.hasAttribute&&!a.hasAttribute(c)))return b;var l=!d.support.hrefNormalized&&g&&i?a.getAttribute(c,2):a.getAttribute(c);return l===null?b:l}h&&(a[c]=e);return a[c]}});var r=/\.(.*)$/,s=/^(?:textarea|input|select)$/i,t=/\./g,u=/ /g,v=/[^\w\s.|`]/g,w=function(a){return a.replace(v,"\\$&")};d.event={add:function(c,e,f,g){if(c.nodeType!==3&&c.nodeType!==8){try{d.isWindow(c)&&(c!==a&&!c.frameElement)&&(c=a)}catch(h){}if(f===!1)f=x;else if(!f)return;var i,j;f.handler&&(i=f,f=i.handler),f.guid||(f.guid=d.guid++);var k=d._data(c);if(!k)return;var l=k.events,m=k.handle;l||(k.events=l={}),m||(k.handle=m=function(a){return typeof d!=="undefined"&&d.event.triggered!==a.type?d.event.handle.apply(m.elem,arguments):b}),m.elem=c,e=e.split(" ");var n,o=0,p;while(n=e[o++]){j=i?d.extend({},i):{handler:f,data:g},n.indexOf(".")>-1?(p=n.split("."),n=p.shift(),j.namespace=p.slice(0).sort().join(".")):(p=[],j.namespace=""),j.type=n,j.guid||(j.guid=f.guid);var q=l[n],r=d.event.special[n]||{};if(!q){q=l[n]=[];if(!r.setup||r.setup.call(c,g,p,m)===!1)c.addEventListener?c.addEventListener(n,m,!1):c.attachEvent&&c.attachEvent("on"+n,m)}r.add&&(r.add.call(c,j),j.handler.guid||(j.handler.guid=f.guid)),q.push(j),d.event.global[n]=!0}c=null}},global:{},remove:function(a,c,e,f){if(a.nodeType!==3&&a.nodeType!==8){e===!1&&(e=x);var g,h,i,j,k=0,l,m,n,o,p,q,r,s=d.hasData(a)&&d._data(a),t=s&&s.events;if(!s||!t)return;c&&c.type&&(e=c.handler,c=c.type);if(!c||typeof c==="string"&&c.charAt(0)==="."){c=c||"";for(h in t)d.event.remove(a,h+c);return}c=c.split(" ");while(h=c[k++]){r=h,q=null,l=h.indexOf(".")<0,m=[],l||(m=h.split("."),h=m.shift(),n=new RegExp("(^|\\.)"+d.map(m.slice(0).sort(),w).join("\\.(?:.*\\.)?")+"(\\.|$)")),p=t[h];if(!p)continue;if(!e){for(j=0;j=0&&(a.type=f=f.slice(0,-1),a.exclusive=!0),e||(a.stopPropagation(),d.event.global[f]&&d.each(d.cache,function(){var b=d.expando,e=this[b];e&&e.events&&e.events[f]&&d.event.trigger(a,c,e.handle.elem)}));if(!e||e.nodeType===3||e.nodeType===8)return b;a.result=b,a.target=e,c=d.makeArray(c),c.unshift(a)}a.currentTarget=e;var h=d._data(e,"handle");h&&h.apply(e,c);var i=e.parentNode||e.ownerDocument;try{e&&e.nodeName&&d.noData[e.nodeName.toLowerCase()]||e["on"+f]&&e["on"+f].apply(e,c)===!1&&(a.result=!1,a.preventDefault())}catch(j){}if(!a.isPropagationStopped()&&i)d.event.trigger(a,c,i,!0);else if(!a.isDefaultPrevented()){var k,l=a.target,m=f.replace(r,""),n=d.nodeName(l,"a")&&m==="click",o=d.event.special[m]||{};if((!o._default||o._default.call(e,a)===!1)&&!n&&!(l&&l.nodeName&&d.noData[l.nodeName.toLowerCase()])){try{l[m]&&(k=l["on"+m],k&&(l["on"+m]=null),d.event.triggered=a.type,l[m]())}catch(p){}k&&(l["on"+m]=k),d.event.triggered=b}}},handle:function(c){var e,f,g,h,i,j=[],k=d.makeArray(arguments);c=k[0]=d.event.fix(c||a.event),c.currentTarget=this,e=c.type.indexOf(".")<0&&!c.exclusive,e||(g=c.type.split("."),c.type=g.shift(),j=g.slice(0).sort(),h=new RegExp("(^|\\.)"+j.join("\\.(?:.*\\.)?")+"(\\.|$)")),c.namespace=c.namespace||j.join("."),i=d._data(this,"events"),f=(i||{})[c.type];if(i&&f){f=f.slice(0);for(var l=0,m=f.length;l-1?d.map(a.options,function(a){return a.selected}).join("-"):"":a.nodeName.toLowerCase()==="select"&&(c=a.selectedIndex);return c},D=function 
D(a){var c=a.target,e,f;if(s.test(c.nodeName)&&!c.readOnly){e=d._data(c,"_change_data"),f=C(c),(a.type!=="focusout"||c.type!=="radio")&&d._data(c,"_change_data",f);if(e===b||f===e)return;if(e!=null||f)a.type="change",a.liveFired=b,d.event.trigger(a,arguments[1],c)}};d.event.special.change={filters:{focusout:D,beforedeactivate:D,click:function(a){var b=a.target,c=b.type;(c==="radio"||c==="checkbox"||b.nodeName.toLowerCase()==="select")&&D.call(this,a)},keydown:function(a){var b=a.target,c=b.type;(a.keyCode===13&&b.nodeName.toLowerCase()!=="textarea"||a.keyCode===32&&(c==="checkbox"||c==="radio")||c==="select-multiple")&&D.call(this,a)},beforeactivate:function(a){var b=a.target;d._data(b,"_change_data",C(b))}},setup:function(a,b){if(this.type==="file")return!1;for(var c in B)d.event.add(this,c+".specialChange",B[c]);return s.test(this.nodeName)},teardown:function(a){d.event.remove(this,".specialChange");return s.test(this.nodeName)}},B=d.event.special.change.filters,B.focus=B.beforeactivate}c.addEventListener&&d.each({focus:"focusin",blur:"focusout"},function(a,b){function f(a){var c=d.event.fix(a);c.type=b,c.originalEvent={},d.event.trigger(c,null,c.target),c.isDefaultPrevented()&&a.preventDefault()}var e=0;d.event.special[b]={setup:function(){e++===0&&c.addEventListener(a,f,!0)},teardown:function(){--e===0&&c.removeEventListener(a,f,!0)}}}),d.each(["bind","one"],function(a,c){d.fn[c]=function(a,e,f){if(typeof a==="object"){for(var g in a)this[c](g,e,a[g],f);return this}if(d.isFunction(e)||e===!1)f=e,e=b;var h=c==="one"?d.proxy(f,function(a){d(this).unbind(a,h);return f.apply(this,arguments)}):f;if(a==="unload"&&c!=="one")this.one(a,e,f);else for(var i=0,j=this.length;i0?this.bind(b,a,c):this.trigger(b)},d.attrFn&&(d.attrFn[b]=!0)}),function(){function u(a,b,c,d,e,f){for(var g=0,h=d.length;g0){j=i;break}}i=i[a]}d[g]=j}}}function t(a,b,c,d,e,f){for(var g=0,h=d.length;g+~,(\[\\]+)+|[>+~])(\s*,\s*)?((?:.|\r|\n)*)/g,e=0,f=Object.prototype.toString,g=!1,h=!0,i=/\\/g,j=/\W/;[0,0].sort(function(){h=!1;return 0});var k=function(b,d,e,g){e=e||[],d=d||c;var h=d;if(d.nodeType!==1&&d.nodeType!==9)return[];if(!b||typeof b!=="string")return e;var i,j,n,o,q,r,s,t,u=!0,w=k.isXML(d),x=[],y=b;do{a.exec(""),i=a.exec(y);if(i){y=i[3],x.push(i[1]);if(i[2]){o=i[3];break}}}while(i);if(x.length>1&&m.exec(b))if(x.length===2&&l.relative[x[0]])j=v(x[0]+x[1],d);else{j=l.relative[x[0]]?[d]:k(x.shift(),d);while(x.length)b=x.shift(),l.relative[b]&&(b+=x.shift()),j=v(b,j)}else{!g&&x.length>1&&d.nodeType===9&&!w&&l.match.ID.test(x[0])&&!l.match.ID.test(x[x.length-1])&&(q=k.find(x.shift(),d,w),d=q.expr?k.filter(q.expr,q.set)[0]:q.set[0]);if(d){q=g?{expr:x.pop(),set:p(g)}:k.find(x.pop(),x.length===1&&(x[0]==="~"||x[0]==="+")&&d.parentNode?d.parentNode:d,w),j=q.expr?k.filter(q.expr,q.set):q.set,x.length>0?n=p(j):u=!1;while(x.length)r=x.pop(),s=r,l.relative[r]?s=x.pop():r="",s==null&&(s=d),l.relative[r](n,s,w)}else n=x=[]}n||(n=j),n||k.error(r||b);if(f.call(n)==="[object Array]")if(u)if(d&&d.nodeType===1)for(t=0;n[t]!=null;t++)n[t]&&(n[t]===!0||n[t].nodeType===1&&k.contains(d,n[t]))&&e.push(j[t]);else for(t=0;n[t]!=null;t++)n[t]&&n[t].nodeType===1&&e.push(j[t]);else e.push.apply(e,n);else p(n,e);o&&(k(o,h,e,g),k.uniqueSort(e));return e};k.uniqueSort=function(a){if(r){g=h,a.sort(r);if(g)for(var b=1;b0},k.find=function(a,b,c){var d;if(!a)return[];for(var e=0,f=l.order.length;e":function(a,b){var c,d=typeof 
b==="string",e=0,f=a.length;if(d&&!j.test(b)){b=b.toLowerCase();for(;e=0)?c||d.push(h):c&&(b[g]=!1));return!1},ID:function(a){return a[1].replace(i,"")},TAG:function(a,b){return a[1].replace(i,"").toLowerCase()},CHILD:function(a){if(a[1]==="nth"){a[2]||k.error(a[0]),a[2]=a[2].replace(/^\+|\s*/g,"");var b=/(-?)(\d*)(?:n([+\-]?\d*))?/.exec(a[2]==="even"&&"2n"||a[2]==="odd"&&"2n+1"||!/\D/.test(a[2])&&"0n+"+a[2]||a[2]);a[2]=b[1]+(b[2]||1)-0,a[3]=b[3]-0}else a[2]&&k.error(a[0]);a[0]=e++;return a},ATTR:function(a,b,c,d,e,f){var g=a[1]=a[1].replace(i,"");!f&&l.attrMap[g]&&(a[1]=l.attrMap[g]),a[4]=(a[4]||a[5]||"").replace(i,""),a[2]==="~="&&(a[4]=" "+a[4]+" ");return a},PSEUDO:function(b,c,d,e,f){if(b[1]==="not")if((a.exec(b[3])||"").length>1||/^\w/.test(b[3]))b[3]=k(b[3],null,null,c);else{var g=k.filter(b[3],c,d,!0^f);d||e.push.apply(e,g);return!1}else if(l.match.POS.test(b[0])||l.match.CHILD.test(b[0]))return!0;return b},POS:function(a){a.unshift(!0);return a}},filters:{enabled:function(a){return a.disabled===!1&&a.type!=="hidden"},disabled:function(a){return a.disabled===!0},checked:function(a){return a.checked===!0},selected:function(a){a.parentNode&&a.parentNode.selectedIndex;return a.selected===!0},parent:function(a){return!!a.firstChild},empty:function(a){return!a.firstChild},has:function(a,b,c){return!!k(c[3],a).length},header:function(a){return/h\d/i.test(a.nodeName)},text:function(a){var b=a.getAttribute("type"),c=a.type;return"text"===c&&(b===c||b===null)},radio:function(a){return"radio"===a.type},checkbox:function(a){return"checkbox"===a.type},file:function(a){return"file"===a.type},password:function(a){return"password"===a.type},submit:function(a){return"submit"===a.type},image:function(a){return"image"===a.type},reset:function(a){return"reset"===a.type},button:function(a){return"button"===a.type||a.nodeName.toLowerCase()==="button"},input:function(a){return/input|select|textarea|button/i.test(a.nodeName)}},setFilters:{first:function(a,b){return b===0},last:function(a,b,c,d){return b===d.length-1},even:function(a,b){return b%2===0},odd:function(a,b){return b%2===1},lt:function(a,b,c){return bc[3]-0},nth:function(a,b,c){return c[3]-0===b},eq:function(a,b,c){return c[3]-0===b}},filter:{PSEUDO:function(a,b,c,d){var e=b[1],f=l.filters[e];if(f)return f(a,c,b,d);if(e==="contains")return(a.textContent||a.innerText||k.getText([a])||"").indexOf(b[3])>=0;if(e==="not"){var g=b[3];for(var h=0,i=g.length;h=0}},ID:function(a,b){return a.nodeType===1&&a.getAttribute("id")===b},TAG:function(a,b){return b==="*"&&a.nodeType===1||a.nodeName.toLowerCase()===b},CLASS:function(a,b){return(" "+(a.className||a.getAttribute("class"))+" ").indexOf(b)>-1},ATTR:function(a,b){var c=b[1],d=l.attrHandle[c]?l.attrHandle[c](a):a[c]!=null?a[c]:a.getAttribute(c),e=d+"",f=b[2],g=b[4];return d==null?f==="!=":f==="="?e===g:f==="*="?e.indexOf(g)>=0:f==="~="?(" "+e+" ").indexOf(g)>=0:g?f==="!="?e!==g:f==="^="?e.indexOf(g)===0:f==="$="?e.substr(e.length-g.length)===g:f==="|="?e===g||e.substr(0,g.length+1)===g+"-":!1:e&&d!==!1},POS:function(a,b,c,d){var e=b[2],f=l.setFilters[e];if(f)return f(a,c,b,d)}}},m=l.match.POS,n=function(a,b){return"\\"+(b-0+1)};for(var o in l.match)l.match[o]=new RegExp(l.match[o].source+/(?![^\[]*\])(?![^\(]*\))/.source),l.leftMatch[o]=new RegExp(/(^(?:.|\r|\n)*?)/.source+l.match[o].source.replace(/\\(\d+)/g,n));var p=function(a,b){a=Array.prototype.slice.call(a,0);if(b){b.push.apply(b,a);return b}return 
a};try{Array.prototype.slice.call(c.documentElement.childNodes,0)[0].nodeType}catch(q){p=function(a,b){var c=0,d=b||[];if(f.call(a)==="[object Array]")Array.prototype.push.apply(d,a);else if(typeof a.length==="number")for(var e=a.length;c",e.insertBefore(a,e.firstChild),c.getElementById(d)&&(l.find.ID=function(a,c,d){if(typeof c.getElementById!=="undefined"&&!d){var e=c.getElementById(a[1]);return e?e.id===a[1]||typeof e.getAttributeNode!=="undefined"&&e.getAttributeNode("id").nodeValue===a[1]?[e]:b:[]}},l.filter.ID=function(a,b){var c=typeof a.getAttributeNode!=="undefined"&&a.getAttributeNode("id");return a.nodeType===1&&c&&c.nodeValue===b}),e.removeChild(a),e=a=null}(),function(){var a=c.createElement("div");a.appendChild(c.createComment("")),a.getElementsByTagName("*").length>0&&(l.find.TAG=function(a,b){var c=b.getElementsByTagName(a[1]);if(a[1]==="*"){var d=[];for(var e=0;c[e];e++)c[e].nodeType===1&&d.push(c[e]);c=d}return c}),a.innerHTML="",a.firstChild&&typeof a.firstChild.getAttribute!=="undefined"&&a.firstChild.getAttribute("href")!=="#"&&(l.attrHandle.href=function(a){return a.getAttribute("href",2)}),a=null}(),c.querySelectorAll&&function(){var a=k,b=c.createElement("div"),d="__sizzle__";b.innerHTML="

      ";if(!b.querySelectorAll||b.querySelectorAll(".TEST").length!==0){k=function(b,e,f,g){e=e||c;if(!g&&!k.isXML(e)){var h=/^(\w+$)|^\.([\w\-]+$)|^#([\w\-]+$)/.exec(b);if(h&&(e.nodeType===1||e.nodeType===9)){if(h[1])return p(e.getElementsByTagName(b),f);if(h[2]&&l.find.CLASS&&e.getElementsByClassName)return p(e.getElementsByClassName(h[2]),f)}if(e.nodeType===9){if(b==="body"&&e.body)return p([e.body],f);if(h&&h[3]){var i=e.getElementById(h[3]);if(!i||!i.parentNode)return p([],f);if(i.id===h[3])return p([i],f)}try{return p(e.querySelectorAll(b),f)}catch(j){}}else if(e.nodeType===1&&e.nodeName.toLowerCase()!=="object"){var m=e,n=e.getAttribute("id"),o=n||d,q=e.parentNode,r=/^\s*[+~]/.test(b);n?o=o.replace(/'/g,"\\$&"):e.setAttribute("id",o),r&&q&&(e=e.parentNode);try{if(!r||q)return p(e.querySelectorAll("[id='"+o+"'] "+b),f)}catch(s){}finally{n||m.removeAttribute("id")}}}return a(b,e,f,g)};for(var e in a)k[e]=a[e];b=null}}(),function(){var a=c.documentElement,b=a.matchesSelector||a.mozMatchesSelector||a.webkitMatchesSelector||a.msMatchesSelector;if(b){var d=!b.call(c.createElement("div"),"div"),e=!1;try{b.call(c.documentElement,"[test!='']:sizzle")}catch(f){e=!0}k.matchesSelector=function(a,c){c=c.replace(/\=\s*([^'"\]]*)\s*\]/g,"='$1']");if(!k.isXML(a))try{if(e||!l.match.PSEUDO.test(c)&&!/!=/.test(c)){var f=b.call(a,c);if(f||!d||a.document&&a.document.nodeType!==11)return f}}catch(g){}return k(c,null,null,[a]).length>0}}}(),function(){var a=c.createElement("div");a.innerHTML="
      ";if(a.getElementsByClassName&&a.getElementsByClassName("e").length!==0){a.lastChild.className="e";if(a.getElementsByClassName("e").length===1)return;l.order.splice(1,0,"CLASS"),l.find.CLASS=function(a,b,c){if(typeof b.getElementsByClassName!=="undefined"&&!c)return b.getElementsByClassName(a[1])},a=null}}(),c.documentElement.contains?k.contains=function(a,b){return a!==b&&(a.contains?a.contains(b):!0)}:c.documentElement.compareDocumentPosition?k.contains=function(a,b){return!!(a.compareDocumentPosition(b)&16)}:k.contains=function(){return!1},k.isXML=function(a){var b=(a?a.ownerDocument||a:0).documentElement;return b?b.nodeName!=="HTML":!1};var v=function(a,b){var c,d=[],e="",f=b.nodeType?[b]:b;while(c=l.match.PSEUDO.exec(a))e+=c[0],a=a.replace(l.match.PSEUDO,"");a=l.relative[a]?a+"*":a;for(var g=0,h=f.length;g0)for(var g=c;g0},closest:function(a,b){var c=[],e,f,g=this[0];if(d.isArray(a)){var h,i,j={},k=1;if(g&&a.length){for(e=0,f=a.length;e-1:d(g).is(h))&&c.push({selector:i,elem:g,level:k});g=g.parentNode,k++}}return c}var l=N.test(a)?d(a,b||this.context):null;for(e=0,f=this.length;e-1:d.find.matchesSelector(g,a)){c.push(g);break}g=g.parentNode;if(!g||!g.ownerDocument||g===b)break}}c=c.length>1?d.unique(c):c;return this.pushStack(c,"closest",a)},index:function(a){if(!a||typeof a==="string")return d.inArray(this[0],a?d(a):this.parent().children());return d.inArray(a.jquery?a[0]:a,this)},add:function(a,b){var c=typeof a==="string"?d(a,b):d.makeArray(a),e=d.merge(this.get(),c);return this.pushStack(P(c[0])||P(e[0])?e:d.unique(e))},andSelf:function(){return this.add(this.prevObject)}}),d.each({parent:function(a){var b=a.parentNode;return b&&b.nodeType!==11?b:null},parents:function(a){return d.dir(a,"parentNode")},parentsUntil:function(a,b,c){return d.dir(a,"parentNode",c)},next:function(a){return d.nth(a,2,"nextSibling")},prev:function(a){return d.nth(a,2,"previousSibling")},nextAll:function(a){return d.dir(a,"nextSibling")},prevAll:function(a){return d.dir(a,"previousSibling")},nextUntil:function(a,b,c){return d.dir(a,"nextSibling",c)},prevUntil:function(a,b,c){return d.dir(a,"previousSibling",c)},siblings:function(a){return d.sibling(a.parentNode.firstChild,a)},children:function(a){return d.sibling(a.firstChild)},contents:function(a){return d.nodeName(a,"iframe")?a.contentDocument||a.contentWindow.document:d.makeArray(a.childNodes)}},function(a,b){d.fn[a]=function(c,e){var f=d.map(this,b,c),g=M.call(arguments);I.test(a)||(e=c),e&&typeof e==="string"&&(f=d.filter(e,f)),f=this.length>1&&!O[a]?d.unique(f):f,(this.length>1||K.test(e))&&J.test(a)&&(f=f.reverse());return this.pushStack(f,a,g.join(","))}}),d.extend({filter:function(a,b,c){c&&(a=":not("+a+")");return b.length===1?d.find.matchesSelector(b[0],a)?[b[0]]:[]:d.find.matches(a,b)},dir:function(a,c,e){var f=[],g=a[c];while(g&&g.nodeType!==9&&(e===b||g.nodeType!==1||!d(g).is(e)))g.nodeType===1&&f.push(g),g=g[c];return f},nth:function(a,b,c,d){b=b||1;var e=0;for(;a;a=a[c])if(a.nodeType===1&&++e===b)break;return a},sibling:function(a,b){var c=[];for(;a;a=a.nextSibling)a.nodeType===1&&a!==b&&c.push(a);return c}});var R=/ jQuery\d+="(?:\d+|null)"/g,S=/^\s+/,T=/<(?!area|br|col|embed|hr|img|input|link|meta|param)(([\w:]+)[^>]*)\/>/ig,U=/<([\w:]+)/,V=/
      ","
      "],tr:[2,"","
      "],td:[3,"","
      "],col:[2,"","
      "],area:[1,"",""],_default:[0,"",""]};Z.optgroup=Z.option,Z.tbody=Z.tfoot=Z.colgroup=Z.caption=Z.thead,Z.th=Z.td,d.support.htmlSerialize||(Z._default=[1,"div
      ","
      "]),d.fn.extend({text:function(a){if(d.isFunction(a))return this.each(function(b){var c=d(this);c.text(a.call(this,b,c.text()))});if(typeof a!=="object"&&a!==b)return this.empty().append((this[0]&&this[0].ownerDocument||c).createTextNode(a));return d.text(this)},wrapAll:function(a){if(d.isFunction(a))return this.each(function(b){d(this).wrapAll(a.call(this,b))});if(this[0]){var b=d(a,this[0].ownerDocument).eq(0).clone(!0);this[0].parentNode&&b.insertBefore(this[0]),b.map(function(){var a=this;while(a.firstChild&&a.firstChild.nodeType===1)a=a.firstChild;return a}).append(this)}return this},wrapInner:function(a){if(d.isFunction(a))return this.each(function(b){d(this).wrapInner(a.call(this,b))});return this.each(function(){var b=d(this),c=b.contents();c.length?c.wrapAll(a):b.append(a)})},wrap:function(a){return this.each(function(){d(this).wrapAll(a)})},unwrap:function(){return this.parent().each(function(){d.nodeName(this,"body")||d(this).replaceWith(this.childNodes)}).end()},append:function(){return this.domManip(arguments,!0,function(a){this.nodeType===1&&this.appendChild(a)})},prepend:function(){return this.domManip(arguments,!0,function(a){this.nodeType===1&&this.insertBefore(a,this.firstChild)})},before:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,!1,function(a){this.parentNode.insertBefore(a,this)});if(arguments.length){var a=d(arguments[0]);a.push.apply(a,this.toArray());return this.pushStack(a,"before",arguments)}},after:function(){if(this[0]&&this[0].parentNode)return this.domManip(arguments,!1,function(a){this.parentNode.insertBefore(a,this.nextSibling)});if(arguments.length){var a=this.pushStack(this,"after",arguments);a.push.apply(a,d(arguments[0]).toArray());return a}},remove:function(a,b){for(var c=0,e;(e=this[c])!=null;c++)if(!a||d.filter(a,[e]).length)!b&&e.nodeType===1&&(d.cleanData(e.getElementsByTagName("*")),d.cleanData([e])),e.parentNode&&e.parentNode.removeChild(e);return this},empty:function(){for(var a=0,b;(b=this[a])!=null;a++){b.nodeType===1&&d.cleanData(b.getElementsByTagName("*"));while(b.firstChild)b.removeChild(b.firstChild)}return this},clone:function(a,b){a=a==null?!1:a,b=b==null?a:b;return this.map(function(){return d.clone(this,a,b)})},html:function(a){if(a===b)return this[0]&&this[0].nodeType===1?this[0].innerHTML.replace(R,""):null;if(typeof a!=="string"||X.test(a)||!d.support.leadingWhitespace&&S.test(a)||Z[(U.exec(a)||["",""])[1].toLowerCase()])d.isFunction(a)?this.each(function(b){var c=d(this);c.html(a.call(this,b,c.html()))}):this.empty().append(a);else{a=a.replace(T,"<$1>");try{for(var c=0,e=this.length;c1&&l0?this.clone(!0):this).get();d(f[h])[b](j),e=e.concat(j)}return this.pushStack(e,a,f.selector)}}),d.extend({clone:function(a,b,c){var e=a.cloneNode(!0),f,g,h;if((!d.support.noCloneEvent||!d.support.noCloneChecked)&&(a.nodeType===1||a.nodeType===11)&&!d.isXMLDoc(a)){ba(a,e),f=bb(a),g=bb(e);for(h=0;f[h];++h)ba(f[h],g[h])}if(b){_(a,e);if(c){f=bb(a),g=bb(e);for(h=0;f[h];++h)_(f[h],g[h])}}return e},clean:function(a,b,e,f){b=b||c,typeof b.createElement==="undefined"&&(b=b.ownerDocument||b[0]&&b[0].ownerDocument||c);var g=[];for(var h=0,i;(i=a[h])!=null;h++){typeof i==="number"&&(i+="");if(!i)continue;if(typeof i!=="string"||W.test(i)){if(typeof i==="string"){i=i.replace(T,"<$1>");var j=(U.exec(i)||["",""])[1].toLowerCase(),k=Z[j]||Z._default,l=k[0],m=b.createElement("div");m.innerHTML=k[1]+i+k[2];while(l--)m=m.lastChild;if(!d.support.tbody){var 
n=V.test(i),o=j==="table"&&!n?m.firstChild&&m.firstChild.childNodes:k[1]===""&&!n?m.childNodes:[];for(var p=o.length-1;p>=0;--p)d.nodeName(o[p],"tbody")&&!o[p].childNodes.length&&o[p].parentNode.removeChild(o[p])}!d.support.leadingWhitespace&&S.test(i)&&m.insertBefore(b.createTextNode(S.exec(i)[0]),m.firstChild),i=m.childNodes}}else i=b.createTextNode(i);i.nodeType?g.push(i):g=d.merge(g,i)}if(e)for(h=0;g[h];h++)!f||!d.nodeName(g[h],"script")||g[h].type&&g[h].type.toLowerCase()!=="text/javascript"?(g[h].nodeType===1&&g.splice.apply(g,[h+1,0].concat(d.makeArray(g[h].getElementsByTagName("script")))),e.appendChild(g[h])):f.push(g[h].parentNode?g[h].parentNode.removeChild(g[h]):g[h]);return g},cleanData:function(a){var b,c,e=d.cache,f=d.expando,g=d.event.special,h=d.support.deleteExpando;for(var i=0,j;(j=a[i])!=null;i++){if(j.nodeName&&d.noData[j.nodeName.toLowerCase()])continue;c=j[d.expando];if(c){b=e[c]&&e[c][f];if(b&&b.events){for(var k in b.events)g[k]?d.event.remove(j,k):d.removeEvent(j,k,b.handle);b.handle&&(b.handle.elem=null)}h?delete j[d.expando]:j.removeAttribute&&j.removeAttribute(d.expando),delete e[c]}}}});var bd=/alpha\([^)]*\)/i,be=/opacity=([^)]*)/,bf=/-([a-z])/ig,bg=/([A-Z]|^ms)/g,bh=/^-?\d+(?:px)?$/i,bi=/^-?\d/,bj={position:"absolute",visibility:"hidden",display:"block"},bk=["Left","Right"],bl=["Top","Bottom"],bm,bn,bo,bp=function(a,b){return b.toUpperCase()};d.fn.css=function(a,c){if(arguments.length===2&&c===b)return this;return d.access(this,a,c,!0,function(a,c,e){return e!==b?d.style(a,c,e):d.css(a,c)})},d.extend({cssHooks:{opacity:{get:function(a,b){if(b){var c=bm(a,"opacity","opacity");return c===""?"1":c}return a.style.opacity}}},cssNumber:{zIndex:!0,fontWeight:!0,opacity:!0,zoom:!0,lineHeight:!0},cssProps:{"float":d.support.cssFloat?"cssFloat":"styleFloat"},style:function(a,c,e,f){if(a&&a.nodeType!==3&&a.nodeType!==8&&a.style){var g,h=d.camelCase(c),i=a.style,j=d.cssHooks[h];c=d.cssProps[h]||h;if(e===b){if(j&&"get"in j&&(g=j.get(a,!1,f))!==b)return g;return i[c]}if(typeof e==="number"&&isNaN(e)||e==null)return;typeof e==="number"&&!d.cssNumber[h]&&(e+="px");if(!j||!("set"in j)||(e=j.set(a,e))!==b)try{i[c]=e}catch(k){}}},css:function(a,c,e){var f,g=d.camelCase(c),h=d.cssHooks[g];c=d.cssProps[g]||g;if(h&&"get"in h&&(f=h.get(a,!0,e))!==b)return f;if(bm)return bm(a,c,g)},swap:function(a,b,c){var d={};for(var e in b)d[e]=a.style[e],a.style[e]=b[e];c.call(a);for(e in b)a.style[e]=d[e]},camelCase:function(a){return a.replace(bf,bp)}}),d.curCSS=d.css,d.each(["height","width"],function(a,b){d.cssHooks[b]={get:function(a,c,e){var f;if(c){a.offsetWidth!==0?f=bq(a,b,e):d.swap(a,bj,function(){f=bq(a,b,e)});if(f<=0){f=bm(a,b,b),f==="0px"&&bo&&(f=bo(a,b,b));if(f!=null)return f===""||f==="auto"?"0px":f}if(f<0||f==null){f=a.style[b];return f===""||f==="auto"?"0px":f}return typeof f==="string"?f:f+"px"}},set:function(a,b){if(!bh.test(b))return b;b=parseFloat(b);if(b>=0)return b+"px"}}}),d.support.opacity||(d.cssHooks.opacity={get:function(a,b){return be.test((b&&a.currentStyle?a.currentStyle.filter:a.style.filter)||"")?parseFloat(RegExp.$1)/100+"":b?"1":""},set:function(a,b){var c=a.style;c.zoom=1;var e=d.isNaN(b)?"":"alpha(opacity="+b*100+")",f=c.filter||"";c.filter=bd.test(f)?f.replace(bd,e):c.filter+" "+e}}),d(function(){d.support.reliableMarginRight||(d.cssHooks.marginRight={get:function(a,b){var c;d.swap(a,{display:"inline-block"},function(){b?c=bm(a,"margin-right","marginRight"):c=a.style.marginRight});return 
c}})}),c.defaultView&&c.defaultView.getComputedStyle&&(bn=function(a,c,e){var f,g,h;e=e.replace(bg,"-$1").toLowerCase();if(!(g=a.ownerDocument.defaultView))return b;if(h=g.getComputedStyle(a,null))f=h.getPropertyValue(e),f===""&&!d.contains(a.ownerDocument.documentElement,a)&&(f=d.style(a,e));return f}),c.documentElement.currentStyle&&(bo=function(a,b){var c,d=a.currentStyle&&a.currentStyle[b],e=a.runtimeStyle&&a.runtimeStyle[b],f=a.style;!bh.test(d)&&bi.test(d)&&(c=f.left,e&&(a.runtimeStyle.left=a.currentStyle.left),f.left=b==="fontSize"?"1em":d||0,d=f.pixelLeft+"px",f.left=c,e&&(a.runtimeStyle.left=e));return d===""?"auto":d}),bm=bn||bo,d.expr&&d.expr.filters&&(d.expr.filters.hidden=function(a){var b=a.offsetWidth,c=a.offsetHeight;return b===0&&c===0||!d.support.reliableHiddenOffsets&&(a.style.display||d.css(a,"display"))==="none"},d.expr.filters.visible=function(a){return!d.expr.filters.hidden(a)});var br=/%20/g,bs=/\[\]$/,bt=/\r?\n/g,bu=/#.*$/,bv=/^(.*?):[ \t]*([^\r\n]*)\r?$/mg,bw=/^(?:color|date|datetime|email|hidden|month|number|password|range|search|tel|text|time|url|week)$/i,bx=/^(?:about|app|app\-storage|.+\-extension|file|widget):$/,by=/^(?:GET|HEAD)$/,bz=/^\/\//,bA=/\?/,bB=/)<[^<]*)*<\/script>/gi,bC=/^(?:select|textarea)/i,bD=/\s+/,bE=/([?&])_=[^&]*/,bF=/(^|\-)([a-z])/g,bG=function(a,b,c){return b+c.toUpperCase()},bH=/^([\w\+\.\-]+:)(?:\/\/([^\/?#:]*)(?::(\d+))?)?/,bI=d.fn.load,bJ={},bK={},bL,bM;try{bL=c.location.href}catch(bN){bL=c.createElement("a"),bL.href="",bL=bL.href}bM=bH.exec(bL.toLowerCase())||[],d.fn.extend({load:function(a,c,e){if(typeof a!=="string"&&bI)return bI.apply(this,arguments);if(!this.length)return this;var f=a.indexOf(" ");if(f>=0){var g=a.slice(f,a.length);a=a.slice(0,f)}var h="GET";c&&(d.isFunction(c)?(e=c,c=b):typeof c==="object"&&(c=d.param(c,d.ajaxSettings.traditional),h="POST"));var i=this;d.ajax({url:a,type:h,dataType:"html",data:c,complete:function(a,b,c){c=a.responseText,a.isResolved()&&(a.done(function(a){c=a}),i.html(g?d("
      ").append(c.replace(bB,"")).find(g):c)),e&&i.each(e,[c,b,a])}});return this},serialize:function(){return d.param(this.serializeArray())},serializeArray:function(){return this.map(function(){return this.elements?d.makeArray(this.elements):this}).filter(function(){return this.name&&!this.disabled&&(this.checked||bC.test(this.nodeName)||bw.test(this.type))}).map(function(a,b){var c=d(this).val();return c==null?null:d.isArray(c)?d.map(c,function(a,c){return{name:b.name,value:a.replace(bt,"\r\n")}}):{name:b.name,value:c.replace(bt,"\r\n")}}).get()}}),d.each("ajaxStart ajaxStop ajaxComplete ajaxError ajaxSuccess ajaxSend".split(" "),function(a,b){d.fn[b]=function(a){return this.bind(b,a)}}),d.each(["get","post"],function(a,c){d[c]=function(a,e,f,g){d.isFunction(e)&&(g=g||f,f=e,e=b);return d.ajax({type:c,url:a,data:e,success:f,dataType:g})}}),d.extend({getScript:function(a,c){return d.get(a,b,c,"script")},getJSON:function(a,b,c){return d.get(a,b,c,"json")},ajaxSetup:function(a,b){b?d.extend(!0,a,d.ajaxSettings,b):(b=a,a=d.extend(!0,d.ajaxSettings,b));for(var c in {context:1,url:1})c in b?a[c]=b[c]:c in d.ajaxSettings&&(a[c]=d.ajaxSettings[c]);return a},ajaxSettings:{url:bL,isLocal:bx.test(bM[1]),global:!0,type:"GET",contentType:"application/x-www-form-urlencoded",processData:!0,async:!0,accepts:{xml:"application/xml, text/xml",html:"text/html",text:"text/plain",json:"application/json, text/javascript","*":"*/*"},contents:{xml:/xml/,html:/html/,json:/json/},responseFields:{xml:"responseXML",text:"responseText"},converters:{"* text":a.String,"text html":!0,"text json":d.parseJSON,"text xml":d.parseXML}},ajaxPrefilter:bO(bJ),ajaxTransport:bO(bK),ajax:function(a,c){function v(a,c,l,n){if(r!==2){r=2,p&&clearTimeout(p),o=b,m=n||"",u.readyState=a?4:0;var q,t,v,w=l?bR(e,u,l):b,x,y;if(a>=200&&a<300||a===304){if(e.ifModified){if(x=u.getResponseHeader("Last-Modified"))d.lastModified[k]=x;if(y=u.getResponseHeader("Etag"))d.etag[k]=y}if(a===304)c="notmodified",q=!0;else try{t=bS(e,w),c="success",q=!0}catch(z){c="parsererror",v=z}}else{v=c;if(!c||a)c="error",a<0&&(a=0)}u.status=a,u.statusText=c,q?h.resolveWith(f,[t,c,u]):h.rejectWith(f,[u,c,v]),u.statusCode(j),j=b,s&&g.trigger("ajax"+(q?"Success":"Error"),[u,e,q?t:v]),i.resolveWith(f,[u,c]),s&&(g.trigger("ajaxComplete",[u,e]),--d.active||d.event.trigger("ajaxStop"))}}typeof a==="object"&&(c=a,a=b),c=c||{};var e=d.ajaxSetup({},c),f=e.context||e,g=f!==e&&(f.nodeType||f instanceof d)?d(f):d.event,h=d.Deferred(),i=d._Deferred(),j=e.statusCode||{},k,l={},m,n,o,p,q,r=0,s,t,u={readyState:0,setRequestHeader:function(a,b){r||(l[a.toLowerCase().replace(bF,bG)]=b);return this},getAllResponseHeaders:function(){return r===2?m:null},getResponseHeader:function(a){var c;if(r===2){if(!n){n={};while(c=bv.exec(m))n[c[1].toLowerCase()]=c[2]}c=n[a.toLowerCase()]}return c===b?null:c},overrideMimeType:function(a){r||(e.mimeType=a);return this},abort:function(a){a=a||"abort",o&&o.abort(a),v(0,a);return this}};h.promise(u),u.success=u.done,u.error=u.fail,u.complete=i.done,u.statusCode=function(a){if(a){var b;if(r<2)for(b in a)j[b]=[j[b],a[b]];else b=a[u.status],u.then(b,b)}return this},e.url=((a||e.url)+"").replace(bu,"").replace(bz,bM[1]+"//"),e.dataTypes=d.trim(e.dataType||"*").toLowerCase().split(bD),e.crossDomain==null&&(q=bH.exec(e.url.toLowerCase()),e.crossDomain=q&&(q[1]!=bM[1]||q[2]!=bM[2]||(q[3]||(q[1]==="http:"?80:443))!=(bM[3]||(bM[1]==="http:"?80:443)))),e.data&&e.processData&&typeof 
e.data!=="string"&&(e.data=d.param(e.data,e.traditional)),bP(bJ,e,c,u);if(r===2)return!1;s=e.global,e.type=e.type.toUpperCase(),e.hasContent=!by.test(e.type),s&&d.active++===0&&d.event.trigger("ajaxStart");if(!e.hasContent){e.data&&(e.url+=(bA.test(e.url)?"&":"?")+e.data),k=e.url;if(e.cache===!1){var w=d.now(),x=e.url.replace(bE,"$1_="+w);e.url=x+(x===e.url?(bA.test(e.url)?"&":"?")+"_="+w:"")}}if(e.data&&e.hasContent&&e.contentType!==!1||c.contentType)l["Content-Type"]=e.contentType;e.ifModified&&(k=k||e.url,d.lastModified[k]&&(l["If-Modified-Since"]=d.lastModified[k]),d.etag[k]&&(l["If-None-Match"]=d.etag[k])),l.Accept=e.dataTypes[0]&&e.accepts[e.dataTypes[0]]?e.accepts[e.dataTypes[0]]+(e.dataTypes[0]!=="*"?", */*; q=0.01":""):e.accepts["*"];for(t in e.headers)u.setRequestHeader(t,e.headers[t]);if(e.beforeSend&&(e.beforeSend.call(f,u,e)===!1||r===2)){u.abort();return!1}for(t in {success:1,error:1,complete:1})u[t](e[t]);o=bP(bK,e,c,u);if(o){u.readyState=1,s&&g.trigger("ajaxSend",[u,e]),e.async&&e.timeout>0&&(p=setTimeout(function(){u.abort("timeout")},e.timeout));try{r=1,o.send(l,v)}catch(y){status<2?v(-1,y):d.error(y)}}else v(-1,"No Transport");return u},param:function(a,c){var e=[],f=function(a,b){b=d.isFunction(b)?b():b,e[e.length]=encodeURIComponent(a)+"="+encodeURIComponent(b)};c===b&&(c=d.ajaxSettings.traditional);if(d.isArray(a)||a.jquery&&!d.isPlainObject(a))d.each(a,function(){f(this.name,this.value)});else for(var g in a)bQ(g,a[g],c,f);return e.join("&").replace(br,"+")}}),d.extend({active:0,lastModified:{},etag:{}});var bT=d.now(),bU=/(\=)\?(&|$)|\?\?/i;d.ajaxSetup({jsonp:"callback",jsonpCallback:function(){return d.expando+"_"+bT++}}),d.ajaxPrefilter("json jsonp",function(b,c,e){var f=typeof b.data==="string";if(b.dataTypes[0]==="jsonp"||c.jsonpCallback||c.jsonp!=null||b.jsonp!==!1&&(bU.test(b.url)||f&&bU.test(b.data))){var g,h=b.jsonpCallback=d.isFunction(b.jsonpCallback)?b.jsonpCallback():b.jsonpCallback,i=a[h],j=b.url,k=b.data,l="$1"+h+"$2",m=function(){a[h]=i,g&&d.isFunction(i)&&a[h](g[0])};b.jsonp!==!1&&(j=j.replace(bU,l),b.url===j&&(f&&(k=k.replace(bU,l)),b.data===k&&(j+=(/\?/.test(j)?"&":"?")+b.jsonp+"="+h))),b.url=j,b.data=k,a[h]=function(a){g=[a]},e.then(m,m),b.converters["script json"]=function(){g||d.error(h+" was not called");return g[0]},b.dataTypes[0]="json";return"script"}}),d.ajaxSetup({accepts:{script:"text/javascript, application/javascript, application/ecmascript, application/x-ecmascript"},contents:{script:/javascript|ecmascript/},converters:{"text script":function(a){d.globalEval(a);return a}}}),d.ajaxPrefilter("script",function(a){a.cache===b&&(a.cache=!1),a.crossDomain&&(a.type="GET",a.global=!1)}),d.ajaxTransport("script",function(a){if(a.crossDomain){var d,e=c.head||c.getElementsByTagName("head")[0]||c.documentElement;return{send:function(f,g){d=c.createElement("script"),d.async="async",a.scriptCharset&&(d.charset=a.scriptCharset),d.src=a.url,d.onload=d.onreadystatechange=function(a,c){if(!d.readyState||/loaded|complete/.test(d.readyState))d.onload=d.onreadystatechange=null,e&&d.parentNode&&e.removeChild(d),d=b,c||g(200,"success")},e.insertBefore(d,e.firstChild)},abort:function(){d&&d.onload(0,1)}}}});var bV=d.now(),bW,bX;d.ajaxSettings.xhr=a.ActiveXObject?function(){return!this.isLocal&&bZ()||b$()}:bZ,bX=d.ajaxSettings.xhr(),d.support.ajax=!!bX,d.support.cors=bX&&"withCredentials"in bX,bX=b,d.support.ajax&&d.ajaxTransport(function(a){if(!a.crossDomain||d.support.cors){var c;return{send:function(e,f){var 
g=a.xhr(),h,i;a.username?g.open(a.type,a.url,a.async,a.username,a.password):g.open(a.type,a.url,a.async);if(a.xhrFields)for(i in a.xhrFields)g[i]=a.xhrFields[i];a.mimeType&&g.overrideMimeType&&g.overrideMimeType(a.mimeType),!a.crossDomain&&!e["X-Requested-With"]&&(e["X-Requested-With"]="XMLHttpRequest");try{for(i in e)g.setRequestHeader(i,e[i])}catch(j){}g.send(a.hasContent&&a.data||null),c=function(e,i){var j,k,l,m,n;try{if(c&&(i||g.readyState===4)){c=b,h&&(g.onreadystatechange=d.noop,delete bW[h]);if(i)g.readyState!==4&&g.abort();else{j=g.status,l=g.getAllResponseHeaders(),m={},n=g.responseXML,n&&n.documentElement&&(m.xml=n),m.text=g.responseText;try{k=g.statusText}catch(o){k=""}j||!a.isLocal||a.crossDomain?j===1223&&(j=204):j=m.text?200:404}}}catch(p){i||f(-1,p)}m&&f(j,k,m,l)},a.async&&g.readyState!==4?(bW||(bW={},bY()),h=bV++,g.onreadystatechange=bW[h]=c):c()},abort:function(){c&&c(0,1)}}}});var b_={},ca=/^(?:toggle|show|hide)$/,cb=/^([+\-]=)?([\d+.\-]+)([a-z%]*)$/i,cc,cd=[["height","marginTop","marginBottom","paddingTop","paddingBottom"],["width","marginLeft","marginRight","paddingLeft","paddingRight"],["opacity"]];d.fn.extend({show:function(a,b,c){var e,f;if(a||a===0)return this.animate(ce("show",3),a,b,c);for(var g=0,h=this.length;g=0;a--)c[a].elem===this&&(b&&c[a](!0),c.splice(a,1))}),b||this.dequeue();return this}}),d.each({slideDown:ce("show",1),slideUp:ce("hide",1),slideToggle:ce("toggle",1),fadeIn:{opacity:"show"},fadeOut:{opacity:"hide"},fadeToggle:{opacity:"toggle"}},function(a,b){d.fn[a]=function(a,c,d){return this.animate(b,a,c,d)}}),d.extend({speed:function(a,b,c){var e=a&&typeof a==="object"?d.extend({},a):{complete:c||!c&&b||d.isFunction(a)&&a,duration:a,easing:c&&b||b&&!d.isFunction(b)&&b};e.duration=d.fx.off?0:typeof e.duration==="number"?e.duration:e.duration in d.fx.speeds?d.fx.speeds[e.duration]:d.fx.speeds._default,e.old=e.complete,e.complete=function(){e.queue!==!1&&d(this).dequeue(),d.isFunction(e.old)&&e.old.call(this)};return e},easing:{linear:function(a,b,c,d){return c+d*a},swing:function(a,b,c,d){return(-Math.cos(a*Math.PI)/2+.5)*d+c}},timers:[],fx:function(a,b,c){this.options=b,this.elem=a,this.prop=c,b.orig||(b.orig={})}}),d.fx.prototype={update:function(){this.options.step&&this.options.step.call(this.elem,this.now,this),(d.fx.step[this.prop]||d.fx.step._default)(this)},cur:function(){if(this.elem[this.prop]!=null&&(!this.elem.style||this.elem.style[this.prop]==null))return this.elem[this.prop];var a,b=d.css(this.elem,this.prop);return isNaN(a=parseFloat(b))?!b||b==="auto"?0:b:a},custom:function(a,b,c){function g(a){return e.step(a)}var e=this,f=d.fx;this.startTime=d.now(),this.start=a,this.end=b,this.unit=c||this.unit||(d.cssNumber[this.prop]?"":"px"),this.now=this.start,this.pos=this.state=0,g.elem=this.elem,g()&&d.timers.push(g)&&!cc&&(cc=setInterval(f.tick,f.interval))},show:function(){this.options.orig[this.prop]=d.style(this.elem,this.prop),this.options.show=!0,this.custom(this.prop==="width"||this.prop==="height"?1:0,this.cur()),d(this.elem).show()},hide:function(){this.options.orig[this.prop]=d.style(this.elem,this.prop),this.options.hide=!0,this.custom(this.cur(),0)},step:function(a){var b=d.now(),c=!0;if(a||b>=this.options.duration+this.startTime){this.now=this.end,this.pos=this.state=1,this.update(),this.options.curAnim[this.prop]=!0;for(var e in this.options.curAnim)this.options.curAnim[e]!==!0&&(c=!1);if(c){if(this.options.overflow!=null&&!d.support.shrinkWrapBlocks){var 
f=this.elem,g=this.options;d.each(["","X","Y"],function(a,b){f.style["overflow"+b]=g.overflow[a]})}this.options.hide&&d(this.elem).hide();if(this.options.hide||this.options.show)for(var h in this.options.curAnim)d.style(this.elem,h,this.options.orig[h]);this.options.complete.call(this.elem)}return!1}var i=b-this.startTime;this.state=i/this.options.duration;var j=this.options.specialEasing&&this.options.specialEasing[this.prop],k=this.options.easing||(d.easing.swing?"swing":"linear");this.pos=d.easing[j||k](this.state,i,0,1,this.options.duration),this.now=this.start+(this.end-this.start)*this.pos,this.update();return!0}},d.extend(d.fx,{tick:function(){var a=d.timers;for(var b=0;b
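The file above ships the stock jQuery UI Tabs widget bundled with cobbler-web. Below is a minimal usage sketch of that 1.8.18 tabs API; it assumes jQuery and jQuery UI are already loaded, and the "#tabs" container id, panel names and option values are illustrative, not taken from cobbler-web's own templates.

// Minimal sketch of driving the bundled jQuery UI Tabs widget (1.8.18 API).
// Assumes the page contains markup like:
//   <div id="tabs"><ul><li><a href="#panel-1">General</a></li></ul><div id="panel-1">...</div></div>
// The "#tabs" id and the option values here are illustrative assumptions.
$(function () {
    $("#tabs").tabs({
        spinner: "Loading...",            // text shown inside a tab while its Ajax panel loads
        select: function (event, ui) {    // fired before a tab is activated
            // ui.index, ui.tab and ui.panel identify the tab being selected
            if (window.console) { console.log("selected tab", ui.index); }
        }
    });
});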
      ";d.extend(b.style,{position:"absolute",top:0,left:0,margin:0,border:0,width:"1px",height:"1px",visibility:"hidden"}),b.innerHTML=j,a.insertBefore(b,a.firstChild),e=b.firstChild,f=e.firstChild,h=e.nextSibling.firstChild.firstChild,this.doesNotAddBorder=f.offsetTop!==5,this.doesAddBorderForTableAndCells=h.offsetTop===5,f.style.position="fixed",f.style.top="20px",this.supportsFixedPosition=f.offsetTop===20||f.offsetTop===15,f.style.position=f.style.top="",e.style.overflow="hidden",e.style.position="relative",this.subtractsBorderForOverflowNotVisible=f.offsetTop===-5,this.doesNotIncludeMarginInBodyOffset=a.offsetTop!==i,a.removeChild(b),d.offset.initialize=d.noop},bodyOffset:function(a){var b=a.offsetTop,c=a.offsetLeft;d.offset.initialize(),d.offset.doesNotIncludeMarginInBodyOffset&&(b+=parseFloat(d.css(a,"marginTop"))||0,c+=parseFloat(d.css(a,"marginLeft"))||0);return{top:b,left:c}},setOffset:function(a,b,c){var e=d.css(a,"position");e==="static"&&(a.style.position="relative");var f=d(a),g=f.offset(),h=d.css(a,"top"),i=d.css(a,"left"),j=(e==="absolute"||e==="fixed")&&d.inArray("auto",[h,i])>-1,k={},l={},m,n;j&&(l=f.position()),m=j?l.top:parseInt(h,10)||0,n=j?l.left:parseInt(i,10)||0,d.isFunction(b)&&(b=b.call(a,c,g)),b.top!=null&&(k.top=b.top-g.top+m),b.left!=null&&(k.left=b.left-g.left+n),"using"in b?b.using.call(a,k):f.css(k)}},d.fn.extend({position:function(){if(!this[0])return null;var a=this[0],b=this.offsetParent(),c=this.offset(),e=ch.test(b[0].nodeName)?{top:0,left:0}:b.offset();c.top-=parseFloat(d.css(a,"marginTop"))||0,c.left-=parseFloat(d.css(a,"marginLeft"))||0,e.top+=parseFloat(d.css(b[0],"borderTopWidth"))||0,e.left+=parseFloat(d.css(b[0],"borderLeftWidth"))||0;return{top:c.top-e.top,left:c.left-e.left}},offsetParent:function(){return this.map(function(){var a=this.offsetParent||c.body;while(a&&(!ch.test(a.nodeName)&&d.css(a,"position")==="static"))a=a.offsetParent;return a})}}),d.each(["Left","Top"],function(a,c){var e="scroll"+c;d.fn[e]=function(c){var f=this[0],g;if(!f)return null;if(c!==b)return this.each(function(){g=ci(this),g?g.scrollTo(a?d(g).scrollLeft():c,a?c:d(g).scrollTop()):this[e]=c});g=ci(f);return g?"pageXOffset"in g?g[a?"pageYOffset":"pageXOffset"]:d.support.boxModel&&g.document.documentElement[e]||g.document.body[e]:f[e]}}),d.each(["Height","Width"],function(a,c){var e=c.toLowerCase();d.fn["inner"+c]=function(){return this[0]?parseFloat(d.css(this[0],e,"padding")):null},d.fn["outer"+c]=function(a){return this[0]?parseFloat(d.css(this[0],e,a?"margin":"border")):null},d.fn[e]=function(a){var f=this[0];if(!f)return a==null?null:this;if(d.isFunction(a))return this.each(function(b){var c=d(this);c[e](a.call(this,b,c[e]()))});if(d.isWindow(f)){var g=f.document.documentElement["client"+c];return f.document.compatMode==="CSS1Compat"&&g||f.document.body["client"+c]||g}if(f.nodeType===9)return Math.max(f.documentElement["client"+c],f.body["scroll"+c],f.documentElement["scroll"+c],f.body["offset"+c],f.documentElement["offset"+c]);if(a===b){var h=d.css(f,e),i=parseFloat(h);return d.isNaN(i)?h:i}return this.css(e,typeof a==="string"?a:a+"px")}}),a.jQuery=a.$=d})(window);cobbler-2.4.1/web/content/jsGrowl.css000066400000000000000000000044011227367477500175410ustar00rootroot00000000000000 .jsgrowl_msg_container{ font-family: Arial, Helvetica, sans-serif; position:fixed; z-index: 1000000; opacity: .75; /*filter: alpha(opacity = 75);*/ border-collapse:collapse; width: 450px; text-align:left; } .jsgrowl_msg_container .jsg_body{ background-color:white; width:380px; } 
.jsgrowl_msg_container .jsg_body_container{ position:relative; min-height:30px; } .jsgrowl_msg_container .jsg_clickable{ cursor:pointer; } .jsgrowl_msg_container .jsg_close{ position:absolute; right:0; top:0; cursor:pointer; display:block; background-image: none ; background-color: transparent ; background-repeat: no-repeat; height:22px; width:22px; } .jsgrowl_msg_container .jsg_icon{ float:left; padding:0 10px 0 0 ; height: 32px; } .jsgrowl_msg_container .jsg_title{ font-weight:bold; color:#000; font-size:14px; margin:0 0 10px 0; word-spacing: 3px; border-bottom:1px #000 solid; } .jsgrowl_msg_container .jsg_msg{ font-weight:bold; color:#000; margin: 0 20px 0 0; font-size:12px; word-spacing: 3px; } .jsgrowl_msg_container .jsg_side{ background-color:white; width:10px; } .jsgrowl_msg_container .jsg_middle{ background-color:white; width:380px; } .jsgrowl_msg_container .jsg_corner { height:10px; width:10px; background-color: transparent ; background-repeat: no-repeat; background-image:url(/cobbler_webui_content/jsgrowl_corners.png) ; } .jsgrowl_msg_container .jsg_tl{ background-position: top left; } .jsgrowl_msg_container .jsg_tr{ background-position: top right; } .jsgrowl_msg_container .jsg_bl{ background-position: bottom left; } .jsgrowl_msg_container .jsg_br{ background-position: bottom right; } .jsgrowl_msg_container .jsg_mr{ background-position: top right; } .jsgrowl_msg_container .jsg_mb{ background-position: bottom right; } .jsgrowl_msg_container:hover .jsg_corner { background-image:url(/cobbler_webui_content/jsgrowl_corners_hover.png) ; } .jsgrowl_msg_container:hover .jsg_middle { background-image:url(/cobbler_webui_content/jsgrowl_middle_hover.png) ; } .jsgrowl_msg_container:hover .jsg_side { background-image:url(/cobbler_webui_content/jsgrowl_side_hover.png) ; } .jsgrowl_msg_container:hover .jsg_close{ background-image: url(/cobbler_webui_content/jsgrowl_close.png); } cobbler-2.4.1/web/content/jsGrowl.js000066400000000000000000000161521227367477500173730ustar00rootroot00000000000000/* jsGrowl - JavaScript Messaging System, version 1.6.0.3 * (c) 2009 Rodrigo DeJuana * * jsGrowl is freely distributable under the terms of the BSD license. * *--------------------------------------------------------------------------*/ function jsGrowlMsg(msg, id) { this.initialize(msg, id); } //var jsGrowlMsg = Class.create(); jsGrowlMsg.prototype = { timeout: 5000, hover_timeout: 2000, id: null, title: null, msg: null, top: -1000, right: -1000, element: null, timeout_id: 0, icon_src: null, click_callback: null, close_callback: null, initialize: function(msg, id){ this.msg = msg.msg; this.title = msg.title; this.icon_src = msg.icon; this.click_callback = msg.click_callback; this.close_callback = msg.close_callback; this.sticky = msg.sticky; this.id = id; }, html: function(jsg){ var icon_html = this.icon_src ? '' : ''; var click_html = this.click_callback ? ' onclick="'+jsg+'.clickMsg('+this.id+');" ' : ''; var click_css = this.click_callback ? ' class="jsg_clickable" ' : ''; var title_html = this.title ? '
      '+this.title+'
      ' : ''; return '' + ''+ ''+ ''+ '
      '+ '
      '+ title_html + icon_html + '
      '+this.msg+'
      '+ '
      '; }, setElement: function(){ if (!(this.element)){ this.element = document.getElementById('growl_flash_msg_' + this.id); if (!(this.element)){ return false; } } return true; }, height: function(){ if (this.setElement()){ return this.element.offsetHeight; }else{ return 75; } }, show: function(){ if (this.setElement()){ this.element.show(); } }, hide: function(){ if (this.setElement()){ this.element.style.display = 'none';//hide(); } }, appear: function(){ jsGrowlInterface.appear(this.element); }, fade: function(jsg,timeout){ if ( this.sticky ){ return; } timeout = timeout > 0 ? timeout : this.timeout; var e = this.element; var id = this.id; var f = function(){ jsGrowlInterface.fadeAndRemove(e,jsg,id); }; this.timeout_id = setTimeout(f,timeout); }, removeMsg: function(){ jsGrowlInterface.remove(this.element); if ( this.close_callback ){ this.close_callback(); } }, onOver: function(){ clearTimeout(this.timeout_id); }, onOut: function(jsg){ this.fade(jsg,this.hover_timeout); }, setTop: function(top){ this.top = top; this.element.style.top = top+'px'; }, setRight: function(right){ this.right = right; this.element.style.right = right+'px'; }, click:function(){ this.click_callback(); } }; function jsGrowl(name,opts) { this.initialize(name,opts); } //var jsGrowl = Class.create(); jsGrowl.prototype = { start_top: 10, start_right: 10, gap: 10, table_width: 300, top: 0, right: 10, width: 0, height: 0, msg_id: 0, messages: {}, order: [], queue: [], loaded: false, showing: false, name: '', msg_container: '', initialize: function(name,opts){ opts = opts ? opts : {}; if ( name ){ this.name = name; }else{ alert('The variable name is required. I need this for when I write out the message, the message message can fire the correct events.'); } this.msg_container = opts.msg_container ? opts.msg_container : 'jsGrowl'; var jsg = this; jsGrowlInterface.onload(jsg); }, onload: function(){ this.msg_container = document.getElementById(this.msg_container); if ( !this.msg_container ){ //alert('I need a div on your page with the id "jsGrowl" or I will not work.'); } this.loaded = true; this.height = (typeof window.innerHeight != 'undefined' ? window.innerHeight : document.documentElement.clientHeight); this.width = (typeof window.innerWidth != 'undefined' ? 
window.innerWidth : document.documentElement.clientWidth); this.showMessage(); }, addMessage: function(msg) { this.queue.push(msg); if ( this.loaded && !this.showing ){ this.showMessage(); } }, showMessage: function(){ if ( !this.loaded ){ return; } this.showing = true; var msg = this.queue.shift(); if (!msg){ this.showing = false; return; } var jsg_msg = new jsGrowlMsg(msg, this.msg_id++); var jsg = this; //this.msg_container.insert(jsg_msg.html(jsg.name)); jsGrowlInterface.insert(this.msg_container, jsg_msg.html(jsg.name)) jsg_msg.setElement(); var insert = this.setLocation(jsg_msg); jsg_msg.appear(); jsg_msg.fade(jsg); this.messages[jsg_msg.id] = jsg_msg ; if (insert){ this.order.unshift( jsg_msg.id ); }else{ this.order.push( jsg_msg.id ); } var f = function(){ jsg.showMessage(); }; setTimeout(f,250); }, setLocation: function(msg){ var insert = this.insertMessage(msg); var top = 0; var right = 0; if (insert){ top = this.gap; right = this.gap; }else{ top = this.top + this.gap; right = this.right; if (top+msg.height() > this.height){ right = right + this.table_width + this.gap; top = this.gap; } } this.top = top + msg.height(); this.right = right; msg.hide(); msg.setTop(top); msg.setRight(right); return insert; }, insertMessage: function(msg){ for( var i = 0, l = this.order.length; i < l; i++){ var old_msg = this.messages[this.order[i]]; if (old_msg){ return (msg.height()+this.gap) < old_msg.top; } } this.top = 0; this.right = 10; return false; }, removeMsg: function(id){ var order = this.order; delete this.messages[id]; var i,l; for( i = 0,l = order.length; i < l; i++){ if (order[i] == id){ break; } } order.splice(i,1); this.order = order; }, overMsg: function(id){ if (this.messages[id]){ this.messages[id].onOver(); } }, outMsg: function(id){ var jsg = this; if (jsg.messages[id]){ jsg.messages[id].onOut(jsg); } }, closeMsg: function(id){ var jsg = this; if (jsg.messages[id]){ jsg.messages[id].removeMsg(); jsg.removeMsg(id); } }, clickMsg: function(id){ var jsg = this; if (jsg.messages[id]){ jsg.messages[id].click(); } } }; cobbler-2.4.1/web/content/jsGrowl_jquery.js000066400000000000000000000011671227367477500207720ustar00rootroot00000000000000/* jsGrowl jQuery Interface */ var jsGrowlInterface = { onload: function(jsg){ $(window).load( function() { jsg.onload(); }); }, insert: function(element,html){ $(element).append(html); }, fade:function(element,after_finish){ $("#"+element.id).fadeOut(1000,after_finish); }, appear: function(element){ $("#"+element.id).fadeIn(250); }, remove: function(element){ $("#"+element.id).remove(); }, fadeAndRemove: function(element,jsg,id){ var f = function(){ jsGrowlInterface.remove(element); jsg.removeMsg(id); }; jsGrowlInterface.fade(element, f); } };cobbler-2.4.1/web/content/jsgrowl_close.png000066400000000000000000000075341227367477500207740ustar00rootroot00000000000000PNG  IHDRĴl; pHYs   OiCCPPhotoshop ICC profilexڝSgTS=BKKoR RB&*! J!QEEȠQ, !{kּ> H3Q5 B.@ $pd!s#~<<+"x M0B\t8K@zB@F&S`cbP-`'{[! eDh;VEX0fK9-0IWfH  0Q){`##xFW<+*x<$9E[-qWW.(I+6aa@.y24x6_-"bbϫp@t~,/;m%h^ uf@Wp~<5j>{-]cK'Xto(hw?G%fIq^D$.Tʳ?D*A, `6B$BB dr`)B(Ͱ*`/@4Qhp.U=pa( Aa!ڈbX#!H$ ɈQ"K5H1RT UH=r9\F;2G1Q= C7F dt1r=6Ыhڏ>C03l0.B8, c˱" VcϱwE 6wB aAHXLXNH $4 7 Q'"K&b21XH,#/{C7$C2'ITFnR#,4H#dk9, +ȅ3![ b@qS(RjJ4e2AURݨT5ZBRQ4u9̓IKhhitݕNWGw Ljg(gwLӋT071oUX**| J&*/Tު UUT^S}FU3S ԖUPSSg;goT?~YYLOCQ_ cx,!k u5&|v*=9C3J3WRf?qtN (~))4L1e\kXHQG6EYAJ'\'GgSSݧ M=:.kDwn^Loy}/TmG X $ <5qo</QC]@Caaᄑ.ȽJtq]zۯ6iܟ4)Y3sCQ? 
[binary PNG image data omitted: cobbler-2.4.1/web/content/jsgrowl_close.png, jsgrowl_corners.png, jsgrowl_corners_hover.png, jsgrowl_middle_hover.png, jsgrowl_side_hover.png, logo-cobbler.png]
ĀDM"s(UQi9W./19T+} 2;XT,jcՅ' {X`IY ^ & A(7a9k9E}k:1ᡍ8gSԔYGŤ5䄡aaހ0g4 L' C\ϼ][}[-q94X7#iY lUWi36!z._;@6@gS#~yWVҷgfY%Q)51Q}n2cTLD#WSU!3|$ F ѯN)#4,S~LdW ic[\_6^ @c3 ˜͹/EuYL#,灁 €0o `,N003fQkrF=z:u-  )_jZGԩTbZ ~B` 9_ lY a!8,_2pG:DA"`}urV‹^N=<9h.Qi=O{ jg w _MkBM8<.V&|HX60ϸ'B7 z3*W'm,mrL ӱs#Ԭߣ:Ӆq4,A"${ F˖}_}^)707Iy@>NuE&4:I⧥1&7t*/4πpWpڬT(4#3D4`X )(.Pj  Dd/5ojV)]t)lgio:V)},sCE P㌭4θy:j"JwT?ZU=O.=U3- 5q<x9W%_i|EMA#'+ j-n47}R%դm՜?hZ&a|9țYTm:nBs5 =Z#Yyр z VC8<+wǍ=Rឮ6K0~[2 Ei֩$mjVX |Od 4_ J}FC@\N/|3e< 1O؁SGDc[n SZAY&!ϟ)=yIT3:eaDA*'.0 R ^IlKgpB0dY[mJ,lifS+[.y;P~e|^M +8tlJz!+sK |2N^)9Y4F0^KQĀJiZh E=+t24z6("R@sPy#]Jf&)/iBRdw2!6>e+Pّ Yp湞dph蚵L:LcBe$+:V!8>7r6頑bOϷy3 #exL4ٳgF_2SYuc ʹ"c$@:X8>EXvwXc#ė@!\4dpϚz.-g:z`58AGZv$ vRi46\qvl>j*dT` ŔAH^֭4&.TʾZ_5͙ή7 FfXR>ݝԜ9.O<ɇ<]VפҦʻt/A!lTY4Wa>_*w6UFp%yȔG;[unBit)m"LF F/>#Dy,/%\a?92 C :Z$Ciwc(3Rk# 2ժoijvv L`( `i%yߌY,Uu/˄d=Ͳ6·”/ea:|I""p U-R΍U rVUI`$Q}=TN*?x~ 05wʯi5ɘ0 ѳT ^l.Qrd _Xc__BSPFa혪q1joG{+3'+TTe,W&fi]VV ׀af}֪v<*8+~4CXǂ6 |Fŵtm^߁or+jXit~V;# zY k/\e9aAGWxhzewv"-ALҼI`:ܲ;VM0L=V9+,` $ף LDi QYڏʦ)UV cHqGA͊hzmU4nz3Vg1_O+Wh ͑< k%N x m 7Qy U9|?GWmiA’ZVOR5tXgbZB^ݻ ,"ЋcFi?h6U|Jb̔g\6UYa< s#[\+եJRcxЪ5*XI~Uw!*ЗrPVJIC|b8dPa&:vTL 5ޢNT7/1%nc TSu%k0%DLXj*O Mנ/qщ Qsi2~@ءpcpC|hnr0*z֠<%\ &w7U+` 2)U O#R`dsׂn#tA_ũ̧U4Ufwhy-8üv:j׵wLufz̥ %C74(꽧ݷjfkkT۬^*]P|;hT%_pdj]PƆA= tj nM*E 4Q{r[imV6 rZsL"`83N3_|i1LMg;J\I=I qe`nz8;U*+ǘ vj-pquUb{ WV֔)--5J `J:e* S)FpQ=#X.A滫Ι6.C=`%/@R` k=îh - 2lٯ`8>6GfrZ>eѣ*O`Ь0v#1C\c2{ny[وp]lT+VcЊaSql81;ώa9spRv5-}Udza[N` L]&Rm҂T>oOPK1~W)1SGe\=xn:ݖt:V\j˭t|jb}E1 /)uTOO!z~sv2}IZR][-~à@70j9Hho1R]B#@pͺ n6v>@er`XRmRٌiaZ<ͺun{naڎ .]^,[?gCIw@Wȗo\TI5:u%S7]=',~eDSjtuMP6./N {GrWv ~J!=W$:WV¾-1_ PP@g>lGs:\h-jG*^;7r6yOW/j?B\v&S1#ktYnx,&M6Jp5af S2|6pq%s%%H#a1f7I{V0mS/`SP~Q l-eϴU5M3=kYZwB\7H^ң>Eః+=51keYU ތiLǓ$u ;돬2 eY see.VӭeLWGP}O: Fh^($I<嫦?mjgMd-t ccUOIoԈ% ϸ)ʌH++Ru4H~IzIT IDAT vx_M<ن^;0i46&냀e>,dPES*CyT_Fs}dh8Yͅ>lEnqxJ.[ A ;۞Iym/i]׬YèeJ)pN)M#b.i ` ħB  NHF0P%tvGڌ^*l#rS=Fm wIYM yXngVMю|>9I1TӅP7 |A as 5t rn}3VV3U9Kc'C`{U6&ӄT_,[/`&fpm :O_k[[GJ9C1:]PKRPG,^ÐZP^5j[̩gwE7btUNs@Uɉ]łbW/_:5=Vy=){:z4Xl `R!yJwZ8Vbҩeh mDҰpc9uʐ0!&7^BKҫ ` nAoߝ7<߀qB?^+tuaiVIKG)<3p[DmNWRao3܋pkA?ad3`u,>chmm 2OB$8P Enn8Q'#\wAPEkN3"S1ѵ%[_<ֆ4\qd5 mzne7* чeXL=|5ìQ'Y GD:|n⼭ŻPEŽ~BO>6)cLuu8z$;`JkF t6ȥX},6,wh_X*?Iz~YSv:%a=΋QTw{iޛߔv VC@&Nʦc1 `"fZlV9G8tL~D$~gʽrb{zYLN)1 N<[_Ԡ|AuMU`̛]A>!Li|C1nJ'XDH|ҰW;vQR˷ܹs*3SyAFEv9tSzIPmm L=Qߥr$jF 0Lf{;"E~o c;N1cD+aww[n ;cd㬆+aBAږ℩הƹsÙ/'T(.#ǘN;4 &SݼYر#Dtb+-?^IENDB`cobbler-2.4.1/web/content/style.css000066400000000000000000000146341227367477500172630ustar00rootroot00000000000000/*Reset to a sane default across browsers*/ html, body{ margin: 0; padding: 0; height: 100%; } a,blockquote,body,div,dl,dt,dd,form,fieldset,h1,h2,h3,h4,h5,h6,html,img,input,label,li,ul,ol,p,pre,span,textarea,table,tbody,td,th,thead,tr { border: 0; margin: 0; padding: 0; } select,input,button,textarea{ font-size:99%; } select { margin: 0; } table{ border-collapse:collapse; border-spacing:0; empty-cells:show; width: 100%; } /*End Reset*/ /* Lets set the primary styles and feel for the app */ body { font-family: "URW Gothic", "Liberation Sans", "Helvetica", "Luxi Sans", "Bitstream Vera Sans", arial,helvetica,clean,sans-serif; color: #333333; background-color: #EEEEEE; font-size: 1em; } a { color: #333333; text-decoration: none; } a:hover { color: #306CAC; } img { border: 0px; } h1,h2,legend { color: #306CAC; font-weight: bold; } h1 { font-size: 1.5em; } h2,legend { font-size: 1.2em; margin-bottom: 0px; } hr { height: 1px; margin: 0px; } input, textarea { background-color: white; padding: 2px 4px; border: 1px solid #306CAC; 
border-radius:4px; -moz-border-radius:4px; -webkit-border-radius:4px; } input[type=text], input[type=password], select { width: 350px; } input[type=submit], input[type=reset], input[type=button] { width: auto; margin-top: 5px; margin-bottom: 5px; } input[type=checkbox] { background: none; border: none; margin-right: 196px; margin-top: 5px; } label { font-weight: bold; } ol, ul { list-style: none; } textarea { height: 8em; width: 25em; margin-bottom: 4px; } pre textarea { height: 60em; width: 60em; } /* Defining the classes */ .button, .action { text-decoration: none; background-color: #306CAC; border: 0px; font-weight: bold; color: #dddddd; cursor: pointer; margin: 0px 2px; padding: 2px 7px; -moz-border-radius:3px; -webkit-border-radius: 3px; text-transform: capitalize; } .button:hover, .action:hover { color: #DDDDDD; background-color: #333333; } .warn { color: #990000; font-weight: bold; font-size: 1.0em; } .rpointers { font-size: 1.0em; margin-left: 5px; } .lpointers { font-size: 1.0em; margin-right: 5px; } /* Defining classes specific to a tag */ body.loginscreen { background-image: none; } input.action { height: 22px; } ol.sectionbody label { width: 300px; vertical-align: top; display: inline-block; line-height: 1.8; } ol label { width: 300px; vertical-align: top; display: inline-block; line-height: 1.8; } div.multiselect { display: -moz-inline-box; display:inline-block; vertical-align:top; clear: both; margin-bottom: 4px;} div.multiselect label { width: auto; font-weight: normal; display: inline; } div.multileft { float: left; text-align: left; width: auto; } div.multileft select, div.multiright select { height: 156px; min-width: 150px; width: auto; } div.multibuttons { width: 55px; padding-top: 75px; float: left;} div.multibuttons input.button { font-size: .8em; clear: both; width: 50px; margin: 2px;} div.multiright { float: left; text-align: right; width: auto; } li.paginate { float:right; } li.paginate select { width: auto; } span.context-tip { display: none; background-repeat: no-repeat; background-image: url('/cobbler_webui_content/tooltip.png'); padding-left: 30px; color: #EEEEEE; } span.context-tip:hover { background: none; padding: 0; color: #333333; font-size: .7em; font-style: italic; } /* Now for the 'id' specific statements */ #login { width: 250px; margin: auto; padding-top: 100px; text-align: center;} #login input { text-align: center;} #login input[type=text], #login input[type=password] { width: 200px; } #container { position: relative; min-height: 100%; } #header { height: 80px; width: 100%; } #logo { float: left; } #logo img { width: 180px; height: 60px; padding: 10px; } #user { margin: 10px; text-align: right; font-size: 0.85em; float: right; } #body { width: 1050px; padding: 5px; padding-bottom: 25px; } #menubar { float: left; padding-left: 10px; width: 190px; } #menubar h1 { font-size: 1.0em; margin-bottom: 4px; } #menubar ul { padding-left: 2px; margin: 0px; margin-bottom: 8px; } #menubar ul li { font-size: .8em; padding-left: 8px; } #submenubar { width: 100%; } #submenubar li { float: left; height: 26px; margin-left: 8px; } #submenubar li:first-child { margin-left: 0px; margin-top: 3px; } #submenubar li.paginate { float: right; } #submenubar li.paginate select { width: auto; } #submenubar input { margin: 0px; } #submenubar .action, #submenu .button { height: 26px; } #content { float: left; width: 850px; } #spacer { clear: both; } #footer { font-size: .8em; text-align: center; position: absolute; width: 100%; bottom: 0px; height: 25px; } #batchactions { 
font-weight: normal; width: auto; } #batchactions option { padding-left: 1em; font-weight: normal; } #batchactions option:first-child { font-weight: bold; padding-left: 0; } #filter_field { width: auto; } #listitems { clear: both; } #listitems thead { background-color: #306CAC; color: #dddddd; font-weight: bold; text-align: left; } #listitems thead a { color: #dddddd; } #listitems thead tr th:first-child { width: 2em; padding-left: .3em;} #listitems tbody tr a.action, #listitems tbody tr span.action { font-size: .8em; float: left; padding: 1px 7px; } #listitems tbody tr { background-color: #eeeeee; } #listitems tbody tr.selected { background-color: #e0e0e0; } #listitems tbody tr:hover { background-color: #cccccc; } #listitems tbody td { border-bottom: 1px solid #cccccc; } #listitems tbody td:first-child { width: 2em; padding-left: .3em;} #listitems tbody td:last-child { width: 21em; } #listitems input[type=checkbox] { margin-right: 0px; } #ksform li, #snippetform li { margin-top: 4px; } #ksform input[type=checkbox], #snippetform input[type=checkbox] { margin-right: 0px; } #filter-adder { margin-top: 5px; } /* jQuery UI formatting for tabs, some styling based on examples here: http://keith-wood.name/uiTabs.html */ .ui-tabs .ui-tabs-nav li a { padding: 0.5em 0.5em; font-size: 0.95em; font-weight: bold; } .ui-widget { font-family: "URW Gothic", "Liberation Sans", "Helvetica", "Luxi Sans", "Bitstream Vera Sans", arial,helvetica,clean,sans-serif; font-size: 0.85em; font-weight: normal; } #tabs { width:100%; padding: 0px; background: none; border-width: 0px; } #tabs .ui-tabs-nav { padding-left: 0px; background: transparent; border-width: 0px 0px 1px 0px; -moz-border-radius: 0px; -webkit-border-radius: 0px; border-radius: 0px; } #tabs .ui-tabs-panel { border-width: 0px 1px 1px 1px; background-color: white; } cobbler-2.4.1/web/content/tooltip.png000066400000000000000000000006521227367477500176040ustar00rootroot00000000000000PNG  IHDRVΎWsRGBbKGD pHYs  tIMETAtEXtCommentCreated with GIMPWIDAT8˭ԱJA30`b'X 7%| NRbF AmlSA-46[^633?FD$j`7ٓH)\ z]Z *>bU;>8iwuTKB8XT$ Vs5ې-b-cx]LI4egl+WDNd3򍥊]g׸C%G ? ,hQIENDB`cobbler-2.4.1/web/manage.py000066400000000000000000000010421227367477500155260ustar00rootroot00000000000000#!/usr/bin/env python from django.core.management import execute_manager try: import settings # Assumed to be in the same directory. except ImportError: import sys sys.stderr.write("Error: Can't find the file 'settings.py' in the directory containing %r. It appears you've customized things.\nYou'll have to run django-admin.py, passing it your settings module.\n(If the file settings.py does indeed exist, it's causing an ImportError somehow.)\n" % __file__) sys.exit(1) if __name__ == "__main__": execute_manager(settings) cobbler-2.4.1/web/settings.py000066400000000000000000000044641227367477500161510ustar00rootroot00000000000000# Django settings for cobbler-web project. import django DEBUG = True TEMPLATE_DEBUG = DEBUG ADMINS = ( # ('Your Name', 'your_email@domain.com'), ) MANAGERS = ADMINS DATABASE_ENGINE = '' # cobbler-web does not use a database DATABASE_NAME = '' DATABASE_USER = '' DATABASE_PASSWORD = '' DATABASE_HOST = '' DATABASE_PORT = '' # Force Django to use the systems timezone TIME_ZONE = None # Language section # TBD. 
LANGUAGE_CODE = 'en-us' USE_I18N = False SITE_ID = 1 # not used MEDIA_ROOT = '' MEDIA_URL = '' if django.VERSION[0] == 1 and django.VERSION[1] < 4: ADMIN_MEDIA_PREFIX = '/media/' else: STATIC_URL = '/media/' SECRET_KEY = '' # code config if django.VERSION[0] == 1 and django.VERSION[1] < 4: TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.load_template_source', 'django.template.loaders.app_directories.load_template_source', ) else: TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.Loader', 'django.template.loaders.app_directories.Loader', ) if django.VERSION[0] == 1 and django.VERSION[1] < 2: # Legacy django had a different CSRF method, which also had # different middleware. We check the vesion here so we bring in # the correct one. MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.contrib.csrf.middleware.CsrfMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', ) else: MIDDLEWARE_CLASSES = ( 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', ) ROOT_URLCONF = 'urls' TEMPLATE_DIRS = ( '/usr/share/cobbler/web/templates', ) INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'cobbler_web', ) from django.conf.global_settings import TEMPLATE_CONTEXT_PROCESSORS TEMPLATE_CONTEXT_PROCESSORS += ( 'django.core.context_processors.request', ) SESSION_ENGINE = 'django.contrib.sessions.backends.file' SESSION_FILE_PATH = '/var/lib/cobbler/webui_sessions' cobbler-2.4.1/web/urls.py000066400000000000000000000003351227367477500152670ustar00rootroot00000000000000from django.conf.urls.defaults import * # Uncomment the next two lines to enable the admin: #from django.contrib import admin #admin.autodiscover() urlpatterns = patterns('', (r'^', include('cobbler_web.urls')), ) cobbler-2.4.1/web_setup.py000066400000000000000000000110131227367477500155150ustar00rootroot00000000000000#!/usr/bin/env python from distutils.core import setup #Django Configuration if os.path.exists("/etc/SuSE-release"): dj_config = "/etc/apache2/conf.d" elif os.path.exists("/etc/debian_version"): dj_config = "/etc/apache2/conf.d" else: dj_config = "/etc/httpd/conf.d" #dj_templates = "/usr/share/cobbler/web/cobbler_web/templates" #dj_webui = "/usr/share/cobbler/web/cobbler_web" #dj_webui2 = "/usr/share/cobbler/web/cobbler_web/templatetags" #dj_webui_proj = "/usr/share/cobbler/web" dj_sessions = "/var/lib/cobbler/webui_sessions" dj_js = "/var/www/cobbler_webui_content/" #Web Content wwwcon = "/var/www/cobbler_webui_content" setup( name = "cobbler-web", version = "2.0.4", description = "Web interface for Cobbler", long_description = "Web interface for Cobbler that allows visiting http://server/cobbler_web to configure the install server.", author = "Michael DeHaan", author_email = "michael.dehaan AT gmail", url = "http://www.cobblerd.org/", license = "GPLv2+", requires = ["mod_python", "cobbler", ], packages = ["web", "web.cobbler_web", "web.cobbler_web.templatetags"], package_dir = {"cobbler_web": "web/cobbler_web"}, package_data = {"web.cobbler_web": ["templates/*.tmpl"]}, data_files = [ (dj_config, ['config/cobbler_web.conf']), (dj_sessions, []), (wwwcon, ['web/content/style.css']), (wwwcon, ['web/content/logo-cobbler.png']), (dj_js, ['web/content/cobbler.js']), # FIXME: someday Fedora/EPEL will package 
these and then we should not embed them then. (dj_js, ['web/content/jquery-1.3.2.js']), (dj_js, ['web/content/jquery-1.3.2.min.js']), (dj_js, ['web/content/jsGrowl_jquery.js']), (dj_js, ['web/content/jsGrowl.js']), (dj_js, ['web/content/jsgrowl_close.png']), (dj_js, ['web/content/jsgrowl_corners.png']), (dj_js, ['web/content/jsgrowl_middle_hover.png']), (dj_js, ['web/content/jsgrowl_corners_hover.png']), (dj_js, ['web/content/jsgrowl_side_hover.png']), (dj_js, ['web/content/jsGrowl.css']), # # # django webui content (dj_config, ['config/cobbler_web.conf']), # (dj_templates, ['web/cobbler_web/templates/blank.tmpl']), # (dj_templates, ['web/cobbler_web/templates/empty.tmpl']), # (dj_templates, ['web/cobbler_web/templates/enoaccess.tmpl']), # (dj_templates, ['web/cobbler_web/templates/header.tmpl']), # (dj_templates, ['web/cobbler_web/templates/index.tmpl']), # (dj_templates, ['web/cobbler_web/templates/item.tmpl']), # (dj_templates, ['web/cobbler_web/templates/ksfile_edit.tmpl']), # (dj_templates, ['web/cobbler_web/templates/ksfile_list.tmpl']), # (dj_templates, ['web/cobbler_web/templates/snippet_edit.tmpl']), # (dj_templates, ['web/cobbler_web/templates/snippet_list.tmpl']), # (dj_templates, ['web/cobbler_web/templates/master.tmpl']), # (dj_templates, ['web/cobbler_web/templates/message.tmpl']), # (dj_templates, ['web/cobbler_web/templates/paginate.tmpl']), # (dj_templates, ['web/cobbler_web/templates/settings.tmpl']), # (dj_templates, ['web/cobbler_web/templates/generic_edit.tmpl']), # (dj_templates, ['web/cobbler_web/templates/generic_list.tmpl']), # (dj_templates, ['web/cobbler_web/templates/generic_delete.tmpl']), # (dj_templates, ['web/cobbler_web/templates/generic_rename.tmpl']), # (dj_templates, ['web/cobbler_web/templates/events.tmpl']), # (dj_templates, ['web/cobbler_web/templates/eventlog.tmpl']), # (dj_templates, ['web/cobbler_web/templates/import.tmpl']), # (dj_templates, ['web/cobbler_web/templates/task_created.tmpl']), # (dj_templates, ['web/cobbler_web/templates/check.tmpl']), # # django code, private to cobbler-web application # (dj_webui, ['web/cobbler_web/__init__.py']), # (dj_webui_proj, ['web/__init__.py']), # (dj_webui_proj, ['web/urls.py']), # (dj_webui_proj, ['web/manage.py']), # (dj_webui_proj, ['web/settings.py']), # (dj_webui, ['web/cobbler_web/urls.py']), # (dj_webui, ['web/cobbler_web/views.py']), # (dj_webui2, ['web/cobbler_web/templatetags/site.py']), # (dj_webui2, ['web/cobbler_web/templatetags/__init__.py']), (dj_sessions, []), ], )