bdii-5.2.22/README

README for bdii package
=======================

Function:
---------
The Berkeley Database Information Index (BDII) consists of two or more
standard LDAP databases that are populated by an update process. Port
forwarding is used to enable one or more databases to serve data while one
database is being refreshed. The databases are refreshed cyclically. Any
incoming connection is forwarded to the most recently updated database,
while old connections are allowed to linger until it is the turn of their
database to be refreshed and restarted.

The update process obtains LDIF either by doing an ldapsearch on LDAP URLs
or by running a local script (given by a URL with the "file" protocol) that
generates LDIF. The LDIF is then inserted into the LDAP database. Options
exist to update the list of LDAP URLs from a web page and to use an LDIF
file from a web page to modify the data before it is inserted into the
database.

Cache use:
----------
Whenever a remote server is contacted and the ldapsearch command times out,
the update process tries to find an (old) cached entry in the var/cache/
directory. If no entry is found, a message is printed to the logfile.

! Attention !
If the remote host cannot be contacted due to a connection problem, no
cached entry is used and no message is printed to the logfile.

Compressed Content Exchange Mechanism (CCEM):
---------------------------------------------
The Compressed Content Exchange Mechanism is intended to speed up the
gathering of information when an ldapsearch is made against another BDII
instance. The update process first tries to find the entry containing the
compressed content of the queried instance and then adds that information
to its upcoming database. If the CCEM fails, the normal procedure described
in the previous paragraph is used.

The CCEM function is enabled by default in versions >= 3.9.1. To disable it,
add the following to your bdii.conf:

BDII_CCEM=no

BDII Status Information Mechanism (BSIM):
-----------------------------------------
The BDII Status Information Mechanism is intended to allow better
monitoring, earlier detection of emerging problems and, as a result,
failure prevention. It adds status information about the BDII instance
under the 'o=infosys' root, containing metrics such as the number of
entries added in the last cycle, the time taken to do so, etc. The
description of these metrics can be found in the etc/BDII.schema file.
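For example, once the BDII is running the BSIM data can be inspected with an
ordinary LDAP query against the 'o=infosys' root. A minimal sketch, assuming
the default port 2170 from etc/bdii.conf and a BDII running on the local
host:

    ldapsearch -x -LLL -h localhost -p 2170 -b o=infosys '(objectClass=UpdateStats)'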
History:
Original version David Groep, NIKHEF 22-01-2004
Restructured by Laurence Field 2005
Restructured by Maarten Litmaath 2008
Enhanced by Felix Ehm

License:

Build dependencies: None

Runtime dependencies: openldap

How to build and install: make install

How to configure:
http://lfield.home.cern.ch/lfield/bdii/

More information:
http://lfield.home.cern.ch/lfield/bdii/

Known bugs:
http://lfield.home.cern.ch/lfield/bdii/
https://savannah.cern.ch/projects/lcgoperation/

Planned evolution:

Contact: Laurence.Field@cern.ch, Maarten.Litmaath@cern.ch

bdii-5.2.22/man/bdii-update.1

.TH BDII_UPDATE 1
.SH NAME
bdii-update \- the bdii update process
.SH SYNOPSIS
.B bdii-update
[-d] -c
.I config-file
.SH DESCRIPTION
The
.B bdii-update
process obtains the LDIF by reading files found in the
.B ldif
directory, running providers found in the
.B provider
directory and running plugins found in the
.B plugin
directory. The difference between providers and plugins is that providers
return complete entries and plugins provide modifications to existing
entries. The process can be run either as a daemon that periodically
synchronizes an LDAP database or as a command that prints the result to
stdout.
.SH OPTIONS
.IP -d
Run as a daemon process.
.IP "-c config"
The configuration file to use.
.SH FILES
.I /etc/bdii.conf
.RS
The default configuration file.
.SH AUTHOR
Laurence Field

bdii-5.2.22/tests/test-bdii

#!/bin/sh
working_dir=$(mktemp -d)
cp -r ldif provider plugin ${working_dir}
. /etc/bdii/bdii.conf
chown $BDII_USER:$BDII_USER ${working_dir}
chmod +x ${working_dir}/provider/* ${working_dir}/plugin/*
sed "s#/var/lib/bdii/gip#${working_dir}#" /etc/bdii/bdii.conf > ${working_dir}/bdii.conf
sed -i "s#BDII_READ_TIMEOUT=.*#BDII_READ_TIMEOUT=3#" ${working_dir}/bdii.conf
sed -i "s#BDII_BREATHE_TIME=.*#BDII_BREATHE_TIME=10#" ${working_dir}/bdii.conf
sed -i "s#BDII_DELETE_DELAY=.*#BDII_DELETE_DELAY=2#" ${working_dir}/bdii.conf
sed -i "s#ERROR#DEBUG#" ${working_dir}/bdii.conf
sed -i "s#/var/log/bdii#${working_dir}#" ${working_dir}/bdii.conf
export BDII_CONF=${working_dir}/bdii.conf
/etc/init.d/bdii restart
command="ldapsearch -LLL -x -h $(hostname -f) -p 2170 -b o=grid"
command_glue2="ldapsearch -LLL -x -h $(hostname -f) -p 2170 -b o=glue"
RETVAL=0
echo "Waiting 10 seconds for the BDII to start."
sleep 10
echo -n "Testing the timeout for hanging providers: "
${command} >/dev/null 2>/dev/null
if [ $? -eq 32 ]; then
    echo "FAIL"
    RETVAL=1
else
    echo "OK"
fi
echo -n "Testing static LDIF file: "
filter=GlueServiceUniqueID
${command} ${filter} | grep "service_1" >/dev/null 2>/dev/null
if [ $? -gt 0 ]; then
    echo "FAIL"
    RETVAL=1
else
    echo "OK"
fi
echo -n "Testing GLUE2 service: "
filter=objectClass=GLUE2Service
${command_glue2} ${filter} | grep "glue2-service" >/dev/null 2>/dev/null
if [ $? -gt 0 ]; then
    echo "FAIL"
    RETVAL=1
else
    echo "OK"
fi
echo -n "Testing GlueTop modification: "
filter="objectClass=MDS"
${command} ${filter} | grep "nordugrid_1" >/dev/null 2>/dev/null
if [ $? -gt 0 ]; then
    echo "FAIL"
    RETVAL=1
else
    echo "OK"
fi
echo -n "Testing provider: "
filter="GlueServiceUniqueID=service_2"
${command} ${filter} | grep "service_2" >/dev/null 2>/dev/null
if [ $?
-gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing GLUE2 provider: " filter="objectClass=GLUE2Service" ${command_glue2} ${filter} | grep "cream-06" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing the handling of long DNs: " filter="GlueServiceUniqueID" ${command} ${filter} | grep "really_long" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing ignoring of junk files: " filter="GlueServiceUniqueID=service_4" ${command} ${filter} | grep "service_4" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "OK" else echo "FAIL" RETVAL=1 fi echo -n "Testing basic plugin: " filter=GlueServiceStatus=Failed ${command} ${filter} | grep "Failed" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing plugin mutlivalued delete: " filter=GlueServiceAccessControlBaseRule ${command} ${filter} | grep "atlas" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "OK" else echo "FAIL" RETVAL=1 fi echo -n "Testing plugin mutlivalued add: " filter=GlueServiceAccessControlBaseRule=cms ${command} ${filter} | grep "cms" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing plugin modify: " filter=GlueServiceStatusInfo ${command} ${filter} | grep "Broken" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing two plugins extending the attribute: " filter="GlueServiceUniqueID=service_7" ${command} ${filter} | grep "vo_1" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi rm -f ${working_dir}/ldif/service-long-dn.ldif rm -f ${working_dir}/ldif/service-unstable.ldif rm -f ${working_dir}/ldif/service-spaces-2.ldif sed -i "s#Failed#Unknown#" ${working_dir}/plugin/service-status sed -i "s#=# = #" ${working_dir}/ldif/service-spaces-1.ldif sed -i "s#2011-02-07T10:57:48Z#2011-02-07T10:58:57Z#" ${working_dir}/provider/glue2-provider echo "Wating for update ..." sleep 14 echo -n "Testing modify on update: " filter=GlueServiceStatus=Unknown ${command} ${filter} | grep "Unknown" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing modify GLUE2 Service: " filter=objectClass=GLUE2Service ${command_glue2} ${filter} | grep "GLUE2_Serivce_OK" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing GLUE2 provider updated: " filter="objectClass=GLUE2Service" ${command_glue2} ${filter} | grep "2011-02-07T10:58:57Z" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing delayed delete: " filter=GlueServiceUniqueID ${command} ${filter} | grep "_long_" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing ignoring spaces in dn: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_5" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '\': " filter=GlueServiceUniqueID ${command} ${filter} | grep "slash" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character ',': " filter=GlueServiceUniqueID ${command} ${filter} | grep "comma" >/dev/null 2>/dev/null if [ $? 
-gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '=': " filter=GlueServiceUniqueID ${command} ${filter} | grep "equal" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '+': " filter=GlueServiceUniqueID ${command} ${filter} | grep "plus" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '\"': " filter=GlueServiceUniqueID ${command} ${filter} | grep "quote" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character ';': " filter=GlueServiceUniqueID ${command} ${filter} | grep "semi" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '<': " filter=GlueServiceUniqueID ${command} ${filter} | grep "less" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing dn special character '>': " filter=GlueServiceUniqueID ${command} ${filter} | grep "greater" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi cp ldif/service-unstable.ldif -f ${working_dir}/ldif/ echo "Wating for update ..." sleep 13 echo -n "Testing deleting obsolete entry: " filter=GlueServiceUniqueID ${command} ${filter} | grep "_long_" >/dev/null 2>/dev/null if [ ! $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing delete with space in uniqueID: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service 6" >/dev/null 2>/dev/null if [ ! $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo -n "Testing unstable service is not deleted: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_7" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi rm -f ${working_dir}/ldif/service-unstable.ldif echo "Wating for update ..." sleep 13 echo -n "Testing unstable service is not deleted: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_7" >/dev/null 2>/dev/null if [ $? -gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi echo "Wating for update ..." sleep 13 echo -n "Testing unstable service is deleted: " filter=GlueServiceUniqueID ${command} ${filter} | grep "service_7" >/dev/null 2>/dev/null if [ ! $? 
-gt 0 ]; then echo "FAIL" RETVAL=1 else echo "OK" fi /etc/init.d/bdii stop mv ${working_dir}/bdii-update.log /tmp rm -rf ${working_dir} if [ ${RETVAL} -eq 1 ]; then echo "Test Failed" exit 1 else echo "Test Passed" exit 0 fi bdii-5.2.22/tests/ldif/0000775001227000117040000000000012213331107014123 5ustar ellertellertbdii-5.2.22/tests/ldif/service-unstable.ldif0000664001227000117040000000105711522257104020247 0ustar ellertellertdn: GlueServiceUniqueID=service_7,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService objectClass: GlueKey objectClass: GlueSchemaVersion GlueServiceUniqueID: service_7 GlueServiceName: Test Service Seven GlueServiceType: unstable GlueServiceVersion: 3.0.0 GlueServiceEndpoint: ldap://host-invalid:2170/mds-vo-name=resource,o=grid GlueServiceStatus: Unstable GlueServiceStatusInfo: Service Unstable GlueServiceAccessControlBaseRule: none GlueForeignKey: GlueSiteUniqueID=my-site-name GlueSchemaVersionMajor: 1 GlueSchemaVersionMinor: 3 bdii-5.2.22/tests/ldif/service.ldif0000664001227000117040000000111211522257104016424 0ustar ellertellertdn: GlueServiceUniqueID=service_1,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService objectClass: GlueKey objectClass: GlueSchemaVersion GlueServiceUniqueID: service_1 GlueServiceName: Test Service One GlueServiceType: bdii GlueServiceVersion: 3.0.0 GlueServiceEndpoint: ldap://host-invalid:2170/mds-vo-name=resource,o=grid GlueServiceStatus: OK GlueServiceStatusInfo: BDII Runnning GlueServiceAccessControlBaseRule: dteam GlueServiceAccessControlBaseRule: atlas GlueForeignKey: GlueSiteUniqueID=my-site-name GlueSchemaVersionMajor: 1 GlueSchemaVersionMinor: 3 bdii-5.2.22/tests/ldif/default.ldif0000664001227000117040000000066311524465466016440 0ustar ellertellertdn: o=shadow objectClass: organization o: shadow dn: o=grid objectClass: organization o: grid dn: mds-vo-name=local,o=grid objectClass: MDS mds-vo-name: local dn: mds-vo-name=resource,o=grid objectClass: MDS mds-vo-name: resource dn: o=glue objectClass: organization o: glue dn: GLUE2GroupID=resource, o=glue objectClass: GLUE2Group GLUE2GroupID: resource dn: GLUE2GroupID=grid, o=glue objectClass: GLUE2Group GLUE2GroupID: grid bdii-5.2.22/tests/ldif/service-long-dn.ldif0000664001227000117040000000127111220377342017770 0ustar ellertellertdn: GlueServiceUniqueID=service_this_is_a_really_really_long_dn_to_test_for_the_correct_handling_of_wrapping,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService objectClass: GlueKey objectClass: GlueSchemaVersion GlueServiceUniqueID: service_this_is_a_really_really_long_dn_to_test_for_the_correct_handling_of_wrapping GlueServiceName: Test Service Three GlueServiceType: bdii GlueServiceVersion: 3.0.0 GlueServiceEndpoint: ldap://host-invalid:2170/mds-vo-name=resource,o=grid GlueServiceStatus: OK GlueServiceStatusInfo: BDII Runnning GlueServiceAccessControlBaseRule: dteam GlueForeignKey: GlueSiteUniqueID=my-site-name GlueSchemaVersionMajor: 1 GlueSchemaVersionMinor: 3 bdii-5.2.22/tests/ldif/service-encoding.ldif0000664001227000117040000000227211360564264020230 0ustar ellertellertdn: GlueServiceUniqueID=service_\\slash,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_\slash dn: GlueServiceUniqueID=service_\,comma,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_,comma dn: GlueServiceUniqueID=service_\=equals,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService 
GlueServiceUniqueID: service_=equals dn: GlueServiceUniqueID=service_\+plus,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_+plus dn: GlueServiceUniqueID=service_\;semi,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_;semi dn: GlueServiceUniqueID=service_\"quote,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_"quote dn: GlueServiceUniqueID=service_\>greater,mds-vo-name=resource,o=grid objectClass: GlueTop objectClass: GlueService GlueServiceUniqueID: service_>greater dn: GlueServiceUniqueID=service_\ -1): line=line[:index] index=line.find("=") if (index > -1): key=line[:index].strip() value=line[index+1:].strip() config[key] = value if 'SLAPD_CONF' in os.environ: config['SLAPD_CONF'] = os.environ['SLAPD_CONF'] if ( not config.has_key('BDII_DAEMON') ): config['BDII_DAEMON'] = False if ( not config.has_key('BDII_RUN_DIR') ): config['BDII_RUN_DIR'] = '/var/run/bdii' if ( not config.has_key('BDII_PID_FILE') ): config['BDII_PID_FILE'] = "%s/bdii-update.pid" % config['BDII_RUN_DIR'] for parameter in ['BDII_LOG_FILE', 'BDII_LOG_LEVEL', 'BDII_LDIF_DIR', 'BDII_PROVIDER_DIR', 'BDII_PLUGIN_DIR', 'BDII_READ_TIMEOUT']: if ( not config.has_key(parameter) ): sys.stderr.write("Error: Configuration parameter %s is not specified in the configuration file %s.\n" % (parameter, config['BDII_CONFIG_FILE'])) sys.exit(1) for parameter in ['BDII_LDIF_DIR','BDII_PROVIDER_DIR','BDII_PLUGIN_DIR']: if ( not os.path.exists(config[parameter])): sys.stderr.write("Error: %s %s does not exist.\n" % (parameter, config[parameter])) sys.exit(1) if not config.has_key('BDII_LOG_LEVEL'): config['BDII_LOG_LEVEL']='ERROR' else: log_levels=['CRITICAL','ERROR','WARNING','INFO','DEBUG'] try: log_levels.index(config['BDII_LOG_LEVEL']) except ValueError, e: sys.stderr.write("Error: Log level %s is not an allowed level. 
%s\n" % (config['BDII_LOG_LEVEL'], log_levels)) sys.exit(1) config['BDII_READ_TIMEOUT'] = int(config['BDII_READ_TIMEOUT']) if ( config['BDII_DAEMON'] == True ): for parameter in ['BDII_PORT', 'BDII_BREATHE_TIME', 'BDII_VAR_DIR', 'BDII_ARCHIVE_SIZE', 'BDII_DELETE_DELAY', 'SLAPD_CONF']: if ( not config.has_key(parameter) ): sys.stderr.write("Error: Configuration parameter %s is not specified in the configuration file %s.\n" % (parameter, config['BDII_CONFIG_FILE'])) sys.exit(1) if ( os.path.exists(config['SLAPD_CONF']) ): config['BDII_PASSWD'] = {} config['BDII_PASSWD_FILE'] = {} if not os.path.exists(config['BDII_RUN_DIR']): os.makedirs(config['BDII_RUN_DIR']) rootdn = False rootpw = False filename = "" for line in open(config['SLAPD_CONF']): if ( line.find("rootdn") > -1 ): rootdn = line.replace("rootdn","").strip() rootdn = rootdn.replace('"','').replace(" ","") filename = rootdn.replace('o=','') if ( rootpw ): config['BDII_PASSWD'][rootdn] = rootpw config['BDII_PASSWD_FILE'][rootdn] = "%s/%s" % (config['BDII_RUN_DIR'], filename) pf = os.open(config['BDII_PASSWD_FILE'][rootdn], os.O_WRONLY | os.O_CREAT, 0600) os.write(pf,rootpw) os.close(pf) rootdn = False rootpw = False if ( line.find("rootpw") > -1 ): rootpw = line.replace("rootpw","").strip() if ( rootdn ): config['BDII_PASSWD'][rootdn] = rootpw config['BDII_PASSWD_FILE'][rootdn] = "%s/%s" % (config['BDII_RUN_DIR'], filename) pf = os.open(config['BDII_PASSWD_FILE'][rootdn], os.O_WRONLY | os.O_CREAT, 0600) os.write(pf,rootpw) os.close(pf) rootdn = False rootpw = False config['BDII_BREATHE_TIME'] = float(config['BDII_BREATHE_TIME']) config['BDII_ARCHIVE_SIZE'] = int(config['BDII_ARCHIVE_SIZE']) config['BDII_DELETE_DELAY'] = int(config['BDII_DELETE_DELAY']) config['BDII_HOSTNAME'] = 'localhost' return config def print_usage(): sys.stderr.write('''Usage: %s [ OPTIONS ] -c --config BDII configuration file -d --daemon Run BDII in daemon mode ''' % (str(sys.argv[0]))) def create_daemon(log_file): try: pid = os.fork() except OSError, e: return((e.errno, e.strerror)) if (pid == 0): os.setsid() signal.signal(signal.SIGHUP, signal.SIG_IGN) try: pid = os.fork() except OSError, e: return((e.errno, e.strerror)) if (pid == 0): os.umask(022) else: os._exit(0) else: os._exit(0) try: maxfd=os.sysconf("SC_OPEN_MAX") except (AttributeError, ValueError): maxfd=256 for fd in range(3, maxfd): try: os.close(fd) except OSError: pass os.close(0) os.open("/dev/null", os.O_RDONLY) os.close(1) os.open("/dev/null", os.O_WRONLY) # connect stderr to log file e = os.open(log_file, os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0644) os.dup2(e, 2) os.close(e) sys.stderr = os.fdopen(2, 'a', 0) # Write PID pid_file = open(config['BDII_PID_FILE'],'w') pid_file.write("%s\n" % (str(os.getpid()))) pid_file.close() def get_logger(log_file,log_level): log = logging.getLogger('bdii-update') hdlr = logging.StreamHandler(sys.stderr) formatter = logging.Formatter('%(asctime)s: [%(levelname)s] %(message)s') hdlr.setFormatter(formatter) log.addHandler(hdlr) log.setLevel(logging.__dict__.get(log_level)) return log def handler(signum, frame): if ( signum ==14 ): # Commit suicide process_group=os.getpgrp() os.killpg(process_group, signal.SIGTERM) sys.exit(1) def read_ldif(source): # Get pipe file descriptors read_fd, write_fd = os.pipe() # Fork pid = os.fork() if pid: # Close write file descriptor as we don't need it. 
os.close(write_fd) read_fh = os.fdopen(read_fd) raw_ldif = read_fh.read() result = os.waitpid(pid, 0) if (result[1] > 0): log.error("Timed out while reading %s", (source)) return "" raw_ldif = raw_ldif.replace("\n ", "") return raw_ldif else: # Close read file d os.close(read_fd) # Set process group os.setpgrp() # Setup signal handler signal.signal(signal.SIGALRM, handler) signal.alarm(config['BDII_READ_TIMEOUT']) # Open pipe to LDIF if ( source[:7] == 'ldap://'): url=source.split('/') command = "ldapsearch -LLL -x -h %s -b %s 2>/dev/null" % ( url[2], url[3]) pipe = os.popen(command) elif( source[:7] == 'file://' ): pipe=open(source[7:]) else: pipe=os.popen(source) raw_ldif=pipe.read() # Close LDIF pipe pipe.close() try: write_fh = os.fdopen(write_fd, 'w') write_fh.write(raw_ldif) write_fh.close() except IOError: log.error("Information provider %s terminated unexpectedly." % source) signal.alarm(0) # Disable the alarm sys.exit(0) def get_dns(ldif): dns = {} last_dn_index = len(ldif) while ( 1 ): dn_index = ldif.rfind("dn:",0,last_dn_index) if ( dn_index == -1): break end_dn_index = ldif.find("\n",dn_index, last_dn_index) dn = ldif[dn_index + 4 :end_dn_index].lower() dn = re.sub("\s*,\s*",",",dn) dn = re.sub("\s*=\s*","=",dn) dn = dn.replace("\\5c","\\\\") # Replace encoded slash dn = dn.replace("\\2c","\\,") # Replace encoded comma dn = dn.replace("\\3d","\\=") # Replace encoded equals dn = dn.replace("\\2b","\\+") # Replace encoded plus dn = dn.replace("\\3b","\\;") # Replace encoded semi colon dn = dn.replace("\\22","\\\"") # Replace encoded quote dn = dn.replace("\\3e","\\>") # Replace encoded greater than dn = dn.replace("\\3c","\\<") # Replace encoded less than end_entry_index = ldif.find("\n\n",dn_index, last_dn_index) dns[dn] = (dn_index, last_dn_index, end_dn_index) last_dn_index = dn_index return dns def group_dns(dns): grouped = {} for dn in dns: index = dn.rfind(",") root = dn[index +1 :].strip() if grouped.has_key(root): grouped[root].append(dn) else: if root in config['BDII_PASSWD']: grouped[root] = [ dn ] else: if "o=shadow" in config['BDII_PASSWD'] and root == "o=grid": grouped[root] = [ dn ] elif root != "o=shadow": log.error("dn suffix %s in not specified in the slapd configuration file." 
% (root)) return grouped def convert_entry(entry_string): multivalued = [ 'objectclass', 'gluehostapplicationsoftwareruntimeenvironment', 'glueserviceaccesscontrolrule', 'glueserviceaccesscontrolbaserule', 'glueceaccesscontrolbaserule', 'gluesaaccesscontrolbaserule', 'gluesecontrolprotocolcapability', 'glueseaccessprotocolsupportedsecurity', 'gluesacapability', 'gluevoinfoaccesscontrolbaserule', 'gluesecontrolprotocolcapability', 'gluecesebindgroupseuniqueid', 'gluecesebindseuniqueid', 'gluecesebindmountinfo', 'gluecesebindweight', 'glueserviceowner', 'gluechunkkey', 'glueforeignkey', 'gluecesebindmountinfo', 'glueseaccessprotocolcapability', 'gluesiteotherinfo', 'gluesitesponsor', 'gluececapability', 'glueclusterservice', 'glue2entityotherinfo', 'glue2extensionentityforeignkey', 'glue2locationserviceforeignkey', 'glue2locationdomainforeignkey', 'glue2contactserviceforeignkey', 'glue2contactdomainforeignkey', 'glue2domainwww', 'glue2admindomainowner', 'glue2admindomainadmindomainforeignkey', 'glue2userdomainusermanager', 'glue2userdomainmember', 'glue2userdomainuserdomainforeignkey', 'glue2servicecapability', 'glue2servicestatusinfo', 'glue2serviceadmindomainforeignkey', 'glue2serviceserviceforeignkey', 'glue2endpointcapability', 'glue2endpointinterfaceextension', 'glue2endpointwsdl', 'glue2endpointsupportedprofile', 'glue2endpointsemantics', 'glue2endpointtrustedca', 'glue2shareendpointforeignkey', 'glue2shareresourceforeignkey', 'glue2activityactivityforeignkey', 'glue2policyrule', 'glue2policyuserdomainforeignkey', 'glue2accesspolicyendpointforeignkey', 'glue2mappingpolicyshareforeignkey', 'glue2computingendpointjobdescription', 'glue2computingsharetag', 'glue2computingsharecomputingendpointforeignkey', 'glue2computingshareexecutionenvironmentforeignkey', 'glue2computingmanagernetworkinfo', 'glue2executionenvironmentnetworkinfo', 'glue2applicationenvironmentbestbenchmark', 'glue2applicationenvironmentexecutionenvironmentforeignkey', 'glue2computingactivitystate', 'glue2computingactivityrestartstate', 'glue2computingactivityrequestedapplicationenvironment', 'glue2computingactivityexecutionnode', 'glue2storageshareaccessmode', 'glue2storageshareretentionpolicy', 'glue2storagesharestorageendpointforeignkey', 'glue2storagesharedatastoreforeignkey', 'glue2storagesharecapacitystorageshareforeignkey', 'glue2tocomputingservicestorageaccessprotocolforeignkey'] entry = {} for line in entry_string.split("\n"): index = line.find(":") if (index > -1): attribute = line[:index].lower() value = line[index + 1:].strip() if entry.has_key(attribute): if not value in entry[attribute]: entry[attribute].append(value) else: entry[attribute] = [value] return entry def convert_back(entry): entry_string = "dn: %s\n" %(entry["dn"][0]) entry.pop("dn") for attribute in entry.keys(): attribute=attribute.lower() for value in entry[attribute]: entry_string += "%s: %s\n" %(attribute, value) return entry_string def ldif_diff(dn, old_entry, new_entry): add_attribute={} delete_attribute={} replace_attribute={} old_entry = convert_entry(old_entry) new_entry = convert_entry(new_entry) dn_perserved_case = None for attribute in new_entry.keys(): attribute=attribute.lower() if (attribute == "dn"): dn_perserved_case = new_entry['dn'][0] continue # If the old entry has the attribue we need to compare values if ( old_entry.has_key(attribute) ): # If the old entries are different find the modify. 
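# Attribute values are held as lists, so any difference in the value set
# (including its ordering) is treated as a change and the attribute is
# emitted as a full 'replace:' in the resulting LDIF modify record.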
if ( not new_entry[attribute] == old_entry[attribute]): replace_attribute[attribute] = new_entry[attribute] # The old entry does not have the attribute so add it. else: add_attribute[attribute] = new_entry[attribute] # Checking for removed attributes for attribute in old_entry.keys(): if (attribute.lower() == "dn"): continue if not new_entry.has_key(attribute): delete_attribute[attribute]=old_entry[attribute] # Create LDIF modify statement ldif=['dn: %s' % dn_perserved_case ] ldif.append('changetype: modify') for attribute in add_attribute.keys(): attribute=attribute.lower() ldif.append('add: %s' % (attribute) ) for value in add_attribute[attribute]: ldif.append('%s: %s' % (attribute, value)) ldif.append('-') for attribute in replace_attribute.keys(): attribute=attribute.lower() ldif.append('replace: %s' % (attribute) ) for value in replace_attribute[attribute]: ldif.append('%s: %s' % (attribute, value)) ldif.append('-') for attribute in delete_attribute.keys(): attribute=attribute.lower() ldif.append('delete: %s' % (attribute) ) ldif.append('-') if (len(ldif) > 3): ldif = "\n".join(ldif) + "\n\n" else: ldif = "" return ldif def modify_entry(entry, mods): mods = convert_entry(mods) entry = convert_entry(entry) if ( mods.has_key('changetype')): # Handle LDIF delete attribute if ( mods.has_key('delete')): for attribute in mods['delete']: attribute=attribute.lower() if (entry.has_key(attribute)): if (mods.has_key(attribute)): for value in mods[attribute]: try: entry[attribute].remove(value) if (len(entry[attribute]) == 0): entry.pop(attribute) except ValueError, e: pass except KeyError, e: pass else: entry.pop(attribute) # Handle LDIF replace attribute if ( mods.has_key('replace')): for attribute in mods['replace']: attribute=attribute.lower() if (entry.has_key(attribute)): if (mods.has_key(attribute)): entry[attribute] = mods[attribute] # Handle LDIF add attribute if ( mods.has_key('add')): for attribute in mods['add']: attribute=attribute.lower() if ( not entry.has_key(attribute)): log.debug("attribute: %s" %(attribute)) entry[attribute] = mods[attribute] else: entry[attribute].extend(mods[attribute]) # Just old style just change else: for attribute in mods.keys(): if (entry.has_key(attribute)): entry[attribute]=mods[attribute] entry_string = convert_back(entry) return entry_string def fix(dns,ldif): response = [] append = response.append for dn in dns.keys(): entry = convert_entry(ldif[dns[dn][0]:dns[dn][1]]) if ( dn[:11].lower() == "mds-vo-name" ): if 'objectclass' in entry: if 'mds' in map( lambda x : x.lower(), entry['objectclass']): if 'gluetop' in map( lambda x : x.lower(), entry['objectclass']): value=dn[12:dn.index(",")] entry = { 'dn': [dn], 'objectclass': ['MDS'], 'mds-vo-name': [value] } entry = convert_back(entry) append(entry) response = "".join(response) return response def log_errors(error_file, dns): log.debug("Logging Errors") request=0 dn = None error_counter=0 for line in open(error_file).readlines(): if ( line[:7] == 'request' ): request += 1 else: if ( request > 1 ): try: if ( not dn == dns[request - 2] ): error_counter += 1 dn = dns[request - 2] log.warn( "dn: %s" %(dn) ) except IndexError, e: log.error("Problem with error reporting ...") log.error("Request Num: %i, Line: %s, dns: %i" %(request,line,len(dns) )) if ( len(line) > 5 ): log.warn(line.strip()) return error_counter def main(config, log): log.info("Starting Update Process") while 1: log.info("Starting Update") stats={} stats['update_start'] = time.time() new_ldif="" log.info("Reading static LDIF files ...") 
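# First step of the cycle: concatenate every *.ldif file in BDII_LDIF_DIR
# (names starting with '#' or '.' are skipped) into the new snapshot.
# Providers and plugins are run next, and the combined LDIF is diffed
# against old.ldif before the changes are pushed to slapd.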
stats['read_start'] = time.time() ldif_files=os.listdir(config['BDII_LDIF_DIR']) for file_name in ldif_files: if ( file_name[-5:] == '.ldif' ): if ( not ((file_name[0] == '#') or (file_name[0] == '.'))): file_url="file://%s/%s" %(config['BDII_LDIF_DIR'],file_name) log.debug("Reading %s" % (file_url[7:]) ) response = read_ldif(file_url) new_ldif = new_ldif + response stats['read_stop'] = time.time() log.info("Running Providers") stats['providers_start'] = time.time() providers=os.listdir(config['BDII_PROVIDER_DIR']) for provider in providers: if ( not ( provider[-1:] == '~' ) or (provider[0] == '#') or (provider[0] == '.')): log.debug("Running %s/%s" % (config['BDII_PROVIDER_DIR'],provider) ) response=read_ldif("%s/%s" % (config['BDII_PROVIDER_DIR'],provider)) new_ldif = new_ldif + response stats['providers_stop'] = time.time() new_dns = get_dns(new_ldif) ldif_modify="" log.info("Running Plugins") stats['plugins_start'] = time.time() plugins=os.listdir(config['BDII_PLUGIN_DIR']) for plugin in plugins: if ( not ( plugin[-1:] == '~' ) or (plugin[0] == '#') or (plugin[0] == '.')): log.debug("Running %s/%s" % (config['BDII_PLUGIN_DIR'],plugin) ) response = read_ldif("%s/%s" % (config['BDII_PLUGIN_DIR'],plugin)) modify_dns = get_dns(response) for dn in modify_dns.keys(): if ( new_dns.has_key(dn)): mod_entry = modify_entry( new_ldif[new_dns[dn][0]:new_dns[dn][1]],\ response[modify_dns[dn][0]:modify_dns[dn][1]]) start = len(new_ldif) end = start + len(mod_entry) new_dns[dn]=(start, end) new_ldif = new_ldif + mod_entry else: ldif_modify += response[modify_dns[dn][0]:modify_dns[dn][1]] stats['plugins_stop'] = time.time() log.debug("Doing Fix") new_ldif = fix(new_dns, new_ldif) log.debug("Writing new_ldif to disk") if ( config['BDII_LOG_LEVEL'] == 'DEBUG' ): dump_fh=open("%s/new.ldif" % (config['BDII_VAR_DIR']),'w') dump_fh.write(new_ldif) dump_fh.close() if ( not config['BDII_DAEMON'] ): print new_ldif sys.exit(0) log.info("Reading old LDIF file ...") stats['read_old_start'] = time.time() old_ldif_file = "%s/old.ldif" % (config['BDII_VAR_DIR']) if ( os.path.exists(old_ldif_file) ): old_ldif = read_ldif("file://%s" % (old_ldif_file)) else: old_ldif = "" stats['read_old_stop'] = time.time() log.debug("Starting Diff") ldif_add=[] ldif_delete=[] new_dns = get_dns(new_ldif) old_dns = get_dns(old_ldif) for dn in new_dns.keys(): if old_dns.has_key(dn): old = old_ldif[old_dns[dn][0]:old_dns[dn][1]].strip() new = new_ldif[new_dns[dn][0]:new_dns[dn][1]].strip() # If the entries are different we need to compare them if ( not new == old): entry = ldif_diff(dn,old,new) ldif_modify += entry else: ldif_add.append(dn) # Checking for removed entries for dn in old_dns.keys(): if not new_dns.has_key(dn): ldif_delete.append(old_ldif[old_dns[dn][0] + 4:old_dns[dn][2]].strip()) log.debug("Finished Diff") log.debug("Sorting Add Keys") ldif_add.sort(lambda x, y: cmp(len(x), len(y))) log.debug("Writing ldif_add to disk") if ( config['BDII_LOG_LEVEL'] == 'DEBUG' ): dump_fh=open("%s/add.ldif" % (config['BDII_VAR_DIR']),'w') for dn in ldif_add: dump_fh.write(new_ldif[new_dns[dn][0]:new_dns[dn][1]]) dump_fh.write("\n") dump_fh.close() log.debug("Adding New Entries") stats['db_update_start'] = time.time() if ( config['BDII_LOG_LEVEL'] == 'DEBUG' ): error_file="%s/add.err" %(config['BDII_VAR_DIR']) else: error_file=tempfile.mktemp() roots = group_dns(ldif_add) suffixes = roots.keys() if "o=shadow" in suffixes: index = suffixes.index("o=shadow") if index > 0: suffixes[index] = suffixes[0] suffixes[0] = "o=shadow" add_error_counter 
= 0 for root in suffixes: try: bind = root if "o=shadow" in config['BDII_PASSWD']: if root == "o=grid": bind = "o=shadow" input_fh=os.popen("ldapadd -d 256 -x -c -h %s -p %s -D %s -y %s >/dev/null 2>%s" %(config['BDII_HOSTNAME'], config['BDII_PORT'], bind, config['BDII_PASSWD_FILE'][bind], error_file), 'w') for dn in roots[root]: input_fh.write(new_ldif[new_dns[dn][0]:new_dns[dn][1]]) input_fh.write("\n") input_fh.close() except IOError, KeyError: log.error("Could not add new entries to the database.") add_error_counter += log_errors(error_file,ldif_add) if ( not config['BDII_LOG_LEVEL'] == 'DEBUG' ): os.remove(error_file) log.debug("Writing ldif_modify to disk") if ( config['BDII_LOG_LEVEL'] == 'DEBUG' ): dump_fh=open("%s/modify.ldif" % (config['BDII_VAR_DIR']),'w') dump_fh.write(ldif_modify) dump_fh.close() log.debug("Modify New Entries") if ( config['BDII_LOG_LEVEL'] == 'DEBUG' ): error_file="%s/modify.err" % (config['BDII_VAR_DIR']) else: error_file=tempfile.mktemp() ldif_modify_dns = get_dns(ldif_modify) roots = group_dns(ldif_modify_dns) modify_error_counter = 0 for root in roots.keys(): try: bind = root if "o=shadow" in config['BDII_PASSWD']: if root == "o=grid": bind = "o=shadow" input_fh=os.popen("ldapmodify -d 256 -x -c -h %s -p %s -D %s -y %s >/dev/null 2>%s" %(config['BDII_HOSTNAME'], config['BDII_PORT'], bind, config['BDII_PASSWD_FILE'][bind], error_file), 'w') for dn in roots[root]: input_fh.write(ldif_modify[ldif_modify_dns[dn][0]:ldif_modify_dns[dn][1]]) input_fh.write("\n") input_fh.close() except IOError, KeyError: log.error("Could not modify entries in the database.") modify_error_counter += log_errors(error_file, ldif_modify_dns.keys()) if ( not config['BDII_LOG_LEVEL'] == 'DEBUG' ): os.remove(error_file) log.debug("Sorting Delete Keys") ldif_delete.sort(lambda x, y: cmp(len(y), len(x))) log.debug("Writing ldif_delete to disk") if ( config['BDII_LOG_LEVEL'] == 'DEBUG' ): dump_fh=open("%s/delete.ldif" % (config['BDII_VAR_DIR']),'w') for dn in ldif_delete: dump_fh.write("%s\n" % (dn)) dump_fh.close() # Delayed delete Function if config['BDII_DELETE_DELAY'] > 0: log.debug("Doing Delayed Delete") delete_timestamp = time.time() # Get DNs of entries to be deleted not yet in delayed delete so their status can be updated new_delayed_delete_file = '%s/new_delayed_delete.pkl' % (config['BDII_VAR_DIR']) try: nfh = open(new_delayed_delete_file,'w') nfh.write("") except IOError: log.error("Unable to open new_delayed_delete file %s" % (new_delayed_delete)) delayed_delete_file = '%s/delayed_delete.pkl' % (config['BDII_VAR_DIR']) if os.path.exists(delayed_delete_file): file_handle = open(delayed_delete_file, 'rb') delay_delete = pickle.load(file_handle) file_handle.close() else: delay_delete = {} # Add remove cache timestamps that have been readded for dn in delay_delete.keys(): if dn not in ldif_delete: log.debug("Removing %s from cache (readded)" %(dn,)) delay_delete.pop(dn) # Add current timestamp for new deletes for dn in ldif_delete: if dn not in delay_delete: delay_delete[dn] = delete_timestamp nfh.write("%s\n" % (dn)) nfh.close() # Remove delayed deletes from LDIF or remove from cache for dn in delay_delete.keys(): if delay_delete[dn] + config['BDII_DELETE_DELAY'] >= delete_timestamp: ldif_delete.remove(dn) else: delay_delete.pop(dn) # Store Delayed Deletes log.debug("Storing delayed deletes") file_handle = open(delayed_delete_file, 'wb') pickle.dump(delay_delete, file_handle) file_handle.close() log.debug("Deleting Old Entries") if ( config['BDII_LOG_LEVEL'] == 'DEBUG' ): 
error_file="%s/delete.err" % (config['BDII_VAR_DIR']) else: error_file=tempfile.mktemp() roots = group_dns(ldif_delete) delete_error_counter = 0 for root in roots.keys(): try: bind = root if "o=shadow" in config['BDII_PASSWD']: if root == "o=grid": bind = "o=shadow" input_fh=os.popen("ldapdelete -d 256 -x -c -h %s -p %s -D %s -y %s >/dev/null 2>%s" %(config['BDII_HOSTNAME'], config['BDII_PORT'], bind, config['BDII_PASSWD_FILE'][bind], error_file), 'w') for dn in roots[root]: input_fh.write("%s\n" % (dn)) log.debug("Deleting %s" %(dn)) input_fh.close() except IOError, KeyError: log.error("Could not delete old entries in the database.") delete_error_counter += log_errors(error_file, ldif_delete) if ( not config['BDII_LOG_LEVEL'] == 'DEBUG' ): os.remove(error_file) roots = group_dns(new_dns) stats['query_start'] = time.time() if ( os.path.exists("%s/old.ldif" % config['BDII_VAR_DIR']) ): os.remove("%s/old.ldif" % config['BDII_VAR_DIR']) if ( os.path.exists("%s/old.err" % config['BDII_VAR_DIR']) ): os.remove("%s/old.err" % config['BDII_VAR_DIR']) for root in roots.keys(): # Stop flapping due to o=shadow if root == "o=shadow": command = "ldapsearch -LLL -x -h %s -p %s -b %s -s base >> %s/old.ldif 2>> %s/old.err" % (config['BDII_HOSTNAME'], config['BDII_PORT'], root, config['BDII_VAR_DIR'], config['BDII_VAR_DIR']) else: command = "ldapsearch -LLL -x -h %s -p %s -b %s >> %s/old.ldif 2>> %s/old.err" % (config['BDII_HOSTNAME'], config['BDII_PORT'], root, config['BDII_VAR_DIR'], config['BDII_VAR_DIR']) result = os.system(command) if ( result > 0): log.error("Query to self failed.") stats['query_stop'] = time.time() out_file="%s/archive/%s-snapshot.gz" % (config['BDII_VAR_DIR'], time.strftime('%y-%m-%d-%H-%M-%S')) log.debug("Creating GZIP file") os.system("gzip -c %s/old.ldif > %s" %(config['BDII_VAR_DIR'], out_file) ) infosys_output="" if (len(old_ldif) == 0 ): log.debug("ldapadd o=infosys compression") command="ldapadd" infosys_output+="dn: o=infosys\n" infosys_output+="objectClass: organization\n" infosys_output+="o: infosys\n\n" infosys_output+="dn: CompressionType=zip,o=infosys\n" infosys_output+="objectClass: CompressedContent\n" infosys_output+="Hostname: %s\n" %(config['BDII_HOSTNAME']) infosys_output+="CompressionType: zip\n" infosys_output+="Data: file://%s\n\n" %(out_file) else: log.debug("ldapmodify o=infosys compression") command="ldapmodify" infosys_output+="dn: CompressionType=zip,o=infosys\n" infosys_output+="changetype: Modify\n" infosys_output+="replace: Data\n" infosys_output+="Data: file://%s\n\n" %(out_file) try: output_fh = os.popen("%s -x -c -h %s -p %s -D o=infosys -y %s >/dev/null" %(command, config['BDII_HOSTNAME'], config['BDII_PORT'], config['BDII_PASSWD_FILE']['o=infosys']), 'w') output_fh.write(infosys_output) output_fh.close() except IOError, KeyError: log.error("Could not add compressed data to the database.") old_files=os.popen("ls -t %s/archive" % (config['BDII_VAR_DIR']) ).readlines() log.debug("Deleting old GZIP files") for file in old_files[config['BDII_ARCHIVE_SIZE']:]: os.remove("%s/archive/%s" % (config['BDII_VAR_DIR'],file.strip())) stats['db_update_stop'] = time.time() stats['update_stop'] = time.time() stats['UpdateTime'] = int(stats['update_stop'] - stats['update_start']) stats['ReadTime'] = int(stats['read_old_stop'] - stats['read_old_start']) stats['ProvidersTime'] = int(stats['providers_stop'] - stats['providers_start']) stats['PluginsTime'] = int(stats['plugins_stop'] - stats['plugins_start']) stats['QueryTime'] = int(stats['query_stop'] - 
stats['query_start']) stats['DBUpdateTime'] = int(stats['db_update_stop'] - stats['db_update_start']) stats['TotalEntries'] = len(old_dns) stats['NewEntries'] = len(ldif_add) stats['ModifiedEntries'] = len(ldif_modify_dns.keys()) stats['DeletedEntries'] = len(ldif_delete) stats['FailedAdds'] = add_error_counter stats['FailedModifies'] = modify_error_counter stats['FailedDeletes'] = delete_error_counter for key in stats.keys(): if ( key.find("_") == -1 ): log.info("%s: %i" % (key, stats[key]) ) infosys_output="" if (len(old_ldif) == 0 ): log.debug("ldapadd o=infosys updatestats") command="ldapadd" infosys_output+="dn: Hostname=%s,o=infosys\n" %(config['BDII_HOSTNAME']) infosys_output+="objectClass: UpdateStats\n" infosys_output+="Hostname: %s\n" %(config['BDII_HOSTNAME']) for key in stats.keys(): if ( key.find("_") == -1): infosys_output+="%s: %i\n" %(key, stats[key]) infosys_output+="\n" else: log.debug("ldapmodify o=infosys updatestats") command="ldapmodify" infosys_output+="dn: Hostname=%s,o=infosys\n" %(config['BDII_HOSTNAME']) infosys_output+="changetype: Modify\n" for key in stats.keys(): if ( key.find("_") == -1): infosys_output+="replace: %s\n" %(key) infosys_output+="%s: %i\n" %(key, stats[key]) infosys_output+="-\n" infosys_output+="\n" try: output_fh = os.popen("%s -x -c -h %s -p %s -D o=infosys -y %s >/dev/null" %(command, config['BDII_HOSTNAME'], config['BDII_PORT'], config['BDII_PASSWD_FILE']['o=infosys']), 'w') output_fh.write(infosys_output) output_fh.close() except IOError, KeyError: log.error("Could not add stats entries to the database.") old_ldif = None new_ldif = None new_dns = None ldif_delete = None ldif_add = None ldif_modify = None log.info("Sleeping for %i seconds" %(int(config['BDII_BREATHE_TIME']))) time.sleep(config['BDII_BREATHE_TIME']) if __name__ == '__main__': config=parse_options() config=get_config(config) if ( config['BDII_DAEMON'] ): create_daemon(config['BDII_LOG_FILE']) # Giving some time for the init.d script to finish time.sleep(3) else: # connect stderr to log file e = os.open(config['BDII_LOG_FILE'], os.O_WRONLY | os.O_APPEND | os.O_CREAT, 0644) os.dup2(e, 2) os.close(e) sys.stderr = os.fdopen(2, 'a', 0) log=get_logger(config['BDII_LOG_FILE'],config['BDII_LOG_LEVEL']) main(config,log) bdii-5.2.22/etc/0000775001227000117040000000000012213331107012616 5ustar ellertellertbdii-5.2.22/etc/bdii-slapd.conf0000664001227000117040000000715412055414352015514 0ustar ellertellertinclude /etc/openldap/schema/core.schema include /etc/openldap/schema/cosine.schema include /etc/openldap/schema/nis.schema include /etc/bdii/BDII.schema include /etc/ldap/schema/Glue-CORE.schema include /etc/ldap/schema/Glue-MDS.schema include /etc/ldap/schema/Glue-CE.schema include /etc/ldap/schema/Glue-CESEBind.schema include /etc/ldap/schema/Glue-SE.schema include /etc/ldap/schema/GLUE20.schema allow bind_v2 pidfile /var/run/bdii/db/slapd.pid argsfile /var/run/bdii/db/slapd.args loglevel 0 idletimeout 120 sizelimit unlimited timelimit 2400 moduleload rwm moduleload back_relay ####################################################################### # GLUE 1.3 database definitions ####################################################################### database hdb suffix "o=grid" cachesize 30000 checkpoint 1024 0 dbnosync rootdn "o=grid" rootpw secret directory /var/lib/bdii/db/grid index GlueCEAccessControlBaseRule eq index GlueCESEBindCEUniqueID eq index GlueCESEBindSEUniqueID eq index GlueCEUniqueID eq index GlueChunkKey eq index GlueClusterUniqueID eq index 
GlueSAAccessControlBaseRule eq index GlueSALocalID eq index GlueSEAccessProtocolType pres index GlueSEUniqueID eq index GlueServiceAccessControlRule eq index GlueServiceAccessControlBaseRule eq index GlueServiceType eq,sub index GlueServiceEndpoint eq,sub index GlueServiceURI eq,sub index GlueServiceDataKey eq index GlueSubClusterUniqueID eq index GlueVOInfoAccessControlBaseRule eq index objectClass eq,pres ####################################################################### # Relay DB to address DIT changes requested by ARC ####################################################################### database relay suffix "GLUE2GroupName=services,o=glue" overlay rwm suffixmassage "GLUE2GroupID=resource,o=glue" database relay suffix "GLUE2GroupName=services,GLUE2DomainID=*,o=glue" overlay rwm suffixmassage "GLUE2GroupID=resource,GLUE2DomainID=*,o=glue" database relay suffix "GLUE2GroupName=services,GLUE2DomainID=*,GLUE2GroupName=grid,o=glue" overlay rwm suffixmassage "GLUE2GroupID=resource,GLUE2DomainID=*,GLUE2GroupID=grid,o=glue" ####################################################################### # GLUE 2.0 database definitions ####################################################################### database hdb suffix "o=glue" cachesize 30000 checkpoint 1024 0 dbnosync rootdn "o=glue" rootpw secret directory /var/lib/bdii/db/glue index GLUE2GroupID eq index GLUE2ExtensionLocalID eq index GLUE2LocationID eq index GLUE2ContactID eq index GLUE2DomainID eq index GLUE2ServiceID eq index GLUE2EndpointID eq index GLUE2ShareID eq index GLUE2ManagerID eq index GLUE2ResourceID eq index GLUE2ActivityID eq index GLUE2PolicyID eq index GLUE2BenchmarkID eq index GLUE2ApplicationEnvironmentID eq index GLUE2ApplicationHandleID eq index GLUE2ToStorageServiceID eq index GLUE2StorageServiceCapacityID eq index GLUE2StorageAccessProtocolID eq index GLUE2StorageShareSharingID eq index GLUE2StorageShareCapacityID eq index GLUE2EndpointInterfaceName eq index GLUE2PolicyRule eq index objectClass eq,pres ####################################################################### # Stats database definitions ####################################################################### database hdb suffix "o=infosys" cachesize 10 checkpoint 1024 0 dbnosync rootdn "o=infosys" rootpw secret directory /var/lib/bdii/db/stats bdii-5.2.22/etc/BDII.schema0000664001227000117040000000760211220377342014524 0ustar ellertellert# # BDII Update Process Monitoring Schema # attributetype ( 1.3.6.1.4.1.8006.100.3.1 NAME 'Hostname' DESC 'The hostname of the BDII this data refers to' EQUALITY caseIgnoreIA5Match SUBSTR caseIgnoreIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.2 NAME 'TotalEntries' DESC 'The number of Entries in the LDAP database' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.3 NAME 'UpdateTime' DESC 'The time in seconds for the update process to complete' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.4 NAME 'DBUpdateTime' DESC 'The time in seconds to update the LDAP database' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.5 NAME 'NewEntries' DESC 'The number of new entries this update' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.6 NAME 'QueryTime' DESC 'The time to query the LDAP database' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.7 NAME 
'ProvidersTime' DESC 'The time in seconds to run the information providers' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.8 NAME 'PluginsTime' DESC 'The time in seconds to run the information plugins' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.9 NAME 'FailedAdds' DESC 'The number failed add entries' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.10 NAME 'ModifiedEntries' DESC 'The number entries which were modified' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.11 NAME 'DeletedEntries' DESC 'The number entries which were deleted' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.12 NAME 'FailedDeletes' DESC 'The entries that failed to delete' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.13 NAME 'FailedModifies' DESC 'The number of entries which failed to be modified' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.3.14 NAME 'ReadTime' DESC 'The time taken to read the old entries' SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 SINGLE-VALUE) objectclass ( 1.2.6.1.4.1.8006.100.3 NAME 'UpdateStats' DESC 'An entity which keeps statistical data for a BDII instance' MUST ( Hostname $ TotalEntries $ UpdateTime $ DBUpdateTime $ NewEntries $ QueryTime $ ProvidersTime $ FailedAdds $ ModifiedEntries $ DeletedEntries $ FailedDeletes $ FailedModifies $ PluginsTime $ ReadTime) ) # # BDII Compresses Content # attributetype ( 1.3.6.1.4.1.8006.100.1.1 NAME 'CompressionType' DESC 'The compression type which the data has been created with' EQUALITY caseIgnoreIA5Match SUBSTR caseIgnoreIA5SubstringsMatch SYNTAX 1.3.6.1.4.1.1466.115.121.1.26 SINGLE-VALUE) attributetype ( 1.3.6.1.4.1.8006.100.1.2 NAME 'Data' DESC 'The compressed data' SYNTAX 1.3.6.1.4.1.1466.115.121.1.5 SINGLE-VALUE) objectclass ( 1.2.6.1.4.1.8006.100.1 NAME 'CompressedContent' DESC 'An entity which keeps the content of a BDII in a compressd format' MUST ( Hostname $ CompressionType $ Data ) ) bdii-5.2.22/etc/init.d/0000775001227000117040000000000012213331107014003 5ustar ellertellertbdii-5.2.22/etc/init.d/bdii0000775001227000117040000002552612176476615014700 0ustar ellertellert#! /bin/bash # # BDII system startup script # $Id: bdii,v 1.9 2009/06/18 14:26:52 lfield Exp $ # chkconfig: - 95 5 # description: BDII Service # config: /etc/bdii/bdii.conf ### BEGIN INIT INFO # Provides: bdii # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Stop: 0 1 2 3 4 5 6 # Short-Description: BDII # Description: Berkeley Database Information Index ### END INIT INFO shopt -s expand_aliases if [ -f /etc/init.d/functions ]; then . /etc/init.d/functions else echo "Error: Cannot source /etc/init.d/functions" fi log_success_msg () { success echo } log_failure_msg() { failure echo } prog=bdii # Debian does not have /var/lock/subsys if [ -d /var/lock/subsys ] ; then LOCK_DIR=/var/lock/subsys else LOCK_DIR=/var/lock fi lockfile=${LOCK_DIR}/$prog RUN=yes if [ -r /etc/default/bdii ] ; then . /etc/default/bdii fi if [ -r /etc/sysconfig/bdii ] ; then . /etc/sysconfig/bdii fi if [ "x$RUN" != "xyes" ] ; then echo "bdii disabled, please adjust the configuration to your needs " echo "and then set RUN to 'yes' in /etc/default/bdii to enable it." exit 0 fi BDII_CONF=${BDII_CONF:-/etc/bdii/bdii.conf} if [ -f "${BDII_CONF}" ]; then . 
"${BDII_CONF}" fi UPDATE_LOCK_FILE=${UPDATE_LOCK_FILE:-${LOCK_DIR}/bdii-update} SLAPD_LOCK_FILE=${SLAPD_LOCK_FILE:-${LOCK_DIR}/bdii-slapd} UPDATE_PID_FILE=${BDII_PID_FILE:-/var/run/bdii/bdii-update.pid} BDII_USER=${BDII_USER:-ldap} BDII_VAR_DIR=${BDII_VAR_DIR:-/var/lib/bdii} BDII_UPDATE=${BDII_UPDATE:-/usr/sbin/bdii-update} SLAPD=${SLAPD:-/usr/sbin/slapd} SLAPD_CONF=${SLAPD_CONF:-/etc/bdii/bdii-slapd.conf} SLAPD_HOST=${SLAPD_HOST:-0.0.0.0} SLAPD_PORT=${SLAPD_PORT:-2170} BDII_IPV6_SUPPORT=${BDII_IPV6_SUPPORT:-no} SLAPD_HOST6=${SLAPD_HOST6:-::} SLAPD_DB_DIR=${SLAPD_DB_DIR:-$BDII_VAR_DIR/db} SLAPD_PID_FILE=${SLAPD_PID_FILE:-/var/run/bdii/db/slapd.pid} DB_CONFIG=${DB_CONFIG:-/etc/bdii/DB_CONFIG} DELAYED_DELETE=${DELAYED_DELETE:-${BDII_VAR_DIR}/delayed_delete.pkl} BDII_RAM_SIZE=${BDII_RAM_SIZE:-1500M} if [ "x${BDII_IPV6_SUPPORT}" == "xyes" ]; then SLAPD_HOST_STRING="'ldap://${SLAPD_HOST}:${SLAPD_PORT} ldap://[${SLAPD_HOST6}]:${SLAPD_PORT}'" else SLAPD_HOST_STRING="ldap://${SLAPD_HOST}:${SLAPD_PORT}" fi if [ -x /sbin/runuser ] ; then RUNUSER=/sbin/runuser else RUNUSER=su fi function start(){ # Check status if [ -f "${SLAPD_LOCK_FILE}" ] || [ -f "${UPDATE_LOCK_FILE}" ] ; then echo -n "Starting BDII: " result=$($0 status) if [ $? -gt 0 ]; then echo ${result} 1>&2 exit 1 else echo "BDII already started" exit 0 fi fi # Create RAM Disk if [ "${BDII_RAM_DISK}" = "yes" ]; then mkdir -p ${SLAPD_DB_DIR} mount -t tmpfs -o size=${BDII_RAM_SIZE},mode=0744 tmpfs ${SLAPD_DB_DIR} fi # Remove delayed_delete.pkl if it exists if [ -f "${DELAYED_DELETE}" ] ; then rm -f ${DELAYED_DELETE} fi #Initialize the database directory. mkdir -p ${SLAPD_DB_DIR}/stats mkdir -p ${SLAPD_DB_DIR}/glue mkdir -p ${SLAPD_DB_DIR}/grid mkdir -p ${BDII_VAR_DIR}/archive chown -R ${BDII_USER}:${BDII_USER} ${BDII_VAR_DIR} chown -R ${BDII_USER}:${BDII_USER} ${SLAPD_DB_DIR} [ -x /sbin/restorecon ] && /sbin/restorecon -R ${BDII_VAR_DIR} mkdir -p /var/run/bdii/db chown -R ${BDII_USER}:${BDII_USER} /var/run/bdii [ -x /sbin/restorecon ] && /sbin/restorecon -R /var/run/bdii/db $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/stats/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/glue/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${SLAPD_DB_DIR}/grid/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/old.ldif 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/grid/" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/stats/" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG} ${SLAPD_DB_DIR}/glue/" if [ ${SLAPD_CONF} = "/etc/bdii/bdii-top-slapd.conf" ] ; then $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/grid/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/stats/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "ln -sf ${DB_CONFIG}_top ${SLAPD_DB_DIR}/glue/DB_CONFIG" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/top-urls.conf/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/top-urls.conf-glue2/* 2>/dev/null" else if [ -r "${BDII_VAR_DIR}/gip/cache" ]; then $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/site-urls.conf/* 2>/dev/null" $RUNUSER -s /bin/sh ${BDII_USER} -c "rm -f ${BDII_VAR_DIR}/gip/cache/gip/site-urls.conf-glue2/* 2>/dev/null" fi fi cd / echo -n "Starting BDII slapd: " COMMAND="${SLAPD} -f ${SLAPD_CONF} -h ${SLAPD_HOST_STRING} -u ${BDII_USER}" eval 
${COMMAND} touch ${SLAPD_LOCK_FILE} if [ ! -f "${SLAPD_PID_FILE}" ]; then sleep 2 fi if [ -f "${SLAPD_PID_FILE}" ]; then ps $(cat ${SLAPD_PID_FILE}) >/dev/null 2>&1 RETVAL=$? else RETVAL=1 fi if [ ${RETVAL} -gt 0 ]; then echo -n "BDII slapd failed to start" 1>&2 rm -f ${SLAPD_LOCK_FILE} eval log_failure_msg echo "${COMMAND} -d 256" ${COMMAND} -d 256 return 1 else eval log_success_msg fi cd / export SLAPD_CONF=${SLAPD_CONF} $RUNUSER -s /bin/sh ${BDII_USER} -c "sh -l -c '${BDII_UPDATE} -c ${BDII_CONF} -d'" touch ${UPDATE_LOCK_FILE} if [ ! -f ${UPDATE_PID_FILE} ]; then sleep 2 fi if [ -f ${UPDATE_PID_FILE} ]; then ps $(cat ${UPDATE_PID_FILE}) >/dev/null 2>&1 RETVAL=$? else RETVAL=1 fi echo -n "Starting BDII update process: " if [ ${RETVAL} -gt 0 ]; then echo -n "BDII update process failed to start" 1>&2 rm -f ${UPDATE_LOCK_FILE} eval log_failure_msg return 1 else eval log_success_msg touch $lockfile return 0 fi } function stop(){ # Check the existance of the lock file if [ ! -f "${SLAPD_LOCK_FILE}" ] && [ ! -f "${UPDATE_LOCK_FILE}" ]; then echo -n "Stopping BDII: " result=$($0 status) if [ $? -gt 0 -a $? -ne 3 ]; then echo ${result} 1>&2 return 1 else echo "BDII already stopped" return 0 fi fi RETVAL=0 echo -n "Stopping BDII update process: " if [ -f "${UPDATE_PID_FILE}" ]; then UPDATE_PID=$(cat ${UPDATE_PID_FILE}) fi $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -15 ${UPDATE_PID} 2>/dev/null" if [ -n "${UPDATE_PID}" ]; then ps ${UPDATE_PID} >/dev/null 2>&1 if [ $? = 0 ]; then sleep 2 ps ${UPDATE_PID} >/dev/null 2>&1 if [ $? = 0 ]; then $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -9 ${UPDATE_PID} 2>/dev/null" sleep 2 ps ${UPDATE_PID} >/dev/null 2>&1 if [ $? = 0 ]; then echo -n "Could not kill BDII update process ${UPDATE_PID}" 1>&2 RETVAL=1 fi fi fi fi if [ ${RETVAL} = 0 ]; then rm -f ${UPDATE_PID_FILE} rm -f ${UPDATE_LOCK_FILE} eval log_success_msg else eval log_failure_msg fi echo -n "Stopping BDII slapd: " if [ -f "${SLAPD_PID_FILE}" ]; then SLAPD_PID=$(cat ${SLAPD_PID_FILE}) fi $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -15 ${SLAPD_PID} 2>/dev/null" if [ -n "${SLAPD_PID}" ]; then ps ${SLAPD_PID} >/dev/null 2>&1 if [ $? = 0 ]; then sleep 2 ps ${SLAPD_PID} >/dev/null 2>&1 if [ $? = 0 ]; then $RUNUSER -s /bin/sh ${BDII_USER} -c "kill -9 ${SLAPD_PID} 2>/dev/null" sleep 2 ps ${SLAPD_PID} >/dev/null 2>&1 if [ $? = 0 ]; then echo -n "Could not kill BDII slapd process ${SLAPD_PID}" 1>&2 RETVAL=2 else rm -f {SLAPD_PID_FILE} fi fi fi fi if [ ${RETVAL} = 2 ]; then eval log_failure_msg else rm -f ${SLAPD_LOCK_FILE} eval log_success_msg fi if [ ! ${RETVAL} = 0 ]; then return 1 else mountpoint -q ${SLAPD_DB_DIR} && umount ${SLAPD_DB_DIR} rm -f $lockfile return 0 fi } function status(){ if [ ! -f "${SLAPD_LOCK_FILE}" ] && [ ! -f "${UPDATE_LOCK_FILE}" ]; then echo -n "BDII Stopped" eval log_success_msg return 3 fi if [ -f ${SLAPD_PID_FILE} ]; then ps $(cat ${SLAPD_PID_FILE}) >/dev/null 2>&1 if [ ! $? = 0 ]; then echo -n "BDII slapd PID file exists but the process died" 1>&2 eval log_failure_msg return 1 fi else echo -n "BDII slapd PID file ${SLAPD_PID_FILE} does not exist" 1>&2 eval log_failure_msg return 1 fi if [ -f ${UPDATE_PID_FILE} ]; then ps $(cat ${UPDATE_PID_FILE}) >/dev/null 2>&1 if [ ! $? 
= 0 ]; then echo -n "BDII update process died" 1>&2 eval log_failure_msg return 1 fi else echo -n "BDII update process failed to start" 1>&2 eval log_failure_msg return 1 fi # Check for hanging process response=$(ldapsearch -LLL -x -h ${SLAPD_HOST} -p ${SLAPD_PORT} -b o=infosys objectClass=UpdateStats modifyTimestamp 2>/dev/null | grep modifyTimestamp ) if [ $? -eq 0 ]; then time_stamp=$(echo ${response} | cut -d" " -f2) time_string=$(echo ${time_stamp} | sed 's/^\([0-9][0-9][0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\)\([0-9][0-9]\).*/\1-\2-\3 \4:\5/') time_int=$(date --utc --date "${time_string}" +%s) let time_threshold=${time_int}+1200 time_now=$(date --utc +%s) if [ ${time_now} -gt ${time_threshold} ]; then echo -n "BDII update process hanging" 1>&2 eval log_failure_msg return 1 fi fi echo -n "BDII Running " eval log_success_msg return 0 } case "$1" in start) start RETVAL=$? ;; stop) stop RETVAL=$? ;; status) status RETVAL=$? ;; reload) ;; restart | force-reload) stop start RETVAL=$? ;; condrestart | try-restart) if [ -f ${SLAPD_LOCK_FILE} ] || [ -f ${UPDATE_LOCK_FILE} ]; then stop start RETVAL=$? fi ;; *) echo $"Usage: $0 {start|stop|restart|status|condrestart}" RETVAL=1 esac exit ${RETVAL} bdii-5.2.22/etc/default.ldif0000664001227000117040000000066711475414752015124 0ustar ellertellertdn: o=shadow objectClass: organization o: o=shadow dn: o=grid objectClass: organization o: grid dn: mds-vo-name=local,o=grid objectClass: MDS mds-vo-name: local dn: mds-vo-name=resource,o=grid objectClass: MDS mds-vo-name: resource dn: o=glue objectClass: organization o: glue dn: GLUE2GroupID=resource, o=glue objectClass: GLUE2Group GLUE2GroupID: resource dn: GLUE2GroupID=grid, o=glue objectClass: GLUE2Group GLUE2GroupID: grid bdii-5.2.22/etc/cron.d/0000775001227000117040000000000012213331107014001 5ustar ellertellertbdii-5.2.22/etc/bdii.conf0000664001227000117040000000056512012717512014407 0ustar ellertellertBDII_LOG_FILE=/var/log/bdii/bdii-update.log BDII_PID_FILE=/var/run/bdii/bdii-update.pid BDII_LOG_LEVEL=ERROR BDII_LDIF_DIR=/var/lib/bdii/gip/ldif BDII_PROVIDER_DIR=/var/lib/bdii/gip/provider BDII_PLUGIN_DIR=/var/lib/bdii/gip/plugin BDII_PORT=2170 BDII_BREATHE_TIME=120 BDII_READ_TIMEOUT=300 BDII_ARCHIVE_SIZE=0 BDII_DELETE_DELAY=0 BDII_USER=ldap BDII_VAR_DIR=/var/lib/bdii bdii-5.2.22/etc/bdii-top-slapd.conf0000664001227000117040000000762112052713004016304 0ustar ellertellertinclude /etc/openldap/schema/core.schema include /etc/openldap/schema/cosine.schema include /etc/openldap/schema/nis.schema include /etc/bdii/BDII.schema include /etc/ldap/schema/Glue-CORE.schema include /etc/ldap/schema/Glue-MDS.schema include /etc/ldap/schema/Glue-CE.schema include /etc/ldap/schema/Glue-CESEBind.schema include /etc/ldap/schema/Glue-SE.schema include /etc/ldap/schema/GLUE20.schema allow bind_v2 pidfile /var/run/bdii/db/slapd.pid argsfile /var/run/bdii/db/slapd.args loglevel 0 idletimeout 120 sizelimit unlimited timelimit 2400 moduleload rwm moduleload back_relay ####################################################################### # GLUE 1.3 database definitions ####################################################################### database hdb cachesize 300000 dbnosync suffix "o=shadow" checkpoint 1024 0 rootdn "o=shadow" rootpw secret directory /var/lib/bdii/db/grid index GlueCEAccessControlBaseRule eq index GlueCESEBindCEUniqueID eq index GlueCESEBindSEUniqueID eq index GlueCEUniqueID eq index GlueChunkKey eq index GlueClusterUniqueID eq index GlueSAAccessControlBaseRule eq index GlueSALocalID eq
index GlueSEAccessProtocolType pres index GlueSEUniqueID eq index GlueServiceAccessControlRule eq index GlueServiceAccessControlBaseRule eq index GlueServiceType eq,sub index GlueServiceEndpoint eq,sub index GlueServiceURI eq,sub index GlueServiceDataKey eq index GlueSubClusterUniqueID eq index GlueVOInfoAccessControlBaseRule eq index objectClass eq,pres ####################################################################### # Relay DB to address performance issues ####################################################################### database relay suffix "o=grid" overlay rwm suffixmassage "o=grid,o=shadow" ####################################################################### # Relay DB to address DIT changes requested by ARC ####################################################################### database relay suffix "GLUE2GroupName=services,o=glue" overlay rwm suffixmassage "GLUE2GroupID=resource,o=glue" database relay suffix "GLUE2GroupName=services,GLUE2DomainID=*,o=glue" overlay rwm suffixmassage "GLUE2GroupID=resource,GLUE2DomainID=*,o=glue" database relay suffix "GLUE2GroupName=services,GLUE2DomainID=*,GLUE2GroupName=grid,o=glue" overlay rwm suffixmassage "GLUE2GroupID=resource,GLUE2DomainID=*,GLUE2GroupID=grid,o=glue" ####################################################################### # GLUE 2.0 database definitions ####################################################################### database hdb cachesize 300000 dbnosync suffix "o=glue" checkpoint 1024 0 rootdn "o=glue" rootpw secret directory /var/lib/bdii/db/glue index GLUE2GroupID eq index GLUE2ExtensionLocalID eq index GLUE2LocationID eq index GLUE2ContactID eq index GLUE2DomainID eq index GLUE2ServiceID eq index GLUE2EndpointID eq index GLUE2ShareID eq index GLUE2ManagerID eq index GLUE2ResourceID eq index GLUE2ActivityID eq index GLUE2PolicyID eq index GLUE2BenchmarkID eq index GLUE2ApplicationEnvironmentID eq index GLUE2ApplicationHandleID eq index GLUE2ToStorageServiceID eq index GLUE2StorageServiceCapacityID eq index GLUE2StorageAccessProtocolID eq index GLUE2StorageShareSharingID eq index GLUE2StorageShareCapacityID eq index GLUE2EndpointInterfaceName eq index GLUE2PolicyRule eq index objectClass eq,pres ####################################################################### # Stats database definitions ####################################################################### database hdb cachesize 10 dbnosync suffix "o=infosys" checkpoint 1024 0 rootdn "o=infosys" rootpw secret directory /var/lib/bdii/db/stats bdii-5.2.22/etc/sysconfig/0000775001227000117040000000000012213331107014622 5ustar ellertellertbdii-5.2.22/etc/sysconfig/bdii0000664001227000117040000000011711535366511015470 0ustar ellertellert#SLAPD_CONF=/etc/bdii/bdii-slapd.conf #SLAPD=/usr/sbin/slapd #BDII_RAM_DISK=no bdii-5.2.22/etc/DB_CONFIG_top0000664001227000117040000000077612176432201014754 0ustar ellertellert# Maintain transaction logs in memory rather than on disk set_flags DB_LOG_INMEMORY # Set in-memory transaction log cache (10MB) set_lg_bsize 10485760 # Set the maximum size of log files (40MB) set_lg_max 41943040 # Automatically remove log files as soon as they are no longer needed set_flags DB_LOG_AUTOREMOVE # Do not write or synchronously flush the log on transaction commit set_flags DB_TXN_NOSYNC # Set the size of the shared memory buffer pool (gbytes, bytes, ncache) set_cachesize 0 524288000 1 bdii-5.2.22/etc/DB_CONFIG0000664001227000117040000000042411714443722014067 0ustar ellertellert# Maintain transaction logs in memory rather than 
on disk set_flags DB_LOG_INMEMORY # Automatically remove log files as soon as they are no longer needed set_flags DB_LOG_AUTOREMOVE # Do not write or synchronously flush the log on transaction commit set_flags DB_TXN_NOSYNC bdii-5.2.22/etc/logrotate.d/0000775001227000117040000000000012213331107015040 5ustar ellertellertbdii-5.2.22/etc/logrotate.d/bdii0000664001227000117040000000015211220377342015700 0ustar ellertellert/var/log/bdii/bdii-update.log { daily rotate 30 missingok compress copytruncate } bdii-5.2.22/bdii.spec0000664001227000117040000002245012212372532013637 0ustar ellertellertName: bdii Version: 5.2.22 Release: 1%{?dist} Summary: The Berkeley Database Information Index (BDII) Group: System Environment/Daemons License: ASL 2.0 URL: http://gridinfo.web.cern.ch # The source for this package was pulled from upstream's vcs. Use the # following commands to generate the tarball: # svn export http://svnweb.cern.ch/guest/gridinfo/bdii/tags/R_5_2_22_1 %{name}-%{version} # tar --gzip -czvf %{name}-%{version}.tar.gz %{name}-%{version} Source: %{name}-%{version}.tar.gz BuildArch: noarch BuildRoot: %{_tmppath}/%{name}-%{version}-build %if "%{?dist}" == ".el5" Requires: openldap2.4-servers Requires: openldap2.4-clients %endif Requires: openldap-clients Requires: openldap-servers Requires: glue-schema >= 2.0.0 Requires(post): chkconfig Requires(post): expect Requires(preun): chkconfig Requires(preun): initscripts Requires(postun): initscripts %if %{?fedora}%{!?fedora:0} >= 5 || %{?rhel}%{!?rhel:0} >= 5 Requires(post): policycoreutils Requires(postun): policycoreutils %if %{?fedora}%{!?fedora:0} >= 11 || %{?rhel}%{!?rhel:0} >= 6 Requires(post): policycoreutils-python Requires(postun): policycoreutils-python %endif %endif %description The Berkeley Database Information Index (BDII) consists of a standard LDAP database which is updated by an external process. The update process obtains LDIF from a number of sources and merges them. It then compares this to the contents of the database and creates an LDIF file of the differences. This is then used to update the database. %prep %setup -q %build %install rm -rf %{buildroot} make install prefix=%{buildroot} chmod 644 %{buildroot}%{_sysconfdir}/sysconfig/%{name} %clean rm -rf %{buildroot} %pre # Temp fix for upgrade from 5.2.5 to 5.2.7 service %{name} status > /dev/null 2>&1 if [ $? -eq 0 ]; then touch %{_localstatedir}/run/%{name}/bdii.upgrade service %{name} stop > /dev/null 2>&1 fi %post sed "s/\(rootpw *\)secret/\1$(mkpasswd -s 0 | tr '/' 'x')/" \ -i %{_sysconfdir}/%{name}/bdii-slapd.conf \ %{_sysconfdir}/%{name}/bdii-top-slapd.conf # Temp fix for upgrade from 5.2.5 to 5.2.7 if [ -f %{_localstatedir}/run/%{name}/bdii.upgrade ]; then rm -f %{_localstatedir}/run/%{name}/bdii.upgrade service %{name} start > /dev/null 2>&1 fi /sbin/chkconfig --add %{name} %if %{?fedora}%{!?fedora:0} >= 5 || %{?rhel}%{!?rhel:0} >= 5 semanage port -a -t ldap_port_t -p tcp 2170 2>/dev/null || : semanage fcontext -a -t slapd_db_t "%{_localstatedir}/lib/%{name}/db(/.*)?" 2>/dev/null || : semanage fcontext -a -t slapd_var_run_t "%{_localstatedir}/run/%{name}/db(/.*)?" 2>/dev/null || : # Remove selinux labels for old bdii var dir semanage fcontext -d -t slapd_db_t "%{_localstatedir}/run/%{name}(/.*)?" 
2>/dev/null || : %endif %preun if [ $1 -eq 0 ]; then service %{name} stop > /dev/null 2>&1 /sbin/chkconfig --del %{name} fi %postun if [ $1 -ge 1 ]; then service %{name} condrestart > /dev/null 2>&1 fi %if %{?fedora}%{!?fedora:0} >= 5 || %{?rhel}%{!?rhel:0} >= 5 if [ $1 -eq 0 ]; then semanage port -d -t ldap_port_t -p tcp 2170 2>/dev/null || : semanage fcontext -d -t slapd_db_t "%{_localstatedir}/lib/%{name}/db(/.*)?" 2>/dev/null || : semanage fcontext -d -t slapd_var_run_t "%{_localstatedir}/run/%{name}/db(/.*)?" 2>/dev/null || : fi %endif %files %defattr(-,root,root,-) %attr(-,ldap,ldap) %{_localstatedir}/lib/%{name} %attr(-,ldap,ldap) %{_localstatedir}/log/%{name} %dir %{_sysconfdir}/%{name} %config(noreplace) %{_sysconfdir}/%{name}/DB_CONFIG %config(noreplace) %{_sysconfdir}/%{name}/DB_CONFIG_top %config(noreplace) %{_sysconfdir}/%{name}/bdii.conf %config(noreplace) %{_sysconfdir}/%{name}/BDII.schema %attr(-,ldap,ldap) %config %{_sysconfdir}/%{name}/bdii-slapd.conf %attr(-,ldap,ldap) %config %{_sysconfdir}/%{name}/bdii-top-slapd.conf %config(noreplace) %{_sysconfdir}/sysconfig/%{name} %config(noreplace) %{_sysconfdir}/logrotate.d/%{name} %{_initrddir}/%{name} %{_sbindir}/bdii-update %{_mandir}/man1/bdii-update.1* %doc copyright %changelog * Fri Sep 9 2013 Maria Alandes - 5.2.22-1 - BUG #102503: Make /var/run/bdii configurable * Fri Aug 2 2013 Maria Alandes - 5.2.21-1 - Add plugin modifications to LDIF modify instead of LDIF new for cached objects - Do not clean glite-update-endpoints cache files - Fixed wrong 'if' check in init.d script - BUG #99298: Set status attributes of delayed delete entries to 'Unknown' - BUG #102014: Clean caches after a BDII restart - BUG #101709: Start bdii-update daemon with -l option - BUG #102140: Start daemons from "/" - BUG #101389: RAM size can be now configured - BUG #101398: Defined the max log file size for the LDAP DB backend in top level BDIIs * Fri May 31 2013 Maria Alandes - 5.2.20-1 - Changed URL in spec file to point to new Information System web pages - Added missing dist in the rpm target of the Makefile * Fri May 31 2013 Maria Alandes - 5.2.19-1 - BUG #101090: added missing symlink to DB_CONFIG_top for GLUE2 DB backend * Fri May 03 2013 Maria Alandes - 5.2.18-1 - BUG #101237: bdii-update: GLUE2 entries marked for deletion keep the correct case and can be deleted * Tue Jan 15 2013 Maria Alandes - 5.2.17-1 - BUG #99622: Add dependency on openldap2.4-clients in SL5 * Thu Jan 10 2013 Maria Alandes - 5.2.16-1 - BUG #99622: Add dependency on openldap2.4-servers in SL5 * Wed Nov 28 2012 Maria Alandes - 5.2.15-1 - Fixes after testing: Load rwm and back_relay modules in the slapd configuration for site and resource BDII * Tue Nov 20 2012 Maria Alandes - 5.2.14-1 - BUG #98931: /sbin/runuser instead of runuser - BUG #98711: Optimise LDAP queries in GLUE 2.0 - BUG #98682: Delete delayed_delete.pkl when BDII is restarted - BUG #97717: Relay database created to be able to define the GLUE2GroupName and services alias * Wed Aug 15 2012 Laurence Field - 5.2.13-1 - Included Fedora patches upstream: - BUG #97223: Changes needed for EPEL - BUG #97217: Issues with lsb dependencies * Fri Jul 20 2012 Maria Alandes - 5.2.12-1 - Fixed BDII_IPV6_SUPPORT after testing * Wed Jul 18 2012 Maria Alandes - 5.2.11-1 - BUG 95122: Created SLAPD_DB_DIR directoy with correct ownership if it doesn't exist - BUG 95839: Added BDII_IPV6_SUPPORT * Thu Mar 8 2012 Laurence Field - 5.2.10-1 - New upsteam version that includes a new DB_CONFIG * Tue Feb 8 2012 Laurence Field - 5.2.9-1 - 
Fixed /var/run packaging issue * Tue Feb 8 2012 Laurence Field - 5.2.8-1 - Fixed a base64 encoding issue and added /var/run/bdii to the package * Tue Feb 7 2012 Laurence Field - 5.2.7-1 - Performance improvements to reduce memory and disk usage * Wed Jan 25 2012 Laurence Field - 5.2.6-1 - New upstream version that includes fedora patches and fix for EGI RT 3235 * Thu Jan 12 2012 Fedora Release Engineering - 5.2.5-2 - Rebuilt for https://fedoraproject.org/wiki/Fedora_17_Mass_Rebuild * Sun Sep 4 2011 Mattias Ellert - 5.2.5-1 - New upstream version 5.2.5 * Tue Jul 26 2011 Mattias Ellert - 5.2.4-1 - New upstream version 5.2.4 - Drop patch accepted upstream: bdii-mdsvo.patch - Move large files away from /var/run in order not to fill up /run partition * Mon Jun 27 2011 Mattias Ellert - 5.2.3-2 - Revert upstream hack that breaks ARC infosys * Mon Jun 13 2011 Mattias Ellert - 5.2.3-1 - New upstream version 5.2.3 - Drop patches accepted upstream: bdii-runuser.patch, bdii-context.patch, bdii-default.patch, bdii-shadowerr.patch, bdii-sysconfig.patch * Mon Feb 07 2011 Fedora Release Engineering - 5.1.13-2 - Rebuilt for https://fedoraproject.org/wiki/Fedora_15_Mass_Rebuild * Sat Jan 01 2011 Mattias Ellert - 5.1.13-1 - New upstream version 5.1.13 - Move restorecon from post sctiptlet to startup script in order to support /var/run on tmpfs * Thu Sep 23 2010 Mattias Ellert - 5.1.9-1 - New upstream version 5.1.9 * Thu Sep 02 2010 Mattias Ellert - 5.1.8-1 - New upstream version 5.1.8 * Fri Jun 18 2010 Mattias Ellert - 5.1.7-1 - New upstream version 5.1.7 * Sun May 23 2010 Mattias Ellert - 5.1.5-1 - New upstream release 5.1.5 - Get rid of lsb initscript dependency * Mon Apr 05 2010 Mattias Ellert - 5.1.0-1 - New upstream verison 5.1.0 - Add SELinux context management to scriptlets * Thu Mar 25 2010 Mattias Ellert - 5.0.8-4.460 - Update (svn revision 460) - Use proper anonymous svn checkout instead of svnweb generated tarball * Fri Feb 26 2010 Mattias Ellert - 5.0.8-3.443 - Update (svn revision 443) * Wed Feb 24 2010 Mattias Ellert - 5.0.8-2.436 - Update (svn revision 436) * Mon Feb 08 2010 Mattias Ellert - 5.0.8-1.375 - Initial package (svn revision 375) bdii-5.2.22/Makefile0000664001227000117040000000513412152073536013521 0ustar ellertellert NAME= $(shell grep Name: *.spec | sed 's/^[^:]*:[^a-zA-Z]*//' ) VERSION= $(shell grep Version: *.spec | sed 's/^[^:]*:[^0-9]*//' ) RELEASE= $(shell grep Release: *.spec |cut -d"%" -f1 |sed 's/^[^:]*:[^0-9]*//') build=$(shell pwd)/build DATE=$(shell date "+%a, %d %b %Y %T %z") dist=$(shell rpm --eval '%dist' | sed 's/%dist/.el5/') init_dir=$(shell rpm --eval '%{_initrddir}' || echo '/etc/init.d/') default: @echo "Nothing to do" install: @echo installing ... 
@mkdir -p $(prefix)/usr/sbin/ @mkdir -p $(prefix)/var/run/bdii/ @mkdir -p $(prefix)/var/lib/bdii/gip/ldif/ @mkdir -p $(prefix)/var/lib/bdii/gip/provider/ @mkdir -p $(prefix)/var/lib/bdii/gip/plugin/ @mkdir -p $(prefix)/etc/bdii/ @mkdir -p $(prefix)/etc/sysconfig/ @mkdir -p $(prefix)$(init_dir)/ @mkdir -p $(prefix)/etc/logrotate.d/ @mkdir -p $(prefix)/var/log/bdii/ @mkdir -p $(prefix)/usr/share/man/man1 @install -m 0755 etc/init.d/bdii $(prefix)/${init_dir}/ @install -m 0644 etc/sysconfig/bdii $(prefix)/etc/sysconfig/ @install -m 0755 bin/bdii-update $(prefix)/usr/sbin/ @install -m 0644 etc/bdii.conf $(prefix)/etc/bdii/ @install -m 0644 etc/BDII.schema $(prefix)/etc/bdii/ @install -m 0640 etc/bdii-slapd.conf $(prefix)/etc/bdii/ @install -m 0640 etc/bdii-top-slapd.conf $(prefix)/etc/bdii/ @install -m 0644 etc/DB_CONFIG $(prefix)/etc/bdii/ @install -m 0644 etc/DB_CONFIG_top $(prefix)/etc/bdii/ @install -m 0644 etc/default.ldif $(prefix)/var/lib/bdii/gip/ldif/ @install -m 0644 etc/logrotate.d/bdii $(prefix)/etc/logrotate.d @install -m 0644 man/bdii-update.1 $(prefix)/usr/share/man/man1/ dist: @mkdir -p $(build)/$(NAME)-$(VERSION)/ rsync -HaS --exclude ".svn" --exclude "$(build)" * $(build)/$(NAME)-$(VERSION)/ cd $(build); tar --gzip -cf $(NAME)-$(VERSION).tar.gz $(NAME)-$(VERSION)/; cd - sources: dist cp $(build)/$(NAME)-$(VERSION).tar.gz . deb: dist cd $(build)/$(NAME)-$(VERSION); dpkg-buildpackage -us -uc; cd - # ETICS packager can't find build packages.... mkdir $(build)/deb ; cp $(build)/*.deb $(build)/*.dsc $(build)/deb/ prepare: dist @mkdir -p $(build)/RPMS/noarch @mkdir -p $(build)/SRPMS/ @mkdir -p $(build)/SPECS/ @mkdir -p $(build)/SOURCES/ @mkdir -p $(build)/BUILD/ cp $(build)/$(NAME)-$(VERSION).tar.gz $(build)/SOURCES srpm: prepare rpmbuild -bs --define="dist ${dist}" --define='_topdir ${build}' $(NAME).spec rpm: srpm rpmbuild --rebuild --define='_topdir ${build} ' --define="dist ${dist}" $(build)/SRPMS/$(NAME)-$(VERSION)-$(RELEASE)${dist}.src.rpm clean: rm -f *~ $(NAME)-$(VERSION).tar.gz rm -rf $(build) .PHONY: dist srpm rpm sources clean bdii-5.2.22/copyright0000664001227000117040000000123311340725717014013 0ustar ellertellertCopyright (c) Members of the EGEE Collaboration. 2004. See http://www.eu-egee.org/partners/ for details on the copyright holders. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at # http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
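The Makefile above is the single entry point for staging an install and for rolling the tarball, RPM and Debian artefacts. As a rough illustration of how its targets might be exercised from the top of an unpacked source tree (the staging prefix /tmp/bdii-stage is only an example, and the rpm target assumes rpmbuild is available):

# Stage the files under a scratch prefix, the same call the spec's %install section makes
make install prefix=/tmp/bdii-stage

# Create build/bdii-5.2.22.tar.gz and copy it to the current directory
make sources

# Build the source RPM, then rebuild it into the binary RPM under ./build
make rpm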
bdii-5.2.22/debian/0000775001227000117040000000000012213331107013265 5ustar ellertellertbdii-5.2.22/debian/changelog0000664001227000117040000000117111732102617015146 0ustar ellertellertbdii (5.2.10-1) UNRELEASED; urgency=low * New upstream release -- Andrew Elwell Thu, 15 Mar 2012 12:00:00 +0100 bdii (5.0.8+443-1) UNRELEASED; urgency=low * Updated packaging etc - svn revision 443 -- Daniel Johansson Thu, 25 Feb 2010 16:58:51 +0100 bdii (5.0.8+436-1) UNRELEASED; urgency=low * Update - svn revision 436 -- Mattias Ellert Wed, 24 Feb 2010 09:07:50 +0100 bdii (5.0.8+375-1) UNRELEASED; urgency=low * Initial release - svn revision 375 -- Mattias Ellert Mon, 08 Feb 2010 16:27:40 +0100 bdii-5.2.22/debian/patches/0000775001227000117040000000000012213331107014714 5ustar ellertellertbdii-5.2.22/debian/patches/series0000664001227000117040000000002211336275562016144 0ustar ellertellert#Add patches here bdii-5.2.22/debian/bdii.default0000664001227000117040000000001111336275562015553 0ustar ellertellertRUN="no" bdii-5.2.22/debian/bdii.postinst0000664001227000117040000000031411336275606016017 0ustar ellertellert#!/bin/sh set -e sed "s/\(rootpw *\)secret/\1$(mkpasswd -s 0 | tr '/' 'x')/" -i /etc/bdii/bdii-slapd.conf chown -R openldap:openldap /var/lib/bdii chown -R openldap:openldap /var/log/bdii #DEBHELPER# bdii-5.2.22/debian/rules0000775001227000117040000000070211732361436014361 0ustar ellertellert#!/usr/bin/make -f %: dh $@ override_dh_auto_install: $(MAKE) prefix=debian/bdii install slapd_modulepath="modulepath /usr/lib/ldap" ; \ slapd_moduleload="moduleload back_hdb" ; \ sed -e "/allow bind_v2/i$${slapd_modulepath}\n$${slapd_moduleload}" \ -e "s!etc/openldap/schema!etc/ldap/schema!" \ -i debian/bdii/etc/bdii/bdii-slapd.conf ; \ sed "s/BDII_USER=.*/BDII_USER=openldap/" \ -i debian/bdii/etc/bdii/bdii.conf bdii-5.2.22/debian/README.source0000664001227000117040000000020611341521272015447 0ustar ellertellertThe source is a svn snapshot downloaded using the following URL: http://svnweb.cern.ch/world/wsvn/gridinfo/bdii/trunk/?op=dl&rev=443 bdii-5.2.22/debian/copyright0000664001227000117040000000147111341510312015221 0ustar ellertellertName: bdii Copyright: © 2004, Members of the EGEE Collaboration as defined on http://www.eu-egee.org/partners License: Apache-2.0 Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. On Debian and Ubuntu systems the Apache license is available on '/usr/share/common-licenses/Apache-2.0'. 
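The debian/ directory wraps the same Makefile with debhelper: debian/rules calls dh, runs make install into the package tree, and then patches bdii-slapd.conf and bdii.conf for the Debian schema paths and the openldap user, while debian/bdii.postinst sets the slapd rootpw and file ownership. A hand-run build equivalent to the Makefile's deb target might look like the following, assuming the build dependencies from debian/control are installed and the exported source tree is named bdii-5.2.22:

cd bdii-5.2.22
dpkg-buildpackage -us -uc   # unsigned source and binary packages, as in the deb target
# debian/bdii.default ships RUN="no"; debhelper normally installs it as /etc/default/bdii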
bdii-5.2.22/debian/control0000664001227000117040000000161211732353431014701 0ustar ellertellertSource: bdii Maintainer: Mattias Ellert Section: net Priority: optional Build-Depends: debhelper (>= 5), python-support Standards-Version: 3.9.3 Homepage: https://twiki.cern.ch/twiki/bin/view/EGEE/BDII Vcs-Browser: http://svnweb.cern.ch/world/wsvn/gridinfo/bdii/ Vcs-Svn: https://svn.cern.ch/reps/gridinfo/bdii/ DM-Upload-Allowed: yes Package: bdii Architecture: all Depends: slapd, ldap-utils, glue-schema, whois, ${misc:Depends}, ${python:Depends} Suggests: logrotate Description: The Berkeley Database Information Index (BDII) The Berkeley Database Information Index (BDII) consists of a standard LDAP database which is updated by an external process. The update process obtains LDIF from a number of sources and merges them. It then compares this to the contents of the database and creates an LDIF file of the differences. This is then used to update the database. bdii-5.2.22/debian/compat0000664001227000117040000000000211732102617014472 0ustar ellertellert8
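With either package installed and the service started, the branches defined above can be checked by hand with the same kind of queries the init script's status() function uses; the examples below assume slapd is listening on the default port 2170 on the local host:

# GLUE 1.3 and GLUE 2.0 branches (o=grid is relayed onto o=shadow by bdii-top-slapd.conf)
ldapsearch -LLL -x -h localhost -p 2170 -b o=grid
ldapsearch -LLL -x -h localhost -p 2170 -b o=glue

# Update-cycle statistics; status() flags the update process as hanging when this
# modifyTimestamp is more than 1200 seconds (20 minutes) old
ldapsearch -LLL -x -h localhost -p 2170 -b o=infosys objectClass=UpdateStats modifyTimestamp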