dnsrecon-0.8.12/.gitignore

*.orig
*.pyc
*.pyo

dnsrecon-0.8.12/README.md

## Version 0.8.12
**Date: 12/12/17**

**Changes:**
- Removed AXFR from the std enumeration type unless -a is specified.
- Fixed processing of TXT records.

## Version 0.8.11
**Date: 10/23/17**

**Changes:**
- Bug fix for Python 3.6.x and the Google enumeration type.
- Merged PR for Bing support.
- Fixed issue when doing zone walks on servers without a SOA record.

## Version 0.8.9
### Date: 1/14/14
### Changes:
- Bug fixes.

## Version 0.8.8
### Date: 4/14/14
### Changes:
- Minor bug fixes in the parsing tool and dnsrecon.
- Support for saving results to a JSON file.
- Bug fixes for:
  - Parsing SPF and TXT records when saving to XML, CSV and SQLite3.
  - Filtering of wildcard records when brute forcing a forward lookup zone.
  - Several typos and misspelled words.

## Version 0.8.5
### Date: 5/25/13
### Changes:
- Changed the way IP ranges are handled.
- Greatly improved speed and memory use in reverse lookups of large networks.

## Version 0.8.4
### Date: 5/19/13
### Changes:
- Improved Whois parsing for ranges and organization.
- Better Whois record and request handling for RIPE and APNIC.
- Several bug fixes.
- Added print messages when saving output to files.

## Version 0.8.1
### Changes:
- Improved DNSSEC zone walk.
- Several bug fixes for exporting data and parsing records in zone transfers.
- Named the DigiNinja Edition for all his hard work in making dnsrecon better.

## Version 0.7.8
### Date: 7/8/12
### Changes:
- CSV files now have a proper header for better parsing in tools that support them, like Excel and PowerShell.
- Windows system console printing is now managed properly.
- CNAME records are now saved in SQLite3 and CSV output.
- Fixed an error when performing zone transfers that was caused by improper indentation.
- Fixed mislabeling of the -c option in the help message.
- If a range or CIDR is provided and no scan type is given, dnsrecon will perform a reverse lookup against it. When other types are given, the rvl type will be appended to the list automatically.
- Improved NSEC type detection to eliminate possible false positives.
- Added processing of LOC, NAPTR, CERT and RP records in returned zone transfers. This information is saved in the XML output with proper field names in the attributes.
- Fixes to Google enumeration parsing.
- Fixed several typos.
- Better handling and canceling of threaded tasks.

## Version 0.7.3
### Date: 5/2/12
### Changes:
- Fixes for Python 3 compatibility.
- Fixed key values when saving results to XML and CSV.

## Version 0.7.0
### Date: 3/2/12
### Changes:
- Fixes to the zonewalk option.
- Query for the _domainkey record in standard enumeration.

## Version 0.6.8
### Date: 2/15/12
### Changes:
- Added a tool folder with a Python script for parsing results in XML and CSV format.
- Added the ability to filter and extract hostnames and subdomains.
- Added a Metasploit plugin for importing the CSV and XML results into Metasploit in a very fast manner, using Nokogiri for XML. It will add hosts, notes for hostnames, and service entries.
- Improvements to the zone transfer code:
  - Handling of zones with no NS records.
  - Proper parsing of PTR records in returned zones.
  - De-duplication of NS record IP addresses.
  - Provide additional info on failure.
  - Provide more info on the actions being taken.
- Bug fixes reported by users at RandomStorm and by Robin Wood.
- Zone walking has been greatly improved, including the accuracy of the results and formatting that extracts the information in a manner more useful to a pentester.

## Version 0.6.6
### Date: 1/20/12
### Changes:
- Does not force an origin check for transferred zones, since some admins may have configured their zones without NS servers, as experienced by a user.
- Handles the exception raised when NS records cannot be resolved while performing a zone transfer test.
- Will always query for the SOA record and test it for zone transfer.
- Fixed a problem when generating an XML file from a zone transfer with the new parsing method. The info type was added to the XML output.

## Version 0.6.5
### Date: 1/16/12
### Changes:
- Fixed a problem with get_ns.
- Python 3.2 support.
- Color printing of messages like Metasploit.
- New library for printing color messages.
- Improved parsing of records when there is a zone transfer.

## Version 0.6.1
### Date: 1/14/12
### Changes:
- IPv6 support for ranges in reverse lookup.
- Enhanced parsing of SPF record ranges to cover includes and IPv6.
- Specific host query for the TXT RR.
- Better handling and logging of TXT and SPF RRs.
- Started changes for Python 3.x compatibility.
- Filtering of wildcard records when saving brute force results.
- Show found records after the brute force of a domain is finished.
- Manage Ctrl-C when doing a brute force and save results for those records found.
- Corrected several spelling errors.

## Version 0.6
### Date: 1/11/12
### Changes:
- Removed mDNS enumeration, since the pybonjour library has been abandoned and faster ways are available to enumerate mDNS records in a subnet.
- Removed unused variables.
- Applied changes for PEP8 compliance.
- Added comma-delimited output of the results to a file.

## Version 0.5.1
### Date: 1/8/12
### Changes:
- Additional fixes for XML formatting.
- Ability to end a zonewalk with Ctrl-C and not lose data.
- Initial Metasploit plug-in to import data from the XML file generated by dnsrecon.

## Version 0.5
### Date: 1/8/12
### Changes:
- In standard enumeration, will check whether DNSSEC is configured for the zone by checking for DNSKEY records and whether the zone is configured as NSEC or NSEC3.
- With the get_ip() method it will also check for CNAME records and add those to the list of found hosts.
- Will perform a DNSSEC zonewalk if NSEC records are available. It currently identifies A, AAAA, CNAME, NS and SRV records. For any other type, it will just print the RDATA info.
- General record resolver method added.
- Changes to the options.

### Known Issues:
- For some reason, Python's getopt is not parsing the options correctly in some cases. Considering changing to optparse even if it is more complicated to manage.
- When running Python 3.x, the Whois query does not show the organization.

dnsrecon-0.8.12/dnsrecon.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

#    DNSRecon
#
#    Copyright (C) 2015  Carlos Perez
#
#    This program is free software; you can redistribute it and/or modify
#    it under the terms of the GNU General Public License as published by
#    the Free Software Foundation; Applies version 2 of the License.
#
#    This program is distributed in the hope that it will be useful,
#    but WITHOUT ANY WARRANTY; without even the implied warranty of
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
#    GNU General Public License for more details.
#
#    You should have received a copy of the GNU General Public License
#    along with this program; if not, write to the Free Software
#    Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301  USA

__version__ = '0.8.12'
__author__ = 'Carlos Perez, Carlos_Perez@darkoperator.com'

__doc__ = """
DNSRecon http://www.darkoperator.com

 by Carlos Perez, Darkoperator

requires dnspython http://www.dnspython.org/
requires netaddr https://github.com/drkjam/netaddr/
"""

import argparse
import datetime
import json
import os
import re
import sqlite3
import string
import sys
import time

import netaddr
from netaddr import IPNetwork, IPRange  # used unqualified throughout this file

# Manage the change in Python 3 of the name of the Queue library.
try:
    from Queue import Queue
except ImportError:
    from queue import Queue

# Python 3 renamed unicode to str; alias it so the XML export code below,
# which calls unicode(), keeps working on both versions.
try:
    unicode
except NameError:
    unicode = str

from random import Random
from threading import Lock, Thread
from xml.dom import minidom
from xml.etree import ElementTree
from xml.etree.ElementTree import Element

import dns.flags
import dns.message
import dns.query
import dns.rdata
import dns.rdatatype
import dns.resolver
import dns.reversename
import dns.zone
from dns.dnssec import algorithm_to_text

from lib.gooenum import *
from lib.bingenum import *
from lib.whois import *
from lib.dnshelper import DnsHelper
from lib.msf_print import *

# Global variable for the brute force threads.
brtdata = []


# Function Definitions
# -------------------------------------------------------------------------------

# Worker & ThreadPool classes ripped from
# http://code.activestate.com/recipes/577187-python-thread-pool/


class Worker(Thread):
    """Thread executing tasks from a given tasks queue"""
    lck = Lock()

    def __init__(self, tasks):
        Thread.__init__(self)
        self.tasks = tasks
        self.daemon = True
        self.start()
        # Global variable that will hold the results.
        global brtdata

    def run(self):
        found_record = []
        while True:
            (func, args, kargs) = self.tasks.get()
            try:
                found_record = func(*args, **kargs)
                if found_record:
                    Worker.lck.acquire()
                    brtdata.append(found_record)
                    for r in found_record:
                        if type(r).__name__ == "dict":
                            # items() works on both Python 2 and 3;
                            # iteritems() was removed in Python 3.
                            for k, v in r.items():
                                print_status("\t{0}:{1}".format(k, v))
                            print_status()
                        else:
                            print_status("\t {0}".format(" ".join(r)))
                    Worker.lck.release()
            except Exception as e:
                print_debug(e)
            self.tasks.task_done()


class ThreadPool:
    """Pool of threads consuming tasks from a queue"""

    def __init__(self, num_threads):
        self.tasks = Queue(num_threads)
        for _ in range(num_threads):
            Worker(self.tasks)

    def add_task(self, func, *args, **kargs):
        """Add a task to the queue"""
        self.tasks.put((func, args, kargs))

    def wait_completion(self):
        """Wait for completion of all the tasks in the queue"""
        self.tasks.join()

    def count(self):
        """Return the number of tasks in the queue"""
        return self.tasks.qsize()


def exit_brute(pool):
    print_error("You have pressed Ctrl-C. Saving found records.")
    print_status("Waiting for {0} remaining threads to finish.".format(pool.count()))
    pool.wait_completion()
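
# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original script): how the pool above
# is used throughout this file. The resolver `res` and the domain are
# hypothetical; dnsrecon creates the real pool in its main routine from the
# --threads option.
#
#     pool = ThreadPool(10)                         # 10 worker threads
#     pool.add_task(res.get_ip, "www.example.com")  # queue a lookup
#     pool.wait_completion()                        # block until queue drains
#     # Results accumulate in the global brtdata list, appended by Worker.run.
# ---------------------------------------------------------------------------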

def process_range(arg):
    """
    Function will take a string representation of a range for IPv4 or IPv6 in
    CIDR or range format and return a list of IPs.
    """
    try:
        ip_list = None
        range_vals = []
        if re.match(r"\S*\/\S*", arg):
            ip_list = IPNetwork(arg)
        elif re.match(r"\S*\-\S*", arg):
            range_vals.extend(arg.split("-"))
            if len(range_vals) == 2:
                ip_list = IPRange(range_vals[0], range_vals[1])
        else:
            print_error("Range provided is not valid")
            return []
    except:
        print_error("Range provided is not valid")
        return []
    return ip_list


def process_spf_data(res, data):
    """
    This function will take the text info of a TXT or SPF record, extract the
    IPv4 and IPv6 addresses and ranges, recursively process include records,
    and return a list of IP addresses for the records specified in the SPF
    record.
    """
    # Declare lists that will be used in the function.
    ipv4 = []
    ipv6 = []
    includes = []
    ip_list = []

    # Check first if it is an SPF record.
    if not re.search(r"v\=spf", data):
        return

    # Parse the record for IPv4 ranges, individual IPs and include TXT records.
    ipv4.extend(re.findall(r"ip4:(\S*)", "".join(data)))
    ipv6.extend(re.findall(r"ip6:(\S*)", "".join(data)))

    # Create a list of IPNetwork objects.
    for ip in ipv4:
        for i in IPNetwork(ip):
            ip_list.append(i)

    for ip in ipv6:
        for i in IPNetwork(ip):
            ip_list.append(i)

    # Extract and process include values.
    includes.extend(re.findall(r"include:(\S*)", "".join(data)))
    for inc_ranges in includes:
        for spr_rec in res.get_txt(inc_ranges):
            spf_data = process_spf_data(res, spr_rec[2])
            if spf_data is not None:
                ip_list.extend(spf_data)

    # Return a list of IP addresses.
    return [str(ip) for ip in ip_list]


def expand_cidr(cidr_to_expand):
    """
    Function to expand a given CIDR and return an array of IP addresses that
    form the range covered by the CIDR.
    """
    return IPNetwork(cidr_to_expand)


def expand_range(startip, endip):
    """
    Function to expand a given range and return an array of IP addresses that
    form the range.
    """
    return IPRange(startip, endip)


def range2cidr(ip1, ip2):
    """
    Function to return the maximum CIDR given a range of IPs.
    """
    r1 = IPRange(ip1, ip2)
    return str(r1.cidrs()[-1])
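
# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original script): how the netaddr
# range helpers above behave. The addresses are hypothetical examples.
#
#     >>> [str(ip) for ip in process_range("192.168.1.0-192.168.1.3")]
#     ['192.168.1.0', '192.168.1.1', '192.168.1.2', '192.168.1.3']
#     >>> range2cidr("192.168.1.0", "192.168.1.255")
#     '192.168.1.0/24'
#
# IPNetwork and IPRange both iterate and index as IPAddress objects, which is
# why brute_reverse() below can take either one as its ip_list argument.
# ---------------------------------------------------------------------------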
""" global brtdata brtdata = [] # tlds taken from http://data.iana.org/TLD/tlds-alpha-by-domain.txt gtld = ['co', 'com', 'net', 'biz', 'org'] tlds = ['ac', 'ad', 'ae', 'aero', 'af', 'ag', 'ai', 'al', 'am', 'an', 'ao', 'aq', 'ar', 'arpa', 'as', 'asia', 'at', 'au', 'aw', 'ax', 'az', 'ba', 'bb', 'bd', 'be', 'bf', 'bg', 'bh', 'bi', 'biz', 'bj', 'bm', 'bn', 'bo', 'br', 'bs', 'bt', 'bv', 'bw', 'by', 'bz', 'ca', 'cat', 'cc', 'cd', 'cf', 'cg', 'ch', 'ci', 'ck', 'cl', 'cm', 'cn', 'co', 'com', 'coop', 'cr', 'cu', 'cv', 'cx', 'cy', 'cz', 'de', 'dj', 'dk', 'dm', 'do', 'dz', 'ec', 'edu', 'ee', 'eg', 'er', 'es', 'et', 'eu', 'fi', 'fj', 'fk', 'fm', 'fo', 'fr', 'ga', 'gb', 'gd', 'ge', 'gf', 'gg', 'gh', 'gi', 'gl', 'gm', 'gn', 'gov', 'gp', 'gq', 'gr', 'gs', 'gt', 'gu', 'gw', 'gy', 'hk', 'hm', 'hn', 'hr', 'ht', 'hu', 'id', 'ie', 'il', 'im', 'in', 'info', 'int', 'io', 'iq', 'ir', 'is', 'it', 'je', 'jm', 'jo', 'jobs', 'jp', 'ke', 'kg', 'kh', 'ki', 'km', 'kn', 'kp', 'kr', 'kw', 'ky', 'kz', 'la', 'lb', 'lc', 'li', 'lk', 'lr', 'ls', 'lt', 'lu', 'lv', 'ly', 'ma', 'mc', 'md', 'me', 'mg', 'mh', 'mil', 'mk', 'ml', 'mm', 'mn', 'mo', 'mobi', 'mp', 'mq', 'mr', 'ms', 'mt', 'mu', 'museum', 'mv', 'mw', 'mx', 'my', 'mz', 'na', 'name', 'nc', 'ne', 'net', 'nf', 'ng', 'ni', 'nl', 'no', 'np', 'nr', 'nu', 'nz', 'om', 'org', 'pa', 'pe', 'pf', 'pg', 'ph', 'pk', 'pl', 'pm', 'pn', 'pr', 'pro', 'ps', 'pt', 'pw', 'py', 'qa', 're', 'ro', 'rs', 'ru', 'rw', 'sa', 'sb', 'sc', 'sd', 'se', 'sg', 'sh', 'si', 'sj', 'sk', 'sl', 'sm', 'sn', 'so', 'sr', 'st', 'su', 'sv', 'sy', 'sz', 'tc', 'td', 'tel', 'tf', 'tg', 'th', 'tj', 'tk', 'tl', 'tm', 'tn', 'to', 'tp', 'tr', 'travel', 'tt', 'tv', 'tw', 'tz', 'ua', 'ug', 'uk', 'us', 'uy', 'uz', 'va', 'vc', 've', 'vg', 'vi', 'vn', 'vu', 'wf', 'ws', 'ye', 'yt', 'za', 'zm', 'zw'] found_tlds = [] domain_main = domain.split(".")[0] # Let the user know how long it could take print_status("The operation could take up to: {0}".format(time.strftime('%H:%M:%S', time.gmtime(len(tlds) / 4)))) try: for t in tlds: if verbose: print_status("Trying {0}".format(domain_main + "." + t)) pool.add_task(res.get_ip, domain_main + "." + t) for g in gtld: if verbose: print_status("Trying {0}".format(domain_main + "." + g + "." + t)) pool.add_task(res.get_ip, domain_main + "." + g + "." + t) # Wait for threads to finish. pool.wait_completion() except (KeyboardInterrupt): exit_brute(pool) # Process the output of the threads. for rcd_found in brtdata: for rcd in rcd_found: if re.search(r"^A", rcd[0]): found_tlds.extend([{"type": rcd[0], "name": rcd[1], "address": rcd[2]}]) print_good("{0} Records Found".format(len(found_tlds))) return found_tlds def brute_srv(res, domain, verbose=False): """ Brute-force most common SRV records for a given Domain. Returns an Array with records found. 
""" global brtdata brtdata = [] returned_records = [] srvrcd = [ '_gc._tcp.', '_kerberos._tcp.', '_kerberos._udp.', '_ldap._tcp.', '_test._tcp.', '_sips._tcp.', '_sip._udp.', '_sip._tcp.', '_aix._tcp.', '_aix._tcp.', '_finger._tcp.', '_ftp._tcp.', '_http._tcp.', '_nntp._tcp.', '_telnet._tcp.', '_whois._tcp.', '_h323cs._tcp.', '_h323cs._udp.', '_h323be._tcp.', '_h323be._udp.', '_h323ls._tcp.', '_https._tcp.', '_h323ls._udp.', '_sipinternal._tcp.', '_sipinternaltls._tcp.', '_sip._tls.', '_sipfederationtls._tcp.', '_jabber._tcp.', '_xmpp-server._tcp.', '_xmpp-client._tcp.', '_imap.tcp.', '_certificates._tcp.', '_crls._tcp.', '_pgpkeys._tcp.', '_pgprevokations._tcp.', '_cmp._tcp.', '_svcp._tcp.', '_crl._tcp.', '_ocsp._tcp.', '_PKIXREP._tcp.', '_smtp._tcp.', '_hkp._tcp.', '_hkps._tcp.', '_jabber._udp.', '_xmpp-server._udp.', '_xmpp-client._udp.', '_jabber-client._tcp.', '_jabber-client._udp.', '_kerberos.tcp.dc._msdcs.', '_ldap._tcp.ForestDNSZones.', '_ldap._tcp.dc._msdcs.', '_ldap._tcp.pdc._msdcs.', '_ldap._tcp.gc._msdcs.', '_kerberos._tcp.dc._msdcs.', '_kpasswd._tcp.', '_kpasswd._udp.', '_imap._tcp.', '_imaps._tcp.', '_submission._tcp.', '_pop3._tcp.', '_pop3s._tcp.', '_caldav._tcp.', '_caldavs._tcp.', '_carddav._tcp.', '_carddavs._tcp.', '_x-puppet._tcp.', '_x-puppet-ca._tcp.'] try: for srvtype in srvrcd: if verbose: print_status("Trying {0}".format(res.get_srv, srvtype + domain)) pool.add_task(res.get_srv, srvtype + domain) # Wait for threads to finish. pool.wait_completion() except (KeyboardInterrupt): exit_brute(pool) # Make sure we clear the variable if len(brtdata) > 0: for rcd_found in brtdata: for rcd in rcd_found: returned_records.extend([{"type": rcd[0], "name": rcd[1], "target": rcd[2], "address": rcd[3], "port": rcd[4]}]) else: print_error("No SRV Records Found for {0}".format(domain)) print_good("{0} Records Found".format(len(returned_records))) return returned_records def brute_reverse(res, ip_list, verbose=False): """ Reverse look-up brute force for given CIDR example 192.168.1.1/24. Returns an Array of found records. """ global brtdata brtdata = [] returned_records = [] print_status("Performing Reverse Lookup from {0} to {1}".format(ip_list[0], ip_list[-1])) # Resolve each IP in a separate thread. try: ip_range = xrange(len(ip_list) - 1) except NameError: ip_range = range(len(ip_list) - 1) try: for x in ip_range: ipaddress = str(ip_list[x]) if verbose: print_status("Trying {0}".format(ipaddress)) pool.add_task(res.get_ptr, ipaddress) # Wait for threads to finish. pool.wait_completion() except (KeyboardInterrupt): exit_brute(pool) for rcd_found in brtdata: for rcd in rcd_found: returned_records.extend([{'type': rcd[0], "name": rcd[1], 'address': rcd[2]}]) print_good("{0} Records Found".format(len(returned_records))) return returned_records def brute_domain(res, dict, dom, filter=None, verbose=False, ignore_wildcard=False): """ Main Function for domain brute forcing """ global brtdata brtdata = [] wildcard_ip = None found_hosts = [] continue_brt = "y" # Check if wildcard resolution is enabled wildcard_ip = check_wildcard(res, dom) if wildcard_ip and not ignore_wildcard: print_status("Do you wish to continue? y/n") continue_brt = str(sys.stdin.readline()[:-1]) if re.search(r"y", continue_brt, re.I): # Check if Dictionary file exists if os.path.isfile(dict): with open(dict) as f: # Thread brute-force. try: for line in f: if verbose: print_status("Trying {0}".format(line.strip() + '.' + dom.strip())) target = line.strip() + "." 

def brute_domain(res, dict, dom, filter=None, verbose=False, ignore_wildcard=False):
    """
    Main function for domain brute forcing.
    """
    global brtdata
    brtdata = []

    wildcard_ip = None
    found_hosts = []
    continue_brt = "y"

    # Check if wildcard resolution is enabled.
    wildcard_ip = check_wildcard(res, dom)
    if wildcard_ip and not ignore_wildcard:
        print_status("Do you wish to continue? y/n")
        continue_brt = str(sys.stdin.readline()[:-1])

    if re.search(r"y", continue_brt, re.I):
        # Check if the dictionary file exists.
        if os.path.isfile(dict):
            with open(dict) as f:
                # Thread brute-force.
                try:
                    for line in f:
                        if verbose:
                            print_status("Trying {0}".format(line.strip() + "." + dom.strip()))
                        target = line.strip() + "." + dom.strip()
                        pool.add_task(res.get_ip, target)

                    # Wait for threads to finish.
                    pool.wait_completion()
                except (KeyboardInterrupt):
                    exit_brute(pool)

        # Process the output of the threads.
        for rcd_found in brtdata:
            for rcd in rcd_found:
                if re.search(r"^A", rcd[0]):
                    # Filter records if filtering was enabled.
                    if filter:
                        if not wildcard_ip == rcd[2]:
                            found_hosts.extend([{"type": rcd[0], "name": rcd[1], "address": rcd[2]}])
                    else:
                        found_hosts.extend([{"type": rcd[0], "name": rcd[1], "address": rcd[2]}])
                elif re.search(r"^CNAME", rcd[0]):
                    found_hosts.extend([{"type": rcd[0], "name": rcd[1], "target": rcd[2]}])

        # Clear the global variable.
        brtdata = []

    print_good("{0} Records Found".format(len(found_hosts)))
    return found_hosts


def in_cache(dict_file, ns):
    """
    Function for cache snooping. It checks whether records of a specific type
    for a given domain are present in the cache of a given NS server.
    """
    found_records = []
    with open(dict_file) as f:
        for zone in f:
            dom_to_query = str.strip(zone)
            query = dns.message.make_query(dom_to_query, dns.rdatatype.A, dns.rdataclass.IN)
            # make_query sets the Recursion Desired flag by default;
            # XOR clears it so the server answers only from its cache.
            query.flags ^= dns.flags.RD
            answer = dns.query.udp(query, ns)
            if len(answer.answer) > 0:
                for an in answer.answer:
                    for rcd in an:
                        # A record
                        if rcd.rdtype == 1:
                            print_status("\tName: {0} TTL: {1} Address: {2} Type: A".format(an.name, an.ttl, rcd.address))
                            found_records.extend([{"type": "A", "name": an.name,
                                                   "address": rcd.address, "ttl": an.ttl}])
                        # CNAME record
                        elif rcd.rdtype == 5:
                            print_status("\tName: {0} TTL: {1} Target: {2} Type: CNAME".format(an.name, an.ttl, rcd.target))
                            found_records.extend([{"type": "CNAME", "name": an.name,
                                                   "target": rcd.target, "ttl": an.ttl}])
                        else:
                            print_status()
    return found_records


def se_result_process(res, found_hosts):
    """
    This function processes the results returned from a search engine and does
    an A and AAAA query for the IP of the found host. Prints and returns a
    dictionary with all the results found.
    """
    returned_records = []
    if found_hosts is None:
        return None
    for sd in found_hosts:
        for sdip in res.get_ip(sd):
            if re.search(r"^A|CNAME", sdip[0]):
                print_status("\t {0} {1} {2}".format(sdip[0], sdip[1], sdip[2]))
                if re.search(r"^A", sdip[0]):
                    returned_records.extend([{"type": sdip[0], "name": sdip[1], "address": sdip[2]}])
                else:
                    returned_records.extend([{"type": sdip[0], "name": sdip[1], "target": sdip[2]}])

    print_good("{0} Records Found".format(len(returned_records)))
    return returned_records
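
# ---------------------------------------------------------------------------
# Illustrative sketch (assumed name server 8.8.8.8, hypothetical domain): the
# heart of the cache-snooping check in in_cache() above is a query with the
# Recursion Desired flag cleared, so a positive answer can only come from the
# server's cache:
#
#     query = dns.message.make_query("example.com", dns.rdatatype.A, dns.rdataclass.IN)
#     query.flags ^= dns.flags.RD            # clear RD: do not recurse for us
#     answer = dns.query.udp(query, "8.8.8.8")
#     cached = len(answer.answer) > 0
# ---------------------------------------------------------------------------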

def get_whois_nets_iplist(ip_list):
    """
    This function will perform whois queries against a list of IPs, extract
    the net ranges and, if available, the organization of each, and remove any
    duplicate entries.
    """
    seen = {}
    idfun = repr
    found_nets = []
    for ip in ip_list:
        if ip != "no_ip":
            # Find the appropriate Whois server for the IP.
            whois_server = get_whois(ip)
            # If we found a Whois server, get the whois data and process it.
            if whois_server:
                whois_data = whois(ip, whois_server)
                arin_style = re.search("NetRange", whois_data)
                ripe_apnic_style = re.search("netname", whois_data)
                if (arin_style or ripe_apnic_style):
                    net = get_whois_nets(whois_data)
                    if net:
                        for network in net:
                            org = get_whois_orgname(whois_data)
                            found_nets.append({"start": network[0], "end": network[1], "orgname": "".join(org)})
                else:
                    for line in whois_data.splitlines():
                        recordentrie = re.match(r"^(.*)\s\S*-\w*\s\S*\s(\S*\s-\s\S*)", line)
                        if recordentrie:
                            org = recordentrie.group(1)
                            net = get_whois_nets(recordentrie.group(2))
                            for network in net:
                                found_nets.append({"start": network[0], "end": network[1], "orgname": "".join(org)})
    # Remove duplicates.
    return [seen.setdefault(idfun(e), e) for e in found_nets if idfun(e) not in seen]


def whois_ips(res, ip_list):
    """
    This function will process the results of the whois lookups, present the
    user with the list of net ranges found, and ask the user if he wishes to
    perform a reverse lookup on any of the ranges or all of them.
    """
    answer = ""
    found_records = []
    print_status("Performing Whois lookup against records found.")
    list = get_whois_nets_iplist(unique(ip_list))
    if len(list) > 0:
        print_status("The following IP Ranges were found:")
        for i in range(len(list)):
            print_status(
                "\t {0} {1}-{2} {3}".format(str(i) + ")", list[i]["start"], list[i]["end"], list[i]["orgname"]))
        print_status("What range do you wish to do a Reverse Lookup for?")
        print_status("number, comma separated list, a for all or n for none")
        val = sys.stdin.readline()[:-1]
        answer = str(val).split(",")

        if "a" in answer:
            for i in range(len(list)):
                print_status("Performing Reverse Lookup of range {0}-{1}".format(list[i]['start'], list[i]['end']))
                found_records.append(brute_reverse(res, expand_range(list[i]['start'], list[i]['end'])))
        elif "n" in answer:
            print_status("No Reverse Lookups will be performed.")
            pass
        else:
            for a in answer:
                net_selected = list[int(a)]
                print_status(net_selected['orgname'])
                print_status(
                    "Performing Reverse Lookup of range {0}-{1}".format(net_selected['start'], net_selected['end']))
                found_records.append(brute_reverse(res, expand_range(net_selected['start'], net_selected['end'])))
    else:
        print_error("No IP Ranges were found in the Whois query results")

    return found_records


def prettify(elem):
    """
    Return a pretty-printed XML string for the Element.
    """
    rough_string = ElementTree.tostring(elem, 'utf-8')
    reparsed = minidom.parseString(rough_string)
    return reparsed.toprettyxml(indent="    ")
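
# ---------------------------------------------------------------------------
# Illustrative sketch (invented record; exact whitespace and attribute order
# may differ by Python version): roughly what dns_record_from_dict() below
# produces for a single A record:
#
#     <?xml version="1.0" ?>
#     <records>
#         <record address="93.184.216.34" name="example.com" type="A"/>
#         <scaninfo arguments="dnsrecon.py -d example.com" time="..."/>
#         <domain domain_name="example.com"/>
#     </records>
# ---------------------------------------------------------------------------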
""" xml_doc = Element("records") for r in record_dict_list: elem = Element("record") if type(r) is not str: try: for k, v in r.items(): try: k = unicode(str(k)) v = unicode(str(v)) elem.attrib[k] = v except: print_error("Could not convert key or value to unicode: '{0} = {1}'".format((repr(k), repr(v)))) print_error("In element: {0}".format(repr(elem.attrib))) continue xml_doc.append(elem) except AttributeError: continue xml_doc.append(elem) except AttributeError: continue scanelem = Element("scaninfo") scanelem.attrib["arguments"] = scan_info[0] scanelem.attrib["time"] = scan_info[1] xml_doc.append(scanelem) if domain is not None: domelem = Element("domain") domelem.attrib["domain_name"] = domain xml_doc.append(domelem) return prettify(xml_doc) def create_db(db): """ Function will create the specified database if not present and it will create the table needed for storing the data returned by the modules. """ # Connect to the DB con = sqlite3.connect(db) # Create SQL Queries to be used in the script make_table = """CREATE TABLE data ( serial integer Primary Key Autoincrement, type TEXT(8), name TEXT(32), address TEXT(32), target TEXT(32), port TEXT(8), text TEXT(256), zt_dns TEXT(32) )""" # Set the cursor for connection con.isolation_level = None cur = con.cursor() # Connect and create table cur.execute("SELECT name FROM sqlite_master WHERE type='table' AND name='data';") if cur.fetchone() is None: cur.execute(make_table) con.commit() else: pass def make_csv(data): csv_data = "Type,Name,Address,Target,Port,String\n" for n in data: # make sure that we are working with a dictionary. if isinstance(n, dict): if re.search(r"PTR|^[A]$|AAAA", n["type"]): csv_data += n["type"] + "," + n["name"] + "," + n["address"] + "\n" elif re.search(r"NS$", n["type"]): csv_data += n["type"] + "," + n["target"] + "," + n["address"] + "\n" elif re.search(r"SOA", n["type"]): csv_data += n["type"] + "," + n["mname"] + "," + n["address"] + "\n" elif re.search(r"MX", n["type"]): csv_data += n["type"] + "," + n["exchange"] + "," + n["address"] + "\n" elif re.search(r"SPF", n["type"]): if "zone_server" in n: csv_data += n["type"] + ",,,,,\'" + n["strings"] + "\'\n" else: csv_data += n["type"] + ",,,,,\'" + n["strings"] + "\'\n" elif re.search(r"TXT", n["type"]): if "zone_server" in n: csv_data += n["type"] + ",,,,,\'" + n["strings"] + "\'\n" else: csv_data += n["type"] + "," + n["name"] + ",,,,\'" + n["strings"] + "\'\n" elif re.search(r"SRV", n["type"]): csv_data += n["type"] + "," + n["name"] + "," + n["address"] + "," + n["target"] + "," + n["port"] + "\n" elif re.search(r"CNAME", n["type"]): csv_data += n["type"] + "," + n["name"] + ",," + n["target"] + ",\n" else: # Handle not common records t = n["type"] del n["type"] record_data = "".join(["%s =%s," % (key, value) for key, value in n.items()]) records = [t, record_data] csv_data + records[0] + ",,,,," + records[1] + "\n" return csv_data def write_json(jsonfile, data, scan_info): scaninfo = {"type": "ScanInfo", "arguments": scan_info[0], "date": scan_info[1]} data.insert(0, scaninfo) json_data = json.dumps(data, sort_keys=True, indent=4, separators=(',', ': ')) write_to_file(json_data, jsonfile) def write_db(db, data): """ Function to write DNS Records SOA, PTR, NS, A, AAAA, MX, TXT, SPF and SRV to DB. 
""" con = sqlite3.connect(db) # Set the cursor for connection con.isolation_level = None cur = con.cursor() # Normalize the dictionary data for n in data: if re.match(r'PTR|^[A]$|AAAA', n['type']): query = 'insert into data( type, name, address ) ' + \ 'values( "%(type)s", "%(name)s","%(address)s" )' % n elif re.match(r'NS$', n['type']): query = 'insert into data( type, name, address ) ' + \ 'values( "%(type)s", "%(target)s", "%(address)s" )' % n elif re.match(r'SOA', n['type']): query = 'insert into data( type, name, address ) ' + \ 'values( "%(type)s", "%(mname)s", "%(address)s" )' % n elif re.match(r'MX', n['type']): query = 'insert into data( type, name, address ) ' + \ 'values( "%(type)s", "%(exchange)s", "%(address)s" )' % n elif re.match(r'TXT', n['type']): query = 'insert into data( type, text) ' + \ 'values( "%(type)s","%(strings)s" )' % n elif re.match(r'SPF', n['type']): query = 'insert into data( type, text) ' + \ 'values( "%(type)s","%(text)s" )' % n elif re.match(r'SPF', n['type']): query = 'insert into data( type, text) ' + \ 'values( "%(type)s","%(text)s" )' % n elif re.match(r'SRV', n['type']): query = 'insert into data( type, name, target, address, port ) ' + \ 'values( "%(type)s", "%(name)s" , "%(target)s", "%(address)s" ,"%(port)s" )' % n elif re.match(r'CNAME', n['type']): query = 'insert into data( type, name, target ) ' + \ 'values( "%(type)s", "%(name)s" , "%(target)s" )' % n else: # Handle not common records t = n['type'] del n['type'] record_data = "".join(['%s=%s,' % (key, value) for key, value in n.items()]) records = [t, record_data] query = "insert into data(type,text) values ('" + \ records[0] + "','" + records[1] + "')" # Execute Query and commit cur.execute(query) con.commit() def get_nsec_type(domain, res): target = "0." + domain answer = get_a_answer(target, res._res.nameservers[0], res._res.timeout) for a in answer.authority: if a.rdtype == 50: return "NSEC3" elif a.rdtype == 47: return "NSEC" def dns_sec_check(domain, res): """ Check if a zone is configured for DNSSEC and if so if NSEC or NSEC3 is used. """ try: answer = res._res.query(domain, 'DNSKEY') print_status("DNSSEC is configured for {0}".format(domain)) nsectype = get_nsec_type(domain, res) print_status("DNSKEYs:") for rdata in answer: if rdata.flags == 256: key_type = "ZSK" if rdata.flags == 257: key_type = "KSk" print_status("\t{0} {1} {2} {3}".format(nsectype, key_type, algorithm_to_text(rdata.algorithm), dns.rdata._hexify(rdata.key))) except dns.resolver.NXDOMAIN: print_error("Could not resolve domain: {0}".format(domain)) sys.exit(1) except dns.exception.Timeout: print_error("A timeout error occurred please make sure you can reach the target DNS Servers") print_error("directly and requests are not being filtered. Increase the timeout from {0} second".format( res._res.timeout)) print_error("to a higher number with --lifetime