checkbot-1.80/TODO

This file is no longer being updated, as I have moved my project
management into ShadowPlan. I hope to have an HTML version of the
current state online soon. -- Hans 30-Apr-2001
* Handle the javascript: scheme, possibly by relying on an external
javascript interpreter. This should be optional functionality, as
this may introduce difficult-to-satisfy dependencies.
checkbot-1.80/Makefile.PL

# This -*- perl -*- script makes the Makefile
# Based on the Makefile.PL in the libwww-perl distribution
# $Id: Makefile.PL 231 2007-02-26 15:51:46Z graaff $
require 5.005;
use strict;
use ExtUtils::MakeMaker;
my $missing_modules = 0;
print "\nChecking for LWP...........";
eval {
require LWP;
LWP->VERSION(5.803);
};
if ($@) {
print " failed\n";
$missing_modules++;
print <VERSION(1.10);
};
if ($@) {
print " failed\n";
$missing_modules++;
print <VERSION(3.33);
};
if ($@) {
print " failed\n";
$missing_modules++;
print <VERSION('2.00');
};
if ($@) {
print " failed\n";
$missing_modules++;
print <VERSION('2.58');
};
if ($@) {
print " failed\n";
$missing_modules++;
print < to find a CPAN site near you.
EOT
print "\n";
# Write the Makefile
WriteMakefile(
NAME => "checkbot",
EXE_FILES => [ 'checkbot' ],
MAN3PODS => {},
PM => {},
VERSION_FROM => q(checkbot),
dist => {COMPRESS => 'gzip',
SUFFIX => 'gz' },
);
checkbot-1.80/checkbot.css

body {
font-family: sans-serif;
margin-left : 50px;
margin-right : 50px;
}
hr {
color: white;
background-color: black;
width: 1px;
position: absolute;
}
h1 {
padding-bottom: 10px;
color: #00008B;
background-color: transparent;
}
h2 {
color: #8B0000;
background-color: #FFEBCD;
margin-top: 20px;
padding: 10px;
border: thin Black solid;
}
h4 {
border-width: thin;
border-style: solid;
padding: 10px;
width: 50%;
color: Black;
background-color: #FFEBCD;
}
em {
text-decoration : underline;
font-style : normal;
}
dt {
color: Black;
background-color: #F0FFFF;
}
dd {
border-left: thick Red solid;
padding-left: 10px;
padding-top: 5px;
margin-bottom: 10px;
}
table {
color: Black;
background-color: Gray;
}
th {
color: Black;
background: #DCDCDC;
padding : 5px;
}
td {
color: Black;
background-color: White;
padding-right: 10px;
padding-left: 10px;
}
a:link {
text-decoration: none;
color: #007676;
background-color: White;
}
a:visited {
text-decoration: none;
color: #006060;
background-color: White;
}
a:hover {
text-decoration: none;
color: green;
background-color: #F5F5DC;
}
img {
border: none;
}
checkbot-1.80/t/test.t

#!/usr/bin/env perl -w
use strict;
use Test;
BEGIN { plan tests => 1 }
# Testing the executable is hard, but I suppose we can at least try to
# run the to be installed Checkbot to see if things work out ok.
ok(not system('blib/script/checkbot'));
exit;
checkbot-1.80/checkbot

#!/usr/bin/perl -w
#
# checkbot - A perl5 script to check validity of links in www document trees
#
# Hans de Graaff <hans@degraaff.org>, 1994-2005.
# Based on Dimitri Tischenko, Delft University of Technology, 1994
# Based on the testlinks script by Roy Fielding
# With contributions from Bruce Speyer
#
# This application is free software; you can redistribute it and/or
# modify it under the same terms as Perl itself.
#
# Info-URL: http://degraaff.org/checkbot/
#
# $Id: checkbot 238 2008-10-15 12:55:00Z graaff $
# (Log information can be found at the end of the script)
require 5.004;
use strict;
require LWP;
use File::Basename;
BEGIN {
eval "use Time::Duration qw(duration)";
$main::useduration = ($@ ? 0 : 1);
}
# Version information. Note: the 'my' is deliberately on its own line,
# so that MakeMaker's VERSION_FROM can evaluate the $VERSION line by itself.
my
$VERSION = '1.80';
=head1 NAME
Checkbot - WWW Link Verifier
=head1 SYNOPSIS
checkbot [B<--cookies>] [B<--debug>] [B<--file> file name] [B<--help>]
[B<--mailto> email addresses] [B<--noproxy> list of domains]
[B<--verbose>]
[B<--url> start URL]
[B<--match> match string] [B<--exclude> exclude string]
[B<--proxy> proxy URL] [B<--internal-only>]
[B<--ignore> ignore string]
[B<--filter> substitution regular expression]
[B<--style> style file URL]
[B<--note> note] [B<--sleep> seconds] [B<--timeout> timeout]
[B<--interval> seconds] [B<--dontwarn> HTTP response codes]
[B<--enable-virtual>]
[B<--language> language code]
[B<--suppress> suppression file]
[start URLs]
=head1 DESCRIPTION
Checkbot verifies the links in a specific portion of the World Wide
Web. It creates HTML pages with diagnostics.
Checkbot uses LWP to find URLs on pages and to check them. It supports
the same schemes as LWP does, and finds the same links that
HTML::LinkExtor will find.
Checkbot considers links to be either 'internal' or
'external'. Internal links are links within the web space that needs
to be checked. If an internal link points to a web document this
document is retrieved, and its links are extracted and
processed. External links are only checked to be working. Checkbot
checks links as it finds them, so internal and external links are
checked at the same time, even though they are treated differently.
Options for Checkbot are:
=over 4
=item --cookies
Accept cookies from the server and offer them again at later
requests. This may be useful for servers that use cookies to handle
sessions. By default Checkbot does not accept any cookies.
=item --debug
Enable debugging mode. Not really supported anymore, but it will keep
some files around that otherwise would be deleted.
=item --file <file name>

Use the file I<file name> as the basis for the summary file names. The
summary page will get the I<file name> given, and the server pages are
based on the I<file name> without the .html extension. For example,
setting this option to C<index.html> will create a summary page called
index.html and server pages called index-server1.html and
index-server2.html.

The default value for this option is C<checkbot.html>.
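For example (hypothetical site):

    checkbot --file index.html http://www.example.com/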
=item --help
Shows brief help message on the standard output.
=item --mailto <email address>[,<email address>]

Send mail to each I<email address> when Checkbot is done checking. You
can give more than one address separated by commas. The notification
email includes a small summary of the results. As of Checkbot 1.76
email is only sent if problems have been found during the Checkbot
run.
=item --noproxy <list of domains>

Do not proxy requests to the given domains. The list of domains must
be a comma-separated list. For example, to avoid using the proxy for
localhost and someserver.xyz, you can use C<--noproxy
localhost,someserver.xyz>.
=item --verbose
Show verbose output while running. Includes all links checked, results
from the checks, etc.
=item --url <start URL>
Set the start URL. Checkbot starts checking at this URL, and then
recursively checks all links found on this page. The start URL takes
precedence over additional URLs specified on the command line.
If no scheme is specified for the URL, the file protocol is assumed.
=item --match <match string>

This option selects which pages Checkbot considers local. If the
I<match string> is contained within the URL, then Checkbot considers
the page local, retrieves it, and will check all the links contained
on it. Otherwise the page is considered external and it is only
checked with a HEAD request.

If no explicit I<match string> is given, the start URLs (See option
C<--url>) will be used as a match string instead. In this case the
last page name, if any, will be trimmed. For example, a start URL like
C<http://www.example.com/index.html> will result in a default
I<match string> of C<http://www.example.com/>.

The I<match string> can be a perl regular expression. For example, to
check the main server page and all HTML pages directly underneath it,
but not the HTML pages in the subdirectories of the server, the
I<match string> would be C<www.example.com/($|[^/]+\.html)>.
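A complete invocation might then look like this (www.example.com is a
placeholder):

    checkbot --match '^http://www\.example\.com/' http://www.example.com/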
=item --exclude <exclude string>

URLs matching the I<exclude string> are considered to be external,
even if they happen to match the I<match string> (See option
C<--match>). URLs matching the --exclude string are still being
checked and will be reported if problems are found, but they will not
be checked for further links into the site.

The I<exclude string> can be a perl regular expression. For example,
to consider all URLs with a query string external, use C<[=\?]>. This
can be useful when a URL with a query string unlocks the path to a
huge database which will be checked.
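On the command line the query-string example above would be passed
like this (hypothetical site):

    checkbot --exclude '[=\?]' http://www.example.com/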
=item --filter <substitution regular expression>

This option defines a I<filter>, which is a perl regular
expression. This filter is run on each URL found, thus rewriting the
URL before it enters the queue to be checked. It can be used to remove
elements from a URL. This option can be useful when symbolic links
point to the same directory, or when a content management system adds
session IDs to URLs.

For example C</old/new/> would replace occurrences of 'old' with 'new'
in each URL.
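As an illustration, a session ID parameter (the parameter name here is
made up) could be stripped like this:

    checkbot --filter '/;jsessionid=\w+//' http://www.example.com/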
=item --ignore <ignore string>

URLs matching the I<ignore string> are not checked at all, they are
completely ignored by Checkbot. This can be useful to ignore known
problem links, or to ignore links leading into databases. The
I<ignore string> is matched after the I<filter> has been applied.

The I<ignore string> can be a perl regular expression.

For example C<www.server.com/(one|two)> would match all URLs starting
with either www.server.com/one or www.server.com/two.
=item --proxy <proxy URL>
This attribute specifies the URL of a proxy server. Only the HTTP and
FTP requests will be sent to that proxy server.
=item --internal-only
Skip the checking of external links at the end of the Checkbot
run. Only matching links are checked. Note that some redirections may
still cause external links to be checked.
=item --note <note>

The I<note> is included verbatim in the mail message (See option
C<--mailto>). This can be useful to include the URL of the summary HTML page
for easy reference, for instance.
Only meaningful in combination with the C<--mailto> option.
=item --sleep <seconds>

Number of I<seconds> to sleep in between requests. Default is 0
seconds, i.e. do not sleep at all between requests. Setting this
option can be useful to keep the load on the web server down while
running Checkbot. This option can also be set to a fractional number,
i.e. a value of 0.1 will sleep one tenth of a second between requests.
=item --timeout <timeout>
Default timeout for the requests, specified in seconds. The default is
2 minutes.
=item --interval <seconds>
The maximum interval between updates of the results web pages in
seconds. Default is 3 hours (10800 seconds). Checkbot will start the
interval at one minute, and gradually extend it towards the maximum
interval.
=item --style <style file URL>
When this option is used, Checkbot embeds this URL as a link to a
style file on each page it writes. This makes it easy to customize the
layout of pages generated by Checkbot.
=item --dontwarn <HTTP response codes>
Do not include warnings on the result pages for those HTTP response
codes which match the regular expression. For instance, --dontwarn
"(301|404)" would not include 301 and 404 response codes.
Checkbot uses the response codes generated by the server, even if this
response code is not defined in RFC 2616 (HTTP/1.1). In addition to
the normal HTTP response code, Checkbot defines a few response codes
for situations which are not technically a problem, but which cause
problems in many cases anyway. These codes are:
901 Host name expected but not found
In this case the URL scheme expects a host name, but none was found
in the URL. This usually indicates a mistake in the URL. An
exception is that this check is not applied to news: URLs.
902 Unqualified host name found
In this case the host name does not contain the domain part.
This usually means that the pages work fine when viewed within
the original domain, but not when viewed from outside it.
903 Double slash in URL path
The URL has a double slash in it. This is legal, but some web
servers cannot handle it very well and may cause Checkbot to
run away. See also the comments below.
904 Unknown scheme in URL
The URL starts with a scheme that Checkbot does not know
about. This is often caused by mistyping the scheme of the URL,
but the scheme can also be a legal one. In that case please let
me know so that it can be added to Checkbot.
=item --enable-virtual
This option enables dealing with virtual servers. Checkbot then
assumes that all hostnames for internal servers are unique, even
though their IP addresses may be the same. Normally Checkbot uses the
IP address to distinguish servers. This has the advantage that if a
server has two names (e.g. www and bamboozle) its pages only get
checked once. When you want to check multiple virtual servers this
causes problems, which this feature works around by using the hostname
to distinguish the server.
=item --language <language code>
The argument for this option is a two-letter language code. Checkbot
will use language negotiation to request files in that language. The
default is to request English language (language code 'en').
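For example, to request pages in German instead (hypothetical site):

    checkbot --language de http://www.example.com/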
=item --suppress <suppression file>
The argument for this option is a file which contains combinations of
error codes and URLs for which to suppress warnings. This can be used
to avoid reporting of known and unfixable URL errors or warnings.
The format of the suppression file is a simple whitespace delimited
format, first listing the error code followed by the URL. Each error
code and URL combination is listed on a new line. Comments can be
added to the file by starting the line with a C<#> character.
# 301 Moved Permanently
301 http://www.w3.org/P3P
# 403 Forbidden
403 http://www.herring.com/
For further flexibility a regular expression can be used instead of a
normal URL. The regular expression must be enclosed with forward
slashes. For example, to suppress all 403 errors on wikipedia:
403 /http:\/\/wikipedia.org\/.*/
=back
Deprecated options which will disappear in a future release:
=over
=item --allow-simple-hosts (deprecated)
This option turns off warnings about URLs which contain unqualified
host names. This is useful for intranet sites which often use just a
simple host name or even C<localhost> in their links.
Use of this option is deprecated. Please use the --dontwarn mechanism
for error 902 instead.
=back
=head1 HINTS AND TIPS
=over
=item Problems with checking FTP links
Some users may experience consistent problems with checking FTP
links. In these cases it may be useful to instruct Net::FTP to use
passive FTP mode to check files. This can be done by setting the
environment variable FTP_PASSIVE to 1. For example, using the bash
shell: C<export FTP_PASSIVE=1>. See the Net::FTP documentation
for more details.
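For example, a complete run with passive FTP enabled could be started
like this (hypothetical site):

    FTP_PASSIVE=1 checkbot http://www.example.com/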
=item Run-away Checkbot
In some cases Checkbot literally takes forever to finish. There are two
common causes for this problem.
First, there might be a database application as part of the web site
which generates a new page based on links on another page. Since
Checkbot tries to travel through all links this will create an
infinite number of pages. This kind of run-away effect is usually predictable. It can be avoided by using the --exclude option.
Second, a server configuration problem can cause a loop in generating
URLs for pages that really do not exist. This will result in URLs of
the form http://some.server/images/images/images/logo.png, with ever
more 'images' included. Checkbot cannot check for this because the
server should have indicated that the requested pages do not
exist. There is no easy way to solve this other than fixing the
offending web server or the broken links.
=item Problems with https:// links
The error message
Can't locate object method "new" via package "LWP::Protocol::https::Socket"
usually means that the current installation of LWP does not support
checking of SSL links (i.e. links starting with https://). This
problem can be solved by installing the Crypt::SSLeay module.
=back
=head1 EXAMPLES
The most simple use of Checkbot is to check a set of pages on a
server. To check my checkbot pages I would use:
checkbot http://degraaff.org/checkbot/
Checkbot runs can take some time so Checkbot can send a notification
mail when the run is done:
checkbot --mailto hans@degraaff.org http://degraaff.org/checkbot/
It is possible to check a set of local files without using a web
server. This only works for static files but may be useful in some
cases.
checkbot file:///var/www/documents/
=head1 PREREQUISITES
This script uses the C<LWP> modules.
=head1 COREQUISITES
This script can send mail when C<Mail::Send> is present.
=head1 AUTHOR
Hans de Graaff <hans@degraaff.org>
=pod OSNAMES
any
=cut
# Declare some global variables, avoids ugly use of main:: all around
my %checkbot_errors = ('901' => 'Host name expected but not found',
'902' => 'Unqualified host name in URL',
'903' => 'URL contains double slash in URL',
'904' => 'Unknown scheme in URL',
);
my @starturls = ();
# Two hashes to store the response to a URL, and all the parents of the URL
my %url_error = ();
my %url_parent = ();
# Hash for storing the title of a URL for use in reports. TODO: remove
# this and store title as part of queue.
my %url_title = ();
# Hash for suppressions, which are defined as a combination of code and URL
my %suppression = ();
# Hash to store statistics on link checking
my %stats = ('todo' => 0,
'link' => 0,
'problem' => 0 );
# Options hash (to be filled by GetOptions)
my %options = ();
# Keep track of start time so that we can use it in reports
my $start_time = time();
# If on a Mac we should ask for the arguments through some MacPerl stuff
if ($^O eq 'MacOS') {
$main::mac_answer = eval "MacPerl::Ask('Enter Command-Line Options')";
push(@ARGV, split(' ', $main::mac_answer));
}
# Prepare
check_options();
init_modules();
init_globals();
init_suppression();
# Start actual application
check_links();
# Finish up
create_page(1);
send_mail() if defined $main::opt_mailto and $stats{problem} > 0;
exit 0;
# output prints stuff on stderr if --verbose, and takes care of proper
# indentation
sub output {
my ($line, $level) = @_;
return unless $main::opt_verbose;
chomp $line;
my $indent = '';
if (defined $level) {
while ($level-- > 0) {
$indent .= ' ';
}
}
print STDERR $indent, $line, "\n";
}
### Initialization and setup routines
sub check_options {
# Get command-line arguments
use Getopt::Long;
my $result = GetOptions(qw(cookies debug help noproxy=s verbose url=s match=s exclude|x=s file=s filter=s style=s ignore|z=s mailto|M=s note|N=s proxy=s internal-only sleep=f timeout=i interval=i dontwarn=s enable-virtual language=s allow-simple-hosts suppress=s));
# Handle arguments, some are mandatory, some have defaults
&print_help if (defined($main::opt_help) && $main::opt_help)
|| (!$main::opt_url && $#ARGV == -1);
$main::opt_timeout = 120 unless defined($main::opt_timeout) && length($main::opt_timeout);
$main::opt_verbose = 0 unless $main::opt_verbose;
$main::opt_sleep = 0 unless defined($main::opt_sleep) && length($main::opt_sleep);
$main::opt_interval = 10800 unless defined $main::opt_interval and length $main::opt_interval;
$main::opt_dontwarn = "xxx" unless defined $main::opt_dontwarn and length $main::opt_dontwarn;
$main::opt_enable_virtual = 0 unless defined $main::opt_enable_virtual;
# Set the default language and make sure it is a two letter, lowercase code
$main::opt_language = 'en' unless defined $main::opt_language;
$main::opt_language = lc(substr($main::opt_language, 0, 2));
$main::opt_language =~ tr/a-z//cd;
if ($main::opt_language !~ /[a-z][a-z]/) {
warn "Argument --language $main::opt_language is not a valid language code\nUsing English as a default.\n";
$main::opt_language = 'en';
}
$main::opt_allow_simple_hosts = 0
unless $main::opt_allow_simple_hosts;
output "--allow-simple-hosts is deprecated, please use the --dontwarn mechanism", 0 if $main::opt_allow_simple_hosts;
# The default for opt_match will be set later, because we might want
# to muck with opt_url first.
# Display messages about the options
output "*** Starting Checkbot $VERSION in verbose mode";
output 'Will skip checking of external links', 1
if $main::opt_internal_only;
output "Allowing unqualified host names", 1
if $main::opt_allow_simple_hosts;
output "Not using optional Time::Duration module: not found", 1
unless $main::useduration;
}
sub init_modules {
use URI;
# Prepare the user agent to be used:
use LWP::UserAgent;
use LWP::MediaTypes;
#use LWP::Debug qw(- +debug);
use HTML::LinkExtor;
$main::ua = new LWP::UserAgent;
$main::ua->agent("Checkbot/$VERSION LWP/" . LWP::Version);
$main::ua->timeout($main::opt_timeout);
# Add a proxy to the user agent, if defined
$main::ua->proxy(['http', 'ftp'], $main::opt_proxy)
if defined($main::opt_proxy);
$main::ua->no_proxy(split(',', $main::opt_noproxy))
if defined $main::opt_noproxy;
# Add a cookie jar to the UA if requested by the user
$main::ua->cookie_jar( {} )
if defined $main::opt_cookies or $main::opt_cookies;
require Mail::Send if defined $main::opt_mailto;
use HTTP::Status;
}
sub init_globals {
my $url;
# Directory and files for output
if ($main::opt_file) {
$main::file = $main::opt_file;
$main::file =~ /(.*)\./;
$main::server_prefix = $1;
} else {
$main::file = "checkbot.html";
$main::server_prefix = "checkbot";
}
$main::tmpdir = ($ENV{'TMPDIR'} or $ENV{'TMP'} or $ENV{'TEMP'} or "/tmp") . "/Checkbot.$$";
$main::cur_queue = $main::tmpdir . "/queue";
$main::new_queue = $main::tmpdir . "/queue-new";
# Make sure we catch signals so that we can clean up temporary files
$SIG{'INT'} = $SIG{'TERM'} = $SIG{'HUP'} = $SIG{'QUIT'} = \&got_signal;
# Set up hashes to be used
%main::checked = ();
%main::servers = ();
%main::servers_get_only = ();
# Initialize the start URLs. --url takes precedence. Otherwise
# just process URLs in order as they appear on the command line.
unshift(@ARGV, $main::opt_url) if $main::opt_url;
foreach (@ARGV) {
$url = URI->new($_);
# If no scheme is defined we will assume file is used, so that
# it becomes easy to check a single file.
$url->scheme('file') unless defined $url->scheme;
$url->host('localhost') if $url->scheme eq 'file';
if (!defined $url->host) {
warn "No host specified in URL $url, ignoring it.\n";
next;
}
push(@starturls, $url);
}
die "There are no valid starting URLs to begin checking with!\n"
if scalar(@starturls) == -1;
# Set the automatic matching expression to a concatenation of the starturls
if (!defined $main::opt_match) {
my @matchurls;
foreach my $url (@starturls) {
# Remove trailing files from the match, e.g. remove index.html
# stuff so that we match on the host and/or directory instead,
# but only if there is a path component in the first place.
my $matchurl = $url->as_string;
$matchurl =~ s!/[^/]+$!/! unless $url->path eq '';
push(@matchurls, quotemeta $matchurl);
}
$main::opt_match = '^(' . join('|', @matchurls) . ')';
output "--match defaults to $main::opt_match";
}
# Initialize statistics hash with number of start URLs
$stats{'todo'} = scalar(@starturls);
# We write out our status every now and then.
$main::cp_int = 1;
$main::cp_last = 0;
}
sub init_suppression {
return if not defined $main::opt_suppress;
die "Suppression file \"$main::opt_suppress\" is in fact a directory"
if -d $main::opt_suppress;
open(SUPPRESSIONS, $main::opt_suppress)
or die "Unable to open $main::opt_suppress for reading: $!\n";
while (my $line = <SUPPRESSIONS>) {
chomp $line;
next if $line =~ /^#/ or $line =~ /^\s*$/;
if ($line !~ /^\s*(\d+)\s+(\S+)/) {
output "WARNING: Unable to parse line in suppression file $main::opt_suppress:\n $line\n";
} else {
output "Suppressed: $1 $2\n" if $main::opt_verbose;
$suppression{$1}{$2} = $2;
}
}
close SUPPRESSIONS;
}
### Main application code
sub check_links {
my $line;
mkdir($main::tmpdir, 0755)
|| die "$0: unable to create directory $main::tmpdir: $!\n";
# Explicitly set the record separator. I had the problem that this
# was not defined under my perl 5.00502. This should fix that, and
# not cause problems for older versions of perl.
$/ = "\n";
open(CURRENT, ">$main::cur_queue")
|| die "$0: Unable to open CURRENT $main::cur_queue for writing: $!\n";
open(QUEUE, ">$main::new_queue")
|| die "$0: Unable to open QUEUE $main::new_queue for writing: $!\n";
# Prepare CURRENT queue with starting URLs
foreach (@starturls) {
print CURRENT $_->as_string . "|\n";
}
close CURRENT;
open(CURRENT, $main::cur_queue)
|| die "$0: Unable to open CURRENT $main::cur_queue for reading: $!\n";
do {
# Read a line from the queue, and process it
while (defined ($line = <CURRENT>) ) {
chomp($line);
&handle_url($line);
&check_point();
}
# Move queues around, and try again, but only if there are still
# things to do
output "*** Moving queues around, " . $stats{'todo'} . " links to do.";
close CURRENT
or warn "Error while closing CURRENT filehandle: $!\n";
close QUEUE;
# TODO: should check whether these succeed
unlink($main::cur_queue);
rename($main::new_queue, $main::cur_queue);
open(CURRENT, "$main::cur_queue")
|| die "$0: Unable to open $main::cur_queue for reading: $!\n";
open(QUEUE, ">$main::new_queue")
|| die "$0: Unable to open $main::new_queue for writing: $!\n";
} while (not -z $main::cur_queue);
close CURRENT;
close QUEUE;
unless (defined($main::opt_debug)) {
clean_up();
}
}
sub clean_up {
unlink $main::cur_queue, $main::new_queue;
rmdir $main::tmpdir;
output "Removed temporary directory $main::tmpdir and its contents.\n", 1;
}
sub got_signal {
my ($signalname) = @_;
clean_up() unless defined $main::opt_debug;
print STDERR "Caught SIG$signalname.\n";
exit 1;
}
# Whether URL is 'internal' or 'external'
sub is_internal ($) {
my ($url) = @_;
return ( $url =~ /$main::opt_match/o
and not (defined $main::opt_exclude and $url =~ /$main::opt_exclude/o));
}
sub handle_url {
my ($line) = @_;
my ($urlstr, $urlparent) = split(/\|/, $line);
my $reqtype;
my $response;
my $type;
$stats{'todo'}--;
# Add this URL to the ones we've seen already, return if it is a
# duplicate.
return if add_checked($urlstr);
$stats{'link'}++;
# Is this an external URL and we only check internal stuff?
return if defined $main::opt_internal_only
and not is_internal($urlstr);
my $url = URI->new($urlstr);
# Perhaps this is a URL we are not interested in checking...
if (not defined($url->scheme)
or $url->scheme !~ /^(https?|file|ftp|gopher|nntp)$/o ) {
# Ignore URLs which we know we can ignore, create error for others
if ($url->scheme =~ /^(news|mailto|javascript|mms)$/o) {
output "Ignore $url", 1;
} else {
add_error($urlstr, $urlparent, 904, "Unknown scheme in URL: "
. $url->scheme);
}
return;
}
# Guess/determine the type of document we might retrieve from this
# URL. We do this because we only want to use a full GET for HTML
# document. No need to retrieve images, etc.
if ($url->path =~ /\/$/o || $url->path eq "") {
$type = 'text/html';
} else {
$type = guess_media_type($url->path);
}
# application/octet-stream is the fallback of LWP's guess stuff, so
# if we get this then we ask the server what we got just to be sure.
if ($type eq 'application/octet-stream') {
$response = performRequest('HEAD', $url, $urlparent, $type, $main::opt_language);
$type = $response->content_type;
}
# Determine if this is a URL we should GET fully or partially (using HEAD)
if ($type =~ /html/o
&& $url->scheme =~ /^(https?|file|ftp|gopher)$/o
and is_internal($url->as_string)
&& (!defined $main::opt_exclude || $url !~ /$main::opt_exclude/o)) {
$reqtype = 'GET';
} else {
$reqtype = 'HEAD';
}
# Get the document, unless we already did while determining the type
$response = performRequest($reqtype, $url, $urlparent, $type, $main::opt_language)
unless defined($response) and $reqtype eq 'HEAD';
# Ok, we got something back from checking, let's see what it is
if ($response->is_success) {
select(undef, undef, undef, $main::opt_sleep)
unless $main::opt_debug || $url->scheme eq 'file';
# Internal HTML documents need to be given to handle_doc for processing
if ($reqtype eq 'GET' and is_internal($url->as_string)) {
handle_doc($response, $urlstr);
}
} else {
# Right, so it wasn't the smashing success we hoped for, so bring
# the bad news and store the pertinent information for later
add_error($url, $urlparent, $response->code, $response->message);
if ($response->is_redirect and is_internal($url->as_string)) {
if ($response->code == 300) { # multiple choices, but no redirection available
output 'Multiple choices', 2;
} else {
my $baseURI = URI->new($url);
if (defined $response->header('Location')) {
my $redir_url = URI->new_abs($response->header('Location'), $baseURI);
output "Redirected to $redir_url", 2;
add_to_queue($redir_url, $urlparent);
$stats{'todo'}++;
} else {
output 'Location header missing from redirect response', 2;
}
}
}
}
# Done with this URL
}
sub performRequest {
my ($reqtype, $url, $urlparent, $type, $language) = @_;
my ($response);
# A better solution here would be to use GET exclusively. Here is how
# to do that. We would have to set this max_size thing in
# check_external, I guess...
# Set $ua->max_size(1) and then try a normal GET request. However,
# that doesn't always work as evidenced by an FTP server that just
# hangs in this case... Needs more testing to see if the timeout
# catches this.
# Normally, we would only need to do a HEAD, but given the way LWP
# handles gopher requests, we need to do a GET on those to get at
# least a 500 and 501 error. We would need to parse the document
# returned by LWP to find out if we had problems finding the
# file. -- Patch by Bruce Speyer
# We also need to do GET instead of HEAD if we know the remote
# server won't accept it. The standard way for an HTTP server to
# indicate this is by returning a 405 ("Method Not Allowed") or 501
# ("Not Implemented"). Other circumstances may also require sending
# GETs instead of HEADs to a server. Details are documented below.
# -- Larry Gilbert
# Normally we try a HEAD request first, then a GET request if
# needed. There may be circumstances in which we skip doing a HEAD
# (e.g. when we should be getting the whole document).
foreach my $try ('HEAD', 'GET') {
# Skip trying HEAD when we know we need to do a GET or when we
# know only a GET will work anyway.
next if $try eq 'HEAD' and
($reqtype eq 'GET'
or $url->scheme eq 'gopher'
or (defined $url->authority and $main::servers_get_only{$url->authority}));
# Output what we are going to do with this link
output(sprintf("%4s %s (%s)\n", $try, $url, $type), 1);
# Create the request with all appropriate headers
my %header_hash = ( 'Referer' => $urlparent );
if (defined($language) && ($language ne '')) {
$header_hash{'Accept-Language'} = $language;
}
my $ref_header = new HTTP::Headers(%header_hash);
my $request = new HTTP::Request($try, $url, $ref_header);
$response = $main::ua->simple_request($request);
# If we are doing a HEAD request we need to make sure nothing
# fishy happened. we use some heuristics to see if we are ok, or
# if we should try again with a GET request.
if ($try eq 'HEAD') {
# 400, 405, 406 and 501 are standard indications that HEAD
# shouldn't be used
# We used to check for 403 here also, but according to the HTTP spec
# a 403 indicates that the server understood us fine but really does
# not want us to see the page, so we SHOULD NOT retry.
if ($response->code =~ /^(400|405|406|501)$/o) {
output "Server does not seem to like HEAD requests; retrying", 2;
$main::servers_get_only{$url->authority}++;
next;
};
# There are many servers out there that have real trouble with
# HEAD, so if we get a 500 Internal Server error just retry with
# a GET request to get an authoritative answer. We used to do this
# only for special cases, but the list got big and some
# combinations (e.g. Zope server behind Apache proxy) can't
# easily be detected from the headers.
if ($response->code =~ /^500$/o) {
output "Internal server error on HEAD request; retrying with GET", 2;
$main::servers_get_only{$url->authority}++ if defined $url->authority;
next;
}
# If we know the server we can try some specific heuristics
if (defined $response->server) {
# Netscape Enterprise has been seen returning 500 and even 404
# (yes, 404!!) in response to HEAD requests
if ($response->server =~ /^Netscape-Enterprise/o
and $response->code =~ /^404$/o) {
output "Unreliable Netscape-Enterprise response to HEAD request; retrying", 2;
$main::servers_get_only{$url->authority}++;
next;
};
}
# If a HEAD request resulted in nothing noteworthy, no need for
# any further attempts using GET, we are done.
last;
}
}
return $response;
}
# This routine creates a (temporary) WWW page based on the current
# findings This allows somebody to monitor the process, but is also
# convenient when this program crashes or waits because of diskspace
# or memory problems
sub create_page {
my($final_page) = @_;
my $path = "";
my $prevpath = "";
my $prevcode = 0;
my $prevmessage = "";
output "*** Start writing results page";
open(OUT, ">$main::file.new")
|| die "$0: Unable to open $main::file.new for writing:\n";
print OUT "\n";
print OUT "\n";
print OUT "\n";
print OUT "\n";
if (!$final_page) {
printf OUT "\n",
int($main::cp_int * 60 / 2 - 5);
}
print OUT "Checkbot report\n";
print OUT "\n" if defined $main::opt_style;
print OUT "\n";
print OUT "\n";
print OUT "
Checkbot: main report
\n";
# Show the status of this checkbot session
print OUT "
Status:
";
if ($final_page) {
print OUT "Done. \n";
print OUT 'Run started on ' . localtime($start_time) . ". \n";
print OUT 'Run duration ', duration(time() - $start_time), ".\n"
if $main::useduration;
} else {
print OUT "Running since " . localtime($start_time) . ". \n";
print OUT "Last update at ". localtime() . ". \n";
print OUT "Next update in ", int($main::cp_int), " minutes.\n";
}
print OUT "
\n\n";
# Summary (very brief overview of key statistics)
print OUT "
Report summary
\n";
print OUT "
\n";
print OUT "
Links checked
", $stats{'link'}, "
\n";
print OUT "
Problems so far
", $stats{'problem'}, "
\n";
print OUT "
Links to do
", $stats{'todo'}, "
\n";
print OUT "
\n";
# Server information
printAllServers($final_page);
# Checkbot session parameters
print OUT "
Checkbot session parameters
\n";
print OUT "
\n";
print OUT "
--url & <command line urls>
Start URL(s)
",
join(',', @starturls), "
\n";
print OUT "
--match
Match regular expression
$main::opt_match
\n";
print OUT "
--exclude
Exclude regular expression
$main::opt_exclude
\n" if defined $main::opt_exclude;
print OUT "
--filter
Filter regular expression
$main::opt_filter
\n" if defined $main::opt_filter;
print OUT "
--noproxy
No Proxy for the following domains
$main::opt_noproxy
\n" if defined $main::opt_noproxy;
print OUT "
--ignore
Ignore regular expression
$main::opt_ignore
\n" if defined $main::opt_ignore;
print OUT "
--suppress
Suppress error code and URL specified by
$main::opt_suppress
\n" if defined $main::opt_suppress;
print OUT "
--dontwarn
Don't warn for these codes
$main::opt_dontwarn
\n" if $main::opt_dontwarn ne 'xxx';
print OUT "
--enable-virtual
Use virtual names only
yes
\n" if $main::opt_enable_virtual;
print OUT "
--internal-only
Check only internal links
yes
\n" if defined $main::opt_internal_only;
print OUT "
--cookies
Accept cookies
yes
\n" if defined $main::opt_cookies;
print OUT "
--sleep
Sleep seconds between requests
$main::opt_sleep
\n" if ($main::opt_sleep != 0);
print OUT "
--timeout
Request timeout seconds
$main::opt_timeout
\n";
print OUT "
\n";
# Statistics for types of links
print OUT signature();
close(OUT);
rename($main::file, $main::file . ".bak");
rename($main::file . ".new", $main::file);
unlink $main::file . ".bak" unless $main::opt_debug;
output "*** Done writing result page";
}
# Create a list of all the servers, and create the corresponding table
# and subpages. We use the servers overview for this. This can result
# in strange effects when the same server (e.g. IP address) has
# several names, because several entries will appear. However, when
# using the IP address there are also a number of tricky situations,
# e.g. with virtual hosting. Given that likely the servers have
# different names for a reasons, I think it is better to have
# duplicate entries in some cases, instead of working off of the IP
# addresses.
sub printAllServers {
  my ($finalPage) = @_;

  foreach my $server (sort keys %main::servers) {
    print_server($server, $finalPage);
  }
  print OUT "\n\n";
}
sub get_server_type {
my($server) = @_;
my $result;
if ( ! defined($main::server_type{$server})) {
if ($server eq 'localhost') {
$result = 'Direct access through filesystem';
} else {
my $request = new HTTP::Request('HEAD', "http://$server/");
my $response = $main::ua->simple_request($request);
$result = $response->header('Server');
}
$result = "Unknown server type" if ! defined $result or $result eq "";
output "=== Server $server is a $result";
$main::server_type{$server} = $result;
}
$main::server_type{$server};
}
sub add_checked {
my($urlstr) = @_;
my $item;
my $result = 0;
if (is_internal($urlstr) and not $main::opt_enable_virtual) {
# Substitute hostname with IP-address. This keeps us from checking
# the same pages for each name of the server, wasting time & resources.
# Only do this if we are not dealing with virtual servers. Also, we
# only do this for internal servers, because it makes no sense for
# external links.
my $url = URI->new($urlstr);
$url->host(ip_address($url->host)) if $url->can('host');
$urlstr = $url->as_string;
}
if (defined $main::checked{$urlstr}) {
$result = 1;
$main::checked{$urlstr}++;
} else {
$main::checked{$urlstr} = 1;
}
return $result;
}
# Has this URL already been checked?
sub is_checked {
my ($urlstr) = @_;
if (is_internal($urlstr) and not $main::opt_enable_virtual) {
# Substitute hostname with IP-address. This keeps us from checking
# the same pages for each name of the server, wasting time & resources.
# Only do this if we are not dealing with virtual servers. Also, we
# only do this for internal servers, because it makes no sense for
# external links.
my $url = URI->new($urlstr);
$url->host(ip_address($url->host)) if $url->can('host');
$urlstr = $url->as_string;
}
return defined $main::checked{$urlstr};
}
sub add_error ($$$$) {
my ($url, $urlparent, $code, $status) = @_;
# Check for the quick eliminations first
return if $code =~ /$main::opt_dontwarn/o
or defined $suppression{$code}{$url};
# Check for matches on the regular expressions in the suppression file
if (defined $suppression{$code}) {
foreach my $item ( keys %{ $suppression{$code} } ) {
if ($item =~ /^\/(.*)\/$/) {
my $regexp = $1;
if ($url =~ $regexp) {
output "Suppressing error $code for $url due to regular expression match on $regexp", 2;
return;
}
}
}
}
$status = checkbot_status_message($code) if not defined $status;
output "$code $status", 2;
$url_error{$url}{'code'} = $code;
$url_error{$url}{'status'} = $status;
push @{$url_parent{$url}}, $urlparent;
$stats{'problem'}++;
}
# Parse document, and get the links
sub handle_doc {
my ($response, $urlstr) = @_;
my $num_links = 0;
my $new_links = 0;
# TODO: we are making an assumption here that the $response->base is
# valid, which might not always be true! This needs to be fixed, but
# first let's try to find out why this stuff is sometimes not
# valid... Aha. a simple <base> tag will do the trick. It is
# not clear what the right fix for this is.
# We use the URL we used to retrieve this document as the URL to
# attach the problem reports to, even though this may not be the
# proper base url.
my $baseurl = URI->new($urlstr);
# When we received the document we can add a notch to its server
$main::servers{$baseurl->authority}++;
# Retrieve useful information from this document.
# TODO: using a regexp is NOT how this should be done, but it is
# easy. The right way would be to write a HTML::Parser or to use
# XPath on the document DOM provided that the document is easily
# parsed as XML. Either method is a lot of overhead.
if ($response->content =~ /title\>(.*?)\<\/title/si) {
# TODO: using a general hash that stores titles for all pages may
# consume too much memory. It would be better to only store the
# titles for requests that had problems. That requires passing them
# down to the queue. Take the easy way out for now.
$url_title{$baseurl} = $1;
}
# Check if this document has a Robots META tag. If so, check if
# Checkbot is allowed to FOLLOW the links on this page. Note that we
# ignore the INDEX directive because Checkbot is not an indexing
# robot. See http://www.robotstxt.org/wc/meta-user.html
# TODO: one more reason (see title) to properly parse this document...
if ($response->content =~ /\<meta[^\>]*?robots[^\>]*?nofollow[^\>]*?\>/si) {
output "Obeying robots meta tag $&, skipping document", 2;
return;
}
# Parse the document just downloaded, using the base url as defined
# in the response, otherwise we won't get the same behavior as
# browsers and miss things like a BASE url in pages.
my $p = HTML::LinkExtor->new(undef, $response->base);
# If charset information is missing then decoded_content doesn't
# work. Fall back to content in this case, even though that may lead
# to charset warnings. See bug 1665075 for reference.
my $content = $response->decoded_content || $response->content;
$p->parse($content);
$p->eof;
# Deal with the links we found in this document
my @links = $p->links();
foreach (@links) {
my ($tag, %l) = @{$_};
foreach (keys %l) {
# Get the canonical URL, so we don't need to worry about base, case, etc.
my $url = $l{$_}->canonical;
# Remove fragments, if any
$url->fragment(undef);
# Determine in which tag this URL was found
# Ignore <base> tags because they need not point to a valid URL
# in order to work (e.g. when directory indexing is turned off).
next if $tag eq 'base';
# Skip some 'links' that are not required to link to an actual
# live link but which LinkExtor returns as links anyway.
next if $tag eq 'applet' and $_ eq 'code';
next if $tag eq 'object' and $_ eq 'classid';
# Run filter on the URL if defined
if (defined $main::opt_filter) {
die "Filter supplied with --filter option contains errors!\n$@\n"
unless defined eval '$url =~ s' . $main::opt_filter
}
# Should we ignore this URL?
if (defined $main::opt_ignore and $url =~ /$main::opt_ignore/o) {
output "--ignore: $url", 1;
next;
}
# Check whether URL has fully-qualified hostname
if ($url->can('host') and $url->scheme ne 'news') {
if (! defined $url->host) {
add_error($url, $baseurl->as_string, '901',
$checkbot_errors{'901'});
} elsif (!$main::opt_allow_simple_hosts && $url->host !~ /\./) {
add_error($url, $baseurl->as_string, '902',
$checkbot_errors{'902'});
}
}
# Some servers do not process // correctly in requests for relative
# URLs. We should flag them here. Note that // in a URL path is
# actually valid per RFC 2396, and that they should not be removed
# when processing relative URLs as per RFC 1808. See
# e.g. .
# Thanks to Randal Schwartz and Reinier Post for their explanations.
if ($url =~ /^http:\/\/.*\/\//) {
add_error($url, $baseurl->as_string, '903',
$checkbot_errors{'903'});
}
# We add all URLs found to the queue, unless we already checked
# it earlier
if (is_checked($url)) {
# If an error has already been logged for this URL we add the
# current parent to the list of parents on which this URL
# appears.
if (defined $url_error{$url}) {
push @{$url_parent{$url}}, $baseurl->as_string;
$stats{'problem'}++;
}
$stats{'link'}++;
} else {
add_to_queue($url, $baseurl);
$stats{'todo'}++;
$new_links++;
}
$num_links++;
}
}
output "Got $num_links links ($new_links new) from document", 2;
}
sub add_to_queue {
my ($url, $parent) = @_;
print QUEUE $url . '|' . $parent . "\n";
}
sub checkbot_status_message ($) {
my ($code) = @_;
my $result = status_message($code) || $checkbot_errors{$code}
|| '(Undefined status)';
}
sub print_server ($$) {
my($server, $final_page) = @_;
my $host = $server;
$host =~ s/(.*):\d+/$1/;
output "Writing server $server (really " . ip_address($host) . ")", 1;
my $server_problem = count_problems($server);
my $filename = "$main::server_prefix-$server.html";
$filename =~ s/:/-/o;
print OUT "
",
$main::servers{$server} + $server_problem;
if ($server_problem) {
printf OUT "
%d
",
$server_problem;
} else {
printf OUT "
%d
",
$server_problem;
}
my $ratio = $server_problem / ($main::servers{$server} + $server_problem) * 100;
print OUT "
";
print OUT "" unless $ratio < 0.5;
printf OUT "%4d%%", $ratio;
print OUT "" unless $ratio < 0.5;
print OUT "
";
print OUT "
\n";
# Create this server file
open(SERVER, ">$filename")
|| die "Unable to open server file $filename for writing: $!";
print SERVER "\n";
print SERVER "\n";
print SERVER "\n";
print SERVER "\n";
if (!$final_page) {
printf SERVER "\n",
int($main::cp_int * 60 / 2 - 5);
}
print SERVER "\n" if defined $main::opt_style;
print SERVER "Checkbot: output for server $server\n";
print SERVER "
Checkbot: report for server $server
\n";
print SERVER "
Go To: Main report page";
printServerProblems($server, $final_page);
print SERVER "\n";
print SERVER signature();
close SERVER;
}
# Return a string containing Checkbot's signature for HTML pages
sub signature {
return "
".
"";
}
# Loop through all possible problems, select relevant ones for this server
# and display them in a meaningful way.
sub printServerProblems ($$) {
my ($server, $final_page) = @_;
$server = quotemeta $server;
my $separator = "\n";
my %thisServerList = ();
# First we find all the problems for this particular server
foreach my $url (keys %url_parent) {
foreach my $parent (@{$url_parent{$url}}) {
next if $parent !~ $server;
chomp $parent;
$thisServerList{$url_error{$url}{'code'}}{$parent}{$url}
= $url_error{$url}{'status'};
}
}
# Do a run to find all error codes on this page, and include a table
# of contents to the actual report
foreach my $code (sort keys %thisServerList) {
print SERVER ", $code ";
print SERVER checkbot_status_message($code);
print SERVER "";
}
print SERVER ".
\n";
# Now run through this list and print the errors
foreach my $code (sort keys %thisServerList) {
my $codeOut = '';
foreach my $parent (sort keys %{ $thisServerList{$code} }) {
my $urlOut = '';
foreach my $url (sort keys %{ $thisServerList{$code}{$parent} }) {
my $status = $thisServerList{$code}{$parent}{$url};
$urlOut .= "
$url \n";
$urlOut .= "$status"
if defined $status and $status ne checkbot_status_message($code);
$urlOut .= "
\n";
}
if ($urlOut ne '') {
$codeOut .= "
$parent";
$codeOut .= " $url_title{$parent}\n" if defined $url_title{$parent};
$codeOut .= "
\n$urlOut\n
\n\n";
}
}
if ($codeOut ne '') {
print SERVER $separator if $separator;
$separator = '';
print SERVER "
$code ";
print SERVER checkbot_status_message($code);
print SERVER "
\n
\n$codeOut\n
\n";
}
}
}
sub check_point {
if ( ($main::cp_last + 60 * $main::cp_int < time())
|| ($main::opt_debug && $main::opt_verbose)) {
&create_page(0);
$main::cp_last = time();
# Increase the interval from one snapshot to the next by 25%
# until we have reached the maximum.
$main::cp_int *= 1.25 unless $main::opt_debug;
$main::cp_int = $main::opt_interval if $main::cp_int > $main::opt_interval;
}
}
sub send_mail {
my $msg = new Mail::Send;
my $sub = 'Checkbot results for ';
$sub .= join(', ', @starturls);
$sub .= ': ' . $stats{'problem'} . ' errors';
$msg->to($main::opt_mailto);
$msg->subject($sub);
my $fh = $msg->open;
print $fh "Checkbot results for:\n " . join("\n ", @starturls) . "\n\n";
print $fh "User-supplied note: $main::opt_note\n\n"
if defined $main::opt_note;
print $fh $stats{'link'}, " links were checked, and ";
print $fh $stats{'problem'}, " problems were detected.\n";
print $fh 'Run started on ' . localtime($start_time) . "\n";
print $fh 'Run duration ', duration(time() - $start_time), "\n"
if $main::useduration;
print $fh "\n-- \nCheckbot $VERSION\n";
print $fh "\n";
$fh->close;
}
sub print_help {
print <<"__EOT__";
Checkbot $VERSION command line options:
--cookies Accept cookies from the server
--debug Debugging mode: No pauses, stop after 25 links.
--file file Use file as basis for output file names.
--help Provide this message.
--mailto address Mail brief synopsis to address when done.
--noproxy domains Do not proxy requests to given domains.
--verbose Verbose mode: display many messages about progress.
--url url Start URL
--match match Check pages only if URL matches `match'
If no match is given, the start URL is used as a match
--exclude exclude Exclude pages if the URL matches 'exclude'
--filter regexp Run regexp on each URL found
--ignore ignore Ignore URLs matching 'ignore'
--suppress file Use contents of 'file' to suppress errors in output
--note note Include Note (e.g. URL to report) along with Mail message.
--proxy URL URL of proxy server for HTTP and FTP requests.
--internal-only Only check internal links, skip checking external links.
--sleep seconds Sleep this many seconds between requests (default 0)
--style url Reference the style sheet at this URL.
--timeout seconds Timeout for http requests in seconds (default 120)
--interval seconds Maximum time interval between updates (default 10800)
--dontwarn codes Do not write warnings for these HTTP response codes
--enable-virtual Use only virtual names, not IP numbers for servers
--language Specify 2-letter language code for language negotiation
Options --match, --exclude, and --ignore can take a perl regular expression
as their argument\n
Use 'perldoc checkbot' for more verbose documentation.
Checkbot WWW page : http://degraaff.org/checkbot/
Mail bugs and problems: checkbot\@degraaff.org
__EOT__
exit 0;
}
sub ip_address {
my($host) = @_;
return $main::ip_cache{$host} if defined $main::ip_cache{$host};
my($name,$aliases,$adrtype,$length,@addrs) = gethostbyname($host);
if (defined $addrs[0]) {
my($n1,$n2,$n3,$n4) = unpack ('C4',$addrs[0]);
$main::ip_cache{$host} = "$n1.$n2.$n3.$n4";
} else {
# Whee! No IP-address found for this host. Just keep whatever we
# got for the host. If this really is some kind of error it will
# be found later on.
$main::ip_cache{$host} = $host;
}
}
sub count_problems {
my ($server) = @_;
$server = quotemeta $server;
my $count = 0;
foreach my $url (sort keys %url_parent) {
foreach my $parent (@{ $url_parent{$url} }) {
$count++ if $parent =~ m/$server/;
}
}
return $count;
}
checkbot-1.80/README

Checkbot -- a WWW link verifier
Checkbot is a perl5 script which can verify links within a region of
the World Wide Web. It checks all pages within an identified region,
and all links within that region. After checking all links within the
region, it will also check all links which point outside of the
region, and then stop.
Checkbot regularly writes reports on its findings, including all
servers found in the region, and all links with problems on those
servers.
Checkbot was written originally to check a number of servers at
once. This has implied some design decisions, so you might want to
keep that in mind when making suggestions. Speaking of which, be sure
to check the to do file on the website for things which have been
suggested for Checkbot.
INSTALLATION
Making and installing Checkbot is easy:
perl Makefile.PL
make
make install
You will need to have the following Perl modules installed in order to
properly install Checkbot:
LWP
URI
HTML::Parser
MIME::Base64
Net::FTP
Mail::Send (optional, contained in the MailTools package)
Time::Duration (optional, used for additional info in report)
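A quick way to verify that the required modules are installed is a
one-liner along these lines; it dies with an error message for the
first missing module:

  perl -MLWP -MURI -MHTML::Parser -MMIME::Base64 -MNet::FTP -e 'print "ok\n"'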
WHERE TO FIND IT
Checkbot is distributed at: http://degraaff.org/checkbot/
Problems, bug reports, and feature enhancements are welcome at
http://sourceforge.net/projects/checkbot/
There is an announcement mailing list to which announcements of new
versions are posted. You can sign up for the list at
https://lists.sourceforge.net/lists/listinfo/checkbot-announce
Hans de Graaff
RECENT CHANGES
Changes in version 1.80 (15-Oct-2008)
* Fix handling of nofollow robots tag.
* Require newer version of LWP for better handling of character
encodings.
* Ignore mms scheme.
* Minor clarification in output.
Changes in version 1.79 (3-Feb-2007)
* Correctly parse documents to avoid problems with UTF-8
documents. This avoids the "Parsing of undecoded UTF-8 will give
garbage when decoding entities" messages.
* Allow regular expressions in the suppression file, and complain if
the suppression file is not a proper file.
* More robust handling of HTTP and FTP servers that have problems
responding to HEAD requests.
* Use the original URL to report problems.
* Ensure XHTML compliance.
Changes in version 1.78 (3-May-2006)
* Don't throw errors for links that cannot be expected to be valid
all the time (e.g. the classid attribute of an object element)
* Better fallbacks for some cases where the HEAD request does not
work
* Add more classes and ids to allow more styling of results pages
(including example CSS file)
* Ensure XHTML compliance
* Better checks for optional dependencies
Changes in version 1.77 (28-Jul-2005)
* Fix silly build-related problem that prevented checkbot 1.76 from
running at all.
* Check for presence of robots meta tag and act on it.
Changes in version 1.76 (25-Jul-2005)
* Error reports now include the page title for easier identification.
* javascript: links are now ignored because they cannot be checked.
* Documentation updates.
Changes in version 1.75 (22-Apr-2004)
* New --cookies option to accept cookies from servers while checking.
* New --noproxy option indicates which domains should not be
passed through the proxy.
* New error code for unknown domains; only known non-checkable
schemes are ignored now.
* Minor bug fixes.
* Documentation updates.
Changes in version 1.74 (17-Dec-2003)
* New --suppress option allows Response code/URL combinations not
to be reported as problems.
* Checkbot warnings are now handled as pseudo-HTTP status messages
so that they can make use of all Checkbot features such as
--dontwarn.
* Option --allow-simple-hosts is deprecated due to this change.
* More robust handling of (lack of) status messages.
* Checkbot now requires LWP 5.70 due to bugfixes in this release,
although it should still also work with older LWP versions.
* Documentation fixes.
Changes in version 1.73 (31-Aug-2003)
* Checkbot now tries to produce valid XHTML 1.1
* URLs matching the --ignore option are now completely ignored;
they used to be checked but not reported.
* Proxy support works again, but --proxy now applies to all links
* Documentation fixes
Changes in version 1.72 (04-May-2003)
* URLs with query strings are now checked by default, the
--exclude option can be used to revert to the previous behavior
* The server results page contains shortcut links to each section
* Removed warning for unqualified hostnames for news: URLs
* Handling of signals such as SIGINT
* Bug and documentation fixes
Changes in version 1.71 (29-Dec-2002)
* New --filter option allows rewriting of URLs before they will be checked
* Problematic links are now reported for each page on which they occur
* New statistics which should work correctly
* Much simplified storage of information on problem links
* Duplicate links are now properly detected and not checked twice
* Rewritten internals for link checking, as a consequence internal
and external links are checked at the same time now, not in two
passes like before
* Rewritten internals for message output
* A simple test case for 'make test'
* Minor cleanups of the code
Version 1.70 was only released for testing purposes
Changes in version 1.69
* Improved makefile and packaging
* Better default for --match argument
* Additional instance of using GET instead of HEAD added
* Bug fixes in printing of web server feedback
Changes in version 1.68
* Add --allow-simple-hosts which doesn't check for unqualified hosts
* Mention --style option in help and added example style file
* Change --sleep implementation so that fractional seconds can be used
* Fix a bug with handling tags
* Tighten checks for http and https schemes
* Remove harmless warnings
checkbot-1.80/META.yml

# http://module-build.sourceforge.net/META-spec.html
#XXXXXXX This is a prototype!!! It will change in the future!!! XXXXX#
name: checkbot
version: 1.80
version_from: checkbot
installdirs: site
requires:
distribution_type: module
generated_by: ExtUtils::MakeMaker version 6.30
checkbot-1.80/MANIFEST

ChangeLog
MANIFEST
Makefile.PL
README
TODO
checkbot
checkbot.css
t/test.t
META.yml Module meta-data (added by MakeMaker)
checkbot-1.80/ChangeLog

2008-10-15  Hans de Graaff
* Checkbot 1.80 is released
2008-07-08 Hans de Graaff
* checkbot (handle_doc): Tighten up the check for a robots tag so
that nofollow text later in the document won't be matched, thus
skipping the whole document, bug 2005950.
2007-05-05 Brandon Bell
* checkbot: mms scheme can be ignored safely.
2007-04-30 Hans de Graaff
* checkbot (printAllServers): Clarify that 'Unique links' actually
is 'Documents scanned'.
2007-02-26 Hans de Graaff
* checkbot (handle_doc): Handle the case where decoded_content is
not available as per bug 1665075.
2007-02-26 Gerald Pfeifer
* checkbot (check_point): Simplify and add a comment.
2007-02-26 Hans de Graaff
* Makefile.PL: Require LWP 5.803 or better. decoded_content got
added in 5.802 and 5.803 added some important bugfixes.
2007-02-03 Hans de Graaff
* Checkbot 1.79 is released
* RELEASE-PROCESS: Add the release process documentation.
2007-01-27 Gerald Pfeifer
* checkbot (init_suppression): Check and provide error if
suppression file is in fact a directory.
2006-12-28 Hans de Graaff
* checkbot: Add summary to tables to make files XHTML 1.1 compliant.
2006-11-16 Hans de Graaff
* checkbot (handle_doc): Parse the decoded content so that all
character set issues are dealt with before parsing. This solves
bug 1264729.
2006-11-14 Hans de Graaff
* checkbot (performRequest): Simplify the code dealing with
problems of HEAD requests by retrying all 500 responses instead of
special-casing particular failures that we happen to know
about. This type of problem is all too common, and if there really
is a problem GET will find it anyway.
(add_error): Allow regular expressions in the suppression
file. Based on patch from Eric Noack
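A minimal sketch of the retry-on-500 idea in plain LWP (the variable
names are assumptions, not Checkbot's internals):

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;
    my $response = $ua->head($url);
    if ($response->code == 500) {
        # Many servers mishandle HEAD; if something is genuinely
        # broken, the GET will report it just as well.
        $response = $ua->get($url);
    }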
2006-11-14 Eric Noack
* checkbot (send_mail): Indicate how many errors are detected in
the notification email's subject.
(handle_doc): Use the URL with which the document was received for
the problem reports and internal accounting, but keep on using the
proper base URL as defined by the response object when retrieving
links from the document. This fixes the case where a weird BASE
URL in a document could make it unclear where the actual problem
was.
2006-10-28 Hans de Graaff
* checkbot (performRequest): Handle case where an FTP server may
not be able to handle a HEAD request. This may cause a lot of data
to be transferred in those cases.
2006-05-03 Hans de Graaff
* Checkbot 1.78 is released
2005-12-18 Hans de Graaff
* checkbot (printServerProblems): Make pages XHTML compliant again.
2005-12-18 Jens Schweikhardt
* checkbot: Add classes and ids so that more styling options for
CSS are available.
* checkbot2.css: Example CSS file using the new classes and ids.
2005-11-11 Hans de Graaff
* checkbot: React in a more subtle way if the Time::Duration
module is not found.
2005-09-22 Hans de Graaff
* Makefile.PL: Check for presence of Net::SSL and explain the
effects if it is not present.
2005-08-20 Hans de Graaff
* checkbot (handle_doc): Ignore some 'links' found by LinkExtor
which do not need to link to live links. Fixed bugs #1264447 and
#1107832.
* test.html: Add test cases for it.
2005-08-06 Hans de Graaff
* checkbot (performRequest): Switch from HEAD to GET on a 400
error, as the most likely cause is that the server has trouble
with HEAD requests.
2005-08-05 Hans de Graaff
* checkbot (handle_doc): Also show how many new links are found on
a page, not just the total number of links.
(performRequest): Don't retry GET method on a 403 error.
(handle_doc): Properly handle newlines in the matches for title
and robots meta tag.
2005-07-28 Hans de Graaff
* Checkbot 1.77 is released.
* checkbot: Fix use of $VERSION so that it compiles and can be
used by MakeMaker at the same time.
(handle_doc): Check for presence of robots meta tag and act on it.
Based on a patch by Donald Willingham.
2005-07-25 Hans de Graaff
* Checkbot 1.76 is released.
2005-06-07 Hans de Graaff
* checkbot (printServerProblems): Include title of page.
(handle_doc): Extract title for later printing.
Add new hash url_title to store page titles.
Based on a patch from John Bintz.
2005-04-23 Hans de Graaff
* checkbot: Add documentation on use of file:/// URLs.
2005-01-23 Hans de Graaff
* checkbot: Only send mail when Checkbot has detected any
problems, based on suggestion from Thomas Kuerten.
Print duration of run on final report, and refactor use of start
time variable to facilitate this. Feature depends on availability
of Time::Duration, but checkbot will work without it. Based on
patch from Adam Griff.
2005-01-23 Adam Griff
* checkbot (create_page): Print out more options on results page.
2005-01-21 Hans de Graaff
* checkbot: Remove automatic version number based on CVS version
now that commits will be more frequent than releases.
2004-11-12 Hans de Graaff
* checkbot (handle_url): Ignore javascript: URLs instead of
generating a 904 error. It would be nice to handle these as well.
2004-05-26 Hans de Graaff
* Makefile.PL: Sync HTML::Parser requirement with required
versions of libwww-perl.
2004-05-03 Hans de Graaff
* checkbot: Write better documentation for --file option.
2004-04-26 Hans de Graaff
* checkbot: Minor documentation changes thank to Jens
Schweikhardt.
2004-04-22 Hans de Graaff
* Checkbot 1.75 is released.
2004-04-21 Hans de Graaff
* checkbot (print_help): Use a here-doc for the help for easier
maintenance.
(init_modules): Add --noproxy options to set list of domains which
will not be passed through the proxy.
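In LWP terms a --noproxy option maps onto the standard no_proxy
method (sketch; the proxy URL and domain names are made-up examples):

    use LWP::UserAgent;

    my $ua = LWP::UserAgent->new;
    $ua->proxy(['http', 'ftp'], 'http://proxy.example.com:8080/');
    $ua->no_proxy('localhost', 'intra.example.com');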
2004-04-18 Hans de Graaff
* checkbot (handle_url): Create an error if an unknown scheme is
encountered and only ignore known schemes like mailto:
2004-03-30 Hans de Graaff
* checkbot: Add explanation about error message which indicates
lack of SSL support.
2004-03-28 Hans de Graaff
* checkbot: Add EXAMPLES section to the perldoc documentation with
an example of the most simple invocation. Needs more examples...
Update help text for --mailto to confirm that more than one
address is possible.
* checkbot: Add new --cookies option to accept cookies from
servers. Based on patch from Roger Pilkey.
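Accepting cookies boils down to attaching a cookie jar to the LWP
user agent (sketch; the jar file name is an example, not what
Checkbot uses):

    use LWP::UserAgent;
    use HTTP::Cookies;

    my $ua = LWP::UserAgent->new;
    $ua->cookie_jar(HTTP::Cookies->new(file     => 'checkbot.cookies',
                                       autosave => 1));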
2004-02-09 Hans de Graaff
* Makefile.PL: Show correct text if LWP test fails.
2004-01-05 Hans de Graaff
* Makefile.PL: Now require LWP 5.76 to avoid problems with 500
"Need a field name" HTTP errors being generated by LWP.
2003-12-29 Gerald Pfeifer
* checkbot: Improve description of --proxy.
(print_help): Ditto.
2003-12-21 Hans de Graaff
* checkbot (performRequest): $url->authority may not be defined
for the URL we are checking.
2003-12-17 Hans de Graaff
* Checkbot 1.74 is released
* checkbot (add_error): Take into account that status message can
be undefined.
2003-12-15 Hans de Graaff
* checkbot: Put Checkbot errors in a hash to have one set of
descriptions around.
(handle_doc): Use it.
(checkbot_status_message): Use it to find the status message for a
code from HTTP codes, Checkbot codes, or a generic status message.
(printServerProblems): Use it.
(handle_url): Move checks for --dontwarn and --suppression
features from here ...
(add_error): ... to here so that it applies to all errors.
2003-12-14 Hans de Graaff
* checkbot: Document that Checkbot defines its own response codes
for common problems.
No longer a need for the %warning hash.
(add_error): New function to add a new error into the hashes.
(handle_url): Use it.
(handle_doc): Use it for what previously were warnings.
(printServerWarnings): Obsolete as warnings have been changed to
use the normal error handling routines.
Marked --allow-simple-hosts option as deprecated, because this can
now be handled in a more generic way by the --dontwarn mechanism.
(print_help): Removed --allow-simple-hosts option from help.
(add_to_queue): Move code to check for double slash in URL to ...
(handle_doc): ... here as Checkbot error 903.
2003-11-29 Hans de Graaff
* checkbot (printServerProblems): Oops. Make sure all output is
going to the right file, not stdout.
Add new --suppress option which reads a file with response code /
URL combinations to be suppressed in the output, based on patch by
Rob Chekaluk.
(init_suppression): Read the suppression file and fill a hash with the
results.
(handle_url): Use it.
(print_help): Document it.
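A hypothetical reading loop for a suppression file with
"response-code URL" lines (the names and exact file format here are
assumptions):

    my %suppressed;
    open(my $fh, '<', $suppression_file)
        or die "Cannot read $suppression_file: $!";
    while (my $line = <$fh>) {
        next if $line =~ /^\s*(?:#|$)/;       # skip comments and blanks
        my ($code, $url) = split ' ', $line, 2;
        next unless defined $url;             # ignore malformed lines
        chomp $url;
        $suppressed{$code}{$url} = 1;
    }
    close $fh;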
2003-11-24 Hans de Graaff
* checkbot: Add example to --ignore argument.
2003-11-23 Hans de Graaff
* checkbot (init_modules): Delete commented-out code to enable
HTTP 1.1 in LWP. HTTP 1.1 has been the default in LWP for a while
and does not need special code to be enabled.
2003-11-21 Hans de Graaff
* checkbot (printServerProblems): Don't assume that status_message
is defined for all possible codes, based on patch by Thomas
Kuerten.
2003-10-18 Hans de Graaff
* Makefile.PL: Require LWP 5.70 because problems with HEAD of
ftp:// links have been solved in this release.
2003-09-05 Hans de Graaff
* checkbot (printServerProblems): Put line breaks in HTML file in
a more logical place.
2003-08-31 Hans de Graaff
* Checkbot 1.73 released
2003-08-30 Hans de Graaff
* checkbot (printServerProblems): Protect against undefined status.
2003-08-29 Hans de Graaff
* checkbot (handle_doc): Ignore URIs matching --ignore as they are
being found.
(handle_url): Remove check for --ignore option here.
Update documentation for --ignore.
(print_help): Idem.
2003-08-21 Hans de Graaff
* checkbot: Made --interval description a bit more clear.
2003-07-26 Hans de Graaff
* checkbot (init_modules): Uncomment proxy support, but it now
applies to all requests, not just external ones.
(print_help): Update --proxy help text.
Update perldoc documentation.
2003-07-05 Hans de Graaff
* checkbot: Additional explanation for --exclude option.
2003-06-28 Bernd Petrovitsch
* checkbot.css: Additional cleaning up of the CSS file.
2003-06-26 Bernd Petrovitsch
* checkbot: Produce valid XHTML 1.1 pages.
* checkbot.css: Clean up of the CSS file.
2003-05-04 Hans de Graaff
* Checkbot 1.72 released
* checkbot: Applied spelling fixes from Jens Schweikhardt.
(clean_up): Factored out of check_links so that it can also be
called when we catch a signal.
(got_signal): Catch signals like SIGINT and handle them, based on
patch by Jens Schweikhardt.
2003-04-06 Hans de Graaff
* checkbot (handle_url): No longer ignore URLs with a query
string. If checking these is not wanted then the --exclude option
can be used, and an example for this is now included in the
documentation.
2003-03-30 Hans de Graaff
* checkbot (printServerProblems): Add links to different error
codes on a server page for quick navigation.
2003-02-22 Paul Merchant, Jr.
* checkbot: Initialize the statistics counters to avoid warnings.
2003-01-15 Hans de Graaff
* checkbot (output): Correct the check for --verbose; not
specifying it now generates no output.
2003-01-06 Hans de Graaff
* checkbot (handle_doc): The host name check does not make much
sense for news: scheme URLs.
2003-01-03 Hans de Graaff
* checkbot (init_globals): Only remove file from default --match
argument when there is a path component in the start URL.
Initialize problem counter to avoid warning about uninitialized
value.
2002-12-29 Hans de Graaff
* Checkbot 1.71 released
* checkbot (handle_url): Make sure we feed is_internal a string.
(handle_url): Use existing variable instead of Referer header to
store parent URL.
* Checkbot 1.70 created for testing, but not released
* checkbot (performRequest): Add HTTP 403 error to list of error
codes to retry with a GET.
(handle_url): Only follow redirections for internal links.
2002-12-28 Hans de Graaff
* checkbot: Removed reference to AnyDBM_File because it is not
used anywhere.
Rewrote global statistics gathering to be more simple and more
accurate.
Added --filter option which allows rewriting of URLs before they
are checked, based on a patch from Eli the Bearded; a sketch of the
idea follows this entry.
Simplified storage of URLs with problems
(get_headers): Removed.
(performRequest): Included code from get_headers here.
(count_problems): Updated for new storage of URLs
(printServerProblems): Idem.
(handle_url): Idem.
(handle_doc): Idem.
(count_problems): Idem.
(printServerProblems): Idem.
(handle_doc): Add code to report all pages on which a problematic
URL appears.
(init_globals): Changed default --match argument to exclude final
page name.
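One way a --filter expression can be applied (a sketch; whether
Checkbot evaluates the expression with a string eval like this is an
assumption):

    my $filter = 's!^http://www\.!http://!';   # example expression
    my $url    = 'http://www.example.com/page.html';
    eval "\$url =~ $filter;";
    die "Bad --filter expression: $@" if $@;
    # $url is now http://example.com/page.html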
2002-12-27 Hans de Graaff
* checkbot (output): Moved printing, including indentation and
verbose checking, to function 'output'.
(handle_doc): No more distinction between internal and external
links, we throw all links found in the queue.
(handle_doc): Removed statistics for now, they are too buggy.
(is_checked): New function takes into account that we sometimes
translate hostnames to IP addresses.
(handle_doc): Use it.
(check_internal): Removed dependency on statistics, use actual
queue contents to determine when all links are checked.
(handle_url): Only query server for file type on
application/octet-stream documents.
(is_internal): New function to determine if URL is internal.
(handle_url): Rewritten to use new functions and to deal with
external URLs being mixed in, and generally cleaned up.
(handle_url): Moved --internal-only checks here.
(check_external): Removed.
(check_links): Renamed from check_internal.
Added small blurb to documentation on distinction between internal
and external links and the way checkbot checks these.
* t/test.t: Added simple test case: can checkbot be run without
arguments?
2002-12-25 Hans de Graaff
* Checkbot 1.69 released
2002-12-25 Hans de Graaff
* checkbot (get_headers): Make sure feedback on HEAD requests gets
indented properly.
2002-12-23 Hans de Graaff
* checkbot (init_globals): Anchor automatic match argument based
on start URLs at the beginning.
2002-12-16 Jens Schweikhardt
* checkbot (check_external): Fixed printf to be print so that
actual information can be printed using --verbose.
2002-12-02 Hans de Graaff
* checkbot (get_headers): Also add 406 as an error which might
indicate that the web server doesn't like us doing a HEAD, so GET
instead.
2002-12-01 Hans de Graaff
* Makefile.PL: Updated based on libwww-perl Makefile.PL.
* checkbot: Remove the preamble cruft and just assume perl will be
/usr/bin/perl. Therefore also renamed checkbot.pl -> checkbot.
Indicate that Checkbot is licensed under the same terms as Perl
itself.
* checkbot.pl (count_problems): Rewrote debugging code to handle
request without header() method, even though this should not be
possible it does happen in the wild.
(handle_doc): Perform fully-qualified hostname check for all URI's
which support a hostname.
2002-11-30 Hans de Graaff
* checkbot.pl (add_checked): Use ->can construct to check if URL
supports host method.
2002-10-27 Hans de Graaff
* checkbot.pl: Add hints for recursive or run-away checkbot
processes.
2002-09-28 Hans de Graaff
* Checkbot 1.68 released
2002-08-05 Hans de Graaff
* checkbot.pl (handle_doc): Comment out warning about external
URLs with non-checkable schemes to avoid lots of useless output.
2002-06-09 Jostle Lemcke
* checkbot.pl: Added --allow-simple-hosts option. This option
turns off the warnings for unqualified host names.
2002-04-01 Hans de Graaff
* checkbot.pl (handle_doc): Ignore URLs found in
tags. Suggestion from Roman Maeder.
2002-03-31 Hans de Graaff
* checkbot.pl (print_help): Mention --style option in help message.
(check_internal): Always close CURRENT filehandle, and add warn
for potential problems with this based on patch and report from
Greg Larkin.
* checkbot.pl: Added HINTS AND TIPS section to
documentation. Added hint on using passive FTP based on feedback
from Roman Maeder.
2002-03-31 Brent Verner
* checkbot.pl (handle_doc): Only match http and https, not stuff
like httpa.
2002-03-31 Paco Hope
* checkbot.css: Contributed style sheet for Checkbot. Use with
--style option.
2002-01-20 Roman Maeder
* checkbot.pl (handle_url): Use select() to sleep instead of
sleep() so that sleep interval can be fractional.
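The trick here is that the four-argument form of select() accepts a
fractional timeout, which the built-in sleep() does not:

    select(undef, undef, undef, 0.25);   # pause for 250 milliseconds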
2001-12-16 Hans de Graaff
* Checkbot 1.67 released
2001-11-16 Hans de Graaff
* checkbot.pl: Add example for --match argument based on question
by Michael Lambert.
2001-11-11 Hans de Graaff
* checkbot.pl (count_problems): Quote meta characters in server
name and URL when matching them.
(handle_doc): Fix two minor bugs related to the move to URI.
2001-11-11 Evaldas Imbrasas
* checkbot.pl: Add --language option to allow language
negotiation.
* checkbot.pl (check_options): Set default for --sleep option to 0.
* checkbot.pl (check_internal): Only close if it already
exists.
2001-11-03 Hans de Graaff
* checkbot.pl (printServerProblems): There might not be a response
message.
(handle_url): Use status_line instead of code and message for
HTTP::Response object.
(handle_doc): Also check external gopher links.
2001-10-25 Hans de Graaff
* Checkbot 1.66 released
* checkbot.pl (get_headers): URI doesn't know about netloc, but it
does know about authority.
(get_headers): $url is already absolute, no need for ->abs
2001-10-18 Hans de Graaff
* Checkbot 1.65 released
2001-10-14 Hans de Graaff
* checkbot.pl (handle_doc): Print a notice when external non
HTTP/FTP URLs are dropped.
2001-09-29 Hans de Graaff
* checkbot.pl (init_modules and other places): Remove
URI::URL::strict call and use of new URI::URL because it is
obsolete, we should use the URI classes now.
2001-09-23 Hans de Graaff
* checkbot.pl (init_globals): Initialize last checkpoint time with
0 instead of current time, so that we write out a set of pages
right at the start. This will catch problems with permissions for
these pages as early as possible.
2001-07-01 Hans de Graaff
* checkbot.pl (get_server_type): Take into account that we might
not learn anything about the server
2001-05-06 Hans de Graaff
* checkbot.pl (get_headers): Factored out of check_external so
that moving to using GET requests only will be easier later.
2001-04-30 Hans de Graaff
* checkbot.pl (send_mail): Really fix printing of starting URLs in
email. All URLs are now printed in the subject and body of the
message.
2001-04-15 Hans de Graaff
* Checkbot 1.64 released
2001-03-13 Hans de Graaff
* checkbot.pl (send_mail): Fix printing of starting URL in email.
2001-03-04 Nick Hibma
* checkbot.pl (printServerWarnings): Removed duplicate print statement.
2001-02-10 Boris Lantrewitz
* checkbot.pl (init_globals): Allow more environment variables to
be used to set the temporary directory.
(send_mail): Avoid using printf to the handle for those systems
where printf on a pipe is not implemented.
2001-01-14 Hans de Graaff
* Checkbot 1.63 released
2001-01-02 Hans de Graaff
* Makefile.PL (chk_version): Require LWP 5.50, which contains an
important bugfix when dealing with relative redirects.
2001-01-01 Hans de Graaff
* checkbot.pl (init_globals): If no --match is given, construct
one based on all the start URLs given. Suggested by Mathieu
Guillaume.
2000-12-31 Hans de Graaff
* checkbot.pl (create_page): Remove the .bak file when the new
file is written, unless --debug is in effect.
2000-12-31 OBARA Kiyotake
* checkbot.pl (print_server): Create correct URLs when --file
argument contains directories as well as a filename.
2000-12-31 David Brownlee
* checkbot.pl (create_page): Fix typo in die message.
2000-12-24 Hans de Graaff
* checkbot.pl: Added a small blurb in the documentation about the
URLs Checkbot will find and check.
2000-12-24 Petter Reinholdtsen
* checkbot.pl (handle_url): Deal with redirect responses without
Location header.
2000-11-18 Roman Maeder
* checkbot.pl (handle_url): Remove check which would not check
files named the same as the main report file. If you don't want
Checkbot to check its intermediate pages, use the --exclude
option.
* checkbot.pl (handle_url): Ask server for file type when
requesting http:// URLs to be on the safe side, as using
guess_media_type() is not always correct.
2000-10-28 Nick Hibma
* checkbot.pl (check_external): Only print when --verbose is true.
(printServerProblems): Fix proper printing of .
(handle_doc): Include proper URL for report for unqualified URLs.
2000-10-01 TAKAKU Masao
* checkbot.pl (print_server): Make pages well-formed by inserting
and tags.
2000-09-24 Hans de Graaff
* Checkbot 1.62 released
2000-09-16 Hans de Graaff
* checkbot.pl (send_mail): Only mention URL in the subject of the
mail if one is given through the --url option.
(check_external): The ALEPH web server is also broken with respect
to HEAD requests.
2000-09-04 Hans de Graaff
* checkbot.pl (check_external): JavaWebServer is also broken with
respect to HEAD requests.
2000-08-26 Hans de Graaff
* checkbot.pl (create_page): Add --style option which allows a
link to a CSS file to be included in each Checkbot page.
2000-08-20 Nick Hibma
* checkbot.pl (check_external): Some servers don't set the Server:
header. Check to see if the server field is set in a response to
avoid warnings.
* checkbot.pl (add_checked): Add --enable-virtual option to use
hostname instead of IP address to distinguish servers. This allows
checking of multiple virtual servers.
2000-08-13 Hans de Graaff
* Makefile.PL: Add a check for HTML::Parser. Require latest
version, 3.10, because I'm not sure older versions work correctly.
2000-06-29 Hans de Graaff
* Checkbot 1.61 released
* Makefile.PL (chk_version): Add version checked for in output.
2000-06-18 Larry Gilbert
* checkbot.pl (check_external): Use GET instead of HEAD for
confused closed-source servers.
2000-06-18 Hans de Graaff
* Makefile.PL (chk_version): require URI 1.07 as it contains bug
fixes for using Base URLs.
* checkbot.pl: Change email and web address
2000-04-30 Hans de Graaff
* Checkbot 1.60 released
* checkbot.pl (check_options): Add option --dontwarn to exclude
certain types of warnings. Based on idea by David Hoekman.
2000-04-29 Mark Roedel
* checkbot.pl (handle_url): Deal with "300 Multiple Choices"
response which does not offer a URL to redirect to.
2000-04-09 David Hoekman
* checkbot.pl (init_globals): Allow for TMPDIR with or without
trailing /
2000-04-08 Hans de Graaff
* checkbot.pl: Updated contact information in file header.
2000-03-26 Hans de Graaff
* checkbot.pl (check_options): Add message about skipping of
external links. Also removes warning about single use of variable.
2000-03-06 Brian McNett
* checkbot.pl: On a Mac, ask command line options
through MacPerl mechanism.
2000-02-06 Hans de Graaff
* checkbot.pl (init_globals): Check whether URLs on the command
line have a proper host. Thanks to Charles Williams for the
report.
2000-01-30 Hans de Graaff
* Checkbot 1.59 released
* checkbot.pl (handle_doc): Use eof instead of parse(undef) to end
parsing.
2000-01-15 Hans de Graaff
* checkbot.pl (handle_doc): Show warnings about hostnames only on
the console when --verbose.
2000-01-09 Hans de Graaff
* checkbot.pl: Added option --internal-only to skip checking of
external links altogether. Idea by David Hoekman
2000-01-02 Hans de Graaff
* checkbot.pl (handle_doc): Use canonical URI from LinkExtor,
which simplifies the rest of the logic and gets things working
with the new version of LinkExtor.
2000-01-01 Stephane Bortzmeyer
* checkbot.pl (init_globals): Create Checkbot workdir in $TMPDIR
if defined, /tmp otherwise.
1999-12-31 Hans de Graaff
* checkbot.pl (handle_doc): Change frag to fragment.
1999-11-07 Hans de Graaff
* checkbot.pl (handle_doc): Add warning for URLs for which LWP
can't determine a hostname, and don't check them further.
1999-10-24 Hans de Graaff
* checkbot.pl (print_help): Added line on --interval option.
1999-10-23 Hans de Graaff
* checkbot.pl (init_globals): Fixed proper determination of server
prefix if a filename is supplied, thanks to Michael Baumer.
1999-10-02 Hans de Graaff
* checkbot.pl (init_modules): Added use URI.
1999-08-21 Hans de Graaff
* Makefile.PL (chk_version): Added check for URI.
1999-07-17 Hans de Graaff
* README: Added blurb on the announcements mailing list.
1999-07-06 Hans de Graaff
* checkbot.pl (add_checked): Deal with the fact that a mailto: URL
has no host component. Thanks to John Croft for the report.
1999-06-27 Hans de Graaff
* checkbot.pl (handle_url): Really fix relative redirection URLs
using the URI class. Thanks to Thomas Zander for the report and
reproducible failing URL.
1999-05-03 Hans de Graaff
* checkbot.pl (printServerWarnings): Also change clustering of URLs.
1999-05-02 Hans de Graaff
* checkbot.pl (signature): Add quotes around the URL in the
signature.
(printServerProblems): Fixed clustering of URLs so that faulty
links are listed under the URL that contains them, instead of the
other way around. This ordering problem was introduced in 1.53.
1999-04-10 Hans de Graaff
* checkbot.pl (handle_url): Make sure a redirected URL is fully
qualified (based on the original URL) to avoid dying on it
later. Thanks to David Hoekman for the initial analysis.
1999-04-05 Hans de Graaff
* checkbot.pl (printAllServers): Taken out of create_page for
clarity.
(printServerWarnings): Keep warning headers from being printed for
each warning.
1999-03-15 Hans de Graaff
* README: Explain which Perl modules are needed.
1999-02-20 Hans de Graaff
* checkbot.pl (printServerWarnings): Fix printing of warnings so
that headers are only printed once.
(print_server): get correct IP address for web servers with
non-standard port numbers.
1999-02-08 Hans de Graaff
* Makefile.PL (chk_version): Added location of Mail::Send.
1999-01-18 Hans de Graaff
* checkbot.pl (count_problems): Change counting of problems to
deal with new structure.
1999-01-17 Hans de Graaff
* checkbot.pl (printServerProblems): Changed to accommodate the new
inventory of problem responses. This new method allows multiple bad
links to one URL to all be reported at once. Also use
standardized response descriptions, based on a patch by Benjamin
Franz.
1999-01-10 Hans de Graaff
* checkbot.pl (byReferringPage): Added to allow sorting of
problems by referer.
(byProblem): Removed code to compare by exact message and
referer.
Removed the preamble to generate the correct perl path because it is
a bit too cumbersome during development.
1998-12-31 Hans de Graaff
* checkbot.pl (handle_url): Do a HEAD request when the guessed
content-type matches application/octet-stream to get the real
content-type from the server.
1998-12-27 Hans de Graaff
* checkbot.pl (handle_doc): Added warning for HTTP URLs without a
fully-qualified hostname.
* checkbot.pl (printServerWarnings): Added a mechanism to also
display checkbot warnings, unrelated to the HTTP responses, on the
results pages.
1998-10-24 Hans de Graaff
* checkbot.pl (setup): Explicitly set record separator $/
This appears needed for perl 5.005, and fixes a problem
where no URLs would appear to match except the first few.
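The fix itself is a one-liner; setting the input record separator
explicitly guarantees line-at-a-time reads whatever the environment
left it set to:

    $/ = "\n";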
1998-10-10 Hans de Graaff
* checkbot.pl: Made POD conform to new scripts format better.
1998-06-21 Hans de Graaff
* checkbot.pl (init_modules): HTML::Parse is no longer needed,
removed.
Sat Sep 6 16:00:12 1997 Hans de Graaff
* checkbot 1.51 released
Sat Aug 30 18:05:39 1997 Hans de Graaff
* checkbot.pl (init_globals): assume file: scheme when no scheme
is present.
* checkbot.pl: Small portability stuff for perl 5.004 and LWP 5.11.
Sun Aug 17 08:56:38 1997 Hans de Graaff
* README: Changed email addresses to point to new ISP.
Mon Apr 28 09:08:29 1997 Hans de Graaff
* checkbot.pl: Parsing VERSION is somewhat tricky. Fixed.
Sun Apr 27 21:02:58 1997 Hans de Graaff
* checkbot.pl (check_external): Close EXTERNAL after use.
Sun Apr 20 10:24:09 1997 Hans de Graaff
* checkbot.pl: Fixed a number of small bugs reported by Jost Krieger.
Regular expressions can now be used with the options.
Added --interval option to denote maximum interval between updates.
Sat Apr 5 17:03:46 1997 Hans de Graaff
* checkbot.pl (init_globals): Added checks for URLs without a scheme.
Fri Mar 14 11:17:21 1997 Hans de Graaff
* checkbot.pl (print_help): Fix typo.
Tue Jan 14 16:51:36 1997 Hans de Graaff
* checkbot.pl (check_internal): Check whether there are really
entries in the new queue when changing queues.
Sat Jan 4 14:26:04 1997 Hans de Graaff
* checkbot.pl (print_help): --seconds should be --sleep in help.
Mon Dec 30 12:03:14 1996 Hans de Graaff
* checkbot.pl (handle_url): If a URL is exclude'd, only use HEAD
on it, not GET.
Starting URLs can now be entered on the command line in addition
to the --url option. --url takes precedence. --match is
initialized with first URL if not given as separate option.
Mon Dec 23 20:21:32 1996 Hans de Graaff
* checkbot.pl (print_server_problems): Each error message was
evaluated as a regexp, potentially crashing checkbot on a bad
regexp (e.g. including the string '++').
Mon Dec 23 15:15:05 1996 Hans de Graaff
* checkbot.pl (ip_address): Deal with IP-address not found.
Sun Dec 8 12:55:33 1996 Hans de Graaff
* checkbot.pl (send_mail): --note didn't work; Checkbot would
crash when no external links were found.
Wed Dec 4 12:43:14 1996 Hans de Graaff
* checkbot.pl (add_checked): All checked URLs are indexed using
IP-address to avoid checking pages multiple times for multiple
CNAME's.
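A rough sketch of keying the checked-URL bookkeeping by IP address so
that several CNAMEs of one host collapse into a single entry
(%checked and the fallback behaviour are assumptions):

    use Socket qw(inet_ntoa);

    my $packed = gethostbyname($host);    # scalar context: packed IP
    my $key = defined $packed ? inet_ntoa($packed) : $host;
    $checked{$key}++;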
Mon Nov 4 14:19:30 1996 Hans de Graaff
* checkbot.pl (send_mail): Braino in URL fixed.
Sun Oct 27 20:16:38 1996 Hans de Graaff
* checkbot.pl (init_globals): Don't let --match default to the
--url until after we possibly change the URL (this happens for
file:/ URLs, currently)
Wed Oct 23 14:22:15 1996 Hans de Graaff
* checkbot.pl (check_point): Oops, checking would occur every minute
Mon Oct 21 13:41:48 1996 Hans de Graaff
* checkbot.pl (print_help): Added version number to help info.
Wed Oct 16 21:05:58 1996 Hans de Graaff
* checkbot.pl: Added --proxy option for checking external links
through a proxy server
Sat Sep 28 09:26:48 1996 Hans de Graaff
* checkbot.pl (init_globals): Changed /var/tmp to /tmp.
(check_point): Slower exponential rate, upper limit of 3 hours
* Makefile.PL: Added check for Mail::Send
* README: Added
Thu Sep 26 17:01:36 1996 Hans de Graaff
* checkbot.pl: Switched from short options to long options.
I was already running out of meaningful options, so before adding
additional stuff I wanted to move to Long options first. You
should be able to abbreviate most options to the previous values.
Notable exception is -m, which has become --match.
Wed Sep 25 10:58:06 1996 Hans de Graaff
* checkbot.pl:
Renamed from checkbot
Added preamble to set proper path for perl (code from Gisle Aas)
* Makefile.PL: First version, installs checkbot and checkbot.1
* checkbot: Changed $revision to $VERSION for MakeMaker.
Thu Sep 12 15:09:07 1996 Hans de Graaff
* index.html: updated required modules and location.
* checkbot: require LWP-5.02, because it fixes a few nasty bugs.
Thu Sep 5 16:00:42 1996 Hans de Graaff
* index.html:
Removed old and out-of-date documentation. Replaced by link to
automatically generated html version of POD documentation
within Checkbot.
* checkbot:
Fixed documentation bugs.
Really fix the case insensitive comparison.
Sun Sep 1 20:31:46 1996 Hans de Graaff
* checkbot (print_server_problems):
Make comparison for error message case insensitive.
Fri Aug 30 20:19:56 1996 Hans de Graaff
* checkbot: Fixed several typos.
Wed Aug 7 10:06:29 1996 Hans de Graaff
* checkbot (handle_doc):
The new LinkExtractor is nice, but I shouldn't treat its output as
a hash when it is an array, and thus skipping every other link.
Mon Aug 5 08:46:24 1996 Hans de Graaff
* checkbot (print_server):
Fixed silly bug in calculating the percentage of problems on each
server.
Fri Aug 2 21:38:39 1996 Hans de Graaff
* checkbot: Added several patches by Bruce Speyer:
Added -N note option to go along with -M, -z to suppress reporting
errors on matching links.
Added enough logic to catch gopher URLs if no gopher server found.
Need further logic to parse gopher returned menu for bad file or
directory.
* checkbot: Made a good start with POD documentation inside the
checkbot file. Try 'perldoc checkbot'.
* TODO: Added number of suggestions by Luuk de Boer.
* checkbot (send_mail): Include summary of links checked in message.
Fri Aug 2 13:01:02 1996 Hans de Graaff
* checkbot:
Added check for correct LWP version. We now need 5.01, due to bugs
in the handling of the BASE attribute in previous versions.
Sat Jul 27 21:13:26 1996 Hans de Graaff
* checkbot:
Added several patches by Bruce Speyer:
Optimized some static regular expressions.
Fixed not setting the timeout, making the -t option useless.
Mon Jul 22 22:28:34 1996 Hans de Graaff
* checkbot (create_page):
Fixed number of columns in summary output.
Sat Jul 20 11:49:23 1996 Hans de Graaff
* checkbot (handle_doc): Changed to use the new HTML::LinkExtor,
which will be present in LWP 5.01. Should be more efficient, and
less prone to memory leaks.
Sat Jul 13 12:41:23 1996 Hans de Graaff
* checkbot (create_page): Forgot to add the ratio on the page.
(check_external): Fix problems with different `wc` output.
Sat Jun 22 11:30:12 1996 Hans de Graaff
* checkbot: Use correct base URL as returned with the document.
Only check document when we used 'GET' to receive it.
Remove magic guessing with ending slash of starting url.
Deal with redirections by inserting redirected URLs into queue
again.
Thu Jun 20 15:58:20 1996 Hans de Graaff
* checkbot: Major cleanup of initialization code. Also added todo
counts to progression page, and proper todo handling for external
links.
Sun Jun 16 21:16:28 1996 Hans de Graaff
* checkbot: Added -M option: send mail when Checkbot is done.
Fixed division by zero bug when external links == 0
Tue Jun 4 12:46:39 1996 Hans de Graaff
* checkbot: Better way to ignore fragments.
Sat Jun 1 15:14:52 1996 Hans de Graaff
* checkbot: Don't print decimals with the percentages.
Major update of counting, and printing counts. Cleaned up
variables, corrected counting, made display more consistent and
clear.
Wed May 29 21:18:26 1996 Hans de Graaff
* checkbot: Small fixes to support lwp-win32 as well, thanks to
Martin Cleaver.
Mon May 27 09:21:30 1996 Hans de Graaff
* checkbot: oops, small error in regexp caused script to append a
slash to almost all start URLs. Fixed.
* checkbot (handle_doc): External links without full URL's were
not always handled properly.
Sun May 26 10:04:39 1996 Hans de Graaff
* checkbot: If the starting URL doesn't end in a slash, and
doesn't have an extension, assume we need to add a slash.
* index.html: Add version number to web page, and make sure it gets
updated automatically.
Wed May 22 09:58:36 1996 Hans de Graaff
* checkbot: Changed verbose output of links found on pages.
Tue May 14 16:43:38 1996 Hans de Graaff
* TODO: updated with respect to recent changes.
Mon May 13 15:06:05 1996 Hans de Graaff
* checkbot: Added LWP version number to agent field, changed page
update policy, and updated script to LWP5b13.
Sat May 4 21:38:56 1996 Hans de Graaff
* checkbot: Changed checked array to an associative array. Will
consume more memory, but drastically cut back on lookup time.
Rewrote handle_url logic to be more clear. Also fixed bug where
servers would be added to the list unjustly.
Sleep was only done on problem links, not after each request.
Also added checks for already checked links while scanning through
the document, and only add those links not checked to the queue.
Add percentage problem links for each individual server.
Mon Apr 29 08:43:12 1996 Hans de Graaff
* checkbot: Deal with unknown or non-determinable server types.
Only add links to the external queue when we know we can check
their protocol.
Additional changes to layout and content of pages.
Sun Apr 28 21:16:51 1996 Hans de Graaff
* checkbot: Rewrote report page.
Wed Apr 24 22:39:43 1996 Hans de Graaff
* checkbot: Added a number of patches by Tim MacKenzie
Added -s option to set the seconds of sleep between requests.
Remove work files when *not* debugging.
Only compile -m and -x regular expressions once.
Also check external ftp and nntp links (using HEAD only).
Get rid of huge memory leak! (Also noted by Fabrice Gaillard)
Fri Mar 29 10:58:24 1996 Hans de Graaff
* checkbot:
Got rid of warnings about some variables.
Fixed problem with incorrect automatic -m argument when scanning
local files.
Sun Mar 24 18:01:05 1996 Hans de Graaff
* checkbot:
Added code to support regular expressions with the -m and -x
arguments. Thanks to Thomas Thiel for the patch and suggestions.
No strict checking on schemes, fixes problem with unknown schemes
stopping checkbot. Thanks to Pierre-Yves Foucou.
* checkbot:
Should create directory for temporary files, and remove it
afterwards. Noted by Steve Fisk.
Sat Mar 16 13:40:48 1996 Hans de Graaff
* checkbot:
Made a number of changes from or based on patches by Thomas Thiel:
Added missing t option in Getopts string.
Made -m argument optional. If not given, the -u argument is also
used as the start argument.
Temporary files are now created in a separate directory. Its name
contains the PID of Checkbot, to allow several concurrent
Checkbots being run. Also remove temporary files, unless
debugging.
Implement file:// scheme to allow direct checking (without HTTP
server)
Fri Mar 15 11:06:13 1996 Hans de Graaff
* checkbot:
Fixed warnings (and in the process, a small bug as well).
Added URL and proper name to help.
Sat Mar 2 11:51:45 1996 Hans de Graaff
* checkbot:
Added 'require 5.002' (because libwww-perl5b8 requires it).
Added 'use strict', and fixed problems resulting from this. This
can be seen as a first step towards fixing the huge
memory-consumption.
Updated help.
Tue Feb 27 09:57:57 1996 Hans de Graaff
* checkbot:
Fixed bug which occurred when -x option was not present.
Updated script to use libwww-perl5b8 function names. This is not
backward compatible with versions prior to beta 8.
Mon Feb 26 12:46:08 1996 Hans de Graaff
* checkbot:
Fixed bug with Referer header for external URL's.
Also make server pages auto-refresh.
Sat Feb 24 11:48:15 1996 Hans de Graaff
* TODO: New file.
* checkbot: Added single -x option as an additional exclude pattern.
This overrules the -m match attribute.
Mon Dec 11 14:13:30 1995 Hans de Graaff
* index.html
Added libwww-perl5 address, and added a usage section.
* checkbot.pl
Removed this old perl4 version.
Fri Dec 8 13:41:43 1995 Hans de Graaff
* checkbot:
Major rewrite of most of the internal routines. The routines are
much more structured now, and broken up into smaller routines.
I also changed the way checked links are remembered. It should be
much less efficient, CPU-wise, but more efficient memory-wise.
Fri Nov 24 16:45:18 1995 Hans de Graaff
* checkbot:
Fixed small problems, mostly with output.
Fixed checking of external links
Changed sorting order
* checkbot:
Perl5 version now works for the most part. Although Checkbot isn't
fully finished I at least feel confident to release it.
Fri Aug 25 11:23:36 1995 Hans de Graaff
* Made a start with the perl5 version of checkbot. The modules in
perl5 (e.g. LWP) look very promising, and should make checkbot
quite a bit better.