Parse-MediaWikiDump-1.0.6/000755 000765 000024 00000000000 11476760033 015570 5ustar00tylerstaff000000 000000 Parse-MediaWikiDump-1.0.6/Changes000644 000765 000024 00000014133 11476554061 017067 0ustar00tylerstaff000000 000000 Revision history for Parse-MediaWikiDump 1.0.6 Dec 04, 2010 * Fix for bug #58196 - error "not a MediaWiki link dunp file" due to absence of 'LOCK TABLES ...' line in link dump file? * Added option to pass input to ::Pages constructor via named parameters so that MediaWiki::DumpFile::Compat and ::Pages share the same API * Software is nearly retired, only need more testing on MediaWiki::DumpFile::Compat; Please begin using MediaWiki::DumpFile::Compat instead of this package and report problems if you encounter them. 1.0.5 Apr 21, 2010 * Updated docs pointing people to MediaWiki::DumpFile::Compat 1.0.4 Jan 04, 2010 * Fixed bug #53361: Incorrectly assigned usernames with anon edits * Added support for getting access to IP of anonymous edits 1.0.3 Nov 21, 2009 * Fixed bug #51607 "Build failed CPAN smoke test for for i686pc solaris" by specifying minimum version numbers for all dependencies in Makefile.PL. 1.0.2 Nov 15, 2009 * Fixed bug #51461 "Warnings During Build" - now the test process squelches the harmless warning from Test::Memory::Cycle. 1.0.1 Nov 11, 2009 * CPAN indexer did not like previous version number 1.0.0 Nov 11, 2009 * Stable status achieved * Slight speed tweak on character handler for XML parser * Added dependency on Devel::Cycle 1.11 as 1.10 causes a false positive to be thrown on the memory leak test 0.98 Oct 28, 2009 * Bumped processing speed back up * Fixed possible infinite loop scenario * Ordered tests * Added test to find circular references 0.97 Oct 23, 2009 * Fixed all known memory leaks * No more Object::Destroyer * Cleaned out some old cruft 0.96 Oct 22, 2009 * Allowed parsing of 0.4 version XML dump files but not support for new features * Added in a method to retrieve the version number of the XML dump file 0.95 Oct 14, 2009 * Found and removed a circular reference but it did not stop the memory leak * Fixed bug 50092 - some times $page->text would return a reference to an undefined value * Implemented support for compressed file GLOB objects per bug 50241 0.94 Sep 28, 2009 * Fix bug 49979 - "redirect in newer Wikipedia dumps" by allowing unknown tag names to exist 0.93 Sep 15, 2009 * Made ::Pages a subclass of ::Revisions * Discovered a bug regression: ::Pages and ::Revisions leak memory/are not properly garbage collected 0.92 Apr 15, 2009 * Completed documentation for all modules * Added test for backwards compatibility to the pre-factory Parse::MediaWikiDump interface 0.91 May 13, 2009 * Updated documentation to more explicitly list what kind of dump files each parser object can deal with. * Added dependency on perl 5.8.8 for :utf8 compatibility. * Split up lib/ into multiple files. * Fix for bug #46054 - using categories method of Parse::MediaWikiDump::page object causes script to crash. 0.90 May 07, 2009 * Implemented new parsing engine and called it Parse::MediaWikiDump::Revisions. Soon it will be replacing Parse::MediaWikiDump::Pages as a base engine. It is fully backwards compatible so please feel free to test it in your existing utilities and report success and failure to the author. * Moved namespace logic into Parse::MediaWikiDump::page and updated Parse::MediaWikiDump::Pages to support it. 
0.51 May 31, 2008 * Fix for bug 36255 "Parse::MediaWikiDump::page::namespace may return a string which is not really a namespace" provided by Amir E. Aharoni. * Moved test data into t/ and moved speed_test.pl into examples/ * Exceedingly complicated functions (parse_head() and parse_page()) are not funny. Added some comments on how to rectify that situation. * Tightened up the tests a little bit. 0.50 Jun 27, 2006 * Added category links parser. * Removed all instances of shift() from the code. 0.40 Jun 21, 2006 * Increased processing speed by around 40%! Thank you Andrew Rodland. 0.33 Jun 18, 2006 * Added current_byte and size methods to page dumper. 0.32 Feb 25, 2006 * Added a line to create a package named Parse::MediaWikiDump so the module will get listed on CPAN search and the cpan command line tool. 0.31 Jan 10, 2006 * Fix bug 16981 - Parse::MediaWikiDump::page->redirect does not work with redirects that have a : in them. * Fix bug 16981 part two: title with a non-breaking space in it would come through as undefined. 0.30 December 23, 2005 * the Pages and Links class now both use a method named next() to get the next record. The old link() and page() methods are retained for now but should be migrated away from as soon as is convenient. * Added list of dump files that this module can process to the README file. 0.24 December 19, 2005 * Fixed bug #16616 - the category method only works properly on English language dump files. 0.23 December 19, 2005 * Fixed email address for author. * Fixed omission of namespace method for pages objects in the documentation. * Added limitations section to README. * Fixed http://rt.cpan.org bug #16583 - Module dies when parsing the 20051211 German Wikipedia dump. * Added some comments to the source code. 0.22 September 15, 2005 * Created some new and more comprehensive examples. * Parse::MediaWikiDump::Pages now dies with a specific error if it is asked to parse a comprehensive (full pages) dump file. * Updated Parse::MediaWikiDump::Links to new dump file format. * Added tests for Parse::MediaWikiDump::Links. * Solved a bug: Expat's current_byte method returns a 32 bit signed integer and the english Wikipedia dumps cause the number to wrap; implemented a work around for this. 0.21 September 10, 2005 * Improve testing of Parse::MediaWikiDump::Pages * Fix silly bug related to opening file handle references * Found new bug: The links dump format has been changed and the existing code can not parse the new format * Found new bug: comprehensive dump files like 20050909_pages_full.xml.gz cause the stack to grow too large and the module to abort early. 0.2 September 9, 2005 * Add tests and test data 0.1 September 6, 2005 * First version, released on an unsuspecting world. 
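  Note on the named-parameter constructor change in 1.0.6 above: both call
  styles below are intended to behave identically, with the named form
  matching the MediaWiki::DumpFile::Compat API (the dump file name is a
  placeholder):

    use Parse::MediaWikiDump;

    #positional argument, as in all prior releases
    my $pages = Parse::MediaWikiDump::Pages->new('pages-articles.xml');

    #named parameter, shared with MediaWiki::DumpFile::Compat
    $pages = Parse::MediaWikiDump::Pages->new(input => 'pages-articles.xml');
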
Parse-MediaWikiDump-1.0.6/examples/000755 000765 000024 00000000000 11476760033 017406 5ustar00tylerstaff000000 000000 Parse-MediaWikiDump-1.0.6/lib/000755 000765 000024 00000000000 11476760033 016336 5ustar00tylerstaff000000 000000 Parse-MediaWikiDump-1.0.6/Makefile.PL000644 000765 000024 00000001372 11476537747 017563 0ustar00tylerstaff000000 000000 use strict; use warnings; use ExtUtils::MakeMaker; WriteMakefile( NAME => 'Parse::MediaWikiDump', AUTHOR => 'Tyler Riddle ', VERSION_FROM => 'lib/Parse/MediaWikiDump.pm', ABSTRACT_FROM => 'lib/Parse/MediaWikiDump.pm', PL_FILES => {}, PREREQ_PM => { 'PadWalker' => '1.9', 'Devel::Cycle' => '1.11', 'Test::Memory::Cycle' => '1.04', 'Test::More' => '0.94', 'Test::Exception' => '0.27', 'Test::Warn' => '0.21', 'XML::Parser' => '2.36', 'List::Util' => '1.21', 'Scalar::Util' => '1.21', }, dist => { COMPRESS => 'gzip -9f', SUFFIX => 'gz', }, clean => { FILES => 'Parse-MediaWikiDump-*' }, ); Parse-MediaWikiDump-1.0.6/MANIFEST000644 000765 000024 00000001203 11476537747 016733 0ustar00tylerstaff000000 000000 Changes TODO MANIFEST META.yml # Will be created by "make dist" Makefile.PL README examples/speed_test t/00-load.t t/00-load.t t/30-links-compat.t t/30-links.t t/30-pages.t t/30-revisions.t t/40-pages-single-revision-only.t t/40-pre-factory.t t/70-memory-cycle.t t/links_test.sql t/memory-leak.off t/pages_test.xml t/revisions_test.xml lib/Parse/MediaWikiDump/category_link.pm lib/Parse/MediaWikiDump/CategoryLinks.pm lib/Parse/MediaWikiDump/link.pm lib/Parse/MediaWikiDump/Links.pm lib/Parse/MediaWikiDump/page.pm lib/Parse/MediaWikiDump/Pages.pm lib/Parse/MediaWikiDump/Revisions.pm lib/Parse/MediaWikiDump/XML.pm lib/Parse/MediaWikiDump.pm Parse-MediaWikiDump-1.0.6/META.yml000644 000765 000024 00000001467 11476760033 017051 0ustar00tylerstaff000000 000000 --- #YAML:1.0 name: Parse-MediaWikiDump version: 1.0.6 abstract: Tools to process MediaWiki dump files author: - Tyler Riddle license: unknown distribution_type: module configure_requires: ExtUtils::MakeMaker: 0 build_requires: ExtUtils::MakeMaker: 0 requires: Devel::Cycle: 1.11 List::Util: 1.21 PadWalker: 1.9 Scalar::Util: 1.21 Test::Exception: 0.27 Test::Memory::Cycle: 1.04 Test::More: 0.94 Test::Warn: 0.21 XML::Parser: 2.36 no_index: directory: - t - inc generated_by: ExtUtils::MakeMaker version 6.56 meta-spec: url: http://module-build.sourceforge.net/META-spec-v1.4.html version: 1.4 Parse-MediaWikiDump-1.0.6/README000644 000765 000024 00000003756 11476537747 016501 0ustar00tylerstaff000000 000000 Parse-MediaWikiDump Parse::MediaWikiDump is a collection of classes for processing various MediaWiki dump files such as those at http://download.wikimedia.org/wikipedia/en/; the package requires XML::Parser. Using this software it is nearly trivial to get access to the information in supported dump files. 
Currently the following dump files are supported: * Current page dumps for all languages * Current links dumps for all languages INSTALLATION To install this module, run the following commands: perl Makefile.PL make make test make install EXAMPLE Extract the text for a given article from the given dump file: #!/usr/bin/perl use strict; use warnings; use Parse::MediaWikiDump; my $file = shift(@ARGV) or die "must specify a MediaWiki dump of the current pages"; my $title = shift(@ARGV) or die "must specify an article title"; my $dump = Parse::MediaWikiDump::Pages->new($file); binmode(STDOUT, ':utf8'); binmode(STDERR, ':utf8'); #this is the only currently known value but there could be more in the future if ($dump->case ne 'first-letter') { die "unable to handle any case setting besides 'first-letter'"; } #enforce the MediaWiki case rules $title = case_fixer($title); #iterate over the entire dump file, article by article while(my $page = $dump->next) { if ($page->title eq $title) { print STDERR "Located text for $title\n"; my $text = $page->text; print $$text; exit 0; } } print STDERR "Unable to find article text for $title\n"; exit 1; #removes any case sensativity from the very first letter of the title #but not from the optional namespace name sub case_fixer { my $title = shift; #check for namespace if ($title =~ /^(.+?):(.+)/) { $title = $1 . ':' . ucfirst($2); } else { $title = ucfirst($title); } return $title; } COPYRIGHT & LICENSE Copyright 2005 Tyler Riddle, all rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Parse-MediaWikiDump-1.0.6/t/000755 000765 000024 00000000000 11476760033 016033 5ustar00tylerstaff000000 000000 Parse-MediaWikiDump-1.0.6/TODO000644 000765 000024 00000000000 11476537747 016264 0ustar00tylerstaff000000 000000 Parse-MediaWikiDump-1.0.6/t/00-load.t000644 000765 000024 00000000250 11476537747 017367 0ustar00tylerstaff000000 000000 #!perl use Test::More tests => 1; BEGIN { use_ok( 'Parse::MediaWikiDump' ); } diag( "Testing Parse::MediaWikiDump $Parse::MediaWikiDump::VERSION, Perl $], $^X" ); Parse-MediaWikiDump-1.0.6/t/30-links-compat.t000644 000765 000024 00000000615 11476537747 021061 0ustar00tylerstaff000000 000000 #!perl use Test::Simple tests => 4; use strict; use warnings; use Parse::MediaWikiDump; my $file = 't/links_test.sql'; my $links = Parse::MediaWikiDump->links($file); my $sum; my $last_link; while(my $link = $links->link) { $sum += $link->from; $last_link = $link; } ok($sum == 92288); ok($last_link->from == 7759); ok($last_link->to eq 'Recentchanges'); ok($last_link->namespace == -1); Parse-MediaWikiDump-1.0.6/t/30-links.t000644 000765 000024 00000000614 11476537747 017577 0ustar00tylerstaff000000 000000 #!perl use Test::Simple tests =>4; use strict; use warnings; use Parse::MediaWikiDump; my $file = 't/links_test.sql'; my $links = Parse::MediaWikiDump->links($file); my $sum; my $last_link; while(my $link = $links->next) { $sum += $link->from; $last_link = $link; } ok($sum == 92288); ok($last_link->from == 7759); ok($last_link->to eq 'Recentchanges'); ok($last_link->namespace == -1); Parse-MediaWikiDump-1.0.6/t/30-pages.t000644 000765 000024 00000005220 11476537747 017554 0ustar00tylerstaff000000 000000 #!perl -w use Test::Simple tests => 108; use strict; use Parse::MediaWikiDump; use Data::Dumper; my $file = 't/pages_test.xml'; my $fh; my $pages; my $mode; $mode = 'file'; test_all($file); open($fh, $file) or die "could not open $file: $!"; $mode = 'handle'; test_all($fh); sub 
test_all { $pages = Parse::MediaWikiDump->pages(shift); test_one(); test_two(); test_three(); test_four(); test_five(); ok(! defined($pages->next)); } sub test_one { ok($pages->sitename eq 'Sitename Test Value'); ok($pages->base eq 'Base Test Value'); ok($pages->generator eq 'Generator Test Value'); ok($pages->case eq 'Case Test Value'); ok($pages->namespaces->[0]->[0] == -2); ok($pages->namespaces_names->[0] eq 'Media'); ok($pages->current_byte != 0); ok($pages->version eq '0.3'); if ($mode eq 'file') { ok($pages->size == 3100); } elsif ($mode eq 'handle') { ok(! defined($pages->size)) } else { die "invalid test mode"; } my $page = $pages->next; my $text = $page->text; ok(defined($page)); ok($page->title eq 'Talk:Title Test Value'); ok($page->id == 1); ok($page->revision_id == 47084); ok($page->username eq 'Username Test Value'); ok($page->userid == 1292); ok($page->timestamp eq '2005-07-09T18:41:10Z'); ok($page->userid == 1292); ok($page->minor); ok($$text eq "Text Test Value\n"); ok($page->namespace eq 'Talk'); ok(! defined($page->redirect)); ok(! defined($page->categories)); } sub test_two { my $page = $pages->next; my $text = $page->text; ok($page->title eq 'Title Test Value #2'); ok($page->id == 2); ok($page->revision_id eq '47085'); ok($page->username eq 'Username Test Value 2'); ok($page->timestamp eq '2005-07-09T18:41:10Z'); ok($page->userid == 1292); ok($page->minor); ok($$text eq "#redirect : [[fooooo]]\n"); ok($page->namespace eq ''); ok($page->redirect eq 'fooooo'); ok(! defined($page->categories)); } sub test_three { my $page = $pages->next; ok(defined($page)); ok($page->redirect); ok($page->title eq 'Title Test Value #3'); ok($page->id == 3); ok($page->timestamp eq '2005-07-09T18:41:10Z'); ok($page->username eq 'Username Test Value'); ok($page->userid == 1292); } sub test_four { my $page = $pages->next; ok(defined($page)); ok($page->id == 4); ok($page->timestamp eq '2005-07-09T18:41:10Z'); ok($page->username eq 'Username Test Value'); ok($page->userid == 1292); #test for bug 36255 ok($page->namespace eq ''); ok($page->title eq 'NotANameSpace:Bar'); } sub test_five { my $page = $pages->next; ok(defined($page)); ok($page->id == 5); ok($page->title eq 'Moar Tests'); ok(! defined($page->username)); ok(! defined($page->userid)); ok($page->userip eq '62.104.212.74'); } Parse-MediaWikiDump-1.0.6/t/30-revisions.t000644 000765 000024 00000006316 11476537747 020505 0ustar00tylerstaff000000 000000 #!perl -w use Test::Simple tests => 114; use strict; use Parse::MediaWikiDump; use Data::Dumper; my $file = 't/revisions_test.xml'; my $fh; my $revisions; my $mode; $mode = 'file'; test_all($file); open($fh, $file) or die "could not open $file: $!"; $mode = 'handle'; test_all($fh); sub test_all { $revisions = Parse::MediaWikiDump->revisions(shift); test_siteinfo(); test_one(); test_two(); test_three(); test_four(); test_five(); test_six(); ok(! defined($revisions->next)); } sub test_siteinfo { ok($revisions->sitename eq 'Sitename Test Value'); ok($revisions->base eq 'Base Test Value'); ok($revisions->generator eq 'Generator Test Value'); ok($revisions->case eq 'Case Test Value'); ok($revisions->namespaces->[0]->[0] == -2); ok($revisions->namespaces_names->[0] eq 'Media'); ok($revisions->current_byte != 0); ok($revisions->version eq '0.3'); if ($mode eq 'file') { ok($revisions->size == 3570); } elsif ($mode eq 'handle') { ok(! 
defined($revisions->size)); } else { die "invalid test mode"; } } #the first two tests check everything to make sure information #is not leaking across pages due to accumulator errors. sub test_one { my $page = $revisions->next; my $text = $page->text; ok(defined($page)); ok($page->title eq 'Talk:Title Test Value'); ok($page->id == 1); ok($page->revision_id == 47084); ok($page->username eq 'Username Test Value 1'); ok($page->userid == 1292); ok($page->timestamp eq '2005-07-09T18:41:10Z'); ok($page->userid == 1292); ok($page->minor); ok($$text eq "Text Test Value 1\n"); ok($page->namespace eq 'Talk'); ok(! defined($page->redirect)); ok(! defined($page->categories)); } sub test_two { my $page = $revisions->next; my $text = $page->text; ok($page->title eq 'Title Test Value #2'); ok($page->id == 2); ok($page->revision_id eq '47085'); ok($page->username eq 'Username Test Value 2'); ok($page->timestamp eq '2006-07-09T18:41:10Z'); ok($page->userid == 12); ok($page->minor); ok($$text eq "#redirect : [[fooooo]]"); ok($page->namespace eq ''); ok($page->redirect eq 'fooooo'); ok(! defined($page->categories)); } sub test_three { my $page = $revisions->next; my $text = $page->text; ok(defined($page)); ok($page->redirect eq 'fooooo'); ok($page->title eq 'Title Test Value #2'); ok($page->id == 2); ok($page->timestamp eq '2005-07-09T18:41:10Z'); ok($page->username eq 'Username Test Value'); ok($page->userid == 1292); ok(! $page->minor); } sub test_four { my $page = $revisions->next; my $text = $page->text; ok(defined($page)); ok($page->id == 4); ok($page->timestamp eq '2005-07-09T18:41:10Z'); ok($page->username eq 'Username Test Value'); ok($page->userid == 1292); #test for bug 36255 ok($page->namespace eq ''); ok($page->title eq 'NotANameSpace:Bar'); } #test for Bug 50092 sub test_five { my $page = $revisions->next; ok($page->title eq 'Bug 50092 Test'); ok(defined(${$page->text})); } #test for bug 53361 sub test_six { my $page = $revisions->next; ok($page->title eq 'Test for bug 53361'); ok($page->username eq 'Ben-Zin'); ok(! defined($page->userip)); $page = $revisions->next; ok($page->title eq 'Test for bug 53361'); ok($page->userip eq '62.104.212.74'); ok(! 
defined($page->username)); } Parse-MediaWikiDump-1.0.6/t/40-pages-single-revision-only.t000644 000765 000024 00000000517 11476537747 023653 0ustar00tylerstaff000000 000000 #!perl -w use strict; use warnings; use Test::Exception tests => 1; use Parse::MediaWikiDump; my $file = 't/revisions_test.xml'; throws_ok { test() } qr/^only one revision per page is allowed$/, 'one revision per article ok'; sub test { my $pages = Parse::MediaWikiDump->pages($file); while(defined($pages->next)) { }; }; Parse-MediaWikiDump-1.0.6/t/40-pre-factory.t000644 000765 000024 00000000425 11476537747 020713 0ustar00tylerstaff000000 000000 use Test::Simple tests => 3; use strict; use Parse::MediaWikiDump; ok(defined(Parse::MediaWikiDump::Pages->new('t/pages_test.xml'))); ok(defined(Parse::MediaWikiDump::Revisions->new('t/revisions_test.xml'))); ok(defined(Parse::MediaWikiDump::Links->new('t/links_test.sql')));Parse-MediaWikiDump-1.0.6/t/70-memory-cycle.t000644 000765 000024 00000001043 11476537747 021065 0ustar00tylerstaff000000 000000 use strict; use warnings; use Test::Memory::Cycle tests => 15; use Test::Warn; use Parse::MediaWikiDump; my $pages = Parse::MediaWikiDump->pages('t/pages_test.xml'); my $revisions = Parse::MediaWikiDump->revisions('t/revisions_test.xml'); #for bug 51461 warnings_exist { memory_cycle_ok($pages); while(defined(my $page = $pages->next)) { memory_cycle_ok($page); } memory_cycle_ok($revisions); while(defined(my $revision = $revisions->next)) { memory_cycle_ok($revision); } } [qr/^Unhandled type: GLOB/], "Unhandled type: GLOB";Parse-MediaWikiDump-1.0.6/t/links_test.sql000644 000765 000024 00000002220 11476537747 020745 0ustar00tylerstaff000000 000000 -- MySQL dump 9.11 -- -- Host: benet Database: simplewiki -- ------------------------------------------------------ -- Server version 4.0.22-log -- -- Table structure for table `pagelinks` -- DROP TABLE IF EXISTS `pagelinks`; CREATE TABLE `pagelinks` ( `pl_from` int(8) unsigned NOT NULL default '0', `pl_namespace` int(11) NOT NULL default '0', `pl_title` varchar(255) binary NOT NULL default '', UNIQUE KEY `pl_from` (`pl_from`,`pl_namespace`,`pl_title`), KEY `pl_namespace` (`pl_namespace`,`pl_title`) ) TYPE=InnoDB; -- -- Dumping data for table `pagelinks` -- /*!40000 ALTER TABLE `pagelinks` DISABLE KEYS */; LOCK TABLES `pagelinks` WRITE; INSERT INTO `pagelinks` VALUES (7759,-1,'Recentchanges'),(4016,0,'\"Captain\"_Lou_Albano'),(7491,0,'\"Captain\"_Lou_Albano'),(9935,0,'\"Dimebag\"_Darrell'),(7617,0,'\"Hawkeye\"_Pierce'),(1495,0,'$1'),(1495,0,'$2'),(4901,0,'\',_art_title,_\''),(4376,0,'\'Abd_Al-Rahman_Al_Sufi'),(12418,0,'\'Allo_\'Allo!'),(4045,0,'\'Newton\'s_cradle\'_toy'),(4045,0,'\'Push-and-go\'_toy_car'),(7794,0,'\'Salem\'s_Lot'),(4670,0,'(2340_Hathor'),(1876,0,'(Mt.'),(4400,0,'(c)Brain'),(3955,0,'...Baby_One_More_Time_(single)'); Parse-MediaWikiDump-1.0.6/t/memory-leak.off000644 000765 000024 00000000622 11476537747 020767 0ustar00tylerstaff000000 000000 #!perl use strict; use warnings; use Parse::MediaWikiDump; use Devel::Cycle; my $dump1 = Parse::MediaWikiDump->revisions('t/pages_test.xml'); my $NUM_TESTS = 10000; my $i = 0; find_cycle($dump1); #exit 1; while ($i++ < $NUM_TESTS) { my $dump = Parse::MediaWikiDump->pages('t/pages_test.xml'); while($dump->next) { } $dump = Parse::MediaWikiDump->pages('t/pages_test.xml'); $dump->next; } Parse-MediaWikiDump-1.0.6/t/pages_test.xml000644 000765 000024 00000006034 11476537747 020734 0ustar00tylerstaff000000 000000 Sitename Test Value Base Test Value Generator Test Value Case Test Value Media 
Special Talk User User talk Wikipedia Wikipedia talk Image Image talk MediaWiki MediaWiki talk Template Template talk Help Help talk Category Category talk Talk:Title Test Value 1 47084 2005-07-09T18:41:10Z Username Test Value1292 Comment Test Value Text Test Value Title Test Value #2 2 47085 2005-07-09T18:41:10Z Username Test Value 21292 Comment Test Value #redirect : [[fooooo]] Title Test Value #3 3 47086 2005-07-09T18:41:10Z Username Test Value1292 Comment Test Value #redirect [[fooooo]] NotANameSpace:Bar 4 47088 2005-07-09T18:41:10Z Username Test Value1292 Comment Test Value test for bug #36255 - Parse::MediaWikiDump::page::namespace may return a string which is not really a namespace Moar Tests 5 38847 2002-10-31T14:53:37Z 62.104.212.74 Parse-MediaWikiDump-1.0.6/t/revisions_test.xml000644 000765 000024 00000006762 11476537747 021666 0ustar00tylerstaff000000 000000 Sitename Test Value Base Test Value Generator Test Value Case Test Value Media Special Talk User User talk Wikipedia Wikipedia talk Image Image talk MediaWiki MediaWiki talk Template Template talk Help Help talk Category Category talk Talk:Title Test Value 1 47084 2005-07-09T18:41:10Z Username Test Value 11292 Comment Test Value 1 Text Test Value 1 Title Test Value #2 2 47085 2006-07-09T18:41:10Z Username Test Value 212 Comment Test Value 2 #redirect : [[fooooo]] 47086 2005-07-09T18:41:10Z Username Test Value1292 Comment Test Value #redirect [[fooooo]] NotANameSpace:Bar 4 47088 2005-07-09T18:41:10Z Username Test Value1292 Comment Test Value test for bug #36255 - Parse::MediaWikiDump::page::namespace may return a string which is not really a namespace Bug 50092 Test 5 47089 2005-07-09T18:41:10Z Username Test Value1292 Comment Test Value Test for bug 53361 145 38841 2002-09-08T22:15:32Z Ben-Zin 9 en: 38847 2002-10-31T14:53:37Z 62.104.212.74 Parse-MediaWikiDump-1.0.6/lib/Parse/000755 000765 000024 00000000000 11476760033 017410 5ustar00tylerstaff000000 000000 Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/000755 000765 000024 00000000000 11476760033 022101 5ustar00tylerstaff000000 000000 Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump.pm000644 000765 000024 00000014703 11476757476 022465 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump; our $VERSION = '1.0.6'; use Parse::MediaWikiDump::XML; use Parse::MediaWikiDump::Revisions; use Parse::MediaWikiDump::Pages; use Parse::MediaWikiDump::page; use Parse::MediaWikiDump::Links; use Parse::MediaWikiDump::link; use Parse::MediaWikiDump::CategoryLinks; use Parse::MediaWikiDump::category_link; #the POD is at the end of this file sub new { my ($class) = @_; return bless({}, $class); } sub pages { shift(@_); return Parse::MediaWikiDump::Pages->new(@_); } sub revisions { shift(@_); return Parse::MediaWikiDump::Revisions->new(@_); } sub links { shift(@_); return Parse::MediaWikiDump::Links->new(@_); } #just a place holder for something that might be used in the future #package Parse::MediaWikiDump::ExternalLinks; # #use strict; #use warnings; # #sub new { # my ($class, $source) = @_; # my $self = {}; # # $$self{BUFFER} = []; # $$self{BYTE} = 0; # # bless($self, $class); # # $self->open($source); # $self->init; # # return $self; #} # #sub next { # my ($self) = @_; # my $buffer = $$self{BUFFER}; # my $link; # # while(1) { # if (defined($link = pop(@$buffer))) { # last; # } # # #signals end of input # return undef unless $self->parse_more; # } # # return Parse::MediaWikiDump::external_link->new($link); #} # ##private functions with OO interface #sub parse_more { # my ($self) = 
@_; # my $source = $$self{SOURCE}; # my $need_data = 1; # # while($need_data) { # my $line = <$source>; # # last unless defined($line); # # $$self{BYTE} += length($line); # # while($line =~ m/\((\d+),'(.*?)','(.*?)'\)[;,]/g) { # push(@{$$self{BUFFER}}, [$1, $2, $3]); # $need_data = 0; # } # } # # #if we still need data and we are here it means we ran out of input # if ($need_data) { # return 0; # } # # return 1; #} # #sub open { # my ($self, $source) = @_; # # if (ref($source) ne 'GLOB') { # die "could not open $source: $!" unless # open($$self{SOURCE}, $source); # # $$self{SOURCE_FILE} = $source; # } else { # $$self{SOURCE} = $source; # } # # binmode($$self{SOURCE}, ':utf8'); # # return 1; #} # #sub init { # my ($self) = @_; # my $source = $$self{SOURCE}; # my $found = 0; # # while(<$source>) { # if (m/^LOCK TABLES `externallinks` WRITE;/) { # $found = 1; # last; # } # } # # die "not a MediaWiki link dump file" unless $found; #} # #sub current_byte { # my ($self) = @_; # # return $$self{BYTE}; #} # #sub size { # my ($self) = @_; # # return undef unless defined $$self{SOURCE_FILE}; # # my @stat = stat($$self{SOURCE_FILE}); # # return $stat[7]; #} # #package Parse::MediaWikiDump::external_link; # ##you must pass in a fully populated link array reference #sub new { # my ($class, $self) = @_; # # bless($self, $class); # # return $self; #} # #sub from { # my ($self) = @_; # return $$self[0]; #} # #sub to { # my ($self) = @_; # return $$self[1]; #} # #sub index { # my ($self) = @_; # return $$self[2]; #} # #sub timestamp { # my ($self) = @_; # return $$self[3]; # 1; __END__ =head1 NAME Parse::MediaWikiDump - Tools to process MediaWiki dump files =head1 SYNOPSIS use Parse::MediaWikiDump; $pmwd = Parse::MediaWikiDump->new; $pages = $pmwd->pages('pages-articles.xml'); $revisions = $pmwd->revisions('pages-articles.xml'); $links = $pmwd->links('links.sql'); =head1 DESCRIPTION This software suite provides the tools needed to process the contents of the XML page dump files and the SQL based links dump file. =head1 STATUS This software is being RETIRED - MediaWiki::DumpFile is the official successor to Parse::MediaWikiDump and includes a compatibility library called MediaWiki::DumpFile::Compat that is 100% API compatible and is a near perfect standin for this module. It is faster in all instances where it counts and is actively maintained. Any undocumented deviation of MediaWiki::DumpFile::Compat from Parse::MediaWikiDump is considered a bug and will be fixed. =head2 Migration Please begin using MediaWiki::DumpFile::Compat immediately as a replacement for this module. There will be no more features added to this software suite and bugs may not be fixed. Parse::MediaWikiDump::Pages used to check the version of the dump file it is parsing and reject versions it does not know about; this behavior has been removed. The parser will now continue in this instance and hope for the best. This way this software will continue to run into the future with out requiring further adjustment for as long as the upstream fileformat remains compatible. In the event there is an unfixable bug or the dump file format changes in an incompatible way the Parse::MediaWikiDump module as a whole wil be replaced with a stub that brings in MediaWiki::DumpFile::Compat - this may never need to happen but it is the plan for when it does. Migrating on your terms instead of being forced to if this happens is suggested. =head1 USAGE This module is a factory class that allows you to create instances of the individual parser objects. 
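For example, a typical program creates the factory object once and then asks it for whichever parser fits the dump file at hand (the file name below is a placeholder):

    use Parse::MediaWikiDump;

    my $pmwd  = Parse::MediaWikiDump->new;
    my $pages = $pmwd->pages('pages-articles.xml');

    while(defined(my $page = $pages->next)) {
        print $page->title, "\n";
    }
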
=over 4 =item $pmwd->pages Returns a Parse::MediaWikiDump::Pages object capable of parsing an article XML dump file with one revision per each article. =item $pmwd->revisions Returns a Parse::MediaWikiDump::Revisions object capable of parsing an article XML dump file with multiple revisions per each article. =item $pmwd->links Returns a Parse::MediaWikiDump::Links object capable of parsing an article links SQL dump file. =back =head2 General All parser creation invocations require a location of source data to parse; this argument can be either a filename or a reference to an already open filehandle. This entire software suite will die() upon errors in the file or if internal inconsistencies have been detected. If this concerns you then you can wrap the portion of your code that uses these calls with eval(). =head1 AUTHOR This module was created, documented, and is maintained by Tyler Riddle Etriddle@gmail.comE. Fix for bug 36255 "Parse::MediaWikiDump::page::namespace may return a string which is not really a namespace" provided by Amir E. Aharoni. =head1 BUGS Please report any bugs or feature requests to C, or through the web interface at L. I will be notified, and then you'll automatically be notified of progress on your bug as I make changes. =head2 Known Bugs No known bugs at this time. =head1 COPYRIGHT & LICENSE Copyright 2005 Tyler Riddle, all rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/category_link.pm000644 000765 000024 00000000643 11476537747 025312 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump::category_link; our $VERSION = '1.0.3'; #you must pass in a fully populated link array reference sub new { my ($class, $self) = @_; bless($self, $class); return $self; } sub from { my ($self) = @_; return $$self[0]; } sub to { my ($self) = @_; return $$self[1]; } sub sortkey { my ($self) = @_; return $$self[2]; } sub timestamp { my ($self) = @_; return $$self[3]; } 1;Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/CategoryLinks.pm000644 000765 000024 00000003315 11476537747 025235 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump::CategoryLinks; our $VERSION = '1.0.3'; use strict; use warnings; sub new { my ($class, $source) = @_; my $self = {}; $$self{BUFFER} = []; $$self{BYTE} = 0; bless($self, $class); $self->open($source); $self->init; return $self; } sub next { my ($self) = @_; my $buffer = $$self{BUFFER}; my $link; while(1) { if (defined($link = pop(@$buffer))) { last; } #signals end of input return undef unless $self->parse_more; } return Parse::MediaWikiDump::category_link->new($link); } #private functions with OO interface sub parse_more { my ($self) = @_; my $source = $$self{SOURCE}; my $need_data = 1; while($need_data) { my $line = <$source>; last unless defined($line); $$self{BYTE} += length($line); while($line =~ m/\((\d+),'(.*?)','(.*?)',(\d+)\)[;,]/g) { push(@{$$self{BUFFER}}, [$1, $2, $3, $4]); $need_data = 0; } } #if we still need data and we are here it means we ran out of input if ($need_data) { return 0; } return 1; } sub open { my ($self, $source) = @_; if (ref($source) ne 'GLOB') { die "could not open $source: $!" 
unless open($$self{SOURCE}, $source); $$self{SOURCE_FILE} = $source; } else { $$self{SOURCE} = $source; } binmode($$self{SOURCE}, ':utf8'); return 1; } sub init { my ($self) = @_; my $source = $$self{SOURCE}; my $found = 0; while(<$source>) { if (m/^LOCK TABLES `categorylinks` WRITE;/) { $found = 1; last; } } die "not a MediaWiki link dump file" unless $found; } sub current_byte { my ($self) = @_; return $$self{BYTE}; } sub size { my ($self) = @_; return undef unless defined $$self{SOURCE_FILE}; my @stat = stat($$self{SOURCE_FILE}); return $stat[7]; } 1;Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/link.pm000644 000765 000024 00000002532 11476553727 023410 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump::link; our $VERSION = '1.0.3'; #you must pass in a fully populated link array reference sub new { my ($class, $self) = @_; bless($self, $class); return $self; } sub from { my ($self) = @_; return $$self[0]; } sub namespace { my ($self) = @_; return $$self[1]; } sub to { my ($self) = @_; return $$self[2]; } 1; =head1 NAME Parse::MediaWikiDump::link - Object representing a link from one article to another =head1 ABOUT This object is used to access the data associated with each individual link between articles in a MediaWiki instance. =head1 STATUS This software is being RETIRED - MediaWiki::DumpFile is the official successor to Parse::MediaWikiDump and includes a compatibility library called MediaWiki::DumpFile::Compat that is 100% API compatible and is a near perfect standin for this module. It is faster in all instances where it counts and is actively maintained. Any undocumented deviation of MediaWiki::DumpFile::Compat from Parse::MediaWikiDump is considered a bug and will be fixed. =head1 METHODS =over 4 =item $link->from Returns the article id (not the name) that the link orginiates from. =item $link->namespace Returns the namespace id (not the name) that the link points to =item $link->to Returns the article title (not the id and not including the namespace) that the link points to Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/Links.pm000644 000765 000024 00000010405 11476553737 023532 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump::Links; #this needs to be fully replaced by MediaWiki::DumpFile::Compat #because it uses a much more correct SQL parser our $VERSION = '1.0.6'; use strict; use warnings; sub new { my ($class, $source) = @_; my $self = {}; $$self{BUFFER} = []; bless($self, $class); $self->open($source); #fix for bug 58196 #$self->init; return $self; } sub next { my ($self) = @_; my $buffer = $$self{BUFFER}; my $link; while(1) { if (defined($link = pop(@$buffer))) { last; } #signals end of input return undef unless $self->parse_more; } return Parse::MediaWikiDump::link->new($link); } #private functions with OO interface sub parse_more { my ($self) = @_; my $source = $$self{SOURCE}; my $need_data = 1; while($need_data) { my $line = <$source>; last unless defined($line); while($line =~ m/\((\d+),(-?\d+),'(.*?)'\)[;,]/g) { push(@{$$self{BUFFER}}, [$1, $2, $3]); $need_data = 0; } } #if we still need data and we are here it means we ran out of input if ($need_data) { return 0; } return 1; } sub open { my ($self, $source) = @_; if (ref($source) ne 'GLOB') { die "could not open $source: $!" 
unless open($$self{SOURCE}, $source); } else { $$self{SOURCE} = $source; } binmode($$self{SOURCE}, ':utf8'); return 1; } sub init { my ($self) = @_; my $source = $$self{SOURCE}; my $found = 0; while(<$source>) { if (m/^LOCK TABLES `pagelinks` WRITE;/) { $found = 1; last; } } die "not a MediaWiki link dump file" unless $found; } #depreciated backwards compatibility methods #replaced by next() sub link { my ($self) = @_; $self->next(@_); } 1; __END__ =head1 NAME Parse::MediaWikiDump::Links - Object capable of processing link dump files =head1 ABOUT This object is used to access content of the SQL based category dump files by providing an iterative interface for extracting the indidivual article links to the same. Objects returned are an instance of Parse::MediaWikiDump::link. =head1 SYNOPSIS $pmwd = Parse::MediaWikiDump->new; $links = $pmwd->links('pagelinks.sql'); $links = $pmwd->links(\*FILEHANDLE); #print the links between articles while(defined($link = $links->next)) { print 'from ', $link->from, ' to ', $link->namespace, ':', $link->to, "\n"; } =head1 STATUS This software is being RETIRED - MediaWiki::DumpFile is the official successor to Parse::MediaWikiDump and includes a compatibility library called MediaWiki::DumpFile::Compat that is 100% API compatible and is a near perfect standin for this module. It is faster in all instances where it counts and is actively maintained. Any undocumented deviation of MediaWiki::DumpFile::Compat from Parse::MediaWikiDump is considered a bug and will be fixed. =head1 METHODS =over 4 =item Parse::MediaWikiDump::Links->new Create a new instance of a page links dump file parser =item $links->next Return the next available Parse::MediaWikiDump::link object or undef if there is no more data left =back =head1 EXAMPLE =head2 List all links between articles in a friendly way #!/usr/bin/perl use strict; use warnings; use Parse::MediaWikiDump; my $pmwd = Parse::MediaWikiDump->new; my $links = $pmwd->links(shift) or die "must specify a pagelinks dump file"; my $dump = $pmwd->pages(shift) or die "must specify an article dump file"; my %id_to_namespace; my %id_to_pagename; binmode(STDOUT, ':utf8'); #build a map between namespace ids to namespace names foreach (@{$dump->namespaces}) { my $id = $_->[0]; my $name = $_->[1]; $id_to_namespace{$id} = $name; } #build a map between article ids and article titles while(my $page = $dump->next) { my $id = $page->id; my $title = $page->title; $id_to_pagename{$id} = $title; } $dump = undef; #cleanup since we don't need it anymore while(my $link = $links->next) { my $namespace = $link->namespace; my $from = $link->from; my $to = $link->to; my $namespace_name = $id_to_namespace{$namespace}; my $fully_qualified; my $from_name = $id_to_pagename{$from}; if ($namespace_name eq '') { #default namespace $fully_qualified = $to; } else { $fully_qualified = "$namespace_name:$to"; } print "Article \"$from_name\" links to \"$fully_qualified\"\n"; } Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/page.pm000644 000765 000024 00000011212 11476553753 023361 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump::page; our $VERSION = '1.0.4'; use strict; use warnings; use List::Util; sub new { my ($class, $data, $category_anchor, $case_setting, $namespaces) = @_; my $self = {}; bless($self, $class); $$self{DATA} = $data; $$self{CACHE} = {}; $$self{CATEGORY_ANCHOR} = $category_anchor; $$self{NAMESPACES} = $namespaces; return $self; } sub namespace { my ($self) = @_; my $title = $self->title; my $namespace = ''; return 
$$self{CACHE}{namespace} if defined $$self{CACHE}{namespace}; if ($title =~ m/^([^:]+):(.*)/) { foreach (@{ $self->{NAMESPACES} } ) { my ($num, $name) = @$_; if ($1 eq $name) { $namespace = $1; last; } } } $$self{CACHE}{namespace} = $namespace; return $namespace; } sub categories { my ($self) = @_; my $anchor = $$self{CATEGORY_ANCHOR}; return $$self{CACHE}{categories} if defined($$self{CACHE}{categories}); my $text = $$self{DATA}{text}; my @cats; while($text =~ m/\[\[$anchor:\s*([^\]]+)\]\]/gi) { my $buf = $1; #deal with the pipe trick $buf =~ s/\|.*$//; push(@cats, $buf); } return undef if scalar(@cats) == 0; $$self{CACHE}{categories} = \@cats; return \@cats; } sub redirect { my ($self) = @_; my $text = $$self{DATA}{text}; return $$self{CACHE}{redirect} if exists($$self{CACHE}{redirect}); if ($text =~ m/^#redirect\s*:?\s*\[\[([^\]]*)\]\]/i) { $$self{CACHE}{redirect} = $1; return $1; } else { $$self{CACHE}{redirect} = undef; return undef; } } sub title { my ($self) = @_; return $$self{DATA}{title}; } sub id { my ($self) = @_; return $$self{DATA}{id}; } sub revision_id { my ($self) = @_; return $$self{DATA}{revision_id}; } sub timestamp { my ($self) = @_; return $$self{DATA}{timestamp}; } sub username { my ($self) = @_; return $$self{DATA}{username}; } sub userid { my ($self) = @_; return $$self{DATA}{userid}; } sub userip { my ($self) = @_; return $$self{DATA}{userip}; } sub minor { my ($self) = @_; return $$self{DATA}{minor}; } sub text { my ($self) = @_; return \$$self{DATA}{text}; } 1; __END__ =head1 NAME Parse::MediaWikiDump::page - Object representing a specific revision of a MediaWiki page =head1 ABOUT This object is returned from the "next" method of Parse::MediaWikiDump::Pages and Parse::MediaWikiDump::Revisions. You most likely will not be creating instances of this particular object yourself instead you use this object to access the information about a page in a MediaWiki instance. =head1 SYNOPSIS $pages = Parse::MediaWikiDump::Pages->new('pages-articles.xml'); #get all the records from the dump files, one record at a time while(defined($page = $pages->next)) { print "title '", $page->title, "' id ", $page->id, "\n"; } =head1 STATUS This software is being RETIRED - MediaWiki::DumpFile is the official successor to Parse::MediaWikiDump and includes a compatibility library called MediaWiki::DumpFile::Compat that is 100% API compatible and is a near perfect standin for this module. It is faster in all instances where it counts and is actively maintained. Any undocumented deviation of MediaWiki::DumpFile::Compat from Parse::MediaWikiDump is considered a bug and will be fixed. =head1 METHODS =over 4 =item $page->redirect Returns an empty string (such as '') for the main namespace or a string containing the name of the namespace. =item $page->categories Returns a reference to an array that contains a list of categories or undef if there are no categories. This method does not understand templates and may not return all the categories the article actually belongs in. 
=item $page->title Returns a string of the full article title including the namespace if present =item $page->namespace Returns a string of the namespace of the article or an empty string if the article is in the default namespace =item $page->id Returns a number that is the id for the page in the MediaWiki instance =item $page->revision_id Returns a number that is the revision id for the page in the MediaWiki instance =item $page->timestamp Returns a string in the following format: 2005-07-09T18:41:10Z =item $page->username Returns a string of the username responsible for this specific revision of the article or undef if the editor was anonymous =item $page->userid Returns a number that is the id for the user returned by $page->username or undef if the editor was anonymous =item $page->userip Returns a string of the IP of the editor if the edit was anonymous or undef otherwise =item $page->minor Returns 1 if this article was flaged as a minor edit otherwise returns 0 =item $page->text Returns a reference to a string that contains the article title text =back Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/Pages.pm000644 000765 000024 00000021207 11476553764 023513 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump::Pages; our $VERSION = '1.0.4'; use base qw(Parse::MediaWikiDump::Revisions); use strict; use warnings; use Scalar::Util qw(weaken); #the only difference between this class and ::Revisions #is that this class enforces a single revision per each #page node sub new_accumulator_engine { my ($self) = @_; weaken($self); my $f = Parse::MediaWikiDump::XML::Accumulator->new; my $store_siteinfo = $self->{SITEINFO}; my $store_page = $self->{PAGE_LIST}; my $root = $f->root; my $mediawiki = $f->node('mediawiki', Start => \&handle_mediawiki_node); #stuff for siteinfo my $siteinfo = $f->node('siteinfo', End => sub { %$store_siteinfo = %{ $_[1] } } ); my $sitename = $f->textcapture('sitename'); my $base = $f->textcapture('base'); my $generator = $f->textcapture('generator'); my $case = $f->textcapture('case'); my $namespaces = $f->node('namespaces', Start => sub { $_[1]->{namespaces} = []; } ); my $namespace = $f->node('namespace', Character => \&save_namespace_node); #stuff for page entries my $page = $f->node('page', Start => sub { $_[0]->accumulator( {} ) } ); my $title = $f->textcapture('title'); my $id = $f->textcapture('id'); my $revision = $f->node('revision', Start => sub { $_[1]->{minor} = 0 }, End => sub { if (defined($_[1]->{seen_revision})) { $self->{DIE_REQUESTED} = "only one revision per page is allowed"; } $_[1]->{seen_revision} = 1; push(@$store_page, { %{ $_[1] } } ); } ); my $rev_id = $f->textcapture('id', 'revision_id'); my $minor = $f->node('minor', Start => sub { $_[1]->{minor} = 1 } ); my $time = $f->textcapture('timestamp'); my $contributor = $f->node('contributor'); my $username = $f->textcapture('username'); my $ip = $f->textcapture('ip', 'userip'); my $contrib_id = $f->textcapture('id', 'userid'); my $comment = $f->textcapture('comment'); my $text = $f->textcapture('text'); my $restr = $f->textcapture('restrictions'); #put together the tree $siteinfo->add_child($sitename, $base, $generator, $case, $namespaces); $namespaces->add_child($namespace); $page->add_child($title, $id, $revision, $restr); $revision->add_child($rev_id, $time, $contributor, $minor, $comment, $text); $contributor->add_child($username, $ip, $contrib_id); $mediawiki->add_child($siteinfo, $page); $root->add_child($mediawiki); my $engine = $f->engine($root, {}); return $engine; } sub 
handle_mediawiki_node { return Parse::MediaWikiDump::Revisions::handle_mediawiki_node(@_); } sub save_namespace_node { return Parse::MediaWikiDump::Revisions::save_namespace_node(@_); } 1; __END__ =head1 NAME Parse::MediaWikiDump::Pages - Object capable of processing dump files with a single revision per article =head1 ABOUT This object is used to access the metadata associated with a MediaWiki instance and provide an iterative interface for extracting the individual articles out of the same. This module does not allow more than one revision for each specific article; to parse a comprehensive dump file use the Parse::MediaWikiDump::Revisions object. =head1 SYNOPSIS $pmwd = Parse::MediaWikiDump->new; $pages = $pmwd->pages('pages-articles.xml'); $pages = $pmwd->pages(\*FILEHANDLE); #print the title and id of each article inside the dump file while(defined($page = $pages->next)) { print "title '", $page->title, "' id ", $page->id, "\n"; } =head1 STATUS =head1 STATUS This software is being RETIRED - MediaWiki::DumpFile is the official successor to Parse::MediaWikiDump and includes a compatibility library called MediaWiki::DumpFile::Compat that is 100% API compatible and is a near perfect standin for this module. It is faster in all instances where it counts and is actively maintained. Any undocumented deviation of MediaWiki::DumpFile::Compat from Parse::MediaWikiDump is considered a bug and will be fixed. =head1 METHODS =over 4 =item $pages->new Open the specified MediaWiki dump file. If the single argument to this method is a string it will be used as the path to the file to open. If the argument is a reference to a filehandle the contents will be read from the filehandle as specified. =item $pages->next Returns an instance of the next available Parse::MediaWikiDump::page object or returns undef if there are no more articles left. =item $pages->version Returns a plain text string of the dump file format revision number =item $pages->sitename Returns a plain text string that is the name of the MediaWiki instance. =item $pages->base Returns the URL to the instances main article in the form of a string. =item $pages->generator Returns a string containing 'MediaWiki' and a version number of the instance that dumped this file. Example: 'MediaWiki 1.14alpha' =item $pages->case Returns a string describing the case sensitivity configured in the instance. =item $pages->namespaces Returns a reference to an array of references. Each reference is to another array with the first item being the unique identifier of the namespace and the second element containing a string that is the name of the namespace. =item $pages->namespaces_names Returns an array reference the array contains strings of all the namespaces each as an element. =item $pages->current_byte Returns the number of bytes that has been processed so far =item $pages->size Returns the total size of the dump file in bytes. 
=back =head2 Scan an article dump file for double redirects that exist in the most recent article revision #!/usr/bin/perl #progress information goes to STDERR, a list of double redirects found #goes to STDOUT binmode(STDOUT, ":utf8"); binmode(STDERR, ":utf8"); use strict; use warnings; use Parse::MediaWikiDump; my $file = shift(@ARGV); my $pmwd = Parse::MediaWikiDump->new; my $pages; my $page; my %redirs; my $artcount = 0; my $file_size; my $start = time; if (defined($file)) { $file_size = (stat($file))[7]; $pages = $pmwd->pages($file); } else { print STDERR "No file specified, using standard input\n"; $pages = $pmwd->pages(\*STDIN); } #the case of the first letter of titles is ignored - force this option #because the other values of the case setting are unknown die 'this program only supports the first-letter case setting' unless $pages->case eq 'first-letter'; print STDERR "Analyzing articles:\n"; while(defined($page = $pages->next)) { update_ui() if ++$artcount % 500 == 0; #main namespace only next unless $page->namespace eq ''; next unless defined($page->redirect); my $title = case_fixer($page->title); #create a list of redirects indexed by their original name $redirs{$title} = case_fixer($page->redirect); } my $redir_count = scalar(keys(%redirs)); print STDERR "done; searching $redir_count redirects:\n"; my $count = 0; #if a redirect location is also a key to the index we have a double redirect foreach my $key (keys(%redirs)) { my $redirect = $redirs{$key}; if (defined($redirs{$redirect})) { print "$key\n"; $count++; } } print STDERR "discovered $count double redirects\n"; #removes any case sensativity from the very first letter of the title #but not from the optional namespace name sub case_fixer { my $title = shift; #check for namespace if ($title =~ /^(.+?):(.+)/) { $title = $1 . ':' . ucfirst($2); } else { $title = ucfirst($title); } return $title; } sub pretty_bytes { my $bytes = shift; my $pretty = int($bytes) . ' bytes'; if (($bytes = $bytes / 1024) > 1) { $pretty = int($bytes) . ' kilobytes'; } if (($bytes = $bytes / 1024) > 1) { $pretty = sprintf("%0.2f", $bytes) . ' megabytes'; } if (($bytes = $bytes / 1024) > 1) { $pretty = sprintf("%0.4f", $bytes) . ' gigabytes'; } return $pretty; } sub pretty_number { my $number = reverse(shift); $number =~ s/(...)/$1,/g; $number = reverse($number); $number =~ s/^,//; return $number; } sub update_ui { my $seconds = time - $start; my $bytes = $pages->current_byte; print STDERR " ", pretty_number($artcount), " articles; "; print STDERR pretty_bytes($bytes), " processed; "; if (defined($file_size)) { my $percent = int($bytes / $file_size * 100); print STDERR "$percent% completed\n"; } else { my $bytes_per_second = int($bytes / $seconds); print STDERR pretty_bytes($bytes_per_second), " per second\n"; } } =head1 LIMITATIONS =head2 Version 0.4 This class was updated to support version 0.4 dump files from a MediaWiki instance but it does not currently support any of the new information available in those files. Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/Revisions.pm000644 000765 000024 00000025635 11476553704 024440 0ustar00tylerstaff000000 000000 package Parse::MediaWikiDump::Revisions; our $VERSION = '1.0.4'; use 5.8.0; use strict; use warnings; use Carp; use List::Util; use Scalar::Util qw(weaken reftype); use Data::Dumper; sub DESTROY { my ($self) = @_; if (! 
$self->{FINISHED}) { $self->{EXPAT}->parse_done; } } #public methods sub new { my ($class, @args) = @_; my $self = {}; my $source; if (scalar(@args) == 0) { die "you must specify an argument to new()"; } elsif (scalar(@args) == 1) { $source = $args[0]; } else { my %conf = @args; if (! defined($conf{input})) { die "input is a required parameter to new()"; } $source = $conf{input}; } bless($self, $class); $$self{XML} = undef; #holder for XML::Accumulator $$self{EXPAT} = undef; #holder for expat under XML::Accumulator $$self{SITEINFO} = {}; #holder for the data from siteinfo $$self{PAGE_LIST} = []; #place to store articles as they come out of XML::Accumulator $$self{BYTE} = 0; $$self{CHUNK_SIZE} = 32768; $$self{FINISHED} = 0; $self->open($source); $self->init; return $self; } sub next { my ($self) = @_; my $case = $self->{SITEINFO}->{CASE}; my $namespaces = $self->{SITEINFO}->{namespaces}; my $page; #look for an available page and if one isn't #there then parse more XML while(1) { $page = shift(@{ $self->{PAGE_LIST} } ); if (defined($page)) { return Parse::MediaWikiDump::page->new($page, $self->get_category_anchor, $case, $namespaces); } return undef unless $self->parse_more; } die "should not get here"; } sub version { my ($self) = @_; return $self->{SITEINFO}{version}; } sub sitename { my ($self) = @_; return $$self{SITEINFO}{sitename}; } sub base { my ($self) = @_; return $$self{SITEINFO}{base}; } sub generator { my ($self) = @_; return $$self{SITEINFO}{generator}; } sub case { my ($self) = @_; return $$self{SITEINFO}{case}; } sub namespaces { my ($self) = @_; return $$self{SITEINFO}{namespaces}; } sub namespaces_names { my $self = shift; my @result; foreach (@{ $$self{SITEINFO}{namespaces} }) { push(@result, $_->[1]); } return \@result; } sub current_byte { my ($self) = @_; return $$self{BYTE}; } sub size { my ($self) = @_; return undef unless defined $$self{SOURCE_FILE}; my @stat = stat($$self{SOURCE_FILE}); return $stat[7]; } #private functions with OO interface sub open { my ($self, $source) = @_; if (defined(reftype($source)) && reftype($source) eq 'GLOB') { $$self{SOURCE} = $source; } else { if (! 
open($$self{SOURCE}, $source)) { die "could not open $source: $!"; } $$self{SOURCE_FILE} = $source; } binmode($$self{SOURCE}, ':utf8'); return 1; } sub init { my ($self) = @_; $self->{XML} = $self->new_accumulator_engine; my $expat_bb = $$self{XML}->parser->parse_start(); $$self{EXPAT} = $expat_bb; #load the information from the siteinfo section so it is available before #someone calls ->next while(scalar(@{$self->{PAGE_LIST}}) < 1) { die "hit end of document" unless $self->parse_more; } } sub new_accumulator_engine { my ($self) = @_; my $f = Parse::MediaWikiDump::XML::Accumulator->new; my $store_siteinfo = $self->{SITEINFO}; my $store_page = $self->{PAGE_LIST}; my $root = $f->root; my $mediawiki = $f->node('mediawiki', Start => \&handle_mediawiki_node); #stuff for siteinfo my $siteinfo = $f->node('siteinfo', End => sub { %$store_siteinfo = %{ $_[1] } } ); my $sitename = $f->textcapture('sitename'); my $base = $f->textcapture('base'); my $generator = $f->textcapture('generator'); my $case = $f->textcapture('case'); my $namespaces = $f->node('namespaces', Start => sub { $_[1]->{namespaces} = []; } ); my $namespace = $f->node('namespace', Character => \&save_namespace_node); #stuff for page entries my $page = $f->node('page', Start => sub { $_[0]->accumulator( {} ) } ); my $title = $f->textcapture('title'); my $id = $f->textcapture('id'); my $revision = $f->node('revision', Start => \&handle_revision_node_start, End => sub { push(@$store_page, { %{ $_[1] } } ) } ); my $rev_id = $f->textcapture('id', 'revision_id'); my $minor = $f->node('minor', Start => sub { $_[1]->{minor} = 1 } ); my $time = $f->textcapture('timestamp'); my $contributor = $f->node('contributor'); my $username = $f->textcapture('username'); my $ip = $f->textcapture('ip', 'userip'); my $contrib_id = $f->textcapture('id', 'userid'); my $comment = $f->textcapture('comment'); my $text = $f->textcapture('text'); my $restr = $f->textcapture('restrictions'); #put together the tree $siteinfo->add_child($sitename, $base, $generator, $case, $namespaces); $namespaces->add_child($namespace); $page->add_child($title, $id, $revision, $restr); $revision->add_child($rev_id, $time, $contributor, $minor, $comment, $text); $contributor->add_child($username, $ip, $contrib_id); $mediawiki->add_child($siteinfo, $page); $root->add_child($mediawiki); my $engine = $f->engine($root, {}); return $engine; } sub parse_more { my ($self) = @_; my $buf; my $read = read($$self{SOURCE}, $buf, $$self{CHUNK_SIZE}); if (! 
defined($read)) { die "error during read: $!"; } elsif ($read == 0) { $$self{FINISHED} = 1; $$self{EXPAT}->parse_done; return 0; } #expat has a bug where the current_byte #value overflows around 2 gigabytes #so we track how much data has been #processed ourselves $$self{BYTE} += $read; $$self{EXPAT}->parse_more($buf); if ($self->{DIE_REQUESTED}) { die "$self->{DIE_REQUESTED}\n"; } return 1; } sub get_category_anchor { my ($self) = @_; my $namespaces = $self->{SITEINFO}->{namespaces}; foreach (@$namespaces) { my ($id, $name) = @$_; if ($id == 14) { return $name; } } return undef; } #helper functions that the xml accumulator uses sub save_namespace_node { my ($parser, $accum, $text, $element, $attrs) = @_; my $key = $attrs->{key}; my $namespaces = $accum->{namespaces}; push(@{ $accum->{namespaces} }, [$key, $text] ); } sub handle_mediawiki_node { my ($engine, $a, $element, $attrs) = @_; my $version = $attrs->{version}; #checking versions of the dump file removed in 1.0.6 #see the migration notes for why # if ($version ne '0.3' && $version ne '0.4') { # die "Only version 0.3 and 0.4 dump files are supported"; # } $a->{version} = $version; } sub handle_revision_node_start { my (undef, $a) = @_; $a->{minor} = 0; delete($a->{username}); delete($a->{userid}); delete($a->{userip}); } sub save_siteinfo { my ($self, $info) = @_; my %info = %$info; $self->{SITEINFO} = \%info; } 1; __END__ =head1 NAME Parse::MediaWikiDump::Revisions - Object capable of processing dump files with multiple revisions per article =head1 ABOUT This object is used to access the metadata associated with a MediaWiki instance and provide an iterative interface for extracting the indidivual article revisions out of the same. To gurantee that there is only a single revision per article use the Parse::MediaWikiDump::Revisions object. =head1 SYNOPSIS $pmwd = Parse::MediaWikiDump->new; $revisions = $pmwd->revisions('pages-articles.xml'); $revisions = $pmwd->revisions(\*FILEHANDLE); #print the title and id of each article inside the dump file while(defined($page = $revisions->next)) { print "title '", $page->title, "' id ", $page->id, "\n"; } =head1 STATUS This software is being RETIRED - MediaWiki::DumpFile is the official successor to Parse::MediaWikiDump and includes a compatibility library called MediaWiki::DumpFile::Compat that is 100% API compatible and is a near perfect standin for this module. It is faster in all instances where it counts and is actively maintained. Any undocumented deviation of MediaWiki::DumpFile::Compat from Parse::MediaWikiDump is considered a bug and will be fixed. =head1 METHODS =over 4 =item $revisions->new Open the specified MediaWiki dump file. If the single argument to this method is a string it will be used as the path to the file to open. If the argument is a reference to a filehandle the contents will be read from the filehandle as specified. =item $revisions->next Returns an instance of the next available Parse::MediaWikiDump::page object or returns undef if there are no more articles left. =item $revisions->version Returns a plain text string of the dump file format revision number =item $revisions->sitename Returns a plain text string that is the name of the MediaWiki instance. =item $revisions->base Returns the URL to the instances main article in the form of a string. =item $revisions->generator Returns a string containing 'MediaWiki' and a version number of the instance that dumped this file. 

Example: 'MediaWiki 1.14alpha'

=item $revisions->case

Returns a string describing the case sensitivity configured in the instance.

=item $revisions->namespaces

Returns a reference to an array of references. Each reference is to another
array with the first item being the unique identifier of the namespace and the
second element containing a string that is the name of the namespace.

=item $revisions->namespaces_names

Returns a reference to an array that contains the name of each namespace as a
string, one namespace per element.

=item $revisions->current_byte

Returns the number of bytes that have been processed so far.

=item $revisions->size

Returns the total size of the dump file in bytes.

=back

=head1 EXAMPLE

=head2 Extract the article text of each revision of an article using a given title

  #!/usr/bin/perl

  use strict;
  use warnings;
  use Parse::MediaWikiDump;

  my $file = shift(@ARGV) or die "must specify a MediaWiki dump of the current pages";
  my $title = shift(@ARGV) or die "must specify an article title";
  my $pmwd = Parse::MediaWikiDump->new;
  my $dump = $pmwd->revisions($file);
  my $found = 0;

  binmode(STDOUT, ':utf8');
  binmode(STDERR, ':utf8');

  #this is the only currently known value but there could be more in the future
  if ($dump->case ne 'first-letter') {
    die "unable to handle any case setting besides 'first-letter'";
  }

  $title = case_fixer($title);

  while(my $revision = $dump->next) {
    if ($revision->title eq $title) {
      print STDERR "Located text for $title revision ", $revision->revision_id, "\n";
      my $text = $revision->text;
      print $$text;
      $found = 1;
    }
  }

  #only exit with an error status when the article text could not be located
  unless ($found) {
    print STDERR "Unable to find article text for $title\n";
    exit 1;
  }

  exit 0;

  #removes any case sensitivity from the very first letter of the title
  #but not from the optional namespace name
  sub case_fixer {
    my $title = shift;

    #check for namespace
    if ($title =~ /^(.+?):(.+)/) {
      $title = $1 . ':' . ucfirst($2);
    }
    else {
      $title = ucfirst($title);
    }

    return $title;
  }

=head1 LIMITATIONS

=head2 Version 0.4

This class was updated to support version 0.4 dump files from a MediaWiki
instance but it does not currently support any of the new information
available in those files.
Parse-MediaWikiDump-1.0.6/lib/Parse/MediaWikiDump/XML.pm000644 000765 000024 00000015762 11476537747 023120 0ustar00tylerstaff000000 000000 
#this is set to become a new module on CPAN after
#testing is done and documentation is written

#this module is a thin wrapper around XML::Accumulator that
#provides a tree interface for the event handlers. The engine
#follows the tree as it receives events from XML::Parser
#so that context can be pulled out from the location in the
#tree.

#Handlers for this module are also registered as callbacks but
#exist at a specific node on the tree. Each handler is invoked
#with the same information that came from the XML::Parser event
#but is also given an additional argument that is an accumulator
#variable to store data in.
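
#A minimal usage sketch: the element names and input string below are made up
#for illustration only and have nothing to do with the MediaWiki dump format.
#It mirrors, in miniature, the handler tree that
#Parse::MediaWikiDump::Revisions::new_accumulator_engine builds, and is kept
#inside comments so that loading this file is unaffected.
#
#   my $f     = Parse::MediaWikiDump::XML::Accumulator->new;
#   my $root  = $f->root;
#   my $doc   = $f->node('doc');
#   my $title = $f->textcapture('title');
#
#   $doc->add_child($title);
#   $root->add_child($doc);
#
#   #the second argument is the accumulator hash the handlers write into
#   my $engine = $f->engine($root, {});
#   $engine->parser->parse('<doc><title>Some title</title></doc>');
#
#   print $engine->accumulator->{title}, "\n";   #prints "Some title"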
package Parse::MediaWikiDump::XML::Accumulator;

use warnings;
use strict;

sub new {
    my ($class) = @_;
    my $self = {};

    bless($self, $class);
}

sub engine {
    shift(@_);
    return Parse::MediaWikiDump::XML::Accumulator::Engine->new(@_);
}

sub node {
    shift(@_);
    return Parse::MediaWikiDump::XML::Accumulator::Node->new(@_);
}

sub root {
    shift(@_);
    return Parse::MediaWikiDump::XML::Accumulator::Root->new(@_);
}

sub textcapture {
    shift(@_);
    return Parse::MediaWikiDump::XML::Accumulator::TextCapture->new(@_);
}

package Parse::MediaWikiDump::XML::Accumulator::Engine;

use strict;
use warnings;

use Carp qw(croak);
use Scalar::Util qw(weaken);
use XML::Parser;

sub new {
    my ($class, $root, $accum) = @_;
    my $self = {};

    croak "must specify a tree root" unless defined $root;

    eval { $root->validate; };
    die "root node failed validation: $@" if $@;

    bless($self, $class);

    $self->{parser} = $self->init_parser;
    $self->{root} = $root;
    $self->{element_stack} = [];
    $self->{accum} = $accum;
    $self->{char_buf} = [];
    $self->{node_stack} = [ $root ];

    return $self;
}

sub init_parser {
    my ($self) = @_;

    #stop a giant memory leak
    weaken($self);

    my $parser = XML::Parser->new(
        Handlers => {
            #Init => sub { handle_init_event($self, @_) },
            #Final => sub { handle_final_event($self, @_) },
            Start => sub { handle_start_event($self, @_) },
            End => sub { handle_end_event($self, @_) },
            Char => sub { handle_char_event($self, @_); },
        }
    );

    return $parser;
}

sub parser {
    my ($self) = @_;
    return $self->{parser};
}

sub handle_init_event {
    my ($self, $expat) = @_;
    my $root = $self->{root};
    my $handlers = $root->{handlers};

    #invoke the stored code reference, not a sub named cb()
    if (defined(my $cb = $handlers->{Init})) {
        &$cb($self);
    }
}

sub handle_final_event {
    my ($self, $expat) = @_;
    my $root = $self->{root};
    my $handlers = $root->{handlers};

    if (defined(my $cb = $handlers->{Final})) {
        &$cb($self);
    }
}

sub handle_start_event {
    my ($self, $expat, $element, %attrs) = @_;
    my $element_stack = $self->{element_stack};
    my $node = $self->node;
    my $matched = $node->{children}->{$element};
    my $handler;

    $handler = $matched->{handlers}->{Start};

    $self->flush_chars;

    defined $handler && &$handler($self, $self->{accum}, $element, \%attrs);

    push(@{$self->{node_stack}}, $matched);
    push(@$element_stack, [$element, \%attrs]);
}

sub handle_end_event {
    my ($self, $expat, $element) = @_;
    my $handler = $self->node->{handlers}->{End};
    my $node_stack = $self->{node_stack};

    $self->flush_chars;

    defined $handler && &$handler($self, $self->{accum}, @{$self->element});

    pop(@$node_stack);
    pop(@{$self->{element_stack}});
}

sub handle_char_event {
    push(@{$_[0]->{char_buf}}, $_[2]);
}

sub flush_chars {
    my ($self) = @_;
    my ($handler, $cur_element);

    $handler = $self->node->{handlers}->{Character};
    $cur_element = $self->element;

    if (! defined($cur_element = $self->element)) {
        $cur_element = [];
    }

    defined $handler && &$handler($self, $self->{accum}, join('', @{$self->{char_buf}}), @$cur_element);

    $self->{char_buf} = [];

    return undef;
}

sub node {
    my ($self) = @_;
    my $stack = $self->{node_stack};
    my $size = scalar(@$stack);

    return $$stack[$size - 1];
}

sub element {
    my ($self) = @_;
    my $stack = $self->{element_stack};
    my $size = scalar(@$stack);
    my $return = $$stack[$size - 1];

    return $return;
}

sub accumulator {
    my ($self, $new) = @_;

    if (defined($new)) {
        $self->{accum} = $new;
    }

    return $self->{accum};
}

package Parse::MediaWikiDump::XML::Accumulator::Node;

use strict;
use warnings;

use Carp qw(croak cluck);

sub new {
    my ($class, $name, %handlers) = @_;
    my $self = {};

    croak("must specify a node name") unless defined $name;

    $self->{name} = $name;
    $self->{handlers} = \%handlers;
    $self->{children} = {};
    $self->{debug} = 1;

    bless($self, $class);

    return $self;
}

sub name {
    my ($self) = @_;
    return $self->{name};
}

sub handlers {
    my ($self) = @_;
    return $self->{handlers};
}

sub unset_handlers {
    my ($self) = @_;

    $self->{handlers} = undef;

    foreach (values(%{ $self->{children} })) {
        $_->unset_handlers;
    }

    return 1;
}

sub error {
    my ($self, $path, $string) = @_;
    my $name = $self->{name};

    if (ref($path) ne 'ARRAY') {
        cluck "must specify an array ref for node path in tree";
    }

    if ($self->{debug}) {
        print "Fatal error in node $name: $string\n";
        print "Node tree path:\n";

        $self->print_path($path);
    }

    die "fatal error: $string";
}

sub print_path {
    my ($self, $path) = @_;
    my $i = 0;

    foreach (@$path) {
        my ($name) = $_->name;

        print "$i: $name\n";
        $i++; #number each node on the way down the tree
    }

    return undef;
}

sub validate {
    my ($self, $path) = @_;
    my ($handlers) = $self->{handlers};
    my (%ok);

    map({$ok{$_} = 1} $self->ok_handlers);

    if (! defined($path)) {
        $path = [];
    }

    push(@$path, $self);

    foreach (keys(%$handlers)) {
        my $check = $handlers->{$_};

        if (! defined($check) || ref($check) ne 'CODE') {
            $self->error($path, "Handler $_: not a code reference");
            next;
        }

        if (! $ok{$_}) {
            $self->error($path, "$_ is not a valid event name");
            next;
        }
    }

    foreach (values(%{$self->{children}})) {
        $_->validate($path);
    }

    return undef;
}

sub ok_handlers {
    return qw(Character Start End);
}

sub print {
    my ($self, $level) = @_;

    if (! defined($level)) {
        $level = 1;
    }

    print ' ' x $level, "$level: ", $self->name, "\n";

    $level++;

    foreach (values(%{$self->{children} } )) {
        $_->print($level);
    }

    $level--;
}

sub add_child {
    my ($self, @children) = @_;

    foreach my $child (@children) {
        my $name = $child->{name};

        $self->{children}->{$name} = $child;
    }

    return $self;
}

package Parse::MediaWikiDump::XML::Accumulator::Root;

use strict;
use warnings;

use base qw(Parse::MediaWikiDump::XML::Accumulator::Node);

sub new {
    my ($class) = @_;
    my $self = $class->SUPER::new('[root container]');

    bless($self, $class);
}

sub ok_handlers {
    return qw(Init Final);
}

package Parse::MediaWikiDump::XML::Accumulator::TextCapture;

use base qw(Parse::MediaWikiDump::XML::Accumulator::Node);

use strict;
use warnings;

sub new {
    my ($class, $name, $store_as) = @_;
    my $self = $class->SUPER::new($name);

    bless($self, $class);

    if (! defined($store_as)) {
        $store_as = $name;
    }

    $self->{handlers} = {
        Character => sub { char_handler($store_as, @_); },
    };

    return $self;
}

sub char_handler {
    my ($store_as, $parser, $a, $chars, $element) = @_;

    $a->{$store_as} = $chars;
}

1;
Parse-MediaWikiDump-1.0.6/examples/speed_test000644 000765 000024 00000001426 11476537747 021501 0ustar00tylerstaff000000 000000 
#!/usr/bin/perl

use strict;
use warnings;

use Parse::MediaWikiDump;

$SIG{ALRM} = \&progress;
$| = 1;

print '';

my $i = 0;
my $file = shift(@ARGV);
my $num_iter = shift(@ARGV);
$num_iter = 10 unless defined($num_iter);

my $start = time;
my $dump = undef;

alarm(1);

while($i++ < $num_iter) {
    $start = time;
    print "Iteration $i\r";

    $dump = Parse::MediaWikiDump::Pages->new($file);

    while($dump->next) { };

    print "\n";
}

my @times = times;
print $times[0] + $times[1], "\n";

sub progress {
    return unless defined($dump);

    my $elapsed = time - $start;
    $elapsed = 1 if $elapsed == 0;

    print "Iteration $i: ";
    print int($dump->current_byte / $dump->size * 100), "% ";

    my $speed = int($dump->current_byte / $elapsed);
    print $speed, " bytes per second \r";

    alarm(1);
}
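
#A sample invocation; the dump file name below is only a placeholder:
#
#   perl examples/speed_test pages-articles.xml 5
#
#runs five timed parsing passes over the dump, printing the percentage of the
#file processed and the parsing speed in bytes per second once a second, and
#finishes by printing the total user plus system CPU time consumed.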