DBD-CSV-0.62/0000755000031300001440000000000014741200541011716 5ustar00merijnusersDBD-CSV-0.62/files/0000755000031300001440000000000014741200541013020 5ustar00merijnusersDBD-CSV-0.62/files/tools.csv0000644000031300001440000000010713706563626014713 0ustar00merijnusersc_tool;tool 1;Hammer 2;Screwdriver 3;Drill 4;Saw 5;Router 6;Hobbyknife DBD-CSV-0.62/files/fruit.csv0000644000031300001440000000006313706563514014701 0ustar00merijnusersc_fruit,fruit 1,Apple 2,Blueberry 3,Orange 4,Melon DBD-CSV-0.62/README0000755000031300001440000000477214741174246012617 0ustar00merijnusersModule DBD::CSV - DBI driver for CSV files Description The DBD::CSV module is yet another driver for the DBI (Database independent interface for Perl). This one is based on the SQL "engine" SQL::Statement and the abstract DBI driver DBD::File and implements access to so-called CSV files (Comma separated values). Copying Copyright (C) 2009-2025 by H.Merijn Brand Copyright (C) 2004-2009 by Jeff Zucker Copyright (C) 1998-2004 by Jochen Wiedmann You may distribute this module under the terms of either the GNU General Public License or the Artistic License, as specified in the Perl README file. Recent changes can be (re)viewed in the public GIT repository at https://github.com/perl5-dbi/DBD-CSV.git Feel free to clone your own copy: $ git clone https://github.com/perl5-dbi/DBD-CSV.git DBD-CSV or get it as a tgz: $ wget --output-document=DBD-CSV-git.tgz \ 'https://github.com/perl5-dbi/DBD-CSV/archive/master.tar.gz' which will unpack to DBD-CSV-master Prerequisites: DBI - the DBI (Database independent interface for Perl). SQL::Statement - a simple SQL engine. Text::CSV_XS - this module is used for writing rows to or reading rows from CSV files. Build/Installation: Use CPAN: cpan DBD::CSV Or standard build/installation: gzip -cd DBD-CSV-0.62.tar.gz | tar xf - cd DBD-CSV-0.62 perl Makefile.PL make test make install (this is for Unix users, Windows users would prefer PowerArchiver, WinZip or something similar). 
The test suite contains extensive tests for all features provided by DBD::CSV. Some of them include the use of what is set to be the default temporary directory on the system. Even though the tests do not read or modify data in that folder, using it implies scanning it to see which files would qualify for use in DBD::CSV. When the folder contains many files, that scan will seriously slow down testing. The configure phase therefore asks the user if using the folder is allowed. The default answer is yes unless $AUTOMATED_TESTING is set. As File::Spec->tmpdir () honors the environment, you can enable these tests using another folder by setting $TMPDIR or whatever controls tmpdir () on your OS. Author: This module is currently maintained by H.Merijn Brand < h.m.brand at xs4all.nl > The original author is Jochen Wiedmann. Previous maintainer was Jeff Zucker DBD-CSV-0.62/t/0000755000031300001440000000000014741200541012161 5ustar00merijnusersDBD-CSV-0.62/t/73_csv-case.t0000644000031300001440000000265513255423076014374 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } do "./t/lib.pl"; sub DbFile; my $dir = DbDir (); my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], ); my $tbl = "foo"; ok (my $dbh = Connect (), "connect"); ok (!-f DbFile ($tbl), "foo does not exist"); ok ($dbh->{ignore_missing_table} = 1, "ignore missing tables"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); ok (-f DbFile ($tbl), "does exist"); for (qw( foo foO fOo fOO Foo FoO FOo FOO )) { ok (my $sth = $dbh->prepare ("select * from $_"), "select from $_"); ok ($sth->execute, "execute"); } ok ($dbh->disconnect, "disconnect"); undef $dbh; ok ($dbh = Connect (), "connect"); ok ($dbh->{ignore_missing_table} = 1, "ignore missing tables"); my $case_ok = isSaneCase (); for (qw( foo foO fOo fOO Foo FoO FOo FOO )) 
{ ok (my $sth = $dbh->prepare (qq{select * from "$_"}), "prepare \"$_\""); if ($_ eq "foo") { ok ( $sth->execute, "execute ok"); } else { TODO: { local $TODO = "Filesystem has to be case-aware" unless $case_ok; local $sth->{PrintError} = 0; ok (!$sth->execute, "table name '$_' should not match 'foo'"); } } } ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); undef $dbh; done_testing (); DBD-CSV-0.62/t/70_csv.t0000644000031300001440000001032313255423076013457 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, &COL_KEY ], [ "str", "CHAR", 64, &COL_NULLABLE ], [ "name", "CHAR", 64, &COL_NULLABLE ], ); sub DbFile; my $dir = "output$$"; my $fqd = File::Spec->rel2abs ($dir); my $abs = Cwd::abs_path ($dir); ok (my $dbh = Connect (), "connect"); ok ($dbh->{f_dir} eq $dir || $dbh->{f_dir} eq $abs || $dbh->{f_dir} eq $fqd, "default dir"); ok ($dbh->{f_dir} = $dir, "set f_dir"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); ok (!-f DbFile ($tbl), "does not exist"); ok (my $tbl2 = FindNewTable ($dbh), "find new test table"); ok (!-f DbFile ($tbl2), "does not exist"); ok (my $tbl3 = FindNewTable ($dbh), "find new test table"); ok (!-f DbFile ($tbl3), "does not exist"); ok (my $tbl4 = FindNewTable ($dbh), "find new test table"); ok (!-f DbFile ($tbl4), "does not exist"); isnt ($tbl, $tbl2, "different 1 2"); isnt ($tbl, $tbl3, "different 1 3"); isnt ($tbl, $tbl4, "different 1 4"); isnt ($tbl2, $tbl3, "different 2 3"); isnt ($tbl2, $tbl4, "different 2 4"); isnt ($tbl3, $tbl4, "different 3 4"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table 1"); ok (-f DbFile ($tbl), "does exists"); ok ($dbh->do ("drop table $tbl"), "drop table"); ok (!-f DbFile ($tbl), "does not exist"); ok ($dbh->disconnect, "disconnect"); undef $dbh; my $dsn = 
"DBI:CSV:f_dir=$dir;csv_eol=\015\012;csv_sep_char=\\;;"; ok ($dbh = Connect ($dsn), "connect"); ok ($dbh->do ($def), "create table"); ok (-f DbFile ($tbl), "does exists"); ok ($dbh->do ("insert into $tbl values (1, 1, ?)", undef, "joe"), "insert 1"); ok ($dbh->do ("insert into $tbl values (2, 2, ?)", undef, "Jochen;"), "insert 2"); ok (my $sth = $dbh->prepare ("select * from $tbl"), "prepare"); ok ($sth->execute, "execute"); ok (my $row = $sth->fetch, "fetch 1"); is_deeply ($row, [ 1, "1", "joe" ], "content"); ok ( $row = $sth->fetch, "fetch 2"); is_deeply ($row, [ 2, "2", "Jochen;" ], "content"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("drop table $tbl"), "drop table"); ok (!-f DbFile ($tbl), "does not exist"); ok ($dbh->disconnect, "disconnect"); undef $dbh; $dsn = "DBI:CSV:"; ok ($dbh = Connect ($dsn), "connect"); # Check, whether the csv_tables->{$tbl}{file} attribute works like (my $def4 = TableDefinition ($tbl4, @tbl_def), qr{^create table $tbl4}i, "table definition"); ok ($dbh->{csv_tables}{$tbl4}{file} = DbFile ($tbl4), "set table/file"); ok ($dbh->do ($def4), "create table"); ok (-f DbFile ($tbl4), "does exists"); ok ($dbh->do ("drop table $tbl4"), "drop table"); ok ($dbh->disconnect, "disconnect"); undef $dbh; ok ($dbh = DBI->connect ("dbi:CSV:", "", "", { f_dir => DbDir (), f_ext => ".csv", dbd_verbose => 8, csv_sep_char => ";", csv_blank_is_undef => 1, csv_always_quote => 1, }), "connect with attr"); is ($dbh->{dbd_verbose}, 8, "dbd_verbose set"); is ($dbh->{f_ext}, ".csv", "f_ext set"); is ($dbh->{csv_sep_char}, ";", "sep_char set"); is ($dbh->{csv_blank_is_undef}, 1, "blank_is_undef set"); ok ($dbh->do ($def), "create table"); ok (-f DbFile ($tbl).".csv", "does exists"); #is ($sth->{blank_is_undef}, 1, "blank_is_undef"); eval { local $SIG{__WARN__} = sub { }; ok ($sth = $dbh->prepare ("insert into $tbl values (?, ?, ?)"), "prepare"); is ($sth->execute (1, ""), undef, "not enough values"); like ($dbh->errstr, qr/passed 2 parameters where 
3 required/, "error message"); # Cannot use the same handle twice. SQL::Statement bug ok ($sth = $dbh->prepare ("insert into $tbl values (?, ?, ?)"), "prepare"); is ($sth->execute (1, "", 1, ""), undef, "too many values"); like ($dbh->errstr, qr/passed 4 parameters where 3 required/, "error message"); }; ok ($sth->execute ($_, undef, "Code $_"), "insert $_") for 0 .. 9; ok ($dbh->do ("drop table $tbl"), "drop table"); ok (!-f DbFile ($tbl), "does not exist"); ok (!-f DbFile ($tbl).".csv", "does not exist"); ok ($dbh->disconnect, "disconnect"); undef $dbh; done_testing (); DBD-CSV-0.62/t/41_nulls.t0000644000031300001440000000402413255423076014020 0ustar00merijnusers#!/usr/bin/perl # This is a test for correctly handling NULL values. use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my $nano = $ENV{DBI_SQL_NANO}; my @tbl_def = ( [ "id", "INTEGER", 4, &COL_NULLABLE ], [ "name", "CHAR", 64, &COL_NULLABLE ], [ "str", "CHAR", 64, &COL_NULLABLE ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); ok ($dbh->do ("insert into $tbl values (NULL, 'NULL-id', ' ')"), "insert"); my $row; ok (my $sth = $dbh->prepare ("select * from $tbl where id is NULL"), "prepare"); ok ($sth->execute, "execute"); TODO: { local $TODO = $nano ? "SQL::Nano does not yet support this syntax" : undef; ok ($row = $sth->fetch, "fetch"); is_deeply ($row, [ "", "NULL-id", " " ], "default content"); } ok ($sth->finish, "finish"); undef $sth; ok ($dbh = Connect ({ csv_null => 1 }), "connect csv_null"); ok ($sth = $dbh->prepare ("select * from $tbl where id is NULL"), "prepare"); ok ($sth->execute, "execute"); TODO: { local $TODO = $nano ? 
"SQL::Nano does not yet support this syntax" : undef; ok ($row = $sth->fetch, "fetch"); is_deeply ($row, [ undef, "NULL-id", " " ], "NULL content"); } ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); ok ($dbh = Connect ({ csv_null => 1 }), "connect csv_null"); ok ($dbh->do ($def), "create table"); ok ($dbh->do ("insert into $tbl (id, str) values (1, ' ')"), "insert just 2"); ok ($sth = $dbh->prepare ("select * from $tbl"), "prepare"); ok ($sth->execute, "execute"); ok ($row = $sth->fetch, "fetch"); is_deeply ($row, [ 1, undef, " " ], "content"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/85_error.t0000644000031300001440000000223513255423076014026 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); my $tbl_file = DbFile ($tbl); ok (-s $tbl_file, "file exists"); ok ($dbh->disconnect, "disconnect"); ok (-f $tbl_file, "file still there"); open my $fh, ">>", $tbl_file; print $fh qq{1, "p0wnd",",""",0\n}; # Very bad content close $fh; ok ($dbh = Connect (), "connect"); { local $dbh->{PrintError} = 0; local $dbh->{RaiseError} = 0; ok (my $sth = $dbh->prepare ("select * from $tbl"), "prepare"); is ($sth->execute, undef, "execute should fail"); # It is safe to regex on this text, as it is NOT locale dependent like ($dbh->errstr, qr{\w+ \@ line [0-9?]+ pos [0-9?]+}, "error message"); }; ok ($dbh->do ("drop table $tbl"), "drop"); ok ($dbh->disconnect, "disconnect"); done_testing (); 
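The tests above all exercise DBD::CSV through the plain DBI API. Outside the test harness, the same pattern against the shipped files/fruit.csv looks like this; a minimal sketch, assuming DBI, DBD::CSV and their prerequisites are installed and the script is run from the distribution root:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Treat the "files" directory as the database; f_ext maps table "fruit"
# to the file "fruit.csv" (the "/r" makes the extension required)
my $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
    f_dir      => "files",
    f_ext      => ".csv/r",
    RaiseError => 1,
    }) or die $DBI::errstr;

# The first line of fruit.csv (c_fruit,fruit) provides the column names
my $sth = $dbh->prepare ("select fruit from fruit where c_fruit > 2");
$sth->execute;
while (my $row = $sth->fetch) {
    print "$row->[0]\n"; # prints the fruit names with c_fruit > 2
    }
$sth->finish;
$dbh->disconnect;
```

The attributes used here (f_dir, f_ext) are the same ones set in the tests; per-table overrides go through the csv_tables attribute as shown in t/61_meta.t.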
DBD-CSV-0.62/t/82_free_unref_scalar.t0000644000031300001440000000411013311026166016321 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; # perl5.27.2 -DD -Mblib t/02_free_unref_scalar.t > & alloc-free.log # ^^^ # -DD Cleaning up #use Devel::Peek; #use Data::Peek; use Test::More; #use Test::NoWarnings; require Text::CSV_XS; $Text::CSV_XS::VERSION < 1.35 and plan skip_all => "This leak should be fixed in Text::CSV_XS 1.35"; use_ok "DBI"; require "./t/lib.pl"; $SIG{__WARN__} = sub { $_[0] =~ m/^Attempt to free unreferenced scalar: SV (0x[0-9a-f]+)(, \<\w+\> line \d+)?.* during global destruction\.$/ and fail ("there was an attempt to free unreferenced scalar"); diag "@_"; }; sub DBD::CSV::Table::DESTROY { my $self = shift; delete $self->{meta}{csv_in}; } # DBD::CSV::Table::DESTROY sub test_with_options { my (%opts) = @_; my $dbh = DBI->connect ("dbi:CSV:", undef, undef, { f_schema => undef, f_dir => 't', f_dir_search => [], f_ext => ".csv/r", f_lock => 2, f_encoding => "utf8", %opts, RaiseError => 1, PrintError => 1, FetchHashKeyName => "NAME_lc", }) or die "$DBI::errstr\n" || $DBI::errstr; my %tbl = map { $_ => 1 } $dbh->tables (undef, undef, undef, undef); is ($tbl{$_}, 1, "Table $_ found") for qw( tmp ); my %data = ( tmp => { # t/tmp.csv 1 => "ape", 2 => (grep (m/^csv_callbacks$/ => keys %opts) ? "new world monkey" : "monkey"), 3 => "gorilla", }, ); foreach my $tbl (sort keys %data) { my $sth = $dbh->prepare ("select * from $tbl"); $sth->execute; while (my $row = $sth->fetch) { is ($row->[1], $data{$tbl}{$row->[0]}, "$tbl ($row->[0], ...)"); } $sth->finish (); } $dbh->disconnect; } sub new_world_monkeys { my ($csv, $data) = @_; $data->[1] =~ s/^monkey$/new world monkey/; return; } my $callbacks = { csv_callbacks => { after_parse => \&new_world_monkeys, }, }; test_with_options ( csv_tables => { tmp => { f_file => "tmp.csv"} }, %$callbacks, ); test_with_options ( csv_auto_diag => 0, %$callbacks, ) for (1 .. 
200); done_testing (); DBD-CSV-0.62/t/32_update.t0000644000031300001440000000466513255423076014160 0ustar00merijnusers#!/usr/bin/perl # test if update returns expected values / keeps file sizes sane use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, &COL_NULLABLE ], [ "name", "CHAR", 64, &COL_NULLABLE ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); my $sz = 0; my $tbl_file = DbFile ($tbl); ok ($sz = -s $tbl_file, "file exists"); ok ($dbh->do ("insert into $tbl (id) values ($_)"), "insert $_") for 1 .. 10; ok ($sz < -s $tbl_file, "file grew"); $sz = -s $tbl_file; { local $dbh->{PrintWarn} = 0; local $dbh->{PrintError} = 0; is ($dbh->do ("update wxyz set name = 'ick' where id = 99"), undef, "update in non-existing tbl"); } my $zero_ret = $dbh->do ("update $tbl set name = 'ack' where id = 99"); ok ($zero_ret, "true non-existing update RV (via do)"); cmp_ok ($zero_ret, "==", 0, "update non-existing row (via do)"); cmp_ok ($sz, "==", -s $tbl_file, "file size did not change on noop updates"); is ($dbh->do ("update $tbl set name = 'multis' where id > 7"), 3, "update several (count) (via do)"); cmp_ok ($sz, "<", -s $tbl_file, "file size grew on update"); $sz = -s $tbl_file; is ($dbh->do ("update $tbl set name = 'single' where id = 9"), 1, "update single (count) (via do)"); cmp_ok ($sz, "==", -s $tbl_file, "file size did not change on same-size update"); $zero_ret = $dbh->prepare ("update $tbl set name = 'ack' where id = 88")->execute; ok ($zero_ret, "true non-existing update RV (via prepare/execute)"); cmp_ok ($zero_ret, "==", 0, "update non-existing row (via prepare/execute)"); cmp_ok ($sz, "==", -s $tbl_file, "file size did not change on noop update"); $sz = -s $tbl_file; is ($dbh->prepare ("update $tbl set name = 
'multis' where id < 4")->execute, 3, "update several (count) (via prepare/execute)"); cmp_ok ($sz, "<", -s $tbl_file, "file size grew on update"); $sz = -s $tbl_file; is ($dbh->prepare ("update $tbl set name = 'single' where id = 2")->execute, 1, "update single (count) (via prepare/execute)"); cmp_ok ($sz, "==", -s $tbl_file, "file size did not change on same-size update"); ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); ok (!-f $tbl_file, "file removed"); done_testing (); DBD-CSV-0.62/t/11_dsnlist.t0000644000031300001440000000175413746302670014340 0ustar00merijnusers#!/usr/bin/perl # Test whether data_sources () returns something useful use strict; use warnings; use Test::More; # Include lib.pl BEGIN { use_ok ("DBI") } do "./t/lib.pl"; ok (1, "Driver is CSV\n"); ok (my $dbh = Connect (), "Connect"); $dbh or BAIL_OUT "Cannot connect"; ok ($dbh->ping, "ping"); # This returns at least ".", "lib", and "t" ok (my @dsn = DBI->data_sources ("CSV"), "data_sources"); ok (@dsn >= 2, "more than one"); ok ($dbh->disconnect, "disconnect"); # Try different DSN's foreach my $d (qw( . examples lib t )) { ok ($dbh = Connect ("dbi:CSV:f_dir=$d"), "use $d as f_dir"); ok ($dbh->disconnect, "disconnect"); } if ($DBD::File::VERSION ge "0.45") { my @err; is (eval { local $SIG{__WARN__} = sub { push @err => @_ }; local $SIG{__DIE__} = sub { push @err => @_ }; Connect ("dbi:CSV:f_dir=d/non/exist/here"); }, undef, "f_dir = nonexisting dir"); like ("@err", qr{d/non/exist/here}, "Error caught"); } done_testing (); DBD-CSV-0.62/t/tmp.csv0000644000031300001440000000004313706551757013505 0ustar00merijnusersc_tmp,tmp 1,ape 2,monkey 3,gorilla DBD-CSV-0.62/t/lib.pl0000644000031300001440000001102113261342043013267 0ustar00merijnusers#!/usr/bin/perl # lib.pl is the file where database specific things should live, # wherever possible. For example, you define certain constants # here and the like. 
use strict; use warnings; use File::Spec; my $testname = "output$$"; my $base_dir = File::Spec->rel2abs (File::Spec->curdir ()); my $test_dir = File::Spec->rel2abs ($testname); my $test_dsn = $ENV{DBI_DSN} || ""; $test_dsn =~ m/csv/i or $test_dsn = "dbi:CSV:f_dir=$testname"; my $test_user = $ENV{DBI_USER} || ""; my $test_pass = $ENV{DBI_PASS} || ""; DBI->import (":sql_types"); defined &SQL_CHAR or *SQL_CHAR = sub { 1 }; defined &SQL_VARCHAR or *SQL_VARCHAR = sub { 12 }; defined &SQL_INTEGER or *SQL_INTEGER = sub { 4 }; sub COL_NULLABLE () { 1 } sub COL_KEY () { 2 } my %v; { my @req = qw( DBI SQL::Statement Text::CSV_XS DBD::CSV ); my $req = join ";\n" => map { qq{require $_;\n\$v{"$_"} = $_->VERSION ()} } @req; eval $req; if ($@) { my @missing = grep { !exists $v{$_} } @req; print STDERR "\n\nYOU ARE MISSING REQUIRED MODULES: [ @missing ]\n\n"; exit 0; } } sub AnsiTypeToDb { my ($type, $size) = @_; my $uctype = uc $type; if ($uctype eq "CHAR" || $uctype eq "VARCHAR") { $size ||= 1; return "$uctype ($size)"; } $uctype eq "BLOB" || $uctype eq "REAL" || $uctype eq "INTEGER" and return $uctype; $uctype eq "INT" and return "INTEGER"; warn "Unknown type $type\n"; return $type; } # AnsiTypeToDb # This function generates a table definition based on an input list. The input # list consists of references, each reference referring to a single column. The # column reference consists of column name, type, size and a bitmask of certain # flags, namely # # COL_NULLABLE - true, if this column may contain NULL's # COL_KEY - true, if this column is part of the table's primary key sub TableDefinition { my ($tablename, @cols) = @_; my @keys = (); foreach my $col (@cols) { $col->[2] & COL_KEY and push @keys, $col->[0]; } my @colDefs; foreach my $col (@cols) { my $colDef = $col->[0] . " " . AnsiTypeToDb ($col->[1], $col->[2]); $col->[3] & COL_NULLABLE or $colDef .= " NOT NULL"; push @colDefs, $colDef; } my $keyDef = @keys ? ", PRIMARY KEY (" . join (", ", @keys) . 
")" : ""; my $tq = $tablename =~ m/^\w+\./ ? qq{"$tablename"} : $tablename; return sprintf "CREATE TABLE %s (%s%s)", $tq, join (", ", @colDefs), $keyDef; } # TableDefinition # This function generates a list of tables associated to a given DSN. sub ListTables { my $dbh = shift or return; my @tables = $dbh->func ("list_tables"); my $msg = $dbh->errstr || $DBI::errstr; $msg and die "Cannot create table list: $msg"; @tables; } # ListTables sub DbCleanup { chdir $base_dir; -d $testname or return; chdir $testname or BAIL_OUT ("Cleanup failed"); unlink glob "*"; chdir $base_dir; rmdir $testname; } # DbCleanup mkdir $testname, 0755; END { DbCleanup (); } # This function generates a list of possible DSN's aka # databases and returns a possible table name for a new # table being created. # # Problem is, we have two different situations here: Test scripts # call us by passing a dbh, which is fine for most situations. { my $listTablesHook; my $testtable = "testaa"; sub FindNewTable { my $dbh = shift; my @tables = defined $listTablesHook ? $listTablesHook->($dbh) : defined &ListTables ? ListTables ($dbh) : die "Fatal: ListTables not implemented.\n"; my $table; while (grep { $_ eq $testtable } @tables) { $testtable++; } $table = $testtable; $testtable++; return $table; } # FindNewTable } sub isSaneCase { my @f = glob "??????.???"; foreach my $try (qw( FrUbLl BlURgH wOngOs )) { my $fn = "$try.csv"; grep m{^$fn$}i => @f and next; open my $fh, ">", $fn or return 1; close $fh; my $sane = (-f $fn && ! -f lc $fn && ! -f uc $fn); unlink $fn; return $sane; } # Assume insane return 0; } # isSaneCase sub ServerError { die "# Cannot connect: $DBI::errstr\n"; } # ServerError sub Connect { my $attr = @_ && ref $_[-1] eq "HASH" ? 
pop @_ : {}; my ($dsn, $usr, $pass) = @_; $dsn ||= $test_dsn; $usr ||= $test_user; $pass ||= $test_pass; my $dbh = DBI->connect ($dsn, $usr, $pass, $attr) or ServerError; $dbh; } # Connect sub DbDir { @_ and $test_dir = File::Spec->catdir ($base_dir, shift); $test_dir; } # DbDir sub DbFile { my $file = shift or return; File::Spec->catdir ($test_dir, $file); } # DbFile 1; DBD-CSV-0.62/t/40_numrows.t0000644000031300001440000000325713261342045014374 0ustar00merijnusers#!/usr/bin/perl # This tests, whether the number of rows can be retrieved. use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } require "./t/lib.pl"; sub TrueRows { my $sth = shift; my $count = 0; $count++ while $sth->fetch; $count; } # TrueRows my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], ); my ($sth, $rows); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); ok ($dbh->do ("INSERT INTO $tbl VALUES (1, 'Alligator Descartes')"), "insert"); ok ($sth = $dbh->prepare ("SELECT * FROM $tbl WHERE id = 1"), "prepare"); ok ($sth->execute, "execute"); is ($sth->rows, 1, "numrows"); is (TrueRows ($sth), 1, "true rows"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("INSERT INTO $tbl VALUES (2, 'Jochen Wiedman')"), "insert"); ok ($sth = $dbh->prepare ("SELECT * FROM $tbl WHERE id >= 1"), "prepare"); ok ($sth->execute, "execute"); $rows = $sth->rows; ok ($rows == 2 || $rows == -1, "rows"); is (TrueRows ($sth), 2, "true rows"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("INSERT INTO $tbl VALUES (3, 'Tim Bunce')"), "insert"); ok ($sth = $dbh->prepare ("SELECT * FROM $tbl WHERE id >= 2"), "prepare"); ok ($sth->execute, "execute"); $rows = $sth->rows; ok ($rows == 2 || $rows == -1, "rows"); is (TrueRows ($sth), 2, "true rows"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do 
("DROP TABLE $tbl"), "drop"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/42_bindparam.t0000644000031300001440000000547613255423076014635 0ustar00merijnusers#!/usr/bin/perl # Test if bindparam () works use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } if ($ENV{DBI_SQL_NANO}) { diag ("These tests are not yet supported for SQL::Nano"); done_testing (1); exit 0; } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, &COL_NULLABLE ], ); ok (my $dbh = Connect (), "connect"); ok ($dbh->{csv_null} = 1, "Allow NULL"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); ok (my $sth = $dbh->prepare ("insert into $tbl values (?, ?)"), "prepare"); # Automatic type detection my ($int, $chr) = (1, "Alligator Descartes"); ok ($sth->execute ($int, $chr), "execute insert 1"); # Does the driver remember the automatically detected type? ok ($sth->execute ("3", "Jochen Wiedman"), "execute insert 2"); ($int, $chr) = (2, "Tim Bunce"); ok ($sth->execute ($int, $chr), "execute insert 3"); # Now try the explicit type settings ok ($sth->bind_param (1, " 4", &SQL_INTEGER), "bind 4 int"); ok ($sth->bind_param (2, "Andreas König"), "bind str"); ok($sth->execute, "execute"); # Works undef -> NULL? 
ok ($sth->bind_param (1, 5, &SQL_INTEGER), "bind 5 int"); ok ($sth->bind_param (2, undef), "bind NULL"); ok($sth->execute, "execute"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->disconnect, "disconnect"); undef $dbh; # And now retrieve the rows using bind_columns ok ($dbh = Connect ({ csv_null => 1 }), "connect"); ok ($sth = $dbh->prepare ("select * from $tbl order by id"), "prepare"); ok ($sth->execute, "execute"); my ($id, $name); ok ($sth->bind_columns (undef, \$id, \$name), "bind_columns"); ok ($sth->execute, "execute"); ok ($sth->fetch, "fetch"); is ($id, 1, "id 1"); is ($name, "Alligator Descartes", "name 1"); ok ($sth->fetch, "fetch"); is ($id, 2, "id 2"); is ($name, "Tim Bunce", "name 2"); ok ($sth->fetch, "fetch"); is ($id, 3, "id 3"); is ($name, "Jochen Wiedman", "name 3"); ok ($sth->fetch, "fetch"); is ($id, 4, "id 4"); is ($name, "Andreas König", "name 4"); ok ($sth->fetch, "fetch"); is ($id, 5, "id 5"); is ($name, undef, "name 5"); ok ($sth->finish, "finish"); undef $sth; ok ($sth = $dbh->prepare ("update $tbl set name = ? where id = ?"), "prepare update"); is ($sth->execute ("Tux", 5), 1, "update"); ok ($sth->finish, "finish"); undef $sth; ok ($sth = $dbh->prepare ("update $tbl set id = ? where name = ?"), "prepare update"); is ($sth->execute (5, "Tux"), 1, "update"); is ($sth->execute (6, ""), "0E0", "update"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/50_chopblanks.t0000644000031300001440000000321513255423076015010 0ustar00merijnusers#!/usr/bin/perl # This driver should check if 'ChopBlanks' works. 
use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, &COL_NULLABLE ], [ "name", "CHAR", 64, &COL_NULLABLE ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); my @rows = ( [ 1, "NULL", ], [ 2, " ", ], [ 3, " a b c ", ], [ 4, " a \r ", ], [ 5, " a \t ", ], [ 6, " a \n ", ], ); ok (my $sti = $dbh->prepare ("insert into $tbl (id, name) values (?, ?)"), "prepare ins"); ok (my $sth = $dbh->prepare ("select id, name from $tbl where id = ?"), "prepare sel"); foreach my $row (@rows) { ok ($sti->execute (@$row), "insert $row->[0]"); $sth->{ChopBlanks} = 0; ok (1, "ChopBlanks 0"); ok ($sth->execute ($row->[0]), "execute"); ok (my $r = $sth->fetch, "fetch ($row->[0]:1)"); is_deeply ($r, $row, "content ($row->[0]:1)"); $sth->{ChopBlanks} = 1; ok (1, "ChopBlanks 1"); ok ($sth->execute ($row->[0]), "execute"); s/ +$// for @$row; if ($DBD::File::VERSION <= 0.38) { s/\s+$// for @$row; # Bug fixed in new DBI } ok ($r = $sth->fetch, "fetch ($row->[0]:2)"); is_deeply ($r, $row, "content ($row->[0]:2)"); } ok ($sti->finish, "finish sti"); undef $sti; ok ($sth->finish, "finish sth"); undef $sth; ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/10_base.t0000644000031300001440000000216713255423076013577 0ustar00merijnusers#!/usr/bin/perl # Test whether the driver can be installed use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); use_ok ("SQL::Statement"); } ok ($SQL::Statement::VERSION, "SQL::Statement::Version $SQL::Statement::VERSION"); do "./t/lib.pl"; my $nano = $ENV{DBI_SQL_NANO}; defined $nano or $nano = "not set"; diag ("Showing relevant versions (DBI_SQL_NANO = $nano)"); diag ("Using DBI version $DBI::VERSION"); diag ("Using DBD::File version 
$DBD::File::VERSION"); diag ("Using SQL::Statement version $SQL::Statement::VERSION"); diag ("Using Text::CSV_XS version $Text::CSV_XS::VERSION"); ok (my $switch = DBI->internal, "DBI->internal"); is (ref $switch, "DBI::dr", "Driver class"); # This is a special case. install_driver should not normally be used. ok (my $drh = DBI->install_driver ("CSV"), "Install driver"); is (ref $drh, "DBI::dr", "Driver class installed"); ok ($drh->{Version}, "Driver version $drh->{Version}"); my $dbh = DBI->connect ("dbi:CSV:"); my $csv_version_info = $dbh->csv_versions (); ok ($csv_version_info, "csv_versions"); diag ($csv_version_info); done_testing (); DBD-CSV-0.62/t/61_meta.t0000644000031300001440000001276313762652541013630 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; use DBI qw(:sql_types); do "./t/lib.pl"; my $cnt = join "" => ; my $tbl; my $expect = [ [ 1, "Knut", "white" ], [ 2, "Inge", "black" ], [ 3, "Beowulf", "CCEE00" ], ]; { my $dbh = Connect (); ok ($tbl = FindNewTable ($dbh), "find new test table"); } TODO: { local $TODO = "Streaming support"; if ($DBD::File::VERSION gt "0.44") { note ("ScalarIO - no col_names"); my $dbh = Connect ({ RaiseError => 0, PrintError => 0 }); open my $data, "<", \$cnt; $dbh->{csv_tables}->{data} = { f_file => $data, skip_rows => 4, }; if (my $sth = $dbh->prepare ("SELECT * FROM data")) { $sth->execute (); my $rows = $sth->fetchall_arrayref (); is_deeply ($rows, $expect, "all rows found - mem-io w/o col_names"); } } if ($DBD::File::VERSION gt "0.44") { note ("ScalarIO - with col_names"); my $dbh = Connect ({ RaiseError => 0, PrintError => 0 }); open my $data, "<", \$cnt; $dbh->{csv_tables}->{data} = { f_file => $data, skip_rows => 4, col_names => [qw(id name color)], }; if (my $sth = $dbh->prepare ("SELECT * FROM data")) { $sth->execute (); my $rows = $sth->fetchall_arrayref (); is_deeply ($rows, $expect, "all rows found - mem-io w col_names"); } } } my $fn = File::Spec->rel2abs (DbFile ($tbl)); open my $fh, 
">", $fn or die "Can't open $fn for writing: $!"; print $fh $cnt; close $fh; note ("File handle - no col_names"); { open my $data, "<", $fn; my $dbh = Connect (); $dbh->{csv_tables}->{data} = { f_file => $data, skip_rows => 4, }; my $sth = $dbh->prepare ("SELECT * FROM data"); $sth->execute (); my $rows = $sth->fetchall_arrayref (); is_deeply ($rows, $expect, "all rows found - file-handle w/o col_names"); is_deeply ($sth->{NAME_lc}, [qw(id name color)], "column names - file-handle w/o col_names"); } note ("File handle - with col_names"); { open my $data, "<", $fn; my $dbh = Connect (); $dbh->{csv_tables}->{data} = { f_file => $data, skip_rows => 4, col_names => [qw(foo bar baz)], }; my $sth = $dbh->prepare ("SELECT * FROM data"); $sth->execute (); my $rows = $sth->fetchall_arrayref (); is_deeply ($rows, $expect, "all rows found - file-handle w col_names"); is_deeply ($sth->{NAME_lc}, [qw(foo bar baz)], "column names - file-handle w col_names"); } note ("File name - no col_names"); { my $dbh = Connect (); $dbh->{csv_tables}->{data} = { f_file => $fn, skip_rows => 4, }; my $sth = $dbh->prepare ("SELECT * FROM data"); $sth->execute (); my $rows = $sth->fetchall_arrayref (); is_deeply ($rows, $expect, "all rows found - file-name w/o col_names"); is_deeply ($sth->{NAME_lc}, [qw(id name color)], "column names - file-name w/o col_names"); } note ("File name - with col_names"); { my $dbh = Connect ({ RaiseError => 1 }); $dbh->{csv_tables}->{data} = { f_file => $fn, skip_rows => 4, col_names => [qw(foo bar baz)], }; my $sth = $dbh->prepare ("SELECT * FROM data"); $sth->execute (); my $rows = $sth->fetchall_arrayref (); is_deeply ($rows, $expect, "all rows found - file-name w col_names" ); is_deeply ($sth->{NAME_lc}, [qw(foo bar baz)], "column names - file-name w col_names" ); # TODO: Next test will hang in open_tables () # 'Cannot obtain exclusive lock on .../output12660/testaa: Interrupted system call' #ok ($dbh->do ("drop table data"), "Drop the table"); } unlink $fn; 
note ("Attribute prefixes"); $fn = "test.csv"; foreach my $x (0, 1) { my ($fpfx, $cpfx) = $x ? ("f_", "csv_") : ("", ""); my $dbh = DBI->connect ("dbi:CSV:", undef, undef, { "${fpfx}schema" => undef, # schema / f_schema "${fpfx}dir" => "files", # .. f_dir "${fpfx}ext" => ".csv/r", # .. f_ext "${cpfx}eol" => "\n", # eol / csv_eol "${cpfx}always_quote" => 1, # .. csv_always_quote "${cpfx}sep_char" => ";", # .. csv_sep_char RaiseError => 1, PrintError => 1, }) or die "$DBI::errstr\n" || $DBI::errstr; my $ffn = "files/$fn"; unlink $ffn; $dbh->{csv_tables}{tst} = { "${fpfx}file" => $fn, # file / f_file col_names => [qw( c_tst s_tst )], }; is_deeply ( [ sort $dbh->tables (undef, undef, undef, undef) ], [qw( fruit tools )], "Tables"); is_deeply ( [ sort keys %{$dbh->{csv_tables}} ], [qw( fruit tools tst )], "Mixed tables"); $dbh->{csv_tables}{fruit}{sep_char} = ","; # should work is_deeply ($dbh->selectall_arrayref ("select * from tools order by c_tool"), [ [ 1, "Hammer" ], [ 2, "Screwdriver" ], [ 3, "Drill" ], [ 4, "Saw" ], [ 5, "Router" ], [ 6, "Hobbyknife" ], ], "Sorted tools"); is_deeply ($dbh->selectall_arrayref ("select * from fruit order by c_fruit"), [ [ 1, "Apple" ], [ 2, "Blueberry" ], [ 3, "Orange" ], [ 4, "Melon" ], ], "Sorted fruit"); # TODO: Ideally, insert should create the file if empty or non-existent # and insert "c_tst";"s_tst" as header line open my $fh, ">", $ffn; close $fh; $dbh->do ("insert into tst values (42, 'Test')"); # "42";"Test" $dbh->do ("update tst set s_tst = 'Done' where c_tst = 42"); # "42";"Done" $dbh->disconnect; open $fh, "<", $ffn or die "$ffn: $!\n"; my @dta = <$fh>; close $fh; is ($dta[-1], qq{"42";"Done"\n}, "Table tst written to $fn"); unlink $ffn; } done_testing (); __END__ id,name,color stupid content only for skipping followed by column names 1,Knut,white 2,Inge,black 3,Beowulf,"CCEE00" DBD-CSV-0.62/t/51_commit.t0000644000031300001440000000442514010747617014161 0ustar00merijnusers#!/usr/bin/perl # Check commit, rollback and 
"AutoCommit" attribute use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my $nano = $ENV{DBI_SQL_NANO}; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], ); my $nano_msg = "SQL::Nano does not support count (*)"; sub RowCount { my ($dbh, $tbl) = @_; if ($nano) { $nano_msg and diag ($nano_msg); $nano_msg = ""; return 0; } local $dbh->{PrintError} = 1; my $sth = $dbh->prepare ("SELECT count (*) FROM $tbl") or return; $sth->execute or return; my $row = $sth->fetch or return; $row->[0]; } # RowCount ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); is ($dbh->{AutoCommit}, 1, "AutoCommit on"); eval { $dbh->{AutoCommit} = 0; }; like ($@, qr{^Can't disable AutoCommit}, "disable"); is ($dbh->{AutoCommit}, 1, "AutoCommit still on"); # Check whether AutoCommit mode works. ok ($dbh->do ("insert into $tbl values (1, 'Jochen')"), "insert 1"); is (RowCount ($dbh, $tbl), $nano ? 0 : 1, "1 row"); ok ($dbh->disconnect, "disconnect"); ok ($dbh = Connect (), "connect"); is (RowCount ($dbh, $tbl), $nano ? 0 : 1, "still 1 row"); # Check whether commit issues a warning in AutoCommit mode ok ($dbh->do ("insert into $tbl values (2, 'Tim')"), "insert 2"); is ($dbh->{AutoCommit}, 1, "AutoCommit on"); { my $got_warn = 0; local $SIG{__WARN__} = sub { $got_warn++; }; eval { ok ($dbh->commit, "commit"); }; is ($got_warn, 1, "warning"); } # Check whether rollback issues a warning in AutoCommit mode # We accept error messages as being legal, because the DBI # requirement of just issuing a warning seems scary. 
ok ($dbh->do ("insert into $tbl values (3, 'Alligator')"), "insert 3"); is ($dbh->{AutoCommit}, 1, "AutoCommit on"); { my $got_warn = 0; local $SIG{__WARN__} = sub { $got_warn++; }; eval { is ($dbh->rollback, 0, "rollback"); }; is ($got_warn, 1, "warning"); is ($dbh->err, undef, "err"); } ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/43_blobs.t0000644000031300001440000000213213255423076013764 0ustar00merijnusers#!/usr/bin/perl # This is a test for correct handling of BLOBS and $dbh->quote () use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my $size = 128; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "BLOB", $size, 0 ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); ok (my $blob = (pack "C*", 0 .. 
255) x $size, "create blob"); is (length $blob, $size * 256, "blob size"); ok (my $qblob = $dbh->quote ($blob), "quote blob"); ok ($dbh->do ("insert into $tbl values (1, ?)", undef, $blob), "insert"); ok (my $sth = $dbh->prepare ("select * from $tbl where id = 1"), "prepare"); ok ($sth->execute, "execute"); ok (my $row = $sth->fetch, "fetch"); is_deeply ($row, [ 1, $blob ], "content"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/48_utf8.t0000644000031300001440000000401213255423076013555 0ustar00merijnusers#!/usr/bin/perl # This is a test for correctly handling UTF-8 content use strict; use warnings; use charnames ":full"; use DBI; use Text::CSV_XS; use Encode qw( encode ); use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; ok (my $dbh = Connect ({ f_ext => ".csv/r", f_schema => undef }), "connect"); ok (my $tbl1 = FindNewTable ($dbh), "find new test table"); ok (my $tbl2 = FindNewTable ($dbh), "find new test table"); my @data = ( "The \N{SNOWMAN} is melting", "U2 should \N{SKULL AND CROSSBONES}", "I \N{BLACK HEART SUIT} my wife", "Unicode makes me \N{WHITE SMILING FACE}", ); ok ("Creating table with UTF-8 content"); foreach my $tbl ($tbl1, $tbl2) { ok (my $csv = Text::CSV_XS->new ({ binary => 1, eol => "\n" }), "New csv"); ok (open (my $fh, ">:utf8", File::Spec->catfile (DbDir (), "$tbl.csv")), "Open CSV"); ok ($csv->print ($fh, [ "id", "str" ]), "CSV print header"); ok ($csv->print ($fh, [ $_, $data[$_ - 1] ]), "CSV row $_") for 1 .. scalar @data; ok (close ($fh), "close"); } { $dbh->{f_encoding} = undef; my $row; ok (my $sth = $dbh->prepare ("select * from $tbl1"), "prepare"); ok ($sth->execute, "execute"); foreach my $i (1 .. 
scalar @data) { ok ($row = $sth->fetch, "fetch $i"); my $v = $data[$i - 1]; utf8::is_utf8 ($v) or $v = encode ("utf8", $v); is_deeply ($row, [ $i , $v ], "unencoded content $i"); } ok ($sth->finish, "finish"); undef $sth; } { $dbh->{f_encoding} = "utf8"; my $row; ok (my $sth = $dbh->prepare ("select * from $tbl2"), "prepare"); ok ($sth->execute, "execute"); foreach my $i (1 .. scalar @data) { ok ($row = $sth->fetch, "fetch $i"); my $v = $data[$i - 1]; ok (utf8::is_utf8 ($v), "is encoded"); is_deeply ($row, [ $i , $v ], "encoded content $i"); } ok ($sth->finish, "finish"); undef $sth; } ok ($dbh->do ("drop table $tbl1"), "drop table"); ok ($dbh->do ("drop table $tbl2"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/71_csv-ext.t0000644000031300001440000000346013261342060014250 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], ); my $dbh; my @ext = ("", ".csv", ".foo", ".txt"); sub DbFile; my $usr = eval { getpwuid $< } || $ENV{USERNAME} || ""; sub Tables { my @tbl = $dbh->tables (); if ($usr) { s/^['"]*$usr["']*\.//i for @tbl; } sort @tbl; } # Tables my $dir = DbDir (); ok ($dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); ok (!-f DbFile ($tbl), "does not exist"); foreach my $ext (@ext) { my $qt = '"'.$tbl.$ext.'"'; like (my $def = TableDefinition ($qt, @tbl_def), qr{^create table $qt}i, "table definition"); ok ($dbh->do ($def), "create table $ext"); ok (-f DbFile ($tbl.$ext), "does exists"); } ok (my @tbl = Tables (), "tables"); is_deeply (\@tbl, [ map { "$tbl$_" } @ext ], "for all ext"); ok ($dbh->disconnect, "disconnect"); undef $dbh; ok ($dbh = DBI->connect ("dbi:CSV:", "", "", { f_dir => $dir, f_ext => ".csv", }), "connect (f_ext => .csv)"); ok (@tbl = Tables (), "tables"); is_deeply (\@tbl, [ map { "$tbl$_" } grep { !m/\.csv$/i } @ext ], "for all 
ext"); ok ($dbh->disconnect, "disconnect"); undef $dbh; ok ($dbh = DBI->connect ("dbi:CSV:", "", "", { f_dir => $dir, f_ext => ".csv/r", }), "connect (f_ext => .csv/r)"); ok (@tbl = Tables (), "tables"); is_deeply (\@tbl, [ $tbl ], "just one"); ok ($dbh->disconnect, "disconnect"); undef $dbh; ok ($dbh = Connect (), "connect"); ok (@tbl = Tables (), "tables"); ok ($dbh->do ("drop table $_"), "drop table $_") for @tbl; ok ($dbh->disconnect, "disconnect"); undef $dbh; ok (rmdir $dir, "no files left"); done_testing (); DBD-CSV-0.62/t/60_misc.t0000644000031300001440000000426013255423076013621 0ustar00merijnusers#!/usr/bin/perl # Misc tests use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, &COL_NULLABLE ], [ "name", "CHAR", 64, &COL_NULLABLE ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); is ($dbh->quote ("tast1"), "'tast1'", "quote"); ok (my $sth = $dbh->prepare ("select * from $tbl where id = 1"), "prepare"); { local $dbh->{PrintError} = 0; my @warn; local $SIG{__WARN__} = sub { push @warn, @_ }; eval { is ($sth->fetch, undef, "fetch w/o execute"); }; is (scalar @warn, 1, "one error"); like ($warn[0], qr/fetch row without a precee?ding execute/, "error message"); } ok ($sth->execute, "execute"); is ($sth->fetch, undef, "fetch no rows"); ok ($sth->finish, "finish"); undef $sth; ok ($sth = $dbh->prepare ("insert into $tbl values (?, ?)"), "prepare ins"); ok ($sth->execute ($_, "Code $_"), "insert $_") for 1 .. 
9; ok ($sth->finish, "finish"); undef $sth; ok ($sth = $dbh->prepare ("select * from $tbl order by id"), "prepare sel"); # Test what happens with two consecutive execute ()'s ok ($sth->execute, "execute 1"); ok ($sth->execute, "execute 2"); # Test all fetch methods ok (my @row = $sth->fetchrow_array, "fetchrow_array"); is_deeply (\@row, [ 1, "Code 1" ], "content"); ok (my $row = $sth->fetchrow_arrayref, "fetchrow_arrayref"); is_deeply ( $row, [ 2, "Code 2" ], "content"); ok ( $row = $sth->fetchrow_hashref, "fetchrow_hashref"); is_deeply ( $row, { id => 3, name => "Code 3" }, "content"); ok (my $all = $sth->fetchall_hashref ("id"), "fetchall_hashref"); is_deeply ($all, { map { ( $_ => { id => $_, name => "Code $_" } ) } 4 .. 9 }, "content"); ok ($sth->execute, "execute"); ok ( $all = $sth->fetchall_arrayref, "fetchall_arrayref"); is_deeply ($all, [ map { [ $_, "Code $_" ] } 1 .. 9 ], "content"); ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/31_delete.t0000644000031300001440000000424013255423076014124 0ustar00merijnusers#!/usr/bin/perl # test if delete from shrinks table use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, &COL_NULLABLE ], [ "name", "CHAR", 64, &COL_NULLABLE ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); my $sz = 0; my $tbl_file = DbFile ($tbl); ok ($sz = -s $tbl_file, "file exists"); ok ($dbh->do ("insert into $tbl values (1, 'Foo')"), "insert"); ok ($sz < -s $tbl_file, "file grew"); $sz = -s $tbl_file; ok ($dbh->do ("delete from $tbl where id = 1"), "delete single"); ok ($sz > -s $tbl_file, "file shrank"); $sz = -s $tbl_file; ok ($dbh->do ("insert into $tbl (id) values ($_)"), "insert $_") for 1 ..
10; ok ($sz < -s $tbl_file, "file grew"); { local $dbh->{PrintWarn} = 0; local $dbh->{PrintError} = 0; is ($dbh->do ("delete from wxyz where id = 99"), undef, "delete non-existing tbl"); } my $zero_ret = $dbh->do ("delete from $tbl where id = 99"); ok ($zero_ret, "true non-existing delete RV (via do)"); cmp_ok ($zero_ret, "==", 0, "delete non-existing row (via do)"); is ($dbh->do ("delete from $tbl where id = 9"), 1, "delete single (count) (via do)"); is ($dbh->do ("delete from $tbl where id > 7"), 2, "delete more (count) (via do)"); $zero_ret = $dbh->prepare ("delete from $tbl where id = 88")->execute; ok ($zero_ret, "true non-existing delete RV (via prepare/execute)"); cmp_ok ($zero_ret, "==", 0, "delete non-existing row (via prepare/execute)"); is ($dbh->prepare ("delete from $tbl where id = 7")->execute, 1, "delete single (count) (via prepare/execute)"); is ($dbh->prepare ("delete from $tbl where id > 4")->execute, 2, "delete more (count) (via prepare/execute)"); ok ($dbh->do ("delete from $tbl"), "delete all"); is (-s $tbl_file, $sz, "file reflects empty table"); ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); ok (!-f $tbl_file, "file removed"); done_testing (); DBD-CSV-0.62/t/30_insertfetch.t0000644000031300001440000000534513255423076015206 0ustar00merijnusers#!/usr/local/bin/perl # Test row insertion and retrieval use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], [ "val", "INTEGER", 4, 0 ], [ "txt", "CHAR", 64, 0 ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); $tbl ||= "tmp99"; eval { local $SIG{__WARN__} = sub {}; $dbh->do ("drop table $tbl"); }; like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); my $sz = 0; ok ($dbh->do ($def), "create table"); my $tbl_file = DbFile ($tbl); ok ($sz = -s $tbl_file, "file exists"); ok 
($dbh->do ("insert into $tbl values ". "(1, 'Alligator Descartes', 1111, 'Some Text')"), "insert"); ok ($sz < -s $tbl_file, "file grew"); $sz = -s $tbl_file; ok ($dbh->do ("insert into $tbl (id, name, val, txt) values ". "(2, 'Crocodile Dundee', 2222, 'Down Under')"), "insert with field names"); ok ($sz < -s $tbl_file, "file grew"); ok (my $sth = $dbh->prepare ("select * from $tbl where id = 1"), "prepare"); is (ref $sth, "DBI::st", "handle type"); ok ($sth->execute, "execute"); ok (my $row = $sth->fetch, "fetch"); is (ref $row, "ARRAY", "returned a list"); is ($sth->errstr, undef, "no error"); is_deeply ($row, [ 1, "Alligator Descartes", 1111, "Some Text" ], "content"); ok ($sth->finish, "finish"); undef $sth; # Try some other capitilization ok ($dbh->do ("DELETE FROM $tbl WHERE id = 1"), "delete"); # Now, try SELECT'ing the row out. This should fail. ok ($sth = $dbh->prepare ("select * from $tbl where id = 1"), "prepare"); is (ref $sth, "DBI::st", "handle type"); ok ($sth->execute, "execute"); is ($sth->fetch, undef, "fetch"); is ($sth->errstr, undef, "error"); # ??? 
ok ($sth->finish, "finish"); undef $sth; ok ($sth = $dbh->prepare ("insert into $tbl values (?, ?, ?, ?)"), "prepare insert"); ok ($sth->execute (3, "Babar", 3333, "Elephant"), "insert prepared"); ok ($sth->finish, "finish"); undef $sth; ok ($sth = $dbh->prepare ("insert into $tbl (id, name, val, txt) values (?, ?, ?, ?)"), "prepare insert with field names"); ok ($sth->execute (4, "Vischje", 33, "in het riet"), "insert prepared"); ok ($sth->finish, "finish"); undef $sth; ok ($dbh->do ("delete from $tbl"), "delete all"); ok ($dbh->do ("insert into $tbl (id) values (0)"), "insert just one field"); { local (@ARGV) = DbFile ($tbl); my @csv = <>; s/\r?\n\Z// for @csv; is (scalar @csv, 2, "Just two lines"); is ($csv[0], "id,name,val,txt", "header"); is ($csv[1], "0,,,", "data"); } ok ($dbh->do ("drop table $tbl"), "drop"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/74-class.t0000644000031300001440000000205614010720500013675 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } my $tbl = "foo_class$$"; my $fnm = "$tbl.csv"; END { unlink $fnm; } my @fail; foreach my $class ("", "Text::CSV_XS", "Text::CSV", "Text::CSV_SX") { my %opt = (f_ext => ".csv/r"); if ($class) { $opt{csv_class} = $class; eval "require $class"; # Ignore errors, let the DBD fail } # Connect won't fail ... 
ok (my $dbh = DBI->connect ("dbi:CSV:", undef, undef, \%opt), "Connect $class"); my @warn; my $sth = eval { local $SIG{__WARN__} = sub { push @warn => @_ }; $dbh->do ("create table $tbl (c_tbl integer)"); }; if ($@ || !$sth) { note ("$class is not supported"); like ("@warn", qr{"new" via package "$class"}); push @fail => $class; next; } $class and note (join "-" => $class, $class->VERSION); ok ($dbh->do ("drop table $tbl"), "drop table"); ok (!-f $fnm, "is removed"); ok ($dbh->disconnect, "disconnect"); undef $dbh; } ok (@fail == 1 || @fail == 2, "2 or 3 out of 4 should pass"); done_testing (); DBD-CSV-0.62/t/44_listfields.t0000644000031300001440000000310113255423076015023 0ustar00merijnusers#!/usr/bin/perl # This is a test for statement attributes being present appropriately. use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my $nano = $ENV{DBI_SQL_NANO}; my @tbl_def = ( [ "id", "INTEGER", 4, &COL_KEY ], [ "name", "CHAR", 64, &COL_NULLABLE ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); ok (my $sth = $dbh->prepare ("select * from $tbl"), "prepare"); ok ($sth->execute, "execute"); is ($sth->{NUM_OF_FIELDS}, scalar @tbl_def, "NUM_OF_FIELDS"); is ($sth->{NUM_OF_PARAMS}, 0, "NUM_OF_PARAMS"); is ($sth->{NAME_lc}[0], lc $tbl_def[0][0], "NAME_lc"); is ($sth->{NAME_uc}[1], uc $tbl_def[1][0], "NAME_uc"); is_deeply ($sth->{NAME_lc_hash}, { map { ( lc $tbl_def[$_][0] => $_ ) } 0 .. $#tbl_def }, "NAME_lc_hash"); if ($DBD::File::VERSION gt "0.43") { is ($sth->{TYPE}[0], $nano ? &SQL_VARCHAR : &SQL_INTEGER, "TYPE 1"); is ($sth->{TYPE}[1], $nano ? &SQL_VARCHAR : &SQL_CHAR, "TYPE 2"); is ($sth->{PRECISION}[0], 0, "PRECISION 1"); is ($sth->{PRECISION}[1], $nano ? 0 : 64, "PRECISION 2"); is ($sth->{NULLABLE}[0], $nano ? 
1 : 0, "NULLABLE 1"); is ($sth->{NULLABLE}[1], 1, "NULLABLE 2"); } ok ($sth->finish, "finish"); #s ($sth->{NUM_OF_FIELDS}, 0, "NUM_OF_FIELDS"); undef $sth; ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); done_testing (); DBD-CSV-0.62/t/55_dir_search.t0000644000031300001440000000431513263051645014774 0ustar00merijnusers#!/pro/bin/perl use strict; use warnings; use Test::More; $ENV{AUTOMATED_TESTING} and plan skip_all => "No folder scanning during automated tests"; use_ok ("DBI"); require "./t/lib.pl"; my $tstdir = DbDir (); my @extdir = ("t", File::Spec->tmpdir ()); if (open my $fh, "<", "tests.skip") { grep m/\b tmpdir \b/x => <$fh> and pop @extdir; } my $dbh = DBI->connect ("dbi:CSV:", undef, undef, { f_schema => undef, f_dir => DbDir (), f_dir_search => \@extdir, f_ext => ".csv/r", f_lock => 2, f_encoding => "utf8", RaiseError => 1, PrintError => 1, FetchHashKeyName => "NAME_lc", }) or die $DBI::errstr || $DBI::errstr || "", "\n"; my @dsn = $dbh->data_sources; my %dir = map { m{^dbi:CSV:.*\bf_dir=([^;]+)}i; my $folder = $1; # data_sources returns the string just one level to many $folder =~ m{\\[;\\]} and $folder =~ s{\\(.)}{$1}g; ($folder => 1); } @dsn; # Use $test_dir $dbh->do ("create table fox (c_fox integer, fox char (1))"); $dbh->do ("insert into fox values ($_, $_)") for 1, 2, 3; my @test_dirs = ($tstdir, @extdir); is ($dir{$_}, 1, "DSN for $_") for @test_dirs; my %tbl = map { $_ => 1 } $dbh->tables (undef, undef, undef, undef); is ($tbl{$_}, 1, "Table $_ found") for qw( tmp fox ); my %data = ( tmp => { # t/tmp.csv 1 => "ape", 2 => "monkey", 3 => "gorilla", }, fox => { # output123/fox.csv 1 => 1, 2 => 2, 3 => 3, }, ); foreach my $tbl ("tmp", "fox") { my $sth = $dbh->prepare ("select * from $tbl"); $sth->execute; while (my $row = $sth->fetch) { is ($row->[1], $data{$tbl}{$row->[0]}, "$tbl ($row->[0], ...)"); } } # Do not drop table fox yet ok ($dbh->disconnect, "disconnect"); chdir DbDir (); my @f = grep m/^fox\.csv/i 
=> glob "*.*"; is (scalar @f, 1, "fox.csv still here"); SKIP: { $DBD::File::VERSION < 0.43 and skip "DBD::File-0.43 required", 1; is (DBI->connect ("dbi:CSV:", undef, undef, { f_schema => undef, f_dir => "./undefined", f_ext => ".csv/r", RaiseError => 0, PrintError => 0, }), undef, "Should not be able to connect to non-exiting folder"); } # drop table fox; @f and unlink @f; done_testing; DBD-CSV-0.62/t/20_createdrop.t0000644000031300001440000000126013255423076015007 0ustar00merijnusers#!/usr/bin/perl # Test if a table can be created and dropped use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI") } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], ); ok (my $dbh = Connect (), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); my $tbl_file = DbFile ($tbl); ok (-s $tbl_file, "file exists"); ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); ok (!-f $tbl_file, "file removed"); done_testing (); DBD-CSV-0.62/t/72_csv-schema.t0000644000031300001440000000216113255423076014720 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; BEGIN { use_ok ("DBI"); } do "./t/lib.pl"; my @tbl_def = ( [ "id", "INTEGER", 4, 0 ], [ "name", "CHAR", 64, 0 ], ); my $dir = DbDir (); ok (my $dbh = DBI->connect ("dbi:CSV:", "", "", { f_dir => $dir, }), "connect"); ok (my $tbl = FindNewTable ($dbh), "find new test table"); like (my $def = TableDefinition ($tbl, @tbl_def), qr{^create table $tbl}i, "table definition"); ok ($dbh->do ($def), "create table"); my @tbl = $dbh->tables (); if (my $usr = eval { getpwuid $< }) { s/^(['"`])(.+)\1\./$2./ for @tbl; is_deeply (\@tbl, [ qq{$usr.$tbl} ], "tables"); } else { is_deeply (\@tbl, [ qq{$tbl} ], "tables"); } ok ($dbh->disconnect, "disconnect"); undef $dbh; ok ($dbh = DBI->connect ("dbi:CSV:", "", 
"", { f_schema => undef, f_dir => $dir, }), "connect (f_schema => undef)"); is_deeply ([ $dbh->tables () ], [ $tbl ], "tables"); ok ($dbh->do ("drop table $tbl"), "drop table"); ok ($dbh->disconnect, "disconnect"); undef $dbh; ok (rmdir $dir, "no files left"); done_testing (); DBD-CSV-0.62/t/80_rt.t0000644000031300001440000002365613706573157013336 0ustar00merijnusers#!/usr/bin/perl use strict; use warnings; use Test::More; use DBI qw(:sql_types); if ($ENV{DBI_SQL_NANO}) { ok ($ENV{DBI_SQL_NANO}, "These tests are not suit for SQL::Nano"); done_testing (); exit 0; } do "./t/lib.pl"; my ($rt, %input, %desc); while () { if (s/^Ť(\d+)ť\s*-?\s*//) { chomp; $rt = $1; $desc {$rt} = $_; $input{$rt} = []; next; } s/\\([0-7]{1,3})/chr oct $1/ge; push @{$input{$rt}}, $_; } sub rt_file { return File::Spec->catfile (DbDir (), "rt$_[0]"); } # rt_file { $rt = 18477; ok ($rt, "RT-$rt - $desc{$rt}"); my @lines = @{$input{$rt}}; open my $fh, ">", rt_file ($rt); print $fh @lines; close $fh; ok (my $dbh = Connect (), "connect"); ok (my $sth = $dbh->prepare ("select * from rt$rt"), "prepare"); ok ($sth->execute, "execute"); ok ($sth = $dbh->prepare (qq; select SEGNO, OWNER, TYPE, NAMESPACE, EXPERIMENT, STREAM, UPDATED, SIZE from rt$rt where NAMESPACE = ? and EXPERIMENT LIKE ? and STREAM LIKE ? 
;), "prepare"); ok ($sth->execute ("RT", "%", "%"), "execute"); ok (my $row = $sth->fetch, "fetch"); is_deeply ($row, [ 14, "root", "bug", "RT", "not really", "fast", 20090501, 42 ], "content"); ok ($sth->finish, "finish"); ok ($dbh->do ("drop table rt$rt"), "drop table"); ok ($dbh->disconnect, "disconnect"); } { $rt = 20550; ok ($rt, "RT-$rt - $desc{$rt}"); ok (my $dbh = Connect (), "connect"); ok ($dbh->do ("CREATE TABLE rt$rt(test INT, PRIMARY KEY (test))"), "prepare"); ok ($dbh->do ("drop table rt$rt"), "drop table"); ok ($dbh->disconnect, "disconnect"); } { $rt = 33764; ok ($rt, "RT-$rt - $desc{$rt}"); my @lines = @{$input{$rt}}; open my $fh, ">", rt_file ($rt); print $fh @lines; close $fh; ok (my $dbh = Connect (), "connect"); ok (my $sth = $dbh->prepare ("select * from rt$rt"), "prepare"); eval { local $dbh->{PrintError} = 0; local $SIG{__WARN__} = sub { }; is ($sth->execute, undef, "execute"); like ($dbh->errstr, qr{Error 2034 while reading}, "error message"); is (my $row = $sth->fetch, undef, "fetch"); like ($dbh->errstr, qr{fetch row without a precee?ding execute}, "error message"); }; ok ($sth->finish, "finish"); ok ($dbh->do ("drop table rt$rt"), "drop table"); ok ($dbh->disconnect, "disconnect"); } { $rt = 43010; ok ($rt, "RT-$rt - $desc{$rt}"); my @tbl = ( [ "rt${rt}_0" => [ [ "id", "INTEGER", 4, &COL_KEY ], [ "one", "INTEGER", 4, &COL_NULLABLE ], [ "two", "INTEGER", 4, &COL_NULLABLE ], ]], [ "rt${rt}_1" => [ [ "id", "INTEGER", 4, &COL_KEY ], [ "thre", "INTEGER", 4, &COL_NULLABLE ], [ "four", "INTEGER", 4, &COL_NULLABLE ], ]], ); ok (my $dbh = Connect (), "connect"); $dbh->{csv_null} = 1; foreach my $t (@tbl) { like (my $def = TableDefinition ($t->[0], @{$t->[1]}), qr{^create table $t->[0]}i, "table def"); ok ($dbh->do ($def), "create table"); } ok ($dbh->do ("INSERT INTO $tbl[0][0] (id, one) VALUES (8, 1)"), "insert 1"); ok ($dbh->do ("INSERT INTO $tbl[1][0] (id, thre) VALUES (8, 3)"), "insert 2"); ok (my $row = $dbh->selectrow_hashref (join (" ", 
"SELECT *", "FROM $tbl[0][0]", "JOIN $tbl[1][0]", "USING (id)")), "join 1 2"); is_deeply ($row, { id => 8, one => 1, two => undef, thre => 3, four => undef }, "content"); ok ($dbh->do ("drop table $_"), "drop table") for map { $_->[0] } @tbl; ok ($dbh->disconnect, "disconnect"); } { $rt = 44583; ok ($rt, "RT-$rt - $desc{$rt}"); my @lines = @{$input{$rt}}; open my $fh, ">", rt_file ($rt); print $fh @lines; close $fh; ok (my $dbh = Connect (), "connect"); ok (my $sth = $dbh->prepare ("select * from rt$rt"), "prepare"); ok ($sth->execute, "execute"); is_deeply ($sth->{NAME_lc}, [qw( c_tab s_tab )], "field names"); ok ($sth = $dbh->prepare (qq; select c_tab, s_tab from rt$rt where c_tab = 1 ;), "prepare"); ok ($sth->execute (), "execute"); ok (my $row = $sth->fetch, "fetch"); is_deeply ($row, [ 1, "ok" ], "content"); ok ($sth->finish, "finish"); ok ($dbh = Connect ({ raw_headers => 1 }), "connect"); ok ($sth = $dbh->prepare ("select * from rt$rt"), "prepare"); # $sth is `empty' and should fail on all actions $sth->{NAME_lc} # this can return undef or an empty list ? is_deeply ($sth->{NAME_lc}, [], "field names") : is ($sth->{NAME_lc}, undef, "field names"); ok ($sth->finish, "finish"); ok ($dbh->do ("drop table rt$rt"), "drop table"); ok ($dbh->disconnect, "disconnect"); } { $rt = 46627; ok ($rt, "RT-$rt - $desc{$rt}"); ok (my $dbh = Connect ({f_ext => ".csv/r"}),"connect"); unlink "RT$rt.csv"; ok ($dbh->do (" create table RT$rt ( name varchar, id integer )"), "create"); ok (my $sth = $dbh->prepare (" insert into RT$rt values (?, ?)"), "prepare ins"); ok ($sth->execute ("Steffen", 1), "insert 1"); ok ($sth->execute ("Tux", 2), "insert 2"); ok ($sth->finish, "finish"); ok ($dbh->do (" insert into RT$rt ( name, id, ) values (?, ?)", undef, "", 3), "insert 3"); ok ($sth = $dbh->prepare (" update RT$rt set name = ? where id = ?" 
), "prepare upd"); ok ($sth->execute ("Tim", 1), "update"); ok ($sth->execute ("Tux", 2), "update"); ok ($sth->finish, "finish"); my $rtfn = DbFile ("RT$rt.csv"); -f $rtfn or $rtfn = DbFile ("rt$rt.csv"); ok (-f $rtfn, "file $rtfn exists"); ok (-s $rtfn, "file is not empty"); open my $fh, "<", $rtfn; ok ($fh, "open file"); binmode $fh; is (scalar <$fh>, qq{name,id\r\n}, "Field names"); is (scalar <$fh>, qq{Tim,1\r\n}, "Record 1"); is (scalar <$fh>, qq{Tux,2\r\n}, "Record 2"); is (scalar <$fh>, qq{,3\r\n}, "Record 3"); is (scalar <$fh>, undef, "EOF"); close $fh; ok ($dbh->do ("drop table RT$rt"), "drop"); ok ($dbh->disconnect, "disconnect"); } { $rt = 51090; ok ($rt, "RT-$rt - $desc{$rt}"); my @lines = @{$input{$rt}}; my @dbitp = ( SQL_INTEGER, SQL_LONGVARCHAR, SQL_NUMERIC ); my @csvtp = ( 1, 0, 2 ); open my $fh, ">", rt_file ($rt); print $fh @lines; close $fh; ok (my $dbh = Connect ({ f_lock => 0 }), "connect"); $dbh->{csv_tables}{rt51090}{types} = [ @dbitp ]; ok (my $sth = $dbh->prepare ("select * from rt$rt"), "prepare"); is_deeply ($dbh->{csv_tables}{rt51090}{types}, \@dbitp, "set types (@dbitp)"); ok ($sth->execute (), "execute"); is_deeply ($dbh->{csv_tables}{rt51090}{types}, \@csvtp, "get types (@csvtp)"); ok ($dbh->do ("drop table RT$rt"), "drop"); ok ($dbh->disconnect, "disconnect"); } { $rt = 61168; ok ($rt, "RT-$rt - $desc{$rt}"); my @lines = @{$input{$rt}}; open my $fh, ">", rt_file ($rt); print $fh @lines; close $fh; ok (my $dbh = Connect ({ f_lock => 0 }), "connect"); $dbh->{csv_tables}{rt61168}{sep_char} = ";"; cmp_ok ($dbh->{csv_tables}{rt61168}{csv_in} {sep_char}, "eq", ";", "cvs_in adjusted"); cmp_ok ($dbh->{csv_tables}{rt61168}{csv_out}{sep_char}, "eq", ";", "cvs_out adjusted"); ok (my $sth = $dbh->prepare ("select * from rt$rt"), "prepare"); ok ($sth->execute (), "execute"); ok (my $all_rows = $sth->fetchall_arrayref({}), "fetch"); my $wanted_rows = [ { header1 => "Volki", header2 => "Bolki", }, { header1 => "Zolki", header2 => "Solki", }, ]; 
is_deeply ($all_rows, $wanted_rows, "records"); ok ($dbh->do ("drop table RT$rt"), "drop"); ok ($dbh->disconnect, "disconnect"); } { $rt = 80078; ok ($rt, "RT-$rt - $desc{$rt}"); my @lines = @{$input{$rt}}; my $tbl = "rt$rt"; open my $fh, ">", rt_file ($rt); print $fh @lines; close $fh; ok (my $dbh = Connect ({ csv_sep_char => "\t", csv_quote_char => undef, csv_escape_char => "\\", csv_allow_loose_escapes => 1, RaiseError => 1, PrintError => 1, }), "connect"); $dbh->{csv_tables}{$tbl}{col_names} = []; ok (my $sth = $dbh->prepare ("select * from $tbl"), "prepare"); eval { ok ($sth->execute, "execute"); ok (!$@, "no error"); is (scalar @{$sth->fetchall_arrayref}, 2, # not part of 80078 "empty col_names treat skip_first_row as false"); }; ok ($dbh->do ("drop table $tbl"), "drop"); ok ($dbh->disconnect, "disconnect"); } done_testing (); __END__ Ť357ť - build failure of DBD::CSV Ť2193ť - DBD::File fails on create Ť5392ť - No way to process Unicode CSVs Ť6040ť - Implementing "Active" attribute for driver Ť7214ť - error with perl-5.8.5 Ť7877ť - make test says "t/40bindparam......FAILED test 14" Ť8525ť - Build failure due to output files in DBD-CSV-0.21.tar.gz Ť11094ť - hint in docs about unix eol Ť11763ť - dependency revision incompatibility Ť14280ť - wish: detect typo'ed connect strings Ť17340ť - Update statements does not work properly Ť17744ť - Using placeholder in update statement causes error Ť18477ť - use of prepare/execute with placeholders fails segno,owner,type,namespace,experiment,stream,updated,size 14,root,bug,RT,"not really",fast,20090501,42 Ť20340ť - csv_eol Ť20550ť - Using "Primary key" leads to error Ť31395ť - eat memory Ť33764ť - $! 
is not an indicator of failure c_tab,s_tab 1,correct 2,Fal"se 3,Wr"ong Ť33767ť - (No subject) Ť43010ť - treatment of nulls scrambles joins Ť44583ť - DBD::CSV cannot read CSV files with dots on the first line c.tab,"s,tab" 1,ok Ť46627ť - DBD::File is damaged now Ť51090ť - Report a bug in DBD-CSV integer,longvarchar,numeric Ť61168ť - Specifying separation character per table does not work "HEADER1";"HEADER2" Volki;Bolki Zolki;Solki Ť80078ť - bug in DBD::CSV causes select to fail a b c d e f g h DBD-CSV-0.62/lib/0000755000031300001440000000000014741200541012464 5ustar00merijnusersDBD-CSV-0.62/lib/DBD/0000755000031300001440000000000014741200541013055 5ustar00merijnusersDBD-CSV-0.62/lib/DBD/CSV.pm0000755000031300001440000011065014741175353014070 0ustar00merijnusers#!/usr/bin/perl # # DBD::CSV - A DBI driver for CSV and similar structured files # # This module is currently maintained by # # H.Merijn Brand # # See for full acknowledgements the last two pod sections in this file use strict; use warnings; require DynaLoader; require DBD::File; require IO::File; our @f_SHORT = qw( class file dir dir_search ext lock lockfile schema encoding ); our @c_SHORT = qw( eof eol sep_char quote_char escape_char binary decode_utf8 auto_diag diag_verbose blank_is_undef empty_is_undef allow_whitespace allow_loose_quotes allow_loose_escapes allow_unquoted_escape always_quote quote_empty quote_space escape_null quote_binary keep_meta_info callbacks ); package DBD::CSV; use strict; our @ISA = qw( DBD::File ); our $VERSION = "0.62"; our $ATTRIBUTION = "DBD::CSV $DBD::CSV::VERSION by H.Merijn Brand"; our $err = 0; # holds error code for DBI::err our $errstr = ""; # holds error string for DBI::errstr our $sqlstate = ""; # holds error state for DBI::state our $drh = undef; # holds driver handle once initialized sub CLONE { # empty method: prevent warnings when threads are cloned } # CLONE # --- DRIVER ------------------------------------------------------------------- package DBD::CSV::dr; use 
strict;

use Text::CSV_XS ();

our @CSV_TYPES = (
    Text::CSV_XS::IV (), # SQL_TINYINT
    Text::CSV_XS::IV (), # SQL_BIGINT
    Text::CSV_XS::PV (), # SQL_LONGVARBINARY
    Text::CSV_XS::PV (), # SQL_VARBINARY
    Text::CSV_XS::PV (), # SQL_BINARY
    Text::CSV_XS::PV (), # SQL_LONGVARCHAR
    Text::CSV_XS::PV (), # SQL_ALL_TYPES
    Text::CSV_XS::PV (), # SQL_CHAR
    Text::CSV_XS::NV (), # SQL_NUMERIC
    Text::CSV_XS::NV (), # SQL_DECIMAL
    Text::CSV_XS::IV (), # SQL_INTEGER
    Text::CSV_XS::IV (), # SQL_SMALLINT
    Text::CSV_XS::NV (), # SQL_FLOAT
    Text::CSV_XS::NV (), # SQL_REAL
    Text::CSV_XS::NV (), # SQL_DOUBLE
    );

our @ISA = qw( DBD::File::dr );

our $imp_data_size     = 0;
our $data_sources_attr = undef;

sub connect {
    my ($drh, $dbname, $user, $auth, $attr) = @_;
    if ($attr && ref $attr eq "HASH") {
	# Top-level aliases
	foreach my $key (grep { exists $attr->{$_} } @f_SHORT) {
	    my $f_key = "f_$key";
	    exists $attr->{$f_key} and next;
	    $attr->{$f_key} = delete $attr->{$key};
	    }
	foreach my $key (grep { exists $attr->{$_} } @c_SHORT) {
	    my $c_key = "csv_$key";
	    exists $attr->{$c_key} and next;
	    $attr->{$c_key} = delete $attr->{$key};
	    }
	}
    my $dbh = $drh->DBD::File::dr::connect ($dbname, $user, $auth, $attr);
    $dbh and $dbh->{Active} = 1;
    $dbh;
    } # connect

# --- DATABASE -----------------------------------------------------------------
package DBD::CSV::db;

use strict;

our $imp_data_size = 0;
our @ISA = qw( DBD::File::db );

sub set_versions {
    my $this = shift;
    $this->{csv_version} = $DBD::CSV::VERSION;
    return $this->SUPER::set_versions ();
    } # set_versions

my %csv_xs_attr;

sub init_valid_attributes {
    my $dbh = shift;

    # Straight from Text::CSV_XS.pm
    my @xs_attr = @c_SHORT;
    @csv_xs_attr{@xs_attr} = ();

    # Dynamically add "new" attributes - available in Text::CSV_XS-1.20
    if (my @ka = eval { Text::CSV_XS->known_attributes }) {
	for (grep { m/^[a-su-z]/ && !exists $csv_xs_attr{$_} } @ka) {
	    push @xs_attr => $_;
	    $csv_xs_attr{$_} = undef;
	    }
	};

    $dbh->{csv_xs_valid_attrs} = [ @xs_attr ];

    $dbh->{csv_valid_attrs} = { map {("csv_$_" => 1 )}
@xs_attr, qw( class tables in csv_in out csv_out skip_first_row null sep quote escape bom )}; $dbh->{csv_readonly_attrs} = { }; $dbh->{csv_meta} = "csv_tables"; return $dbh->SUPER::init_valid_attributes (); } # init_valid_attributes sub get_csv_versions { my ($dbh, $table) = @_; $table ||= ""; my $class = $dbh->{ImplementorClass}; $class =~ s/::db$/::Table/; my $meta; $table and (undef, $meta) = $class->get_table_meta ($dbh, $table, 1); unless ($meta) { $meta = {}; $class->bootstrap_table_meta ($dbh, $meta, $table); } my $dvsn = eval { $meta->{csv_class}->VERSION (); }; my $dtype = $meta->{csv_class}; $dvsn and $dtype .= " ($dvsn)"; return sprintf "%s using %s", $dbh->{csv_version}, $dtype; } # get_csv_versions sub get_info { my ($dbh, $info_type) = @_; require DBD::CSV::GetInfo; my $v = $DBD::CSV::GetInfo::info{int ($info_type)}; ref $v eq "CODE" and $v = $v->($dbh); return $v; } # get_info sub type_info_all { # my $dbh = shift; require DBD::CSV::TypeInfo; return [@$DBD::CSV::TypeInfo::type_info_all]; } # type_info_all # --- STATEMENT ---------------------------------------------------------------- package DBD::CSV::st; use strict; our $imp_data_size = 0; our @ISA = qw( DBD::File::st ); package DBD::CSV::Statement; use strict; use Carp; our @ISA = qw( DBD::File::Statement ); package DBD::CSV::Table; use strict; use Carp; our @ISA = qw( DBD::File::Table ); my %compat_map; { my %class_mapped; sub _register_compat_map { my $class = shift; my $x = 0; if (!%compat_map) { $compat_map{$_} = "f_$_" for @f_SHORT; $compat_map{$_} = "csv_$_" for @c_SHORT; $x++; } if ($class and !$class_mapped{$class}++ and my @ka = eval { $class->known_attributes }) { # exclude types $compat_map{$_} = "csv_$_" for grep m/^[a-su-z]/ => @ka; $x++; } if ($x) { __PACKAGE__->register_compat_map (\%compat_map); } } # _register_compat_map } #sub DESTROY { # my $self = shift or return; # # $self->{meta} and delete $self->{meta}{csv_in}; # } # DESTROY sub bootstrap_table_meta { my ($self, $dbh, 
$meta, $table) = @_; $meta->{csv_class} ||= $dbh->{csv_class} || "Text::CSV_XS"; $meta->{csv_eol} ||= $dbh->{csv_eol} || "\r\n"; _register_compat_map ($meta->{csv_class}); exists $meta->{csv_skip_first_row} or $meta->{csv_skip_first_row} = $dbh->{csv_skip_first_row}; exists $meta->{csv_bom} or $meta->{csv_bom} = exists $dbh->{bom} ? $dbh->{bom} : $dbh->{csv_bom}; $self->SUPER::bootstrap_table_meta ($dbh, $meta, $table); } # bootstrap_table_meta sub init_table_meta { my ($self, $dbh, $meta, $table) = @_; _register_compat_map ($meta->{csv_class}); $self->SUPER::init_table_meta ($dbh, $table, $meta); my $csv_in = $meta->{csv_in} || $dbh->{csv_csv_in}; unless ($csv_in) { my %opts = ( binary => 1, auto_diag => 1 ); # Allow specific Text::CSV_XS options foreach my $attr (@{$dbh->{csv_xs_valid_attrs}}) { $attr eq "eol" and next; # Handles below exists $dbh->{"csv_$attr"} and $opts{$attr} = $dbh->{"csv_$attr"}; } $dbh->{csv_null} || $meta->{csv_null} and $opts{Text::CSV_XS->version < 1.18 ? "always_quote" : "quote_empty"} = $opts{blank_is_undef} = 1; my $class = $meta->{csv_class}; my $eol = $meta->{csv_eol}; $eol =~ m/^\A(?:[\r\n]|\r\n)\Z/ or $opts{eol} = $eol; for ([ "sep", ',' ], [ "quote", '"' ], [ "escape", '"' ], ) { my ($attr, $def) = ($_->[0]."_char", $_->[1]); $opts{$attr} = exists $meta->{$attr} ? $meta->{$attr} : exists $dbh->{"csv_$attr"} ? 
$dbh->{"csv_$attr"} : $def; } $meta->{csv_in} = $class->new (\%opts) or $class->error_diag; $opts{eol} = $eol; $meta->{csv_out} = $class->new (\%opts) or $class->error_diag; } } # init_table_meta sub table_meta_attr_changed { my ($class, $meta, $attr, $value) = @_; _register_compat_map ($meta->{csv_class}); (my $csv_attr = $attr) =~ s/^csv_//; if (exists $csv_xs_attr{$csv_attr}) { for ("csv_in", "csv_out") { exists $meta->{$_} && exists $meta->{$_}{$csv_attr} and $meta->{$_}{$csv_attr} = $value; } } $class->SUPER::table_meta_attr_changed ($meta, $attr, $value); } # table_meta_attr_changed sub open_data { my ($self, $meta, $attrs, $flags) = @_; $self->SUPER::open_file ($meta, $attrs, $flags); if ($meta && $meta->{fh}) { $attrs->{csv_csv_in} = $meta->{csv_in}; $attrs->{csv_csv_out} = $meta->{csv_out}; if (my $types = $meta->{types}) { # XXX $meta->{types} is nowhere assigned and should better $meta->{csv_types} # The 'types' array contains DBI types, but we need types # suitable for Text::CSV_XS. my $t = []; for (@{$types}) { $_ = $_ ? $DBD::CSV::dr::CSV_TYPES[$_ + 6] || Text::CSV_XS::PV () : Text::CSV_XS::PV (); push @$t, $_; } $meta->{types} = $t; } if (!$flags->{createMode}) { my $array; my $skipRows = defined $meta->{skip_rows} ? $meta->{skip_rows} : defined $meta->{csv_skip_first_row} ? 1 : exists $meta->{col_names} ? 
0 : 1; defined $meta->{skip_rows} or $meta->{skip_rows} = $skipRows; if ($meta->{csv_bom}) { my @hdr = $attrs->{csv_csv_in}->header ($meta->{fh}) or croak "Failed using the header row: ".$attrs->{csv_csv_in}->error_diag; $meta->{col_names} ||= \@hdr; $skipRows and $skipRows = 0; } if ($skipRows--) { $array = $attrs->{csv_csv_in}->getline ($meta->{fh}) or croak "Missing first row due to ".$attrs->{csv_csv_in}->error_diag; unless ($meta->{raw_header}) { s/\W/_/g for @$array; } defined $meta->{col_names} or $meta->{col_names} = $array; while ($skipRows--) { $attrs->{csv_csv_in}->getline ($meta->{fh}); } } # lockMode is set 1 for DELETE, INSERT or UPDATE # no other case need seeking $flags->{lockMode} and # $meta->{fh}->can ("tell") and $meta->{first_row_pos} = $meta->{fh}->tell (); exists $meta->{col_names} and $array = $meta->{col_names}; if (!$meta->{col_names} || !@{$meta->{col_names}}) { # No column names given; fetch first row and create default # names. my $ar = $meta->{cached_row} = $attrs->{csv_csv_in}->getline ($meta->{fh}); $array = $meta->{col_names}; push @$array, map { "col$_" } 0 .. 
$#$ar;
		}
	    }
	}
    } # open_file

no warnings 'once';
$DBI::VERSION < 1.623 and
    *open_file = \&open_data;
use warnings;

sub _csv_diag {
    my @diag = $_[0]->error_diag;
    for (2, 3) {
	defined $diag[$_] or $diag[$_] = "?";
	}
    return @diag;
    } # _csv_diag

sub fetch_row {
    my ($self, $data) = @_;

    my $tbl = $self->{meta};

    exists $tbl->{cached_row} and
	return $self->{row} = delete $tbl->{cached_row};

    my $csv = $self->{csv_csv_in} or
	return do { $data->set_err ($DBI::stderr, "Fetch from undefined handle"); undef };

    my $fields = eval { $csv->getline ($tbl->{fh}) };
    unless ($fields) {
	$csv->eof and return;

	my @diag = _csv_diag ($csv);
	$diag[0] == 2012 and return;	# Also EOF (broken in Text::CSV_XS-1.10)

	my $file = $tbl->{f_fqfn};
	croak "Error $diag[0] while reading file $file: $diag[1] \@ line $diag[3] pos $diag[2]";
	}
    @$fields < @{$tbl->{col_names}} and
	push @$fields, (undef) x (@{$tbl->{col_names}} - @$fields);
    $self->{row} = (@$fields ? $fields : undef);
    } # fetch_row

sub push_row {
    my ($self, $data, $fields) = @_;

    my $tbl = $self->{meta};
    my $csv = $self->{csv_csv_out};
    my $fh  = $tbl->{fh};

    unless ($csv->print ($fh, $fields)) {
	my @diag = _csv_diag ($csv);
	my $file = $tbl->{f_fqfn};
	return do {
	    $data->set_err ($DBI::stderr,
		"Error $diag[0] while writing file $file: $diag[1] \@ line $diag[3] pos $diag[2]");
	    undef;
	    };
	}
    1;
    } # push_row

no warnings 'once';
*push_names = \&push_row;
use warnings;

1;

__END__

=head1 NAME

DBD::CSV - DBI driver for CSV files

=head1 SYNOPSIS

    use DBI;
    # See "Creating database handle" below
    $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
        f_ext      => ".csv/r",
        RaiseError => 1,
        }) or
        die "Cannot connect: $DBI::errstr";

    # Simple statements
    $dbh->do ("CREATE TABLE foo (id INTEGER, name CHAR (10))");

    # Selecting
    my $sth = $dbh->prepare ("select * from foo");
    $sth->execute;
    $sth->bind_columns (\my ($id, $name));
    while ($sth->fetch) {
        print "id: $id, name: $name\n";
        }

    # Updates
    my $sth = $dbh->prepare ("UPDATE foo SET name = ?
WHERE id = ?");
    $sth->execute ("DBI rocks!", 1);
    $sth->finish;

    $dbh->disconnect;

=head1 DESCRIPTION

The DBD::CSV module is yet another driver for the DBI (Database
independent interface for Perl). This one is based on the SQL "engine"
SQL::Statement and the abstract DBI driver DBD::File and implements
access to so-called CSV files (Comma Separated Values). Such files are
often used for exporting MS Access and MS Excel data.

See L<DBI> for details on DBI, L<SQL::Statement> for details on
SQL::Statement and L<DBD::File> for details on the base class DBD::File.

=head2 Prerequisites

The only system dependent feature that DBD::File uses is the C<flock ()>
function. Thus the module should run (in theory) on any system with a
working C<flock ()>, in particular on all Unix machines and on Windows
NT. Under Windows 95 and MacOS the use of C<flock ()> is disabled, thus
the module should still be usable.

Unlike other DBI drivers, you don't need an external SQL engine or a
running server. All you need are the following Perl modules, available
from any CPAN mirror, for example

    http://search.cpan.org/

=over 4

=item DBI
X<DBI>

A recent version of the L<DBI> (Database independent interface for
Perl). See below why.

=item DBD::File
X<DBD::File>

This is the base class for DBD::CSV, and it is part of the DBI
distribution. DBD::CSV requires a matching version of L<DBD::File>,
which is (partly) developed by the same team that maintains DBD::CSV.
See META.json or Makefile.PL for the minimum versions.

=item SQL::Statement
X<SQL::Statement>

A simple SQL engine. This module defines all of the SQL syntax for
DBD::CSV, new SQL support is added with each release so you should look
for updates to SQL::Statement regularly.

It is possible to run C<DBD::CSV> without this module if you define the
environment variable C<$DBI_SQL_NANO> to 1. This will reduce the SQL
support a lot though. See L<DBI::SQL::Nano> for more details. Note that
the test suite only tests this mode in the development environment.

=item Text::CSV_XS
X<Text::CSV_XS>

This module is used to read and write rows in a CSV file.
=back

=head2 Installation

Installing this module (and the prerequisites from above) is quite
simple. The simplest way is to install the bundle:

    $ cpan Bundle::DBD::CSV

Alternatively, you can name them all

    $ cpan Text::CSV_XS DBI DBD::CSV

or even trust C<cpan> to resolve all dependencies for you:

    $ cpan DBD::CSV

If you cannot, for whatever reason, use cpan, fetch all modules from
CPAN, and build with a sequence like:

    gzip -d < DBD-CSV-0.40.tgz | tar xf -

(this is for Unix users, Windows users would prefer WinZip or something
similar) and then enter the following:

    cd DBD-CSV-0.40
    perl Makefile.PL
    make test

If any tests fail, let us know. Otherwise go on with

    make install UNINST=1

Note that you almost definitely need root or administrator permissions.
If you don't have them, read the L<ExtUtils::MakeMaker> man page for
details on installing in your own directories.

=head2 Supported SQL Syntax

All SQL processing for DBD::CSV is done by SQL::Statement. See
L<SQL::Statement> for more specific information about its feature set.
Features include joins, aliases, built-in and user-defined functions,
and more. See L<SQL::Statement::Syntax> for a description of the SQL
syntax supported in DBD::CSV.

Table- and column-names are case insensitive unless quoted. Column names
will be sanitized unless L</raw_header> is true.

=head1 Using DBD::CSV with DBI

For most things, DBD-CSV operates the same as any DBI driver. See L<DBI>
for detailed usage.

=head2 Creating a database handle (connect)

Creating a database handle usually implies connecting to a database
server. Thus this command reads

    use DBI;
    my $dbh = DBI->connect ("dbi:CSV:", "", "", {
        f_dir => "/home/user/folder",
        });

The directory tells the driver where it should create or open tables
(a.k.a. files). It defaults to the current directory, so the following
are equivalent:

    $dbh = DBI->connect ("dbi:CSV:");
    $dbh = DBI->connect ("dbi:CSV:", undef, undef, { f_dir => "."
});
    $dbh = DBI->connect ("dbi:CSV:f_dir=.");

We were told, that VMS might - for whatever reason - require:

    $dbh = DBI->connect ("dbi:CSV:f_dir=");

The preferred way of passing the arguments is by driver attributes:

    # specify most possible flags via driver flags
    $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
        f_schema         => undef,
        f_dir            => "data",
        f_dir_search     => [],
        f_ext            => ".csv/r",
        f_lock           => 2,
        f_encoding       => "utf8",

        csv_eol          => "\r\n",
        csv_sep_char     => ",",
        csv_quote_char   => '"',
        csv_escape_char  => '"',
        csv_class        => "Text::CSV_XS",
        csv_null         => 1,
        csv_bom          => 0,
        csv_tables       => {
            syspwd => {
                sep_char    => ":",
                quote_char  => undef,
                escape_char => undef,
                file        => "/etc/passwd",
                col_names   => [qw( login password
                                    uid gid realname
                                    directory shell )],
                },
            },

        RaiseError       => 1,
        PrintError       => 1,
        FetchHashKeyName => "NAME_lc",
        }) or die $DBI::errstr;

but you may set these attributes in the DSN as well, separated by
semicolons. Pay attention to the semi-colon for C<csv_sep_char> (as seen
in many CSV exports from MS Excel) being escaped in the example below,
as it would otherwise be seen as an attribute separator:

    $dbh = DBI->connect (
        "dbi:CSV:f_dir=$ENV{HOME}/csvdb;f_ext=.csv;f_lock=2;" .
        "f_encoding=utf8;csv_eol=\n;csv_sep_char=\\;;" .
        "csv_quote_char=\";csv_escape_char=\\;csv_class=Text::CSV_XS;" .
        "csv_null=1") or die $DBI::errstr;

Using attributes in the DSN is easier to use when the DSN is derived
from an outside source (environment variable, database entry, or
configure file), whereas specifying entries in the attribute hash is
easier to read and to maintain.

The default value for C<csv_binary> is C<1> (True).

The default value for C<csv_auto_diag> is C<1>. Note that this might
cause trouble on perl versions older than 5.8.9, so up to and including
perl version 5.8.8 it might be required to use C<;csv_auto_diag=0>
inside the C<DSN> or C<csv_auto_diag =E<gt> 0> inside the attributes.
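As an illustration of the join support provided by SQL::Statement (see the section on supported SQL syntax), the sketch below creates two small tables and joins them. This is a minimal sketch, not code from the distribution: the table and column names (C<fruit>, C<price>, C<c_fruit>) are made up for illustration, and the implicit-join form in the WHERE clause is used because it is the most widely supported:

```perl
use strict;
use warnings;
use DBI;

# Hypothetical example: join two ad-hoc CSV tables via the WHERE clause.
my $dbh = DBI->connect ("dbi:CSV:", undef, undef, {
    f_ext      => ".csv/r",
    RaiseError => 1,
    }) or die "Cannot connect: $DBI::errstr";

$dbh->do ("CREATE TABLE fruit (c_fruit INTEGER, fruit CHAR (16))");
$dbh->do ("CREATE TABLE price (c_fruit INTEGER, price REAL)");
$dbh->do ("INSERT INTO fruit VALUES (?, ?)", undef, 1, "Apple");
$dbh->do ("INSERT INTO price VALUES (?, ?)", undef, 1, 0.25);

my $sth = $dbh->prepare (
    "SELECT fruit.fruit, price.price FROM fruit, price " .
    "WHERE fruit.c_fruit = price.c_fruit");
$sth->execute;
while (my ($fruit, $price) = $sth->fetchrow_array) {
    print "$fruit costs $price\n";
    }

# Clean up the files the example created in f_dir
$dbh->do ("DROP TABLE $_") for qw( fruit price );
$dbh->disconnect;
```

Note that CREATE TABLE writes F<fruit.csv> and F<price.csv> in the current directory, so the DROP TABLE statements at the end remove them again.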
=head2 Creating and dropping tables

You can create and drop tables with commands like the following:

    $dbh->do ("CREATE TABLE $table (id INTEGER, name CHAR (64))");
    $dbh->do ("DROP TABLE $table");

Note that currently only the column names will be stored and no other
data. Thus all other information including column type (INTEGER or
CHAR (x), for example), column attributes (NOT NULL, PRIMARY KEY, ...)
will silently be discarded. This may change in a later release.

A drop just removes the file without any warning. See L<DBI> for more
details.

Table names cannot be arbitrary, due to restrictions of the SQL syntax.
I recommend that table names are valid SQL identifiers: The first
character is alphabetic, followed by an arbitrary number of alphanumeric
characters. If you want to use other files, the file names must start
with "/", "./" or "../" and they must not contain white space.

=head2 Inserting, fetching and modifying data

The following examples insert some data in a table and fetch it back:
First, an example where the column data is concatenated in the SQL
string:

    $dbh->do ("INSERT INTO $table VALUES (1, " .
              $dbh->quote ("foobar") . ")");

Note the use of the quote method for escaping the word "foobar". Any
string must be escaped, even if it does not contain binary data.

Next, an example using parameters:

    $dbh->do ("INSERT INTO $table VALUES (?, ?)", undef,
              2, "It's a string!");

Note that you don't need to quote column data passed as parameters.
This version is particularly well designed for loops. Whenever
performance is an issue, I recommend using this method.

You might wonder about the C<undef>. Don't wonder, just take it as it
is. :-) It's an attribute argument that I have never used and will be
passed to the prepare method as the second argument.
To retrieve data, you can use the following:

    my $query = "SELECT * FROM $table WHERE id > 1 ORDER BY id";
    my $sth   = $dbh->prepare ($query);
    $sth->execute ();
    while (my $row = $sth->fetchrow_hashref) {
        print "Found result row: id = ", $row->{id},
              ", name = ", $row->{name};
        }
    $sth->finish ();

Again, column binding works: The same example again.

    my $sth = $dbh->prepare (qq;
        SELECT * FROM $table WHERE id > 1 ORDER BY id;
        ;);
    $sth->execute;
    my ($id, $name);
    $sth->bind_columns (undef, \$id, \$name);
    while ($sth->fetch) {
        print "Found result row: id = $id, name = $name\n";
        }
    $sth->finish;

Of course you can even use input parameters. Here's the same example
for the third time:

    my $sth = $dbh->prepare ("SELECT * FROM $table WHERE id = ?");
    $sth->bind_columns (undef, \$id, \$name);
    for (my $i = 1; $i <= 2; $i++) {
        $sth->execute ($id);
        if ($sth->fetch) {
            print "Found result row: id = $id, name = $name\n";
            }
        $sth->finish;
        }

See L<DBI> for details on these methods. See L<SQL::Statement::Syntax>
for details on the WHERE clause.

Data rows are modified with the UPDATE statement:

    $dbh->do ("UPDATE $table SET id = 3 WHERE id = 1");

Likewise you use the DELETE statement for removing rows:

    $dbh->do ("DELETE FROM $table WHERE id > 1");

=head2 Error handling

In the above examples we have never cared about return codes. Of
course, this is not recommended. Instead we should have written (for
example):

    my $sth = $dbh->prepare ("SELECT * FROM $table WHERE id = ?") or
        die "prepare: " . $dbh->errstr ();
    $sth->bind_columns (undef, \$id, \$name) or
        die "bind_columns: " . $dbh->errstr ();
    for (my $i = 1; $i <= 2; $i++) {
        $sth->execute ($id) or
            die "execute: " . $dbh->errstr ();
        $sth->fetch and
            print "Found result row: id = $id, name = $name\n";
        }
    $sth->finish ($id) or die "finish: " . $dbh->errstr ();

Obviously this is tedious.
Fortunately we have DBI's I<RaiseError> attribute:

    $dbh->{RaiseError} = 1;
    $@ = "";
    eval {
        my $sth = $dbh->prepare ("SELECT * FROM $table WHERE id = ?");
        $sth->bind_columns (undef, \$id, \$name);
        for (my $i = 1; $i <= 2; $i++) {
            $sth->execute ($id);
            $sth->fetch and
                print "Found result row: id = $id, name = $name\n";
            }
        $sth->finish ($id);
        };
    $@ and die "SQL database error: $@";

This is not only shorter, it even works when using DBI methods within
subroutines.

=head1 DBI database handle attributes

=head2 Metadata

The following attributes are handled by DBI itself and not by DBD::File,
thus they all work as expected:

    Active
    ActiveKids
    CachedKids
    CompatMode           (Not used)
    InactiveDestroy
    Kids
    PrintError
    RaiseError
    Warn                 (Not used)

The following DBI attributes are handled by DBD::File:

=over 4

=item AutoCommit
X<AutoCommit>

Always on

=item ChopBlanks
X<ChopBlanks>

Works

=item NUM_OF_FIELDS
X<NUM_OF_FIELDS>

Valid after C<$sth-E<gt>execute>

=item NUM_OF_PARAMS
X<NUM_OF_PARAMS>

Valid after C<$sth-E<gt>prepare>

=item NAME
X<NAME>

=item NAME_lc
X<NAME_lc>

=item NAME_uc
X<NAME_uc>

Valid after C<$sth-E<gt>execute>; undef for Non-Select statements.

=item NULLABLE
X<NULLABLE>

Not really working. Always returns an array ref of ones, as DBD::CSV
does not verify input data. Valid after C<$sth-E<gt>execute>; undef for
non-Select statements.

=back

These attributes and methods are not supported:

    bind_param_inout
    CursorName
    LongReadLen
    LongTruncOk

=head1 DBD-CSV specific database handle attributes

In addition to the DBI attributes, you can use the following dbh
attributes:

=head2 DBD::File attributes

=over 4

=item f_dir
X<f_dir>

This attribute is used for setting the directory where CSV files are
opened. Usually you set it in the dbh and it defaults to the current
directory ("."). However, it may be overridden in statement handles.

=item f_dir_search
X<f_dir_search>

This attribute optionally defines a list of extra directories to search
when opening existing tables. It should be an anonymous list or an
array reference listing all folders where tables could be found.
    my $dbh = DBI->connect ("dbi:CSV:", "", "", {
        f_dir        => "data",
        f_dir_search => [ "ref/data", "ref/old" ],
        f_ext        => ".csv/r",
        }) or die $DBI::errstr;

=item f_ext
X<f_ext>

This attribute is used for setting the file extension.

=item f_schema
X<f_schema>

This attribute allows you to set the database schema name. The default
is to use the owner of C<f_dir>. C<undef> is allowed, but not in the
DSN part.

    my $dbh = DBI->connect ("dbi:CSV:", "", "", {
        f_schema => undef,
        f_dir    => "data",
        f_ext    => ".csv/r",
        }) or die $DBI::errstr;

=item f_encoding
X<f_encoding>

This attribute allows you to set the encoding of the data. With CSV, it
is not possible to set (and remember) the encoding on a column basis,
but DBD::File now allows the encoding to be set on the underlying file.
If this attribute is not set, or undef is passed, the file will be seen
as binary.

=item f_lock
X<f_lock>

With this attribute you can specify a locking mode to be used (if
locking is supported at all) for opening tables. By default, tables are
opened with a shared lock for reading, and with an exclusive lock for
writing. The supported modes are:

=over 2

=item 0
X<0>

Force no locking at all.

=item 1
X<1>

Only shared locks will be used.

=item 2
X<2>

Only exclusive locks will be used.

=back

=back

But see the section on known bugs below.

=head2 DBD::CSV specific attributes

=over 4

=item csv_class

The attribute I<csv_class> controls the CSV parsing engine. This
defaults to C<Text::CSV_XS>, but C<Text::CSV> can be used in some
cases, too. Please be aware that C<Text::CSV> does not care about any
edge case as C<Text::CSV_XS> does and that C<Text::CSV> is probably
about 100 times slower than C<Text::CSV_XS>.

In order to use a specified class other than C<Text::CSV_XS>, it needs
to be loaded before use. C<DBD::CSV> does not C<require>/C<use> the
specified class itself.

=back

=head2 Text::CSV_XS specific attributes

=over 4

=item csv_eol
X<csv_eol>

=item csv_sep_char
X<csv_sep_char>

=item csv_quote_char
X<csv_quote_char>

=item csv_escape_char
X<csv_escape_char>

=item csv_csv
X<csv_csv>

The attributes I<csv_eol>, I<csv_sep_char>, I<csv_quote_char> and
I<csv_escape_char> correspond to the respective attributes of the
I<csv_class> (usually Text::CSV_XS) object.
You may want to set these attributes if you have unusual CSV files like
F</etc/passwd> or MS Excel generated CSV files with a semicolon as
separator. Defaults are C<\015\012>, C<,>, C<"> and C<">, respectively.

The I<csv_eol> attribute defines the end-of-line pattern, which is
better known as a record separator pattern since it separates records.
The default is windows-style end-of-lines C<\015\012> for output
(writing) and unset for input (reading), so if on unix you may want to
set this to newline (C<\n>) like this:

    $dbh->{csv_eol} = "\n";

It is also possible to use multi-character patterns as record
separators. For example this file uses newlines as field separators
(sep_char) and the pattern "\n__ENDREC__\n" as the record separator
(eol):

    name
    city
    __ENDREC__
    joe
    seattle
    __ENDREC__
    sue
    portland
    __ENDREC__

To handle this file, you'd do this:

    $dbh->{eol}      = "\n__ENDREC__\n";
    $dbh->{sep_char} = "\n";

The attributes are used to create an instance of the class I<csv_class>,
by default Text::CSV_XS. Alternatively you may pass an instance as
I<csv_csv>, the latter takes precedence. Note that the I<binary>
attribute I<must> be set to a true value in that case.

Additionally you may overwrite these attributes on a per-table base in
the I<csv_tables> attribute.

=item csv_null
X<csv_null>

With this option set, all new statement handles will set C<always_quote>
and C<blank_is_undef> in the CSV parser and writer, so it knows how to
distinguish between the empty string and C<undef> or C<NULL>.

You cannot reset it with a false value. You can pass it to connect,
or set it later:

    $dbh = DBI->connect ("dbi:CSV:", "", "", { csv_null => 1 });

    $dbh->{csv_null} = 1;

=item csv_bom
X<csv_bom>

With this option set, the CSV parser will try to detect BOM (Byte Order
Mark) in the header line. This requires L<Text::CSV_XS> version 1.22
or higher.

    $dbh = DBI->connect ("dbi:CSV:", "", "", { csv_bom => 1 });

    $dbh->{csv_bom} = 1;

=item csv_tables
X<csv_tables>

This hash ref is used for storing table dependent metadata.
For any table it contains an element with the table name as key and
another hash ref with the following attributes:

=over 4

=item o

All valid attributes to the CSV parsing module. Any of them can
optionally be prefixed with C<csv_>.

=item o

All attributes valid to DBD::File

=back

If you pass it C<f_file> or its alias C<file>, C<f_ext> has no effect,
but C<f_dir> and C<f_lock> still have.

    csv_tables => {
        syspwd => {                   # Table name
            csv_sep_char => ":",      # Text::CSV_XS
            quote_char   => undef,    # Text::CSV_XS
            escape_char  => undef,    # Text::CSV_XS
            f_dir        => "/etc",   # DBD::File
            f_file       => "passwd", # DBD::File
            col_names    =>           # DBD::File
                [qw( login password uid gid
                     realname directory shell )],
            },
        },

=item csv_*
X<csv_*>

All other attributes that start with C<csv_> and are not described
above will be passed to C<Text::CSV_XS> (without the C<csv_> prefix).
These extra options are only likely to be useful for reading (select)
handles. Examples:

    $dbh->{csv_allow_whitespace}    = 1;
    $dbh->{csv_allow_loose_quotes}  = 1;
    $dbh->{csv_allow_loose_escapes} = 1;

See the C<Text::CSV_XS> documentation for the full list.

=back

=head2 Driver specific attributes

=over 4

=item f_file
X<f_file>

The name of the file used for the table; defaults to

    "$dbh->{f_dir}/$table"

=item eol
X<eol>

=item sep_char
X<sep_char>

=item quote_char
X<quote_char>

=item escape_char
X<escape_char>

=item class
X<class>

=item csv
X<csv>

These correspond to the attributes I<csv_eol>, I<csv_sep_char>,
I<csv_quote_char>, I<csv_escape_char>, I<csv_class> and I<csv_csv>.
The difference is that they work on a per-table basis.

=item col_names
X<col_names>

=item skip_first_row
X<skip_first_row>

By default DBD::CSV assumes that column names are stored in the first
row of the CSV file and sanitizes them (see C<raw_header> below). If
this is not the case, you can supply an array ref of column names with
the I<col_names> attribute. In that case the attribute I<skip_first_row>
will be set to FALSE.

If you supply an empty array ref, the driver will read the first row
for you, count the number of columns and create column names like
C<col0>, C<col1>, ...

Note that column names that match reserved SQL words will cause unwanted
and sometimes confusing errors.
If your CSV has headers that match reserved words, you will require
these two attributes.

If the CSV file looks like

    select,from
    1,2

the select query would result in C
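A hedged sketch of that workaround follows. The file name F<test.csv> and the replacement column names C<sel> and C<frm> are assumptions made for illustration only; the point is that the reserved-word header row is skipped and harmless names are supplied instead:

```perl
# Hypothetical: test.csv starts with the header "select,from",
# both reserved SQL words. Skip that header row and provide our
# own column names via csv_tables instead.
$dbh->{csv_tables}{test} = {
    f_file         => "test.csv",
    skip_first_row => 1,
    col_names      => [qw( sel frm )],
    };

my $sth = $dbh->prepare ("SELECT sel, frm FROM test");
$sth->execute;
```

With this table mapping in place the header row is never parsed as column names, so the generated SQL no longer contains reserved words.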