.\" Automatically generated by Pod::Man 4.11 (Pod::Simple 3.35)
.\"
.\" Standard preamble:
.\" ========================================================================
.de Sp \" Vertical space (when we can't use .PP)
.if t .sp .5v
.if n .sp
..
.de Vb \" Begin verbatim text
.ft CW
.nf
.ne \\$1
..
.de Ve \" End verbatim text
.ft R
.fi
..
.\" Set up some character translations and predefined strings. \*(-- will
.\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left
.\" double quote, and \*(R" will give a right double quote. \*(C+ will
.\" give a nicer C++. Capital omega is used to do unbreakable dashes and
.\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff,
.\" nothing in troff, for use with C<>.
.tr \(*W-
.ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p'
.ie n \{\
. ds -- \(*W-
. ds PI pi
. if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch
. if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch
. ds L" ""
. ds R" ""
. ds C` ""
. ds C' ""
'br\}
.el\{\
. ds -- \|\(em\|
. ds PI \(*p
. ds L" ``
. ds R" ''
. ds C`
. ds C'
'br\}
.\"
.\" Escape single quotes in literal strings from groff's Unicode transform.
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\"
.\" If the F register is >0, we'll generate index entries on stderr for
.\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index
.\" entries marked with X<> in POD. Of course, you'll have to process the
.\" output yourself in some meaningful fashion.
.\"
.\" Avoid warning from groff about undefined register 'F'.
.de IX
..
.nr rF 0
.if \n(.g .if rF .nr rF 1
.if (\n(rF:(\n(.g==0)) \{\
. if \nF \{\
. de IX
. tm Index:\\$1\t\\n%\t"\\$2"
..
. if !\nF==2 \{\
. nr % 0
. nr F 2
. \}
. \}
.\}
.rr rF
.\" ========================================================================
.\"
.IX Title "CSV_XS 3"
.TH CSV_XS 3 "2021-10-19" "perl v5.26.3" "User Contributed Perl Documentation"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
.nh
.SH "NAME"
Text::CSV_XS \- comma\-separated values manipulation routines
.SH "SYNOPSIS"
.IX Header "SYNOPSIS"
.Vb 2
\& # Functional interface
\& use Text::CSV_XS qw( csv );
\&
\& # Read whole file in memory
\& my $aoa = csv (in => "data.csv"); # as array of array
\& my $aoh = csv (in => "data.csv",
\& headers => "auto"); # as array of hash
\&
\& # Write array of arrays as csv file
\& csv (in => $aoa, out => "file.csv", sep_char=> ";");
\&
\& # Only show lines where "code" is odd
\& csv (in => "data.csv", filter => { code => sub { $_ % 2 }});
\&
\&
\& # Object interface
\& use Text::CSV_XS;
\&
\& my @rows;
\& # Read/parse CSV
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\& open my $fh, "<:encoding(utf8)", "test.csv" or die "test.csv: $!";
\& while (my $row = $csv\->getline ($fh)) {
\& $row\->[2] =~ m/pattern/ or next; # 3rd field should match
\& push @rows, $row;
\& }
\& close $fh;
\&
\& # and write as CSV
\& open $fh, ">:encoding(utf8)", "new.csv" or die "new.csv: $!";
\& $csv\->say ($fh, $_) for @rows;
\& close $fh or die "new.csv: $!";
.Ve
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
Text::CSV_XS provides facilities for the composition and decomposition of
comma-separated values. An instance of the Text::CSV_XS class will combine
fields into a \f(CW\*(C`CSV\*(C'\fR string and parse a \f(CW\*(C`CSV\*(C'\fR string into fields.
.PP
The module accepts either strings or files as input and supports the use of
user-specified characters for delimiters, separators, and escapes.
.SS "Embedded newlines"
.IX Subsection "Embedded newlines"
\&\fBImportant Note\fR: The default behavior is to accept only \s-1ASCII\s0 characters
in the range from \f(CW0x20\fR (space) to \f(CW0x7E\fR (tilde). This means that the
fields can not contain newlines. If your data contains newlines embedded in
fields, or characters above \f(CW0x7E\fR (tilde), or binary data, you \fB\f(BImust\fB\fR
set \f(CW\*(C`binary => 1\*(C'\fR in the call to \*(L"new\*(R". To cover the widest range of
parsing options, you will always want to set binary.
.PP
But you still have the problem that you have to pass a correct line to the
\&\*(L"parse\*(R" method, which is more complicated than the usual way of using it:
.PP
.Vb 5
\& my $csv = Text::CSV_XS\->new ({ binary => 1, eol => $/ });
\& while (<>) { # WRONG!
\& $csv\->parse ($_);
\& my @fields = $csv\->fields ();
\& }
.Ve
.PP
this will break, as the \f(CW\*(C`while\*(C'\fR might read broken lines: it does not care
about the quoting. If you need to support embedded newlines, the way to go
is to \fBnot\fR pass \f(CW\*(C`eol\*(C'\fR in the parser (it accepts \f(CW\*(C`\en\*(C'\fR, \f(CW\*(C`\er\*(C'\fR,
\&\fBand\fR \f(CW\*(C`\er\en\*(C'\fR by default) and then
.PP
.Vb 5
\& my $csv = Text::CSV_XS\->new ({ binary => 1 });
\& open my $fh, "<", $file or die "$file: $!";
\& while (my $row = $csv\->getline ($fh)) {
\& my @fields = @$row;
\& }
.Ve
.PP
The old(er) way of using global file handles is still supported
.PP
.Vb 1
\& while (my $row = $csv\->getline (*ARGV)) { ... }
.Ve
.SS "Unicode"
.IX Subsection "Unicode"
Unicode is only tested to work with perl\-5.8.2 and up.
.PP
See also \*(L"\s-1BOM\*(R"\s0.
.PP
The simplest way to ensure the correct encoding is used for in\- and output
is by either setting layers on the filehandles, or setting the \*(L"encoding\*(R"
argument for \*(L"csv\*(R".
.PP
.Vb 3
\& open my $fh, "<:encoding(UTF\-8)", "in.csv" or die "in.csv: $!";
\&or
\& my $aoa = csv (in => "in.csv", encoding => "UTF\-8");
\&
\& open my $fh, ">:encoding(UTF\-8)", "out.csv" or die "out.csv: $!";
\&or
\& csv (in => $aoa, out => "out.csv", encoding => "UTF\-8");
.Ve
.PP
On parsing (both for \*(L"getline\*(R" and \*(L"parse\*(R"), if the source is marked
as being \s-1UTF8,\s0 then all fields that are marked binary will also be marked \s-1UTF8.\s0
.PP
On combining (\*(L"print\*(R" and \*(L"combine\*(R"): if any of the combining fields
was marked \s-1UTF8,\s0 the resulting string will be marked as \s-1UTF8.\s0 Note however
that all fields \fIbefore\fR the first field marked \s-1UTF8\s0 that contained 8\-bit
characters not upgraded to \s-1UTF8\s0 will remain \f(CW\*(C`bytes\*(C'\fR in the
resulting string too, possibly causing unexpected errors. If you pass data
of different encoding, or you don't know whether there is different encoding,
force it to be upgraded before you pass it on:
.PP
.Vb 1
\& $csv\->print ($fh, [ map { utf8::upgrade (my $x = $_); $x } @data ]);
.Ve
.PP
For complete control over encoding, please use Text::CSV::Encoded:
.PP
.Vb 5
\& use Text::CSV::Encoded;
\& my $csv = Text::CSV::Encoded\->new ({
\& encoding_in => "iso\-8859\-1", # the encoding comes into Perl
\& encoding_out => "cp1252", # the encoding comes out of Perl
\& });
\&
\& $csv = Text::CSV::Encoded\->new ({ encoding => "utf8" });
\& # combine () and print () accept *literally* utf8 encoded data
\& # parse () and getline () return *literally* utf8 encoded data
\&
\& $csv = Text::CSV::Encoded\->new ({ encoding => undef }); # default
\& # combine () and print () accept UTF8 marked data
\& # parse () and getline () return UTF8 marked data
.Ve
.SS "\s-1BOM\s0"
.IX Subsection "BOM"
\&\s-1BOM\s0 (or Byte Order Mark) handling is available only inside the \*(L"header\*(R"
method. This method supports the following encodings: \f(CW\*(C`utf\-8\*(C'\fR, \f(CW\*(C`utf\-1\*(C'\fR,
\&\f(CW\*(C`utf\-32be\*(C'\fR, \f(CW\*(C`utf\-32le\*(C'\fR, \f(CW\*(C`utf\-16be\*(C'\fR, \f(CW\*(C`utf\-16le\*(C'\fR, \f(CW\*(C`utf\-ebcdic\*(C'\fR, \f(CW\*(C`scsu\*(C'\fR,
\&\f(CW\*(C`bocu\-1\*(C'\fR, and \f(CW\*(C`gb\-18030\*(C'\fR. See Wikipedia <https://en.wikipedia.org/wiki/Byte_order_mark>.
.PP
If a file has a \s-1BOM,\s0 the easiest way to deal with that is
.PP
.Vb 1
\& my $aoh = csv (in => $file, detect_bom => 1);
.Ve
.PP
All records will be encoded based on the detected \s-1BOM.\s0
.PP
This implies a call to the \*(L"header\*(R" method, which defaults to also set
the \*(L"column_names\*(R". So this is \fBnot\fR the same as
.PP
.Vb 1
\& my $aoh = csv (in => $file, headers => "auto");
.Ve
.PP
which only reads the first record to set \*(L"column_names\*(R" but ignores any
meaning of a possibly present \s-1BOM.\s0
.SH "SPECIFICATION"
.IX Header "SPECIFICATION"
While no formal specification for \s-1CSV\s0 exists, \s-1RFC 4180\s0 <https://datatracker.ietf.org/doc/html/rfc4180>
(\fI1\fR) describes the common format and establishes \f(CW\*(C`text/csv\*(C'\fR as the \s-1MIME\s0
type registered with the \s-1IANA.\s0 \s-1RFC 7111\s0 <https://datatracker.ietf.org/doc/html/rfc7111>
(\fI2\fR) adds fragments to \s-1CSV.\s0
.PP
Many informal documents exist that describe the \f(CW\*(C`CSV\*(C'\fR format. \*(L"How To:
The Comma Separated Value (\s-1CSV\s0) File Format\*(R" <http://creativyst.com/Doc/Articles/CSV/CSV01.shtml>
(\fI3\fR) provides an overview of the \f(CW\*(C`CSV\*(C'\fR format in the most widely used
applications and explains how it can best be used and supported.
.PP
.Vb 3
\& 1) https://datatracker.ietf.org/doc/html/rfc4180
\& 2) https://datatracker.ietf.org/doc/html/rfc7111
\& 3) http://creativyst.com/Doc/Articles/CSV/CSV01.shtml
.Ve
.PP
The basic rules are as follows:
.PP
\&\fB\s-1CSV\s0\fR is a delimited data format that has fields/columns separated by the
comma character and records/rows separated by newlines. Fields that contain
a special character (comma, newline, or double quote), must be enclosed in
double quotes. However, if a line contains a single entry that is the empty
string, it may be enclosed in double quotes. If a field's value contains a
double quote character it is escaped by placing another double quote
character next to it. The \f(CW\*(C`CSV\*(C'\fR file format does not require a specific
character encoding, byte order, or line terminator format.
.IP "\(bu" 2
Each record is a single line ended by a line feed (\s-1ASCII/\s0\f(CW\*(C`LF\*(C'\fR=\f(CW0x0A\fR) or
a carriage return and line feed pair (\s-1ASCII/\s0\f(CW\*(C`CRLF\*(C'\fR=\f(CW\*(C`0x0D 0x0A\*(C'\fR), however,
line-breaks may be embedded.
.IP "\(bu" 2
Fields are separated by commas.
.IP "\(bu" 2
Allowable characters within a \f(CW\*(C`CSV\*(C'\fR field include \f(CW0x09\fR (\f(CW\*(C`TAB\*(C'\fR) and the
inclusive range of \f(CW0x20\fR (space) through \f(CW0x7E\fR (tilde). In binary mode
all characters are accepted, at least in quoted fields.
.IP "\(bu" 2
A field within \f(CW\*(C`CSV\*(C'\fR must be surrounded by double-quotes to contain a
separator character (comma).
.PP
Though this is the most clear and restrictive definition, Text::CSV_XS is
way more liberal than this, and allows extension:
.IP "\(bu" 2
Line termination by a single carriage return is accepted by default
.IP "\(bu" 2
The separation\-, quote\-, and escape\- characters can be any \s-1ASCII\s0 character
in the range from \f(CW0x20\fR (space) to \f(CW0x7E\fR (tilde). Characters outside
this range may or may not work as expected. Multibyte characters, like \s-1UTF\s0
\&\f(CW\*(C`U+060C\*(C'\fR (\s-1ARABIC COMMA\s0), \f(CW\*(C`U+FF0C\*(C'\fR (\s-1FULLWIDTH COMMA\s0), \f(CW\*(C`U+241B\*(C'\fR (\s-1SYMBOL
FOR ESCAPE\s0), \f(CW\*(C`U+2424\*(C'\fR (\s-1SYMBOL FOR NEWLINE\s0), \f(CW\*(C`U+FF02\*(C'\fR (\s-1FULLWIDTH QUOTATION
MARK\s0), and \f(CW\*(C`U+201C\*(C'\fR (\s-1LEFT DOUBLE QUOTATION MARK\s0) (to give some examples of
what might look promising) work for newer versions of perl for \f(CW\*(C`sep_char\*(C'\fR,
and \f(CW\*(C`quote_char\*(C'\fR but not for \f(CW\*(C`escape_char\*(C'\fR.
.Sp
If you use perl\-5.8.2 or higher these three attributes are utf8\-decoded, to
increase the likelihood of success. This way \f(CW\*(C`U+00FE\*(C'\fR will be allowed as a
quote character.
.IP "\(bu" 2
A field in \f(CW\*(C`CSV\*(C'\fR must be surrounded by double-quotes to make an embedded
double-quote, represented by a pair of consecutive double-quotes, valid. In
binary mode you may additionally use the sequence \f(CW\*(C`"0\*(C'\fR for representation
of a \s-1NULL\s0 byte. Using \f(CW0x00\fR in binary mode is just as valid.
.IP "\(bu" 2
Several violations of the above specification may be lifted by passing some
options as attributes to the object constructor.
.SH "METHODS"
.IX Header "METHODS"
.SS "version"
.IX Xref "version"
.IX Subsection "version"
(Class method) Returns the current module version.
.SS "new"
.IX Xref "new"
.IX Subsection "new"
(Class method) Returns a new instance of class Text::CSV_XS. The attributes
are described by the (optional) hash ref \f(CW\*(C`\e%attr\*(C'\fR.
.PP
.Vb 1
\& my $csv = Text::CSV_XS\->new ({ attributes ... });
.Ve
.PP
The following attributes are available:
.PP
\fIeol\fR
.IX Xref "eol"
.IX Subsection "eol"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ eol => $/ });
\& $csv\->eol (undef);
\& my $eol = $csv\->eol;
.Ve
.PP
The end-of-line string to add to rows for \*(L"print\*(R" or the record separator
for \*(L"getline\*(R".
.PP
When not passed in a \fBparser\fR instance, the default behavior is to accept
\&\f(CW\*(C`\en\*(C'\fR, \f(CW\*(C`\er\*(C'\fR, and \f(CW\*(C`\er\en\*(C'\fR, so it is probably safer to not specify \f(CW\*(C`eol\*(C'\fR at
all. Passing \f(CW\*(C`undef\*(C'\fR or the empty string behaves the same.
.PP
When not passed in a \fBgenerating\fR instance, records are not terminated at
all, so it is probably wise to pass something you expect. A safe choice for
\&\f(CW\*(C`eol\*(C'\fR on output is either \f(CW$/\fR or \f(CW\*(C`\er\en\*(C'\fR.
.PP
Common values for \f(CW\*(C`eol\*(C'\fR are \f(CW"\e012"\fR (\f(CW\*(C`\en\*(C'\fR or Line Feed), \f(CW"\e015\e012"\fR
(\f(CW\*(C`\er\en\*(C'\fR or Carriage Return, Line Feed), and \f(CW"\e015"\fR (\f(CW\*(C`\er\*(C'\fR or Carriage
Return). The \f(CW\*(C`eol\*(C'\fR attribute cannot exceed 7 (\s-1ASCII\s0) characters.
.PP
If both \f(CW$/\fR and \f(CW\*(C`eol\*(C'\fR equal \f(CW"\e015"\fR, parsing lines that end on
only a Carriage Return without Line Feed will be \*(L"parse\*(R"d correctly.
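.PP
A minimal sketch of setting \f(CW\*(C`eol\*(C'\fR for output (the file handle
\&\f(CW$fh\fR is assumed to be open for writing):
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ binary => 1, eol => "\er\en" });
\& $csv\->print ($fh, [ "foo", "bar" ]);
\& # writes: foo,bar followed by CRLF
.Ve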
.PP
\fIsep_char\fR
.IX Xref "sep_char"
.IX Subsection "sep_char"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ sep_char => ";" });
\& $csv\->sep_char (";");
\& my $c = $csv\->sep_char;
.Ve
.PP
The char used to separate fields, by default a comma (\f(CW\*(C`,\*(C'\fR). Limited to a
single-byte character, usually in the range from \f(CW0x20\fR (space) to \f(CW0x7E\fR
(tilde). When longer sequences are required, use \f(CW\*(C`sep\*(C'\fR.
.PP
The separation character can not be equal to the quote character or to the
escape character.
.PP
See also \*(L"\s-1CAVEATS\*(R"\s0
.PP
\fIsep\fR
.IX Xref "sep"
.IX Subsection "sep"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ sep => "\eN{FULLWIDTH COMMA}" });
\& $csv\->sep (";");
\& my $sep = $csv\->sep;
.Ve
.PP
The chars used to separate fields, by default undefined. Limited to 8 bytes.
.PP
When set, overrules \f(CW\*(C`sep_char\*(C'\fR. If its length is one byte it
acts as an alias to \f(CW\*(C`sep_char\*(C'\fR.
.PP
See also \*(L"\s-1CAVEATS\*(R"\s0
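.PP
A minimal sketch of a multi-character separator (the data is illustrative):
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ sep => "::" });
\& $csv\->parse ("1::2::3");
\& my @fields = $csv\->fields;   # ("1", "2", "3")
.Ve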
.PP
\fIquote_char\fR
.IX Xref "quote_char"
.IX Subsection "quote_char"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ quote_char => "\*(Aq" });
\& $csv\->quote_char (undef);
\& my $c = $csv\->quote_char;
.Ve
.PP
The character to quote fields containing blanks or binary data, by default
the double quote character (\f(CW\*(C`"\*(C'\fR). A value of undef suppresses quote chars
(for simple cases only). Limited to a single-byte character, usually in the
range from \f(CW0x20\fR (space) to \f(CW0x7E\fR (tilde). When longer sequences are
required, use \f(CW\*(C`quote\*(C'\fR.
.PP
\&\f(CW\*(C`quote_char\*(C'\fR can not be equal to \f(CW\*(C`sep_char\*(C'\fR.
.PP
\fIquote\fR
.IX Xref "quote"
.IX Subsection "quote"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ quote => "\eN{FULLWIDTH QUOTATION MARK}" });
\& $csv\->quote ("\*(Aq");
\& my $quote = $csv\->quote;
.Ve
.PP
The chars used to quote fields, by default undefined. Limited to 8 bytes.
.PP
When set, overrules \f(CW\*(C`quote_char\*(C'\fR. If its length is one byte
it acts as an alias to \f(CW\*(C`quote_char\*(C'\fR.
.PP
This method does not support \f(CW\*(C`undef\*(C'\fR. Use \f(CW\*(C`quote_char\*(C'\fR to
disable quotation.
.PP
See also \*(L"\s-1CAVEATS\*(R"\s0
.PP
\fIescape_char\fR
.IX Xref "escape_char"
.IX Subsection "escape_char"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ escape_char => "\e\e" });
\& $csv\->escape_char (":");
\& my $c = $csv\->escape_char;
.Ve
.PP
The character to escape certain characters inside quoted fields. This is
limited to a single-byte character, usually in the range from \f(CW0x20\fR
(space) to \f(CW0x7E\fR (tilde).
.PP
The \f(CW\*(C`escape_char\*(C'\fR defaults to being the double-quote mark (\f(CW\*(C`"\*(C'\fR). In other
words the same as the default \f(CW\*(C`quote_char\*(C'\fR. This means that
doubling the quote mark in a field escapes it:
.PP
.Vb 1
\& "foo","bar","Escape ""quote mark"" with two ""quote marks""","baz"
.Ve
.PP
If you change the \f(CW\*(C`quote_char\*(C'\fR without changing the
\&\f(CW\*(C`escape_char\*(C'\fR, the \f(CW\*(C`escape_char\*(C'\fR will still be the double-quote (\f(CW\*(C`"\*(C'\fR).
If instead you want to escape the \f(CW\*(C`quote_char\*(C'\fR by doubling
it you will need to also change the \f(CW\*(C`escape_char\*(C'\fR to be the same as what
you have changed the \f(CW\*(C`quote_char\*(C'\fR to.
.PP
Setting \f(CW\*(C`escape_char\*(C'\fR to \f(CW\*(C`undef\*(C'\fR or \f(CW""\fR will disable escaping completely
and is greatly discouraged. This will also disable \f(CW\*(C`escape_null\*(C'\fR.
.PP
The escape character can not be equal to the separation character.
.PP
\fIbinary\fR
.IX Xref "binary"
.IX Subsection "binary"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ binary => 1 });
\& $csv\->binary (0);
\& my $f = $csv\->binary;
.Ve
.PP
If this attribute is \f(CW1\fR, you may use binary characters in quoted fields,
including line feeds, carriage returns and \f(CW\*(C`NULL\*(C'\fR bytes. (The latter could
be escaped as \f(CW\*(C`"0\*(C'\fR.) By default this feature is off.
.PP
If a string is marked \s-1UTF8,\s0 \f(CW\*(C`binary\*(C'\fR will be turned on automatically when
binary characters other than \f(CW\*(C`CR\*(C'\fR and \f(CW\*(C`NL\*(C'\fR are encountered. Note that a
simple string like \f(CW"\ex{00a0}"\fR might still be binary, but not marked \s-1UTF8,\s0
so setting \f(CW\*(C`{ binary => 1 }\*(C'\fR is still a wise option.
.PP
\fIstrict\fR
.IX Xref "strict"
.IX Subsection "strict"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ strict => 1 });
\& $csv\->strict (0);
\& my $f = $csv\->strict;
.Ve
.PP
If this attribute is set to \f(CW1\fR, any row that parses to a different number
of fields than the previous row will cause the parser to throw error 2014.
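.PP
A minimal sketch of the effect (the data is illustrative): the second record
has fewer fields than the first, so it triggers error 2014.
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ strict => 1, auto_diag => 1 });
\& $csv\->parse ("a,b,c");   # three fields, ok
\& $csv\->parse ("d,e");     # two fields: fails, auto_diag reports error 2014
.Ve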
.PP
\fIskip_empty_rows\fR
.IX Xref "skip_empty_rows"
.IX Subsection "skip_empty_rows"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ skip_empty_rows => 1 });
\& $csv\->skip_empty_rows (0);
\& my $f = $csv\->skip_empty_rows;
.Ve
.PP
If this attribute is set to \f(CW1\fR, any row that has an \*(L"eol\*(R" immediately
following the start of line will be skipped. Default behavior is to return
one single empty field.
.PP
This attribute is only used in parsing.
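.PP
A minimal sketch (using an in-memory file handle for illustration): the empty
line in the middle of the data is silently skipped.
.PP
.Vb 4
\& my $csv = Text::CSV_XS\->new ({ skip_empty_rows => 1 });
\& open my $fh, "<", \e"1,2\en\en3,4\en" or die $!;
\& my $rows = $csv\->getline_all ($fh);
\& # [ [ "1", "2" ], [ "3", "4" ] ] \- the empty row is gone
.Ve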
.PP
\fIformula_handling\fR
.IX Subsection "formula_handling"
.PP
\fIformula\fR
.IX Xref "formula_handling formula"
.IX Subsection "formula"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ formula => "none" });
\& $csv\->formula ("none");
\& my $f = $csv\->formula;
.Ve
.PP
This defines the behavior of fields containing \fIformulas\fR. As formulas are
considered dangerous in spreadsheets, this attribute can define an optional
action to be taken if a field starts with an equal sign (\f(CW\*(C`=\*(C'\fR).
.PP
For the sake of code readability, this can also be written as
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ formula_handling => "none" });
\& $csv\->formula_handling ("none");
\& my $f = $csv\->formula_handling;
.Ve
.PP
Possible values for this attribute are
.IP "none" 2
.IX Item "none"
Take no specific action. This is the default.
.Sp
.Vb 1
\& $csv\->formula ("none");
.Ve
.IP "die" 2
.IX Item "die"
Cause the process to \f(CW\*(C`die\*(C'\fR whenever a leading \f(CW\*(C`=\*(C'\fR is encountered.
.Sp
.Vb 1
\& $csv\->formula ("die");
.Ve
.IP "croak" 2
.IX Item "croak"
Cause the process to \f(CW\*(C`croak\*(C'\fR whenever a leading \f(CW\*(C`=\*(C'\fR is encountered. (See
Carp)
.Sp
.Vb 1
\& $csv\->formula ("croak");
.Ve
.IP "diag" 2
.IX Item "diag"
Report position and content of the field whenever a leading \f(CW\*(C`=\*(C'\fR is found.
The value of the field is unchanged.
.Sp
.Vb 1
\& $csv\->formula ("diag");
.Ve
.IP "empty" 2
.IX Item "empty"
Replace the content of fields that start with a \f(CW\*(C`=\*(C'\fR with the empty string.
.Sp
.Vb 2
\& $csv\->formula ("empty");
\& $csv\->formula ("");
.Ve
.IP "undef" 2
.IX Item "undef"
Replace the content of fields that start with a \f(CW\*(C`=\*(C'\fR with \f(CW\*(C`undef\*(C'\fR.
.Sp
.Vb 2
\& $csv\->formula ("undef");
\& $csv\->formula (undef);
.Ve
.IP "a callback" 2
.IX Item "a callback"
Modify the content of fields that start with a \f(CW\*(C`=\*(C'\fR with the return-value
of the callback. The original content of the field is available inside the
callback as \f(CW$_\fR;
.Sp
.Vb 2
\& # Replace all formula\*(Aqs with 42
\& $csv\->formula (sub { 42; });
\&
\& # same as $csv\->formula ("empty") but slower
\& $csv\->formula (sub { "" });
\&
\& # Allow =4+12
\& $csv\->formula (sub { s/^=(\ed+\e+\ed+)$/$1/eer });
\&
\& # Allow more complex calculations
\& $csv\->formula (sub { eval { s{^=([\-+*/0\-9()]+)$}{$1}ee }; $_ });
.Ve
.PP
All other values will give a warning and then fall back to \f(CW\*(C`diag\*(C'\fR.
.PP
\fIdecode_utf8\fR
.IX Xref "decode_utf8"
.IX Subsection "decode_utf8"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ decode_utf8 => 1 });
\& $csv\->decode_utf8 (0);
\& my $f = $csv\->decode_utf8;
.Ve
.PP
This attribute defaults to \s-1TRUE.\s0
.PP
While \fIparsing\fR, fields that are valid \s-1UTF\-8,\s0 are automatically set to be
\&\s-1UTF\-8,\s0 so that
.PP
.Vb 1
\& $csv\->parse ("\exC4\exA8\en");
.Ve
.PP
results in
.PP
.Vb 1
\& PV("\e304\e250"\e0) [UTF8 "\ex{128}"]
.Ve
.PP
Sometimes this might not be the desired behavior. To prevent these upgrades,
set this attribute to false, and the result will be
.PP
.Vb 1
\& PV("\e304\e250"\e0)
.Ve
.PP
\fIauto_diag\fR
.IX Xref "auto_diag"
.IX Subsection "auto_diag"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ auto_diag => 1 });
\& $csv\->auto_diag (2);
\& my $l = $csv\->auto_diag;
.Ve
.PP
Setting this attribute to a number between \f(CW1\fR and \f(CW9\fR causes \*(L"error_diag\*(R"
to be automatically called in void context upon errors.
.PP
In case of error \f(CW\*(C`2012 \- EOF\*(C'\fR, this call will be void.
.PP
If \f(CW\*(C`auto_diag\*(C'\fR is set to a numeric value greater than \f(CW1\fR, it will \f(CW\*(C`die\*(C'\fR
on errors instead of \f(CW\*(C`warn\*(C'\fR. If set to anything unrecognized, it will be
silently ignored.
.PP
Future extensions to this feature will include more reliable auto-detection
of \f(CW\*(C`autodie\*(C'\fR being active in the scope in which the error occurred, which
will increment the value of \f(CW\*(C`auto_diag\*(C'\fR by \f(CW1\fR the moment the error is
detected.
.PP
\fIdiag_verbose\fR
.IX Xref "diag_verbose"
.IX Subsection "diag_verbose"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ diag_verbose => 1 });
\& $csv\->diag_verbose (2);
\& my $l = $csv\->diag_verbose;
.Ve
.PP
Sets the verbosity of the output triggered by \f(CW\*(C`auto_diag\*(C'\fR. Currently this
only adds the current input-record-number (if known) to the diagnostic output,
with an indication of the position of the error.
.PP
\fIblank_is_undef\fR
.IX Xref "blank_is_undef"
.IX Subsection "blank_is_undef"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ blank_is_undef => 1 });
\& $csv\->blank_is_undef (0);
\& my $f = $csv\->blank_is_undef;
.Ve
.PP
Under normal circumstances, \f(CW\*(C`CSV\*(C'\fR data makes no distinction between quoted\-
and unquoted empty fields. These both end up in an empty string field once
read, thus
.PP
.Vb 1
\& 1,"",," ",2
.Ve
.PP
is read as
.PP
.Vb 1
\& ("1", "", "", " ", "2")
.Ve
.PP
When \fIwriting\fR \f(CW\*(C`CSV\*(C'\fR files with either \f(CW\*(C`always_quote\*(C'\fR
or \f(CW\*(C`quote_empty\*(C'\fR set, the unquoted \fIempty\fR field is the
result of an undefined value. To enable this distinction when \fIreading\fR
\&\f(CW\*(C`CSV\*(C'\fR data, the \f(CW\*(C`blank_is_undef\*(C'\fR attribute will cause unquoted empty
fields to be set to \f(CW\*(C`undef\*(C'\fR, causing the above to be parsed as
.PP
.Vb 1
\& ("1", "", undef, " ", "2")
.Ve
.PP
Note that this is specifically important when loading \f(CW\*(C`CSV\*(C'\fR fields into a
database that allows \f(CW\*(C`NULL\*(C'\fR values, as the perl equivalent for \f(CW\*(C`NULL\*(C'\fR is
\&\f(CW\*(C`undef\*(C'\fR in \s-1DBI\s0 land.
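.PP
A minimal sketch of the effect while parsing (the data is illustrative):
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ blank_is_undef => 1 });
\& $csv\->parse (q{1,"",," ",2});
\& my @f = $csv\->fields;   # ("1", "", undef, " ", "2")
.Ve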
.PP
\fIempty_is_undef\fR
.IX Xref "empty_is_undef"
.IX Subsection "empty_is_undef"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ empty_is_undef => 1 });
\& $csv\->empty_is_undef (0);
\& my $f = $csv\->empty_is_undef;
.Ve
.PP
Going one step further than \f(CW\*(C`blank_is_undef\*(C'\fR, this
attribute converts all empty fields to \f(CW\*(C`undef\*(C'\fR, so
.PP
.Vb 1
\& 1,"",," ",2
.Ve
.PP
is read as
.PP
.Vb 1
\& (1, undef, undef, " ", 2)
.Ve
.PP
Note that this affects only fields that are originally empty, not fields
that are empty after stripping allowed whitespace. \s-1YMMV.\s0
.PP
\fIallow_whitespace\fR
.IX Xref "allow_whitespace"
.IX Subsection "allow_whitespace"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ allow_whitespace => 1 });
\& $csv\->allow_whitespace (0);
\& my $f = $csv\->allow_whitespace;
.Ve
.PP
When this option is set to true, the whitespace (\f(CW\*(C`TAB\*(C'\fR's and \f(CW\*(C`SPACE\*(C'\fR's)
surrounding the separation character is removed when parsing. If either
\&\f(CW\*(C`TAB\*(C'\fR or \f(CW\*(C`SPACE\*(C'\fR is one of the three characters \f(CW\*(C`sep_char\*(C'\fR,
\&\f(CW\*(C`quote_char\*(C'\fR, or \f(CW\*(C`escape_char\*(C'\fR it will not
be considered whitespace.
.PP
Now lines like:
.PP
.Vb 1
\& 1 , "foo" , bar , 3 , zapp
.Ve
.PP
are parsed as valid \f(CW\*(C`CSV\*(C'\fR, even though it violates the \f(CW\*(C`CSV\*(C'\fR specs.
.PP
Note that \fBall\fR whitespace is stripped from both start and end of each
field. That makes it \fImore\fR than just a \fIfeature\fR for parsing bad
\&\f(CW\*(C`CSV\*(C'\fR lines, as
.PP
.Vb 1
\& 1, 2.0, 3, ape , monkey
.Ve
.PP
will now be parsed as
.PP
.Vb 1
\& ("1", "2.0", "3", "ape", "monkey")
.Ve
.PP
even if the original line was perfectly acceptable \f(CW\*(C`CSV\*(C'\fR.
.PP
\fIallow_loose_quotes\fR
.IX Xref "allow_loose_quotes"
.IX Subsection "allow_loose_quotes"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ allow_loose_quotes => 1 });
\& $csv\->allow_loose_quotes (0);
\& my $f = $csv\->allow_loose_quotes;
.Ve
.PP
By default, parsing unquoted fields containing \f(CW\*(C`quote_char\*(C'\fR
characters like
.PP
.Vb 1
\& 1,foo "bar" baz,42
.Ve
.PP
would result in parse error 2034. Though it is still bad practice to allow
this format, we cannot help the fact that some vendors make their
applications spit out lines styled this way.
.PP
If there is \fBreally\fR bad \f(CW\*(C`CSV\*(C'\fR data, like
.PP
.Vb 1
\& 1,"foo "bar" baz",42
.Ve
.PP
or
.PP
.Vb 1
\& 1,""foo bar baz"",42
.Ve
.PP
there is a way to get this data-line parsed and leave the quotes inside the
quoted field as-is. This can be achieved by setting \f(CW\*(C`allow_loose_quotes\*(C'\fR
\&\fB\s-1AND\s0\fR making sure that the \f(CW\*(C`escape_char\*(C'\fR is \fInot\fR equal
to \f(CW\*(C`quote_char\*(C'\fR.
.PP
\fIallow_loose_escapes\fR
.IX Xref "allow_loose_escapes"
.IX Subsection "allow_loose_escapes"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ allow_loose_escapes => 1 });
\& $csv\->allow_loose_escapes (0);
\& my $f = $csv\->allow_loose_escapes;
.Ve
.PP
Parsing fields that have \f(CW\*(C`escape_char\*(C'\fR characters that
escape characters that do not need to be escaped, like:
.PP
.Vb 2
\& my $csv = Text::CSV_XS\->new ({ escape_char => "\e\e" });
\& $csv\->parse (qq{1,"my bar\e\*(Aqs",baz,42});
.Ve
.PP
would result in parse error 2025. Though it is bad practice to allow this
format, this attribute enables you to treat all escape character sequences
equal.
.PP
\fIallow_unquoted_escape\fR
.IX Xref "allow_unquoted_escape"
.IX Subsection "allow_unquoted_escape"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ allow_unquoted_escape => 1 });
\& $csv\->allow_unquoted_escape (0);
\& my $f = $csv\->allow_unquoted_escape;
.Ve
.PP
A backward compatibility issue where \f(CW\*(C`escape_char\*(C'\fR differs
from \f(CW\*(C`quote_char\*(C'\fR prevents \f(CW\*(C`escape_char\*(C'\fR
from appearing in the first position of a field. If \f(CW\*(C`quote_char\*(C'\fR is
equal to the default \f(CW\*(C`"\*(C'\fR and \f(CW\*(C`escape_char\*(C'\fR is set to \f(CW\*(C`\e\*(C'\fR,
this would be illegal:
.PP
.Vb 1
\& 1,\e0,2
.Ve
.PP
Setting this attribute to \f(CW1\fR might help to overcome issues with backward
compatibility and allow this style.
.PP
\fIalways_quote\fR
.IX Xref "always_quote"
.IX Subsection "always_quote"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ always_quote => 1 });
\& $csv\->always_quote (0);
\& my $f = $csv\->always_quote;
.Ve
.PP
By default the generated fields are quoted only if they \fIneed\fR to be. For
example, if they contain the separator character. If you set this attribute
to \f(CW1\fR then \fIall\fR defined fields will be quoted. (\f(CW\*(C`undef\*(C'\fR fields are not
quoted, see \*(L"blank_is_undef\*(R"). This quite often makes it easier to handle
exported data in external applications. (Poor creatures who would be better
off using Text::CSV_XS. :)
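.PP
A minimal sketch of the difference (illustrative only): with
\&\f(CW\*(C`always_quote\*(C'\fR enabled, all defined fields get quotes, but the
\&\f(CW\*(C`undef\*(C'\fR field stays unquoted.
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ always_quote => 1 });
\& $csv\->combine ("a", "", undef, 1);
\& print $csv\->string;   # "a","",,"1"
.Ve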
.PP
\fIquote_space\fR
.IX Xref "quote_space"
.IX Subsection "quote_space"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ quote_space => 1 });
\& $csv\->quote_space (0);
\& my $f = $csv\->quote_space;
.Ve
.PP
By default, a space in a field would trigger quotation. As no rule exists
that requires this in \f(CW\*(C`CSV\*(C'\fR, nor any that forbids it, the default is true
for safety. You can exclude the space from this trigger by setting this
attribute to \f(CW0\fR.
.PP
\fIquote_empty\fR
.IX Xref "quote_empty"
.IX Subsection "quote_empty"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ quote_empty => 1 });
\& $csv\->quote_empty (0);
\& my $f = $csv\->quote_empty;
.Ve
.PP
By default the generated fields are quoted only if they \fIneed\fR to be. An
empty (defined) field does not need quotation. If you set this attribute to
\&\f(CW1\fR then \fIempty\fR defined fields will be quoted. (\f(CW\*(C`undef\*(C'\fR fields are not
quoted, see \*(L"blank_is_undef\*(R"). See also \f(CW\*(C`always_quote\*(C'\fR.
.PP
\fIquote_binary\fR
.IX Xref "quote_binary"
.IX Subsection "quote_binary"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ quote_binary => 1 });
\& $csv\->quote_binary (0);
\& my $f = $csv\->quote_binary;
.Ve
.PP
By default, all \*(L"unsafe\*(R" bytes inside a string cause the combined field to
be quoted. By setting this attribute to \f(CW0\fR, you can disable that trigger
for bytes >= \f(CW0x7F\fR.
.PP
\fIescape_null\fR
.IX Xref "escape_null quote_null"
.IX Subsection "escape_null"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ escape_null => 1 });
\& $csv\->escape_null (0);
\& my $f = $csv\->escape_null;
.Ve
.PP
By default, a \f(CW\*(C`NULL\*(C'\fR byte in a field would be escaped. This option enables
you to treat the \f(CW\*(C`NULL\*(C'\fR byte as a simple binary character in binary mode
(when \f(CW\*(C`{ binary => 1 }\*(C'\fR is set). The default is true. You can prevent
\&\f(CW\*(C`NULL\*(C'\fR escapes by setting this attribute to \f(CW0\fR.
.PP
When the \f(CW\*(C`escape_char\*(C'\fR attribute is set to undefined, this attribute will
be set to false.
.PP
The default setting will encode \*(L"=\ex00=\*(R" as
.PP
.Vb 1
\& "="0="
.Ve
.PP
With \f(CW\*(C`escape_null\*(C'\fR set to false, this will result in
.PP
.Vb 1
\& "=\ex00="
.Ve
.PP
The default when using the \f(CW\*(C`csv\*(C'\fR function is \f(CW\*(C`false\*(C'\fR.
.PP
For backward compatibility reasons, the deprecated old name \f(CW\*(C`quote_null\*(C'\fR
is still recognized.
.PP
\fIkeep_meta_info\fR
.IX Xref "keep_meta_info"
.IX Subsection "keep_meta_info"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ keep_meta_info => 1 });
\& $csv\->keep_meta_info (0);
\& my $f = $csv\->keep_meta_info;
.Ve
.PP
By default, the parsing of input records is as simple and fast as possible.
However, some parsing information \- like quotation of the original field \-
is lost in that process. Setting this flag to true enables retrieving that
information after parsing with the methods \*(L"meta_info\*(R", \*(L"is_quoted\*(R",
and \*(L"is_binary\*(R" described below. Default is false for performance.
.PP
If you set this attribute to a value greater than 9, then you can control
output quotation style as it was used in the input of the last parsed
record (unless quotation was added because of other reasons).
.PP
.Vb 5
\& my $csv = Text::CSV_XS\->new ({
\& binary => 1,
\& keep_meta_info => 1,
\& quote_space => 0,
\& });
\&
\& $csv\->parse (q{1,,"", ," ",f,"g","h""h",help,"help"});
\& my @row = $csv\->fields;
\&
\& $csv\->print (*STDOUT, \e@row);
\& # 1,,, , ,f,g,"h""h",help,help
\& $csv\->keep_meta_info (11);
\& $csv\->print (*STDOUT, \e@row);
\& # 1,,"", ," ",f,"g","h""h",help,"help"
.Ve
.PP
\fIundef_str\fR
.IX Xref "undef_str"
.IX Subsection "undef_str"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ undef_str => "\e\eN" });
\& $csv\->undef_str (undef);
\& my $s = $csv\->undef_str;
.Ve
.PP
This attribute optionally defines the output of undefined fields. The value
passed is not changed at all, so if it needs quotation, the quotation needs
to be included in the value of the attribute. Use with caution, as passing
a value like \f(CW",",,,,"""\fR will for sure mess up your output. The default
for this attribute is \f(CW\*(C`undef\*(C'\fR, meaning no special treatment.
.PP
This attribute is useful when exporting \s-1CSV\s0 data to be imported in custom
loaders, like for MySQL, that recognize special sequences for \f(CW\*(C`NULL\*(C'\fR data.
.PP
This attribute has no meaning when parsing \s-1CSV\s0 data.
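.PP
A minimal sketch of its effect on output (illustrative only):
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ undef_str => "\e\eN" });
\& $csv\->combine (1, undef, "foo");
\& print $csv\->string;   # 1,\eN,foo
.Ve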
.PP
\fIcomment_str\fR
.IX Xref "comment_str"
.IX Subsection "comment_str"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ comment_str => "#" });
\& $csv\->comment_str (undef);
\& my $s = $csv\->comment_str;
.Ve
.PP
This attribute optionally defines a string to be recognized as comment. If
this attribute is defined, all lines starting with this sequence will not
be parsed as \s-1CSV\s0 but skipped as comment.
.PP
This attribute has no meaning when generating \s-1CSV.\s0
.PP
Comment strings that start with any of the special characters/sequences are
not supported (so it cannot start with any of \*(L"sep_char\*(R", \*(L"quote_char\*(R",
\&\*(L"escape_char\*(R", \*(L"sep\*(R", \*(L"quote\*(R", or \*(L"eol\*(R").
.PP
For convenience, \f(CW\*(C`comment\*(C'\fR is an alias for \f(CW\*(C`comment_str\*(C'\fR.
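.PP
A minimal sketch (using an in-memory file handle for illustration): the line
starting with \f(CW\*(C`#\*(C'\fR is skipped, the data line is parsed.
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ comment_str => "#" });
\& open my $fh, "<", \e"# a comment\en1,2,3\en" or die $!;
\& my $row = $csv\->getline ($fh);   # [ "1", "2", "3" ]
.Ve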
.PP
\fIverbatim\fR
.IX Xref "verbatim"
.IX Subsection "verbatim"
.PP
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ verbatim => 1 });
\& $csv\->verbatim (0);
\& my $f = $csv\->verbatim;
.Ve
.PP
This is a quite controversial attribute to set, but makes some hard things
possible.
.PP
The rationale behind this attribute is to tell the parser that the normally
special characters newline (\f(CW\*(C`NL\*(C'\fR) and Carriage Return (\f(CW\*(C`CR\*(C'\fR) will not be
special when this flag is set, and be dealt with as being ordinary binary
characters. This will ease working with data with embedded newlines.
.PP
When \f(CW\*(C`verbatim\*(C'\fR is used with \*(L"getline\*(R", \*(L"getline\*(R" auto\-\f(CW\*(C`chomp\*(C'\fR's
every line.
.PP
Imagine a file format like
.PP
.Vb 1
\& M^^Hans^Janssen^Klas 2\en2A^Ja^11\-06\-2007#\er\en
.Ve
.PP
where the line ending is a very specific \f(CW"#\er\en"\fR, and the sep_char is a
\&\f(CW\*(C`^\*(C'\fR (caret). None of the fields is quoted, but embedded binary data is
likely to be present. With the specific line ending, this should not be too
hard to detect.
.PP
By default, Text::CSV_XS' parse function only knows about \f(CW"\en"\fR and
\&\f(CW"\er"\fR as legal line endings, and so has to deal with the embedded newline
as a real \f(CW\*(C`end\-of\-line\*(C'\fR, so that it can scan the next line if binary is
true and the newline is inside a quoted field. With this option, we tell
\&\*(L"parse\*(R" to treat \f(CW"\en"\fR as nothing more than a binary character.
.PP
For \*(L"parse\*(R" this means that the parser has no more idea about line ending
and \*(L"getline\*(R" \f(CW\*(C`chomp\*(C'\fRs line endings on reading.
.PP
\fItypes\fR
.IX Subsection "types"
.PP
A set of column types; the attribute is immediately passed to the \*(L"types\*(R"
method.
.PP
\fIcallbacks\fR
.IX Xref "callbacks"
.IX Subsection "callbacks"
.PP
See the \*(L"Callbacks\*(R" section below.
.PP
\fIaccessors\fR
.IX Subsection "accessors"
.PP
To sum it up,
.PP
.Vb 1
\& $csv = Text::CSV_XS\->new ();
.Ve
.PP
is equivalent to
.PP
.Vb 10
\& $csv = Text::CSV_XS\->new ({
\& eol => undef, # \er, \en, or \er\en
\& sep_char => \*(Aq,\*(Aq,
\& sep => undef,
\& quote_char => \*(Aq"\*(Aq,
\& quote => undef,
\& escape_char => \*(Aq"\*(Aq,
\& binary => 0,
\& decode_utf8 => 1,
\& auto_diag => 0,
\& diag_verbose => 0,
\& blank_is_undef => 0,
\& empty_is_undef => 0,
\& allow_whitespace => 0,
\& allow_loose_quotes => 0,
\& allow_loose_escapes => 0,
\& allow_unquoted_escape => 0,
\& always_quote => 0,
\& quote_empty => 0,
\& quote_space => 1,
\& escape_null => 1,
\& quote_binary => 1,
\& keep_meta_info => 0,
\& strict => 0,
\& skip_empty_rows => 0,
\& formula => 0,
\& verbatim => 0,
\& undef_str => undef,
\& comment_str => undef,
\& types => undef,
\& callbacks => undef,
\& });
.Ve
.PP
For all of the above mentioned flags, an accessor method is available with
which you can query the current value, or change the value
.PP
.Vb 2
\& my $quote = $csv\->quote_char;
\& $csv\->binary (1);
.Ve
.PP
It is not wise to change these settings halfway through writing \f(CW\*(C`CSV\*(C'\fR data
to a stream. If however you want to create a new stream using the available
\&\f(CW\*(C`CSV\*(C'\fR object, there is no harm in changing them.
.PP
If the \*(L"new\*(R" constructor call fails, it returns \f(CW\*(C`undef\*(C'\fR, and makes the
fail reason available through the \*(L"error_diag\*(R" method.
.PP
.Vb 2
\& $csv = Text::CSV_XS\->new ({ ecs_char => 1 }) or
\& die "".Text::CSV_XS\->error_diag ();
.Ve
.PP
\&\*(L"error_diag\*(R" will return a string like
.PP
.Vb 1
\& "INI \- Unknown attribute \*(Aqecs_char\*(Aq"
.Ve
.SS "known_attributes"
.IX Xref "known_attributes"
.IX Subsection "known_attributes"
.Vb 3
\& @attr = Text::CSV_XS\->known_attributes;
\& @attr = Text::CSV_XS::known_attributes;
\& @attr = $csv\->known_attributes;
.Ve
.PP
This method will return an ordered list of all the supported attributes as
described above. This can be useful for knowing what attributes are valid
in classes that use or extend Text::CSV_XS.
.SS "print"
.IX Xref "print"
.IX Subsection "print"
.Vb 1
\& $status = $csv\->print ($fh, $colref);
.Ve
.PP
Similar to \*(L"combine\*(R" + \*(L"string\*(R" + \*(L"print\*(R", but much more efficient.
It expects an array ref as input (not an array!) and the resulting string
is not really created, but immediately written to the \f(CW$fh\fR object,
typically an \s-1IO\s0 handle or any other object that offers a \*(L"print\*(R" method.
.PP
For performance reasons \f(CW\*(C`print\*(C'\fR does not create a result string, so all
\&\*(L"string\*(R", \*(L"status\*(R", \*(L"fields\*(R", and \*(L"error_input\*(R" methods will return
undefined information after executing this method.
.PP
If \f(CW$colref\fR is \f(CW\*(C`undef\*(C'\fR (explicit, not through a variable argument) and
\&\*(L"bind_columns\*(R" was used to specify fields to be printed, it is possible
to make performance improvements, as otherwise data would have to be copied
as arguments to the method call:
.PP
.Vb 2
\& $csv\->bind_columns (\e($foo, $bar));
\& $status = $csv\->print ($fh, undef);
.Ve
.PP
A short benchmark
.PP
.Vb 2
\& my @data = ("aa" .. "zz");
\& $csv\->bind_columns (\e(@data));
\&
\& $csv\->print ($fh, [ @data ]); # 11800 recs/sec
\& $csv\->print ($fh, \e@data ); # 57600 recs/sec
\& $csv\->print ($fh, undef ); # 48500 recs/sec
.Ve
.SS "say"
.IX Xref "say"
.IX Subsection "say"
.Vb 1
\& $status = $csv\->say ($fh, $colref);
.Ve
.PP
Like \f(CW\*(C`print\*(C'\fR, but \f(CW\*(C`eol\*(C'\fR defaults to \f(CW\*(C`$\e\*(C'\fR.
.SS "print_hr"
.IX Xref "print_hr"
.IX Subsection "print_hr"
.Vb 1
\& $csv\->print_hr ($fh, $ref);
.Ve
.PP
Provides an easy way to print a \f(CW$ref\fR (as fetched with \*(L"getline_hr\*(R")
provided the column names are set with \*(L"column_names\*(R".
.PP
It is just a wrapper method with basic parameter checks over
.PP
.Vb 1
\& $csv\->print ($fh, [ map { $ref\->{$_} } $csv\->column_names ]);
.Ve
.SS "combine"
.IX Xref "combine"
.IX Subsection "combine"
.Vb 1
\& $status = $csv\->combine (@fields);
.Ve
.PP
This method constructs a \f(CW\*(C`CSV\*(C'\fR record from \f(CW@fields\fR, returning success
or failure. Failure can result from lack of arguments or an argument that
contains an invalid character. Upon success, \*(L"string\*(R" can be called to
retrieve the resultant \f(CW\*(C`CSV\*(C'\fR string. Upon failure, the value returned by
\&\*(L"string\*(R" is undefined and \*(L"error_input\*(R" could be called to retrieve the
invalid argument.
.SS "string"
.IX Xref "string"
.IX Subsection "string"
.Vb 1
\& $line = $csv\->string ();
.Ve
.PP
This method returns the input to \*(L"parse\*(R" or the resultant \f(CW\*(C`CSV\*(C'\fR string
of \*(L"combine\*(R", whichever was called more recently.
.SS "getline"
.IX Xref "getline"
.IX Subsection "getline"
.Vb 1
\& $colref = $csv\->getline ($fh);
.Ve
.PP
This is the counterpart to \*(L"print\*(R", as \*(L"parse\*(R" is the counterpart to
\&\*(L"combine\*(R": it parses a row from the \f(CW$fh\fR handle using the \*(L"getline\*(R"
method associated with \f(CW$fh\fR and parses this row into an array ref. This
array ref is returned by the function or \f(CW\*(C`undef\*(C'\fR for failure. When \f(CW$fh\fR
does not support \f(CW\*(C`getline\*(C'\fR, you are likely to hit errors.
.PP
When fields are bound with \*(L"bind_columns\*(R" the return value is a reference
to an empty list.
.PP
The \*(L"string\*(R", \*(L"fields\*(R", and \*(L"status\*(R" methods are meaningless again.
.SS "getline_all"
.IX Xref "getline_all"
.IX Subsection "getline_all"
.Vb 3
\& $arrayref = $csv\->getline_all ($fh);
\& $arrayref = $csv\->getline_all ($fh, $offset);
\& $arrayref = $csv\->getline_all ($fh, $offset, $length);
.Ve
.PP
This will return a reference to a list of getline ($fh) results.
In this call, \f(CW\*(C`keep_meta_info\*(C'\fR is disabled. If \f(CW$offset\fR is negative, as
with \f(CW\*(C`splice\*(C'\fR, only the last \f(CW\*(C`abs ($offset)\*(C'\fR records of \f(CW$fh\fR are taken
into consideration.
.PP
Given a \s-1CSV\s0 file with 10 lines:
.PP
.Vb 10
\& lines call
\& \-\-\-\-\- \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
\& 0..9 $csv\->getline_all ($fh) # all
\& 0..9 $csv\->getline_all ($fh, 0) # all
\& 8..9 $csv\->getline_all ($fh, 8) # start at 8
\& \- $csv\->getline_all ($fh, 0, 0) # start at 0 first 0 rows
\& 0..4 $csv\->getline_all ($fh, 0, 5) # start at 0 first 5 rows
\& 4..5 $csv\->getline_all ($fh, 4, 2) # start at 4 first 2 rows
\& 8..9 $csv\->getline_all ($fh, \-2) # last 2 rows
\& 6..7 $csv\->getline_all ($fh, \-4, 2) # first 2 of last 4 rows
.Ve
.SS "getline_hr"
.IX Xref "getline_hr"
.IX Subsection "getline_hr"
The \*(L"getline_hr\*(R" and \*(L"column_names\*(R" methods work together to allow you
to have rows returned as hashrefs. You must call \*(L"column_names\*(R" first to
declare your column names.
.PP
.Vb 3
\& $csv\->column_names (qw( code name price description ));
\& $hr = $csv\->getline_hr ($fh);
\& print "Price for $hr\->{name} is $hr\->{price} EUR\en";
.Ve
.PP
\&\*(L"getline_hr\*(R" will croak if called before \*(L"column_names\*(R".
.PP
Note that \*(L"getline_hr\*(R" creates a hashref for every row and will be much
slower than the combined use of \*(L"bind_columns\*(R" and \*(L"getline\*(R" but still
offering the same easy to use hashref inside the loop:
.PP
.Vb 5
\& my @cols = @{$csv\->getline ($fh)};
\& $csv\->column_names (@cols);
\& while (my $row = $csv\->getline_hr ($fh)) {
\& print $row\->{price};
\& }
.Ve
.PP
Could easily be rewritten to the much faster:
.PP
.Vb 6
\& my @cols = @{$csv\->getline ($fh)};
\& my $row = {};
\& $csv\->bind_columns (\e@{$row}{@cols});
\& while ($csv\->getline ($fh)) {
\& print $row\->{price};
\& }
.Ve
.PP
Your mileage may vary for the size of the data and the number of rows. With
perl\-5.14.2 the comparison for a 100_000 line file with 14 columns:
.PP
.Vb 3
\& Rate hashrefs getlines
\& hashrefs 1.00/s \-\- \-76%
\& getlines 4.15/s 313% \-\-
.Ve
.SS "getline_hr_all"
.IX Xref "getline_hr_all"
.IX Subsection "getline_hr_all"
.Vb 3
\& $arrayref = $csv\->getline_hr_all ($fh);
\& $arrayref = $csv\->getline_hr_all ($fh, $offset);
\& $arrayref = $csv\->getline_hr_all ($fh, $offset, $length);
.Ve
.PP
This will return a reference to a list of getline_hr ($fh)
results. In this call, \f(CW\*(C`keep_meta_info\*(C'\fR is disabled.
.SS "parse"
.IX Xref "parse"
.IX Subsection "parse"
.Vb 1
\& $status = $csv\->parse ($line);
.Ve
.PP
This method decomposes a \f(CW\*(C`CSV\*(C'\fR string into fields, returning success or
failure. Failure can result from a lack of argument or the given \f(CW\*(C`CSV\*(C'\fR
string is improperly formatted. Upon success, \*(L"fields\*(R" can be called to
retrieve the decomposed fields. Upon failure calling \*(L"fields\*(R" will return
undefined data and \*(L"error_input\*(R" can be called to retrieve the invalid
argument.
.PP
You may use the \*(L"types\*(R" method for setting column types. See the
description of \*(L"types\*(R" below.
.PP
The \f(CW$line\fR argument is supposed to be a simple scalar. Everything else is
supposed to croak and set error 1500.
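.PP
A minimal sketch of \*(L"parse\*(R" followed by \*(L"fields\*(R" (the data is
illustrative):
.PP
.Vb 3
\& $csv\->parse (q{1,"foo bar",3})
\&     or die "parse () failed: " . $csv\->error_diag;
\& my @fields = $csv\->fields;   # ("1", "foo bar", "3")
.Ve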
.SS "fragment"
.IX Xref "fragment"
.IX Subsection "fragment"
This function tries to implement \s-1RFC7111\s0 (\s-1URI\s0 Fragment Identifiers for the
text/csv Media Type) \- https://datatracker.ietf.org/doc/html/rfc7111
.PP
.Vb 1
\& my $AoA = $csv\->fragment ($fh, $spec);
.Ve
.PP
In specifications, \f(CW\*(C`*\*(C'\fR is used to specify the \fIlast\fR item, a dash (\f(CW\*(C`\-\*(C'\fR)
to indicate a range. All indices are \f(CW1\fR\-based: the first row or column
has index \f(CW1\fR. Selections can be combined with the semi-colon (\f(CW\*(C`;\*(C'\fR).
.PP
When using this method in combination with \*(L"column_names\*(R", the returned
reference will point to a list of hashes instead of a list of lists. A
disjointed cell-based combined selection might return rows with different
numbers of columns, making the use of hashes unpredictable.
.PP
.Vb 2
\& $csv\->column_names ("Name", "Age");
\& my $AoH = $csv\->fragment ($fh, "col=3;8");
.Ve
.PP
If the \*(L"after_parse\*(R" callback is active, it is also called on every line
parsed and skipped before the fragment.
.IP "row" 2
.IX Item "row"
.Vb 4
\& row=4
\& row=5\-7
\& row=6\-*
\& row=1\-2;4;6\-*
.Ve
.IP "col" 2
.IX Item "col"
.Vb 4
\& col=2
\& col=1\-3
\& col=4\-*
\& col=1\-2;4;7\-*
.Ve
.IP "cell" 2
.IX Item "cell"
In cell-based selection, the comma (\f(CW\*(C`,\*(C'\fR) is used to pair row and column
.Sp
.Vb 1
\& cell=4,1
.Ve
.Sp
The range operator (\f(CW\*(C`\-\*(C'\fR) using \f(CW\*(C`cell\*(C'\fRs can be used to define top-left and
bottom-right \f(CW\*(C`cell\*(C'\fR location
.Sp
.Vb 1
\& cell=3,1\-4,6
.Ve
.Sp
The \f(CW\*(C`*\*(C'\fR is only allowed in the second part of a pair
.Sp
.Vb 3
\& cell=3,2\-*,2 # row 3 till end, only column 2
\& cell=3,2\-3,* # column 2 till end, only row 3
\& cell=3,2\-*,* # strip row 1 and 2, and column 1
.Ve
.Sp
Cells and cell ranges may be combined with \f(CW\*(C`;\*(C'\fR, possibly resulting in rows
with different numbers of columns
.Sp
.Vb 1
\& cell=1,1\-2,2;3,3\-4,4;1,4;4,1
.Ve
.Sp
Disjointed selections will only return selected cells. The cells that are
not specified will not be included in the returned set, not even as
\&\f(CW\*(C`undef\*(C'\fR. As an example given a \f(CW\*(C`CSV\*(C'\fR like
.Sp
.Vb 4
\& 11,12,13,...19
\& 21,22,...28,29
\& : :
\& 91,...97,98,99
.Ve
.Sp
with \f(CW\*(C`cell=1,1\-2,2;3,3\-4,4;1,4;4,1\*(C'\fR will return:
.Sp
.Vb 4
\& 11,12,14
\& 21,22
\& 33,34
\& 41,43,44
.Ve
.Sp
Overlapping cell-specs will return those cells only once. So
\&\f(CW\*(C`cell=1,1\-3,3;2,2\-4,4;2,3;4,2\*(C'\fR will return:
.Sp
.Vb 4
\& 11,12,13
\& 21,22,23,24
\& 31,32,33,34
\& 42,43,44
.Ve
.PP
\&\s-1RFC7111\s0 <https://datatracker.ietf.org/doc/html/rfc7111> does \fBnot\fR allow different
types of specs to be combined (either \f(CW\*(C`row\*(C'\fR \fIor\fR \f(CW\*(C`col\*(C'\fR \fIor\fR \f(CW\*(C`cell\*(C'\fR).
Passing an invalid fragment specification will croak and set error 2013.
.SS "column_names"
.IX Xref "column_names"
.IX Subsection "column_names"
Set the \*(L"keys\*(R" that will be used in the \*(L"getline_hr\*(R" calls. If no keys
(column names) are passed, it will return the current setting as a list.
.PP
\&\*(L"column_names\*(R" accepts a list of scalars (the column names) or a single
array_ref, so you can pass the return value from \*(L"getline\*(R" too:
.PP
.Vb 1
\& $csv\->column_names ($csv\->getline ($fh));
.Ve
.PP
\&\*(L"column_names\*(R" does \fBno\fR checking on duplicates at all, which might lead
to unexpected results. Undefined entries will be replaced with the string
\&\f(CW"\ecAUNDEF\ecA"\fR, so
.PP
.Vb 2
\& $csv\->column_names (undef, "", "name", "name");
\& $hr = $csv\->getline_hr ($fh);
.Ve
.PP
will set \f(CW\*(C`$hr\->{"\ecAUNDEF\ecA"}\*(C'\fR to the 1st field, \f(CW\*(C`$hr\->{""}\*(C'\fR to
the 2nd field, and \f(CW\*(C`$hr\->{name}\*(C'\fR to the 4th field, discarding the 3rd
field.
.PP
\&\*(L"column_names\*(R" croaks on invalid arguments.
.SS "header"
.IX Subsection "header"
This method does \s-1NOT\s0 work in perl\-5.6.x.
.PP
Parse the \s-1CSV\s0 header and set \f(CW\*(C`sep\*(C'\fR, column_names and encoding.
.PP
.Vb 3
\& my @hdr = $csv\->header ($fh);
\& $csv\->header ($fh, { sep_set => [ ";", ",", "|", "\et" ] });
\& $csv\->header ($fh, { detect_bom => 1, munge_column_names => "lc" });
.Ve
.PP
The first argument should be a file handle.
.PP
This method resets some object properties, as it is supposed to be invoked
only once per file or stream. It will leave attributes \f(CW\*(C`column_names\*(C'\fR and
\&\f(CW\*(C`bound_columns\*(C'\fR alone if setting column names is disabled. Reading headers
on previously processed objects might fail on perl\-5.8.0 and older.
.PP
Assuming that the file opened for parsing has a header, and the header does
not contain problematic characters like embedded newlines, read the first
line from the open handle then auto-detect whether the header separates the
column names with a character from the allowed separator list.
.PP
If any of the allowed separators matches, and none of the \fIother\fR allowed
separators match, set \f(CW\*(C`sep\*(C'\fR to that separator for the current
\&\s-1CSV_XS\s0 instance and use it to parse the first line, map those to lowercase,
and use that to set the instance \*(L"column_names\*(R":
.PP
.Vb 7
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\& open my $fh, "<", "file.csv";
\& binmode $fh; # for Windows
\& $csv\->header ($fh);
\& while (my $row = $csv\->getline_hr ($fh)) {
\& ...
\& }
.Ve
.PP
If the header is empty, contains more than one unique separator out of the
allowed set, contains empty fields, or contains identical fields (after
folding), it will croak with error 1010, 1011, 1012, or 1013 respectively.
.PP
If the header contains embedded newlines or is not valid \s-1CSV\s0 in any other
way, this method will croak and leave the parse error untouched.
.PP
A successful call to \f(CW\*(C`header\*(C'\fR will always set the \f(CW\*(C`sep\*(C'\fR of the
\&\f(CW$csv\fR object. This behavior can not be disabled.
.PP
\fIreturn value\fR
.IX Subsection "return value"
.PP
On error this method will croak.
.PP
In list context, the headers will be returned whether they are used to set
\&\*(L"column_names\*(R" or not.
.PP
In scalar context, the instance itself is returned. \fBNote\fR: the values as
found in the header will effectively be \fBlost\fR if \f(CW\*(C`set_column_names\*(C'\fR is
false.
.PP
\fIOptions\fR
.IX Subsection "Options"
.IP "sep_set" 2
.IX Xref "sep_set"
.IX Item "sep_set"
.Vb 1
\& $csv\->header ($fh, { sep_set => [ ";", ",", "|", "\et" ] });
.Ve
.Sp
The list of legal separators defaults to \f(CW\*(C`[ ";", "," ]\*(C'\fR and can be changed
by this option. As this is probably the most often used option, it can be
passed on its own as an unnamed argument:
.Sp
.Vb 1
\& $csv\->header ($fh, [ ";", ",", "|", "\et", "::", "\ex{2063}" ]);
.Ve
.Sp
Multi-byte sequences are allowed, both multi-character and Unicode. See
\&\f(CW\*(C`sep\*(C'\fR.
.IP "detect_bom" 2
.IX Xref "detect_bom"
.IX Item "detect_bom"
.Vb 1
\& $csv\->header ($fh, { detect_bom => 1 });
.Ve
.Sp
The default behavior is to detect if the header line starts with a \s-1BOM.\s0 If
the header has a \s-1BOM,\s0 use that to set the encoding of \f(CW$fh\fR. This default
behavior can be disabled by passing a false value to \f(CW\*(C`detect_bom\*(C'\fR.
.Sp
Supported encodings from \s-1BOM\s0 are: \s-1UTF\-8, UTF\-16BE, UTF\-16LE, UTF\-32BE,\s0 and
\&\s-1UTF\-32LE. BOM\s0 also supports \s-1UTF\-1,\s0 UTF-EBCDIC, \s-1SCSU, BOCU\-1,\s0 and \s-1GB\-18030\s0
but Encode does not (yet). \s-1UTF\-7\s0 is not supported.
.Sp
If a supported \s-1BOM\s0 was detected as start of the stream, it is stored in the
object attribute \f(CW\*(C`ENCODING\*(C'\fR.
.Sp
.Vb 1
\& my $enc = $csv\->{ENCODING};
.Ve
.Sp
The encoding is used with \f(CW\*(C`binmode\*(C'\fR on \f(CW$fh\fR.
.Sp
If the handle was opened in a (correct) encoding, this method will \fBnot\fR
alter the encoding, as it checks the leading \fBbytes\fR of the first line. In
case the stream starts with a decoded \s-1BOM\s0 (\f(CW\*(C`U+FEFF\*(C'\fR), \f(CW\*(C`{ENCODING}\*(C'\fR will be
\&\f(CW""\fR (empty) instead of the default \f(CW\*(C`undef\*(C'\fR.
.IP "munge_column_names" 2
.IX Xref "munge_column_names"
.IX Item "munge_column_names"
This option offers the means to modify the column names into something that
is most useful to the application. The default is to map all column names
to lower case.
.Sp
.Vb 1
\& $csv\->header ($fh, { munge_column_names => "lc" });
.Ve
.Sp
The following values are available:
.Sp
.Vb 6
\& lc \- lower case
\& uc \- upper case
\& db \- valid DB field names
\& none \- do not change
\& \e%hash \- supply a mapping
\& \e&cb \- supply a callback
.Ve
.RS 2
.IP "Lower case" 2
.IX Item "Lower case"
.Vb 1
\& $csv\->header ($fh, { munge_column_names => "lc" });
.Ve
.Sp
The header is changed to all lower-case
.Sp
.Vb 1
\& $_ = lc;
.Ve
.IP "Upper case" 2
.IX Item "Upper case"
.Vb 1
\& $csv\->header ($fh, { munge_column_names => "uc" });
.Ve
.Sp
The header is changed to all upper-case
.Sp
.Vb 1
\& $_ = uc;
.Ve
.IP "Literal" 2
.IX Item "Literal"
.Vb 1
\& $csv\->header ($fh, { munge_column_names => "none" });
.Ve
.IP "Hash" 2
.IX Item "Hash"
.Vb 1
\& $csv\->header ($fh, { munge_column_names => { foo => "sombrero" }});
.Ve
.Sp
If a value does not exist, the original value is used unchanged.
.IP "Database" 2
.IX Item "Database"
.Vb 1
\& $csv\->header ($fh, { munge_column_names => "db" });
.Ve
.RS 2
.IP "\-" 2
lower-case
.IP "\-" 2
all sequences of non-word characters are replaced with an underscore
.IP "\-" 2
all leading underscores are removed
.RE
.RS 2
.Sp
.Vb 1
\& $_ = lc (s/\eW+/_/gr =~ s/^_+//r);
.Ve
.RE
.IP "Callback" 2
.IX Item "Callback"
.Vb 3
\& $csv\->header ($fh, { munge_column_names => sub { fc } });
\& $csv\->header ($fh, { munge_column_names => sub { "column_".$col++ } });
\& $csv\->header ($fh, { munge_column_names => sub { lc (s/\eW+/_/gr) } });
.Ve
.Sp
As this callback is called in a \f(CW\*(C`map\*(C'\fR, you can use \f(CW$_\fR directly.
.RE
.RS 2
.RE
.IP "set_column_names" 2
.IX Xref "set_column_names"
.IX Item "set_column_names"
.Vb 1
\& $csv\->header ($fh, { set_column_names => 1 });
.Ve
.Sp
The default is to set the instance's column names using \*(L"column_names\*(R" if
the method is successful, so subsequent calls to \*(L"getline_hr\*(R" can return
a hash. Setting the column names can be disabled by passing a false value for
this option.
.Sp
As described in \*(L"return value\*(R" above, content is lost in scalar context.
.PP
\fIValidation\fR
.IX Subsection "Validation"
.PP
When receiving \s-1CSV\s0 files from external sources, this method can be used to
protect against changes in the layout by restricting to known headers (and
typos in the header fields).
.PP
.Vb 10
\& my %known = (
\& "record key" => "c_rec",
\& "rec id" => "c_rec",
\& "id_rec" => "c_rec",
\& "kode" => "code",
\& "code" => "code",
\& "vaule" => "value",
\& "value" => "value",
\& );
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\& open my $fh, "<", $source or die "$source: $!";
\& $csv\->header ($fh, { munge_column_names => sub {
\& s/\es+$//;
\& s/^\es+//;
\& $known{lc $_} or die "Unknown column \*(Aq$_\*(Aq in $source";
\& }});
\& while (my $row = $csv\->getline_hr ($fh)) {
\& say join "\et", $row\->{c_rec}, $row\->{code}, $row\->{value};
\& }
.Ve
.SS "bind_columns"
.IX Xref "bind_columns"
.IX Subsection "bind_columns"
Takes a list of scalar references to be used for output with \*(L"print\*(R" or
to store in the fields fetched by \*(L"getline\*(R". When you do not pass enough
references to store the fetched fields in, \*(L"getline\*(R" will fail with error
\&\f(CW3006\fR. If you pass more than there are fields to return, the content of
the remaining references is left untouched.
.PP
.Vb 4
\& $csv\->bind_columns (\e$code, \e$name, \e$price, \e$description);
\& while ($csv\->getline ($fh)) {
\& print "The price of a $name is \ex{20ac} $price\en";
\& }
.Ve
.PP
To reset or clear all column binding, call \*(L"bind_columns\*(R" with the single
argument \f(CW\*(C`undef\*(C'\fR. This will also clear column names.
.PP
.Vb 1
\& $csv\->bind_columns (undef);
.Ve
.PP
If no arguments are passed at all, \*(L"bind_columns\*(R" will return the list of
current bindings or \f(CW\*(C`undef\*(C'\fR if no binds are active.
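.PP
A small illustrative sketch (not from the original examples) of inspecting
the current bindings:
.PP
.Vb 2
\& my @bound = $csv\->bind_columns;
\& defined $bound[0] and printf "%d columns are bound\en", scalar @bound;
.Ve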
.PP
Note that in parsing with \f(CW\*(C`bind_columns\*(C'\fR, the fields are set on the fly.
That implies that if the third field of a row causes an error (or this row
has just two fields where the previous row had more), the first two fields
already have been assigned the values of the current row, while the rest of
the fields will still hold the values of the previous row. If you want the
parser to fail in these cases, use the \f(CW\*(C`strict\*(C'\fR attribute.
.SS "eof"
.IX Xref "eof"
.IX Subsection "eof"
.Vb 1
\& $eof = $csv\->eof ();
.Ve
.PP
If \*(L"parse\*(R" or \*(L"getline\*(R" was used with an \s-1IO\s0 stream, this method will
return true (1) if the last call hit end of file, otherwise it will return
false (''). This is useful to see the difference between a failure and end
of file.
.PP
Note that if the parsing of the last line caused an error, \f(CW\*(C`eof\*(C'\fR is still
true. That means that if you are \fInot\fR using \*(L"auto_diag\*(R", an idiom like
.PP
.Vb 4
\& while (my $row = $csv\->getline ($fh)) {
\& # ...
\& }
\& $csv\->eof or $csv\->error_diag;
.Ve
.PP
will \fInot\fR report the error. You would have to change that to
.PP
.Vb 4
\& while (my $row = $csv\->getline ($fh)) {
\& # ...
\& }
\& +$csv\->error_diag and $csv\->error_diag;
.Ve
.SS "types"
.IX Xref "types"
.IX Subsection "types"
.Vb 1
\& $csv\->types (\e@tref);
.Ve
.PP
This method is used to force (all) columns to be of a given type. For
example, if you have an integer column, two columns with doubles and a
string column, then you might do a
.PP
.Vb 4
\& $csv\->types ([Text::CSV_XS::IV (),
\& Text::CSV_XS::NV (),
\& Text::CSV_XS::NV (),
\& Text::CSV_XS::PV ()]);
.Ve
.PP
Column types are used only for \fIdecoding\fR columns while parsing, in other
words by the \*(L"parse\*(R" and \*(L"getline\*(R" methods.
.PP
You can unset column types by doing a
.PP
.Vb 1
\& $csv\->types (undef);
.Ve
.PP
or fetch the current type settings with
.PP
.Vb 1
\& $types = $csv\->types ();
.Ve
.IP "\s-1IV\s0" 4
.IX Xref "IV"
.IX Item "IV"
Set field type to integer.
.IP "\s-1NV\s0" 4
.IX Xref "NV"
.IX Item "NV"
Set field type to numeric/float.
.IP "\s-1PV\s0" 4
.IX Xref "PV"
.IX Item "PV"
Set field type to string.
.SS "fields"
.IX Xref "fields"
.IX Subsection "fields"
.Vb 1
\& @columns = $csv\->fields ();
.Ve
.PP
This method returns the input to \*(L"combine\*(R" or the resultant decomposed
fields of a successful \*(L"parse\*(R", whichever was called more recently.
.PP
Note that the return value is undefined after using \*(L"getline\*(R", which does
not fill the data structures returned by \*(L"parse\*(R".
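.PP
A short sketch of the \*(L"parse\*(R"/\*(L"fields\*(R" combination:
.PP
.Vb 2
\& $csv\->parse (q{1,"foo bar",3.14}) or $csv\->error_diag;
\& my @columns = $csv\->fields;      # ("1", "foo bar", "3.14")
.Ve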
.SS "meta_info"
.IX Xref "meta_info"
.IX Subsection "meta_info"
.Vb 1
\& @flags = $csv\->meta_info ();
.Ve
.PP
This method returns the \*(L"flags\*(R" of the input to \*(L"combine\*(R" or the flags of
the resultant decomposed fields of \*(L"parse\*(R", whichever was called more
recently.
.PP
For each field, a meta_info field will hold flags that inform something
about the field returned by the \*(L"fields\*(R" method or passed to the
\&\*(L"combine\*(R" method. The flags are bit\-wise\-\f(CW\*(C`or\*(C'\fR'd like:
.ie n .IP """ ""0x0001" 2
.el .IP "\f(CW \fR0x0001" 2
.IX Item " 0x0001"
The field was quoted.
.ie n .IP """ ""0x0002" 2
.el .IP "\f(CW \fR0x0002" 2
.IX Item " 0x0002"
The field was binary.
.PP
See the \f(CW\*(C`is_***\*(C'\fR methods below.
.SS "is_quoted"
.IX Xref "is_quoted"
.IX Subsection "is_quoted"
.Vb 1
\& my $quoted = $csv\->is_quoted ($column_idx);
.Ve
.PP
where \f(CW$column_idx\fR is the (zero-based) index of the column in the last
result of \*(L"parse\*(R".
.PP
This returns a true value if the data in the indicated column was enclosed
in \f(CW\*(C`quote_char\*(C'\fR quotes. This might be important for fields
where content \f(CW\*(C`,20070108,\*(C'\fR is to be treated as a numeric value, and where
\&\f(CW\*(C`,"20070108",\*(C'\fR is explicitly marked as character string data.
.PP
This method is only valid when \*(L"keep_meta_info\*(R" is set to a true value.
.SS "is_binary"
.IX Xref "is_binary"
.IX Subsection "is_binary"
.Vb 1
\& my $binary = $csv\->is_binary ($column_idx);
.Ve
.PP
where \f(CW$column_idx\fR is the (zero-based) index of the column in the last
result of \*(L"parse\*(R".
.PP
This returns a true value if the data in the indicated column contained any
byte in the range \f(CW\*(C`[\ex00\-\ex08,\ex10\-\ex1F,\ex7F\-\exFF]\*(C'\fR.
.PP
This method is only valid when \*(L"keep_meta_info\*(R" is set to a true value.
.SS "is_missing"
.IX Xref "is_missing"
.IX Subsection "is_missing"
.Vb 1
\& my $missing = $csv\->is_missing ($column_idx);
.Ve
.PP
where \f(CW$column_idx\fR is the (zero-based) index of the column in the last
result of \*(L"getline_hr\*(R".
.PP
.Vb 4
\& $csv\->keep_meta_info (1);
\& while (my $hr = $csv\->getline_hr ($fh)) {
\& $csv\->is_missing (0) and next; # This was an empty line
\& }
.Ve
.PP
When using \*(L"getline_hr\*(R", it is impossible to tell if the parsed fields
are \f(CW\*(C`undef\*(C'\fR because they were not filled in the \f(CW\*(C`CSV\*(C'\fR stream or because
they were not read at all, as \fBall\fR the fields defined by \*(L"column_names\*(R"
are set in the hash-ref. If you still need to know if all fields in each
row are provided, you should enable \f(CW\*(C`keep_meta_info\*(C'\fR so
you can check the flags.
.PP
If \f(CW\*(C`keep_meta_info\*(C'\fR is \f(CW\*(C`false\*(C'\fR, \f(CW\*(C`is_missing\*(C'\fR will
always return \f(CW\*(C`undef\*(C'\fR, regardless of \f(CW$column_idx\fR being valid or not. If
this attribute is \f(CW\*(C`true\*(C'\fR it will return either \f(CW0\fR (the field is present)
or \f(CW1\fR (the field is missing).
.PP
A special case is the empty line. If the line is completely empty \- after
dealing with the flags \- this is still a valid \s-1CSV\s0 line: it is a record of
just one single empty field. However, if \f(CW\*(C`keep_meta_info\*(C'\fR is set, invoking
\&\f(CW\*(C`is_missing\*(C'\fR with index \f(CW0\fR will now return true.
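.PP
An illustrative sketch (the column names and stream are hypothetical):
.PP
.Vb 6
\& my $csv = Text::CSV_XS\->new ({ keep_meta_info => 1, auto_diag => 1 });
\& $csv\->column_names (qw( code name price ));
\& while (my $hr = $csv\->getline_hr ($fh)) {
\&     $csv\->is_missing (2) and
\&         warn "price missing in record ", $csv\->record_number, "\en";
\&     }
.Ve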
.SS "status"
.IX Xref "status"
.IX Subsection "status"
.Vb 1
\& $status = $csv\->status ();
.Ve
.PP
This method returns the status of the last invoked \*(L"combine\*(R" or \*(L"parse\*(R"
call. Status is success (true: \f(CW1\fR) or failure (false: \f(CW\*(C`undef\*(C'\fR or \f(CW0\fR).
.PP
Note that as this only keeps track of the status of the above-mentioned
methods, you are probably looking for \f(CW\*(C`error_diag\*(C'\fR instead.
.SS "error_input"
.IX Xref "error_input"
.IX Subsection "error_input"
.Vb 1
\& $bad_argument = $csv\->error_input ();
.Ve
.PP
This method returns the erroneous argument (if it exists) of \*(L"combine\*(R" or
\&\*(L"parse\*(R", whichever was called more recently. If the last invocation was
successful, \f(CW\*(C`error_input\*(C'\fR will return \f(CW\*(C`undef\*(C'\fR.
.PP
Depending on the type of error, it \fImight\fR also hold the data for the last
error-input of \*(L"getline\*(R".
.SS "error_diag"
.IX Xref "error_diag"
.IX Subsection "error_diag"
.Vb 5
\& Text::CSV_XS\->error_diag ();
\& $csv\->error_diag ();
\& $error_code = 0 + $csv\->error_diag ();
\& $error_str = "" . $csv\->error_diag ();
\& ($cde, $str, $pos, $rec, $fld) = $csv\->error_diag ();
.Ve
.PP
If (and only if) an error occurred, this function returns the diagnostics
of that error.
.PP
If called in void context, this will print the internal error code and the
associated error message to \s-1STDERR.\s0
.PP
If called in list context, this will return the error code and the error
message in that order. If the last error was from parsing, the rest of the
values returned are a best guess at the location within the line that was
being parsed. Their values are 1\-based. The position currently is the index
of the byte at which the parsing failed in the current record. It might
change to be the index of the current character in a later release. The
record is the index of the record parsed by the csv instance. The field
number is the index of the field the parser thinks it is currently trying
to parse. See \fIexamples/csv\-check\fR for how this can be used.
.PP
If called in scalar context, it will return the diagnostics in a single
scalar, a\-la \f(CW$!\fR. It will contain the error code in numeric context, and
the diagnostics message in string context.
.PP
When called as a class method or a direct function call, the diagnostics
are that of the last \*(L"new\*(R" call.
.SS "record_number"
.IX Xref "record_number"
.IX Subsection "record_number"
.Vb 1
\& $recno = $csv\->record_number ();
.Ve
.PP
Returns the number of records parsed by this csv instance. This value should
be more accurate than \f(CW$.\fR when embedded newlines come into play. Records
written by this instance are not counted.
.SS "SetDiag"
.IX Xref "SetDiag"
.IX Subsection "SetDiag"
.Vb 1
\& $csv\->SetDiag (0);
.Ve
.PP
Use this method to reset the diagnostics if you are dealing with errors.
.SH "FUNCTIONS"
.IX Header "FUNCTIONS"
.SS "csv"
.IX Xref "csv"
.IX Subsection "csv"
This function is not exported by default and should be explicitly requested:
.PP
.Vb 1
\& use Text::CSV_XS qw( csv );
.Ve
.PP
This is a high-level function that aims at simple (user) interfaces. This
can be used to read/parse a \f(CW\*(C`CSV\*(C'\fR file or stream (the default behavior) or
to produce a file or write to a stream (define the \f(CW\*(C`out\*(C'\fR attribute). It
returns an array\- or hash-reference on parsing (or \f(CW\*(C`undef\*(C'\fR on fail) or the
numeric value of \*(L"error_diag\*(R" on writing. When this function fails you
can get to the error using the class call to \*(L"error_diag\*(R"
.PP
.Vb 2
\& my $aoa = csv (in => "test.csv") or
\& die Text::CSV_XS\->error_diag;
.Ve
.PP
This function takes the arguments as key-value pairs. This can be passed as
a list or as an anonymous hash:
.PP
.Vb 2
\& my $aoa = csv ( in => "test.csv", sep_char => ";");
\& my $aoh = csv ({ in => $fh, headers => "auto" });
.Ve
.PP
The arguments passed consist of two parts: the arguments to \*(L"csv\*(R" itself
and the optional attributes to the \f(CW\*(C`CSV\*(C'\fR object used inside the function
as enumerated and explained in \*(L"new\*(R".
.PP
If not overridden, the default option used for \s-1CSV\s0 is
.PP
.Vb 2
\& auto_diag => 1
\& escape_null => 0
.Ve
.PP
The option that is always set and cannot be altered is
.PP
.Vb 1
\& binary => 1
.Ve
.PP
As this function will likely be used in one-liners, it allows \f(CW\*(C`quote\*(C'\fR to
be abbreviated as \f(CW\*(C`quo\*(C'\fR, and \f(CW\*(C`escape_char\*(C'\fR to be abbreviated as \f(CW\*(C`esc\*(C'\fR
or \f(CW\*(C`escape\*(C'\fR.
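.PP
For example (a sketch with an arbitrary file name):
.PP
.Vb 1
\& my $aoa = csv (in => "file.csv", quo => "\*(Aq", esc => "\e\e");
.Ve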
.PP
Alternative invocations:
.PP
.Vb 1
\& my $aoa = Text::CSV_XS::csv (in => "file.csv");
\&
\& my $csv = Text::CSV_XS\->new ();
\& my $aoa = $csv\->csv (in => "file.csv");
.Ve
.PP
In the latter case, the object attributes are used from the existing object
and the attribute arguments in the function call are ignored:
.PP
.Vb 2
\& my $csv = Text::CSV_XS\->new ({ sep_char => ";" });
\& my $aoh = $csv\->csv (in => "file.csv", bom => 1);
.Ve
.PP
will parse using \f(CW\*(C`;\*(C'\fR as \f(CW\*(C`sep_char\*(C'\fR, not \f(CW\*(C`,\*(C'\fR.
.PP
\fIin\fR
.IX Xref "in"
.IX Subsection "in"
.PP
Used to specify the source. \f(CW\*(C`in\*(C'\fR can be a file name (e.g. \f(CW"file.csv"\fR),
which will be opened for reading and closed when finished, a file handle
(e.g. \f(CW$fh\fR or \f(CW\*(C`FH\*(C'\fR), a reference to a glob (e.g. \f(CW\*(C`\e*ARGV\*(C'\fR), the glob
itself (e.g. \f(CW*STDIN\fR), or a reference to a scalar (e.g. \f(CW\*(C`\eq{1,2,"csv"}\*(C'\fR).
.PP
When used with \*(L"out\*(R", \f(CW\*(C`in\*(C'\fR should be a reference to a \s-1CSV\s0 structure (AoA
or AoH) or a CODE-ref that returns an array-reference or a hash-reference.
The code-ref will be invoked with no arguments.
.PP
.Vb 1
\& my $aoa = csv (in => "file.csv");
\&
\& open my $fh, "<", "file.csv";
\& my $aoa = csv (in => $fh);
\&
\& my $csv = [ [qw( Foo Bar )], [ 1, 2 ], [ 2, 3 ]];
\& my $err = csv (in => $csv, out => "file.csv");
.Ve
.PP
If called in void context without the \*(L"out\*(R" attribute, the resulting ref
will be used as input to a subsequent call to csv:
.PP
.Vb 1
\& csv (in => "file.csv", filter => { 2 => sub { length > 2 }})
.Ve
.PP
will be a shortcut to
.PP
.Vb 1
\& csv (in => csv (in => "file.csv", filter => { 2 => sub { length > 2 }}))
.Ve
.PP
where, in the absence of the \f(CW\*(C`out\*(C'\fR attribute, this is a shortcut to
.PP
.Vb 2
\& csv (in => csv (in => "file.csv", filter => { 2 => sub { length > 2 }}),
\& out => *STDOUT)
.Ve
.PP
\fIout\fR
.IX Xref "out"
.IX Subsection "out"
.PP
.Vb 8
\& csv (in => $aoa, out => "file.csv");
\& csv (in => $aoa, out => $fh);
\& csv (in => $aoa, out => STDOUT);
\& csv (in => $aoa, out => *STDOUT);
\& csv (in => $aoa, out => \e*STDOUT);
\& csv (in => $aoa, out => \emy $data);
\& csv (in => $aoa, out => undef);
\& csv (in => $aoa, out => \e"skip");
\&
\& csv (in => $fh, out => \e@aoa);
\& csv (in => $fh, out => \e@aoh, bom => 1);
\& csv (in => $fh, out => \e%hsh, key => "key");
.Ve
.PP
In output mode, the default \s-1CSV\s0 options when producing \s-1CSV\s0 are
.PP
.Vb 1
\& eol => "\er\en"
.Ve
.PP
The \*(L"fragment\*(R" attribute is ignored in output mode.
.PP
\&\f(CW\*(C`out\*(C'\fR can be a file name (e.g. \f(CW"file.csv"\fR), which will be opened for
writing and closed when finished, a file handle (e.g. \f(CW$fh\fR or \f(CW\*(C`FH\*(C'\fR), a
reference to a glob (e.g. \f(CW\*(C`\e*STDOUT\*(C'\fR), the glob itself (e.g. \f(CW*STDOUT\fR),
or a reference to a scalar (e.g. \f(CW\*(C`\emy $data\*(C'\fR).
.PP
.Vb 3
\& csv (in => sub { $sth\->fetch }, out => "dump.csv");
\& csv (in => sub { $sth\->fetchrow_hashref }, out => "dump.csv",
\& headers => $sth\->{NAME_lc});
.Ve
.PP
When a code-ref is used for \f(CW\*(C`in\*(C'\fR, the output is generated per invocation,
so no buffering is involved. This implies that there is no size restriction
on the number of records. The \f(CW\*(C`csv\*(C'\fR function ends when the coderef returns
a false value.
.PP
If \f(CW\*(C`out\*(C'\fR is set to a reference of the literal string \f(CW"skip"\fR, the output
will be suppressed completely, which might be useful in combination with a
filter for side effects only.
.PP
.Vb 4
\& my %cache;
\& csv (in => "dump.csv",
\& out => \e"skip",
\& on_in => sub { $cache{$_[1][1]}++ });
.Ve
.PP
Currently, setting \f(CW\*(C`out\*(C'\fR to any false value (\f(CW\*(C`undef\*(C'\fR, \f(CW""\fR, 0) will be
equivalent to \f(CW\*(C`\e"skip"\*(C'\fR.
.PP
If the \f(CW\*(C`in\*(C'\fR argument points to something to parse, and the \f(CW\*(C`out\*(C'\fR is set to
a reference to an \f(CW\*(C`ARRAY\*(C'\fR or a \f(CW\*(C`HASH\*(C'\fR, the output is appended to the data
in the existing reference. The result of the parse should match what exists
in the reference passed. This might come in handy when you have to parse a set
of files with similar content (like data stored per period) and you want to
collect that into a single data structure:
.PP
.Vb 2
\& my %hash;
\& csv (in => $_, out => \e%hash, key => "id") for sort glob "foo\-[0\-9]*.csv";
\&
\& my @list; # List of arrays
\& csv (in => $_, out => \e@list) for sort glob "foo\-[0\-9]*.csv";
\&
\& my @list; # List of hashes
\& csv (in => $_, out => \e@list, bom => 1) for sort glob "foo\-[0\-9]*.csv";
.Ve
.PP
\fIencoding\fR
.IX Xref "encoding"
.IX Subsection "encoding"
.PP
If passed, it should be an encoding accepted by the \f(CW\*(C`:encoding()\*(C'\fR option
to \f(CW\*(C`open\*(C'\fR. There is no default value. This attribute does not work in perl
5.6.x. \f(CW\*(C`encoding\*(C'\fR can be abbreviated to \f(CW\*(C`enc\*(C'\fR for ease of use in command
line invocations.
.PP
If \f(CW\*(C`encoding\*(C'\fR is set to the literal value \f(CW"auto"\fR, the method \*(L"header\*(R"
will be invoked on the opened stream to check if there is a \s-1BOM\s0 and set the
encoding accordingly. This is equal to passing a true value in the option
\&\f(CW\*(C`detect_bom\*(C'\fR.
.PP
Encodings can be stacked, as supported by \f(CW\*(C`binmode\*(C'\fR:
.PP
.Vb 6
\& # Using PerlIO::via::gzip
\& csv (in => \e@csv,
\& out => "test.csv:via.gz",
\& encoding => ":via(gzip):encoding(utf\-8)",
\& );
\& $aoa = csv (in => "test.csv:via.gz", encoding => ":via(gzip)");
\&
\& # Using PerlIO::gzip
\& csv (in => \e@csv,
\& out => "test.csv:via.gz",
\& encoding => ":gzip:encoding(utf\-8)",
\& );
\& $aoa = csv (in => "test.csv:gzip.gz", encoding => ":gzip");
.Ve
.PP
\fIdetect_bom\fR
.IX Xref "detect_bom"
.IX Subsection "detect_bom"
.PP
If \f(CW\*(C`detect_bom\*(C'\fR is given, the method \*(L"header\*(R" will be invoked on the
opened stream to check if there is a \s-1BOM\s0 and set the encoding accordingly.
.PP
\&\f(CW\*(C`detect_bom\*(C'\fR can be abbreviated to \f(CW\*(C`bom\*(C'\fR.
.PP
This is the same as setting \f(CW\*(C`encoding\*(C'\fR to \f(CW"auto"\fR.
.PP
Note that as the method \*(L"header\*(R" is invoked, its default is to also set
the headers.
.PP
\fIheaders\fR
.IX Xref "headers"
.IX Subsection "headers"
.PP
If this attribute is not given, the default behavior is to produce an array
of arrays.
.PP
If \f(CW\*(C`headers\*(C'\fR is supplied, it should be an anonymous list of column names,
an anonymous hashref, a coderef, or a literal flag: \f(CW\*(C`auto\*(C'\fR, \f(CW\*(C`lc\*(C'\fR, \f(CW\*(C`uc\*(C'\fR,
or \f(CW\*(C`skip\*(C'\fR.
.IP "skip" 2
.IX Xref "skip"
.IX Item "skip"
When \f(CW\*(C`skip\*(C'\fR is used, the header will not be included in the output.
.Sp
.Vb 1
\& my $aoa = csv (in => $fh, headers => "skip");
.Ve
.IP "auto" 2
.IX Xref "auto"
.IX Item "auto"
If \f(CW\*(C`auto\*(C'\fR is used, the first line of the \f(CW\*(C`CSV\*(C'\fR source will be read as the
list of field headers and used to produce an array of hashes.
.Sp
.Vb 1
\& my $aoh = csv (in => $fh, headers => "auto");
.Ve
.IP "lc" 2
.IX Xref "lc"
.IX Item "lc"
If \f(CW\*(C`lc\*(C'\fR is used, the first line of the \f(CW\*(C`CSV\*(C'\fR source will be read as the
list of field headers mapped to lower case and used to produce an array of
hashes. This is a variation of \f(CW\*(C`auto\*(C'\fR.
.Sp
.Vb 1
\& my $aoh = csv (in => $fh, headers => "lc");
.Ve
.IP "uc" 2
.IX Xref "uc"
.IX Item "uc"
If \f(CW\*(C`uc\*(C'\fR is used, the first line of the \f(CW\*(C`CSV\*(C'\fR source will be read as the
list of field headers mapped to upper case and used to produce an array of
hashes. This is a variation of \f(CW\*(C`auto\*(C'\fR.
.Sp
.Vb 1
\& my $aoh = csv (in => $fh, headers => "uc");
.Ve
.IP "\s-1CODE\s0" 2
.IX Xref "CODE"
.IX Item "CODE"
If a coderef is used, the first line of the \f(CW\*(C`CSV\*(C'\fR source will be read as
the list of mangled field headers in which each field is passed as the only
argument to the coderef. This list is used to produce an array of hashes.
.Sp
.Vb 2
\& my $aoh = csv (in => $fh,
\& headers => sub { lc ($_[0]) =~ s/kode/code/gr });
.Ve
.Sp
this example is a variation of using \f(CW\*(C`lc\*(C'\fR where all occurrences of \f(CW\*(C`kode\*(C'\fR
are replaced with \f(CW\*(C`code\*(C'\fR.
.IP "\s-1ARRAY\s0" 2
.IX Xref "ARRAY"
.IX Item "ARRAY"
If \f(CW\*(C`headers\*(C'\fR is an anonymous list, the entries in the list will be used
as field names. The first line is considered data instead of headers.
.Sp
.Vb 2
\& my $aoh = csv (in => $fh, headers => [qw( Foo Bar )]);
\& csv (in => $aoa, out => $fh, headers => [qw( code description price )]);
.Ve
.IP "\s-1HASH\s0" 2
.IX Xref "HASH"
.IX Item "HASH"
If \f(CW\*(C`headers\*(C'\fR is a hash reference, this implies \f(CW\*(C`auto\*(C'\fR, but header fields
that exist as key in the hashref will be replaced by the value for that
key. Given a \s-1CSV\s0 file like
.Sp
.Vb 2
\& post\-kode,city,name,id number,fubble
\& 1234AA,Duckstad,Donald,13,"X313DF"
.Ve
.Sp
using
.Sp
.Vb 1
\& csv (headers => { "post\-kode" => "pc", "id number" => "ID" }, ...
.Ve
.Sp
will return an entry like
.Sp
.Vb 6
\& { pc => "1234AA",
\& city => "Duckstad",
\& name => "Donald",
\& ID => "13",
\& fubble => "X313DF",
\& }
.Ve
.PP
See also \f(CW\*(C`munge_column_names\*(C'\fR and
\&\f(CW\*(C`set_column_names\*(C'\fR.
.PP
\fImunge_column_names\fR
.IX Xref "munge_column_names"
.IX Subsection "munge_column_names"
.PP
If \f(CW\*(C`munge_column_names\*(C'\fR is set, the method \*(L"header\*(R" is invoked on the
opened stream with all matching arguments to detect and set the headers.
.PP
\&\f(CW\*(C`munge_column_names\*(C'\fR can be abbreviated to \f(CW\*(C`munge\*(C'\fR.
.PP
\fIkey\fR
.IX Xref "key"
.IX Subsection "key"
.PP
If passed, will default \f(CW\*(C`headers\*(C'\fR to \f(CW"auto"\fR and return a
hashref instead of an array of hashes. Allowed values are simple scalars or
array-references where the first element is the joiner and the rest are the
fields to join to combine the key.
.PP
.Vb 2
\& my $ref = csv (in => "test.csv", key => "code");
\& my $ref = csv (in => "test.csv", key => [ ":" => "code", "color" ]);
.Ve
.PP
with test.csv like
.PP
.Vb 4
\& code,product,price,color
\& 1,pc,850,gray
\& 2,keyboard,12,white
\& 3,mouse,5,black
.Ve
.PP
the first example will return
.PP
.Vb 10
\& { 1 => {
\& code => 1,
\& color => \*(Aqgray\*(Aq,
\& price => 850,
\& product => \*(Aqpc\*(Aq
\& },
\& 2 => {
\& code => 2,
\& color => \*(Aqwhite\*(Aq,
\& price => 12,
\& product => \*(Aqkeyboard\*(Aq
\& },
\& 3 => {
\& code => 3,
\& color => \*(Aqblack\*(Aq,
\& price => 5,
\& product => \*(Aqmouse\*(Aq
\& }
\& }
.Ve
.PP
the second example will return
.PP
.Vb 10
\& { "1:gray" => {
\& code => 1,
\& color => \*(Aqgray\*(Aq,
\& price => 850,
\& product => \*(Aqpc\*(Aq
\& },
\& "2:white" => {
\& code => 2,
\& color => \*(Aqwhite\*(Aq,
\& price => 12,
\& product => \*(Aqkeyboard\*(Aq
\& },
\& "3:black" => {
\& code => 3,
\& color => \*(Aqblack\*(Aq,
\& price => 5,
\& product => \*(Aqmouse\*(Aq
\& }
\& }
.Ve
.PP
The \f(CW\*(C`key\*(C'\fR attribute can be combined with \f(CW\*(C`headers\*(C'\fR for \f(CW\*(C`CSV\*(C'\fR
data that has no header line, like
.PP
.Vb 5
\& my $ref = csv (
\& in => "foo.csv",
\& headers => [qw( c_foo foo bar description stock )],
\& key => "c_foo",
\& );
.Ve
.PP
\fIvalue\fR
.IX Xref "value"
.IX Subsection "value"
.PP
Used to create key-value hashes.
.PP
Only allowed when \f(CW\*(C`key\*(C'\fR is valid. A \f(CW\*(C`value\*(C'\fR can be either a single column
label or an anonymous list of column labels. In the first case, the value
will be a simple scalar value, in the latter case, it will be a hashref.
.PP
.Vb 8
\& my $ref = csv (in => "test.csv", key => "code",
\& value => "price");
\& my $ref = csv (in => "test.csv", key => "code",
\& value => [ "product", "price" ]);
\& my $ref = csv (in => "test.csv", key => [ ":" => "code", "color" ],
\& value => "price");
\& my $ref = csv (in => "test.csv", key => [ ":" => "code", "color" ],
\& value => [ "product", "price" ]);
.Ve
.PP
with test.csv like
.PP
.Vb 4
\& code,product,price,color
\& 1,pc,850,gray
\& 2,keyboard,12,white
\& 3,mouse,5,black
.Ve
.PP
the first example will return
.PP
.Vb 4
\& { 1 => 850,
\& 2 => 12,
\& 3 => 5,
\& }
.Ve
.PP
the second example will return
.PP
.Vb 10
\& { 1 => {
\& price => 850,
\& product => \*(Aqpc\*(Aq
\& },
\& 2 => {
\& price => 12,
\& product => \*(Aqkeyboard\*(Aq
\& },
\& 3 => {
\& price => 5,
\& product => \*(Aqmouse\*(Aq
\& }
\& }
.Ve
.PP
the third example will return
.PP
.Vb 4
\& { "1:gray" => 850,
\& "2:white" => 12,
\& "3:black" => 5,
\& }
.Ve
.PP
the fourth example will return
.PP
.Vb 10
\& { "1:gray" => {
\& price => 850,
\& product => \*(Aqpc\*(Aq
\& },
\& "2:white" => {
\& price => 12,
\& product => \*(Aqkeyboard\*(Aq
\& },
\& "3:black" => {
\& price => 5,
\& product => \*(Aqmouse\*(Aq
\& }
\& }
.Ve
.PP
\fIkeep_headers\fR
.IX Xref "keep_headers keep_column_names kh"
.IX Subsection "keep_headers"
.PP
When using hashes, store the column names in the arrayref passed, so all
headers are available after the call in the original order.
.PP
.Vb 1
\& my $aoh = csv (in => "file.csv", keep_headers => \emy @hdr);
.Ve
.PP
This attribute can be abbreviated to \f(CW\*(C`kh\*(C'\fR or passed as \f(CW\*(C`keep_column_names\*(C'\fR.
.PP
This attribute implies a default of \f(CW\*(C`auto\*(C'\fR for the \f(CW\*(C`headers\*(C'\fR attribute.
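.PP
A short sketch of using the captured header order afterwards (continuing the
example above):
.PP
.Vb 1
\& print join (", ", @hdr), "\en";   # column names in file order
.Ve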
.PP
\fIfragment\fR
.IX Xref "fragment"
.IX Subsection "fragment"
.PP
Only output the fragment as defined in the \*(L"fragment\*(R" method. This option
is ignored when \fIgenerating\fR \f(CW\*(C`CSV\*(C'\fR. See \*(L"out\*(R".
.PP
Combining all of them could give something like
.PP
.Vb 9
\& use Text::CSV_XS qw( csv );
\& my $aoh = csv (
\& in => "test.txt",
\& encoding => "utf\-8",
\& headers => "auto",
\& sep_char => "|",
\& fragment => "row=3;6\-9;15\-*",
\& );
\& say $aoh\->[15]{Foo};
.Ve
.PP
\fIsep_set\fR
.IX Xref "sep_set seps"
.IX Subsection "sep_set"
.PP
If \f(CW\*(C`sep_set\*(C'\fR is set, the method \*(L"header\*(R" is invoked on the opened stream
to detect and set \f(CW\*(C`sep_char\*(C'\fR with the given set.
.PP
\&\f(CW\*(C`sep_set\*(C'\fR can be abbreviated to \f(CW\*(C`seps\*(C'\fR.
.PP
Note that as the \*(L"header\*(R" method is invoked, its default is to also set
the headers.
.PP
\fIset_column_names\fR
.IX Xref "set_column_names"
.IX Subsection "set_column_names"
.PP
If \f(CW\*(C`set_column_names\*(C'\fR is passed, the method \*(L"header\*(R" is invoked on the
opened stream with all arguments meant for \*(L"header\*(R".
.PP
If \f(CW\*(C`set_column_names\*(C'\fR is passed as a false value, the content of the first
row is only preserved if the output is AoA:
.PP
With an input-file like
.PP
.Vb 3
\& bAr,foo
\& 1,2
\& 3,4,5
.Ve
.PP
This call
.PP
.Vb 1
\& my $aoa = csv (in => $file, set_column_names => 0);
.Ve
.PP
will result in
.PP
.Vb 3
\& [[ "bar", "foo" ],
\& [ "1", "2" ],
\& [ "3", "4", "5" ]]
.Ve
.PP
and
.PP
.Vb 1
\& my $aoa = csv (in => $file, set_column_names => 0, munge => "none");
.Ve
.PP
will result in
.PP
.Vb 3
\& [[ "bAr", "foo" ],
\& [ "1", "2" ],
\& [ "3", "4", "5" ]]
.Ve
.SS "Callbacks"
.IX Xref "Callbacks"
.IX Subsection "Callbacks"
Callbacks enable actions triggered from the \fIinside\fR of Text::CSV_XS.
.PP
While most of what this enables can easily be done in an unrolled loop as
described in the \*(L"\s-1SYNOPSIS\*(R"\s0, callbacks can be used to meet special demands
or enhance the \*(L"csv\*(R" function.
.IP "error" 2
.IX Xref "error"
.IX Item "error"
.Vb 1
\& $csv\->callbacks (error => sub { $csv\->SetDiag (0) });
.Ve
.Sp
The \f(CW\*(C`error\*(C'\fR callback is invoked when an error occurs, but \fIonly\fR when
\&\*(L"auto_diag\*(R" is set to a true value. A callback is invoked with the values
returned by \*(L"error_diag\*(R":
.Sp
.Vb 1
\& my ($c, $s);
\&
\& sub ignore3006 {
\& my ($err, $msg, $pos, $recno, $fldno) = @_;
\& if ($err == 3006) {
\& # ignore this error
\& ($c, $s) = (undef, undef);
\& Text::CSV_XS\->SetDiag (0);
\& }
\& # Any other error
\& return;
\& } # ignore3006
\&
\& $csv\->callbacks (error => \e&ignore3006);
\& $csv\->bind_columns (\e$c, \e$s);
\& while ($csv\->getline ($fh)) {
\& # Error 3006 will not stop the loop
\& }
.Ve
.IP "after_parse" 2
.IX Xref "after_parse"
.IX Item "after_parse"
.Vb 4
\& $csv\->callbacks (after_parse => sub { push @{$_[1]}, "NEW" });
\& while (my $row = $csv\->getline ($fh)) {
\& $row\->[\-1] eq "NEW";
\& }
.Ve
.Sp
This callback is invoked after parsing with \*(L"getline\*(R" only if no error
occurred. The callback is invoked with two arguments: the current \f(CW\*(C`CSV\*(C'\fR
parser object and an array reference to the fields parsed.
.Sp
The return code of the callback is ignored unless it is a reference to the
string \*(L"skip\*(R", in which case the record will be skipped in \*(L"getline_all\*(R".
.Sp
.Vb 5
\& sub add_from_db {
\& my ($csv, $row) = @_;
\& $sth\->execute ($row\->[4]);
\& push @$row, $sth\->fetchrow_array;
\& } # add_from_db
\&
\& my $aoa = csv (in => "file.csv", callbacks => {
\& after_parse => \e&add_from_db });
.Ve
.Sp
This hook can be used for validation:
.IX Xref "data_validation"
.RS 2
.IP "\s-1FAIL\s0" 2
.IX Item "FAIL"
Die if any of the records does not satisfy a validation rule:
.Sp
.Vb 4
\& after_parse => sub {
\& $_[1][4] =~ m/^[0\-9]{4}\es?[A\-Z]{2}$/ or
\& die "5th field does not have a valid Dutch zipcode";
\& }
.Ve
.IP "\s-1DEFAULT\s0" 2
.IX Item "DEFAULT"
Replace invalid fields with a default value:
.Sp
.Vb 1
\& after_parse => sub { $_[1][2] =~ m/^\ed+$/ or $_[1][2] = 0 }
.Ve
.IP "\s-1SKIP\s0" 2
.IX Item "SKIP"
Skip records that have invalid fields (only applies to \*(L"getline_all\*(R"):
.Sp
.Vb 1
\& after_parse => sub { $_[1][0] =~ m/^\ed+$/ or return \e"skip"; }
.Ve
.RE
.RS 2
.RE
.IP "before_print" 2
.IX Xref "before_print"
.IX Item "before_print"
.Vb 3
\& my $idx = 1;
\& $csv\->callbacks (before_print => sub { $_[1][0] = $idx++ });
\& $csv\->print (*STDOUT, [ 0, $_ ]) for @members;
.Ve
.Sp
This callback is invoked before printing with \*(L"print\*(R" only if no error
occurred. The callback is invoked with two arguments: the current \f(CW\*(C`CSV\*(C'\fR
parser object and an array reference to the fields passed.
.Sp
The return code of the callback is ignored.
.Sp
.Vb 4
\& sub max_4_fields {
\& my ($csv, $row) = @_;
\& @$row > 4 and splice @$row, 4;
\& } # max_4_fields
\&
\& csv (in => csv (in => "file.csv"), out => *STDOUT,
\& callbacks => { before_print => \e&max_4_fields });
.Ve
.Sp
This callback is not active for \*(L"combine\*(R".
.PP
\fICallbacks for csv ()\fR
.IX Subsection "Callbacks for csv ()"
.PP
The \*(L"csv\*(R" allows for some callbacks that do not integrate in \s-1XS\s0 internals
but only feature the \*(L"csv\*(R" function.
.PP
.Vb 8
\& csv (in => "file.csv",
\& callbacks => {
\& filter => { 6 => sub { $_ > 15 } }, # first
\& after_parse => sub { say "AFTER PARSE"; }, # first
\& after_in => sub { say "AFTER IN"; }, # second
\& on_in => sub { say "ON IN"; }, # third
\& },
\& );
\&
\& csv (in => $aoh,
\& out => "file.csv",
\& callbacks => {
\& on_in => sub { say "ON IN"; }, # first
\& before_out => sub { say "BEFORE OUT"; }, # second
\& before_print => sub { say "BEFORE PRINT"; }, # third
\& },
\& );
.Ve
.IP "filter" 2
.IX Xref "filter"
.IX Item "filter"
This callback can be used to filter records. It is called just after a new
record has been scanned. The callback accepts a:
.RS 2
.IP "hashref" 2
.IX Item "hashref"
The keys are the index to the row (the field name or field number, 1\-based)
and the values are subs to return a true or false value.
.Sp
.Vb 4
\& csv (in => "file.csv", filter => {
\& 3 => sub { m/a/ }, # third field should contain an "a"
\& 5 => sub { length > 4 }, # length of the 5th field minimal 5
\& });
\&
\& csv (in => "file.csv", filter => { foo => sub { $_ > 4 }});
.Ve
.Sp
If the keys to the filter hash contain any character that is not a digit it
will also implicitly set \*(L"headers\*(R" to \f(CW"auto"\fR unless \*(L"headers\*(R" was
already passed as argument. When headers are active, returning an array of
hashes, the filter is not applicable to the header itself.
.Sp
All sub results should match, as in \s-1AND.\s0
.Sp
The context of the callback sets \f(CW$_\fR localized to the field indicated by
the filter. The two arguments are as with all other callbacks, so the other
fields in the current row can be seen:
.Sp
.Vb 1
\& filter => { 3 => sub { $_ > 100 ? $_[1][1] =~ m/A/ : $_[1][6] =~ m/B/ }}
.Ve
.Sp
If the context is set to return a list of hashes (\*(L"headers\*(R" is defined),
the current record will also be available in the localized \f(CW%_\fR:
.Sp
.Vb 1
\& filter => { 3 => sub { $_ > 100 && $_{foo} =~ m/A/ && $_{bar} < 1000 }}
.Ve
.Sp
If the filter is used to \fIalter\fR the content by changing \f(CW$_\fR, make sure
that the sub returns true in order not to have that record skipped:
.Sp
.Vb 1
\& filter => { 2 => sub { $_ = uc }}
.Ve
.Sp
will upper-case the second field, and then skip the record if the resulting
content evaluates to false. To always accept, end with truth:
.Sp
.Vb 1
\& filter => { 2 => sub { $_ = uc; 1 }}
.Ve
.IP "coderef" 2
.IX Item "coderef"
.Vb 1
\& csv (in => "file.csv", filter => sub { $n++; 0; });
.Ve
.Sp
If the argument to \f(CW\*(C`filter\*(C'\fR is a coderef, it is an alias or shortcut to a
filter on column 0:
.Sp
.Vb 1
\& csv (filter => sub { $n++; 0 });
.Ve
.Sp
is equal to
.Sp
.Vb 1
\& csv (filter => { 0 => sub { $n++; 0 }});
.Ve
.IP "filter-name" 2
.IX Item "filter-name"
.Vb 3
\& csv (in => "file.csv", filter => "not_blank");
\& csv (in => "file.csv", filter => "not_empty");
\& csv (in => "file.csv", filter => "filled");
.Ve
.Sp
These are predefined filters.
.Sp
Given a file like (line numbers prefixed for doc purpose only):
.Sp
.Vb 9
\& 1:1,2,3
\& 2:
\& 3:,
\& 4:""
\& 5:,,
\& 6:, ,
\& 7:"",
\& 8:" "
\& 9:4,5,6
.Ve
.RS 2
.IP "not_blank" 2
.IX Item "not_blank"
Filter out the blank lines
.Sp
This filter is a shortcut for
.Sp
.Vb 2
\& filter => { 0 => sub { @{$_[1]} > 1 or
\& defined $_[1][0] && $_[1][0] ne "" } }
.Ve
.Sp
Due to the implementation, it is currently impossible to distinguish lines
that consist only of a single quoted empty field from blank lines: such
lines are also considered blank lines and will be filtered out as well.
.Sp
With the given example, lines 2 and 4 will be skipped.
.IP "not_empty" 2
.IX Item "not_empty"
Filter out lines where all the fields are empty.
.Sp
This filter is a shortcut for
.Sp
.Vb 1
\& filter => { 0 => sub { grep { defined && $_ ne "" } @{$_[1]} } }
.Ve
.Sp
A space is not regarded as being empty, so given the example data, lines 2,
3, 4, 5, and 7 are skipped.
.IP "filled" 2
.IX Item "filled"
Filter out lines that have no visible data
.Sp
This filter is a shortcut for
.Sp
.Vb 1
\& filter => { 0 => sub { grep { defined && m/\eS/ } @{$_[1]} } }
.Ve
.Sp
This filter rejects all lines that do \fInot\fR have at least one field that
does not evaluate to the empty string.
.Sp
With the given example data, this filter would skip lines 2 through 8.
.RE
.RS 2
.RE
.RE
.RS 2
.Sp
One could also use modules like Types::Standard:
.Sp
.Vb 1
\& use Types::Standard \-types;
\&
\& my $type = Tuple[Str, Str, Int, Bool, Optional[Num]];
\& my $check = $type\->compiled_check;
\&
\& # filter with compiled check and warnings
\& my $aoa = csv (
\& in => \e$data,
\& filter => {
\& 0 => sub {
\& my $ok = $check\->($_[1]) or
\& warn $type\->get_message ($_[1]), "\en";
\& return $ok;
\& },
\& },
\& );
.Ve
.RE
.IP "after_in" 2
.IX Xref "after_in"
.IX Item "after_in"
This callback is invoked for each record after all records have been parsed
but before returning the reference to the caller. The hook is invoked with
two arguments: the current \f(CW\*(C`CSV\*(C'\fR parser object and a reference to the
record. The reference can be a reference to a \s-1HASH\s0 or a reference to an
\&\s-1ARRAY\s0 as determined by the arguments.
.Sp
This callback can also be passed as an attribute without the \f(CW\*(C`callbacks\*(C'\fR
wrapper.
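.Sp
A short sketch, passing the callback directly as an attribute:
.Sp
.Vb 2
\& my $aoa = csv (in => "file.csv",
\&     after_in => sub { push @{$_[1]}, "NEW" });
.Ve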
.IP "before_out" 2
.IX Xref "before_out"
.IX Item "before_out"
This callback is invoked for each record before the record is printed. The
hook is invoked with two arguments: the current \f(CW\*(C`CSV\*(C'\fR parser object and a
reference to the record. The reference can be a reference to a \s-1HASH\s0 or a
reference to an \s-1ARRAY\s0 as determined by the arguments.
.Sp
This callback can also be passed as an attribute without the \f(CW\*(C`callbacks\*(C'\fR
wrapper.
.Sp
This callback makes the row available in \f(CW%_\fR if the row is a hashref. In
this case \f(CW%_\fR is writable and will change the original row.
.IP "on_in" 2
.IX Xref "on_in"
.IX Item "on_in"
This callback acts exactly as the \*(L"after_in\*(R" or the \*(L"before_out\*(R" hooks.
.Sp
This callback can also be passed as an attribute without the \f(CW\*(C`callbacks\*(C'\fR
wrapper.
.Sp
This callback makes the row available in \f(CW%_\fR if the row is a hashref. In
this case \f(CW%_\fR is writable and will change the original row. So e.g. with
.Sp
.Vb 5
\& my $aoh = csv (
\& in => \e"foo\en1\en2\en",
\& headers => "auto",
\& on_in => sub { $_{bar} = 2; },
\& );
.Ve
.Sp
\&\f(CW$aoh\fR will be:
.Sp
.Vb 7
\& [ { foo => 1,
\& bar => 2,
\& }
\& { foo => 2,
\& bar => 2,
\& }
\& ]
.Ve
.IP "csv" 2
.IX Item "csv"
The \fIfunction\fR \*(L"csv\*(R" can also be called as a method or with an existing
Text::CSV_XS object. This could help if the function is to be invoked many
times: passing an existing instance prevents the overhead of creating the
object internally over and over again.
.Sp
.Vb 1
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\&
\& my $aoa = $csv\->csv (in => $fh);
\& my $aoa = csv (in => $fh, csv => $csv);
.Ve
.Sp
both act the same. Running this 20000 times on a 20\-line \s-1CSV\s0 file showed a
53% speedup.
.SH "INTERNALS"
.IX Header "INTERNALS"
.IP "Combine (...)" 4
.IX Item "Combine (...)"
.PD 0
.IP "Parse (...)" 4
.IX Item "Parse (...)"
.PD
.PP
The arguments to these internal functions are deliberately not described or
documented, in order to enable the module authors to change them when they
feel the need to do so. Using them is highly discouraged as the \s-1API\s0 may
change in future releases.
.SH "EXAMPLES"
.IX Header "EXAMPLES"
.SS "Reading a \s-1CSV\s0 file line by line:"
.IX Subsection "Reading a CSV file line by line:"
.Vb 6
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\& open my $fh, "<", "file.csv" or die "file.csv: $!";
\& while (my $row = $csv\->getline ($fh)) {
\& # do something with @$row
\& }
\& close $fh or die "file.csv: $!";
.Ve
.PP
or
.PP
.Vb 3
\& my $aoa = csv (in => "file.csv", on_in => sub {
\& # do something with %_
\& });
.Ve
.PP
\fIReading only a single column\fR
.IX Subsection "Reading only a single column"
.PP
.Vb 5
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\& open my $fh, "<", "file.csv" or die "file.csv: $!";
\& # get only the 4th column
\& my @column = map { $_\->[3] } @{$csv\->getline_all ($fh)};
\& close $fh or die "file.csv: $!";
.Ve
.PP
with \*(L"csv\*(R", you could do
.PP
.Vb 2
\& my @column = map { $_\->[0] }
\& @{csv (in => "file.csv", fragment => "col=4")};
.Ve
.SS "Parsing \s-1CSV\s0 strings:"
.IX Subsection "Parsing CSV strings:"
.Vb 1
\& my $csv = Text::CSV_XS\->new ({ keep_meta_info => 1, binary => 1 });
\&
\& my $sample_input_string =
\& qq{"I said, ""Hi!""",Yes,"",2.34,,"1.09","\ex{20ac}",};
\& if ($csv\->parse ($sample_input_string)) {
\& my @field = $csv\->fields;
\& foreach my $col (0 .. $#field) {
\& my $quo = $csv\->is_quoted ($col) ? $csv\->{quote_char} : "";
\& printf "%2d: %s%s%s\en", $col, $quo, $field[$col], $quo;
\& }
\& }
\& else {
\& print STDERR "parse () failed on argument: ",
\& $csv\->error_input, "\en";
\& $csv\->error_diag ();
\& }
.Ve
.PP
\fIParsing \s-1CSV\s0 from memory\fR
.IX Subsection "Parsing CSV from memory"
.PP
Given a complete \s-1CSV\s0 data-set in scalar \f(CW$data\fR, generate a list of lists
to represent the rows and fields
.PP
.Vb 2
\& # The data
\& my $data = join "\er\en" => map { join "," => 0 .. 5 } 0 .. 5;
\&
\& # in a loop
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\& open my $fh, "<", \e$data;
\& my @foo;
\& while (my $row = $csv\->getline ($fh)) {
\& push @foo, $row;
\& }
\& close $fh;
\&
\& # a single call
\& my $foo = csv (in => \e$data);
.Ve
.SS "Printing \s-1CSV\s0 data"
.IX Subsection "Printing CSV data"
\fIThe fast way: using \*(L"print\*(R"\fR
.IX Subsection "The fast way: using print"
.PP
An example for creating \f(CW\*(C`CSV\*(C'\fR files using the \*(L"print\*(R" method:
.PP
.Vb 6
\& my $csv = Text::CSV_XS\->new ({ binary => 1, eol => $/ });
\& open my $fh, ">", "foo.csv" or die "foo.csv: $!";
\& for (1 .. 10) {
\& $csv\->print ($fh, [ $_, "$_" ]) or $csv\->error_diag;
\& }
\& close $fh or die "$tbl.csv: $!";
.Ve
.PP
\fIThe slow way: using \*(L"combine\*(R" and \*(L"string\*(R"\fR
.IX Subsection "The slow way: using combine and string"
.PP
or using the slower \*(L"combine\*(R" and \*(L"string\*(R" methods:
.PP
.Vb 1
\& my $csv = Text::CSV_XS\->new;
\&
\& open my $csv_fh, ">", "hello.csv" or die "hello.csv: $!";
\&
\& my @sample_input_fields = (
\& \*(AqYou said, "Hello!"\*(Aq, 5.67,
\& \*(Aq"Surely"\*(Aq, \*(Aq\*(Aq, \*(Aq3.14159\*(Aq);
\& if ($csv\->combine (@sample_input_fields)) {
\& print $csv_fh $csv\->string, "\en";
\& }
\& else {
\& print "combine () failed on argument: ",
\& $csv\->error_input, "\en";
\& }
\& close $csv_fh or die "hello.csv: $!";
.Ve
.PP
\fIGenerating \s-1CSV\s0 into memory\fR
.IX Subsection "Generating CSV into memory"
.PP
Format a data-set (\f(CW@foo\fR) into a scalar value in memory (\f(CW$data\fR):
.PP
.Vb 2
\& # The data
\& my @foo = map { [ 0 .. 5 ] } 0 .. 3;
\&
\& # in a loop
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1, eol => "\er\en" });
\& open my $fh, ">", \emy $data;
\& $csv\->print ($fh, $_) for @foo;
\& close $fh;
\&
\& # a single call
\& csv (in => \e@foo, out => \emy $data);
.Ve
.SS "Rewriting \s-1CSV\s0"
.IX Subsection "Rewriting CSV"
Rewrite \f(CW\*(C`CSV\*(C'\fR files with \f(CW\*(C`;\*(C'\fR as separator character to well-formed \f(CW\*(C`CSV\*(C'\fR:
.PP
.Vb 2
\& use Text::CSV_XS qw( csv );
\& csv (in => csv (in => "bad.csv", sep_char => ";"), out => *STDOUT);
.Ve
.PP
As \f(CW\*(C`STDOUT\*(C'\fR is now the default in \*(L"csv\*(R", a one-liner converting a \s-1UTF\-16 CSV\s0
file with \s-1BOM\s0 and TAB-separation to valid \s-1UTF\-8 CSV\s0 could be:
.PP
.Vb 2
\& $ perl \-C3 \-MText::CSV_XS=csv \-we\e
\& \*(Aqcsv(in=>"utf16tab.csv",encoding=>"utf16",sep=>"\et")\*(Aq >utf8.csv
.Ve
.SS "Dumping database tables to \s-1CSV\s0"
.IX Subsection "Dumping database tables to CSV"
Dumping a database table can be as simple as this (\s-1TIMTOWTDI\s0):
.PP
.Vb 2
\& my $dbh = DBI\->connect (...);
\& my $sql = "select * from foo";
\&
\& # using your own loop
\& open my $fh, ">", "foo.csv" or die "foo.csv: $!\en";
\& my $csv = Text::CSV_XS\->new ({ binary => 1, eol => "\er\en" });
\& my $sth = $dbh\->prepare ($sql); $sth\->execute;
\& $csv\->print ($fh, $sth\->{NAME_lc});
\& while (my $row = $sth\->fetch) {
\& $csv\->print ($fh, $row);
\& }
\&
\& # using the csv function, all in memory
\& csv (out => "foo.csv", in => $dbh\->selectall_arrayref ($sql));
\&
\& # using the csv function, streaming with callbacks
\& my $sth = $dbh\->prepare ($sql); $sth\->execute;
\& csv (out => "foo.csv", in => sub { $sth\->fetch });
\& csv (out => "foo.csv", in => sub { $sth\->fetchrow_hashref });
.Ve
.PP
Note that this does not discriminate between \*(L"empty\*(R" values and NULL-values
from the database, as both will be the same empty field in \s-1CSV.\s0 To enable
distinction between the two, use \f(CW\*(C`quote_empty\*(C'\fR.
.PP
.Vb 1
\& csv (out => "foo.csv", in => sub { $sth\->fetch }, quote_empty => 1);
.Ve
.PP
If the database import utility supports special sequences to insert \f(CW\*(C`NULL\*(C'\fR
values into the database, like MySQL/MariaDB supports \f(CW\*(C`\eN\*(C'\fR, use a filter
or a map
.PP
.Vb 2
\& csv (out => "foo.csv", in => sub { $sth\->fetch },
\& on_in => sub { $_ //= "\e\eN" for @{$_[1]} });
\&
\& while (my $row = $sth\->fetch) {
\& $csv\->print ($fh, [ map { $_ // "\e\eN" } @$row ]);
\& }
.Ve
.PP
Note that this will not work as expected when choosing the backslash (\f(CW\*(C`\e\*(C'\fR)
as \f(CW\*(C`escape_char\*(C'\fR, as that will cause the \f(CW\*(C`\e\*(C'\fR to need to be escaped by yet
another \f(CW\*(C`\e\*(C'\fR, which will cause the field to need quotation and thus end up
as \f(CW"\e\eN"\fR instead of \f(CW\*(C`\eN\*(C'\fR. See also \f(CW\*(C`undef_str\*(C'\fR.
.PP
.Vb 1
\& csv (out => "foo.csv", in => sub { $sth\->fetch }, undef_str => "\e\eN");
.Ve
.PP
These special sequences are not recognized by Text::CSV_XS on parsing the
\&\s-1CSV\s0 generated like this, but map and filter are your friends again
.PP
.Vb 3
\& while (my $row = $csv\->getline ($fh)) {
\& $sth\->execute (map { $_ eq "\e\eN" ? undef : $_ } @$row);
\& }
\&
\& csv (in => "foo.csv", filter => { 1 => sub {
\& $sth\->execute (map { $_ eq "\e\eN" ? undef : $_ } @{$_[1]}); 0; }});
.Ve
.SS "Converting \s-1CSV\s0 to \s-1JSON\s0"
.IX Subsection "Converting CSV to JSON"
.Vb 2
\& use Text::CSV_XS qw( csv );
\& use JSON; # or Cpanel::JSON::XS for better performance
\&
\& # AoA (no header interpretation)
\& say encode_json (csv (in => "file.csv"));
\&
\& # AoH (convert to structures)
\& say encode_json (csv (in => "file.csv", bom => 1));
.Ve
.PP
Yes, it is that simple.
.SS "The examples folder"
.IX Subsection "The examples folder"
For more extended examples, see the \fIexamples/\fR sub-directory \f(CW1\fR. in the
original distribution or the git repository \f(CW2\fR.
.PP
.Vb 2
\& 1. https://github.com/Tux/Text\-CSV_XS/tree/master/examples
\& 2. https://github.com/Tux/Text\-CSV_XS
.Ve
.PP
The following files can be found there:
.IP "parser\-xs.pl" 2
.IX Xref "parser-xs.pl"
.IX Item "parser-xs.pl"
This can be used as a boilerplate to parse invalid \f(CW\*(C`CSV\*(C'\fR and parse beyond
(expected) errors, as an alternative to using the \*(L"error\*(R" callback.
.Sp
.Vb 1
\& $ perl examples/parser\-xs.pl bad.csv >good.csv
.Ve
.IP "csv-check" 2
.IX Xref "csv-check"
.IX Item "csv-check"
This is a command-line tool that uses parser\-xs.pl techniques to check the
\&\f(CW\*(C`CSV\*(C'\fR file and report on its content.
.Sp
.Vb 5
\& $ csv\-check files/utf8.csv
\& Checked files/utf8.csv with csv\-check 1.9
\& using Text::CSV_XS 1.32 with perl 5.26.0 and Unicode 9.0.0
\& OK: rows: 1, columns: 2
\& sep = <,>, quo = <">, bin = <1>, eol = <"\en">
.Ve
.IP "csv-split" 2
.IX Xref "csv-split"
.IX Item "csv-split"
This command splits \f(CW\*(C`CSV\*(C'\fR files into smaller files, keeping (part of) the
header. Options include maximum number of (data) rows per file and maximum
number of columns per file or a combination of the two.
.IP "csv2xls" 2
.IX Xref "csv2xls"
.IX Item "csv2xls"
A script to convert \f(CW\*(C`CSV\*(C'\fR to Microsoft Excel (\f(CW\*(C`XLS\*(C'\fR). This requires extra
modules Date::Calc and Spreadsheet::WriteExcel. The converter accepts
various options and can produce \s-1UTF\-8\s0 compliant Excel files.
.IP "csv2xlsx" 2
.IX Xref "csv2xlsx"
.IX Item "csv2xlsx"
A script to convert \f(CW\*(C`CSV\*(C'\fR to Microsoft Excel (\f(CW\*(C`XLSX\*(C'\fR). This requires the
modules Date::Calc and Excel::Writer::XLSX. The converter does
accept various options including merging several \f(CW\*(C`CSV\*(C'\fR files into a single
Excel file.
.IP "csvdiff" 2
.IX Xref "csvdiff"
.IX Item "csvdiff"
A script that provides colorized diff on sorted \s-1CSV\s0 files, assuming first
line is header and first field is the key. Output options include colorized
\&\s-1ANSI\s0 escape codes or \s-1HTML.\s0
.Sp
.Vb 1
\& $ csvdiff \-\-html \-\-output=diff.html file1.csv file2.csv
.Ve
.IP "rewrite.pl" 2
.IX Xref "rewrite.pl"
.IX Item "rewrite.pl"
A script to rewrite (in)valid \s-1CSV\s0 into valid \s-1CSV\s0 files. The script has
options to generate confusing \s-1CSV\s0 files or \s-1CSV\s0 files that conform to Dutch
MS-Excel exports (using \f(CW\*(C`;\*(C'\fR as separator).
.Sp
By default, the script honors a \s-1BOM\s0 and auto-detects the separator,
converting the input to standard \s-1CSV\s0 with \f(CW\*(C`,\*(C'\fR as separator.
.SH "CAVEATS"
.IX Header "CAVEATS"
Text::CSV_XS is \fInot\fR designed to detect the characters used to quote and
separate fields. The parsing is done using predefined (default) settings.
In the examples sub-directory, you can find scripts that demonstrate how
you could try to detect these characters yourself.
.SS "Microsoft Excel"
.IX Subsection "Microsoft Excel"
The import/export from Microsoft Excel is a \fIrisky task\fR, according to the
documentation in \f(CW\*(C`Text::CSV::Separator\*(C'\fR. Microsoft uses the system's list
separator defined in the regional settings, which happens to be a semicolon
for Dutch, German and Spanish (and probably some others as well). For the
English locale, the default is a comma. In Windows however, the user is
free to choose a predefined locale, and then change \fIevery\fR individual
setting in it, so checking the locale is no solution.
.PP
As of version 1.17, a lone first line with just
.PP
.Vb 1
\& sep=;
.Ve
.PP
will be recognized and honored when parsing with \*(L"getline\*(R".
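.PP
A file that starts like this (an illustrative sketch)
.PP
.Vb 3
\& sep=;
\& code;name;price
\& 1;pc;850
.Ve
.PP
will then be parsed with \f(CW\*(C`;\*(C'\fR as separator.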
.SH "TODO"
.IX Header "TODO"
.IP "More Errors & Warnings" 2
.IX Item "More Errors & Warnings"
New extensions ought to be clear and concise in reporting what error has
occurred where and why, and maybe also offer a remedy to the problem.
.Sp
\&\*(L"error_diag\*(R" is a (very) good start, but there is more work to be done in
this area.
.Sp
Basic calls should croak or warn on illegal parameters. Errors should be
documented.
.IP "setting meta info" 2
.IX Item "setting meta info"
Future extensions might include extending the \*(L"meta_info\*(R", \*(L"is_quoted\*(R",
and \*(L"is_binary\*(R" to accept setting these flags for fields, so you can
specify which fields are quoted in the \*(L"combine\*(R"/\*(L"string\*(R" combination.
.Sp
.Vb 2
\& $csv\->meta_info (0, 1, 1, 3, 0, 0);
\& $csv\->is_quoted (3, 1);
.Ve
.Sp
Metadata Vocabulary for Tabular Data <http://w3c.github.io/csvw/metadata/>
(a W3C editor's draft) could be an example for supporting more metadata.
.IP "Parse the whole file at once" 2
.IX Item "Parse the whole file at once"
Implement new methods or functions that enable parsing of a complete file
at once, returning a list of hashes. Possible extension to this could be to
enable a column selection on the call:
.Sp
.Vb 1
\& my @AoH = $csv\->parse_file ($filename, { cols => [ 1, 4..8, 12 ]});
.Ve
.Sp
returning something like
.Sp
.Vb 7
\& [ { fields => [ 1, 2, "foo", 4.5, undef, "", 8 ],
\& flags => [ ... ],
\& },
\& { fields => [ ... ],
\& .
\& },
\& ]
.Ve
.Sp
Note that the \*(L"csv\*(R" function already supports most of this, but does not
return flags. \*(L"getline_all\*(R" returns all rows for an open stream, but this
will not return flags either. \*(L"fragment\*(R" can reduce the required rows
\&\fIor\fR columns, but cannot combine them.
.IP "Cookbook" 2
.IX Item "Cookbook"
Write a document that has recipes for most known non-standard (and maybe
some standard) \f(CW\*(C`CSV\*(C'\fR formats, including formats that use \f(CW\*(C`TAB\*(C'\fR, \f(CW\*(C`;\*(C'\fR,
\&\f(CW\*(C`|\*(C'\fR, or other non-comma separators.
.Sp
Examples could be taken from W3C's \s-1CSV\s0 on the Web: Use Cases and
Requirements <http://w3c.github.io/csvw/use-cases-and-requirements/index.html>
.IP "Steal" 2
.IX Item "Steal"
Steal good new ideas and features from PapaParse <http://papaparse.com> or
csvkit <http://csvkit.readthedocs.org>.
.IP "Raku support" 2
.IX Item "Raku support"
Raku support can be found here <https://github.com/Tux/CSV>. The interface
is richer in support than the Perl5 \s-1API,\s0 as Raku supports more types.
.Sp
The Raku version does not (yet) support pure binary \s-1CSV\s0 datasets.
.SS "\s-1NOT TODO\s0"
.IX Subsection "NOT TODO"
.IP "combined methods" 2
.IX Item "combined methods"
Requests for adding means (methods) that combine \*(L"combine\*(R" and \*(L"string\*(R"
in a single call will \fBnot\fR be honored (use \*(L"print\*(R" instead). Likewise
for \*(L"parse\*(R" and \*(L"fields\*(R" (use \*(L"getline\*(R" instead), given the problems
with embedded newlines.
.SS "Release plan"
.IX Subsection "Release plan"
No guarantees, but this is what I had in mind some time ago:
.IP "\(bu" 2
\&\s-1DIAGNOSTICS\s0 section in pod to *describe* the errors (see below)
.SH "EBCDIC"
.IX Header "EBCDIC"
Everything should now work on native \s-1EBCDIC\s0 systems. As the test does not
cover all possible codepoints and Encode does not support \f(CW\*(C`utf\-ebcdic\*(C'\fR,
there is no guarantee that all handling of Unicode is done correctly.
.PP
Opening \f(CW\*(C`EBCDIC\*(C'\fR encoded files on \f(CW\*(C`ASCII\*(C'\fR+ systems is likely to succeed
using Encode's \f(CW\*(C`cp37\*(C'\fR, \f(CW\*(C`cp1047\*(C'\fR, or \f(CW\*(C`posix\-bc\*(C'\fR:
.PP
.Vb 1
\& open my $fh, "<:encoding(cp1047)", "ebcdic_file.csv" or die "...";
.Ve
.SH "DIAGNOSTICS"
.IX Header "DIAGNOSTICS"
Still under construction ...
.PP
If an error occurs, \f(CW\*(C`$csv\->error_diag\*(C'\fR can be used to get information
on the cause of the failure. Note that for speed reasons the internal value
is never cleared on success, so using the value returned by \*(L"error_diag\*(R"
in normal cases \- when no error occurred \- may cause unexpected results.
.PP
If the constructor failed, the cause can be found using \*(L"error_diag\*(R" as a
class method, like \f(CW\*(C`Text::CSV_XS\->error_diag\*(C'\fR.
.PP
The \f(CW\*(C`$csv\->error_diag\*(C'\fR method is automatically invoked upon error when
the constructor was called with \f(CW\*(C`auto_diag\*(C'\fR set to \f(CW1\fR or
\&\f(CW2\fR, or when autodie is in effect. When set to \f(CW1\fR, this will cause a
\&\f(CW\*(C`warn\*(C'\fR with the error message, when set to \f(CW2\fR, it will \f(CW\*(C`die\*(C'\fR. \f(CW\*(C`2012 \-
EOF\*(C'\fR is excluded from \f(CW\*(C`auto_diag\*(C'\fR reports.
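.PP
For example (sketch):
.PP
.Vb 2
\& my $warning_csv = Text::CSV_XS\->new ({ auto_diag => 1 }); # warn on error
\& my $fatal_csv   = Text::CSV_XS\->new ({ auto_diag => 2 }); # die  on error
.Ve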
.PP
Errors can be (individually) caught using the \*(L"error\*(R" callback.
.PP
The errors as described below are available. I have tried to make the error
itself explanatory enough, but more descriptions will be added. For most of
these errors, the first three capitals describe the error category:
.IP "\(bu" 2
\&\s-1INI\s0
.Sp
Initialization error or option conflict.
.IP "\(bu" 2
\&\s-1ECR\s0
.Sp
Carriage-Return related parse error.
.IP "\(bu" 2
\&\s-1EOF\s0
.Sp
End-Of-File related parse error.
.IP "\(bu" 2
\&\s-1EIQ\s0
.Sp
Parse error inside quotation.
.IP "\(bu" 2
\&\s-1EIF\s0
.Sp
Parse error inside field.
.IP "\(bu" 2
\&\s-1ECB\s0
.Sp
Combine error.
.IP "\(bu" 2
\&\s-1EHR\s0
.Sp
HashRef parse related error.
.PP
And below should be the complete list of error codes that can be returned:
.IP "\(bu" 2
1001 \*(L"\s-1INI\s0 \- sep_char is equal to quote_char or escape_char\*(R"
.IX Xref "1001"
.Sp
The separation character cannot be equal to the quotation
character or to the escape character, as this
would invalidate all parsing rules.
.IP "\(bu" 2
1002 \*(L"\s-1INI\s0 \- allow_whitespace with escape_char or quote_char \s-1SP\s0 or \s-1TAB\*(R"\s0
.IX Xref "1002"
.Sp
Using the \f(CW\*(C`allow_whitespace\*(C'\fR attribute when either
\&\f(CW\*(C`quote_char\*(C'\fR or \f(CW\*(C`escape_char\*(C'\fR is equal to
\&\f(CW\*(C`SPACE\*(C'\fR or \f(CW\*(C`TAB\*(C'\fR is too ambiguous to allow.
.IP "\(bu" 2
1003 \*(L"\s-1INI\s0 \- \er or \en in main attr not allowed\*(R"
.IX Xref "1003"
.Sp
Using default \f(CW\*(C`eol\*(C'\fR characters in either \f(CW\*(C`sep_char\*(C'\fR,
\&\f(CW\*(C`quote_char\*(C'\fR, or \f(CW\*(C`escape_char\*(C'\fR is not
allowed.
.IP "\(bu" 2
1004 \*(L"\s-1INI\s0 \- callbacks should be undef or a hashref\*(R"
.IX Xref "1004"
.Sp
The \f(CW\*(C`callbacks\*(C'\fR attribute only allows it to be \f(CW\*(C`undef\*(C'\fR or
a hash reference.
.IP "\(bu" 2
1005 \*(L"\s-1INI\s0 \- \s-1EOL\s0 too long\*(R"
.IX Xref "1005"
.Sp
The value passed for \s-1EOL\s0 is exceeding its maximum length (16).
.IP "\(bu" 2
1006 \*(L"\s-1INI\s0 \- \s-1SEP\s0 too long\*(R"
.IX Xref "1006"
.Sp
The value passed for \s-1SEP\s0 is exceeding its maximum length (16).
.IP "\(bu" 2
1007 \*(L"\s-1INI\s0 \- \s-1QUOTE\s0 too long\*(R"
.IX Xref "1007"
.Sp
The value passed for \s-1QUOTE\s0 is exceeding its maximum length (16).
.IP "\(bu" 2
1008 \*(L"\s-1INI\s0 \- \s-1SEP\s0 undefined\*(R"
.IX Xref "1008"
.Sp
The value passed for \s-1SEP\s0 should be defined and not empty.
.IP "\(bu" 2
1010 \*(L"\s-1INI\s0 \- the header is empty\*(R"
.IX Xref "1010"
.Sp
The header line parsed in the \*(L"header\*(R" is empty.
.IP "\(bu" 2
1011 \*(L"\s-1INI\s0 \- the header contains more than one valid separator\*(R"
.IX Xref "1011"
.Sp
The header line parsed by the \*(L"header\*(R" method contains more than one
(unique) separator character out of the allowed set of separators.
.IP "\(bu" 2
1012 \*(L"\s-1INI\s0 \- the header contains an empty field\*(R"
.IX Xref "1012"
.Sp
The header line parsed by the \*(L"header\*(R" method contains an empty field.
.IP "\(bu" 2
1013 \*(L"\s-1INI\s0 \- the header contains nun-unique fields\*(R"
.IX Xref "1013"
.Sp
The header line parsed by the \*(L"header\*(R" method contains at least two
identical fields.
.IP "\(bu" 2
1014 \*(L"\s-1INI\s0 \- header called on undefined stream\*(R"
.IX Xref "1014"
.Sp
The header line cannot be parsed from an undefined source.
.IP "\(bu" 2
1500 \*(L"\s-1PRM\s0 \- Invalid/unsupported argument(s)\*(R"
.IX Xref "1500"
.Sp
Function or method called with invalid argument(s) or parameter(s).
.IP "\(bu" 2
1501 \*(L"\s-1PRM\s0 \- The key attribute is passed as an unsupported type\*(R"
.IX Xref "1501"
.Sp
The \f(CW\*(C`key\*(C'\fR attribute is of an unsupported type.
.IP "\(bu" 2
1502 \*(L"\s-1PRM\s0 \- The value attribute is passed without the key attribute\*(R"
.IX Xref "1502"
.Sp
The \f(CW\*(C`value\*(C'\fR attribute is only allowed when a valid key is given.
.IP "\(bu" 2
1503 \*(L"\s-1PRM\s0 \- The value attribute is passed as an unsupported type\*(R"
.IX Xref "1503"
.Sp
The \f(CW\*(C`value\*(C'\fR attribute is of an unsupported type.
.IP "\(bu" 2
2010 \*(L"\s-1ECR\s0 \- \s-1QUO\s0 char inside quotes followed by \s-1CR\s0 not part of \s-1EOL\*(R"\s0
.IX Xref "2010"
.Sp
When \f(CW\*(C`eol\*(C'\fR has been set to anything but the default, for example
\&\f(CW"\er\et\en"\fR, and the \f(CW"\er"\fR follows the \fBsecond\fR (closing)
\&\f(CW\*(C`quote_char\*(C'\fR but the characters following the \f(CW"\er"\fR do
not make up the \f(CW\*(C`eol\*(C'\fR sequence, this is an error.
.IP "\(bu" 2
2011 \*(L"\s-1ECR\s0 \- Characters after end of quoted field\*(R"
.IX Xref "2011"
.Sp
Sequences like \f(CW\*(C`1,foo,"bar"baz,22,1\*(C'\fR are not allowed. \f(CW"bar"\fR is a quoted
field and after the closing double-quote, there should be either a new-line
sequence or a separation character.
.IP "\(bu" 2
2012 \*(L"\s-1EOF\s0 \- End of data in parsing input stream\*(R"
.IX Xref "2012"
.Sp
End-of-file was reached while parsing a stream. This can happen only
when reading from streams with \*(L"getline\*(R", as \*(L"parse\*(R" operates on
strings that are not required to have a trailing \f(CW\*(C`eol\*(C'\fR.
.IP "\(bu" 2
2013 \*(L"\s-1INI\s0 \- Specification error for fragments \s-1RFC7111\*(R"\s0
.IX Xref "2013"
.Sp
Invalid specification for a \s-1URI\s0 \*(L"fragment\*(R" (see \s-1RFC\s0 7111).
.IP "\(bu" 2
2014 \*(L"\s-1ENF\s0 \- Inconsistent number of fields\*(R"
.IX Xref "2014"
.Sp
Inconsistent number of fields under strict parsing.
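.Sp
A minimal sketch of how this error can be triggered, assuming \f(CW\*(C`strict\*(C'\fR
compares each row's field count with that of the previous row:
.Sp
.Vb 3
\& my $csv = Text::CSV_XS\->new ({ strict => 1, auto_diag => 1 });
\& $csv\->parse ("a,b,c");   # first record sets the expected width (3)
\& $csv\->parse ("d,e");     # error 2014: only 2 fields seen
.Ve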
.IP "\(bu" 2
2021 \*(L"\s-1EIQ\s0 \- \s-1NL\s0 char inside quotes, binary off\*(R"
.IX Xref "2021"
.Sp
Sequences like \f(CW\*(C`1,"foo\enbar",22,1\*(C'\fR are allowed only when the binary option
has been selected with the constructor.
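.Sp
A minimal sketch of accepting such content, assuming the embedded new-line
is intentional:
.Sp
.Vb 2
\& my $csv = Text::CSV_XS\->new ({ binary => 1, auto_diag => 1 });
\& $csv\->parse (qq{1,"foo\enbar",22,1}) or die;   # accepted with binary => 1
.Ve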
.IP "\(bu" 2
2022 \*(L"\s-1EIQ\s0 \- \s-1CR\s0 char inside quotes, binary off\*(R"
.IX Xref "2022"
.Sp
Sequences like \f(CW\*(C`1,"foo\erbar",22,1\*(C'\fR are allowed only when the binary option
has been selected with the constructor.
.IP "\(bu" 2
2023 \*(L"\s-1EIQ\s0 \- \s-1QUO\s0 character not allowed\*(R"
.IX Xref "2023"
.Sp
Sequences like \f(CW\*(C`"foo "bar" baz",qu\*(C'\fR and \f(CW\*(C`2023,",2008\-04\-05,"Foo, Bar",\en\*(C'\fR
will cause this error.
.IP "\(bu" 2
2024 \*(L"\s-1EIQ\s0 \- \s-1EOF\s0 cannot be escaped, not even inside quotes\*(R"
.IX Xref "2024"
.Sp
The escape character is not allowed as last character in an input stream.
.IP "\(bu" 2
2025 \*(L"\s-1EIQ\s0 \- Loose unescaped escape\*(R"
.IX Xref "2025"
.Sp
An escape character should escape only characters that need escaping.
.Sp
Allowing the escape of other characters is possible with the
\&\*(L"allow_loose_escapes\*(R" attribute.
.IP "\(bu" 2
2026 \*(L"\s-1EIQ\s0 \- Binary character inside quoted field, binary off\*(R"
.IX Xref "2026"
.Sp
Binary characters are not allowed by default. Fields that contain valid
\&\s-1UTF\-8\s0 are an exception: they will automatically be upgraded. Set
\&\f(CW\*(C`binary\*(C'\fR to \f(CW1\fR to accept binary data.
.IP "\(bu" 2
2027 \*(L"\s-1EIQ\s0 \- Quoted field not terminated\*(R"
.IX Xref "2027"
.Sp
When parsing a field that started with a quotation character, the field is
expected to be closed with a quotation character. When the parsed line is
exhausted before the quote is found, that field is not terminated.
.IP "\(bu" 2
2030 \*(L"\s-1EIF\s0 \- \s-1NL\s0 char inside unquoted verbatim, binary off\*(R"
.IX Xref "2030"
.IP "\(bu" 2
2031 \*(L"\s-1EIF\s0 \- \s-1CR\s0 char is first char of field, not part of \s-1EOL\*(R"\s0
.IX Xref "2031"
.IP "\(bu" 2
2032 \*(L"\s-1EIF\s0 \- \s-1CR\s0 char inside unquoted, not part of \s-1EOL\*(R"\s0
.IX Xref "2032"
.IP "\(bu" 2
2034 \*(L"\s-1EIF\s0 \- Loose unescaped quote\*(R"
.IX Xref "2034"
.IP "\(bu" 2
2035 \*(L"\s-1EIF\s0 \- Escaped \s-1EOF\s0 in unquoted field\*(R"
.IX Xref "2035"
.IP "\(bu" 2
2036 \*(L"\s-1EIF\s0 \- \s-1ESC\s0 error\*(R"
.IX Xref "2036"
.IP "\(bu" 2
2037 \*(L"\s-1EIF\s0 \- Binary character in unquoted field, binary off\*(R"
.IX Xref "2037"
.IP "\(bu" 2
2110 \*(L"\s-1ECB\s0 \- Binary character in Combine, binary off\*(R"
.IX Xref "2110"
.IP "\(bu" 2
2200 \*(L"\s-1EIO\s0 \- print to \s-1IO\s0 failed. See errno\*(R"
.IX Xref "2200"
.IP "\(bu" 2
3001 \*(L"\s-1EHR\s0 \- Unsupported syntax for column_names ()\*(R"
.IX Xref "3001"
.IP "\(bu" 2
3002 \*(L"\s-1EHR\s0 \- getline_hr () called before column_names ()\*(R"
.IX Xref "3002"
.IP "\(bu" 2
3003 \*(L"\s-1EHR\s0 \- bind_columns () and column_names () fields count mismatch\*(R"
.IX Xref "3003"
.IP "\(bu" 2
3004 \*(L"\s-1EHR\s0 \- bind_columns () only accepts refs to scalars\*(R"
.IX Xref "3004"
.IP "\(bu" 2
3006 \*(L"\s-1EHR\s0 \- bind_columns () did not pass enough refs for parsed fields\*(R"
.IX Xref "3006"
.IP "\(bu" 2
3007 \*(L"\s-1EHR\s0 \- bind_columns needs refs to writable scalars\*(R"
.IX Xref "3007"
.IP "\(bu" 2
3008 \*(L"\s-1EHR\s0 \- unexpected error in bound fields\*(R"
.IX Xref "3008"
.IP "\(bu" 2
3009 \*(L"\s-1EHR\s0 \- print_hr () called before column_names ()\*(R"
.IX Xref "3009"
.IP "\(bu" 2
3010 \*(L"\s-1EHR\s0 \- print_hr () called with invalid arguments\*(R"
.IX Xref "3010"
.SH "SEE ALSO"
.IX Header "SEE ALSO"
IO::File, IO::Handle, IO::Wrap, Text::CSV, Text::CSV_PP,
Text::CSV::Encoded, Text::CSV::Separator, Text::CSV::Slurp,
Spreadsheet::CSV and Spreadsheet::Read, and of course perl.
.PP
If you are using Raku, have a look at \f(CW\*(C`Text::CSV\*(C'\fR in the Raku ecosystem,
offering the same features.
.PP
\fInon-perl\fR
.IX Subsection "non-perl"
.PP
A \s-1CSV\s0 parser in JavaScript, also used by W3C <http://www.w3.org>, is the
multi-threaded in-browser PapaParse <http://papaparse.com/>.
.PP
csvkit <http://csvkit.readthedocs.org> is a Python \s-1CSV\s0 parsing toolkit.
.SH "AUTHOR"
.IX Header "AUTHOR"
Alan Citterman \fI<alan@mfgrtl.com>\fR wrote the original Perl module.
Please don't send mail concerning Text::CSV_XS to Alan, who is not involved
in the C/XS part that is now the main part of the module.
.PP
Jochen Wiedmann \fI<joe@ispsoft.de>\fR rewrote the en\- and decoding in
C by implementing a simple finite-state machine. He added variable quote,
escape and separator characters, the binary mode and the print and getline
methods. See \fIChangeLog\fR releases 0.10 through 0.23.
.PP
H.Merijn Brand \fI<h.m.brand@xs4all.nl>\fR cleaned up the code, added
the field flags methods, wrote the major part of the test suite, completed
the documentation, fixed most \s-1RT\s0 bugs, added all the allow flags and the
\&\*(L"csv\*(R" function. See ChangeLog releases 0.25 and on.
.SH "COPYRIGHT AND LICENSE"
.IX Header "COPYRIGHT AND LICENSE"
.Vb 3
\& Copyright (C) 2007\-2021 H.Merijn Brand. All rights reserved.
\& Copyright (C) 1998\-2001 Jochen Wiedmann. All rights reserved.
\& Copyright (C) 1997 Alan Citterman. All rights reserved.
.Ve
.PP
This library is free software; you can redistribute and/or modify it under
the same terms as Perl itself.