<ADODB_Recordset>.Export.Csv
Description
Exports the current ADODB Recordset to a CSV file.
Syntax
<ADODB_Recordset>.Export.Csv(FullPathCSVFileName [, FullPathCSVFileName_TEMP [, cOptions ]] )
Returns
Number.
Number of records written to the CSV file.
Parameters
| Name | Type | Optional | Meaning |
|---|---|---|---|
| FullPathCSVFileName | String | No | Full path to the CSV file name. |
| FullPathCSVFileName_TEMP | String | Yes | Full path to the CSV temporary file name. Leave empty if not needed. See the Filename_Temp option in the remarks below on how and when to use it. |
| cOptions | cOptions | Yes | A cOptions object setting all the needed options. |
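In its simplest form the method only needs the target file name and returns the number of records written. A minimal sketch, assuming pRecordset is an ADODB_Recordset that has already been opened (as in the Examples section below), that APP_SourcePath points to a writable folder, and that "Export_Out.csv" is just an example file name:
'---Export every record of the already opened recordset using default options
dim nRecordsWritten as long   '---Long used here to hold the returned record count
nRecordsWritten = pRecordset.Export.Csv(APP_SourcePath + "Export_Out.csv")
printl "Records written:", nRecordsWritten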
Remarks
List of supported CSV options:
Parameters affecting the file

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| Filename | String | none | Full path name of the CSV file. If this option is passed, it replaces the FullPathCSVFileName parameter. |
| Filename_Temp | String | none | Temporary file name to be used while the export process is running. Specify a temporary file name/location where to output data while the export is in progress; at the end, the temporary file will be renamed/moved to the name given by the Filename parameter. There are several reasons why a temporary file name can be useful: 1. the export process can take a while and you do not want other processes to start reading the exported file before it is finished; 2. the exported file can be an input file for another process that expects a precise file name, and using a temporary name prevents that process from reading the file before it is complete; 3. the file can be written locally using local disk resources (usually faster) and only copied to its final destination at the end. |
| AppendToFile | Number | %FALSE | If %TRUE, new data will be appended to the CSV file if it already exists. Otherwise, if the file already exists it will be deleted before exporting. |
| RecordBuffer | Number | 1000 | Number of records to keep in the string builder memory buffer before appending data to the exported file. |
| MaxRecords | Number | -1 | Specifies how many records to export. -1 means all records. |

Parameters affecting the file structure

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| FieldSeparator | String | ";" | String to be used as field (column) separator. |
| RecordSeparator | String | $CRLF | String to be used as record (line) separator. |

Parameters affecting columns

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| ColumnNames | Number | %TRUE | If %TRUE, column names from the Recordset fields will be inserted into the CSV file. If AppendToFile is %TRUE and the file was already present, ColumnNames has no effect because field names are assumed to be already present in the first row of the CSV file. |
| Index | Number | %FALSE | If %TRUE, the exported file will have a first column named "RecNo#" containing a numeric sequence starting from 1, unless the IndexBase parameter specifies a different starting number. |
| IndexBase | Number | 1 | If the Index parameter is %TRUE, IndexBase can be used to indicate a different starting number for the index column. |
| IndexName | String | "RecNo#" | If the Index parameter is %TRUE, IndexName can be used to indicate a name for the index column other than the default one. |
| ColumnFilter_StartWith | String | "" | If present, only columns whose name starts with the specified string will be exported; other fields will be ignored. Attention: be sure at least one field matches the specified filter string, otherwise no data will be exported. |

Parameters affecting data

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| StringDelimiter | String | $DQ | If present, it will be used to surround the value when the dataset field is a string. |
| DecimalSeparator | String | "." | If present, it will be used to replace the decimal separator in numeric dataset fields. |

Character replacement in data

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| Replace_CR | Number | %TRUE | If %TRUE and field data contains a carriage return, it will be replaced by the Replace_CR_Char string (if present). |
| Replace_CR_Char | String | $SPC | String used to replace carriage returns when Replace_CR is %TRUE. |
| Replace_LF | Number | %TRUE | If %TRUE and field data contains a line feed, it will be replaced by the Replace_LF_Char string (if present). |
| Replace_LF_Char | String | $SPC | String used to replace line feeds when Replace_LF is %TRUE. |
| Replace_CRLF | Number | %TRUE | If %TRUE and field data contains a carriage return + line feed, it will be replaced by the Replace_CRLF_Char string (if present). |
| Replace_CRLF_Char | String | $SPC + $SPC | String used to replace carriage return + line feed pairs when Replace_CRLF is %TRUE. |
| Replace_DQ | Number | %TRUE | If %TRUE and field data contains a double quote, it will be replaced by the Replace_DQ_Char string (if present). |
| Replace_DQ_Char | String | "'" | String used to replace double quotes when Replace_DQ is %TRUE. |

Dynamic data transformation

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| Transform_Field_<FieldName> | String | none | A dynamic thinBasic code snippet to be executed for the specified dataset field in order to transform its data before exporting it. See the Examples section below for snippets applied to the Title and YearPublished fields. |

Optional callback functions

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| Callback_Progress | String | none | Name of a function that will be executed at the beginning of every record handled. The function must have exactly 3 parameters of Long type passed ByVal that will receive: 1. the total number of records to export (if supported by the current recordset, otherwise -1); 2. the number of records exported so far; 3. the milliseconds elapsed so far. These values can be used to calculate the estimated time to finish, as in the sketch after this table. |
| CallBack_Progress_Every_Records | Number | 0 | When the number of records is high, for example 100K or above, it can be convenient to reduce the number of times the Callback_Progress function is executed and run it only after a given number of records have been processed. |

Unicode conversions

| Option Name | Type | Default value | Meaning |
|---|---|---|---|
| ToUtf8 | Number | %TRUE | If %TRUE, strings will be converted to UTF-8 before writing to disk. |
| ToWide | Number | | If %TRUE, strings will be converted to wide characters before writing to disk. |
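The following is a minimal sketch of a progress callback. The three ByVal Long parameters follow the Callback_Progress contract above; the function name My_Export_Progress and its Long return type are assumptions for illustration. The estimated time to finish is derived from the average elapsed time per record:
'---Hypothetical progress callback: 3 ByVal Long parameters as required by Callback_Progress
function My_Export_Progress(byval nTotalRecords as long, byval nRecordsDone as long, byval nElapsedMs as long) as long
  dim nEtaMs as long
  if nTotalRecords > 0 and nRecordsDone > 0 then
    '---Estimated milliseconds to finish: average time per record * records still to export
    nEtaMs = (nElapsedMs / nRecordsDone) * (nTotalRecords - nRecordsDone)
  end if
  printl "Exported", nRecordsDone, "of", nTotalRecords, "records. Estimated ms to finish:", nEtaMs
end function
The function name would then be passed in the options, for example Callback_Progress: "My_Export_Progress", possibly together with CallBack_Progress_Every_Records to limit how often it fires.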
Restrictions
ATTENTION: if the specified CSV file already exists, it will be replaced unless the AppendToFile option is present and set to %TRUE.
See also
Examples
...
sSql = "
SELECT
Titles.Title
, Titles.ISBN
, Titles.[Year Published] as YearPublished
FROM
Titles
ORDER BY
Titles.Title
"
pRecordset.OPEN(sSql, pConnection, %ADOPENDYNAMIC, %ADLOCKOPTIMISTIC, %ADCMDTEXT)
...
dim ExportCsvOpt as new cOptions
ExportCsvOpt.Set(
Filename : APP_SourcePath + "ADODB_Test_Export_Out.csv", '---This (if present) replaces the file name passed to pRecordset.Export.Csv
Filename_Temp : APP_SourcePath + "ADODB_Test_Export_Out.csv.temp", '---Temporary file written during export, renamed to Filename when the export completes
AppendToFile: %FALSE,
ColumnNames: %TRUE,
FieldSeparator: ";",
RecordSeparator: $crlf,
StringDelimiter: $dq,
DecimalSeparator: ".",
RecordBuffer: 1000,
Index: %TRUE,
IndexBase: 1,
IndexName: "Rec",
ToUtf8: %true,
MaxRecords: -1,
Replace_CR: %TRUE,
Replace_CR_Char: "",
Replace_LF: %TRUE,
Replace_LF_Char: "",
Replace_CRLF: %TRUE,
Replace_CRLF_Char: "",
Replace_DQ: %TRUE,
Replace_DQ_Char: "'",
Transform_Field_Title: "{SetData_Title(pRecordset.Fields(""Title"").Value)}",
Transform_Field_YearPublished: """{format$(pRecordset.Fields(""YearPublished"").Value, ""000000"")}"""
)
pRecordset.MoveLast
pRecordset.MoveFirst
printl " Export csv:", pRecordset.Export.Csv(APP_SourcePath + "ADODB_Test_Export_Out.csv", ExportCsvOpt)
...
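The Transform_Field_Title snippet above calls a SetData_Title function that is defined in user code and not shown here. A hypothetical sketch of such a transformation function, simply trimming and upper-casing the title before export, could look like this:
'---Hypothetical helper used by the Transform_Field_Title snippet above
function SetData_Title(byval sTitle as string) as string
  '---Example transformation: remove surrounding spaces and force upper case
  function = ucase$(trim$(sTitle))
end function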