Add FOR CSV to facilitate CSV export
We have FOR XML and FOR JSON, but it is not uncommon that people want to produce CSV files. This can be done with BCP, but it is not straightforward, not least if you want to add headers. And if you only want fields to be quoted when needed, BCP is not really what you are looking for. SSIS is also an alternative, but SSIS does not always fit in.
Thus, it would be handy if there was an option like FOR CSV, with a syntax like:
FOR CSV [, HEADERS] [,DELIMITER(',')] [, QUOTE('"')]
HEADERS would mean that the column headers are added on the first line of the CSV output. DELIMITER would specify the delimiter, with comma as the default, and QUOTE would specify which character is used to quote the data. (Whether QUOTE is actually needed, I don't know.)
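The semantics sketched above can be illustrated in Python, since FOR CSV does not exist in T-SQL: quote a field only when it contains the delimiter, the quote character, or a line break (the "only when needed" behavior BCP lacks), plus an optional header line. The column names and data below are invented for illustration.

```python
import csv
import io

# Hypothetical FOR CSV, HEADERS, DELIMITER(','), QUOTE('"') semantics:
# csv.QUOTE_MINIMAL quotes a field only when it contains the delimiter,
# the quote character, or a line break.
rows = [("Ab", 'He said "hi"'), ("C,d", "plain")]
buf = io.StringIO()
writer = csv.writer(buf, delimiter=",", quotechar='"',
                    quoting=csv.QUOTE_MINIMAL, lineterminator="\r\n")
writer.writerow(["col1", "col2"])   # the HEADERS option
writer.writerows(rows)
print(buf.getvalue())
# col1,col2
# Ab,"He said ""hi"""
# "C,d",plain
```

Note that the embedded quote is escaped by doubling it, and the field containing the delimiter is quoted while plain fields are left bare.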
Jeff Moden commented
I definitely voted for this. I'd also like to add that there should be (some incredibly useful) options to do the following:
1. Export CSV and TSV files according to the RFC 4180 standard. This also includes automatically double quoting all character based fields whether an in-field delimiter is present or not.
2. Support for multi-character delimiters (FIELDTERMINATOR).
3. Support for multi-character row terminators (ROWTERMINATOR).
4. Support for delimiters and terminators in the ASCII character range below 32.
5. Include the same delimiters, text identifiers, and row terminator in the optional header so that BCP and BULK INSERT can successfully do a FIRSTROW (because BCP and BULK INSERT currently count delimiters and terminators rather than just looking for row terminators).
6. Ensure that the final line of output includes the same row terminator characters as all of the other rows to avoid the dreaded "Unexpected End of File" error.
Now, if you really want to do a bang-up job, add the following options (pretty please!):
7. Add an option to create a row above the optional header that contains datatype information. It would be really helpful if these datatypes were exactly the same as they appear from the source table in T-SQL. That would allow a separate read of the file to get the datatype information for the file to be used to automate the creation of a target table. Again, this row must be delimited and terminated exactly the same as all the data rows so that BCP and BULK INSERT can successfully read the data in a separate read from the file using FIRSTROW.
8. Have 4 "file manifest" options to automatically append to the target file name, the date in the ISO (yyyymmdd) DATE format, append the time in the (HHMMSS) format, append the count of the number of DATA lines in the file (does not include the header, etc), and the option to set the leading delimiter for each of the optional add-ons to the file name. DO NOT INCLUDE THIS INFORMATION AT THE BEGINNING OF THE DATA IN THE FILE ITSELF BECAUSE IT WOULD DESTROY THE ABILITY OF BCP AND BULK INSERT TO TAKE ADVANTAGE OF FIRSTROW!
9. Optionally include a "LineNo" column at the beginning of all lines. The first data line should be numbered "1". It must be included in any optional lines (like the header and datatype lines) to preserve the number of delimiters in each line so that BCP and BULK INSERT can use FIRSTROW to find the first data line. I recommend that the optional header line be numbered "0" and the optional datatype line be numbered "-1". All of this would make troubleshooting both imports and exports a whole lot easier and would provide another bit of "manifest" information to ensure that no rows were lost in transmission or imports. Of course, this optional field must be delimited the same as all the other rows including any quoted (or otherwise) text identifiers.
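Several of the options requested above can be sketched together in Python (again, hypothetically, since FOR CSV does not exist): RFC 4180-style quoting of every character field (item 1), CRLF row terminators including one after the final row (item 6), a header delimited exactly like the data rows so FIRSTROW works (item 5), and a leading "LineNo" column with the header numbered 0 (item 9). The names and data are invented for illustration.

```python
import csv
import io

header = ["LineNo", "FirstName", "LastName"]
data = [("Jeff", "Moden"), ("Erland", "Sommarskog")]

buf = io.StringIO()
# QUOTE_ALL quotes every field unconditionally (item 1);
# lineterminator="\r\n" also terminates the last row (item 6).
writer = csv.writer(buf, quoting=csv.QUOTE_ALL, lineterminator="\r\n")
writer.writerow(header)  # header has the same delimiters as data rows
for lineno, row in enumerate(data, start=1):
    writer.writerow([lineno, *row])  # first data line is numbered 1
print(buf.getvalue())
# "LineNo","FirstName","LastName"
# "1","Jeff","Moden"
# "2","Erland","Sommarskog"
```

Because every line, header included, carries the same number of delimiters and the same terminator, a consumer that counts delimiters (as BCP and BULK INSERT do) can skip the header reliably.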
I vote automatically for everything Erland suggests :)
Besides that, it is indeed required. Come on, Microsoft devs, the lack of very basic functionality for data import/export makes you guys look more and more funny as the years and decades pass away...
Ronen Ariely (pituach) commented
Sounds like a nice idea to have some functions to work with flat comma/tab-delimited structures, the way we have for XML and JSON.
Introducing FOR CSV would definitely be handy. I tried this approach - http://www.allaboutmssql.com/2019/01/sql-server-export-adhoc-sql-or-table-to.html
John Zabroski commented
@Erland Sommarskog - quote is definitely needed, as per the RFC for CSV
5. Each field may or may not be enclosed in double quotes (however
some programs, such as Microsoft Excel, do not use double quotes
at all). If fields are not enclosed with double quotes, then
double quotes may not appear inside the fields. For example:
For interpretation of "may not", please see this RFC: https://www.ietf.org/rfc/rfc2119.txt
This would be awesome - although I would also love to see some CSV love shown to bulk import too, specifically handling escaped delimiters, double quotes from Excel, etc. SSIS / BCP is time-consuming, bloated nonsense when compared to PostgreSQL's COPY FROM etc.
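The import side mentioned above can be illustrated with Python's RFC 4180-compliant reader: a line the way Excel writes it, with one field containing the delimiter and another containing a doubled-up embedded quote. The data is made up for illustration.

```python
import csv
import io

# One record, Excel-style: a quoted field holding the delimiter,
# and a quoted field holding an escaped (doubled) embedded quote.
raw = 'aaa,"b,bb","say ""hello"""\r\n'
row = next(csv.reader(io.StringIO(raw)))
print(row)  # ['aaa', 'b,bb', 'say "hello"']
```

A BULK INSERT that understood these rules would parse this as three fields, rather than splitting on every comma.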
Aaron Nelson commented
I do like this idea.
An easy work-around is to use Invoke-SqlCmd and pipe the results to Export-Csv (PowerShell cmdlets).
Eli Massey commented
I would suggest it be FOR RAWTEXT with options like DELIMITER (comma, tab, etc.), and QUOTE is definitely useful.
This is a great suggestion. I would love to see this as well.
Really would like this as well, but it MUST have the option not to QUOTE fields, unlike in BCP.
This would make life a lot easier
Ra Osolage (sqlkiter) commented
I completely agree with this request. We've been using sqlcmd a lot for data exports to CSV, but it always has the 2nd line with a whole bunch of dashes. Perhaps somewhere in the consideration for this request, you can include sqlcmd to honor FOR CSV or something.
Talley Ouro commented
How about FOR(format_type='json', lang='en', max_record='12000', auto_format='true', export_file='c:\export\data.json', open_with_default_program='true') for all of them (XML, JSON, CSV, etc.)? lang (get the language from the user's computer) should default to 'en', max_record should be optional, auto_format should default to true, export_file should be optional, and open_with_default_program should default to false.
Key here would be to comply with the CSV rules, notably honoring CrLf's when they appear within a quoted block.
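The CrLf rule above can be demonstrated with Python's csv reader, which follows RFC 4180 here: a CRLF inside a quoted field is part of the data, not a row terminator. The sample data is invented.

```python
import csv
import io

# Two records, the first containing an embedded CRLF in a quoted field.
raw = '1,"line one\r\nline two",x\r\n2,plain,y\r\n'
rows = list(csv.reader(io.StringIO(raw)))
print(len(rows))   # 2 rows, not 3
print(rows[0][1])  # the embedded CRLF survives inside the field
```

A naive splitter that breaks on every CRLF would see three rows here and corrupt the first record.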
Never quite understood why SQL Server doesn't natively transform from and into CSV. Sure, BCP does it, sort of, and one can craft an XSL transformation or directly code an extract via DTS/SSIS, but these methods all suffer from externalizing the transformation.