I have a simple BCP command to query my MSSQL database and copy the result to a .csv file like this:
bcp "select fullname from [database1].[dbo].employee" queryout "c:\test\table.csv" -c -t"," -r"\n" -S servername -T
The issue comes when the fullname column is a varchar that contains a comma, like "Lee, Bruce". When the result is copied to the .csv file, the portion before the comma (Lee) is placed in the first column of the Excel spreadsheet and the portion after the comma (Bruce) is placed in the second column. I would like to keep everything in the first column, comma included (Lee, Bruce). Does anyone have any idea how to achieve this?
Obviously you should set the column separator to something other than a comma. I'm not familiar with the above syntax, but I'd guess -t"," and -r"\n" are the column and row separators, respectively.
Further, you should either change the default CSV separator in your regional settings OR use the import wizard so Excel places the data correctly. By the way, there are plenty of similar questions on SO.
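Note that bcp itself won't quote fields, so the practical fix is a different -t value (e.g. -t"|"). The sketch below (Python, with made-up sample data) just illustrates why either quoting the fields or choosing a delimiter that can't appear in the data keeps "Lee, Bruce" in one column:

```python
import csv
import io

# Sample names standing in for the fullname column from the question.
rows = [["Lee, Bruce"], ["Chan, Jackie"]]

# Option 1: keep the comma delimiter but quote every field, so
# "Lee, Bruce" stays in one column when Excel opens the file.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_ALL).writerows(rows)
print(buf.getvalue())   # '"Lee, Bruce"\r\n"Chan, Jackie"\r\n'

# Option 2: use a delimiter that cannot appear in the data, e.g. a pipe.
buf2 = io.StringIO()
csv.writer(buf2, delimiter="|").writerows(rows)
print(buf2.getvalue())  # 'Lee, Bruce\r\nChan, Jackie\r\n'
```

With option 2, you would also tell Excel (or the import wizard) that "|" is the delimiter.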
Text to column in excel
Problem: I have data like the above (see the linked image) in Excel, with headers. I need PowerShell code to split it into columns delimited by commas. I can do it manually in Excel, but I don't want to repeat that every time, so any help is much appreciated.
Check the code here:
## PowerShell code ##
$worksheet.QueryTables.Add($TxtConnector, $worksheet.Range("A1"))
You can use the Text to Columns option in Excel itself for this.
Open the file in Excel, go to the Data tab, and then:
1. Select the column
2. Data -> Text to Columns
3. Choose ',' as the delimiter and click OK
4. The data is split across the columns
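If you'd rather script the split than click through the Excel UI (the question asked for PowerShell; this is an equivalent sketch in Python, with made-up sample data):

```python
# Placeholder lines standing in for the spreadsheet from the question;
# each line is one comma-delimited record.
lines = ["Name,Age,City", "Bruce Lee,32,Hong Kong"]

rows = [line.split(",") for line in lines]   # split each line on the comma
header, data = rows[0], rows[1:]
print(header)   # ['Name', 'Age', 'City']
print(data[0])  # ['Bruce Lee', '32', 'Hong Kong']
```

For real files where fields may contain quoted commas, use the csv module's reader instead of a plain split.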
Could you clarify what the input is (an xls/csv file?) and what you're trying to achieve?
From what I can understand, you can use Import-Csv -Delimiter "," to force it to split the columns on commas.
If you need it back as a CSV, you can pipe the result to Export-Csv -Path $path -UseCulture, which will use the delimiter set by your current culture. You can also use any other delimiter via the -Delimiter switch.
If this isn't what you were looking for, could you paste your code into the original post instead of using an image? It'll make it easier to read and test :)
I am trying to import an Excel file into PostgreSQL using pgAdmin, but I have run into many issues.
My original data is in Excel format and the values include many commas, so I converted the Excel file to CSV using a semicolon (;) delimiter. I could do this by unchecking the "use system separators" option in Excel. (This changes the numeric values: for example, 40.2 becomes 40,2.)
When I try to import this CSV file in pgAdmin, I get numerous errors due to the numeric data type: pgAdmin does not consider 40,2 a numeric value. Interestingly, I could do a similar thing with another dataset (txt -> csv (;) -> import into pgAdmin), and it worked!
However, when I try it with my data, it does not work
(excel -> txt -> csv (;) -XX-> pgadmin).
Any idea how I can address this? Alternatively, I'd like to know other ways to generate semicolon-delimited CSV files from Excel.
I had an issue going from Excel to CSV, consumed by a Python script that fills a database, and I noticed two things when the CSV file is generated on Windows:
the line ending is not just '\n' but '\r\n'
Excel adds some invisible characters (likely a byte-order mark) at the beginning of the .csv file
Hope that helps.
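Both quirks are easy to neutralize before loading the file; a minimal Python sketch (the sample bytes are made up):

```python
import codecs

# Bytes as Excel on Windows might write them: a UTF-8 BOM up front
# and CRLF line endings.
raw = codecs.BOM_UTF8 + b"id;value\r\n1;40,2\r\n"

text = raw.decode("utf-8-sig")       # "utf-8-sig" drops the BOM if present
text = text.replace("\r\n", "\n")    # normalize CRLF to plain '\n'
print(repr(text))  # 'id;value\n1;40,2\n'
```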
Use psql and its \copy command:
\copy mytable FROM 'mytable.csv' (FORMAT 'csv', DELIMITER ';')
COPY won't be able to handle “,” as a decimal separator; make sure it is “.”.
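If re-exporting from Excel with "." isn't convenient, the decimal commas can be rewritten before the import; a sketch assuming a semicolon-delimited file (sample content made up, and safe only if commas occur solely as decimal separators):

```python
import csv
import io

# Semicolon-delimited sample as exported from Excel with decimal commas.
src = "id;value\n1;40,2\n2;7,5\n"

out = io.StringIO()
writer = csv.writer(out, delimiter=";", lineterminator="\n")
for row in csv.reader(io.StringIO(src), delimiter=";"):
    # Rewrite the decimal comma as a dot so COPY accepts the value as numeric.
    writer.writerow([cell.replace(",", ".") for cell in row])

print(out.getvalue())  # id;value\n1;40.2\n2;7.5\n
```

In a real script you would read from and write to files instead of in-memory buffers.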
I think you can export Excel to a CSV file directly inside Excel if you use Excel 2016: choose "Export" -> Other File Types -> "CSV (Comma delimited) (*.csv)".
I think Excel will automatically put quotes (") around any text column that contains ",". Then you can import the CSV file into the database using the normal approach.
I managed to use PSQL on Windows to export a SQL query directly into a CSV file, and everything works fine as long as I don't redefine column names with aliases (using AS).
But as soon as I use a column alias, e.g.:
\copy (SELECT project AS "ID" FROM mydb.mytable WHERE project > 15 ORDER BY project) TO 'C:/FILES/user/test_SQL2CSV.csv' DELIMITER ',' CSV HEADER
I have unexpected behaviors with the CSV file.
In Excel: the CSV is corrupted and is blank
In Notepad: the data is present, but with no delimiters or line breaks
(continuous, e.g. ID27282930...)
In Notepad++: the data is well organized in a column
(e.g.
ID
27
28
29
30
...
)
Is there anything I can do so that the exported file can be read directly in Excel (as happens when I don't use aliases)?
After testing various other configurations of the query, I found the issue: apparently Excel interprets a file starting with "ID" as the legacy SYLK format instead of CSV... Renaming the column alias to e.g. "MyID" fixed the issue.
Reference here: annalear.ca/2010/06/10/why-excel-thinks-your-csv-is-a-sylk
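A simple guard is to make sure the header row never starts with the literal ID; a Python sketch (column names and values are placeholders):

```python
import csv
import io

header = ["ID", "project"]
# Excel sniffs a file whose first two characters are "ID" as SYLK,
# so rename the first header cell before writing.
if header[0] == "ID":
    header[0] = "MyID"   # any name that doesn't start the file with "ID" works

buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")
writer.writerow(header)
writer.writerow([27, 16])
print(buf.getvalue())  # MyID,project\n27,16\n
```

Quoting the first header cell (so the file starts with a double quote) would also avoid the SYLK misdetection.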
I have a formatting issue with my Pentaho report's Excel/CSV output.
My report output contains a zip code column, which has leading zeroes when the zip code is shorter than 5 digits. The leading zeroes get truncated when I open the report output in Excel. I used a 'textfield' for the zip code column, and I even tried concatenating zeroes in my xaction SQL. Everything works fine if I open the output in a text editor, but when I open it in Excel the zeroes get trimmed.
Can we prevent this trimming, or can we use other data fields in the design instead of a text field?
Change the extension of your CSV to .txt so you get Excel's dialogue boxes for importing text files; there you can select the comma as your column delimiter. On the third screen (after you hit "Next" twice), there is an option to choose the formatting of each column. Select your zip code column, change it from "General" to "Text" format, and your leading zeroes will be retained.
Use Text formatting via Home --> Number --> Special.
I can't paste an image; I guess I don't have enough points.
Hope it helps.
I don't know whether it is proper or not, but enclose the field in double quotes (or single quotes, whichever you prefer).
The quotes will not display in Excel, but they will display in TextPad or Notepad.
So if you don't mind adding this extra bit, it will solve your problem.
What is the original data format in your DB? Is it an INT?
In your sql statement, try something like this (adjust for the relevant sql dialect, if necessary):
lpad(cast(zip as CHAR(5)),5,'0') zip
where zip is your field name.
Then use text-field as you are already doing.
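The same padding can be done outside SQL as well; a Python sketch of the idea (sample values made up):

```python
# Sample zip codes stored as integers, so the leading zeroes were lost.
zips = [501, 2116, 90210]

# Left-pad to 5 digits -- the same effect as lpad(cast(zip as CHAR(5)), 5, '0').
padded = [str(z).zfill(5) for z in zips]
print(padded)  # ['00501', '02116', '90210']
```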
I am kind of new to PowerBuilder and I'd like to know whether it is possible to keep the "visible" value of a column name when using the SaveAs() method of my DataWindow. Currently, my report shows columns like "Numéro PB" or "Poste 1-3", but when I save, it shows the database names, i.e. "no_pb" and "pos_1_3"...
As I am working on a deployed application, I have to make my changes and implementations as user-friendly as possible, and the users won't understand any of that.
I already use the dw2xls API to save an exact copy of the report, but they want an option that saves only the raw data, and I don't think I can achieve that using its API.
Also, I was asked not to use the Excel OLE object to do it...
Anyone's got an idea?
Thanks,
Michael
dw.SaveAs(<string with filename and path>, CSV!, TRUE) saves the DataWindow data as a comma-separated-value text file with the first row holding the column headers (the database names in the DW painter).
To set the column headings in a saveas you could first access the data with
any la_dwdata[] // declare array
la_dwdata = dw_1.Object.Data // get all data for all rows in dw_1 in the Primary! buffer
From here you would create an output file consisting initially of a series of strings with the column names you want, followed by the data from the array converted to strings (you loop through the array). If you insert commas between the values and give the file a .CSV extension, it will load into Excel. Since this approach also includes any non-visible data, you may need extra logic to exclude columns the users shouldn't see.
So now you have a string consisting of lines of data separated by tabs, each ending with a CRLF. You create your 'header string' with the user-friendly column names in the format 'blah,blah,blah~r~n' (three 'blah' strings separated by commas, with a CRLF at the end).
Now you parse the string obtained from dw_1.Object.Data to find the first line, strip it off, and replace it with the header string you created. You can use the Replace method to replace the remaining tabs with commas. Now you save the string to a file with a .CSV extension and you can load it into Excel.
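That header-swap-and-replace transformation, sketched in Python rather than PowerScript (the column names and data are placeholders):

```python
# Tab-separated text as it might come out of the DataWindow; the first
# line holds the database column names.
raw = "no_pb\tpos_1_3\r\n1\tA\r\n2\tB\r\n"

friendly_header = "Numéro PB,Poste 1-3"

lines = raw.split("\r\n")
lines[0] = friendly_header                 # swap db names for display names
csv_text = "\r\n".join(line.replace("\t", ",") for line in lines)
print(csv_text)  # Numéro PB,Poste 1-3 / 1,A / 2,B
```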
This assumes that your display columns match your raw columns. Create a DataStore ds_head and set your report DW as its DataObject (no data). I'm calling the DataWindow with the report you want to save dw_report. You'll want to delete the two temporary files when you're done. You may need to specify EncodingUTF8! or another encoding instead of ANSI, depending on the data in the DataWindow. Note: Excel will open this CSV, but some other programs may not like it because the header row has a trailing comma.
ds_head.SaveAsFormattedText("file1.csv", EncodingANSI!, ",")
dw_report.SaveAs("file2.csv", CSV!, FALSE, EncodingANSI!)
Run("cmd /c copy file1.csv+file2.csv output.csv")