I am trying to make a CSV file from an Excel file. It has English, Korean and Japanese text. Right now it's saved as file.xlsx.
But when I use Save As in Excel to save it as file.csv, all the Korean and Japanese text turns into question marks (???????).
I tried importing into Google Spreadsheets and exporting as CSV from there (from reading some other solutions), but it still turns into question marks.
I tried building a CSV file from scratch and just copying/pasting values from the Excel file into the CSV, but after I save it as CSV, the characters are always garbled.
Does anybody know how to work around this? Thank you.
I don't know that there IS an answer for this. CSV has no way to declare an encoding, so the character information gets lost when you save in that format.
I tried, as a test, saving Chinese characters as a Unicode Text file, and believe it or not, that worked. So you may be able to do that, and simply change the filename to CSV. Assuming for some reason you NEED the filename to be CSV.
EDIT: I just ran additional testing on this. I was able to re-import the TXT file with either a .txt or .csv extension, and the characters stayed just fine. So I think Unicode Text is your answer.
Simply opening a CSV file in Excel only works when default assumptions hold. You may be writing the CSV correctly but not validating it properly.
It is more reliable to open a blank worksheet and then use Data Import. The encoding of the CSV file is one of the parameters you can specify.
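If you want to check what actually ended up on disk before blaming the display, a quick hedged sketch in Python (not part of the original answer; "file.csv" is the name from the question) can tell you whether the bytes are really UTF-8:

# Sanity-check the raw bytes: do they decode as UTF-8 at all?
with open("file.csv", "rb") as f:
    raw = f.read()
try:
    raw.decode("utf-8")
    print("file.csv decodes cleanly as UTF-8")
except UnicodeDecodeError as err:
    print("not valid UTF-8:", err)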
To fully retain the characters while saving in CSV format, and still be able to import/re-use the data in the future, you can follow these steps.
In Microsoft Excel, open the *.xlsx file.
Select Menu | Save As.
Enter any name for your file.
Under "Save as type," select Unicode Text.
Click Save.
Open your saved file in Microsoft Notepad.
Replace all tab characters with commas (",").
Select a tab character (select and copy the space between two column headers)
Open the "Find and Replace" window (Press Ctrl+H) and replace all tab characters with comma .
Click Save As.
Name the file, and change the Encoding: to UTF-8.
Change the file extension from .txt to .csv.
Click Save.
Open the .csv file in Excel to view your data.
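If you have many workbooks to convert, the same result can be scripted. A minimal sketch, assuming pandas (with openpyxl for .xlsx) is installed; "file.xlsx" and "file.csv" are the names from the question:

import pandas as pd

df = pd.read_excel("file.xlsx")  # uses openpyxl under the hood for .xlsx
# utf-8-sig writes a BOM, so Excel recognizes the file as UTF-8 on double-click
df.to_csv("file.csv", index=False, encoding="utf-8-sig")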
I had the same issue. The article below shows the workaround in detail:
https://help.salesforce.com/articleView?id=000003837&type=1
However, I decided to go with LibreOffice Calc, as it requires fewer steps to achieve the desired outcome. While exporting, you get to select the character set, field delimiter and text delimiter.
For all other tasks, I prefer Excel.
Download and install the Unicode CSV add-in for Excel.
Save the CSV from the new "Unicode CSV" menu that the add-in provides.
I have a tab-delimited text file which contains dagger characters (†). When I open this in Excel 2010, they are mangled and replaced with â€ (I'm not sure if Excel is adding the space, too). Why does this occur and how can I fix it?
Right now I do a search-and-replace in Excel to fix the mangled daggers, but it's inefficient for many files and hacky.
The original file is not using the character encoding that Excel expects.
See
Character Encoding and the ’ Issue
Excel's Import Wizard is better at handling encoding issues and may be able to open your source file properly. See
Microsoft Excel mangles Diacritics in .csv files?
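The "extra space" the asker mentions is part of the same mangling: a UTF-8 dagger read as Windows-1252 comes out as "â€" followed by a no-break space. A small Python demonstration (illustrative only):

# The dagger is stored as UTF-8 but decoded as Windows-1252 (what Excel assumes here).
dagger = "\u2020"                              # †
mangled = dagger.encode("utf-8").decode("cp1252")
print(repr(mangled))                           # 'â€\xa0' -> the \xa0 is the "extra space"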
Never mind: after taking another look at Open .csv file containing special characters in Excel, I realized that instead of Right Click -> Open with Excel, if I go to File -> Open in Excel it lets me choose the encoding.
I have an Excel document with a data table containing Chinese characters. I am trying to export this Excel spreadsheet to a CSV file for importing into a MySQL database.
However, when I save the Excel document as a CSV file, Notepad displays the resulting CSV file's Chinese characters as question marks. Importing into MySQL preserves the question marks, completely ignoring what the original Chinese characters are.
I suspect this may have to do with how Excel handles UTF-8 encoding. Thanks for your help!
The following method has been tested and used to import CSV files in MongoDB, so it should work:
In your Excel worksheet, go to File > Save As.
Name the file and choose Unicode Text (*.txt) from the drop-down list next to "Save as type", and then click Save.
Open the unicode .txt file using your preferred text editor, for example Notepad.
Since our Unicode text file is tab-delimited and we want a CSV (comma-separated) file, we need to replace all tabs with commas.
Select a tab character, right-click it and choose Copy from the context menu, or simply press Ctrl+C.
Press CTRL+H to open the Replace dialog and paste the copied tab (CTRL+V) in the Find what field. When you do this, the cursor will move rightwards indicating that the tab was pasted. Type a comma in the Replace with field and click Replace All.
Click File > Save As, enter a file name and change the encoding to UTF-8. Then click the Save button.
Change the .txt extension to .csv directly in Notepad's Save As dialog and choose All files (*.*) next to "Save as type".
Open the CSV file from Excel by clicking File > Open > Text Files (*.prn; *.txt; *.csv) and verify that the data looks correct.
Source here
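If you do this conversion often, the tab-to-comma and UTF-8 steps above can be scripted. A hedged Python sketch, assuming Excel's "Unicode Text" output is the usual UTF-16 (with BOM) tab-delimited file; "data.txt" and "data.csv" are placeholder names:

import csv

with open("data.txt", encoding="utf-16", newline="") as src, \
     open("data.csv", "w", encoding="utf-8-sig", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src, delimiter="\t"):
        writer.writerow(row)

Going through the csv module instead of a blind find-and-replace also quotes any field that already contains a comma.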
As far as I know Excel doesn't save CSV files in any Unicode encoding. I have had similar issues recently trying to export a file as CSV with the £ symbol. I had the benefit of being able to use another tool altogether.
My version of Excel 2010 can export in a Unicode format via File > Save As > Unicode Text (*.txt), but the output is a tab-delimited, UCS-2 encoded file. I don't know MySQL at all, but a brief look at the documentation suggests it handles tab-delimited imports and UCS-2, so this output may be worth trying.
Edit: Additionally, you could always open this Unicode output in Notepad++ and convert it to UTF-8 (Encoding > Convert to UTF-8 without BOM), and possibly replace all tab characters with commas too (use the Replace dialogue in Extended Search mode, \t in the Find box and , in the Replace box).
You might want to try Notepad++; I doubt Notepad will support Unicode characters.
http://notepad-plus-plus.org/
For some people this solution may work: https://support.geekseller.com/knowledgebase/utf-8/
When saving the CSV, go to Tools (lower right of the Save As dialog) > Web Options > Encoding > Unicode (UTF-8).
Or this SO answer: just use Google Sheets to save csv as unicode:
Excel to CSV with UTF8 encoding
I have tried all the above methods for my data, but none of them quite work for it (Simplified Chinese, over 700 MB). I have tried Chinese and English Windows systems, and English and Chinese Excel. Windows Excel does not seem to be able to save to UTF-8 even when it claims to: I specify UTF-8 CSV in Save As, but when I use 'open sheet' to detect the encoding, it is neither UTF-8 nor GB*.
Here is my final solution.
(1) Download 'open sheet'.
(2) Open the file with it. You can scroll through the encoding methods until you see the Chinese characters displayed correctly in the preview window.
(3) Save it as UTF-8 (if you want UTF-8).
PS: You need to figure out the default encoding on your system. As far as I know, Ubuntu deals with UTF-8 fine, but the Windows default for Simplified Chinese starts with GB*. Even if you encode it as UTF-8, you still might not be able to open it correctly; in my case, R could not open my UTF-8 CSV but could open the GB* encoded one.
This method works well even when your file is very large.
Another workaround is Google Sheets (but the file size is limited). Notepad++ also works for smaller files.
The way to detect the encoding is to open your file and scroll through the encoding methods until you see the Chinese displayed correctly.
You should save the CSV file with:
df.to_csv(file_name, encoding = 'utf_8_sig')
instead of:
df.to_csv(file_name, encoding = 'utf-8')
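The difference is that utf_8_sig writes a UTF-8 byte order mark at the start of the file; that BOM is what Excel looks for when deciding whether a double-clicked CSV is UTF-8. A plain utf-8 file has no BOM, so Excel falls back to the system code page and the characters turn into question marks or mojibake.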
I have a plain text file looking like this:
"some
text
containing
line
breaks"
I'm trying to talk Excel 2004 (Mac, v. 11.5) into opening this file correctly. I'd expect to see only one cell (A1) containing all of the above (without the quotes)...
But alas, I can't make it happen, because Excel seems to insist on using the CR's as row delimiters, even if I set the text qualifier to double quote. I was sort of hoping that Excel would understand that those line breaks are part of the value - they are embedded in double quotes which should qualify them as part of the value. So my Excel sheet has 5 rows, which is not what I want.
I also tried this Applescript to no avail:
tell application "Microsoft Excel"
activate
open text file filename ¬
"Users:maximiliantyrtania:Desktop:linebreaks" data type delimited ¬
text qualifier text qualifier double quote ¬
field info {{1, text format}} ¬
origin Macintosh with tab
end tell
If I could tell Excel to use a row delimiter other than CR (or LF), well, I'd be a happy camper, but Excel seems to allow changing only the field delimiter, not the row delimiter.
Any pointers?
Thanks,
Max
Looks like I just found the solution myself. I need to save the initial file as ".csv". Excel honors the line breaks properly with CSV files. Opening those via AppleScript works as well.
Thanks again to those who responded.
Max
The other option is to create a macro to handle the opening. Open the file for input, and then read the text into the worksheet, parsing as you need, using a Range object.
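If a macro is more ceremony than you want, the same read-and-parse idea can be done outside Excel. A hedged sketch in Python rather than VBA, using the csv module (which does honor line breaks inside quoted fields) and openpyxl (assumed installed) to build the workbook; the file names are placeholders:

import csv
from openpyxl import Workbook

wb = Workbook()
ws = wb.active

with open("linebreaks.txt", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        ws.append(row)   # embedded line breaks stay inside a single cell

wb.save("linebreaks.xlsx")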
If your file has columns separated by list separators (commas, but semicolons for some non-English regional settings), rename it to .csv and open it in Excel.
If your file has columns separated by TABs, rename it to .tab and open it in Excel.
Importing (instead of opening) a .csv or .tab file does not seem to handle line feeds between text delimiters. :-(
Is it just one file? If so, don't import it. Just copy and paste the content of your text file into the first cell (hit F2, then paste).
If you absolutely must script this, Excel actually uses only one of those two characters (CR, LF) as the row delimiter, but I'm not sure which. Try first stripping out the LFs with an external utility (leaving the CRs) and then import it... if that doesn't work, strip out the CRs (leave the LFs) and then import it.
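A throwaway sketch of the stripping step, assuming the file is small enough to read into memory (swap the two byte values to test the other variant; file names are placeholders):

# Remove bare LFs and keep the CRs.
with open("linebreaks.txt", "rb") as f:
    data = f.read()

with open("linebreaks-stripped.txt", "wb") as f:
    f.write(data.replace(b"\n", b""))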
I produce a report as an CSV file.
When I try to open the file in Excel, it makes an assumption about the data type based on the contents of the cell, and reformats it accordingly.
For example, if the CSV file contains
...,005,...
Then Excel shows it as 5.
Is there a way to override this and display 005?
I would prefer to do something to the file itself, so that the user could just double-click on the CSV file to open it.
I use Excel 2003.
There isn't an easy way to control the formatting Excel applies when opening a .csv file. However, listed below are three approaches that might help.
My preference is the first option.
Option 1 – Change the data in the file
You could change the data in the .csv file so the value is written as ...,="005",...
This will be displayed in Excel as ...,005,...
Excel keeps the data as a formula, but copying the column and using Paste Special > Values will get rid of the formula while retaining the formatting (a sketch of writing such a file appears after Option 3 below).
Option 2 – Format the data
If it is simply a format issue and all the data in that column is three digits long, then open the data in Excel and format the column containing the data with the custom format 000.
Option 3 – Change the file extension to .dif (Data interchange format)
Change the file extension and use the file import wizard to control the formats.
Files with a .dif extension are automatically opened by Excel when double clicked on.
Step by step:
Change the file extension from .csv to .dif
Double click on the file to open it in Excel.
The 'File Import Wizard' will be launched.
Set the 'File type' to 'Delimited' and click on the 'Next' button.
Under Delimiters, tick 'Comma' and click on the 'Next' button.
Click on each column of your data that is displayed and select a 'Column data format'. The column with the value '005' should be formatted as 'Text'.
Click on the Finish button; the file will be opened by Excel with the formats that you have specified.
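If the report is produced programmatically, the ="005" wrapping from Option 1 is easy to emit. A hedged Python sketch; the column names and values are invented for the example:

rows = [("005", "Widget"), ("042", "Gadget")]

with open("report.csv", "w", encoding="utf-8") as f:
    f.write("code,name\n")
    for code, name in rows:
        # Excel evaluates ="005" as a formula returning text, so the zeros survive
        f.write('="{}",{}\n'.format(code, name))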
Don't use CSV, use SYLK.
http://en.wikipedia.org/wiki/SYmbolic_LinK_(SYLK)
It gives much more control over formatting, and Excel won't try to guess the type of a field by examining the contents. It looks a bit complicated, but you can get away with using a very small subset.
This works for Microsoft Office 2010, Excel Version 14
I misread the OP's preference "to do something to the file itself." I'm still keeping this for those who want a solution to format the import directly
Open a blank (new) workbook (File -> New -> Blank workbook)
Open the Import Wizard (Data -> From Text)
Select your .csv file and Import
In the dialogue box, choose 'Delimited', and click Next.
Choose your delimiters (uncheck everything but 'comma'), choose your Text qualifiers (likely {None}), click Next
In the Data preview field select the column you want to be text. It should highlight.
In the Column data format field, select 'Text'.
Click Finish.
You can simply format your range as Text.
Also here is a nice article on the number formats and how you can program them.
Actually I discovered that, at least starting with Office 2003, you can save an Excel spreadsheet as an XML file.
Thus, I can produce an XML file and when I double-click on it, it'll be opened in Excel.
It provides the same level of control as SYLK, but XML syntax is more intuitive.
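A hedged sketch of what such a file can look like, generated from Python; the sheet name and value are invented, and ss:Type="String" is what keeps Excel from turning 005 into 5:

# Write a minimal XML Spreadsheet 2003 file with an explicitly typed String cell.
# The mso-application processing instruction is what makes Windows open it in Excel.
xml = """<?xml version="1.0"?>
<?mso-application progid="Excel.Sheet"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
 <Worksheet ss:Name="Report">
  <Table>
   <Row>
    <Cell><Data ss:Type="String">005</Data></Cell>
   </Row>
  </Table>
 </Worksheet>
</Workbook>
"""
with open("report.xml", "w", encoding="utf-8") as f:
    f.write(xml)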
Adding a non-breaking space in the cell could help.
For instance:
"firstvalue";"secondvalue";"005 ";"othervalue"
It forces Excel to treat it as a text and the space is not visible.
On Windows you can add a non-breaking space by typing Alt+0160.
See here for more info: http://en.wikipedia.org/wiki/Non-breaking_space
Tried on Excel 2010.
Hope this can help people who are still searching for a proper solution to this problem.
I had this issue when exporting CSV data from C# code, and resolved this by prepending the leading zero data with the tab character \t, so the data was interpreted as text rather than numeric in Excel (yet unlike prepending other characters, it wouldn't be seen).
I did like the ="001" approach, but this wouldn't allow exported CSV data to be re-imported again to my C# application without removing all this formatting from the import CSV file (instead I'll just trim the import data).
I believe when you import the file you can select the Column Type. Make it Text instead of Number. I don't have a copy in front of me at the moment to check though.
Load the CSV into OleDb and force all inferred data types to string.
I asked the same question and then answered it with code.
Basically, when the CSV file is loaded, the OleDb driver makes assumptions; you can tell it what assumptions to make.
My code forces all data types to string, though... it's very easy to change the schema.
For my purposes I used an XSLT to get it the way I wanted, but I am parsing a wide variety of files.
I know this is an old question, but I have a solution that isn't listed here.
When you produce the CSV, add a space after the comma but before your value, e.g. ..., 005,...
This worked to prevent auto date formatting in Excel 2007, anyway.
The Text Import Wizard method does NOT work when the CSV file being imported has line breaks within a cell. This method handles that scenario (at least with tab-delimited data):
Create new Excel file
Ctrl+A to select all cells
In Number Format combobox, select Text
Open tab delimited file in text editor
Select all, copy and paste into Excel
Just add ' before the number in the CSV doc.
This has been driving me crazy all day (since indeed you can't control the Excel column types before opening the CSV file), and this worked for me, using VB.NET and Excel Interop:
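'Assumed context, not shown in the original snippet: Imports System.IO and
'Imports Excel = Microsoft.Office.Interop.Excel at the top of the file,
'mxlApp holding an already created Excel.Application instance, and the
'xlTextFormat / xlGeneralFormat constants coming from Excel.XlColumnDataType.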
'Convert .csv file to .txt file.
FileName = ConvertToText(FileName)
Dim ColumnTypes(,) As Integer = New Integer(,) {{1, xlTextFormat}, _
{2, xlTextFormat}, _
{3, xlGeneralFormat}, _
{4, xlGeneralFormat}, _
{5, xlGeneralFormat}, _
{6, xlGeneralFormat}}
'We are using OpenText() in order to specify the column types.
mxlApp.Workbooks.OpenText(FileName, , , Excel.XlTextParsingType.xlDelimited, , , True, , True, , , , ColumnTypes)
mxlWorkBook = mxlApp.ActiveWorkbook
mxlWorkSheet = CType(mxlApp.ActiveSheet, Excel.Worksheet)
Private Function ConvertToText(ByVal FileName As String) As String
'Convert the .csv file to a .txt file.
'If the file is a text file, we can specify the column types.
'Otherwise, the Codes are first converted to numbers, which loses trailing zeros.
Try
Dim MyReader As New StreamReader(FileName)
Dim NewFileName As String = FileName.Replace(".CSV", ".TXT")
Dim MyWriter As New StreamWriter(NewFileName, False)
Dim strLine As String
Do While Not MyReader.EndOfStream
strLine = MyReader.ReadLine
MyWriter.WriteLine(strLine)
Loop
MyReader.Close()
MyReader.Dispose()
MyWriter.Close()
MyWriter.Dispose()
Return NewFileName
Catch ex As Exception
MsgBox(ex.Message)
Return ""
End Try
End Function
When opening a CSV, you get the text import wizard. At the last step of the wizard, you should be able to import the specific column as text, thereby retaining the '00' prefix. After that you can then format the cell any way that you want.
I tried with with Excel 2007 and it appeared to work.
Well, Excel never pops up the wizard for CSV files. If you rename the file to .txt, you'll see the wizard when you do a File > Open in Excel the next time.
Put a single quote before the field. Excel will treat it as text, even if it looks like a number.
...,'005,...
EDIT: This is wrong. The apostrophe trick only works when entering data directly into Excel. When you use it in a CSV file, the apostrophe appears in the field, which you don't want.
http://support.microsoft.com/kb/214233