I am trying to export data containing Chinese (or any non-English characters, for that matter) from ui-grid to PDF or CSV. However, the exported text is all garbled. Here is the plnkr link:
http://plnkr.co/edit/ZR34lhm3LUNmUrj7Vg23?p=preview
I understand that for the PDF export to work I need the correct cmap for the font and character set in use, but why is the CSV export not working? I have even tried exporterOlderExcelCompatibility: false, but that didn't help either.
Did you try with:
exporterOlderExcelCompatibility: true
(false is the default).
I had a problem exporting umlaut characters to CSV, and I fixed it by setting exporterOlderExcelCompatibility: true in the gridOptions configuration.
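As far as I can tell, all that option does is prepend a UTF-8 byte order mark (\uFEFF) to the generated file so Excel detects the encoding. If you cannot change the grid options, the same effect can be achieved after the fact; a minimal Python sketch (the file name is a placeholder):

# Prepend a UTF-8 BOM to a CSV that was exported without one, so that
# Excel picks up the encoding. "export.csv" is a placeholder name.
with open("export.csv", "r", encoding="utf-8") as f:
    content = f.read()
with open("export.csv", "w", encoding="utf-8-sig") as f:
    f.write(content)  # "utf-8-sig" writes the BOM bytes EF BB BF first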
The issue you are facing with the CSV is most likely due to opening it in Excel. I do not face the issue with OpenOffice.
For Excel 2013, these steps should work:
Launch Excel, and go to the Data tab.
Click the From Text button.
Navigate to the exported file, and click Open.
Set the encoding to UTF-8 if it wasn't auto-selected.
Click Next.
Change the delimiter to comma.
Click Finish.
Choose where you want the data to go, or just click OK.
At this point, you should have the CSV displaying correctly.
With the CSV exporter, CSV-JS, I do not see any options to set the encoding beforehand.
If you have a small file: open your CSV using Google Sheets and save it as CSV again. It will work.
If you have a large file: try OpenOffice (free and easy to download). Open it with OpenOffice Calc and save it as UTF-8.
The default Windows encoding for Chinese is GBK; on Ubuntu it is UTF-8. Make sure you have the right encoding for your system.
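If you need to move a file between those two worlds, transcoding it explicitly beats letting each program guess. A minimal Python sketch (file names are just examples):

# Transcode an export from GBK (the default on Chinese-locale Windows)
# to UTF-8; the file names are placeholders.
with open("orders_gbk.csv", "r", encoding="gbk") as src:
    data = src.read()
with open("orders_utf8.csv", "w", encoding="utf-8") as dst:
    dst.write(data)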
Related
I searched all around the internet for how to save a CSV file as Unicode (UTF-8), but it still does not work: whenever I save and reopen the file, there are ????? characters instead of the UTF-8 letters.
Has anyone ever had this issue? How can I solve this?
This has been an annoying shortcoming of Excel for a long time.
A way to work around this issue, is to do the following:
Save as... Unicode Text (*.txt). Make sure to keep the extension as .txt (or at least not .csv). The file will be saved with tabs instead of commas separating the columns.
Open the document. You will be prompted with an import wizard.
For File origin, choose 65001: Unicode (UTF-8)
For the rest of the options, choose the common-sense defaults.
You will have your document back, ready to edit, with the proper Unicode text intact.
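Note that, as far as I know, Excel's "Unicode Text" format is UTF-16 with tab separators, so if you need a UTF-8 comma-separated file back afterwards, you can convert it yourself. A rough Python sketch (file names are placeholders):

import csv

# Excel's "Unicode Text" output is UTF-16 tab-separated; convert it
# back into a UTF-8 comma-separated file.
with open("export.txt", "r", encoding="utf-16", newline="") as src, \
     open("export.csv", "w", encoding="utf-8", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src, delimiter="\t"):
        writer.writerow(row)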
I have a CSV that is encoded in Unicode but lacks a byte order mark at the start. As such, Excel (2013) opens it without detecting the encoding correctly (I think it assumes ASCII if no BOM is specified), meaning that certain characters are displayed incorrectly.
From reading around, I have read that a BOM of "\uFEFF" should be placed at the start of the CSV file. I have tried opening the file in a text editor and adding the characters, e.g.
\uFEFF
r1test 1, r1text2, r1text3
r2test 1, r2text2, r2text3
However, this does not solve the problem: the characters "\uFEFF" show up in the first row when I open the file in Excel, rather than being interpreted as a BOM. I am not sure what I am doing wrong, or how the text should be specified so that it is interpreted as a BOM rather than as text in the first row of the data.
I have only very limited experience using CSV, and only just heard of a BOM... and thus I could be implementing this completely wrong!
(For reference, I know that I could specify the encoding if I use the import data option within Excel... however, I really want to work out how to get it correctly specified in advance so that I can just open the CSV... I have several thousand of these files that I am creating and exporting; once I know how to do this 'manually' [i.e. by adding some text at the start of the file], I can automate it in Python.)
Thanks in advance
For someone else wanting to tell Excel to add a BOM: See if you can "Save as Unicode Text".
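The reason the attempt in the question fails is that typing the six characters \uFEFF into a text editor inserts literal text; the BOM has to be the actual bytes EF BB BF at the very start of the file. Since the question mentions automating this in Python, here is a minimal sketch (the exports/*.csv path is made up):

import glob

# Write the real BOM: the first character of the file must be U+FEFF
# itself, not the six literal characters \uFEFF typed in an editor.
for path in glob.glob("exports/*.csv"):
    with open(path, "r", encoding="utf-8") as f:
        content = f.read()
    with open(path, "w", encoding="utf-8") as f:
        f.write("\ufeff" + content)  # becomes bytes EF BB BF in UTF-8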
I'm trying to convert a .xls file to .pdf using LibreOffice via the command line on Ubuntu. I have a kind of report in the .xls file with some colors in the background of the cells, etc.
The problem is that when I convert the .xls file, the .pdf loses the original format. Each page breaks almost in half, and the content of one page is displayed across two different pages.
Does anybody know how to convert the .xls file to .pdf via the command line while keeping the original format?
Or is there some trick to set the size of the .pdf pages so they don't break (also via the command line)?
The code I used to make the conversion was:
soffice --headless --convert-to pdf:"impress_pdf_Export" filename.xls
If you use LibreOffice to convert Microsoft Excel (XLS) files to PDF documents, this is a two-step process (even if your command does look like it is a one-step process):
Import the XLS into LibreOffice (even if started with --headless).
Export the PDF from LibreOffice.
If the result does not look like you expect (not similar enough to Excel's native PDF export), then start with debugging the first step from above:
Open the XLS file with LibreOffice in a GUI. Does it look like you expect it to look? Or are some formatting options looking weird?
Export the PDF from there (with the GUI). Are the page dimensions as you expect? Did you set them up how you prefer? Are the margins as you want them? And so on.
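Also double-check the filter name in your command: impress_pdf_Export is the filter for Impress presentations; for a spreadsheet, the Calc filter (calc_pdf_Export, if I recall the name correctly) should be the right one. A small Python sketch of the scripted call (file name as in the question):

import subprocess

# Convert with the Calc PDF filter instead of the Impress one.
subprocess.run([
    "soffice", "--headless",
    "--convert-to", "pdf:calc_pdf_Export",
    "filename.xls",
], check=True)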
If you are working on Windows, you may also want to consider OfficeToPDF.exe. It is hosted on CodePlex, licensed with the Apache 2.0 License and available in binary and in source code.
It requires a working Office 2013, Office 2010 or Office 2007 installation. But then it can command-line- and batch-convert various MS Office-based file formats to PDF, including XLS(X), PPT(X), DOC(X), VSD(X) and PUB, as well as Libre/OpenOffice-based ODT, ODS and ODC files.
Although this is a little bit off from the initial question (you don't really need LibreOffice if you have the Office suite and are on a Windows machine), I do appreciate the follow-up provided by Kurt. It prompted me to post the following Gist offering some clear instructions on how to go about using the .exe in a for loop:
https://gist.github.com/einsty/2189cae4175f619cff0f
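In the same spirit, a minimal Python sketch of such a loop, assuming OfficeToPDF's basic input/output positional usage (check the project page for the exact switches):

import glob
import os
import subprocess

# Batch-convert a folder of .xls files with OfficeToPDF.exe (Windows).
# Assumes the basic "OfficeToPDF <input> <output>" invocation.
for xls in glob.glob(r"C:\reports\*.xls"):
    pdf = os.path.splitext(xls)[0] + ".pdf"
    subprocess.run(["OfficeToPDF.exe", xls, pdf], check=True)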
Try copying the appropriate font file (for me it's the simsun.ttc file) to your LibreOffice installation directory, e.g. '/opt/libreoffice4.2/share/fonts/truetype'. But if the width of a single Excel sheet is too much for a print page (something like A4), it will still collapse.
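For example, a Python sketch of the copy step (paths are the ones from this answer and will differ on your system):

import shutil

# Copy the Chinese font into LibreOffice's bundled font directory so the
# headless conversion can find the glyphs.
shutil.copy("/path/to/simsun.ttc",
            "/opt/libreoffice4.2/share/fonts/truetype/")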
I have a macro that imports a spreadsheet as follows: (this spreadsheet is an export from a web-based application, and during the initial export the chosen format is 97-2003)
DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12, "d2s_safety_tbl", _
"\\company.com\dfsroot$\Share\office_public\D2S\D2S\D2S_Scorecard\Source Data\D2S\D2S Safety.xls", True
When importing to Access, I get:
Run-time Error '3274': External table is not in the expected format.
When I open this Excel file, I get a dialog
"The file you are trying to open is in a different format than specified by the file extension..."
So the file name is .xls and my computer tells me it's the 97-2003 format, but once I open the file and click Save, it defaults to saving it as a Web Page format with the option to save as .xls. What gives?
UPDATE: If I open the file, then Save As .xls format (seemingly redundant, but apparently not), it asks me if I want to overwrite the existing file, so I do. Once I go through this, the VBA import is successful. I can't have the clerk go through this process every week; is there any way to avoid this? Possibly in the initial export from the web-based application?
DoCmd.TransferSpreadsheet is refusing to import your .xls file because it is not really an .xls file, it is an HTML file that has been given an .xls file extension. Providing a "fake" file extension is a trick that I've seen other "developers" use, and it really is a Bad Idea (for the reasons we've seen here).
If the keepers of the upstream system balk at doing The Right Thing and fixing their code to produce a real .xls file, then try renaming the ".xls" file to .htm and importing it using
DoCmd.TransferText acImportHTML, ...
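To spare the clerk the weekly manual dance, the rename can be scripted before the import runs; a minimal Python sketch using the share path from the question:

import shutil

# The web app's ".xls" export is really HTML, so give it an honest
# extension before Access imports it with DoCmd.TransferText acImportHTML.
src = r"\\company.com\dfsroot$\Share\office_public\D2S\D2S\D2S_Scorecard\Source Data\D2S\D2S Safety.xls"
shutil.copy(src, src[:-4] + ".htm")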
I also got a 3274 error when importing a spreadsheet into Access. I have been using this macro for a while now.
The solution was to compact and repair the Access database.
I had the same problem along with another one (Office 2016 x64): pasting from Excel to Access raised a 'Data on the Clipboard is damaged...' error. I found a workaround by clicking the lower-right button in the Paste section of Access's Home tab, which opens the Clipboard pane.
After opening the Clipboard pane as stated above, my DoCmd.TransferSpreadsheet surprisingly worked fine.
I don't know why, but it may help those having the same problem and those trying to find a real solution.
Background:
We have a web application where the user can export orders in csv-format. For users with Microsoft Excel installed it's the default program. They simply click 'Open' after the file is downloaded. Users mainly use Internet Explorer, Firefox and Chrome. No difference in behaviour.
The problem:
If the user just opens the CSV file (from the browser or Explorer), the file is opened by Excel and the data is loaded automagically. But sometimes rows are just missing. Gone. No exception, no message, nothing.
The data is there: if you open the file with Notepad you'll see it.
(I suspect it has something to do with special characters, quotes, commas, etc., but I can't find a root cause for this.)
How to make it work:
If you save the file to disk, open Excel, and select File -> Open -> Format: Text files (*.prn, *.txt, *.csv, *.skv) -> Open, Excel will launch an import wizard and everything will work perfectly.
Is there anything I can do with the export file to either force the import wizard or just tell Excel not to exclude our critical order information without warning?
I think I've found the cause of this. It seems to be due to regional settings, as described in this post and in suggestions from superuser.com.
I solved it by wrapping all fields except numbers in quotes, and now it works just fine.
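If the export is generated in code, the quoting policy can be applied mechanically; a minimal Python sketch with made-up order data:

import csv

# QUOTE_NONNUMERIC wraps every non-numeric field in double quotes, so
# embedded commas, quotes and line breaks can no longer split a row.
rows = [["order-1", 'says "hello", twice', 42.0],
        ["order-2", "line one\nline two", 7.5]]
with open("orders.csv", "w", encoding="utf-8", newline="") as f:
    csv.writer(f, quoting=csv.QUOTE_NONNUMERIC).writerows(rows)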
Make sure that you use a consistent delimiter scheme across all lines in the file. For example, check if you have both \n and \r\n sequences as line delimiters.
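If mixed line endings are the culprit, normalizing them before Excel sees the file is straightforward; a Python sketch (file name is illustrative):

# Normalize mixed \n and \r\n line endings to \r\n throughout the file.
# newline="" stops Python from translating line endings behind our back.
with open("orders.csv", "r", encoding="utf-8", newline="") as f:
    text = f.read()
text = text.replace("\r\n", "\n").replace("\n", "\r\n")
with open("orders.csv", "w", encoding="utf-8", newline="") as f:
    f.write(text)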