A third-party program, 'Eclipse Orchestrator', saves its config file in CSV format. Among other things it includes camera exposure times like '1/2000' to indicate a 1/2000 sec exposure. Here are some sample lines from the CSV file:
FOR,(VAR),0.000,5.000,49.000
TAKEPIC,MAGPRE (VAR),-,00:01:10.0,EOS450D,1/2000,9.0,100,0.000,RAW+FL,,N,Partial 450D
ENDFOR
When the CSV file is loaded into Excel, the cell displays 'Jan-00', so Excel interprets the string 1/2000 as a date. When the file is saved again as CSV and inspected in an ASCII editor, it reads:
FOR,(VAR),0,5,49,,,,,,,,
TAKEPIC,MAGPRE (VAR),-,01:10.0,EOS450D,Jan-00,9,100,0,RAW+FL,,N,Partial 450D
ENDFOR,,,,,,,,,,,,
I had hoped to use Excel to parameterize the data and make it easier to change, but the conversion to fake dates is not helping here.
The conversion at load time affects the saved data format, making the file unreadable for the 'Eclipse Orchestrator' program.
Is there any way to save the day in Excel, or should I just move on and write a program to patch the CSV file?
Thanks,
Gert
If you import the CSV file instead of opening it, you can use the import wizard (Data ribbon > From Text) to define the data type of each column. Select Text for the exposure time and Excel will not attempt to convert it.
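If you decide to skip Excel entirely, Python's csv module never guesses at data types, so a small script can patch the exposure values and write the file back untouched. This is only a minimal sketch: the file names, the column position, and the replacement exposure are assumptions based on the sample line above, not anything documented by Eclipse Orchestrator.

import csv

# Read the whole config file; csv.reader keeps every field as a plain string.
with open('eclipse_orchestrator.csv', newline='') as src:
    rows = list(csv.reader(src))

# Hypothetical patch: change every 1/2000 exposure on a TAKEPIC line to 1/1600.
# Index 5 is the exposure column in the sample line shown above.
for row in rows:
    if row and row[0] == 'TAKEPIC' and len(row) > 5 and row[5] == '1/2000':
        row[5] = '1/1600'   # stays a plain string, never becomes a date

with open('eclipse_orchestrator_patched.csv', 'w', newline='') as dst:
    csv.writer(dst).writerows(rows)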
I have a dictionary that I want to write to a CSV file. When I write it, the string value becomes a float, but I need the same string value in the CSV file, not a float. Any idea?
import csv

mydict={'date':int(20200729),'number':int(123),'code':int(707),'cipher':str('54545417e92')}
print mydict.values()
with open('formatting.csv','ab') as f:
    w=csv.writer(f)
    w.writerow(mydict.keys())
    w.writerow(mydict.values())
Manually increase the width of the column and you'll see the displayed format change.
Since you are writing a csv file (which, unlike xlsx, does not contain its own styling and formatting), it's not related to Python and there's nothing Python can do to make Excel use a specific format.
Like DeepSpace said, this comes from what Excel is doing, not from what Python is doing. But in my experience, once Excel has opened the file and assumed your data to be a float, you cannot get that precision back. I suggest viewing your data raw by opening the .csv file in a text editor instead of Excel.
If you must open the file in Excel, there is a different way to do it. Open a blank Excel document, then go to the Data tab and click "From Text/CSV". Follow the prompts and use the wizard to import your data. This way you can make sure the data type does not change from string to float.
EDIT - as a side note, I see that you tagged your question with "python 3.x", but in your example you use the old Python 2 syntax of print "a string". Starting in Python 3.0, you must use print("a string").
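For reference, a Python 3 version of that snippet would look roughly like this; note the text-mode 'a' plus newline='' in place of the old binary 'ab':

import csv

mydict = {'date': 20200729, 'number': 123, 'code': 707, 'cipher': '54545417e92'}
print(mydict.values())

# In Python 3 the csv module expects a text-mode file opened with newline=''.
with open('formatting.csv', 'a', newline='') as f:
    w = csv.writer(f)
    w.writerow(mydict.keys())
    w.writerow(mydict.values())

Either way, the file itself contains the literal string 54545417e92; how Excel chooses to display it afterwards is a separate matter, as explained above.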
I am having problems converting an MS Access table that contains a 12-digit barcode number field to a CSV file.
The barcode field is defined as text!
I tried exporting to Excel and then saving the Excel file as CSV, and also exporting directly to CSV, but that did not work either (even when the field is defined as text).
The problem is that some barcodes start with a zero, which gets truncated, and Excel displays scientific notation instead of the barcode string.
My question is: how can I generate a CSV file that displays correctly when it is opened as an Excel spreadsheet?
Any help is appreciated.
Dory
Nick McDermaid, thanks for your comment. When looking in a text editor everything looks perfect... You mean the people requesting it on my website are actually using it as a text file and do not care about the way it looks in a spreadsheet? If so, then I am just on a wild goose chase! Is that what you mean?
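For what it's worth, if the CSV really is only ever going to be opened in Excel, one widely used workaround (not something Access or Excel does for you automatically) is to write each barcode as an Excel formula of the form ="000123456789"; Excel then treats the value as text and keeps the leading zeros. A rough sketch in Python, assuming you post-process the exported CSV and the barcode sits in the second column; the file names and the column index are placeholders:

import csv

with open('barcodes_export.csv', newline='') as src, \
     open('barcodes_for_excel.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    for row in reader:
        if len(row) > 1:
            row[1] = '="{}"'.format(row[1])   # force Excel to treat the barcode as text
        writer.writerow(row)

The downside is that anything other than Excel will see the ="..." wrapper, so keep the plain export around for any other consumers.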
I am using a piece of software that starts by importing some CSV files. These CSV files are given to me, but I need to make some changes and import them again into that software in order to get some results. If I just open these CSV files and save them again without making any changes, I get the message 'Some features in your workbook might be lost'. If I then import the new CSV file, which in reality is identical to the original, the software I am using fails to run.
As I understand it, something in the CSV files changes just from opening and saving them. Does anybody know what is happening?
Thank you in advance!
Consider the following example csv file:
toto,titi,tata
1,2,3,4,5
1,2,3,4
1,2,3
1,2
1
1,2,3,4,5
1,2,3,4
1,2,3
1,2
1
Notice that not every row has the same number of elements. If I load it into Excel and then save it back as a CSV file, Excel will add the necessary delimiters (commas in my example) so that every row has the same number of "columns" (even if some are empty).
Sure enough, if I open the new .csv file saved by Excel in a normal text editor, I get:
toto,titi,tata,,
1,2,3,4,5
1,2,3,4,
1,2,3,,
1,2,,,
1,,,,
1,2,3,4,5
1,2,3,4,
1,2,3,,
1,2,,,
1,,,,
This is the behaviour of Excel, and I couldn't find an option in Excel to change it. If this is what makes your program's import fail, you'll have to consider making your changes to the CSV file in a normal text editor (which doesn't make automatic assumptions the way Excel does).
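If editing in a text editor is not practical, stripping the padding afterwards is also easy to script. A minimal sketch in Python (the file names are placeholders):

import csv

# Remove the trailing empty fields Excel pads each row with,
# restoring the original ragged layout.
with open('saved_by_excel.csv', newline='') as src, \
     open('restored.csv', 'w', newline='') as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        while row and row[-1] == '':
            row.pop()          # drop empty cells from the end of the row only
        writer.writerow(row)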
I want to upload my Excel workbook into Azure Machine Learning Studio. The reason is that I have some data that I would like to join with my other .csv files to create a training data set.
When I go to upload my Excel file, the accepted formats do not include .xlsx or .xls, only other extensions such as .csv, .txt, etc.
This is how it looks:
I uploaded it anyway and now I am getting weird characters. How can I get my Excel workbook uploaded with its sheets, so that I can join the data and do data preparation? Any suggestions?
You could save the workbook as a (set of) CSV file(s) and upload them separately.
A CSV file, a 'Comma Separated Values' file, is exactly that: a flat file with values separated by commas. If you upload an Excel file it will get messed up, since an Excel file contains far more information than just values separated by commas. Have a look at File -> Save as -> Save as type, where you can select 'CSV (comma delimited) (*.csv)'.
Disclaimer: no, it's not always a comma...
In addition, the term "CSV" also denotes some closely related delimiter-separated formats that use different field delimiters. These include tab-separated values and space-separated values. A delimiter that is not present in the field data (such as tab) keeps the format parsing simple. These alternate delimiter-separated files are often even given a .csv extension despite the use of a non-comma field separator.
Edit
So apparently Excel files are supported: Supported data sources for Azure Machine Learning data preparation
Excel (.xls/.xlsx)
Read an Excel file one sheet at a time by specifying sheet name or number.
But also, only UTF-8 is supported: Import Data - Technical notes
Azure Machine Learning requires UTF-8 encoding. If the data you are importing uses a different encoding, or was exported from a data source that uses a different default encoding, various problems might appear in the text.
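If you'd rather script the conversion than save each sheet by hand, something along these lines works, assuming pandas (with an Excel engine such as openpyxl) is available; the file name is a placeholder:

import pandas as pd

# Read every sheet of the workbook and write each one out as its own
# UTF-8 encoded CSV, which is the encoding Azure Machine Learning expects.
sheets = pd.read_excel('workbook.xlsx', sheet_name=None)   # dict of DataFrames, one per sheet
for name, frame in sheets.items():
    frame.to_csv('{}.csv'.format(name), index=False, encoding='utf-8')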
The first few lines of my CSV file look like this (when viewed from Notepad++):
Trace,Original Serial Number,New Serial number
0000073800000000097612345678901234567890,0054,0001
When I open this file in Excel, I get this:
For some reason, Excel is truncating the serial numbers and the trace number. I have tried changing the format to Text, but that still doesn't work, as Excel only keeps the value up to the 6 (it stores at most 15 significant digits):
7.38000000000976E+34
If I change it to Number:
73800000000097600000000000000000000.00
What can I do? I only have 60 lines, so if I have to start over and somehow re-copy the text into Excel I will, but I'm afraid saving it will change the format once again.
You shouldn't need to start over or alter the existing CSV. The fastest way might be to use Excel's text import wizard. On the Data tab, under Get External Data, click From Text and select your CSV file.
The wizard that appears will let you tell Excel the data type of each "column", and you can tell it to use Text for the trace and serial numbers.
Excel is trying to "help" you by formatting the input values. To avoid this, do not double-click the file to open it. Instead, open the Data tab and, in the Get External Data section, click on From Text.
Then tell the Import Wizard that the fields are Text:
One solution that may work for you, depending on the environment in which you consume the CSV, is to add a non-numeric character (e.g. an underscore) to the beginning and end of each value. This forces Excel to recognize the values as text. You can then remove the underscores in your downstream environment (SQL, Databricks, etc.), or even keep them if they don't interfere with your reporting.
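As a rough illustration of that idea in Python, assuming the CSV is rewritten before it reaches Excel and the header row should stay untouched (file names are placeholders):

import csv

with open('serials.csv', newline='') as src, \
     open('serials_underscored.csv', 'w', newline='') as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))                          # header passes through unchanged
    for row in reader:
        # wrap each value in underscores so Excel keeps it as text
        writer.writerow(['_{}_'.format(v) for v in row])

Downstream you can strip the markers again with something like value.strip('_') in Python, or the equivalent in SQL.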