EViews can't export to Excel when print titles are defined

We want to save time when exporting to Excel from EViews. To do this we export a long list of variables in one command; in total, almost 800 rows are exported at once. We want titles at the top of each page, and to keep things readable we have at most about 70 rows per page. We cannot put titles on the rows at the top of each page, because then the titles would be overwritten by EViews, so we use the Print Titles option in Excel.
The problem is that when we use Print Titles we get an error we do not get otherwise, and afterwards the Excel file loses all its old data.
The name cannot be the same as a built in name.
Old name: Print_Titles
Is there a way to get around this?
EDIT: Depending on how the print titles are stored in the xlsx file, the file either gets overwritten or only the print titles get "removed". When I unzip the xlsx file and read the XML I get this line:
<definedName name="_xlnm.Print_Titles" localSheetId="0">MySheet!$1:$1</definedName>
If I do the same after the export, the _xlnm. prefix has been removed and I get this:
<definedName name="Print_Titles" localSheetId="0">MySheet!$1:$1</definedName>
And then the print titles don't work and have to be set up manually again.

This seems to be a bug in either Excel or EViews. After a lot of contact with the people at EViews, we got a workaround: use the Excel 2003 (.xls) file format. Then the print titles are not removed.
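If the Excel 2003 format is not an option, another possible workaround (not something EViews support suggested, just a sketch assuming Python with openpyxl and the sheet name from the XML above; the file name is a placeholder) is to re-apply the print titles after each export, which rewrites the built-in _xlnm.Print_Titles name:

    # Sketch: restore the print titles that the export strips out.
    # Assumes the exported workbook is export.xlsx and the sheet is MySheet.
    from openpyxl import load_workbook

    wb = load_workbook("export.xlsx")
    ws = wb["MySheet"]
    ws.print_title_rows = "1:1"   # stored as the built-in _xlnm.Print_Titles name
    wb.save("export.xlsx")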

Related

How do you guarantee that an incoming Excel file is from your original data source and not a fake?

I have a series of Excel files that I send out to customers. They fill them out and send them back with their info. How do I ensure that the Excel files coming back in are the same ones I sent out and don't just share the same title and row/column names?
The data could be falsified in a file with the same title and rows/columns. Ideally I need some kind of fingerprint, artifact, or key attached to each Excel file that ensures it came from my original data source.
I used to add white (invisible) characters to headings as one simple trick.
Or I would put odd names combined with dates into cells in rows far below or columns far to the right.
I even inserted a name using Insert > Name. You can also define names with VBA, and sometimes deleting does not completely remove them - I used that to hide passwords...
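A more robust variant of those tricks is to stamp a keyed fingerprint into an out-of-the-way cell when the file goes out and verify it when it comes back. A minimal sketch, assuming Python with openpyxl; the secret key, sheet name and cell addresses are all placeholders:

    # Sketch: write an HMAC token tied to a per-file ID into a far-right cell,
    # then check it when the file is returned.
    import hmac, hashlib
    from openpyxl import load_workbook

    SECRET = b"key-known-only-to-you"          # placeholder secret, never shipped

    def token(file_id, header_cells):
        msg = file_id + "|" + "|".join(str(c.value) for c in header_cells)
        return hmac.new(SECRET, msg.encode(), hashlib.sha256).hexdigest()

    def stamp(path, file_id, sheet="Sheet1"):
        wb = load_workbook(path)
        ws = wb[sheet]
        ws["XFD1"] = file_id                   # far-right column, easy to overlook
        ws["XFD2"] = token(file_id, ws[1])     # ws[1] is the header row
        wb.save(path)

    def verify(path, sheet="Sheet1"):
        ws = load_workbook(path)[sheet]
        return ws["XFD2"].value == token(str(ws["XFD1"].value), ws[1])

This only proves the stamped cells came from you; to stop someone copying them into another file, tie the file_id to the customer and cross-check it against your own records.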

AnyLogic: Wrong number format when exporting data to Excel

I collect various data in time plots. If I copy the time plot data and then paste it into Excel, the number format is often wrong. For example, I often get a date like Aug 94 instead of the actual number from the time plot. Unfortunately, I can't easily format this date back into a number either, since the reformatted number does not match the actual number from the time plot. If I format the date in the same format as the numbers above and below it, I get the number 34547; however, this number does not correspond to the actual number in the time plot. Does anyone know how I can prevent this problem?
You can only solve this on the Excel side; AnyLogic just provides the raw data, and Excel then interprets it. You can test this by pasting the raw chart data into a txt or csv file.
So either fix your Excel settings or paste into a csv first, then into an xlsx.
Or better still: do not paste manually at all. Instead, write your model results into the AnyLogic database and export to Excel from there; this takes away a lot of the pain. Check the example models to learn how to do that.
This is not an AnyLogic question, but rather an Excel and computer formatting problem. One way of resolving it is to change the computer's date and time settings.
Another way is to save your output as a txt file in AnyLogic and replace all . with ,. Then open an empty Excel workbook, select the Text format for the columns, and copy-paste from the txt file.
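If you end up doing that dot-to-comma replacement regularly, it is easy to script; a minimal sketch in Python (the file names are assumptions):

    # Sketch: rewrite the AnyLogic txt export with decimal commas.
    with open("anylogic_output.txt", encoding="utf-8") as src:
        text = src.read()
    with open("anylogic_output_commas.txt", "w", encoding="utf-8") as dst:
        dst.write(text.replace(".", ","))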
In Excel there are a few options:
1. When you paste, use the "paste as text only" option. This does not always work, as Excel will still try to format the data for you.
2. Use the Paste Special option and then choose Text. It is also possible this will not work, depending on your Excel settings.
3. Paste using the Text Import Wizard. (This works for me without fail.) On step 2 choose tab delimited; on step 3 set the column format to Text for every column (you need to select them in the little diagram below). You will then see the data exactly as it came from AnyLogic. See the example below, where I purposely imported some text containing something that Excel will think is a date. You will now be able to see what in your data made Excel think it needed to be formatted the way it is, and then you can fix it. (Post a new question if you struggle with this conversion.)
But as noted by the other answers, first prize is to write all the important data to external files. I know that even I sometimes want to export data from a chart and review it in Excel, though, and option 3 works for me every time.
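If the wizard gets tedious, the same treat-everything-as-text idea can be scripted; a sketch assuming Python with pandas and openpyxl installed, and that the raw chart data was saved to a tab-delimited file called raw.txt:

    # Sketch: load the raw export as plain strings and write it to xlsx,
    # so Excel never gets the chance to reinterpret values as dates.
    import pandas as pd

    raw = pd.read_csv("raw.txt", sep="\t", dtype=str)   # dtype=str keeps values verbatim
    raw.to_excel("raw_as_text.xlsx", index=False)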

Find specific values and extract them in Excel

I am trying to automate a process where I need to search an Excel file to find specific values. Below is an example of an Excel file I get:
Now, the Excel file contains all sorts of information, but I only need to get the values below (which can occur on multiple pages). The setup is the same for all pages.
I need to get the "Tariff" number and the value for that tariff number, then add them all up. So, for example, I would end up with this:
Is this even possible? And how would I go about this?
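It is possible, though more easily outside Excel than inside it. A sketch assuming Python with openpyxl and that each "Tariff" label sits in one cell with its value in the cell immediately to the right (adjust to the real layout, which is not shown here; the file name is a placeholder):

    # Sketch: find every "Tariff" cell, read the value next to it,
    # and sum the values per tariff number.
    from collections import defaultdict
    from openpyxl import load_workbook

    totals = defaultdict(float)
    wb = load_workbook("input.xlsx", data_only=True)    # placeholder file name

    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if isinstance(cell.value, str) and "Tariff" in cell.value:
                    amount = ws.cell(row=cell.row, column=cell.column + 1).value
                    if isinstance(amount, (int, float)):
                        totals[cell.value] += amount

    for tariff, total in totals.items():
        print(tariff, total)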

Jira is not recognising my upload from CSV

I'm creating a CSV template for some analysts; they fill it in and I then do a bulk upload to Jira.
I want to upload the entries as defects. The issue I'm facing is:
When filling out a defect I have a label where I want to select one of the options; for example, I have a label called 'Label A' and it has 3 options in a list.
In the Excel file I put 'Label A' in the top row, and under it, for one of the entries, I put the full name of one of the options (as displayed in Jira), for example 'Option A'. I write this in the Excel file simply as: Option A
But after uploading, Jira does not recognise this and returns a validation error.
The same happens for a tick-box label, e.g. 'Label B'.
However, for any free-text field (something that requires free text and is not a multiple-choice option), such as 'Summary', I can put in any random text, e.g. 'abcd', and it validates fine.
So my question is: what am I doing wrong with the way I'm formatting my CSV when I upload answers to the multiple-choice parts of a defect?
I think if you create a sample issue in Jira (like the one you need to upload), you can then export that issue (with the 'Export all fields' option) and analyse the output Excel file. Then you can understand the input format that Jira requires from your CSV file.
UPDATED
The other thing you can do is read the Jira log file; it will sometimes tell you the actual error that occurred.
Did you export your created issue with this option? See the screenshot below.
The approach will depend on the field types you are using.
For example, if you were loading a simple text field, then the text in the CSV file will just be inserted into the text field.
If, however, you are populating a custom field that is represented by a radio button or a drop-down list box, then you will need to use the field-mapping option that is offered during the CSV import.
Say you had a radio button that said either 'true' or 'false'. You would tick the mapping option for this field during the CSV import and configure it to map true -> true and false -> false. You can also do this mapping in the CSV file itself.
You can see more details on this link:
Atlassian - Importing Data from CSV
The approach you can follow is as below:
Count the number of labels in the Issue you are trying to import.
Every label should go into its own separate column for it to be imported properly.
E.g. if there are 5 labels for an issue, create 5 Labels_CSV (or whatever name suits you) columns in the CSV header row and put the 5 labels in the data row.
Once the CSV is created, try to upload it with your existing config file, which has a mapping for Labels_CSV --> Labels.
Voila, the multiple labels will be imported properly.
Let me know if you have any queries.
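For reference, this is roughly how such a CSV could be generated with Python's csv module; the column names and values below are only an illustration of the repeated-column idea, not your real field names:

    # Sketch: one Labels_CSV column per label, repeated in the header row.
    import csv

    issues = [
        {"Summary": "Sample defect", "Labels": ["labelA", "labelB", "labelC"]},
    ]
    max_labels = max(len(i["Labels"]) for i in issues)

    with open("jira_import.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Summary"] + ["Labels_CSV"] * max_labels)
        for i in issues:
            labels = i["Labels"] + [""] * (max_labels - len(i["Labels"]))
            writer.writerow([i["Summary"]] + labels)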

Different results exporting to CSV or Excel

I have a simple report that I want to export to a CSV file. There is only the detail line, which is grouped by one field, with no group header and a group footer for totals. The problem is that when I export to CSV format, the group's total row is listed in front of every record.
If I export to Excel and then save as a CSV file, the total row is where it belongs. However, one field is spread across 3 columns that are "merged and centered", which adds two commas in the middle of the line. And one column is added at the beginning of the record and two at the end, for 3 more extra commas.
It would be easy enough to write a macro to "clean up" the spreadsheet and export it as a CSV file for my end users. However, corporate "insecurity" will not allow the end users to have macros.
Any help, suggestions, pointers to where else to look greatly appreciated.
cheers
bob
The CSV generated by any standard reporting tool is a flat data structure, and hence the group data gets repeated on every record.
The XLS generated by reporting tools is typically meant to be opened in Excel, and it is Excel's default behaviour to put additional commas in for every merged cell when saving as CSV.
The best way is to create a report layout with equal-length columns even for the header, i.e. while formatting the report do not put the header in the centre with a larger length, bold, italics, etc.; put it as the first column and match its length to the data in the detail record.
This way you can create a report that does not look as presentable in XLS but gives you the required data in the CSV.
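If redesigning the report is not possible, the padding columns can also be stripped in a post-processing step outside Excel, so no workbook macros are involved. A sketch in Python, assuming the extra columns created by the merge and the exporter are empty on every row (the file names are placeholders):

    # Sketch: drop columns that are empty on every row, then rewrite the CSV.
    import csv

    with open("report_from_excel.csv", newline="") as f:
        rows = list(csv.reader(f))

    width = max(len(r) for r in rows)
    rows = [r + [""] * (width - len(r)) for r in rows]            # pad ragged rows
    keep = [i for i in range(width) if any(r[i].strip() for r in rows)]

    with open("report_clean.csv", "w", newline="") as f:
        writer = csv.writer(f)
        for r in rows:
            writer.writerow([r[i] for i in keep])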
