PowerBuilder - Keep column names when saving in Excel format

I am kind of new to PowerBuilder and I'd like to know if it is possible to keep the "visible" column names when using the SaveAs() method of my DataWindow. Currently, my report shows columns like "Numéro PB" or "Poste 1-3", but when I save, the file shows the database names instead, i.e. "no_pb" and "pos_1_3"...
As I am working on a deployed application, I have to make my changes and implementations as user-friendly as possible, and the users won't understand those database names.
I already use the dw2xls API to save an exact copy of the report, but the users want an option that saves only the raw data, and I don't think I can achieve that with its API.
Also, I was asked not to use the Excel OLE object to do it...
Anyone's got an idea?
Thanks,
Michael

dw.SaveAs(<string with filename and path>, CSV!, TRUE) saves the DataWindow data as a comma-separated text file with the first row holding the column headers (the database names from the DataWindow painter).
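For example (the path and DataWindow name are hypothetical):
dw_1.SaveAs("C:\temp\report.csv", CSV!, TRUE)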
To set your own column headings in a SaveAs, you could first access the data with
any la_dwdata[] // declare array
la_dwdata = dw_1.Object.Data // get all data for all rows in dw_1 in the Primary! buffer
From here you would create an output file consisting initially of a header row with the column names you want, followed by the data from the array converted to strings (you loop through the array). If you insert commas between the values and name the file with a .CSV extension, it will load into Excel. Since this approach will also include any non-visible data, you may need extra logic to exclude those columns if the users don't want to see them.
Alternatively, instead of looping over the array, you can grab the whole Primary! buffer as one string via dw_1.Describe("DataWindow.Data"); that gives you lines of data separated by tabs, each ending with a CRLF. Create your 'header string' with the user-friendly column names in the format 'blah,blah,blah~r~n' (three values separated by commas with a CRLF at the end), prepend it to the data string, then loop with Pos() and Replace() to change the remaining tabs into commas. Save the string to a file with a .CSV extension and you can load it into Excel.
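Here's a minimal sketch of the build-your-own-CSV idea, using per-row GetItem* calls rather than the Any array. For illustration it assumes just the two columns from the question (no_pb as a number, pos_1_3 as a string) and a hypothetical output path; adjust the column list, the GetItem* calls, and the path to match your DataWindow:
long   ll_row, ll_file
string ls_out
ls_out = 'Numéro PB,Poste 1-3~r~n'  // user-friendly header line
for ll_row = 1 to dw_1.RowCount()
    // convert each item to a string and separate the values with commas
    ls_out += String(dw_1.GetItemNumber(ll_row, "no_pb")) + "," + &
        dw_1.GetItemString(ll_row, "pos_1_3") + "~r~n"
next
ll_file = FileOpen("C:\temp\report.csv", StreamMode!, Write!, LockWrite!, Replace!)
FileWrite(ll_file, ls_out)
FileClose(ll_file)
Note that FileWrite() writes at most 32,765 bytes per call, so for a large report call it once per row (or in chunks) instead of building one big string.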

This assumes that your display columns match your raw columns. Create a DataStore ds_head and set your report's DataWindow object as its DataObject, without retrieving any data; I'll call the DataWindow containing the report you want to save dw_report. You'll want to delete the two temporary files when you're done. You may need to specify EncodingUTF8! or some other encoding instead of ANSI, depending on the data in the DataWindow. Note: Excel will open this CSV, but some other programs may not like it because the header row has a trailing comma.
datastore ds_head
ds_head = create datastore
ds_head.dataObject = dw_report.dataObject  // same columns as the report, zero rows
ds_head.saveAsFormattedText("file1.csv", EncodingANSI!, ",")  // header row only
dw_report.saveAs("file2.csv", CSV!, FALSE, EncodingANSI!)  // data only, no headers
// copy is a cmd.exe builtin, so go through cmd /c; "+" concatenates the two files.
// Note that Run() returns immediately rather than waiting for the copy to finish.
run("cmd /c copy file1.csv + file2.csv output.csv")

Related

How do you guarantee that an incoming Excel file is from your original data source and not a fake?

I have a series of Excel files that I send out to customers. They fill them out and send them back with their info. How do I ensure that the Excel files coming back in are the same ones I sent out and don't just share the same title and row/column names?
The data could be falsified with the same title and rows/columns. Ideally, I need some kind of fingerprint, artifact, or key attached to each Excel file that proves it came from my original data source.
I used to add white characters to headings as one simple trick.
Or I would put odd names combined with dates into cells in rows way below the data or in columns far to the right.
I even inserted a name using Insert > Name. You can also define names with VBA, and sometimes deleting does not completely remove them - I used that to hide passwords...

Csv writer escape semicolon python [duplicate]

I am using Excel for Mac 2016 on macOS Sierra. Although I have been successfully copying and pasting CSV data into Excel for some time, it has recently begun to behave in an odd way. When I paste the data, the content of each row splits over many columns. Whereas before one cell could contain many words, now each cell seems able to hold only one word, so content that would normally be in one cell is split over many cells, making some rows of data spread out over up to 100 columns!
I have tried Data tab >> From Text, which takes me through the Text Wizard. There I choose Delimited >> Choose Delimiters: untick the 'Space' box (the 'Tab' box is still ticked) >> column data as 'General' >> Finish. Following this process imports the data into its correct columns. It works, but it is a lot of work to get there!
Question: Is there any way to change the default settings of Delimiters, so that the 'Space' delimiter does not automatically divide the data?
I found an answer! It has to do with the "Text to Columns" function:
The way to fix this behavior is:
Select a non-empty cell
Do Data -> Text to Columns
Make sure to choose Delimited
Click Next >
Enable the Tab delimiter, disable all the others
Clear Treat consecutive delimiters as one
Click Cancel
Now try pasting your data again
I did the opposite regarding "consecutive delimiters"!
I put a tick in the box next to "Treat consecutive delimiters as one", and THEN it worked.
Choose delimiter directly in CSV file to open in Excel
For Excel to be able to read a CSV file with the field separator used in that file, you can specify the separator directly in the file. To do this, open your file in any text editor, say Notepad, and type the appropriate string below before any other data:
To separate values with comma: sep=,
To separate values with semicolon: sep=;
To separate values with a pipe: sep=|
In a similar fashion, you can use any other character as the delimiter - just type the character after the equals sign.
For example, to correctly open a semicolon-delimited CSV in Excel, we explicitly indicate that the field separator is a semicolon, as in the sample below.
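A made-up sample of such a file (the data rows are just placeholders):
sep=;
id;name;city
0123;Jean;Montréal
0456;Marie;Québec
Excel consumes the sep= line when it opens the file, uses the semicolon as the delimiter, and does not show that line in the worksheet.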

pentaho report excel output - leading '0' gets truncated

I have a format issue with my Pentaho report Excel/CSV output.
My report output contains a zip code column, which has leading zeroes when the zip code is shorter than 5 digits. The leading zeroes get truncated when I open the report output in Excel. I used a 'textfield' for the zip code column, and I even tried concatenating zeroes in my xaction SQL. Everything works fine if I open the output in a text editor, but when we open it in Excel the zeroes get trimmed.
Can we prevent this trimming issue, or can we use other data fields in the design instead of a text field?
Change the extension of your CSV to .txt so you get Excel's dialogue boxes for importing text files; there you can select the comma as your column delimiter. On the third screen (after you hit "Next" twice), there is an option to choose the formatting of each column. Select your zip code column, change it from "General" to "Text" format, and your leading zeroes will be retained.
Use text formatting via Home --> Number --> Special.
Hope it helps
I don't know whether it is proper or not, but enclose the field in double quotes (or single, whichever you prefer).
The quotes will not display in the Excel file, but they will display in a text editor like TextPad or Notepad.
So if you don't have any problem with adding this extra bit, it will solve your problem.
What is the original data format in your DB? Is it an INT?
In your SQL statement, try something like this (adjust for the relevant SQL dialect, if necessary):
lpad(cast(zip as CHAR(5)),5,'0') zip
where zip is your field name; for example, a value of 123 comes out as '00123'.
Then use text-field as you are already doing.

Export data from Access to Excel without losing leading zeroes

I have a table in Access that I am exporting to Excel, and I am using VBA code for the export (because I create a separate Excel file every time the client_id changes, which produces 150 files). Unfortunately, I lose the leading zeroes when I do this using DoCmd.TransferSpreadsheet. I was able to resolve this by looping through the records and writing each cell one at a time (formatting the cell before I write to it), but that leads to 8 hours of run time. Using DoCmd.TransferSpreadsheet, it runs in an hour (but then I lose the leading zeroes). Is there any way to tell Excel to treat every cell as text when using the TransferSpreadsheet command? Can anybody think of another way to do this that won't take 8 hours? Thanks!
Prefix the Excel value with an apostrophe (') character. The apostrophe tells Excel to treat the data as text.
As in;
0001 'Excel treats as number and strips leading zeros
becomes
'0001 'Excel treats as text
You will probably need to create an expression field to prefix the field with the apostrophe, as in;
SELECT "'" & [FIELD] FROM [TABLE]
As an alternative to my other suggestion, have you played with Excel's Import External Data command? Using Access VBA, you can loop through your clients, open a template Excel file, import the data (i.e. pull instead of push) with the client as a criterion, and save the file with a unique name for each client.
What if you:
In your source table, change the column type to string.
Loop through your source table and add an "x" to the front of each field value.
If the Excel data is meant to be read by a human being, you can get creative, like hiding your data column, and adding a 'display' column that references the data column, but removes the "x".

Excel changes date formats

I run a process that produces a rather large CSV file of data. Sometimes I find it helpful to open the CSV in Excel, make changes manually, and then re-save. However, if there are dates in the CSV, Excel automatically reformats them. Is there a way to prevent Excel from doing this? It would be helpful if I could turn off all text formatting altogether.
If you prepend an apostrophe ' to the beginning of any date string during the export process, Excel will read the value literally (i.e. as plain text) rather than trying to convert it to a date.
This particular solution is handled during the export process. I'm not sure how you would change Excel to treat the file differently at runtime.
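For instance (a made-up row), if the export currently writes a line like
9/12/2013,ACME,100
emitting it instead as
'9/12/2013,ACME,100
makes Excel display the first field as the literal text 9/12/2013 instead of reformatting it as a date.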
Excel does some nasty tricks when writing out CSV. One of its tricks is to drop the trailing column delimiters when 16 or so consecutive rows have no values for the right-most columns. This means that if you're splitting the lines up based on commas, those rows will have a different number of columns than the rest.
It will also drop any initial 0's, so things like numeric IDs can become messed up.
Another risk you run is chopping the file off short, since Excel can only support a maximum number of rows (prior to Excel 2007, this was around 65,536).
If you need to do anything to a CSV file other than read it, use a text editor.
When you import the CSV file into Excel, be sure to pre-format the date column as text. There's a frequently overlooked option in the import wizard that allows you to control the format column by column. This also works well for preventing the leading zeroes of New England ZIP codes from getting dropped in your contact lists.
If you use Excel 2010 or later (I'm not sure about earlier versions), you can choose whether the current operating-system date format is applied to a date in an Excel/CSV file.
Right-click a cell with a date value (e.g. '9/12/2013') in the CSV file to bring up the menu.
Click 'Format Cells' to open the dialog.
Go to the 'Number' tab; 'Date' is selected under 'Category' (left side), with 'Type' on the right side.
Note that there are two kinds of date formats listed: those marked with an asterisk (*) and those without. The comment there explains that formats beginning with an asterisk follow the operating system's regional date settings. Pick a format without the asterisk, and your changes to the CSV file will not be affected by the current operating-system date format.
