APPEND FROM Excel sheet gives strange error

I have a problem with the APPEND FROM statement in Visual FoxPro. I cannot APPEND FROM an Excel sheet without getting this error:
Function name is missing (
I'm working on processing some legacy data stored in a FoxPro database. I'm reading it, processing it in .NET, and then writing it back to a new FoxPro database. However, the writing part is not working. Unfortunately using another database is not an option. And yes, I am a FoxPro newbie.
I do get INSERT statements to work, but it would be useful if I could get APPEND FROM an external file to work as well, and also be able to hydrate memo fields. As far as I know you can't do that with CSV files in FoxPro, only with Excel and some other formats.
To demonstrate the problem I'm using the Address Book sample database that comes with Visual FoxPro.
I run this query in the query window in VFP:
USE "ADDRESS BOOK!ADDRESSES"
APPEND FROM D:\tmp\excel_data2.xls FIELDS (addressid, firstname) DELIMITED XLS
The .xls file is an Excel 97-2003 workbook and looks like this:
 A | B
---+----------
23 | Sample 1
24 | Sample 2
I think the syntax should be correct according to this doc: http://msdn.microsoft.com/en-us/library/aa977271(v=vs.71).aspx
However, running this query just gives me the "Function name is missing (" error. I've tried all sorts of rewrites and variations of this query, but I just can't figure out what the problem is. Any help would be appreciated, thanks.

Not to steal the show, but this is how I got it working:
Ensure the XLS file is stored in the Excel 5.0/95 format (basically an ancient Excel format, but more than sufficient for data entry).
Close the Excel file, otherwise you will get an error about the file being locked/open in another app.
I used the following amended APPEND FROM statement (no parentheses around the field list, and no DELIMITED keyword, which conflicts with XLS) and it worked:
USE "ADDRESS BOOK!ADDRESSES"
APPEND FROM D:\tmp\excel_data2.xls FIELDS addressid, firstname XLS

LAK was correct, but I will clarify for your app and possible future encounters with Excel imports. If your table's column order does not match the Excel file, you can run into problems. Typically I import into a cursor whose column order and field formats I know, then APPEND FROM into it. Once the data is in a cursor version of the table, I can append it to any other table, cycle through it, do data cleansing, etc.
Say your address table has the structure ID, LastName, FirstName, Address... but your Excel file has ID, FirstName+LastName as a single field, Address, and you know you will need to parse the name into proper first/last fields. This is a good example of using the interim cursor. If the cursor has more columns than the Excel file, the extras just come along for the ride and stay blank, but they are there to work with as you need.
create cursor C_TmpFromExcel;
( IDCol int,;
FullName c(40),;
Address c(35),;
FirstName c(20),;
LastName c(20) )
append from D:\tmp\excel_data2.xls type xls
* VERY BASIC example to split a "Last, First" name
replace all LastName with left( FullName, at(",", FullName) - 1 )
replace all FirstName with alltrim( substr( FullName, at(",", FullName) + 1 ) )
select LiveAddressTable
append from dbf("C_TmpFromExcel")  && DBF() resolves the cursor to its temp file
When appending one table (or cursor) into another, VFP matches columns by name and disregards any extra columns that are not needed (such as the sample FullName column, once FirstName/LastName have been extracted).

Related

How to link an Excel table with Access and prevent NULL values due to wrong data type conversion?

In my current project I need to link an Excel file, which receives values from a machine, to an Access database so I can work with the values and import them into the data model.
The problem is that some of the values give invalid results due to the way they are saved. For example, the timestamp is saved as
030420 instead of 03:04:20, and Access can't handle that and gives me #Num!.
I cannot simply change the data type in Excel because the whole file gets refreshed every hour by a source I can't influence.
Any help appreciated.
If Erik's proposal does not work, you can
- create a backup copy of your Excel source
- tweak the file: enter text in the first row of the problematic columns
- link the tweaked file into Access
- put the real file back in place.
Now the problematic columns should be read as Text, and you can build a query that solves any issue like conversion, null handling...
Link, don't import, the Excel file, and you have a linked table.
Now, use this linked table as the source in a simple select query where you modify the data and alias the fields as needed. For example:
Select
F1 As SomeName,
F2 As OtherName,
TimeSerial(Mid([F5],1,2),Mid([F5],3,2),Mid([F5],5,2)) As TrueTime
From
LinkedTable
Where
F7 Is Not Null
Then use this query for your import.
Consider querying the Excel file instead of using a linked table.
The query can directly query an Excel range:
SELECT * FROM
[Excel 12.0 XML;DATABASE=PathToMyExcel;HDR=Yes;IMEX=1].[MyRange] t
Then, you can use functions like TimeSerial to cast numbers to time values.
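For example, a minimal sketch building on the range query above (MyRange and F5 are placeholders; this assumes the timestamp column arrives as a number like 30420, so Format() pads it back to six digits before slicing):
SELECT TimeSerial(Mid(Format(t.F5, "000000"), 1, 2),
                  Mid(Format(t.F5, "000000"), 3, 2),
                  Mid(Format(t.F5, "000000"), 5, 2)) AS TrueTime
FROM [Excel 12.0 XML;DATABASE=PathToMyExcel;HDR=Yes;IMEX=1].[MyRange] t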

How do I remove characters from a query field that cause Excel to interpret the field as more than one column or as a function

I am stuck having to query a SQL Server database that is running in SQL Server 2000 compatibility mode, and there is no way around it.
I have a large result set of 5 fields. The last field is a memo field. The result set is so large that I cannot select it all with headers in SSMS 2012, so I have to save it to Excel CSV format. In doing so, Excel interprets data in the 5th field as either a function ("-", "+", " -", " +", etc. at the beginning) or as multiple columns for various reasons.
So far I have
replace(ltrim(rtrim(memo)), ',', ' ') as Memo
This, of course, trims beginning and end and replaces commas with spaces. I do not want to have to build nested replaces unless I must. This is for a large audit report that is not run often so I can, if need be, use a function.
Is there a good way to make a field like this compliant with Excel so that Excel will just keep that field as one column? I would appreciate any insight.
It seems that the correct method is to append double quotes to the beginning and the end of the field value returned by the query. As I have to right-click and output to Excel, this method works, and Excel does not misinterpret the intent.
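For example, a minimal sketch (dbo.AuditTable and the f1..f4 column names are hypothetical); doubling any embedded quotes keeps the quoted field valid CSV:
SELECT f1, f2, f3, f4,
       '"' + REPLACE(LTRIM(RTRIM(memo)), '"', '""') + '"' AS Memo
FROM dbo.AuditTable;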

Powerbuilder - Keep Column names when saving as Excel format

I am kind of new to PowerBuilder and I'd like to know if it is possible to keep the "visible" value of a column name when using the SaveAs() method of my DataWindow. Currently, my report shows columns like "Numéro PB" or "Poste 1-3", but when I save, it uses the database names, i.e. "no_pb" and "pos_1_3"...
As I am working on a deployed application, I have to make my changes and implementations as user-friendly as possible, and the users won't understand anything of that.
I already use the dw2xls api to save an exact copy of the report, but they want to have an option saving only the raw data, and I don't think I can achieve it using their API.
Also, I was asked not to use the Excel OLE object to do it...
Anyone's got an idea?
Thanks,
Michael
dw.saveas(<string with filename and path>,CSV!,TRUE) saves the datawindow data as a comma separated value text file with the first row having the column headers (database names in the dw painter).
To set the column headings in a saveas you could first access the data with
any la_dwdata[] // declare array
la_dwdata = dw_1.Object.Data // get all data for all rows in dw_1 in the Primary! buffer
from here you would create an output file consisting initially of a series of strings with the column names you want, followed by the data from the array converted to strings (you loop through the array). If you insert commas between the values and name the file with the 'CSV' extension, it will load into Excel. Since this approach will also include any non-visible data, you may need extra logic to exclude it if the users don't want to see it.
So now you have a string consisting of lines of data separated by tabs, with a CRLF at the end of each. You create your 'header string' with the user-friendly column names in the format 'blah,blah,blah~r~n' (three 'blah' strings separated by commas, with a CRLF at the end).
Now you parse the string obtained from dw_1.Object.Data to find the first line, strip it off, and replace it with the header string you created. You can use the Replace method to change the remaining tabs to commas. Now you save the string to a file with a .CSV extension, and you can load it into Excel.
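A minimal PowerScript sketch of that approach (the column names no_pb and pos_1_3 are taken from the question; the file path is a placeholder, and very large exports may need chunked writes since older PB versions cap a single FileWrite at 32,766 bytes):
// Build a CSV with user-friendly headers from dw_1
string ls_csv
long ll_row
integer li_file

ls_csv = "Numéro PB,Poste 1-3~r~n"  // friendly header row
for ll_row = 1 to dw_1.RowCount()
   ls_csv += String(dw_1.GetItemNumber(ll_row, "no_pb")) + "," + &
             dw_1.GetItemString(ll_row, "pos_1_3") + "~r~n"
next
li_file = FileOpen("C:\temp\output.csv", StreamMode!, Write!, LockWrite!, Replace!)
FileWrite(li_file, ls_csv)
FileClose(li_file)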
This assumes that your display columns match your raw columns. Create a DataStore ds_head and set your report DW as its DataObject (no data retrieved). I'm calling the DataWindow with the report you want to save dw_report. You'll want to delete the two temporary files when you're done. You may need to specify EncodingUTF8! or some other encoding instead of ANSI, depending on the data in the DataWindow. Note: Excel will open this CSV, but some other programs may not like it because the header row has a trailing comma.
datastore ds_head
ds_head = create datastore
ds_head.dataobject = dw_report.dataobject  // headers only; no Retrieve()
ds_head.saveAsFormattedText("file1.csv", EncodingANSI!, ",")
dw_report.saveAs("file2.csv", CSV!, FALSE, EncodingANSI!)
run('cmd /c copy /b file1.csv + file2.csv output.csv')  // "+" makes copy concatenate

Import Excel spreadsheet into phpMyAdmin

I have been trying to import an Excel (xlsx) file into phpMyAdmin.
I have tried it as both an Excel and a CSV file. I have tried CSV and CSV using LOAD DATA.
I have replaced the default field-termination value from ; to ,.
Most times I was getting a variety of error messages, so I deleted my field-names row and then was able to import only a single row of data.
The data was off by a column, and I guess that has something to do with the structure of my table, which has an ID# field as a primary auto-incrementing field that is not in my CSV file.
I tried adding a column for that before importing, with no success. I would have thought that I could import right from the xlsx file, as that is one of the choices in phpMyAdmin, but everything I read or watch online converts to CSV.
I could use some help here.
I had a similar problem, which I solved by changing the 'fields enclosed by' option from " (double quote) to ' (single quote) and doing the same to the first line of the file, which contains the field names. Worked like a charm. Hope this helps.
This is hopelessly late, but I'm replying in the hope that this might help a future viewer.
The reason that the CSV data is off by one is the very fact that you don't have the ID# field in it! The way to get around this is to import the file into a temporary table, then run
INSERT INTO `table`
SELECT NULL, <field1>, <field2>...
FROM `temp table`;
Adding NULL to the list of fields means that MySQL will autogenerate the ID# field (assuming you've set it to AUTO_INCREMENT).
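Alternatively, LOAD DATA can name the target columns explicitly, so MySQL skips the auto-increment ID# without needing a temporary table. A sketch (the file path, line terminator, and column list are placeholders to adjust):
LOAD DATA INFILE '/path/to/data.csv'
INTO TABLE `table`
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES
(<field1>, <field2>);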

Export data from Access to Excel without losing leading zeroes

I have a table in Access I am exporting to Excel, using VBA code for the export (because I create a separate Excel file every time the client_id changes, which results in 150 files). Unfortunately I lose the leading zeroes when I do this with DoCmd.TransferSpreadsheet. I was able to resolve this by looping through the records and writing each cell one at a time (formatting the cell before I write to it), but that leads to 8 hours of run time. With DoCmd.TransferSpreadsheet it runs in an hour (but then I lose the leading zeroes). Is there any way to tell Excel to treat every cell as text when using the TransferSpreadsheet command? Can anybody think of another way to do this that won't take 8 hours? Thanks!
Prefix the Excel value with an apostrophe (') character. The apostrophe tells Excel to treat the data as text.
As in:
0001 'Excel treats as number and strips leading zeros
becomes
'0001 'Excel treats as text
You will probably need to create an expression field to prefix the field with the apostrophe, as in:
SELECT "'" & [FIELD] FROM [TABLE]
As an alternative to my other suggestion, have you played with Excel's Import External Data command? Using Access VBA, you can loop through your clients, open a template Excel file, import the data (i.e. pull instead of push) with your client as a criteria, and save it with a unique name for each client.
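A rough VBA sketch of that pull approach, run from Access (the template path, output path, table name tblExport, and procedure name are all hypothetical; whether leading zeros survive still depends on the template's column formatting):
Sub ExportClient(lClientId As Long)
    ' Pull one client's rows into a template workbook and save a copy (sketch)
    Dim xl As Object, wb As Object, qt As Object
    Set xl = CreateObject("Excel.Application")
    Set wb = xl.Workbooks.Open("C:\Templates\client_template.xlsx")
    Set qt = wb.Sheets(1).QueryTables.Add( _
        Connection:="OLEDB;Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" & CurrentDb.Name, _
        Destination:=wb.Sheets(1).Range("A1"))
    qt.CommandText = "SELECT * FROM tblExport WHERE client_id = " & lClientId
    qt.Refresh BackgroundQuery:=False  ' run the query synchronously
    wb.SaveAs "C:\Out\client_" & lClientId & ".xlsx"
    wb.Close False
    xl.Quit
End Sub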
What if you:
In your source table, change the column type to string.
Loop through your source table and add an "x" to the field.
If the Excel data is meant to be read by a human being, you can get creative, like hiding your data column, and adding a 'display' column that references the data column, but removes the "x".
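For instance (a sketch; this assumes the prefixed value sits in column B), the display column could strip the leading "x" with a formula like:
=MID(B2, 2, LEN(B2) - 1)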
