English isn't my native tongue, but I hope I can explain my problem sufficiently.
I made a View in the Oracle DB which only contains the data I need.
Using SQL in my VBScript file, I select the View by using:
"SELECT * FROM TEST_1234"
I have selected the complete view now; that works fine.
Now I need to 'export' or copy the complete View to Excel using VBScript (via UFT [Unified Functional Testing]).
Is there an easy way to just copy the whole thing at once or at least complete rows or columns?
If 1. doesn't work, can I just 'iterate' through the rows and columns using two loops and copy the data from every field to the respective field in Excel?
It would be nice to be able to copy the data without using the column names in a recordset (is there a way to use numbers until EOC [end of columns]?), because there is a very large number of columns to be copied and the column names are subject to change.
Thanks for any help!
From a programmer==code writer's point of view the most attractive solution is your very first approach (copy the whole thing with just one SQL statement). Depending on the providers' capabilities this statement could look like
INSERT INTO [DstTable] SELECT * FROM [SrcTable] IN '' 'odbc;dsn=DSNName'
or
SELECT * INTO [DstTable] FROM [SrcTable] IN '' 'odbc;dsn=DSNName'
Look here for a working solution that couldn't be simpler; but I admit that a DSN-less connection to the destination database looks more complicated, and your drivers may have other incantations to refer to the external database. Furthermore, your pair of providers may not support an external connection from the source to the destination, and the dirty trick of using the Access OLEDB driver (which came/still comes? with ADO) to connect to both databases externally may not work for you. All in all, it's certainly not easy to get "INSERT/SELECT INTO External Database" right. [Look at my (just downvoted) answer to see that people despair and fall back on (and upvote) code that uses single-item-copy loops.] In your case, you'll have to research whether at least one of the Oracle providers available to you supports external connections to Excel (or vice versa).
From a programmer==hacker's point of view (let's get the job done with minimal fuss) an easy solution could be to export the views/tables to .csv (I looked at this and was disappointed, but you may know much better) and to import them into Excel (just load the .csv and save as .xls).
If you can't/won't use the file system, you could go through memory: use GetRows to get the data into a two-dimensional array and assign that to the desired Excel range (sketched below).
If all the above fails and you need assignments to single cells in row and column loops over the recordset, remember that the Fields collection gives you access not only to the data but also to the meta-info (number of columns, column names, types, ...).
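To illustrate the last two points, here is a minimal VBScript sketch of the GetRows route; conn is assumed to be an already opened ADODB.Connection and xlSheet a worksheet object of a running Excel instance (both placeholders of this example), while TEST_1234 is the view from the question. GetRows returns its array dimensioned as (column, row), whereas an Excel range expects (row, column), hence the reshaping loop:

Dim rec, src, nCols, nRows, r, c
Dim dst()
Set rec = CreateObject("ADODB.Recordset")
rec.Open "SELECT * FROM TEST_1234", conn

nCols = rec.Fields.Count                       ' meta-info from the Fields collection
For c = 0 To nCols - 1
    xlSheet.Cells(1, c + 1).Value = rec.Fields(c).Name   ' optional header row, addressed by number
Next

src = rec.GetRows()                            ' 2-D array dimensioned (column, row)
rec.Close
nRows = UBound(src, 2) + 1

ReDim dst(nRows - 1, nCols - 1)                ' reshape to (row, column) for Excel
For r = 0 To nRows - 1
    For c = 0 To nCols - 1
        dst(r, c) = src(c, r)
    Next
Next
xlSheet.Range(xlSheet.Cells(2, 1), xlSheet.Cells(nRows + 1, nCols)).Value = dst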
Thanks for the help, and the links you provided, Ekkehard and Bond! After reading them and trying a lot, I got a very simple solution.
Here's some working code, if anybody else faces the same or a similar problem:
Option explicit
Dim conn, rec, xlStat, xlStatW, dbCnnStr, SQLSec, statArt
Set conn = Createobject("ADODB.Connection")
Set rec = CreateObject("ADODB.Recordset")
Set xlStat = CreateObject("Excel.Application")
dbCnnStr = "[your DB-connection]"
conn.open dbCnnStr
'Start Excel XXX
Set xlStatW = xlStat.Workbooks.Add()
xlStatW.Sheets(1).Name = "AAA_123"
xlStatW.Sheets(2).Name = "BBB_123"
xlStatW.Sheets(3).Name = "CCC_123"
SQLSec = "SELECT * FROM XXX_123"
rec.open SQLSec,conn
xlStatW.Sheets(1).cells(2,1).CopyFromRecordset rec
rec.Close
SQLSec = "SELECT * FROM YYY_123"
rec.open SQLSec,conn
xlStatW.Sheets(2).cells(2,1).CopyFromRecordset rec
rec.Close
SQLSec = "SELECT * FROM ZZZ_123"
rec.open SQLSec,conn
xlStatW.Sheets(3).cells(2,1).CopyFromRecordset rec
rec.Close
xlStatW.SaveAs ("C:\test.xlsx")
xlStatW.Close
'End Excel XXX
conn.Close
There are N tables in the DB with the following data types:
Numeric, long text, date and time, bigint, boolean.
All of them open, except one
I'm opening a database
db = QSqlDatabase.addDatabase("QODBC")
db.setDatabaseName(r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=C:\Users\...\file.accdb")
db.open(username, password)
I output the tables contained in the db
db.tables()
Output:
["messages", "table1", "table2", ..., "tableN"]
And I'm trying to open the "messages" table
model = QSqlTableModel(db=db)
model.setTable("messages")
model.select()
Output:
False
Then I checked which other tables are not opening
for i in db.tables():
    model.setTable(i)
    if model.select() == False:
        print(i)
Output:
"messages"
This means that the problem is only in this table.
But directly through MS Access the table opens
I have already tried opening it in a loop. The table name shows up in db.tables(), but QSqlTableModel still does not see the 'messages' table specifically.
I tried downgrading MS Access to the 2016 version, thinking that perhaps some data type from MS Access 2019 conflicts with the old driver. It didn't help.
I was thinking of downloading a newer driver, but I didn't find one. I tried to dig into the registry... I didn't find anything either.
Please help
So, I figured out the problem by poking around. I was initially right about the driver conflicting with the bigint data type, but a few additional steps were missing.
Apparently, when you set the bigint data type and save, Access warns you that older versions may not support the database because of it; if you save anyway, it seems to raise the minimum supported version of the file.
Splitting the data helped (Database Tools -> Move Data -> Access Database), but before that you need to change the bigint columns to another data type.
This question may seem rudimentary but nothing that I have found online quite fits.
I am looking at an old FOXPRO script that we used to make a table. At the moment, I am attempting to translate this script into SQL. Of note is the following,
delete all for code='000000'
pack
If I understand this correctly, it deletes all rows/records where the code field has a value of 000000. Am I correct?
"Into SQL"
What do you mean by "SQL"?
Do you mean do the same thing using SQL in VFP for a VFP table? If so then:
use myTable exclusive
delete from myTable where code = '000000'
pack
But I doubt you are asking this, when you could do it by simply using the xBase code you wrote.
Do you mean how to do that in an SQL backend like MS SQL Server, PostgreSQL, MySQL, ...? If so, then:
delete from myTable where code = '000000'
Note: In your code "all" is unnecessary but wouldn't do any harm.
Note2: In the VFP code that you wrote, the first line only "marks" the rows that have code value '000000' as "deleted".
The second line actually removes those rows from the table.
What version of foxpro?
delete from [table] where code = '000000'
pack
or
delete for code = '000000'
pack
both should work
I have exported the contents of a table with transaction SE16, by selecting all the entries and choosing Download, unconverted.
I'd like to import these entries into another system (where the same table exists and is active).
Furthermore, when I import, there's a possibility that the specific key already exists for a number of entries (old entries).
Other entries won't have a field with the same key present in the table where they're to be imported (new entries).
Is there a way to easily update my table in the second system with the file provided from the first system? If needed, I can export the data in the 3 other format types (Spreadsheet, Rich text format and HTML format). It seems to me though like the spreadsheet and rich text formats sometimes corrupt the data, and the html is far too verbose.
[EDIT]
As per popular demand, the table I'm trying to export / import is a Z table whose fields are all numeric, character, date or time fields (flat data types).
I'm trying to do it like this because the clients don't have any Basis resource to help them with transports, and they would like to kind of automate the process of updating one of the tables in one system.
At the moment it's a business request to do it like this, but I'm open to suggestions (and the clients are open too)
Edit
OK, I doubt that what you describe in your comment exists out of the box, but you can easily write something like this:
Create a method (or function module if that floats your boat) that accepts the following:
iv_table_name TYPE string and
iv_filename TYPE string
This would be the method:
method upload_table.
  data: lt_table type ref to data,
        lx_root  type ref to cx_root.
  field-symbols: <table> type standard table.

  try.
      create data lt_table type table of (iv_table_name).
      assign lt_table->* to <table>.

      call method cl_gui_frontend_services=>gui_upload
        exporting
          filename            = iv_filename
          has_field_separator = abap_true
        changing
          data_tab            = <table>
        exceptions
          others              = 4.
      if sy-subrc <> 0.
        "Some appropriate error handling
        "message id sy-msgid type 'I'
        "        number sy-msgno
        "        with sy-msgv1 sy-msgv2
        "             sy-msgv3 sy-msgv4.
        return.
      endif.

      modify (iv_table_name) from table <table>.
      "write: / sy-tabix, ' entries updated'.

    catch cx_root into lx_root.
      "lv_text = lx_root->get_text( ).
      "some appropriate error handling
      return.
  endtry.
endmethod.
This would still require that you make sure that the exported file matches the table that you want to import. However cl_gui_frontend_services=>gui_upload should return sy-subrc > 0 in that case, so you can bail out before you corrupt any data.
Original Answer:
I'll assume that you want to update a z-table and not a SAP standard table.
You will probably have to format your datafile a little bit to make it tab or comma delimited.
You can then upload the data file using cl_gui_frontend_services=>gui_upload
Then if you want to overwrite the existing data in the table you can use
modify zmydbtab from table it_importeddata.
If you do not want to overwrite existing entries you can use.
insert zmydbtab from table it_importeddata.
You will get a return code of sy-subrc = 4 if any of the keys already exists, but any new entries will be inserted.
Note
There are many reasons why you would NOT do this for an SAP standard table. Most prominent is that there is almost always more to the data model than what we are aware of. Also, when creating transactional data there are often follow-on events or workflows that kick off, which will not happen if you're updating the database directly. As a rule of thumb, it is usually a bad idea to update SAP standard tables directly.
In that case try to find a BADI, or if that's not available, record a BDC and do the updates that way.
If the system landscape was setup correctly, your client would not need any kind of basis operations support whatsoever to perform the transports. So instead of re-inventing the wheel, I'd strongly suggest to catch up on what the CTS and TMS can do once they're setup with sensible settings.
I have another puzzling problem.
I need to read .xls files with RODBC. Basically I need a matrix of all the cells in one sheet, and then use greps and strsplits etc. to get the data out. As each sheet contains multiple tables in a different order, and some text fields with other options in between, I need something that functions like readLines(), but for Excel sheets. I believe RODBC is the best way to do that.
The core of my code is the following function:
.read.info.default <- function(file, sheet){
  fc <- odbcConnectExcel(file)  # file connection
  tryCatch({
      x <- sqlFetch(fc,
                    sqtable = sheet,
                    as.is = TRUE,
                    colnames = FALSE,
                    rownames = FALSE)
    },
    error = function(e) {stop(e)},
    finally = close(fc)
  )
  return(x)
}
Yet, whatever I tried, it always takes the first row of the mentioned sheet as the variable names of the returned data frame. No clue how to get that solved. According to the documentation, colnames=FALSE should prevent that.
I'd like to avoid the xlsReadWrite package. Edit : and the gdata package. Client doesn't have Perl on the system and won't install it.
Edit:
I gave up and went with read.xls() from the xlsReadWrite package. Apart from the name problem, it turned out RODBC can't really read cells with special characters like slashes. A date in the format "dd/mm/yyyy" just gave NA.
Looking at the source code of sqlFetch, sqlQuery and sqlGetResults, I realized the problem is more than likely in the drivers. Somehow the first line of the sheet is seen as some column feature instead of an ordinary cell. So instead of colnames, they're equivalent to DB field names. And that's an option you can't set...
Can you use the Perl-based solution in the gdata instead? That happens to be portable too...
I found this code online to query Access and pull the data into Excel (2003), but it is much slower than it should be:
Sub DataPull(SQLQuery, CellPaste)
    Dim Con As New ADODB.Connection
    Dim RST As New ADODB.Recordset
    Dim DBlocation As String, DBName As String
    Dim ContractingQuery As String

    If SQLQuery = "" Then
    Else
        DBName = Range("DBName")
        If Right(DBName, 4) <> ".mdb" Then DBName = DBName + ".mdb"
        DBlocation = ActiveWorkbook.Path
        If Right(DBlocation, 1) <> "\" Then DBlocation = DBlocation + "\"
        Con.ConnectionString = DBlocation + DBName
        Con.Provider = "Microsoft.Jet.OLEDB.4.0"
        Con.Open
        Set RST = Con.Execute(SQLQuery)
        Range(CellPaste).CopyFromRecordset RST
        Con.Close
    End If
End Sub
The problem is that this code takes very long. If I open up Access and just run the query in there, it takes about 1/10th the time. Is there any way to speed this up? Or any reason this might be taking so long? All my queries are simple select queries with simple where clauses and no joins. Even a select * from [test] query takes much longer than it should.
EDIT: I should specify that the line
Range(CellPaste).CopyFromRecordset RST
was the one taking a long time.
I'm no expert, but I run almost exactly the same code with good results. One difference is that I use the Command object as well as the Connection object. Where you
Set RST = Con.Execute(SQLQuery)
I
Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
Set cmd.ActiveConnection = con
cmd.CommandText = SQLQuery
Set RST = cmd.Execute
I don't know if or why that might help, but maybe it will? :-)
I don't think you are comparing like-with-like.
In Access, when you view a Query's dataview what happens is:
- an existing open connection is used (and kept open);
- a recordset is partially filled with the first few rows only (and kept open);
- the partial resultset is shown in a grid dedicated to the task and optimized for the native data access method Access employs (direct use of the Access Database Engine DLLs, probably).
In your VBA code:
- a new connection is opened (then later closed and released);
- the recordset is fully populated using all rows (then later closed and released);
- the entire resultset is read into Excel's generic UI using non-native data access components.
I think the most significant point there is that the dataview in Access doesn't fetch the entire resultset until you ask it to, usually by navigating to the last row in the resultset. ADO will always fetch all rows in the resultset.
Second most significant would be the time taken to read the fetched rows (assuming a full resultset) into the UI element, and the fact that Excel's UI isn't optimized for the job.
Opening, closing and releasing connections and recordsets should be insignificant but are still a factor.
I think you need to do some timings on each step of the process to find the bottleneck. When comparing to Access, ensure you are getting a full resultset e.g. check the number of rows returned.
Since you're using Access 2003, use DAO instead; it will be faster with the Jet engine (see the sketch below).
See http://www.erlandsendata.no/english/index.php?d=envbadacexportdao for sample code.
Note that you should never use the "As New" keyword, as it will lead to unexpected results.
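A minimal late-bound sketch of that DAO route (just an illustration, not the linked sample): DBlocation, DBName, SQLQuery and CellPaste are the names from the question, xlSheet stands for the target worksheet, and dbOpenSnapshot is spelled out because late-bound code has no DAO type library at hand.

Const dbOpenSnapshot = 4
Dim dao, db, rs
Set dao = CreateObject("DAO.DBEngine.36")        ' Jet 4.0 DAO engine, matching the .mdb in the question
Set db = dao.OpenDatabase(DBlocation & DBName)
Set rs = db.OpenRecordset(SQLQuery, dbOpenSnapshot)
xlSheet.Range(CellPaste).CopyFromRecordset rs    ' CopyFromRecordset accepts DAO recordsets as well
rs.Close
db.Close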
I would recommend that you create the Recordset explicitly rather than implicitly via the Execute method.
When creating explicitly you can set its CursorType and LockType properties which have impact on performance.
From what I see, you're loading data in Excel, then closing the recordset. You don't need to update, count records, etc... So my advice would be to create a Recordset with CursorType = adOpenForwardOnly & LockType = adLockReadOnly:
...
RST.Open SQLQuery, Con, adOpenForwardOnly, adLockReadOnly
Range(CellPaste).CopyFromRecordset RST
...
Recordset Object (ADO)
I used your code and pulled in a table of 38 columns and 63780 rows in less than 7 seconds - about what I'd expect - and smaller recordsets completed almost instantaneously.
Is this the kind of performance you are experiencing? If so, it is consistent with what I'd expect with an ADO connection from Excel to an MDB back-end.
If you are seeing much slower performance than this then there must be some local environment conditions that are affecting things.
Lots of formulas may reference the query results. Try temporarily switching to manual calculation in the macro and switching back when all of your queries are done updating, as sketched below.
This should speed it up a bit, but still doesn't fix the underlying problem.
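A sketch of what that toggle could look like; xlApp stands for the Excel Application object (inside a VBA macro that is simply Application), and the two values are the standard xlCalculationManual / xlCalculationAutomatic constants.

Const xlCalculationManual = -4135
Const xlCalculationAutomatic = -4105

xlApp.Calculation = xlCalculationManual      ' stop dependent formulas recalculating after every paste
' ... run the queries / CopyFromRecordset calls here ...
xlApp.Calculation = xlCalculationAutomatic   ' restore normal recalculation when done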
If you retrieve a lot of records, that would explain why the Range(CellPaste) line takes so long. (If you execute the query in Access, it doesn't retrieve all the records; but if you do the CopyFromRecordset, it requires all the records.)
There is a MaxRows parameter for CopyFromRecordset:
Public Function CopyFromRecordset ( _
Data As Object, _
<OptionalAttribute> MaxRows As Object, _
<OptionalAttribute> MaxColumns As Object _
) As Integer
Try whether setting this to a low value (like 10 or so) changes the performance.
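For example (RST and CellPaste as in the question's sub; this is purely a diagnostic, not a fix):

' Copy at most 10 rows; if this is fast, the time is going into moving rows, not into the query itself.
Range(CellPaste).CopyFromRecordset RST, 10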
What about the following workarounds or improvements:
Once opened, save the recordset to an XML file (rst.Save xxx, sketched below) and then have Excel reopen it.
Once opened, put the recordset data in an array (rst.GetRows xxx) and copy the array onto the active sheet.
And, at any time, minimise all memory / access requirements: open the recordset as read-only, forward only, close the connection once the data is on your side, etc.
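A rough sketch of the first idea (the file path is made up for the example, and xlSheet stands for the target worksheet; persisting works most reliably with a client-side cursor):

Const adPersistXML = 1
Const adCmdFile = 256

rst.Save "C:\temp\resultset.xml", adPersistXML   ' rst: the recordset opened from the database
rst.Close

' Later, possibly in another script: rehydrate the data without touching the database at all.
Dim rsLocal
Set rsLocal = CreateObject("ADODB.Recordset")
rsLocal.Open "C:\temp\resultset.xml", , , , adCmdFile
xlSheet.Range("A2").CopyFromRecordset rsLocal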
I don't know if it will help, but I am using VBA and ADO to connect to an Excel spreadsheet.
It was retrieving records lightning-fast (<5 seconds), but then all of a sudden it was awfully slow (15 seconds to retrieve one record). This is what led me to your post.
I realized I accidentally had the Excel file open myself (I had been editing it).
Once I closed it, all was lightning-fast again.
The problem 9 times out of 10 is to do with the Cursor Type/Location you are using.
Using dynamic cursors over network connections can slow down the retrieval of data, even if the query executed very fast.
If you want to get large amounts of data very quickly, you'll need to use CursorLocation = adUseClient on your connection. This means you'll only have a static local cursor, so you won't get live updates from other users.
However - if you are only reading data, you'll save ADO going back to the DB for each individual record to check for changes.
I recently changed this as I had a simple loop populating a list item, and each iteration was taking around 0.3s. Not too slow, but even on 1,000 records that's 30 seconds! Changing only the cursor location let the entire process complete in under 1 second.
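In late-bound code the change looks roughly like this; Con, SQLQuery and RST are the names from the original question, and the constants are spelled out because there is no ADO type library available to script code:

Const adUseClient = 3
Const adOpenStatic = 3
Const adLockReadOnly = 1

Con.CursorLocation = adUseClient                        ' set before opening the recordset: local, static cursor
Set RST = CreateObject("ADODB.Recordset")
RST.Open SQLQuery, Con, adOpenStatic, adLockReadOnly    ' read-only, so no per-record round trips to the DB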