System Resource Exceeded when alter table counter (autonumber) - excel

I made a simple program in Excel that connects to Access. The users of this program are not good with coding/Access, so everything should be done with only a few clicks.
The workflow:
1. Import the raw data from BW (in Excel) into Access (no ID column, about 28k rows).
2. Add an ID column with autonumber.
I use this code for the second part:
acObj.CurrentDb.Execute "ALTER TABLE " & ptableName & " ADD COLUMN ID COUNTER (1, 1);", dbFailOnError
and I get error 3035: System resource exceeded.
With only 16k rows it works just fine.
Any solution?

You could remove dbFailOnError, and use dbInconsistent instead. That way the query is executed non-transactionally.
The odds of this query failing are pretty much none (it fails if there already is an ID column, but you can check for that). Executing it non-transactionally means Access doesn't have to cache the result, and it uses fewer system resources.
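A minimal sketch of what that could look like, reusing acObj and ptableName from the question (dbInconsistent has the value 16 if the constant isn't defined in your project):
Dim fld As Object
Dim hasId As Boolean
hasId = False
' Check whether an ID column already exists before altering the table
For Each fld In acObj.CurrentDb.TableDefs(ptableName).Fields
    If StrComp(fld.Name, "ID", vbTextCompare) = 0 Then hasId = True
Next fld
If Not hasId Then
    ' dbInconsistent runs the statement non-transactionally, so the engine
    ' does not have to buffer the whole operation in a transaction
    acObj.CurrentDb.Execute "ALTER TABLE " & ptableName & " ADD COLUMN ID COUNTER (1, 1);", dbInconsistent
End If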

Related

Oracle select not returning all columns under sqlplus

I have the SQL select below:
select TST_CODE ||'|'|| UTI_CODE ||'|'|| TST_NAME ||'|'|| TST_NAME_REDUIT ||'|'|| TST_GROUP
    ||'|'|| TST_MET ||'|'|| TST_MET_CODE ||'|'|| TST_MET_FAMILY ||'|'|| TST_MET_CALCUL
    ||'|'|| TNS_STATUS_PAR_NM2 ||'|'|| TNS_STATUS_PART_NM1 ||'|'|| TNS_STATUS_PART_N
    ||'|'|| STR_CODE ||'|'|| FOUR_CODE ||'|'|| TST_SIREN ||'|'|| MEMO_ASC ||'|'|| NAV_FICID
from TEST_TABLE;
When I run it in SQL Developer it returns all the columns of the table.
But when I put the same request in an SQL file, like TEST_TABLE.sql, and run it under sqlplus on Linux, it returns only the first 14 columns, that is, it stops at FOUR_CODE.
Any idea why?
Edited:
After investigation, it is because one of the columns is of data type CLOB. Any idea how to solve this? My TEST_TABLE.sql is being dynamically created.
Try to enlarge LINESIZE, such as
SQL> set linesize 200
If it is not enough, enlarge it even more.
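If the truncation comes from the CLOB column, LINESIZE alone may not be enough: SQL*Plus only displays the first LONG characters of a LONG/CLOB value (the default is 80), and concatenating a CLOB makes the whole expression a CLOB. A sketch of settings to put at the top of the generated TEST_TABLE.sql:
set linesize 32767
set long 100000
set longchunksize 100000
set trimspool on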

USQL nested query performance

I have a U-SQL query that runs fine on its own against 400M records in a managed table.
But during development I don't want to run it against all the records all the time, so I pop a where clause in, run it for a tiny subsection of data, and it completes in around 2 minutes (#5 AUs), writing the results out to a tsv in my data lake.
Happy with that.
However, I now want to use it as the source for a second query and further processing.
So I create a view with the original USQL (minus the where clause).
Then to test, a new script :
'Select * from MyView WHERE <my original test filter>'.
Now I was expecting that to execute in around the same time as the original raw query, but instead I was 4 minutes in, only 10% through the plan, and cancelled - something is not right.
I'm no expert at reading job graphs, but the original script kicks off with 2x 'Extract Combine partition', both reading a couple of hundred MBs, while my select on the saved view is reading over 100GB!
So it is not taking the where clause into account at all at this stage.
Obviously this shows how little I yet understand about how DLA works behind the scenes!
Would someone please help me understand (a) what is going on and (b) a path forward to get the behavior I need?
Currently I'm having a play with stored procedures to store the first result in a table and then call the second query against that - but it just seems overkill compared with 'traditional' SQL Server.
All pointers & hints appreciated!
Many Thanks
Original Base Query:
CREATE VIEW IF NOT EXISTS Play.[M3_CycleStartPoints]
AS
//#BASE =
SELECT ROW_NUMBER() OVER (PARTITION BY A.[CTNNumber] ORDER BY A.[SeqNo]) AS [CTNCycleNo], A.[CTNNumber], A.[SeqNo], A.[BizstepDescription], A.[ContainerStatus], A.[FillStatus]
FROM
[Play].[RawData] AS A
LEFT OUTER JOIN
(
SELECT [CTNNumber],[SeqNo]+1 AS [SeqNo],[FillStatus],[ContainerStatus],[BizstepDescription]
FROM [Play].[RawData]
WHERE [FillStatus] == "EMPTY" AND [AssetUsage] == "CYLINDER"
) AS B
ON A.[CTNNumber] == B.[CTNNumber] AND A.[SeqNo] == B.[SeqNo]
WHERE (
(A.[FillStatus] == "FULL" AND
A.[AssetUsage] == "CYLINDER" AND
B.[CTNNumber] == A.[CTNNumber]
) OR (
A.[SeqNo] == 1
)
);
//AND A.[CTNNumber] == "BE52XH7";
//Only used to test when running script as stand-alone & output to tsv
Second Query
SELECT *
FROM [Play].[M3_CycleStartPoints]
WHERE [CTNNumber] == "BE52XH7";
OK, I think I've got this, or at least in part: table-valued functions.
http://www.sqlservercentral.com/articles/U-SQL/146839/
They allow passing an argument to what was the view and returning the result.
I'd still be interested in finding some reading material around this subject, though.
Coming from a T-SQL world, it seems there are some fundamental differences I'm still tripping over.
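For reference, a rough sketch of the table-valued-function shape this points at; the function and parameter names are made up, and the body is simply the view's SELECT with the filter pushed inside so the extract only touches the relevant data:
CREATE FUNCTION IF NOT EXISTS Play.[M3_CycleStartPointsFn](@ctnNumber string)
RETURNS @result
AS
BEGIN
    @result =
        SELECT ROW_NUMBER() OVER (PARTITION BY A.[CTNNumber] ORDER BY A.[SeqNo]) AS [CTNCycleNo],
               A.[CTNNumber], A.[SeqNo], A.[BizstepDescription], A.[ContainerStatus], A.[FillStatus]
        FROM [Play].[RawData] AS A
        LEFT OUTER JOIN
        (
            SELECT [CTNNumber], [SeqNo] + 1 AS [SeqNo], [FillStatus], [ContainerStatus], [BizstepDescription]
            FROM [Play].[RawData]
            WHERE [FillStatus] == "EMPTY" AND [AssetUsage] == "CYLINDER"
        ) AS B
        ON A.[CTNNumber] == B.[CTNNumber] AND A.[SeqNo] == B.[SeqNo]
        WHERE A.[CTNNumber] == @ctnNumber
              AND ((A.[FillStatus] == "FULL" AND A.[AssetUsage] == "CYLINDER" AND B.[CTNNumber] == A.[CTNNumber])
                   OR A.[SeqNo] == 1);
END;
The second test script then becomes something like (the output path is just for illustration):
@rs = Play.[M3_CycleStartPointsFn]("BE52XH7");
OUTPUT @rs TO "/Play/M3_CycleStartPoints_test.tsv" USING Outputters.Tsv();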

Oracle DB View -> Copying View to Excel using VBScript

English isn't my native tongue, but I hope I can explain my problem sufficiently.
I made a View in the Oracle DB which only contains the data I need.
Using SQL in my VBScript file, I select the View by using:
"SELECT * FROM TEST_1234"
I have selected the complete view now, that works fine.
Now I need to 'export' or copy the complete View to Excel using VBScript (via UFT [Unified Functional Testing]).
1. Is there an easy way to just copy the whole thing at once, or at least complete rows or columns?
2. If 1. doesn't work, can I just 'iterate' through the rows and columns using two loops and copy the data from every field to the respective field in Excel?
It would be nice to be able to copy the data without using the names of the columns in a recordset (is there a way to use numbers until EOC [end of columns]?), because there is a very high number of columns to be copied and the column names are subject to change.
Thanks for any help!
From a programmer==code writer's point of view, the most attractive solution is your very first approach (copy the whole thing with just one SQL statement). Depending on the providers' capabilities, this statement could look like
INSERT INTO [DstTable] SELECT * FROM [SrcTable] IN '' 'odbc;dsn=DSNName'
or
SELECT * INTO [DstTable] FROM [SrcTable] IN '' 'odbc;dsn=DSNName'
Look here for a working solution that couldn't be simpler; but I admit that a DSN-less connection to the destination database looks more complicated, and your drivers may have other incantations to refer to the external database. Furthermore, your pair of providers may not support an external connection from the source to the destination, and the dirty trick of using the Access OLEDB driver (which came/still comes? with ADO) to connect to both databases externally may not work for you. In all, it's certainly not easy to get "INSERT/SELECT INTO External Database" right. [Look at my (just downvoted) answer to see that people despair and fall back on (and upvote) code that uses single-item-copy loops.] In your case, you'll have to research whether at least one of the Oracle providers available to you supports external connections to Excel (or vice versa).
From a programmer==hacker's point of view (let's get the job done with minimal fuss), an easy solution could be to export the views/tables to .csv (I looked at this and was disappointed, but you may know much better) and to import them into Excel (just load the .csv and save as .xls).
If you can't/won't use the file system, you could go through memory: use GetRows to get the data into a two-dimensional array and assign that to the desired Excel range.
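A small sketch of that route; rec stands for the open ADO recordset and xlSheet for the target worksheet (both placeholders here). Note that GetRows returns the array as (column, row) while an Excel range expects (row, column), so it has to be transposed first:
Dim data, r, c, rowCount, colCount
Dim out()
data = rec.GetRows()                   ' two-dimensional array: data(column, row)
colCount = UBound(data, 1) + 1
rowCount = UBound(data, 2) + 1
ReDim out(rowCount - 1, colCount - 1)
For r = 0 To rowCount - 1
    For c = 0 To colCount - 1
        out(r, c) = data(c, r)         ' transpose into (row, column)
    Next
Next
' Write the whole block to the sheet in one assignment, starting at row 2
xlSheet.Range(xlSheet.Cells(2, 1), xlSheet.Cells(rowCount + 1, colCount)).Value = out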
If all the above fails and you need assignments to single cells in row and column loopings over the recordset, remember that the Fields collection gives you access to not only the data but the meta-info (number of columns, column-names, types, ...) too.
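For example, the header row can be filled from the Fields collection by position, without hard-coding any column names (xlSheet again a placeholder):
Dim i
For i = 0 To rec.Fields.Count - 1
    xlSheet.Cells(1, i + 1).Value = rec.Fields(i).Name
Next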
Thanks for the help, and the links you provided, Ekkehard and Bond! After reading them and trying a lot, I got a very simple solution.
Here's some working code, if anybody else faces the same or a similar problem:
Option explicit
Dim conn, rec, xlStat, xlStatW, dbCnnStr, SQLSec, statArt
Set conn = Createobject("ADODB.Connection")
Set rec = CreateObject("ADODB.Recordset")
Set xlStat = CreateObject("Excel.Application")
dbCnnStr = "[your DB-connection]"
conn.open dbCnnStr
'Start Excel XXX
Set xlStatW = xlStat.Workbooks.Add()
xlStatW.Sheets(1).Name = "AAA_123"
xlStatW.Sheets(2).Name = "BBB_123"
xlStatW.Sheets(3).Name = "CCC_123"
SQLSec = "SELECT * FROM XXX_123"
rec.open SQLSec,conn
xlStatW.Sheets(1).cells(2,1).CopyFromRecordset rec
rec.Close
SQLSec = "SELECT * FROM YYY_123"
rec.open SQLSec,conn
xlStatW.Sheets(2).cells(2,1).CopyFromRecordset rec
rec.Close
SQLSec = "SELECT * FROM ZZZ_123"
rec.open SQLSec,conn
xlStatW.Sheets(3).cells(2,1).CopyFromRecordset rec
rec.Close
xlStatW.SaveAs ("C:\test.xlsx")
xlStatW.Close
xlStat.Quit
'End Excel XXX
conn.Close

Excel in SSIS: How to import a column that may have more than 255 characters when DT_NTEXT causes failures?

OK, so my latest project requires loading an Excel 2007 spreadsheet into a SQL Server table. I'm working in SSIS 2008R2. Based on some stuff I found on the internet, I opened the Excel source in Advanced editor and changed the datatype of the long column to DT_NTEXT, so that it wouldn't truncate it. Then I made the database column VARCHAR(MAX). This runs correctly in debug mode on my laptop.
Then I deployed it to the development server and attempted to load the same test file. It failed with the following error messages:
Error: Code: 0xC0208265
Source: Main Data Flow Task Get Main Data [1]
Description: Failed to retrieve long data for column "DESCR".
End Error
Error: Code: 0xC020901C
Source: Main Data Flow Task Get Main Data [1]
Description: There was an error with output column "DESCR" (72) on output "Excel Source Output" (9). The column status returned was: "DBSTATUS_UNAVAILABLE".
End Error
Error: Code: 0xC0209029
Source: Main Data Flow Task Get Main Data [1]
Description: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "DESCR" (72)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "DESCR" (72)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
End Error
Searching for information about the error, I found about a million sites offering the same three suggested solutions:
1. Add 'IMEX=1' to the extended properties of the connection string.
It was already there.
2. Change the TypeGuessRows key in the registry.
This was set to zero on the server, which I understand to mean that it should look at the entire file. Nevertheless, I changed it to 8 to match my laptop. The same error occurred when I ran it again. Then I changed it to 1,763, which is more than the number of rows in the spreadsheet. It still gave the same error. So, I put it back to zero. (There's a 1,900-character value in the first row of my test file, so it shouldn't really matter how many it checks, in this case.)
3. Change the datatype to DT_WSTR(4000) in the source.
The column is supposed to have up to 10,000 characters, so I'm not sure this would be a good idea even if it worked. However, I tried it anyway. This time it gave me a truncation error. I changed the truncation error disposition to "ignore failure" and it loaded the data, but truncated the value to 255 characters. I have verified that the length is 4000 and doesn't get changed when I save the file, but it's still truncating at 255 characters.
I have no idea what else to look at. Any help would be appreciated.
UPDATE 1/29: The package, without any changes, works correctly when running on the pre-production server. It still fails when running on the development server. Both servers have the same version of SSIS (including minor version numbers) as well as the same versions of Windows, Access and Excel. I do not know how to explain this, nor do I know how to tell if it would work in production.
I created a new package with similar non-functional requirements (Excel 2007 file, SSIS 2008, SQL Server 2008 R2, VARCHAR(MAX) target column) and it worked just fine after deployment into the database server. My package:
Metadata at the Excel Source component's output (checked using Advanced Editor): DT_NTEXT
Derived Column component between source and destination to cast from unicode to non-unicode using (DT_TEXT,1252) (see the expression sketch below)
Metadata at the OLE DB Destination component's input (checked using Advanced Editor): DT_TEXT
Target Column data type: VARCHAR(MAX)
I do not explicitly use the extended property IMEX in the connection
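A sketch of that Derived Column cast, using the column name from the question:
(DT_TEXT,1252)[DESCR]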
I executed it by right-clicking the package on the database server, and it loaded a file with a few thousand characters per record into the table without truncation. Hope this helps.
I faced this issue while importing an Excel file with a field containing more than 255 characters. I solved the issue using Python.
Simply import the Excel file into a pandas data frame and then calculate the length of each of those string values per row.
Then sort the dataframe in descending order. This will enable SSIS to allocate maximum space for that field as it scans the first 3 rows to allocate storage:
import pandas as pd
from pandas import ExcelWriter

# f is the path of the source Excel file
df = pd.read_excel(f, sheet_name=0, skiprows=1)
df = df.drop(df.columns[[0]], axis=1)
df['length'] = df['Item Description'].str.len()           # length of the long text field per row
df.sort_values('length', ascending=False, inplace=True)   # longest values first
writer = ExcelWriter('Clean/Cleaned_' + f[5:])
df.to_excel(writer, sheet_name='Billing', index=False)
writer.save()

Can I import SAP tables that were exported by SE16?

I have exported the contents of a table with transaction SE16, by selecting all the entries and choosing Download, unconverted.
I'd like to import these entries into another system (where the same table exists and is active).
Furthermore, when I import, there's a possibility that the specific key already exists for a number of entries (old entries).
Other entries won't have a field with the same key present in the table where they're to be imported (new entries).
Is there a way to easily update my table in the second system with the file provided from the first system? If needed, I can export the data in the 3 other format types (Spreadsheet, Rich text format and HTML format). It seems to me though like the spreadsheet and rich text formats sometimes corrupt the data, and the html is far too verbose.
[EDIT]
As per popular demand, the table I'm trying to export/import is a Z table whose fields are all numeric, character, date or time fields (flat data types).
I'm trying to do it like this because the clients don't have any basis resource to help them transport, and would like to 'kinda' automate the process of updating one of the tables in one system.
At the moment it's a business request to do it like this, but I'm open to suggestions (and the clients are open too).
Edit
OK, I doubt that what you describe in your comment exists out of the box, but you can easily write something like that yourself:
Create a method (or function module, if that floats your boat) that accepts the following:
iv_table_name TYPE string and
iv_filename TYPE string
This would be the method:
method upload_table.
  data: lt_table type ref to data,
        lx_root  type ref to cx_root.
  field-symbols: <table> type standard table.

  try.
      create data lt_table type table of (iv_table_name).
      assign lt_table->* to <table>.

      call method cl_gui_frontend_services=>gui_upload
        exporting
          filename            = iv_filename
          has_field_separator = abap_true
        changing
          data_tab            = <table>
        exceptions
          others              = 4.
      if sy-subrc <> 0.
        "Some appropriate error handling
        "message id sy-msgid type 'I'
        "  number sy-msgno
        "  with sy-msgv1 sy-msgv2
        "  sy-msgv3 sy-msgv4.
        return.
      endif.

      modify (iv_table_name) from table <table>.
      "write: / sy-tabix, ' entries updated'.

    catch cx_root into lx_root.
      "lv_text = lx_root->get_text( ).
      "some appropriate error handling
      return.
  endtry.
endmethod.
This would still require that you make sure that the exported file matches the table that you want to import. However cl_gui_frontend_services=>gui_upload should return sy-subrc > 0 in that case, so you can bail out before you corrupt any data.
Original Answer:
I'll assume that you want to update a z-table and not a SAP standard table.
You will probably have to format your datafile a little bit to make it tab or comma delimited.
You can then upload the data file using cl_gui_frontend_services=>gui_upload
Then if you want to overwrite the existing data in the table you can use
modify zmydbtab from table it_importeddata.
If you do not want to overwrite existing entries you can use
insert zmydbtab from table it_importeddata accepting duplicate keys.
You will get a return code of sy-subrc = 4 if any of the keys already exist, but any new entries will be inserted.
Note
There are many reasons why you would NOT do this for an SAP standard table. Most prominent is that there is almost always more to the data model than what we are aware of. Also, when creating transactional data, there are often follow-on events or workflows that kick off, which will not happen if you're updating the database directly. As a rule of thumb, it is usually a bad idea to update SAP standard tables directly.
In that case try to find a BADI, or if that's not available, record a BDC and do the updates that way.
If the system landscape was set up correctly, your client would not need any kind of basis operations support whatsoever to perform the transports. So instead of re-inventing the wheel, I'd strongly suggest catching up on what the CTS and TMS can do once they're set up with sensible settings.
