Migrating from SQLOLEDB to MSOLEDBSQL: problem with Recordset update - visual-c++

During migration from SQLOLEDB to MSOLEDBSQL I am facing some issues with the ADO Update operation on a Recordset.
Our system uses classic ADO with the SQLOLEDB provider, and while migrating to MSOLEDBSQL it turns out that the two providers behave differently when executing Update on a Recordset.
The WHERE clause prepared by SQLOLEDB is different from the one prepared by MSOLEDBSQL.
Our table has a primary key made up of two columns; SQLOLEDB uses both of these columns in the WHERE clause, but MSOLEDBSQL uses only the first column of the primary key.
In my opinion the SQLOLEDB provider handles this better.
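For reference, here is a minimal sketch, written in VBA for brevity since the original code is C++ ADO (server, table, and column names are illustrative): when the recordset uses a client-side cursor, ADO's "Update Criteria" dynamic property controls which columns go into the WHERE clause it generates on Update, so pinning it to adCriteriaKey (or adCriteriaAllCols) may mask the difference between the two providers. With a server-side cursor the provider positions the update itself and this property does not apply.
' Sketch only - illustrative names; the same dynamic property is reachable
' from C++ through the #import-generated ADO interfaces.
Dim cn As New ADODB.Connection
Dim rs As New ADODB.Recordset

cn.Open "Provider=MSOLEDBSQL;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;"

rs.CursorLocation = adUseClient        ' client cursor: ADO builds the UPDATE statement itself
rs.Open "SELECT KeyA, KeyB, Payload FROM dbo.MyTable", cn, adOpenStatic, adLockOptimistic

' Decide which columns end up in the generated WHERE clause:
' adCriteriaKey = the key columns, adCriteriaAllCols = every column.
rs.Properties("Update Criteria") = adCriteriaKey

rs.Fields("Payload").Value = "new value"
rs.Update        ' question reports: SQLOLEDB filters on both key columns, MSOLEDBSQL only on the first
rs.Close
cn.Close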

Related

Combine multiple UNIONs for an INSERT using ADO with the ACE provider

I'm using Excel VBA and ADO (provider in the connection string = ACE) to insert records into a SharePoint list.
I already have the code working fine - directly from Excel to SharePoint - as long as I either:
Insert just one record, or
Use a loop to insert all records one-by-one
However, I want a faster operation. I want to do something more along the lines of
insert into tablename (col1, col2, col3)
select 'value1' as [col1],'value2' as [col2], 'value3' as [col3]
union all
select 'value1' as [col1],'value2' as [col2], 'value3' as [col3]
....etc etc
Apparently, when I use ACE as the provider (which is all I can do... I think?) I am under Microsoft Access's less-than-useful "rules" for append queries. There must be a FROM table, for example. I've seen solutions that spoof a "Dual" table to mimic Oracle's functionality, but my whole objective is NOT to use Access - this is straight from Excel to SharePoint, so creating and referencing Access database files and Access database tables violates my goal.
Is there ANY way to convince ADO w/ACE to do any type of reasonably efficient bulk insert? (I will be selecting values from spreadsheet cells, and I have no problem creating the VBA to build the SQL - I just need to know WHAT SQL, if any, will actually work.)
FYI, this is an example of my VBA code which works fine:
Dim cnt As ADODB.Connection
Dim mySQL As String

Set cnt = New ADODB.Connection
With cnt
    .ConnectionString = _
        "Provider=Microsoft.ACE.OLEDB.12.0;WSS;IMEX=0;RetrieveIds=Yes;DATABASE=https://redacted.ad.redacted.com/sites/sitenameredacted/;LIST=Isaac Test Excel To Sharepoint;"
    .Open
End With

mySQL = "insert into [Isaac Test Excel to Sharepoint] ([Column1],[Column2]) values ('col1_val1','col2_val2');"
cnt.Execute mySQL   ' single-row insert - works, but has to be repeated per record

Refresh Access Table from Excel

I have an Excel VBA program that pushes two sets of data, in a parent-child configuration, from an Excel sheet into an Access database. The connection and the pushes work, but I've run into a problem: if the last parent record was deleted in the database, the primary key and foreign key don't match. This is because the SQL query I have pulls the largest number in my AutoNumber primary key field, and even if I execute this query after inserting the parent data, it still pulls back the same primary key as before. This has me in a position where I need to requery the Access database to figure out the latest entry's primary key. However, entries don't show up in the database until the table has been refreshed.
So my question is how do I refresh the access table so the latest entry will show up in my query?
After much trial and error, I solved this problem by closing the connection to the parent table and reopening it before querying the primary key again.
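A minimal VBA sketch of that workaround, with illustrative names (cnt for the already-configured ADODB.Connection, tblParent with an AutoNumber key ParentID); closing and reopening the connection before re-running the MAX query is what makes the freshly inserted parent row visible:
' After executing the INSERT for the parent record:
cnt.Close
cnt.Open                      ' ConnectionString is retained, so this reopens the same database

Dim rs As ADODB.Recordset
Set rs = cnt.Execute("SELECT MAX(ParentID) AS LastId FROM tblParent")   ' illustrative table/column names

Dim lastParentId As Long
lastParentId = rs.Fields("LastId").Value   ' use this as the foreign key for the child rows
rs.Close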

Azure SQL External Table alternatives

Azure external tables between two Azure SQL databases on the same server don't perform well. This is known. I've been able to improve performance by defining a view from which the external table is defined, which works as long as the view can limit the data set returned, but this partial solution isn't enough. I'd love a way to move, at least nightly, all the data that has been inserted or updated across the full set of tables from the one database (dbo schema) to the second database (pushing into the altdbo schema). I think Azure Data Factory will let me do this, but I haven't figured out how. Any thoughts / guidance? The copy option doesn't copy over table schemas or updates.
Data Factory Mapping Data Flow can help you achieve that.
Use the Alter Row transformation and select an update method in the Sink:
this lets you copy newly inserted or updated data to the other Azure SQL database, based on the key column.
Alter Row: Use the Alter Row transformation to set insert, delete, update, and upsert policies on rows.
Update method: Determines what operations are allowed on your database destination. The default is to only allow inserts. To update, upsert, or delete rows, an alter-row transformation is required to tag rows for those actions. For updates, upserts and deletes, a key column or columns must be set to determine which row to alter.
Hope this helps.

Sybase ASE 15.5 Identity columns

I have created a couple of tables using the Sybase Central tool in Sybase ASE 15.5 (Sybase Anywhere). I defined a column as a primary key (int data type) and somehow the column has become an Identity column as well.
Now, from Sybase Central, there is no way I can remove the Identity property from that column, even though there is no data in this table or in any of the referenced tables.
Can anybody help? I don't want to use SET IDENTITY_INSERT; I want to remove the identity behavior altogether from this column.
Thanks
Your question is a little confusing, as I'm not sure which Sybase software or version you are using. Sybase ASE 15.5 is not the same as Sybase SQL Anywhere, but hopefully these steps will work regardless.
You cannot remove the identity behavior from a column, but you can alter the table to accomplish the same thing. Here are the steps you should take to preserve your data:
Ensure there are no indexes on the table.
Alter the table to add a new column with the same datatype as the current identity column.
Copy the data from the identity column to the new column.
Drop the identity column.
(Optional) If you've written any code against the table, you will probably want to rename the new column to the same name as the column that was just dropped.
-- add a replacement column with the same datatype as the identity column
alter table TABLE_NAME add NEW_COL int NULL
go
-- copy the identity values across
update TABLE_NAME set NEW_COL = ID_COL_NAME
go
-- drop the identity column
alter table TABLE_NAME drop ID_COL_NAME
go
-- rename the new column back (SQL Anywhere syntax; on ASE use sp_rename 'TABLE_NAME.NEW_COL', 'ID_COL_NAME')
alter table TABLE_NAME rename NEW_COL to ID_COL_NAME
go

Adding columns to a sybase table with unique auto_identity index option

I've inherited a Sybase database that has the 'unique auto_identity index' option enabled on it. As part of an upgrade process I need to add a few extra columns to the tables in this database i.e.
alter table mytable add <newcol> float default -1 not null
When I try to do this I get the following error:
Column names in each table must be unique, column name SYB_IDENTITY_COL in table #syb__altab....... is specified more than once
Is it possible to add columns to a table with this property enabled?
Update 1:
I created the following test that replicates the problem:
use master
go
sp_dboption 'esmdb', 'unique auto_identity index', true
go
use esmdb
go
create table test_unique_ids (test_col char)
go
alter table test_unique_ids add new_col float default -1 not null
go
The alter table command here produces the error. (Have tried this on ASE 15/Solaris and 15.5/Windows)
Update 2:
This is a bug in the Sybase dbisql interface (which the client tools Sybase Central and Interactive SQL use to access the database), and it only appears to affect tables with the 'unique auto_identity index' option enabled.
To work around the problem, use a different SQL client (via JDBC, for example) to connect to the database, or use isql on the command line.
Should be no problem to ALTER TABLE with such columns; the err msg indicates the problem regards something else. I need to see the CREATE TABLE DDL.
Even if we can't ALTER TABLE, which we will try first, there are several work-arounds.
Responses
Hah! Internal Sybase error. Open a TechSupport case.
Workaround:
1. Make sure you get the exact DDL (use sp_help). Note the IDENTITY columns and indices.
2. Create a staging table, exactly the same. Use the DDL from (1). Exclude the indices.
3. INSERT new_table SELECT old_table. If the table is large, break it into batches of 1000 rows per batch.
4. Now create the indices.
If the table is very large, AND time is an issue, then use bcp. You need to research that first, I am happy to answer questions afterwards.
When I ran your sample code, the first error I got was:
The 'select into' database option is not enabled for database 'mydb'. ALTER TABLE with data copy cannot be done. Set the 'select into' database option and re-run
This is no doubt because the data within your table needs copying out, since the new column is not null. I think this uses tempdb, and the error message you've posted refers to a temp table. Is it possible that the 'unique auto_identity index' option has been accidentally enabled for tempdb?
It's a bit of a shot in the dark, as I only have 12.5 to test on here, and it works for me. Or it could be a bug.
