Oracle database sync using Spring Integration

I would greatly appreciate it if someone could share whether it is possible to build a near-real-time Oracle database sync application using Spring Integration. It's a lightweight requirement where only certain data fields across a couple of tables need to be copied over as soon as they change in the source database. Any thoughts on what architecture could be used would greatly help. Also, is there any Oracle utility that can be leveraged along with SI?

I'd say that an Oracle trigger is what you need. When the main data changes, the trigger should copy those changes to a staging table in the same DB.
From SI you should use <int-jdbc:inbound-channel-adapter> to read and remove data from that sync table. Within the same transaction you then use <int-jdbc:outbound-channel-adapter> to move the data to the other DB.
The key feature here is an XA transaction, because you are dealing with two DBs; conveniently, both of them are Oracle.
Of course you can try a 1PC (one-phase commit) approach instead, but that will take more work to get right.
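A minimal sketch of that flow using Spring Integration's Java annotation config (the table and column names, the two DataSource beans, and the NamedParameterJdbcTemplate against the target DB are all hypothetical; the XA/JTA wiring that makes the poll transactional is omitted):

```java
import java.util.List;
import java.util.Map;

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.annotation.ServiceActivator;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.jdbc.JdbcPollingChannelAdapter;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.jdbc.core.namedparam.SqlParameterSource;
import org.springframework.stereotype.Component;

@Configuration
@EnableIntegration
public class OracleSyncFlow {

    // Polls the staging table that the trigger fills. The update SQL deletes
    // exactly the rows the select just returned; to avoid losing rows, the
    // poller should run inside a transaction (ideally XA across both DBs).
    @Bean
    @InboundChannelAdapter(channel = "syncChannel", poller = @Poller(fixedDelay = "1000"))
    public MessageSource<Object> syncQueueSource(DataSource sourceDataSource) {
        JdbcPollingChannelAdapter adapter = new JdbcPollingChannelAdapter(
                sourceDataSource, "SELECT id, field_a, field_b FROM sync_queue");
        adapter.setUpdateSql("DELETE FROM sync_queue WHERE id IN (:id)");
        return adapter;
    }
}

// Stands in for <int-jdbc:outbound-channel-adapter>: copies each polled batch
// (a List of column-name -> value Maps) into the second Oracle database.
@Component
class TargetWriter {

    private final NamedParameterJdbcTemplate targetJdbc; // built on the target DataSource

    TargetWriter(NamedParameterJdbcTemplate targetJdbc) {
        this.targetJdbc = targetJdbc;
    }

    @ServiceActivator(inputChannel = "syncChannel")
    public void copyRows(List<Map<String, Object>> rows) {
        targetJdbc.batchUpdate(
                "INSERT INTO target_copy (id, field_a, field_b) VALUES (:ID, :FIELD_A, :FIELD_B)",
                rows.stream().map(MapSqlParameterSource::new).toArray(SqlParameterSource[]::new));
    }
}
```

The remove-on-read behavior comes from the adapter's update SQL, which Spring Integration runs in the same poll as the select.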

Related

NodeJS based ETL catching updates

My work environment is MS SQL Server 2016 and I need to create a NodeJS ETL tool that captures all inserts and updates for a large-scale DB between two servers. I did my research and found a couple of ETL tools such as Nextract and Empujar, but neither has examples or connectors for MSSQL. They claim to support MSSQL, but you'd still need to build the connections from the ground up. However, I think I can build a simple ETL tool that selects all the records from those tables using NodeJS; that's no issue, but how would I tackle the updates?
Now you might think: why can't you have some INSERT and UPDATE triggers? Well, the issue is that our ERP system is very fragile and it breaks once we have triggers set up.
All I need the ETL tool to do is constantly check for new data, and if a row gets INSERTED or UPDATED, pass it to the other server (the real meaning of ETL). Appreciate all the help!
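Since triggers are off the table, one common trigger-free technique is a rowversion column: SQL Server bumps it automatically on every INSERT and UPDATE, so a poller only needs to remember the highest value it has seen and ask for anything newer. A rough sketch of that loop, shown in Java/JDBC for illustration rather than NodeJS (the table, columns, and connection details are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Poll a tracked table by its rowversion column and upsert changed rows into
// the target server. Assumes "dbo.orders" has a rowversion column "row_ver".
public class RowversionPoller {

    public static void main(String[] args) throws Exception {
        byte[] lastSeen = new byte[8]; // rowversion is binary(8); start from zero

        try (Connection src = DriverManager.getConnection(
                     "jdbc:sqlserver://source-server;databaseName=erp;user=etl;password=...");
             Connection dst = DriverManager.getConnection(
                     "jdbc:sqlserver://target-server;databaseName=copy;user=etl;password=...")) {

            while (true) {
                try (PreparedStatement ps = src.prepareStatement(
                        "SELECT id, col_a, col_b, row_ver FROM dbo.orders "
                        + "WHERE row_ver > ? ORDER BY row_ver")) {
                    ps.setBytes(1, lastSeen);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            upsert(dst, rs.getLong("id"), rs.getString("col_a"), rs.getString("col_b"));
                            lastSeen = rs.getBytes("row_ver"); // rows arrive in row_ver order
                        }
                    }
                }
                Thread.sleep(5_000); // poll interval
            }
        }
    }

    // MERGE handles both newly inserted and updated rows on the target side.
    private static void upsert(Connection dst, long id, String a, String b) throws Exception {
        try (PreparedStatement ps = dst.prepareStatement(
                "MERGE dbo.orders AS t USING (VALUES (?, ?, ?)) AS s (id, col_a, col_b) "
                + "ON t.id = s.id "
                + "WHEN MATCHED THEN UPDATE SET col_a = s.col_a, col_b = s.col_b "
                + "WHEN NOT MATCHED THEN INSERT (id, col_a, col_b) VALUES (s.id, s.col_a, s.col_b);")) {
            ps.setLong(1, id);
            ps.setString(2, a);
            ps.setString(3, b);
            ps.executeUpdate();
        }
    }
}
```

In real use the last-seen rowversion would be persisted between runs; note this approach catches inserts and updates but not deletes.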

How to bulk delete (say, millions of) documents spread across millions of logical partitions in Cosmos DB SQL API?

MS Azure documentation does not say anything about this. The official bulk executor documentation talks only about insert and update options, not delete. There is a suggested server-side JavaScript stored procedure, which sounds very good, but it requires us to input the partition key value. That won't work when our documents are spread across millions of logical partitions.
This is a very simple business need. While migrating a huge volume of data into a SQL API Cosmos collection, if we insert some wrong data, there seems to be no option to delete it other than restoring to a previous state. I have explored for a few hours now but couldn't find a solution. I even raised a case with MS support; they directed me to some .NET code, which does not look straightforward. What if someone doesn't know .NET?
Can't we easily bulk delete docs spread across several logical partitions in the MS Cosmos SQL API? It is frustrating.
I hope you can provide some accurate details on how to achieve this, with some simple, straightforward sample code and steps. I hope MS and Cosmos DB experts will share their views as well.
Even raised a case with MS support, they directed me to some .NET code, which does not look straightforward.
Obviously, you have already made some efforts to find a solution. Apart from what you tried, there are two supported scenarios:
1. Bulk delete stored procedure: https://github.com/Azure/azure-cosmosdb-js-server/blob/master/samples/stored-procedures/bulkDelete.js
2. Bulk delete executor:
.NET: https://github.com/Azure/azure-cosmosdb-bulkexecutor-dotnet-getting-started/blob/master/BulkDeleteSample/BulkDeleteSample/Program.cs
Java: https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started/blob/master/samples/bulkexecutor-sample/src/main/java/com/microsoft/azure/cosmosdb/bulkexecutor/bulkdelete/BulkDeleter.java
So far, only the above official solutions are supported. Another workaround is TTL in Cosmos DB. I believe you have your own logic to judge which part of the data is correct and which part is wrong and should be deleted. You could set a TTL on the bad documents so that they are removed automatically as soon as they expire.
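For the TTL route, a sketch with the azure-cosmos Java SDK v4 (the account, database, container, and field names such as migrationBatch and pk are placeholders): setting the container default TTL to -1 enables expiry without expiring anything by default, so only the documents you stamp with a per-item ttl get purged, whatever logical partition they live in.

```java
import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class TtlCleanup {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<account>.documents.azure.com:443/") // placeholder
                .key("<key>")
                .buildClient();
        CosmosContainer container = client.getDatabase("mydb").getContainer("mycoll");

        // 1. Enable TTL on the container with "no default expiry": items only
        //    expire if they carry their own "ttl" field.
        CosmosContainerProperties props = container.read().getProperties();
        props.setDefaultTimeToLiveInSeconds(-1);
        container.replace(props);

        // 2. Stamp every document matching your "wrong data" predicate with a
        //    short per-item TTL; Cosmos purges them in the background, across
        //    all logical partitions.
        String query = "SELECT * FROM c WHERE c.migrationBatch = 'bad-batch-42'"; // illustrative predicate
        container.queryItems(query, new CosmosQueryRequestOptions(), ObjectNode.class)
                .forEach(doc -> {
                    doc.put("ttl", 60); // seconds until Cosmos deletes the item
                    container.upsertItem(doc,
                            new PartitionKey(doc.get("pk").asText()), // "pk" = your partition key field
                            new CosmosItemRequestOptions());
                });
    }
}
```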
Has anyone tried this? It looks like a good solution in Java:
https://github.com/Azure/azure-cosmosdb-bulkexecutor-java-getting-started#bulk-delete-api
If you write a batch job that deletes those documents overnight, driven by some date configuration, you can achieve this. Here is an article on how to do it:
https://medium.com/@vaibhav.medavarapu/bulk-delete-documents-from-azure-cosmos-db-using-asp-net-core-8bc95dd20411
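That article uses ASP.NET Core; a rough Java equivalent of the same per-document loop with the azure-cosmos v4 SDK might look like this (the createdDate filter and the pk partition-key field are assumptions):

```java
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosItemRequestOptions;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class NightlyPurge {

    // Deletes every document older than the given date, one partition-key-aware
    // delete at a time. Field names ("createdDate", "pk") are illustrative.
    static void purgeOlderThan(CosmosContainer container, String isoDate) {
        String query = "SELECT c.id, c.pk FROM c WHERE c.createdDate < '" + isoDate + "'";
        container.queryItems(query, new CosmosQueryRequestOptions(), ObjectNode.class)
                .forEach(doc -> container.deleteItem(
                        doc.get("id").asText(),
                        new PartitionKey(doc.get("pk").asText()),
                        new CosmosItemRequestOptions()));
    }
}
```

Each delete costs RUs, so a job like this should be throttled or scheduled off-peak, which is exactly why the article runs it overnight.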

Some input on how to proceed with the migration from SQL Server

I'm migrating from SQL Server to Azure SQL and I'd like to ask those of you who have more experience in Azure (I have basically none) some questions, just to understand what I need to do to get the best migration.
Today I do a lot of cross-database queries in some of my tasks that run once a week. I execute SPs and run selects, inserts, and updates across the DBs. I solved the execution of SPs by using external data sources and sp_execute_remote. But as far as I can see, it's only possible to select from an external database, meaning I won't be able to do any inserts or updates across the DBs. Is that correct? If so, what's the best way to solve this problem?
I also read that cross-DB calls are slow. Does this mean slower than in SQL Server? I want to know if I'll face a slower process compared to what I have today.
What I really need is some good guidelines on how to do the best migration without spending loads of time on trial and error. I appreciate any help in this matter.
Cross-database transactions are not supported in Azure SQL DB. You connect to a specific database and can't use three-part names or the USE syntax.
You could open two different connections from your program, one to each database. That doesn't give you any kind of transactional consistency, but it would allow you to retrieve data from one Azure SQL DB and insert it into another.
So, at least for now, if you want your database in Azure and you can't avoid cross-database transactions, you'll need to host SQL Server on an Azure VM.
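A small sketch of that two-connection approach with plain JDBC (server, credential, and table names are examples):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Read from one Azure SQL database and write to another over separate
// connections. There is NO shared transaction here: if the insert fails,
// nothing on the read side rolls back, so the job must be safe to re-run.
public class CrossDbCopy {
    public static void main(String[] args) throws Exception {
        String url1 = "jdbc:sqlserver://myserver.database.windows.net;databaseName=db_a;user=etl;password=...";
        String url2 = "jdbc:sqlserver://myserver.database.windows.net;databaseName=db_b;user=etl;password=...";

        try (Connection from = DriverManager.getConnection(url1);
             Connection to = DriverManager.getConnection(url2);
             PreparedStatement select = from.prepareStatement(
                     "SELECT id, amount FROM dbo.weekly_totals");
             PreparedStatement insert = to.prepareStatement(
                     "INSERT INTO dbo.weekly_totals_copy (id, amount) VALUES (?, ?)")) {

            try (ResultSet rs = select.executeQuery()) {
                while (rs.next()) {
                    insert.setLong(1, rs.getLong("id"));
                    insert.setBigDecimal(2, rs.getBigDecimal("amount"));
                    insert.addBatch();
                }
            }
            insert.executeBatch(); // batched inserts into the second database
        }
    }
}
```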

SQLite: The database file is locked during Insert/Delete

I am using a C++ shell extension DLL that reads and writes data in SQLite database tables. There is another application (an exe) that also accesses all the tables.
Sometimes my DLL raises the exception "The database file is locked" when I try to Delete/Insert/Update rows in the SQLite database tables. This happens because the other application is accessing the tables at the same time.
Is there any way to resolve this issue from my DLL? Can I use the solution mentioned in this link: http://stackoverflow.com/questions/6455290/implementing-sqlite3-busy-timeout-in-an-ios-app ?
In the current code, I am using the CppSQLite3.cpp method execQuery(const char* szSQL) to execute the SQL query.
Please advise.
First of all, you should know that SQLite does database-level locking. When you start a transaction and the other application tries to write something to the same database, it gets "database is locked", and SQLite automatically retries the query for up to the sqlite3_busy_timeout interval before failing.
So the trick is to keep your transactions as short as possible, i.e.:
1. Begin a transaction
2. Update/Delete/Insert
3. Commit
and to have nothing else between these three steps.
Also, increase your sqlite3_busy_timeout interval to suit your application, depending on how large your transactions are.
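For illustration, here are both points (a busy timeout plus a minimal transaction) expressed through JDBC with the xerial sqlite-jdbc driver; in the asker's C++ code the same effect comes from calling sqlite3_busy_timeout() on the handle, or from the PRAGMA shown below, which works through any binding. The items table is made up:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// 1) Give writers time to wait out a lock instead of failing immediately.
// 2) Keep the write transaction as short as possible: begin, write, commit.
public class ShortSqliteWrite {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:shared.db");
             Statement st = conn.createStatement()) {

            // Wait up to 5 s for the other process to release its lock before
            // surfacing "database is locked".
            st.execute("PRAGMA busy_timeout = 5000");

            // Begin / write / commit with nothing else in between.
            conn.setAutoCommit(false);
            st.executeUpdate("UPDATE items SET status = 'done' WHERE id = 42");
            conn.commit();
            conn.setAutoCommit(true);
        }
    }
}
```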
You can try WAL mode, where reading and writing to SQLite can happen at the same time. But it comes with its own set of disadvantages; you can refer to the SQLite documentation.
SQLite has some restrictions regarding multiple users and multiple transactions. That means you can't read/write the same resource from different transactions, and the database is locked while it is being updated.
Here are some links that might help you:
http://sqlite.org/c3ref/busy_timeout.html
http://www.sqlite.org/c3ref/busy_handler.html
Good luck.

Subsonic - let customers switch the database

I am new to SubSonic and I'd like to know about best practices for the following scenario:
SubSonic supports multiple database systems, e.g. SQL Server and MySQL. Our customers need to decide, while deploying our application to their servers, which database system should be used. Long story short: the providerName, normally specified within the application configuration, should be configurable after the application is finished.
How can this be done? Do I have to generate separate data libraries for each database system I want to support?
Thank you in advance
Marco
No, you do not need to generate separate libraries.
However, you cannot use raw SQL strings, as you have understood; you always need to go through SubSonic's SQL-generation code.
It is also good to run some tests on the different databases, because not all code has been 100% tested in every case.
