On-premises databases to Azure SQL Database and sync continuously - azure

My requirements are as follows:
Move 3 local SAP databases to 3 Azure SQL databases.
Then sync daily transactions/data to Azure every night. If transactions from the local DB already exist in Azure, they should be updated; if not, they should be inserted.
The local systems will not stop after moving to Azure; they will keep running for about 6 months.
Note:
We cannot use the Azure Data Sync process because of its limitations - it only supports 500 tables, can't sync tables without primary keys, and doesn't sync views or procedures. It also increases database size on both sides (local and Azure).
An Azure Data Factory pipeline can fulfill my requirements, but I would have to create a pipeline and procedure manually for each table. (SAP has over 2,000 tables, which is not practical for me.)
We don't use Azure VMs or Managed Instance.
Can you guide me to the best solution to move and sync? I am new to Azure.
Thanks all.

Since you mentioned that ADF basically meets your needs, I will start from ADF. Actually, you don't need to create each pipeline manually, table by table. The creation can be done with the ADF SDK, a PowerShell script, or the REST API. Please refer to the official documentation: https://learn.microsoft.com/en-us/azure/data-factory/
So, if you can get the list of SAP table names (I found this thread: https://answers.sap.com/questions/7375575/how-to-get-all-the-table-names.html), you can loop over the list and execute the code to create the pipelines in a batch, as sketched below. Only the table name property needs to be set.
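For example, here is a rough sketch of that loop using the Python SDK (azure-mgmt-datafactory). It assumes you have already created the two linked services and a pair of parameterized datasets ("SapSqlTable" and "AzureSqlTable", both taking a "TableName" parameter); those names, the subscription, resource group and factory name are placeholders, not real resources:

```python
# Minimal sketch, not a finished solution: create one copy pipeline per SAP table
# with the ADF Python SDK. Linked services and the parameterized datasets
# "SapSqlTable" / "AzureSqlTable" are assumed to exist already.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlSink, CopyActivity, DatasetReference, PipelineResource, SqlServerSource)

sub_id, rg, factory = "<subscription-id>", "<resource-group>", "<data-factory-name>"
adf = DataFactoryManagementClient(DefaultAzureCredential(), sub_id)

table_names = ["MARA", "VBAK", "VBAP"]  # replace with the full list exported from SAP

for table in table_names:
    source_ref = DatasetReference(reference_name="SapSqlTable", type="DatasetReference",
                                  parameters={"TableName": table})
    sink_ref = DatasetReference(reference_name="AzureSqlTable", type="DatasetReference",
                                parameters={"TableName": table})
    copy = CopyActivity(name=f"Copy_{table}",
                        inputs=[source_ref], outputs=[sink_ref],
                        source=SqlServerSource(),   # on-premises SQL Server source
                        sink=AzureSqlSink())        # Azure SQL Database sink
    pipeline = PipelineResource(activities=[copy])
    # Pipeline names must be unique and contain only allowed characters, so
    # sanitize SAP names such as "/BIC/..." before using them here.
    adf.pipelines.create_or_update(rg, factory, f"copy_{table}".replace("/", "_"), pipeline)
```

A single parameterized pipeline driven by a Lookup/ForEach over the table list is another option if you prefer not to end up with 2,000 separate pipelines.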

Related

Azure SQL: How to merge multiple DBs into a single managing DB, while syncing any future changes bidirectionally

I have multiple devices with assignments, each generating data that is similar in structure, offline. Each device also periodically gets online to sync with an Azure SQL database that is separate and assigned only to it. The devices also receive new assignments through syncing with that Azure SQL database.
I want to combine these multiple databases into a single database for management, while bidirectionally getting updates whenever a sync goes through and also relaying any new assignments back to the separate databases.
Any help or ideas would be much appreciated.
You can use Azure SQL Data Sync for this purpose, which can update bi-directionally and can be scheduled to run according to your requirements. However, for multiple databases, you need to create multiple sync groups; a scripted sketch follows the link below.
Set up SQL Data Sync between databases in Azure SQL Database and SQL Server
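If you end up with many device databases, the sync groups can also be scripted instead of being created in the portal. Below is a rough, untested outline using the azure-mgmt-sql Python SDK; all names are placeholders, and the exact method and model names can differ between SDK versions, so treat it as a sketch only:

```python
# Rough sketch (untested): create one sync group per device database, with the
# single managing database as the hub. All names and credentials are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import SyncGroup

sub_id, rg, server, hub_db = "<sub-id>", "<resource-group>", "<server-name>", "<hub-db>"
sql = SqlManagementClient(DefaultAzureCredential(), sub_id)

device_dbs = ["device-db-01", "device-db-02"]  # one member database per device

for db in device_dbs:
    sync_group = SyncGroup(
        interval=300,                            # sync every 5 minutes (in seconds)
        conflict_resolution_policy="HubWin",     # or "MemberWin"
        sync_database_id=f"/subscriptions/{sub_id}/resourceGroups/{rg}"
                         f"/providers/Microsoft.Sql/servers/{server}/databases/<sync-metadata-db>",
        hub_database_user_name="<hub-admin>",
        hub_database_password="<hub-password>",
    )
    sql.sync_groups.begin_create_or_update(rg, server, hub_db, f"sync-{db}", sync_group).result()
```

Each device database still has to be added as a sync member and have its schema selected before the first sync, either in the portal or through the SDK's sync member operations.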

One-way sync SQL Server DBs with Azure private connectivity in place

What are possible solutions for a per-request or scheduled one-way sync of one SQL Server database to another in Azure?
Both DBs are configured to allow access only via private endpoints.
I've just started exploring options and would appreciate an expert's opinion on the question.
One-way replication, incremental data sync, and scheduled runs -- Azure Data Factory is the most suitable service to achieve these requirements.
Using ADF, you can incrementally load data from multiple tables in a SQL Server database to one or more databases in another (or the same) SQL Server by creating a pipeline with the Copy activity. You can also schedule the pipeline trigger based on your requirements.
The official Microsoft tutorial Incrementally load data from multiple tables in SQL Server to a database in Azure SQL Database using the Azure portal will help you create the ADF environment with linked services, datasets, and the Copy activity to accomplish your requirement (you can skip the self-hosted integration runtime setup, since it is only required when one of your databases is on-premises).
Once your pipeline has been created, you can schedule it by creating a new trigger. Follow Create a trigger that runs a pipeline on a schedule to create the trigger; a programmatic sketch follows below.
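For reference, the same trigger could also be created from code; below is a minimal sketch with the azure-mgmt-datafactory Python SDK, where the factory, pipeline and trigger names are placeholders you would replace with your own:

```python
# Minimal sketch: create and start a daily schedule trigger for an existing
# pipeline with the ADF Python SDK. All resource names are placeholders.
from datetime import datetime, timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, TriggerResource)

sub_id, rg, factory = "<subscription-id>", "<resource-group>", "<data-factory-name>"
adf = DataFactoryManagementClient(DefaultAzureCredential(), sub_id)

recurrence = ScheduleTriggerRecurrence(
    frequency="Day", interval=1,                          # run once per day
    start_time=datetime.utcnow() + timedelta(minutes=15),
    time_zone="UTC")
trigger = ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(reference_name="IncrementalCopyPipeline",
                                             type="PipelineReference"))])

adf.triggers.create_or_update(rg, factory, "DailyTrigger", TriggerResource(properties=trigger))
adf.triggers.begin_start(rg, factory, "DailyTrigger").result()  # triggers start in 'Stopped' state
```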

Azure data factory - Continue on conflict on insert

We are building a data migration pipeline using Azure Data Factory (ADF). We are transferring data from one Cosmos DB instance to another. We plan to enable dual writes, so that we write to both databases before the migration begins, to ensure that if any data point changes during migration, both databases get the most up-to-date data. However, in ADF only the Insert and Upsert options are available. Our case is: on insert, if there is a conflict, continue instead of failing the pipeline. Can anyone give any pointers on how to achieve that in ADF?
The other option would be to create our own custom tool using the Cosmos DB libraries to transfer the data.
If you are doing a live migration, ADF is not the right tool to use, as it is intended for offline migrations. If you are migrating from one Cosmos DB account to another, your best option is to use the Cosmos DB Live Data Migrator.
That tool also provides dead-letter support, which is another requirement you have.
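If you do end up writing the custom tool mentioned in the question, the "insert, but continue on conflict" behaviour is easy to get with the SDK. Here is a minimal sketch using the azure-cosmos Python package (endpoint, keys, database and container names are placeholders):

```python
# Minimal sketch: copy items and skip (rather than fail) on 409 conflicts,
# i.e. items that the dual write already created in the target account.
from azure.cosmos import CosmosClient
from azure.cosmos.exceptions import CosmosResourceExistsError

source = CosmosClient("<source-endpoint>", "<source-key>")
target = CosmosClient("<target-endpoint>", "<target-key>")
src = source.get_database_client("appdb").get_container_client("orders")
dst = target.get_database_client("appdb").get_container_client("orders")

skipped = 0
for item in src.read_all_items():
    # Drop Cosmos system properties (_rid, _etag, _ts, ...) before writing.
    body = {k: v for k, v in item.items() if not k.startswith("_")}
    try:
        dst.create_item(body=body)      # insert only, never overwrite
    except CosmosResourceExistsError:
        skipped += 1                    # already written by the dual write; keep going
print(f"done, {skipped} conflicting items skipped")
```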

How do I back up data from Azure Cosmos DB to Azure Blob storage during a specific stage?

I have an Azure Cosmos DB account. What I want to do is back up data which is one month old from Azure Cosmos DB to Azure Blob storage using my Node app. I have already created a pipeline and triggered it by using the create-run-pipeline API for Node.js (using Azure Data Factory). But I am not able to figure out how to make the pipeline select only the data which is one month old relative to the current date. Any suggestions?
EDIT: Actually, I want to run the API daily so that it backs up data which is one month old. For example, let's say I get 100 entries today in my Cosmos DB; the pipeline should select data older than the current date minus 30 days and back it up, so that at any point my Azure Cosmos DB holds only the most recent 30 days of data and the rest is backed up to Azure Blob.
Just a supplement to David's answer here. If you mean the Cosmos DB SQL API, it has an automatic backup mechanism; see Automatic and online backups:
With Azure Cosmos DB, not only your data, but also the backups of your data are highly redundant and resilient to regional disasters. The automated backups are currently taken every four hours and, at any point in time, the latest two backups are stored. If you have accidentally deleted or corrupted your data, you should contact Azure support within eight hours so that the Azure Cosmos DB team can help you restore the data from the backups.
However, you cannot access this backup directly; Azure Cosmos DB uses it only if a backup restore is initiated.
The document does, however, provide two options to manage your own backups:
1. Use Azure Data Factory to move data periodically to a storage of your choice.
2. Use the Azure Cosmos DB change feed to read data periodically, for full backups as well as for incremental changes, and store it in your own storage.
You can put the ADF copy activity on a trigger to transfer the data on a schedule. If you want to filter data by date, look at _ts in Cosmos DB, which holds each document's last modified time as a Unix epoch timestamp (in seconds); a query sketch follows below.
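To make the _ts filter concrete, here is a minimal sketch using the azure-cosmos Python SDK (account details and names are placeholders); the same WHERE clause can be used as the source query of the ADF copy activity:

```python
# Minimal sketch: select documents last modified more than 30 days ago using _ts,
# which Cosmos DB stores as a Unix epoch timestamp in seconds.
import time
from azure.cosmos import CosmosClient

client = CosmosClient("<endpoint>", "<key>")
container = client.get_database_client("mydb").get_container_client("mycontainer")

cutoff = int(time.time()) - 30 * 24 * 60 * 60    # now minus 30 days, in seconds

old_items = container.query_items(
    query="SELECT * FROM c WHERE c._ts < @cutoff",
    parameters=[{"name": "@cutoff", "value": cutoff}],
    enable_cross_partition_query=True)

for item in old_items:
    pass  # write the item to Blob storage here, then optionally delete it from Cosmos DB
```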
I'm not sure what pipeline you're referring to. That said: Cosmos DB doesn't have any built-in backup tools. You'd need to select and copy this data programmatically.
If you're using the MongoDB API, you could pass a query parameter to the mongoexport command-line tool (to serve as your date filter), but you'd still need to run mongoexport from your VM, write to a local directory, and then copy to Blob storage (I don't know if you can install/run the MongoDB tools in something like Azure Functions or a DevOps pipeline).

I need to push data from various SELECT statements to Azure SQL Database, best way to do so?

I have some T-SQL scripts which generate some data, and we manually copy the results into an Excel spreadsheet. We need a way to push this data into an Azure SQL database from a job, so that we can access it there and remove the manual process of uploading the information to the Azure SQL database every time. What is the best way to do this?
I assume you are trying to move data from an on-premises server to Azure. The simplest method may be Azure SQL Data Sync.
You could load the output of your queries into an on-premises table which syncs to Azure.
On all your SQL Server instances, you can create a linked server to the Azure SQL database. Once the linked server is created, you can insert directly into the Azure SQL database from your on-premises SQL Server instances.
Here is how you create the linked server.
The image below shows how to insert data into the Azure SQL database using the linked server.
For detailed steps, you can visit this tutorial.
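If you want to run that insert from a scheduled job rather than typing it in SSMS, a small script can execute the same T-SQL through the linked server. Here is a minimal sketch using Python and pyodbc; the connection string, linked server name and table names are all placeholders:

```python
# Minimal sketch: run the INSERT ... SELECT through the linked server from a
# scheduled job. Server, database, linked server and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=onprem-sql;DATABASE=ReportingDB;Trusted_Connection=yes;")

push_sql = """
INSERT INTO [AzureSqlLinkedServer].[TargetDb].[dbo].[DailyReport] (Col1, Col2)
SELECT Col1, Col2
FROM dbo.SomeReportQuery;   -- the T-SQL that currently feeds the spreadsheet
"""

cursor = conn.cursor()
cursor.execute(push_sql)     # the four-part name routes the insert through the linked server
conn.commit()
conn.close()
```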
I think you can consider Azure Data Factory.
The Azure Data Factory Copy activity can help you use your T-SQL scripts to move data to another Azure SQL database.
For more details, please see the Azure tutorial: Copy multiple tables in bulk by using Azure Data Factory.
Once the pipeline is created, you can trigger and monitor the pipeline runs.
Trigger the pipeline on a schedule:
You can create a schedule trigger to run the pipeline periodically (hourly, daily, and so on). In that tutorial, you create a trigger that runs every minute until the end date and time you specify.
Please see: Trigger the pipeline on a schedule.
This can help you push the data to Azure SQL Database automatically.
Hope this helps.
You can try an SSIS package, which automates the process of uploading data into an Azure SQL database. I have not used SSIS for Azure, but I have used it to load data from CSV/XLS/XLSX files into a SQL Server database. I referred to this article, which may be helpful.
