Over the last few years I have created a few Azure SQL servers with SSISDB (the Integration Services catalog) and the corresponding Azure-SSIS Integration Runtime in ADF.
From my SSISDB experience on-premises, I always remember to set the Operations Log Retention Period to a few weeks and to lower the number of retained project versions. But a friendly colleague told me: "It's fine that you're setting the cleanup parameters, but there is no background job for cleaning up the logs. So you have to call the cleanup stored procedures in SSISDB yourself from ADF or other automation."
I tried to google for Azure SSISDB and log cleanup, and I found articles describing how "you can call" the SSISDB cleanup stored procedures yourself. But it's not clear whether Azure SQL does it automatically or not. On an on-premises SQL Server there is a SQL Agent job that does this every night, but Azure SQL Database does not have SQL Agent available, so I don't think the SSISDB log cleanup runs automatically in Azure. Right now I have created a pipeline in ADF that calls the two cleanup stored procedures in SSISDB (sketched below), just to be sure the cleanup is done correctly.
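For reference, the script that pipeline runs looks roughly like this; it assumes the standard SSISDB maintenance procedures that the on-premises "SSIS Server Maintenance Job" calls, and it must be run while connected to the SSISDB database:

-- Run while connected to SSISDB.
-- Remove operation logs older than the configured retention window.
EXEC [internal].[cleanup_server_retention_window];

-- Trim old project versions down to the configured maximum per project.
EXEC [internal].[cleanup_server_project_version];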
Does anyone know whether Azure SQL calls the cleanup stored procedures in the background, or do we need to start the cleanup ourselves using ADF or other Azure automation?
In ADF we can create an automatic cleanup process.
I created a SQL server in the Azure portal, created a database with three tables, and created a stored procedure that deletes data older than seven days using the code below:
CREATE PROCEDURE dbo.DeleteRecent
AS
BEGIN
    -- Remove rows older than seven days from each table (RDate is the row's date column)
    DELETE FROM dbo.Student
    WHERE RDate < DATEADD(day, -7, GETDATE());

    DELETE FROM dbo.product
    WHERE RDate < DATEADD(day, -7, GETDATE());

    DELETE FROM dbo.employee
    WHERE RDate < DATEADD(day, -7, GETDATE());
END
GO
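A quick manual test of the procedure (assuming each of the three tables has an RDate column, as above):

-- Run the cleanup and then verify that no rows older than seven days remain
EXEC dbo.DeleteRecent;

SELECT COUNT(*) AS OldRows
FROM dbo.Student
WHERE RDate < DATEADD(day, -7, GETDATE());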
In Azure Data Factory I created a pipeline with a Copy activity and configured its source and sink datasets (screenshots of the source and sink settings omitted). I chained the Copy activity to a Stored procedure activity and selected the SQL stored procedure above in that activity. I also added a Delete activity after the stored procedure step. When I ran the pipeline it completed successfully, and the old data and files were deleted according to my requirements (output screenshot omitted). Finally, I added a schedule trigger to the pipeline according to my requirements.
Unable to make a copy of the database using the following SQL command or through the Azure portal
CREATE DATABASE mydatabase_copy AS COPY OF mydatabase;
Unable to make a copy of the database on Azure SQL Server, no wizard is present
We know there is a way of creating a bacpac and restoring it, but this is a completely manual process and takes too much time.
We need some automated way to achieve this.
CREATE DATABASE ... AS COPY OF is not available in Azure SQL Managed Instance, only in Azure SQL Database. Use point-in-time restore instead:
Use point-in-time restore (PITR) to create a database as a copy of
another database from some time in the past. This article describes
how to do a point-in-time restore of a database in Azure SQL Managed
Instance.
Point-in-time restore is useful in recovery scenarios, such as
incidents caused by errors, incorrectly loaded data, or deletion of
crucial data. You can also use it simply for testing or auditing.
Backup files are kept for 7 to 35 days, depending on your database
settings.
You can automate this with PowerShell.
I've created a backup of my local database through "Export Data-tier Application" and saved the file to Azure Blob storage.
In the Azure portal, I choose my SQL server and import a new database. I select the backup from the blob and wait a long time for the DB creation. It stays stuck at 1% the whole time.
After 40 minutes, I get this message every single time I try to create the database:
The ImportExport operation with Request Id
'f6743e06-592d-4531-b319-4297b345f744e' failed due to 'Could not
import package. Warning SQL0: A project which specifies SQL Server
2019 or Azure SQL Database Managed Instance as the target platform may
experience compatibility issues with Microsoft Azure SQL Database v12.
Warning SQL72012: The object [data_0] exists in the target, but it
will not be dropped even though you selected the 'Generate drop
statements for objects that are in the target database but that are
not in the source' check box. Warning SQL72012: The object [log]
exists in the target, but '.
This is very frustrating; it's just a database with tables (with no data) that only weighs 25 MB. I'm following every single tutorial to make this work, every single step, and I always get that error, no matter which database name I choose.
Any help will be appreciated.
Thanks.
Instead of going through the process of creating a bacpac, uploading it to an Azure Storage account, and then failing at the end to import it into Azure SQL, you can easily migrate that SQL Server database to Azure using the Data Migration Assistant (DMA).
You just have to create an empty Azure SQL database, and DMA does the rest. You can download it from here.
What are possible solutions for doing an on-demand or scheduled one-way sync of one SQL Server database to another in Azure?
Both DBs are configured to allow access only via private endpoints.
I've just started exploring options and would appreciate expert opinions on the question.
One-way replication, incremental data sync, and scheduled execution: Azure Data Factory is the most suitable service for these requirements.
Using ADF, you can incrementally load data from multiple tables in a SQL Server database into one or more databases in another (or the same) SQL Server by creating a pipeline with the Copy activity. You can also schedule the pipeline trigger based on your requirements.
The official Microsoft tutorial Incrementally load data from multiple tables in SQL Server to a database in Azure SQL Database using the Azure portal will help you create the ADF environment with linked services, datasets, and the Copy activity to accomplish your requirement (you can skip the self-hosted integration runtime setup, which is only needed when one of your databases is on-premises). A sketch of the tutorial's watermark pattern is shown below.
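As a rough illustration of the watermark idea, with table, column, and procedure names chosen to mirror the tutorial (treat them as placeholders):

-- Example source table; LastModifytime drives the incremental load
CREATE TABLE dbo.customer_table
(
    CustomerId     INT PRIMARY KEY,
    Name           NVARCHAR(100),
    LastModifytime DATETIME2 NOT NULL
);

-- Watermark table: one row per synced table, holding the last copied timestamp
CREATE TABLE dbo.watermarktable
(
    TableName      NVARCHAR(255) NOT NULL PRIMARY KEY,
    WatermarkValue DATETIME2 NOT NULL
);

-- Seed the watermark with an initial low value
INSERT INTO dbo.watermarktable (TableName, WatermarkValue)
VALUES ('customer_table', '2020-01-01');
GO

-- Query used as the Copy activity source: only rows changed since the last run
SELECT *
FROM dbo.customer_table
WHERE LastModifytime > (SELECT WatermarkValue
                        FROM dbo.watermarktable
                        WHERE TableName = 'customer_table');
GO

-- Stored procedure the pipeline calls after a successful copy to advance the watermark
CREATE PROCEDURE dbo.usp_write_watermark
    @LastModifiedtime DATETIME2,
    @TableName        NVARCHAR(255)
AS
BEGIN
    UPDATE dbo.watermarktable
    SET WatermarkValue = @LastModifiedtime
    WHERE TableName = @TableName;
END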
Once your pipeline has been created, you can schedule it by creating a new trigger. Follow Create a trigger that runs a pipeline on a schedule to set one up.
My requirements are as below:
Move 3 local SAP databases to 3 Azure SQL databases.
Then sync daily transactions or data to Azure every night. If a transaction from the local DB already exists in Azure it should be updated; if not, it should be inserted.
The local systems will not stop after moving to Azure; they will keep running for about 6 months.
Note:
We cannot use the Azure Data Sync process because of its limitations: it supports only 500 tables, cannot sync tables without primary keys, and does not sync views or procedures. It also increases the database size on both sides (local and Azure).
An Azure Data Factory pipeline can fulfill my requirements, but I would have to create a pipeline and procedure manually for each table (SAP has over 2,000 tables, which is not practical for me).
We don't use Azure VMs or Managed Instance.
Can you guide me to the best solution to move and sync? I am new to Azure.
Thanks all.
Since you mentioned that ADF basically meets your needs, I will start from ADF. Actually, you don't need to manually create the pipeline for each table one by one. The creation can be done with the ADF SDK, a PowerShell script, or the REST API. Please refer to the official documentation: https://learn.microsoft.com/en-us/azure/data-factory/
So, if you can get the list of SAP table names (I found this thread: https://answers.sap.com/questions/7375575/how-to-get-all-the-table-names.html), you can loop over the list and execute the code to create the pipelines in a batch. Only the table name property needs to be set. One way to pull that table list from the source SQL Server is sketched below.
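As an aside (my own suggestion, not part of the linked thread): if the SAP databases run on SQL Server, the table list that feeds the loop can also be read straight from the system catalog of the source database:

-- List user tables in the source database; feed this list to the script
-- that creates one ADF pipeline per table.
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;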
I have some T-SQL scripts that generate data, which we then manually copy into an Excel spreadsheet. We need a way to push this data into an Azure SQL database from a job, so that we can access it there and remove the manual process of uploading the information to the Azure SQL database every time. What is the best way to do this?
I assume you are trying to move data from an on-premises server to Azure. The simplest method may be Azure SQL Data Sync.
You could load your data from your queries into an on-premises table which syncs to Azure, roughly as sketched below.
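As a rough sketch (the table and column names are made up for illustration; Data Sync requires a primary key on every synced table):

-- Staging table that is a member of the Data Sync sync group
CREATE TABLE dbo.ReportStaging
(
    Id       INT IDENTITY(1,1) PRIMARY KEY,      -- Data Sync needs a primary key
    Metric   NVARCHAR(100)  NOT NULL,
    Value    DECIMAL(18, 2) NOT NULL,
    LoadedAt DATETIME2      NOT NULL DEFAULT SYSUTCDATETIME()
);

-- The scheduled job runs the existing T-SQL and lands the results here;
-- Data Sync then pushes the rows to the Azure SQL database.
INSERT INTO dbo.ReportStaging (Metric, Value)
SELECT Metric, Value
FROM (VALUES (N'DailyTotal', 123.45)) AS q(Metric, Value);  -- replace with the real report query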
On all your SQL Server instances, you can create a linked server to the one Azure SQL database. Once the linked server is created, you can insert directly into the Azure SQL database from your on-premises SQL Server instances.
Here is how you create the linked server and then insert data into the Azure SQL database through it (the screenshots from the original answer are omitted; a T-SQL sketch follows below).
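A minimal sketch, assuming SQL authentication and the MSOLEDBSQL OLE DB provider; the server, database, login, and table names are placeholders:

-- Create a linked server that points at the Azure SQL database
EXEC sp_addlinkedserver
    @server     = N'AzureSqlDb',                      -- local name for the linked server
    @srvproduct = N'',
    @provider   = N'MSOLEDBSQL',                      -- SQL Server OLE DB provider
    @datasrc    = N'yourserver.database.windows.net', -- placeholder Azure SQL server
    @catalog    = N'YourAzureDatabase';               -- placeholder database name

-- Map local logins to a SQL login on the Azure SQL database
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'AzureSqlDb',
    @useself     = 'FALSE',
    @locallogin  = NULL,
    @rmtuser     = N'your_sql_login',                 -- placeholder login
    @rmtpassword = N'your_password';                  -- placeholder password

-- Insert query results directly into the Azure table via four-part naming
INSERT INTO [AzureSqlDb].[YourAzureDatabase].[dbo].[TargetTable] (Metric, Value)
SELECT Metric, Value
FROM dbo.LocalSourceTable;                            -- placeholder local table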
For detailed steps, you can visit this tutorial.
I think you can consider Azure Data Factory.
The Azure Data Factory Copy activity can help you use your T-SQL scripts to move data to another Azure SQL database.
For more details, please see the Azure tutorial: Copy multiple tables in bulk by using Azure Data Factory.
When the pipeline is created, you can trigger and monitor the pipeline runs.
Trigger the pipeline on a schedule:
You can create a scheduler trigger to schedule the pipeline to run periodically (hourly, daily, and so on). In this procedure, you create a trigger to run every minute until the end date and time that you specify.
Please see: Trigger the pipeline on a schedule.
This can help you push the data to Azure SQL Database automatically.
Hope this helps.
You can try an SSIS package, which automates the process of uploading data into the Azure SQL database. I have not used SSIS for Azure, but I have used it to sink data from CSV/XLS/XLSX files into a SQL Server database. I referred to this article, which may be helpful.