What are the best ways to back up and restore an Azure SQL Database schema in the Azure cloud?
I have tried creating bacpac files, but the problem is that a bacpac is imported as a new database. I want to back up and restore a specific schema only, within the same database.
Another option I am looking at is generating a SQL script file containing both schema and data using SSMS, but the resulting script is huge.
Any help is greatly appreciated.
You can use the bcp utility to export and import the data quickly.
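A hedged sketch of the round trip, run once per table in the schema (server, database, credentials, and table names are placeholders):
rem export one table to a native-format data file
bcp dbo.MyTable out MyTable.dat -S myserver.database.windows.net -d MyDatabase -U myuser -P mypassword -n
rem load it back into the (already existing) table
bcp dbo.MyTable in MyTable.dat -S myserver.database.windows.net -d MyDatabase -U myuser -P mypassword -n
Note that bcp moves data only, so the table definitions must already exist on the target.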
I want to back up and restore specific schema only within the same database.
There is no native tool for Azure SQL Database that can back up and restore a specific schema.
The closest fit is a bacpac; however, it can only restore data into an empty or new database.
Therefore, a possible option is to move the data out and back in using ETL tools such as:
SSIS
ADF
Databricks
I want to make the server database match my local database using a dacpac file and Schema Compare in SSDT. I can add new tables/columns, which doesn't affect the existing data.
But when I change/rename a table's column name, the deployment seems to drop the existing data. I want a way to change or rename existing columns without removing the underlying data.
Does anyone have any idea how to achieve this?
Note:
I followed the link below to set up SSDT via Azure DevOps.
https://medium.com/synsoft-global/deploying-db-changes-using-ssdt-via-azure-devops-3907f326e80d
We are building a data migration pipeline using Azure Data Factory (ADF) to transfer data from one Cosmos DB instance to another. We plan to enable dual writes, so that we write to both databases before the migration begins, ensuring that if any data point changes during migration, both databases get the most up-to-date value. However, ADF only offers insert or upsert options. In our case, when an insert hits a 'conflict', we want to continue rather than fail the pipeline. Can anyone give any pointers on how to achieve that in ADF?
The other option would be to build our own custom tool using the Cosmos DB libraries to transfer the data.
If you are doing a live migration, ADF is not the right tool, as it is intended for offline migrations. If you are migrating from one Cosmos DB account to another, your best option is the Cosmos DB Live Data Migrator.
That tool also provides dead-letter support, which is another of your requirements.
I am trying to use Azure Data Factory to perform an incremental load on a database without using a watermark or change tracking technology. I do not have the rights to add watermarks to tables, I can only read data from the target database. The database system does not have an ability to enable change tracking technology. It is also a very large database, which is why I want to be able to incrementally load changes rather than dropping the entire database and re-uploading it every night.
Is there a way to only upload the changes without altering the on-premises database or am I SOL?
I am connecting to an old Sybase database on premises and uploading data to an Azure SQL Server Database.
I would suggest using Data Flow. It provides an 'upsert' option that lets you insert or update data in the Azure SQL database, so you don't need to drop the entire database and re-upload it every night.
Ref here: Sink transformation
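For intuition, the sink's upsert behaves roughly like a T-SQL MERGE on the key columns you choose in the sink settings; a minimal sketch, with hypothetical table and column names:
-- rows matching on the key are updated, new rows are inserted
MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name, t.ModifiedAt = s.ModifiedAt
WHEN NOT MATCHED THEN
    INSERT (Id, Name, ModifiedAt)
    VALUES (s.Id, s.Name, s.ModifiedAt);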
I have a backup file that came from Server A. I copied that .bak file to my local machine and restored the database in SQL Server Management Studio. After setting it up, I deployed it to Azure SQL Database. But the data in Server A has since changed, because it's still being used, so I need to get all those changes into the Azure SQL Database I just deployed. How am I going to do that?
Note: I'm using Azure for my server, and I have a local copy of the Server A database. In terms of data and structure, my local copy and the original Server A database were the same, but after a few days the data in Server A has been updated while my local DB is still the same as when I took the backup.
How can I update the DB in Azure to pick up all the changes in Server A?
You've got a few choices. It's all about migrating data, and it's also a question of which data you're going to migrate. Let's say it's a neat, complete replacement. Then I'd suggest looking at the bacpac mechanism. That's a way to export a database, its structure and data, and then import it into a new location. This is one mechanism for moving to Azure.
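If a full replacement is acceptable, the bacpac route can be scripted with the SqlPackage utility; a hedged sketch, where the server, database, and credential values are placeholders:
rem export the source database (schema and data) to a bacpac file
SqlPackage /Action:Export /SourceServerName:oldserver /SourceDatabaseName:MyDatabase /SourceUser:myuser /SourcePassword:mypassword /TargetFile:MyDatabase.bacpac
rem import it as a new database on the Azure SQL logical server
SqlPackage /Action:Import /TargetServerName:myserver.database.windows.net /TargetDatabaseName:MyDatabase /TargetUser:myuser /TargetPassword:mypassword /SourceFile:MyDatabase.bacpac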
If you can't simply replace everything, you need to look at other options. First, there's SSIS. You can build a pipeline to move the data you need. There's also export and import through sqlcmd, which can connect to Azure SQL Database. You can also look to a third party tool like Redgate SQL Data Compare as a way to pick and choose the data that gets moved. There are a whole bunch of other possible Extract/Transform/Load (ETL) tools out there that can help.
Do you want to sync schema changes as well as data changes, or just data? If it is just data, the best service to use would be the Azure Database Migration Service, which can copy the data delta to Azure incrementally, in both online and offline modes, and lets you decide on the schedule.
I have read this SO question, but mine is specifically about the "import" of the CSV, not how to access the blob to get the CSV out.
Which is the best way?
1) CSV stored in the blob: use a worker role to read the CSV from the blob, parse the data, and update the database.
2) Is SQL BulkCopy/BulkInsert an option? The challenge here is that it should not involve anything on-premises; everything stays within Azure: blob -> SQL Database.
3) Will Azure Automation help? Are there PowerShell scripts/workflows that handle such a bulk load of CSV data into Azure SQL DB? I haven't found any, though.
Are there other options that help import blob CSV data to SQL DB without having to write custom code?
Appreciate any thoughts...
Your first method would work. You could also use azcopy (http://aka.ms/azcopy) to download the file locally and then use BCP to load it into SQL; this way you won't have to write any code.
Azure Automation would help if you want to do this repeatedly. You should be able to set this up as a script even if one doesn't exist.
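A hedged sketch of that no-code route, using the current azcopy syntax; the account, container, SAS token, server, and table names are placeholders:
rem download the CSV from blob storage to the local machine
azcopy copy "https://<account_name>.blob.core.windows.net/<container_name>/data.csv?<sas_token>" "data.csv"
rem bulk load it; -c = character mode, -t , = comma delimiter, -F 2 = skip the header row
bcp dbo.MyTable in data.csv -S myserver.database.windows.net -d MyDatabase -U myuser -P mypassword -c -t , -F 2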
I know this is an outdated question, but for anyone looking for a quick way to do this, feel free to check my article on how to do it using a SQL procedure triggered by a Logic App.
In short, you run this on master:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UNIQUE_STRING_HERE'
Then you run this on the database:
CREATE DATABASE SCOPED CREDENTIAL BlobCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=SAS_TOKEN_HERE';
CREATE EXTERNAL DATA SOURCE AzureBlob
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://<account_name>.blob.core.windows.net/<container_name>',
CREDENTIAL = BlobCredential
);
And then:
BULK INSERT <my_table>
FROM '<file_name>.csv'
WITH (
DATA_SOURCE = 'AzureBlob',
FORMAT = 'CSV',
FIRSTROW = 2
);
Just wrap this insert in a procedure and execute it from the Logic App, as sketched below.
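A minimal sketch of that wrapper, reusing the placeholders from above (the procedure name is arbitrary):
CREATE PROCEDURE dbo.ImportCsvFromBlob
AS
BEGIN
    SET NOCOUNT ON;
    -- the same bulk insert as above, now callable from the Logic App
    BULK INSERT <my_table>
    FROM '<file_name>.csv'
    WITH (
        DATA_SOURCE = 'AzureBlob',
        FORMAT = 'CSV',
        FIRSTROW = 2
    );
END;
The Logic App then only needs a SQL Server connector step that runs EXEC dbo.ImportCsvFromBlob, either on a schedule or on a blob-created trigger.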
https://marczak.io/posts/azure-loading-csv-to-sql/
Or just use ADF, as shown here:
https://azure4everyone.com/posts/2019/07/data-factory-intro/
A late answer to an old question, but...
If you can use an Azure SQL Data Warehouse, you can take advantage of PolyBase to query the CSV data stored in the blob directly: https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-polybase-guide#export-data-to-azure-blob-storage. This allows you to map the data as an external table and query it dynamically.
This saves you the trouble of writing an external tool for extracting, parsing, and uploading the data to the Azure SQL database. Unfortunately, PolyBase only works with Azure SQL Data Warehouse, not Azure SQL Database, but you could set up something that reads the structured data from the warehouse into your solution.
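A hedged sketch of that external-table mapping (data source, file format, table, and column names are placeholders; it assumes a database scoped credential holding the storage account key):
-- external data source pointing at the blob container
CREATE EXTERNAL DATA SOURCE AzureBlobStore
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://<container_name>@<account_name>.blob.core.windows.net',
    CREDENTIAL = BlobStorageCredential
);
-- describe the CSV layout
CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (
    FORMAT_TYPE = DELIMITEDTEXT,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2)
);
-- map the file as a queryable external table
CREATE EXTERNAL TABLE dbo.CsvExternal
(
    Id INT,
    Name NVARCHAR(100)
)
WITH (
    LOCATION = '/<file_name>.csv',
    DATA_SOURCE = AzureBlobStore,
    FILE_FORMAT = CsvFormat
);
From there, a simple INSERT INTO ... SELECT from dbo.CsvExternal loads the rows into a regular table.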
I know this question is two years old, but for those just now searching on the topic, I'd like to mention that the Azure Feature Pack for SSIS makes this an easy task. In VS Data Tools, after installing the Azure Feature Pack, you open an empty SSIS project and 1) create an Azure Storage Connection Manager, 2) add a Data Flow Task and open it, 3) add a Blob Source to connect to the CSV, and 4) use the Destination Assistant to connect to the SQL table the data is going into. You can then execute this as a one-time load interactively inside the VS Data Tools IDE, or publish it to a SQL Server instance and create a recurring job.