We have SQL Server running on a VM and an Azure SQL database, and we want to migrate/restore the VM database to the Azure database. I know how to do that manually via SSMS, but we need to run it daily, so we want to schedule it. I can't find T-SQL scripts to do this. Is there any way?
You can use the PowerShell scripts below with Azure Automation to schedule the creation of the bacpac and the import into Azure SQL Database.
First you need to create a Blob storage account, then create a container on the storage account.
Set-AzureSubscription -CurrentStorageAccountName "YourStorageAccountName" -SubscriptionName $subscription
New-AzureStorageContainer -Name "mycontainer" -Permission Off
Now you need to create a bacpac using SqlPackage. You can schedule the bacpac creation with Task Scheduler on Windows or with PowerShell.
SqlPackage /Action:Export /SourceServerName:SampleSQLServer.sample.net,1433 /SourceDatabaseName:SampleDatabase /TargetFile:"F:\Temp\SampleDatabase.bacpac"
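A sketch of wiring that command into a daily Windows scheduled task; the SqlPackage path and the run time are assumptions for your environment:
# Run the bacpac export every day at 02:00
$action = New-ScheduledTaskAction -Execute "C:\Program Files\Microsoft SQL Server\160\DAC\bin\SqlPackage.exe" `
    -Argument '/Action:Export /SourceServerName:SampleSQLServer.sample.net,1433 /SourceDatabaseName:SampleDatabase /TargetFile:"F:\Temp\SampleDatabase.bacpac"'
$trigger = New-ScheduledTaskTrigger -Daily -At 2am
Register-ScheduledTask -TaskName "DailyBacpacExport" -Action $action -Trigger $trigger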
Next upload the bacpac to the storage account using PowerShell.
Set-AzureStorageBlobContent -Container "mycontainer" -File "F:\Temp\SampleDatabase.bacpac"
Now you can import the bacpac to the Azure SQL Database logical server as a new database:
Get-AzureStorageKey -StorageAccountName "YourStorageAccountName"
$primarykey = (Get-AzureStorageKey -StorageAccountName "YourStorageAccountName").Primary
$StorageUri = (Get-AzureStorageBlob -Blob 'SampleDatabase.bacpac' -Container 'mycontainer').ICloudBlob.Uri.AbsoluteUri
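The snippet above gathers the storage key and blob URI but stops short of the import itself; here is a sketch of the final step using the AzureRM-era import cmdlet (target server name, credentials, and sizing are placeholders to adjust):
New-AzureRmSqlDatabaseImport -ResourceGroupName "test-rg" -ServerName "targetserver" `
    -DatabaseName "SampleDatabase" -StorageKeyType "StorageAccessKey" -StorageKey $primarykey `
    -StorageUri $StorageUri `
    -AdministratorLogin "youradmin" `
    -AdministratorLoginPassword (ConvertTo-SecureString "yourpassword" -AsPlainText -Force) `
    -Edition "Standard" -ServiceObjectiveName "S2" -DatabaseMaxSizeBytes 50GB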
In addition to Import/Export you can use Snapshot Replication or Transactional Replication. See Replication to SQL Database single and pooled databases.
You can also use SQL Data Sync; it runs several synchronizations per day, and you configure the interval yourself.
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-sync-data
Hybrid Data Synchronization: With Data Sync, you can keep data synchronized between your on-premises databases and Azure SQL databases to enable hybrid applications. This capability may appeal to customers who are considering moving to the cloud and would like to put some of their application in Azure.
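A sketch of creating a sync group with an hourly interval using the Az.Sql cmdlets; all resource names are placeholders, and the sync metadata database must already exist:
New-AzSqlSyncGroup -ResourceGroupName "test-rg" -ServerName "testsql" -DatabaseName "HubDb" `
    -Name "DailySync" -SyncDatabaseResourceGroupName "test-rg" -SyncDatabaseServerName "testsql" `
    -SyncDatabaseName "SyncMetadataDb" -ConflictResolutionPolicy "HubWin" -IntervalInSeconds 3600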
I have 100 dataflows, 50 pipelines, and their related datasets, variables, etc.
Now I want to use the Synapse service and move all my ADF pipelines and related objects into Synapse. My ADF is Git-configured.
Can we export them in one go?
To move objects from Data Factory to Synapse, we need to download the supported module files for ADF.
I tried migrating objects from ADF to a Synapse workspace using PowerShell, following the procedure below:
I connected to my Azure account with the command below:
Connect-AzAccount
I set my subscription with the command below:
Select-AzSubscription -SubscriptionName "<SubscriptionName>"
I created linked services for source and sink with the command below:
Set-AzSynapseLinkedService -WorkspaceName <synapseworkspace> -Name <linkedservicename> -DefinitionFile "<json file path>"
I created datasets for source and sink in the Synapse workspace with the command below:
Set-AzSynapseDataset -WorkspaceName <synapseworkspace> -Name <datasetname> -DefinitionFile "<json file path>"
I created the pipeline in the Synapse workspace with the command below:
Set-AzSynapsePipeline -WorkspaceName <synapseworkspace> -Name <pipelinename> -DefinitionFile "<json file path>"
They were created successfully in the Synapse workspace.
In your case, since you have a large amount of content in your data factory, you can use the Azure Data Factory to Synapse Analytics Migration Tool.
You can migrate Azure Data Factory pipelines, datasets, linked services, integration runtimes, and triggers to a Synapse workspace with the PowerShell script below:
.\importADFtoSynapseTool.ps1 -sourceADFResourceId "/subscriptions/<SubscriptionID>/resourcegroups/<ADFResourceGroupName>/providers/Microsoft.DataFactory/factories/<ADFName>" -destSynapseResourceId "/subscriptions/<SubscriptionID>/resourcegroups/<SynapseResourceGroupName>/providers/Microsoft.Synapse/workspaces/<SynapseName>" -TenantId <tenantId>
The script needs the following inputs:
Resource group name for Azure Data Factory
Resource group name for the Azure Synapse workspace
Subscription ID
Tenant ID (you can find this by clicking Azure Active Directory in the Azure portal)
For more clarification you can visit this.
As far as I know, there is no way to do this out of the box.
There is an option to download the object's JSON and then use PowerShell to recreate the objects in Synapse workspace.
I haven't tried that myself, but the process is explained here.
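A minimal sketch of that approach for a single pipeline, assuming the Az.DataFactory and Az.Synapse modules (resource names are placeholders, and the exported JSON may need trimming to match the definition-file format):
# Export the pipeline definition from ADF as JSON
Get-AzDataFactoryV2Pipeline -ResourceGroupName "adf-rg" -DataFactoryName "myadf" -Name "MyPipeline" |
    ConvertTo-Json -Depth 20 | Out-File "C:\temp\MyPipeline.json"
# Recreate it in the Synapse workspace from that file
Set-AzSynapsePipeline -WorkspaceName "mysynapse" -Name "MyPipeline" -DefinitionFile "C:\temp\MyPipeline.json"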
I created an Azure SQL Database and configured geo-replication to a second server in a different region. In the Azure Portal, I can click on either of the databases and see details about the regions being replicated to.
I want to use PowerShell to find this same information, but cannot find a cmdlet or property that exposes this information:
# Get database object
$database = Get-AzSqlDatabase -ResourceGroupName test-rg -ServerName testsql-eastus -DatabaseName TestDB
# Find if geo-replication is enabled?
The goal is to be able to pull all SQL databases in a subscription, and take different action on them depending if they have geo-replication enabled.
Please refer to the Get-AzSqlDatabaseFailoverGroup documentation:
Gets a specific Azure SQL Database Failover Group or lists the Failover Groups on a server. Either server in the Failover Group may be used to execute the command. The returned values will reflect the state of the specified server with respect to the Failover Group.
Example:
You can run Get-AzSqlDatabaseFailoverGroup -ResourceGroupName 'rg' -ServerName 'servername' to see whether the databases on an Azure SQL server have geo-replication configured. If no failover group name is returned, the database does not have geo-replication enabled.
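To cover your goal of checking every database in the subscription, here is a minimal sketch built on that cmdlet (it assumes geo-replication is configured through failover groups, as above):
$servers = Get-AzSqlServer
foreach ($server in $servers) {
    # Names of all databases covered by a failover group on this server
    $fogDatabases = Get-AzSqlDatabaseFailoverGroup -ResourceGroupName $server.ResourceGroupName -ServerName $server.ServerName |
        ForEach-Object { $_.DatabaseNames }
    Get-AzSqlDatabase -ResourceGroupName $server.ResourceGroupName -ServerName $server.ServerName |
        Where-Object { $_.DatabaseName -ne 'master' } |
        ForEach-Object {
            [pscustomobject]@{
                Server        = $server.ServerName
                Database      = $_.DatabaseName
                GeoReplicated = $fogDatabases -contains $_.DatabaseName
            }
        }
}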
I'm trying to deploy a SQL Server VM via ARM template and set backup in the same template.
I've managed to set up the VM and added the VM to backup but not the databases.
I've tried some of the quickstart templates on GitHub, but I can't find any that associate the SQL VM with a backup policy.
I want to enable auto backup on all databases and associate a backup policy.
Anyone have any example template?
Automated backup is a built-in feature of Azure SQL; you don't need to set it up manually, it happens automatically.
You configure the backup policy by changing the backup retention period, anywhere between 7 and 35 days.
For Azure SQL Managed instance:
PowerShell command:
Set-AzSqlDatabaseBackupShortTermRetentionPolicy -ResourceGroupName resourceGroup -ServerName testserver -DatabaseName testDatabase -RetentionDays 28
REST API:
Sample Request
PUT https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/resourceGroup/providers/Microsoft.Sql/servers/testserver/databases/testDatabase/backupShortTermRetentionPolicies/default?api-version=2017-10-01-preview
Request Body
{
  "properties": {
    "retentionDays": 28
  }
}
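A sketch of issuing that request from PowerShell, reusing the sample subscription and resource names above (Get-AzAccessToken is from Az.Accounts):
$token = (Get-AzAccessToken -ResourceUrl "https://management.azure.com/").Token
$uri = "https://management.azure.com/subscriptions/00000000-1111-2222-3333-444444444444/resourceGroups/resourceGroup/providers/Microsoft.Sql/servers/testserver/databases/testDatabase/backupShortTermRetentionPolicies/default?api-version=2017-10-01-preview"
Invoke-RestMethod -Method Put -Uri $uri -ContentType "application/json" `
    -Headers @{ Authorization = "Bearer $token" } `
    -Body '{"properties":{"retentionDays":28}}'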
Hope this helps.
I am working in Microsoft Azure, where I have a resource group for a test environment and another for a production environment; in both I have an Azure SQL Database server with its respective database.
I am creating an Automation Account runbook in PowerShell in another Microsoft Azure account (important note) that is responsible for "copying" the production database to test. I know the New-AzSqlDatabaseCopy command exists; however, it does not work with Hyperscale databases.
Is there an alternative to this command for Hyperscale? Or, from this second account, is it possible to create a .bacpac remotely with Azure PowerShell commands? Everything I have seen works within a single account, but the database account is different from the automation account for billing reasons.
Thank you in advance for your help and comments.
I already tried the New-AzureRmSqlDatabaseExport command, but it seems to work only within the same Azure account, and I can't specify an "Azure account for backup" and an "Azure account for storage". Am I right?
As Alberto Morillo says in his comment, New-AzSqlDatabaseCopy is currently not available for Azure SQL Hyperscale, at least at the moment of this answer.
So I tried New-AzureRmSqlDatabaseExport with two Azure accounts and it's totally possible. You need to log in with the Azure account of the source database using Connect-AzureRmAccount, then call New-AzureRmSqlDatabaseExport with the following parameters.
$exportParams = @{
    ResourceGroupName          = $RGName             # Resource group of the source database
    ServerName                 = $Server             # Server name of the source database
    DatabaseName               = $Database           # Name of the source database
    AdministratorLogin         = $User               # Administrator user of the source database
    AdministratorLoginPassword = $Pwd                # Password of the source database
    StorageKeyType             = "StorageAccessKey"  # Key type of the destination storage account (the one in the other Azure account)
    StorageKey                 = $StorageKey         # Key of the destination storage account (the one in the other Azure account)
    StorageUri                 = $StorageFileFullURI # Full file URI of the destination storage (the one in the other Azure account)
    # The format of the URI is:
    # https://contosostorageaccount.blob.core.windows.net/backupscontainer/backupdatabasefile.bacpac
}
New-AzureRmSqlDatabaseExport @exportParams
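The cmdlet returns an operation object whose status link you can poll until the export finishes; a short sketch reusing the splat above:
$export = New-AzureRmSqlDatabaseExport @exportParams
do {
    Start-Sleep -Seconds 10
    $status = Get-AzureRmSqlDatabaseImportExportStatus -OperationStatusLink $export.OperationStatusLink
} while ($status.Status -eq "InProgress")
Write-Output "Export finished with status: $($status.Status)"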
Unfortunately, this command is not enabled for Hyperscale, so I get the following error message:
New-AzureRmSqlDatabaseExport : 40822: This feature is not available for the selected database's edition (Hyperscale).
I used the same command with a database that was not Hyperscale and it worked perfectly.
Finally, I think I will have to perform the manual process for at least a few months, until Microsoft ships the update for Hyperscale.
Database copy is currently not available for Azure SQL Hyperscale but you may see it in public preview in a few months.
We have an application that uses Azure SQL for the database backend. Under normal load/conditions this database can successfully run on a Premium 1 plan. However, during the early morning hours we have jobs that run that increase database load. During these few hours we need to move to a Premium 3 plan. The cost of a Premium 3 is about 8 times more, so obviously we do not want to pay the costs of running on this plan 24/7.
Is it possible to autoscale the database up and down? Cloud services offer an easy way to scale the number of instances in the Azure Portal, however, nothing like this exists for Azure SQL databases. Can this be done programmatically with the Azure SDK? I have been unable to locate any documentation on this subject.
After digging through the articles in #ErikEJ's answer (Thanks!) I was able to find the following, which appears to be newly published with the release of the Elastic Scale preview:
Changing Database Service Tiers and Performance Levels
The following REST APIs are now newly available as well, which let you do pretty much whatever you want to your databases:
REST API Operations for Azure SQL Databases
And for my original question of scaling service tiers (ex. P1 -> P3 -> P1):
Update Database REST API
With these new developments I am going to assume it's only a matter of time before autoscaling is also available as a simple configuration in the Azure Portal, much like cloud services.
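For completeness, with the current Az PowerShell module the same scale operation is a single cmdlet call (a sketch; resource names are placeholders):
# Scale up to P3 before the nightly jobs, then back to P1 afterwards
Set-AzSqlDatabase -ResourceGroupName "test-rg" -ServerName "testsql" -DatabaseName "TestDB" `
    -Edition "Premium" -RequestedServiceObjectiveName "P3"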
Another way to do it is with Azure Automation, using the runbook below:
param
(
# Desired Azure SQL Database edition {Basic, Standard, Premium}
[parameter(Mandatory=$true)]
[string] $Edition,
# Desired performance level {Basic, S0, S1, S2, P1, P2, P3}
[parameter(Mandatory=$true)]
[string] $PerfLevel
)
inlinescript
{
# I only care about 1 DB so, I put it into variable asset and access from here
$SqlServerName = Get-AutomationVariable -Name 'SqlServerName'
$DatabaseName = Get-AutomationVariable -Name 'DatabaseName'
Write-Output "Begin vertical scaling script..."
# Establish credentials for Azure SQL Database server
$Servercredential = new-object System.Management.Automation.PSCredential("yourDBadmin", ("YourPassword" | ConvertTo-SecureString -asPlainText -Force))
# Create connection context for Azure SQL Database server
$CTX = New-AzureSqlDatabaseServerContext -ManageUrl "https://$SqlServerName.database.windows.net" -Credential $ServerCredential
# Get Azure SQL Database context
$Db = Get-AzureSqlDatabase $CTX -DatabaseName $DatabaseName
# Specify the specific performance level for the target $DatabaseName
$ServiceObjective = Get-AzureSqlDatabaseServiceObjective $CTX -ServiceObjectiveName "$Using:PerfLevel"
# Set the new edition/performance level
Set-AzureSqlDatabase $CTX -Database $Db -ServiceObjective $ServiceObjective -Edition $Using:Edition -Force
# Output final status message
Write-Output "Scaled the performance level of $DatabaseName to $Using:Edition - $Using:PerfLevel"
Write-Output "Completed vertical scale"
}
Ref:
Azure Vertically Scale Runbook
Set schedules for when you want to scale up/down.
For me, I used two schedules with input parameters: one for scaling up and another for scaling down, as sketched below.
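A sketch of linking the runbook to those two schedules (account, runbook, and schedule names are placeholders; the schedules themselves are created beforehand):
Register-AzAutomationScheduledRunbook -ResourceGroupName "rg" -AutomationAccountName "automation" `
    -RunbookName "Scale-SqlDatabase" -ScheduleName "ScaleUp-0500" `
    -Parameters @{ Edition = "Premium"; PerfLevel = "P3" }
Register-AzAutomationScheduledRunbook -ResourceGroupName "rg" -AutomationAccountName "automation" `
    -RunbookName "Scale-SqlDatabase" -ScheduleName "ScaleDown-0800" `
    -Parameters @{ Edition = "Premium"; PerfLevel = "P1" }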
Hope that helps.
Yes, that feature is available: Azure SQL Database Elastic Scale
https://learn.microsoft.com/en-gb/azure/sql-database/sql-database-elastic-scale-introduction
In some cases the easiest option might be to just run a SQL query, as described on MSDN.
For example:
ALTER DATABASE [database_name] MODIFY (EDITION = 'standard', SERVICE_OBJECTIVE = 'S3', MAXSIZE = 250 GB)
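If you want to run that statement on a schedule, a minimal sketch using Invoke-Sqlcmd from the SqlServer PowerShell module (server name and credentials are placeholders):
# Run the scale statement against the logical server's master database
Invoke-Sqlcmd -ServerInstance "yourserver.database.windows.net" -Database "master" `
    -Username "youradmin" -Password "yourpassword" `
    -Query "ALTER DATABASE [database_name] MODIFY (SERVICE_OBJECTIVE = 'S3')"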