We are using Azure DevOps to recreate our demo environment. Within the DevOps deployment we have an Azure PowerShell task that copies our production Azure SQL database to a "demo" database on the same server as the production database.
We first search for the databases on the server and if the "demo" database exists we delete it:
Remove-AzSqlDatabase -ResourceGroupName prdResource -ServerName prdServer -DatabaseName demoDb
Then we copy the prod db to the demo db:
New-AzSqlDatabaseCopy -ResourceGroupName prdResource -ServerName prdServer -DatabaseName prodDb -CopyDatabaseName demoDb
Finally we set the service level on the demoDb:
Set-AzSqlDatabase -ResourceGroupName prdResource -ServerName prdServer -DatabaseName demoDb -Edition "Standard" -RequestedServiceObjectiveName "S4"
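The three steps above can be combined into one script. This is a sketch only, assuming the Az.Sql module is installed and the pipeline's service connection is already authenticated; the resource group, server, and database names are the placeholders from the question:

```powershell
$rg     = "prdResource"
$server = "prdServer"

# Delete the demo database only if it already exists on the server
$demo = Get-AzSqlDatabase -ResourceGroupName $rg -ServerName $server |
        Where-Object { $_.DatabaseName -eq "demoDb" }
if ($demo) {
    Remove-AzSqlDatabase -ResourceGroupName $rg -ServerName $server -DatabaseName "demoDb"
}

# Copy prod to demo on the same server
New-AzSqlDatabaseCopy -ResourceGroupName $rg -ServerName $server `
    -DatabaseName "prodDb" -CopyDatabaseName "demoDb"

# Scale the copy down to Standard S4
Set-AzSqlDatabase -ResourceGroupName $rg -ServerName $server `
    -DatabaseName "demoDb" -Edition "Standard" -RequestedServiceObjectiveName "S4"
```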
This all works fine and the demo db is created correctly with the appropriate service level. The issue is that our Azure production web app, which is connected to the prod database, then struggles with performance. Calls that took ~2 seconds just prior to the copy now take 30+ seconds. We found that restarting the web app clears the issue.
Just wondering why the copy command is affecting the performance of the web app? Are there other settings we should be using with the copy command? We have run this process several times and get the same performance issues each time.
From our understanding this process should not have any negative side effects on the prod db; is that a correct assumption? Is there any way of fixing the issue without having to restart the web app?
Scale out your SQL database tier and locate the web app and database in the same region.
These two changes resulted in a massive performance increase.
Also, you could refer to this article to troubleshoot Azure SQL Database performance issues with Intelligent Insights.
The DTUs really don't seem to be the issue, as they don't go above 20%. We have a DevOps deployment in place that runs all the tasks, scheduled every Saturday at 1:00 AM. Here is a screenshot of the DTUs for that database during that timeframe:
Also the DB and the WebApp are both in the East US Region so that also should not be the issue.
Again, restarting the web app clears up the issue, so that points to it not being a DB/DTU issue.
Related
We have some databases (10 databases) in dev and test environments on Azure SQL. We would like to be able to sync the data between environments using Azure Pipelines.
The Schema changes happen automagically with entity framework migrations which is fine.
For syncing data, we've created a data compare in Visual Studio that we use to sync the data on demand. Now we would like to automate this process (syncing data).
Is there an existing task that we can add to the pipeline to run this data compare and subsequent sync?
You can do that with SQL Data Sync for Azure.
And what it does:
SQL Data Sync is a service built on Azure SQL Database that lets you synchronize the data you select bi-directionally across multiple databases, both on-premises and in the cloud.
First you need to:
create a sync group on your prod database
add sync members (here you will use your test database)
then configure the sync group
What is cool about it is that you can select exactly what should be synced:
You can then trigger that from Azure Pipelines by calling
Start-AzSqlSyncGroupSync -ResourceGroupName $resourceGroupName -ServerName $serverName -DatabaseName $databaseName -SyncGroupName $syncGroupName
from an Azure PowerShell task. If you define a cron schedule, your data will stay up to date.
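The setup steps listed above can also be scripted instead of done in the portal. The following is a hedged sketch using the Az.Sql sync cmdlets; the group name, member name, credential, and server/database names are made-up placeholders, and the exact parameter set should be checked against the cmdlet documentation:

```powershell
$rg     = "rg01"
$server = "server01"

# Create the sync group on the hub (prod) database; the sync metadata
# database is assumed to live on the same server here.
New-AzSqlSyncGroup -ResourceGroupName $rg -ServerName $server `
    -DatabaseName "proddb" -Name "demo-sync" `
    -SyncDatabaseResourceGroupName $rg -SyncDatabaseServerName $server `
    -SyncDatabaseName "syncmetadata" `
    -ConflictResolutionPolicy "HubWin" -IntervalInSeconds 3600

# Add the test database as a sync member
New-AzSqlSyncMember -ResourceGroupName $rg -ServerName $server `
    -DatabaseName "proddb" -SyncGroupName "demo-sync" -Name "test-member" `
    -MemberDatabaseType "AzureSqlDatabase" -SyncDirection "OneWayHubToMember" `
    -MemberServerName "server01.database.windows.net" -MemberDatabaseName "testdb" `
    -MemberDatabaseCredential (Get-Credential)

# Kick off a sync run (this is what the pipeline task would call)
Start-AzSqlSyncGroupSync -ResourceGroupName $rg -ServerName $server `
    -DatabaseName "proddb" -SyncGroupName "demo-sync"
```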
I'm trying to configure an Azure pipeline where I create a copy of a production database to create a "pre-prod" environment.
After creating that database I need to run some queries against the freshly created database. The problem is the database is not available right away. As the process is automatic, I need to know how long to wait.
I put in a wait step of 5 minutes, but sometimes it is not enough.
Thanks in advance
How about using a simple check of DB availability through the Az module or the CLI?
do {
    # Wait between checks so we don't hammer the API
    Start-Sleep -Seconds 120
    $status = "Offline"
    try {
        $status = (Get-AzSqlDatabase -ResourceGroupName "resourcegroup01" -ServerName "server01" -DatabaseName "MyDataBase").Status
    }
    catch {
        "Database not available yet"
    }
} while ($status -ne "Online")
You can try to use the Azure portal's query editor to query an Azure SQL Database to check DB availability.
The query editor is a tool in the Azure portal for running SQL queries against your database in Azure SQL Database or data warehouse in Azure Synapse Analytics.
Note: The query editor uses ports 443 and 1443 to communicate. Ensure you have enabled outbound HTTPS traffic on these ports. You also need to add your outbound IP address to the server's allowed firewall rules to access your databases and data warehouses.
For more query editor considerations, please refer to this.
We've got a rogue process running somewhere that executes queries against a test database we have hosted on Azure SQL. I'm trying to find this process so I can kill it. There are a number of app servers and development PCs where it could be hiding, and I haven't been able to track it down by looking at processes running on these machines by hand.
I can use the Azure Data Studio Profiler extension to get some Extended Events logging from the database. From there, I can see the text of the queries being run, the Application Name, and the ClientProcessID.
Sample data from profiler
I can't seem to use any of this data to find the hostname or IP address of the server where these queries originate. Can I determine this using the data available in the Azure Data Studio Profiler? Or is there some other way to work backward to find it? Since this is hosted on Azure, I can't use the SQL Server Management Studio Profiler, which I think would give me the hostname right away.
Azure SQL Auditing should provide you with the application name, login name, and client IP address that executed the query. Please read this article to enable auditing, and look for the event type BATCH_COMPLETED.
Set-AzureRmSqlDatabaseAuditing `
-State Enabled `
-ResourceGroupName "rgYourResourceGroup" `
-ServerName "yourservername" `
-StorageAccountName "StorageAccountForAuditing" `
-DatabaseName "YourDatabaseName"
I have used the following powershell script to delete a database:
Remove-AzureRmSqlDatabase -ServerName $server -ResourceGroupName $rgname -DatabaseName $dbname
(first setting the variables)
and have tried using the Azure Portal
The portal indicates a success in deletion, as do the activity logs; however, the resource is not being deleted.
Screenshot of activity log:
The deletes (on a number of occasions, after the db comes back) show as successful; however, there is an audit policy that seems to be doing something.
There are no locks on the resource group.
UPDATE:
I have deleted the database from SSMS, and it is not showing there or in the portal anymore.
(I will wait to see if it comes back, as it did when deleting via the portal and PowerShell.)
UPDATE 2:
The database is now back, so it has now been deleted three ways: via the portal, PowerShell, and SSMS.
It turns out the web application uses EF migrations which is recreating the database.
Note: The bigger issue is that the database is created on a much higher, and much more expensive tier.
Do you happen to have a rogue policy somewhere? It seems something is running a policy with a deployIfNotExists effect on the resource. Without access to your environment, there's not much more I can recommend.
Check the documentation here: https://learn.microsoft.com/en-us/azure/governance/policy/concepts/effects#audit
We wish to implement CI using a TFS / Visual Studio Online-hosted build server. To run our unit/integration tests the build server needs to connect to a SQL Azure DB.
We've hit a stumbling block here because SQL Azure DBs use an IP address whitelist.
My understanding is that the hosted build agent is a VM which is spun-up on demand, which almost certainly means that we can't determine its IP address beforehand, or guarantee that it will be the same for each build agent.
So how can we have our hosted build agent run tests which connect to our IP-address-whitelisted SQL DB? Is it possible to programmatically add an IP to the whitelist and then remove it at the end of testing?
After a little research I found this (sample uses PowerShell):
Log in to your Azure account
Select the relevant subscription
Then:
New-AzureRmSqlServerFirewallRule -EndIpAddress 1.0.0.1 -FirewallRuleName test1 -ResourceGroupName testrg-11 -ServerName mytestserver111 -StartIpAddress 1.0.0.0
To remove it:
Remove-AzureRmSqlServerFirewallRule -FirewallRuleName test1 -ServerName mytestserver111 -ResourceGroupName testrg-11 -Force
Found in PowerShell ISE for Windows. Alternatively, there should be something similar in the cross-platform CLI if you are not running on a Windows machine.
There is an Azure PowerShell task/step from which you can call Azure PowerShell cmdlets (e.g. New-AzureRmSqlServerFirewallRule).
On the other hand, you can manage server-level firewall rules through the REST API, so you can build a custom build/release task that gets the necessary information (e.g. authentication) from the selected Azure service endpoint, then calls the REST API to add or remove firewall rules.
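The REST approach can be sketched roughly as follows. This is an illustration only: the subscription ID, resource group, server, and rule names are placeholders, the API version is an assumption that should be checked against the current Microsoft.Sql reference, and $token is assumed to be an ARM bearer token (for example from (Get-AzAccessToken).Token):

```powershell
$sub    = "00000000-0000-0000-0000-000000000000"   # placeholder subscription ID
$rg     = "testrg-11"
$server = "mytestserver111"
$rule   = "build-agent-rule"

# ARM resource URI for a server-level firewall rule
$uri = "https://management.azure.com/subscriptions/$sub/resourceGroups/$rg" +
       "/providers/Microsoft.Sql/servers/$server/firewallRules/$rule" +
       "?api-version=2021-11-01"

$body = @{ properties = @{ startIpAddress = "1.0.0.0"; endIpAddress = "1.0.0.1" } } |
        ConvertTo-Json

# PUT creates or updates the rule before the tests run
Invoke-RestMethod -Method Put -Uri $uri -Body $body `
    -Headers @{ Authorization = "Bearer $token" } -ContentType "application/json"

# DELETE on the same URI removes the rule again when testing is done
Invoke-RestMethod -Method Delete -Uri $uri `
    -Headers @{ Authorization = "Bearer $token" }
```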
The SqlAzureDacpacDeployment task has the source code to add firewall rules through REST API that you can refer to. Part SqlAzureDacpacDeployment source code, VstsAzureRestHelpers_.psm1 source code.
There is now an "Azure SQL InlineSqlTask" build task which you can use to automatically set firewall rules on the Azure server. Just make sure "Delete Rule After Task Ends" is not checked, and add some dummy query like "select top 1 * from ..." as the "Inline SQL Script".