How can I check queries that have run on an Azure server?

I seem to have queries running against my Azure SQL database at about 2 AM every morning that change column names and cause issues with table SELECTs.
Is there any way I can check what these queries are and where they are being run from within Azure?
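If Query Store is enabled on the database (it is on by default for Azure SQL Database), you can look up what ran around 2 AM from its catalog views. Below is a minimal sketch using Python and pyodbc; the connection string is a placeholder. Note that Query Store records what ran, not which login ran it; Azure SQL Auditing is the feature for the "who" part.

# Minimal sketch: list queries whose last execution fell in the last 24 hours,
# using Query Store catalog views (assumes Query Store is enabled on the DB).
import pyodbc

# Placeholder connection string -- adjust server, database and credentials.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

QUERY = """
SELECT TOP (50)
       q.query_id,
       q.last_execution_time,
       qt.query_sql_text
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt
  ON q.query_text_id = qt.query_text_id
WHERE q.last_execution_time >= DATEADD(HOUR, -24, SYSUTCDATETIME())
ORDER BY q.last_execution_time DESC;
"""

with pyodbc.connect(CONN_STR) as conn:
    for query_id, last_exec, sql_text in conn.execute(QUERY):
        print(query_id, last_exec, sql_text[:120])

Filtering on last_execution_time only catches the most recent run of each query; for a full execution history in a window, join through sys.query_store_plan and sys.query_store_runtime_stats instead.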

Related

Can Azure alert rules be used for sending Kusto query results at regular intervals?

I have a requirement where I have to send Kusto query results to different audiences at a regular interval.
My current approach is to set up an Azure Function which runs the query and shares the results with a mail service, which distributes them to the wider audience.
I was wondering whether I can leverage Azure alert rules for this task. I know we can set up custom log query alerts for Azure Data Explorer, but can they be used to distribute query results from one of the database's tables (in ADX)?
You can create Kusto queries and then use Azure alerts to send out results based on the query. Another way is to use Logic Apps, which can also run Kusto queries and then send the results wherever you need; that is probably the better fit here, since it isn't really an alert: if I understand you correctly, you just want to run a query on a schedule and distribute the result.
Choose whichever option suits you best and try it out; if you end up with specific issues, come back and ask specific questions and we will get you going.
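If you stick with the Azure Function / scheduled-script approach you already have, the query part with the azure-kusto-data Python SDK is roughly as below; the cluster URL, database and query are placeholders, and handing the formatted rows to a mail service is left out.

# Minimal sketch: run a Kusto query against ADX and build a plain-text body
# that can be handed to whatever mail service you already use.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

CLUSTER = "https://<your-cluster>.<region>.kusto.windows.net"   # placeholder
DATABASE = "<your-database>"                                    # placeholder
QUERY = "MyTable | where Timestamp > ago(1d) | summarize Count = count() by Category"

# Uses the identity from `az login`; other authentication builders exist.
kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(CLUSTER)
client = KustoClient(kcsb)

response = client.execute(DATABASE, QUERY)
rows = response.primary_results[0]

body = "\n".join(str(row) for row in rows)   # crude formatting, good enough for mail
print(body)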

Is there a way to purge all documents from a Cosmos DB container using the Azure Portal?

I'm developing an app that uses a CosmosDB container. Every so often, I want to purge one of the testing containers. Using the Azure Portal, I drop the container and create a new one with the same name and attributes.
Simple enough, but this feels unnecessary. Every so often I'll type something wrong. Is there a way to delete all documents in a container, without the need to recreate it, via the web Portal? It feels as if this might exist in a menu somewhere and I'm just unable to find it.
You can set the time to live (TTL) of the container to something like 1 second. It will take some time, depending on the number of documents and the throughput of your Cosmos DB account.
Deletion by TTL only uses leftover RU/s, so it will not affect your application if it is live.
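If you would rather script it than click through the portal, the same TTL trick can be applied with the azure-cosmos Python SDK, roughly as below; the account URL, key, names and partition key path are placeholders, and replace_container must be given the container's existing partition key (it cannot be changed).

# Minimal sketch: set a 1-second default TTL so Cosmos DB purges all documents
# in the background, then remove the TTL once the container is empty.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("https://<your-account>.documents.azure.com:443/", credential="<your-key>")
database = client.get_database_client("<your-database>")

# Must match the container's existing partition key definition.
pk = PartitionKey(path="/id")

# Expire every document after 1 second; deletion uses leftover RU/s.
database.replace_container("<your-container>", partition_key=pk, default_ttl=1)

# ...later, once the container is empty, replace again without a TTL to switch it off:
database.replace_container("<your-container>", partition_key=pk)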

Default Cosmos DB Metrics in Azure Monitor

I was trying to configure the default Cosmos DB metrics in Azure Monitor to get requests, throughput, and other related info, as described in the documentation.
One issue I found is that if I have a collection named test in a database in my Cosmos DB account, I sometimes see two collections under that database in Azure Monitor: Test and test.
This is intermittent, and if I change the time range it sometimes starts showing only one collection. I have checked that there is no collection named "Test" (with a capital T) in my database.
The reported results are also split between the two metrics.
I could not find anything in the documentation about this.
Is this something on Azure's side, or is something wrong with my configuration?
(Screenshot attached.)

Azure Database Migration Service stuck at "log shipping in progress"

I have a super small on-premises SQL Server database (no data, just a few single-column tables; I'm just testing) that I'm trying to migrate online to a SQL Managed Instance.
As far as I know I've configured everything as it should be - backup files are present in the file share, and the DMS is set up and can see both the SQL Server and the Managed Instance. However, it doesn't restore anything. It's stuck saying "log shipping in progress":
If I look at the managed instance itself, I can see a database has been created, and is currently in "Restoring" status.
My question is: how can I resolve this?
Maybe there are other logs I can look at, or there's some other permissions thing I don't know about, or something else?
I've tried creating a new project from scratch, but it had the same issue. And I've tried waiting... but I don't think it's working. As I mentioned, this is a DB with only a few tables (maybe 4), a single column in each table, no data at all.
Looking at your image, it looks like there is no issue with DMS connecting to the backup location and uploading the backup and log files.
What is interesting is that the fields for the last backup file applied and the last applied LSN are empty, which makes me think there is some issue on the SQL MI side.
I would recommend opening a case with Microsoft Support on this.
One other thing you can try is a user-initiated manual failover, which fails the instance over to the secondary node; then run the DMS job again.
https://techcommunity.microsoft.com/t5/azure-sql/user-initiated-manual-failover-on-sql-managed-instance/ba-p/1538803
Also take a look at whether anything is blocking on the SQL MI.
You can use sp_whoisactive; the latest version works on Azure SQL MI:
https://github.com/amachanic/sp_whoisactive/releases/tag/v11.35
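If you prefer a quick one-off check over installing a stored procedure, a sketch like the following (Python + pyodbc against the managed instance, placeholder connection string, requires VIEW SERVER STATE) lists requests that are currently blocked and by whom.

# Minimal sketch: list blocked sessions on the SQL Managed Instance.
import pyodbc

MI_CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-mi>.<dns-zone>.database.windows.net,1433;"
    "Database=master;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

BLOCKING_QUERY = """
SELECT r.session_id,
       r.blocking_session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       t.text AS current_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
"""

with pyodbc.connect(MI_CONN_STR) as conn:
    cursor = conn.execute(BLOCKING_QUERY)
    columns = [col[0] for col in cursor.description]
    for row in cursor.fetchall():
        print(dict(zip(columns, row)))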
You need to assign permissions to the service principal running the migration service at the subscription level:
az role assignment create --assignee [YOUR SERVICE PRINCIPAL] --role Contributor --scope /subscriptions/[YOUR SUBSCRIPTION ID]

Azure seamless upgrade when database schema changes

Let's say I have production and staging deployments, each using its own (SQL Azure) database. If the schema in staging has changed and needs to be deployed to production, is there a defined way of upgrading the production database (without downtime)?
E.g. if I swap VIP staging <-> production (and at the same time somehow automate changing the connection strings), what is the best process for automating the upgrade of the SQL Azure database?
My thought would be to spot the environment change in RoleEnvironmentChanging (though I'm not sure a VIP swap even fires RoleEnvironmentChanging) and run the SQL script against the to-be-production database (i.e. prod) at that point; however, I need to make sure the script runs only once, and there will be multiple instances transitioning.
So you have a production deployment with its own SQL Azure database and a staging deployment with its own SQL Azure database. In this situation both applications have connection strings pointing to two different databases.
Your first requirement is to change the database schema on the fly when you swap the deployments, and I have the following concern with that design:
If you write any code inside the role to perform a "once and only once" action, there is no guarantee that it will happen only once. It can happen multiple times, depending on several scenarios, such as:
1.1 Whenever your VM is reimaged by the system, this code will do exactly what it did during the last reimage.
1.2 You might try to prevent it from running at role start or VM start via some registry method or an external key, but there is no foolproof mechanism to guarantee that.
Because of this, I would suggest that when you are ready to swap your deployments you:
2.1 Run the script to update the production SQL Azure schema (this has no impact on the application itself, since it is not touched; while your database schema is being updated, you will know best how that impacts your application).
2.2 Change the configuration in the staging deployment to point to the production SQL Azure database (this will not cause any production application downtime at all).
2.3 Swap the deployments (this will also have no application downtime).
So even when you manually update the DB schema and then swap the deployments, there is no significant downtime beyond the time the DB takes to apply the schema update.
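If you do still want the upgrade to run from code around the swap, the usual way to cope with the "once and only once" concern above is to make the script idempotent and guard it with an application lock plus a version table. A rough sketch in Python follows; the SchemaVersion table, TARGET_VERSION and the example ALTER are all hypothetical.

# Minimal sketch: run a schema upgrade at most once, even if several role
# instances attempt it concurrently or the code re-runs after a reimage.
import pyodbc

CONN_STR = "<your SQL Azure connection string>"   # placeholder
TARGET_VERSION = 2                                # hypothetical target schema version

conn = pyodbc.connect(CONN_STR, autocommit=False)
cur = conn.cursor()

# Serialize concurrent upgraders with an app lock held for the transaction
# (a negative return value from sp_getapplock would mean the lock timed out).
cur.execute(
    "EXEC sp_getapplock @Resource = 'schema_upgrade', "
    "@LockMode = 'Exclusive', @LockTimeout = 60000;"
)

cur.execute("SELECT ISNULL(MAX(Version), 0) FROM dbo.SchemaVersion;")
current_version = cur.fetchone()[0]

if current_version < TARGET_VERSION:
    # Example DDL only -- replace with the real upgrade script.
    cur.execute("ALTER TABLE dbo.Orders ADD Notes nvarchar(400) NULL;")
    cur.execute("INSERT INTO dbo.SchemaVersion (Version) VALUES (?);", TARGET_VERSION)

conn.commit()   # also releases the transaction-scoped app lock
conn.close()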
I have been looking all over the place for best practices on this and have found none. So far this is what I do:
Deploy to staging (Production is already running)
Copy an app_offline.htm file to the web root on production. This blocks users from using the application, and therefore blocks changes to the database. (I am using only one instance.)
Back up the database.
Run DDL, DML and SP scripts. This updates the production database to the latest schema.
Test application on Staging.
Swap VIP. This brings the application back online since the app_offline.htm file is not present on Staging (new Production).
If something goes wrong, swap VIP again, restore database and delete app_offline.htm.
With this approach I have roughly 5 minutes of downtime (my database is small), which is better than waiting for the VM to be created while users get errors.
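Since native BACKUP DATABASE isn't available in SQL Azure, the "back up the database" step is usually done by creating a database copy on the same logical server, e.g. as in this sketch (database names and connection string are placeholders; the statement has to run against the server's master database, outside a transaction):

# Minimal sketch: take a copy of the production database before running the upgrade scripts.
import pyodbc

MASTER_CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=master;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)

# CREATE DATABASE ... AS COPY OF cannot run inside a user transaction, hence autocommit=True.
with pyodbc.connect(MASTER_CONN_STR, autocommit=True) as conn:
    conn.execute("CREATE DATABASE MyAppDb_PreUpgrade AS COPY OF MyAppDb;")
    # The copy is asynchronous; poll sys.dm_database_copies until it completes
    # before running the DDL/DML scripts.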
