Troubleshooting query performance SSMS vs from application [closed] - azure

New to Azure, but I'm trying to track down where a performance issue is occurring. An application in Azure performs a SQL query (against an Azure SQL database) via a third-party widget, and this often takes a very long time and/or eventually times out. The same query, when run via SSMS (against the same Azure SQL instance), returns within a few seconds. While poking around in the database portal, just glancing at the overview section, it appears we might be hitting our limit in terms of DTUs, but then it comes back down. That would explain why I see intermittent timeouts when the app tries to run this query; however, this never seems to happen when running (what I assume is) the same query via SSMS. What I'd like to see is the actual query the app tried executing at the time the timeout occurred, as well as other things the app was doing.
Questions:
In the Azure SQL Database portal, where can I find individual queries that have just run, or should I trace that via Application Insights from the app's portal? I see the database portal has "Query Performance Insight", but this seems to show only the highest-ranking CPU/IO queries. I want to see an individual query.
Is there a way from SSMS to see (or calculate) the DTU cost against an Azure SQL database?
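For what it's worth, both of these can be approximated with plain T-SQL from SSMS. A minimal sketch (assuming Query Store is enabled, which it is by default on Azure SQL Database; DTU utilization is reported as resource percentages rather than a per-query cost):

    -- Recent resource usage for this database (sampled every ~15 seconds, kept for roughly an hour).
    -- DTU utilization is effectively the highest of the individual resource percentages.
    SELECT  end_time,
            avg_cpu_percent,
            avg_data_io_percent,
            avg_log_write_percent
    FROM    sys.dm_db_resource_stats
    ORDER BY end_time DESC;

    -- Individual query texts and recent runtime stats from Query Store.
    SELECT TOP (50)
            qt.query_sql_text,
            rs.count_executions,
            rs.avg_duration / 1000.0 AS avg_duration_ms,
            rs.last_execution_time
    FROM    sys.query_store_query_text AS qt
    JOIN    sys.query_store_query         AS q  ON q.query_text_id = qt.query_text_id
    JOIN    sys.query_store_plan          AS p  ON p.query_id      = q.query_id
    JOIN    sys.query_store_runtime_stats AS rs ON rs.plan_id      = p.plan_id
    ORDER BY rs.last_execution_time DESC;

    -- What is executing right now (useful while a timeout is actually happening).
    SELECT  r.session_id,
            r.status,
            r.wait_type,
            t.text AS query_text
    FROM    sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE   r.session_id <> @@SPID;

Comparing the query text captured this way with what you run manually can also expose differences in parameters or SET options, which is a common explanation for "fast in SSMS, slow from the application".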

Related

Implementing distributed tracing in Azure [closed]

We are building integrations in Azure using a combination of Logic Apps, APIs, and Azure Functions. We have requirements for end-to-end tracking of transactions from source to destination, i.e. distributed tracing. We need to be able to track custom fields, such as orderId. Any advice on how best to achieve this, or pointers to articles, samples, and videos, is highly appreciated.
I think you can consider using Application Insights.
It has both code-less and code-based modes, and can automatically track requests, dependencies, etc. You can also track any custom fields using its built-in methods.
Azure Functions are easy to integrate with Application Insights; see here for more details.
For a web API, you can easily use the built-in methods, or monitor it with the code-based or code-less approach.
I have done a bit more research into this. I believe using Azure Monitor is the way to go, as described here: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/logic-apps/monitor-logic-apps-log-analytics.md. What's outlined there is really good, as it explains the steps required to set up Azure Monitor. Azure Monitor, in combination with what's described in the following article on end-to-end correlation with custom properties, should give me what I need: https://yourazurecoach.com/2018/08/05/end-to-end-correlation-across-logic-apps/

Best way for monitoring AKS [closed]

I need to find and compare the best solution for monitoring AKS.
Does any comparison between different products exist? I can't find any link.
I need to compare pricing, functionality, etc.
We tried Log Analytics, but it is very expensive.
Product recommendations aren't something you will get on Stack Overflow, but I would recommend having a look at the OSS stack for properly monitoring AKS and Kubernetes in general. This solution will work on any Kubernetes cluster (AKS/EKS/GKE/bare metal).
Start with the Prometheus Operator; this will bring in Grafana, Prometheus, and Alertmanager, plus a set of default dashboards and alerts for your Kubernetes cluster.
https://github.com/helm/charts/tree/master/stable/prometheus-operator
You even get monitoring for the control plane:
kube-apiserver
kube-scheduler
kube-controller-manager
etcd
kube-dns/coredns
kube-proxy
For better storage (since Prometheus is only meant to keep a short retention window), have a look at configuring your stack with Thanos: https://thanos.io
This will let you extend your metric retention almost indefinitely.
As far as vendors go, a lot of them will end up at a similar price once you start dealing with them. Some rely on their own agents being installed, while others rely on Prometheus and kube-state-metrics being installed.
While metrics are great, you should also give your users access to traces; this helps identify the flows and bottlenecks of the different sessions.
https://www.jaegertracing.io/
https://www.jaegertracing.io/docs/1.18/operator/
https://github.com/jaegertracing/helm-charts
Finally, for logs and log indexing, the ELK stack is your go-to solution.
https://github.com/elastic/cloud-on-k8s
The Elastic team has been working on a good operator to facilitate managing an ELK cluster on Kubernetes.

reading sql server log files (ldf) with spark [closed]

This is probably far-fetched, but... can Spark, or any advanced "ETL" technology you know of, connect directly to SQL Server's log file (the .ldf) and extract its data?
The agenda is to get SQL Server's real-time operational data without replicating the whole database first (and without selecting directly from it).
Appreciate your thoughts!
Rea
To answer your question, I have never heard of any tech that reads an LDF directly, but there are several products on the market that can "link-clone" a database almost instantly by using some internal tricks. Keep in mind that the data is not copied with these tools, but they allow instant access for use cases like yours.
There may be some free ways to do this, especially using cloud functions, or maybe the linked-clone features that virtual machines offer, but the only ones I know of at this time are paid products such as Dell EMC, Redgate, and Windocks.
The easiest ones to try that are not cloud-based are:
Red Gate SQL Clone, with a 14-day free trial:
Red Gate SQL Clone Link
Windocks.com (this is free for some cases, but harder to get started with)
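As an aside, SQL Server itself can surface log records for an online database through the undocumented fn_dblog table-valued function. It reads the live transaction log rather than a detached .ldf file and is unsupported for production use, but it gives a feel for what the log contains; a minimal sketch:

    -- Undocumented/unsupported: dump recent log records for the current database.
    -- fn_dblog(start_lsn, end_lsn); NULL, NULL means the whole active log.
    SELECT  [Current LSN],
            Operation,
            Context,
            [Transaction ID],
            AllocUnitName
    FROM    fn_dblog(NULL, NULL)
    WHERE   Operation IN ('LOP_INSERT_ROWS', 'LOP_MODIFY_ROW', 'LOP_DELETE_ROWS');

Note that this still goes through the database engine, so it does not avoid touching the server.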

Many requests in Azure between web sites [closed]

Hi, what is the correct way to get two different web sites to talk to each other within Azure?
Today we tried to just make them talk through their REST APIs, but after 1000 requests we got locked out of the web site our calling web site was calling.
We know we can use a service bus to make them talk, but we would really like to be able to use the REST APIs.
Are there limitations on how much one web site can call another web site within Azure?
There are several limits on websites, depending on what tier you are using (Free, Shared, Basic, Standard).
With Free and Shared, you are limited in CPU usage (60 minutes/day and 240 minutes/day respectively), but also to 2.5 minutes per 5-minute window, so when doing a lot of requests without a pause you will run into this.
Another restriction on free websites is the data limit of 165 MB.
With Basic and Standard you have your own machine, so these restrictions don't apply.
http://azure.microsoft.com/nl-nl/documentation/articles/azure-subscription-service-limits/#websiteslimits

SharePoint availability [closed]

I want to create a SharePoint Server setup that allows applications to be highly available. Say we have a portal in SharePoint and I want it to always be available. I know it has to do with WFEs. Can someone point me to an article or the architecture that needs to be set up for this?
Having multiple WFEs (Web Front Ends) will make the web part of your SharePoint more reliable: if one goes down, you can have your load balancer stop sending requests to it. There is no way to ensure 100% uptime; reliability is a combination of having redundancy (in hardware and services), monitoring, 24x7 staff to fix problems, etc.
Some things to look at:
Plan for Redundancy
http://technet.microsoft.com/en-us/library/cc263044.aspx
Plan for Availability
http://technet.microsoft.com/en-us/library/cc748832.aspx
There are third-party products that can help with fail-over, but I haven't used one that I can recommend.
See Lou's links. You can have redundant WFEs, query servers, and application servers as well as cluster your database.
Note that you cannot have a redundant index server unless you have two SSPs that basically index the same content. The query servers get the index replicated to them, so if the index server goes down you can still perform queries; the index will just not be updated until the index server comes back online. If you can't get it back online, you will need to rebuild your index (full crawls).
