I have an Azure-hosted API that accepts logging/tracing data from multiple customer applications for the purpose of identifying/aggregating issues before the customer raises them in a support call. The API writes logging data to Azure Table storage. This works great with powerbi.com etc. for regular proactive monitoring, however...
As a "2.0" enhancement, I want to set up mobile notifications when defined conditions are met (e.g. 2+ table records created with a "severity" attribute = 1 within the space of 60 minutes, maybe where "ProjectType" = Mine). I don't want to send notifications on each entry to the table but rather trigger a notification on aggregated entries within a rolling time frame.
Is there any Azure service that provides this without my having to create a custom cron job that queries table storage every few minutes/hours (and therefore increases PAYG costs)? And would it necessitate moving away from Azure Table Storage to Azure SQL?
I would investigate Azure Stream Analytics and see if it meets your needs. It provides a SQL-like query dialect, including things like tumbling windows (how often an event occurred within a given time frame). Here's a nice example.
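To make that concrete, a Stream Analytics query for the scenario in the question might look something like the sketch below. The input/output names and the EventTime, Severity and ProjectType columns are assumptions based on the question, not anything defined above:

    -- Emit an event whenever 2+ severity-1 records arrive for a
    -- project within a 60-minute window.
    SELECT
        ProjectType,
        COUNT(*) AS SevereEntries,
        System.Timestamp() AS WindowEnd
    INTO
        [notification-output]  -- e.g. a queue or Event Hub feeding your notification mechanism
    FROM
        [logging-input] TIMESTAMP BY EventTime
    WHERE
        Severity = 1
    GROUP BY
        ProjectType,
        TumblingWindow(minute, 60)
    HAVING
        COUNT(*) >= 2

Note that a tumbling window uses fixed, non-overlapping intervals; for the "rolling time frame" mentioned in the question, Stream Analytics also offers SlidingWindow(minute, 60), which considers the preceding 60 minutes relative to each event.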
A lower-tech solution would be to run a WebJob within an App Service. You could run it on a Free tier to keep costs down if that is a concern. SQL Server would give you more flexibility in your queries compared to table storage.
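If you did go the WebJob-plus-SQL route, the periodic check itself is simple. A sketch in T-SQL, with a hypothetical table and columns matching the question's example condition:

    -- Projects with 2 or more severity-1 log entries in the last 60 minutes.
    SELECT ProjectType, COUNT(*) AS SevereEntries
    FROM dbo.LogEntries
    WHERE Severity = 1
      AND CreatedUtc >= DATEADD(MINUTE, -60, SYSUTCDATETIME())
    GROUP BY ProjectType
    HAVING COUNT(*) >= 2;

The WebJob would run this on a schedule and push a notification for any rows returned.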
I have a SQL database in Azure of the General Purpose type, a really basic one.
It is not used frequently, only sometimes when I test things on my website, which is why I haven't deleted the resource. Recently, I noticed that the database management cost increased, even though I didn't use the database at that time.
Is there any way to investigate what caused these spikes on the diagram (Nov 22 - Nov 28)? I tried to find information about the operations that were executed at that time, with no success. Maybe there are some kind of logs in Azure that can help me with this?
Please consider opening the Azure portal and accessing your Azure SQL database; on the left panel you will see "Query Performance Insight" — use that option. Use the sliders or zoom icons to change the observed interval. Read the step-by-step procedure here.
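If Query Performance Insight doesn't show enough detail, you can also query the Query Store directly with T-SQL. A rough sketch (the date range below is a placeholder for the period you are investigating):

    -- Top queries by total CPU time within a given interval,
    -- taken from the Query Store's runtime statistics.
    SELECT TOP (10)
        qt.query_sql_text,
        SUM(rs.count_executions) AS executions,
        SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
    FROM sys.query_store_runtime_stats AS rs
    JOIN sys.query_store_plan AS p ON p.plan_id = rs.plan_id
    JOIN sys.query_store_query AS q ON q.query_id = p.query_id
    JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
    JOIN sys.query_store_runtime_stats_interval AS i
        ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
    WHERE i.start_time >= '2023-11-22'   -- adjust to the spike window
      AND i.end_time   <  '2023-11-29'
    GROUP BY qt.query_sql_text
    ORDER BY total_cpu_time DESC;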
While you investigate this issue, please also consider the following possible causes:
Make sure you did not temporarily enable a monitoring tool, or a tool that checks that your web site is up and running.
Did you temporarily enable a feature in the Azure portal for Azure SQL? Azure SQL Database features like geo-replication, failover groups, long-term backup retention, Azure SQL Data Sync, and Elastic Jobs create activity on the database.
Did you temporarily enable features using T-SQL, like Full-Text Search, that constantly generate queries against the database?
If your database is serverless, did you accidentally leave a tool like SQL Server Management Studio or Visual Studio connected to the database for a couple of days, until you shut down the client computer?
My suggestion: if you rarely use this database and you have not set it up as serverless, it is a good time to try serverless.
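For reference, switching an existing database to the serverless tier can be done from the portal or with a single T-SQL change of service objective; the 'GP_S_Gen5_1' objective below (General Purpose, serverless, Gen5 hardware, 1 max vCore) is just an example size:

    -- Move the database to a serverless service objective so it
    -- can auto-pause when idle and bill only for storage while paused.
    ALTER DATABASE [MyDatabase]
        MODIFY (SERVICE_OBJECTIVE = 'GP_S_Gen5_1');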
We have a microservice project with multiple applications, consisting of frontends (Angular, AngularJS), backend apps (ASP.NET Core, PHP), gateways, etc.
I was wondering whether it's the correct approach to have an Application Insights resource per project, or whether there should be just one per environment for all the applications. It seems that if I create multiple Application Insights resources and assign them to separate projects, Azure can somehow figure out they are all linked (the routes are visible on the application map). I'm not sure what the correct approach is.
There are a few things to take into account here, like the number of events you're tracking and whether that 'fits' into one instance of Application Insights, or whether you're OK with using sampling.
As per the FAQ, use a single instance:
Should I use single or multiple Application Insights resources?
Use a single resource for all the components or roles in a single business system. Use separate resources for development, test, and release versions, and for independent applications.
See the discussion here
Should I use single or multiple Application Insights resources?
I would have one Application Insights resource per service. The reason is that Application Insights doesn't cost anything until you hit the threshold, so if you use one resource to log everything, it's likely that you will hit the threshold pretty quickly.
Also, it is good practice to separate out the logs for each service, as the data they hold can differ with regard to personal information.
You can, however, track a request across all services via the application map, or by writing a query that combines the logs from multiple Application Insights resources.
Currently I am trying to learn the various services of Amazon Web Services and Microsoft Azure, like Amazon SNS, Amazon storage, and Amazon search.
I have a question in my mind: why are cloud platforms so much more popular nowadays than the old traditional approach? For example, previously we stored our files (img, txt, .doc, etc.) inside our web application project only, but nowadays some web applications store their files on Amazon storage or Azure storage.
What are the benefits of storing files and folders on a cloud platform?
Next, why are Amazon search or Azure Search preferred, when searching was done even before they were available, and they are not freely available?
Now, if we talk about push notifications, why use Amazon or Azure push notifications if we can easily send notifications using code that is available on the internet?
In general, I just want to know why web applications nowadays use cloud platforms (Azure or Amazon) more and more, even though they are costly.
Can anybody explain this to me in some detail?
Among the many reasons, the most important and common ones I can think of are:
High Availability - When you manage your own services, there is always the operational challenge of ensuring that they do not go down, i.e., crash. This may cause downtime for your application, or even data loss, depending on the situation. The cloud services you have mentioned offer reliable solutions that guarantee maximum uptime and data safety (by backup, for example). They often replicate your data across multiple servers, so that even if one of their servers goes down, you do not lose any data.
Ease of use - Cloud services make it very easy to use a specific service by providing detailed documentation and client libraries. The dashboards or consoles of many cloud services are user-friendly and do not require an extensive technical background to use. You could deploy a Hadoop cluster on Google Compute Engine in less than a minute, for instance. They offer many pre-built solutions which you can take advantage of.
Auto-scale - Many cloud services nowadays are designed to scale automatically with increasing traffic, so you do not have to worry about traffic or application load.
Security - Cloud services are secure. They offer sophisticated security solutions with which you can protect your service from being misused.
Cost - Hosting your own services requires extensive resources like high-end servers, dedicated system administrators, good network connectivity, etc. Cloud services are quite cheap these days.
Of course you could solve these problems yourself, but smaller organizations often prefer not to because of the operational overhead. It would take more time and resources to reach a stage where your solution is both reliable and functional. People often prefer to work on the actual problem their application is trying to solve and abstract away most operational problems, which cloud services readily do.
p.s. These are some opinions from an early stage startup perspective.
I need to measure how many concurrent users my current azure subscription will accept, to determine my cost per user. How can I do this?
This is quite a big area within capacity planning for a product/solution, but effectively you need to script a user scenario, say using a tool like JMeter (VS2012 Ultimate has a similar feature), then fire off lots of requests to your site and monitor the results.
Visual Studio can deploy your Azure project in a profiling mode, which is great for detecting the bottlenecks in your code for optimisation. But if you just want to see how many requests per role it takes before something breaks, a tool like JMeter should work.
There are also lots of products on offer, like http://loader.io/, which is great for not having to worry about bandwidth issues, scripting, etc., and it should just work.
If you do roll your own manual load-testing scripts, please be careful to avoid false negatives or false positives. By this I mean that if your internet connection is slow and you send out millions of requests, the bandwidth of your connection may make your site appear VERY slow, when in fact it's not your site at all...
This has been answered numerous times. I suggest searching [azure] 'load testing' and starting to read. You'll need to decide between installing a tool on a virtual machine or Cloud Service (Visual Studio Test, JMeter, etc.) and subscribing to a service (LoadStorm)... For the latter, if you're focused on maximum app load, you'll probably want to use a service that runs within Azure, and make sure it has load generators in the same data center as your system under test.
Announced at TechEd 2013, the Team Foundation Test Service will be available in Preview on June 26 (coincident with the //build conference). This will certainly give you load testing from Azure-based load generators. Read this post for more details.
I am looking for some sort of map showing the physical locations of Microsoft's Windows Azure data centers worldwide.
Can you help?
Here is a public Google Map of Azure datacenter locations - https://maps.google.com/maps/ms?msid=214511169319669615866.0004d04e018a4727767b8&msa=0&ll=-3.513421,-145.195312&spn=147.890481,316.054688
Microsoft does not disclose the exact location of the data centres, for obvious reasons, although the internet does have some information you may have seen, such as http://matthew.sorvaag.net/2011/06/windows-azure-data-centre-locations/
Worth noting, though, that this refers only to the 'main' Windows/SQL Azure data centres; in addition there are many CDN nodes around the world in smaller data centres.
I am curious though - why do you ask?
The link below will also give you the locations of the data centers.
http://azure.microsoft.com/en-in/regions/
The exact physical location of a data centre isn't usually relevant to the users of an application. What's more important is the latency they see when reaching the application.
But the most important thing is usually the speed of your own application.
For example, at my particular location in the UK I see somewhat better responses from the Northern Europe Azure site than the Western Europe site. This will be down to the particular route taken by packets from my PC through the local network and out to the point on the wider Internet where it peers with the Microsoft Azure systems.
If I'm dialled in through a VPN to an office in the US then I'll see better responses from a US-hosted Azure site.
However, compared to the ~60 millisecond ping time I see to the data centre, the ~200 millisecond response time of the SQL Azure queries on my site is something I can control, and is more important.
Better ways to make your Web application faster include:
Cache, cache, cache. Use CDN-hosted versions of libraries like jQuery where possible.
Minify your scripts and CSS, and merge if possible.
Only perform postbacks as a last resort. Use JavaScript/AJAX to load data into your application.
... all of which applies to Web applications whether they're on Azure or other hosts.