I just moved some of our Web API from an IIS web role to a worker role in Windows Azure, and it is working much better. What I want to know is how much better. Before, we were using New Relic to monitor the web server. I have the agent installed on the worker roles, but I'm not getting any of the great analytics.
(What I followed to make this work)
So I was hoping someone could help me get some basic stats on how well my self-hosted web server is performing into New Relic. I'm looking for throughput, response time, and logged errors.
I found something that makes me think I could do it, but I am not familiar with OWIN.
If anyone has some ideas on how to get this to work, that would be great!
Edit:
What I am looking for is help using the New Relic API (RecordMetric(), RecordResponseTimeMetric(), IncrementCounter(), etc.) and hooking it into the OWIN pipeline to record throughput, response time, and logged errors.
The New Relic .NET agent gathers most transaction-related metrics within the context of the IIS pipeline. The agent can get some basic metrics for standalone services like Worker Roles (WaWorkerHost.exe). Without any special setup, you can monitor calls per minute, RAM/CPU utilization, database calls and external requests. Beyond this, you'll want to use the .NET agent API:
https://newrelic.com/docs/dotnet/the-net-agent-api
Specifically, RecordMetric(), RecordResponseTimeMetric() and IncrementCounter() are available for Azure Worker Roles and other non-IIS applications. Other methods in the API require proper HttpContext instances.
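For example, a small piece of OWIN middleware can time each request and push the numbers through those three calls. This is only a sketch, assuming the New Relic agent API assembly is referenced and the agent is installed on the role; the "Custom/SelfHost/..." metric names are made up and can be anything you like:

using System;
using System.Diagnostics;
using Owin;

public static class NewRelicMetricsMiddleware
{
    public static void UseNewRelicMetrics(this IAppBuilder app)
    {
        app.Use(async (context, next) =>
        {
            var stopwatch = Stopwatch.StartNew();
            try
            {
                await next();
            }
            catch (Exception)
            {
                // Count unhandled exceptions as errors, then let them propagate.
                NewRelic.Api.Agent.NewRelic.IncrementCounter("Custom/SelfHost/Errors");
                throw;
            }
            finally
            {
                stopwatch.Stop();

                // Throughput: one count per request.
                NewRelic.Api.Agent.NewRelic.IncrementCounter("Custom/SelfHost/Requests");

                // Response time in milliseconds.
                NewRelic.Api.Agent.NewRelic.RecordResponseTimeMetric(
                    "Custom/SelfHost/ResponseTime", stopwatch.ElapsedMilliseconds);

                // Count HTTP 5xx responses as errors as well.
                if (context.Response.StatusCode >= 500)
                {
                    NewRelic.Api.Agent.NewRelic.IncrementCounter("Custom/SelfHost/Errors");
                }
            }
        });
    }
}

Register it at the very start of the pipeline in your OWIN Startup class (app.UseNewRelicMetrics(); before app.UseWebApi(config);) so every request passes through it.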
Related
We have a web application that uses IHostedService.
There is an example of this here
And the method we employed is detailed here
The goal was to have an application that continually runs background tasks, so we could schedule jobs to run automatically at set times. This application is separate from all our other applications, so it's not visited by our user base.
So we needed a way for this application to run all the time on Azure.
We tried to set up an App Service for the application on Azure, but it seems that the application does not run continually. Things run OK locally, but on Azure, I have to stop and restart the service before the IHostedService tasks kick in.
Is there a way on Azure to keep the application alive and running?
OK, so in the App Settings in Azure, there is a setting called Always On; this worked for us. :)
We also found out from Azure support, that if you are on the lowest tier in their offered packages, this is treated as a "Development" environment and will only have limited up-time. As a result, we could expect the application to go offline when we reached that limit.
We argued that there should have been something more obvious in the dashboard for us to know this.
Once we upgraded to a Standard tier, the application stayed online.
Also, if you are running a hosted service and it hits an unhandled exception, this stops the service. You need to make sure you are handling exceptions for this to work.
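As a rough sketch (assuming the BackgroundService base class; the ScheduledJobsService name and the one-minute polling delay are just placeholders), wrapping the work inside the loop in a try/catch keeps one failed job from taking the whole hosted service down:

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class ScheduledJobsService : BackgroundService
{
    private readonly ILogger<ScheduledJobsService> _logger;

    public ScheduledJobsService(ILogger<ScheduledJobsService> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                await RunScheduledJobsAsync(stoppingToken); // your actual work
            }
            catch (Exception ex)
            {
                // Log and carry on; an unhandled exception here would end the service.
                _logger.LogError(ex, "Scheduled job failed; will retry on the next cycle.");
            }

            await Task.Delay(TimeSpan.FromMinutes(1), stoppingToken);
        }
    }

    private Task RunScheduledJobsAsync(CancellationToken token)
    {
        // Placeholder for the background tasks described above.
        return Task.CompletedTask;
    }
}

It is registered as usual with services.AddHostedService<ScheduledJobsService>().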
I'm currently developing an SOA-based architecture in Azure, using disparate Web API services (they'd probably qualify as microservices, but I'm hesitant to use the term).
I have a service which is triggered by the Azure Scheduler. It does some "stuff" and then needs to call another Web API (via HttpClient) service to trigger something else. To do this, I need to know the URI of the 2nd service. When running locally, this is fine, as it is something like
POST http://localhost:1234/2ndService/api/action
However, when I deploy to Azure (using Internal Only as the access level), it gets an obfuscated URI, such as http://microsoft-apiapp8cf3d453-39d8-4b3b-ad00-e9d8008a9b58, which I obviously can't guess at deploy time.
Any ideas on how to solve this problem? Or have I made a fundamental error here?
Instead of relying on public HTTP endpoints, have you considered passing messages via Azure Storage queues? It's very simple to do and is going to be more robust since you can take advantage of built-in features like guaranteed message delivery.
The overall idea is that Service A does some "stuff" then puts a message on queue ONE. Service B continuously reads from queue ONE until it picks up a new message from Service A (or any other service for that matter) and then does its "STUFF". You can continue to chain calls like this to other services that need to be notified.
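As a rough illustration using the WindowsAzure.Storage SDK (the connection string and the "queue-one" queue name are placeholders), Service A enqueues a message and Service B polls for it:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueueChaining
{
    // Service A: enqueue a message after finishing its "stuff".
    public static void NotifyNextService(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("queue-one");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage("{\"jobId\": 42}"));
    }

    // Service B: poll the queue and process whatever arrives.
    public static void ProcessPendingWork(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("queue-one");
        queue.CreateIfNotExists();

        var message = queue.GetMessage();
        if (message != null)
        {
            // Do Service B's "STUFF" with message.AsString here...
            queue.DeleteMessage(message); // remove it once handled
        }
    }
}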
If you want a more elegant solution you can look at using Service Bus Topics but the concept is basically the same.
Also, since you mentioned that your architecture is much like microservices, you can check out the new Service Fabric which is designed for your scenario.
In the case of Azure Web Apps, you can always see such properties by going to the web app dashboard, then Properties. When deploying from Visual Studio, you can set the URL as you want; I just checked it, and it works fine.
It's not very clear what technology you are using: is it an IaaS VM? Is it Web Apps?
From my standpoint, each service should be deployed as a separate Web App (or API App, if you want). Each Web App has its own name defined, as in yourwebapp.azurewebsites.net, so once you have provisioned Web App no. 1 in Azure, you know its address and can call it from Web App no. 2.
In all cases, you should use fully qualified domain names, not local/internal ones.
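One simple way to wire that up is to keep the downstream service's base address in configuration and read it at runtime. This is only a sketch; the "SecondServiceBaseUrl" setting name is an example, and in an App Service you would override it per environment in the portal's application settings:

using System;
using System.Configuration;   // requires a reference to System.Configuration
using System.Net.Http;
using System.Threading.Tasks;

public static class SecondServiceClient
{
    public static async Task TriggerActionAsync()
    {
        // e.g. "http://localhost:1234/2ndService/" locally,
        // "https://yoursecondservice.azurewebsites.net/" once deployed.
        var baseUrl = ConfigurationManager.AppSettings["SecondServiceBaseUrl"];

        using (var client = new HttpClient { BaseAddress = new Uri(baseUrl) })
        {
            var response = await client.PostAsync("api/action", new StringContent(string.Empty));
            response.EnsureSuccessStatusCode();
        }
    }
}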
I have noticed that with Windows Azure Mobile Services I can schedule tasks to perform at certain times. I also noticed that the script to perform these tasks is JavaScript only. Is there a way I can use some server-side code to perform a Mobile Services scheduled task?
I want to be able to connect to an API. Check for a specific update. If that update is present send an email to myself.
Unfortunately this API suffers from the same-origin policy and doesn't offer a solution like JSONP. Therefore I will need to handle this API access server side.
Currently you can only use JavaScript, but support for other languages (like C#) is planned.
I was interested in Windows Azure Mobile Services because of the Scheduler service, as a simpler alternative to jobs running in Worker Roles with Quartz and other alternatives, but I felt the same difficulty you describe in having to work in a JS-only environment, which is a problem in my scenario.
Are you aware of Windows Azure Web Site Web Jobs (here and here)? You can configure continuous or scheduled jobs developed in many script languages as well as in .Net.
We moved many smaller tasks from under-utilized and more complex Worker Roles to Web Jobs as simple .Net console apps. Via the Web Jobs SDK you also get a good monitoring environment.
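As a rough sketch of what such a console-app Web Job can look like with the WebJobs SDK (the "tasks" queue name is a placeholder), a function runs automatically whenever a message lands on a storage queue:

using System.IO;
using Microsoft.Azure.WebJobs;

public class Program
{
    public static void Main()
    {
        var host = new JobHost();
        host.RunAndBlock(); // keeps the continuous job running and dispatching triggers
    }

    // Invoked whenever a message arrives on the "tasks" storage queue.
    public static void ProcessQueueMessage([QueueTrigger("tasks")] string message, TextWriter log)
    {
        log.WriteLine("Processing: " + message);
    }
}

For scheduled rather than continuous work, the same console app can instead be deployed as a triggered Web Job with a schedule.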
We are going to host our ASP.NET web site on Azure. I am not quite familiar with Azure. I need to create some kind of scheduler which will send a request to the Google API once a week and save the response data to a DB. I read some articles about Worker Roles. Is that suitable for this? How should it be deployed to Azure? Any other solutions?
You could certainly make use of a Worker Role for that purpose; however, I would not recommend going down that route, as you are only going to use the functionality once a week. In other words, you would be under-utilizing the resources. Do take a look at the Windows Azure Mobile Services Scheduler: http://www.windowsazure.com/en-us/develop/mobile/tutorials/schedule-backend-tasks/. Another alternative would be to use a 3rd-party service like the Aditi Scheduler: http://www.aditicloud.com/. There's also a website which allows you to do the same thing (I'm sorry, I forgot the name of that site :)).
If you're still keen on doing it through Windows Azure Worker Role, I wrote a blog post about the same which you may find useful: http://gauravmantri.com/2013/01/23/building-a-simple-task-scheduler-in-windows-azure/.
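If you do go the Worker Role route, the core of it is just a Run() loop that wakes up periodically, checks whether the weekly job is due, and goes back to sleep. A minimal sketch (not the full approach from the linked post; the actual Google API call and DB write are left as placeholders):

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        var lastRunUtc = DateTime.MinValue;

        while (true)
        {
            if (DateTime.UtcNow - lastRunUtc >= TimeSpan.FromDays(7))
            {
                // Call the external API and save the response to your DB here.
                lastRunUtc = DateTime.UtcNow;
            }

            Thread.Sleep(TimeSpan.FromMinutes(30)); // poll interval
        }
    }
}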
We are thinking of using Windows Azure for simulation: ~100 VM nodes, each working on its problem set and reporting the result back to a master node.
I have created VM instances from the web UI. In order for this to work, we would need to use Azure API to bring servers up and shut them down once they are done.
Does anyone have any experience with something like this? I am looking for advice, gotchas, etc.
Thanks.
You sure can do it, and I have helped others make it happen on hundreds of nodes. Take a look at the Windows Azure REST API to configure your role as described here. While others may have other ideas, I think the general steps would be as below:
Create a master machine or a web role to manage your roles using the REST API
Create a worker role instance and use it to clone multiple instances as needed
Use the REST API to start and shut down worker roles, and update the instance count when needed
Use the Azure Bootstrapper to bootstrap the VMs depending on your requirements
The Azure REST-based Service Management API can be called from a web app or a standalone app, so you can also have a web role make it happen from anywhere in the world. This way you don't need any on-premises components at all, as it will be a totally cloud solution. If you need any help on creating the web role, I sure can help.
You can provision Virtual Machines using Service Management REST API (there's also a managed API on NuGet).
But in your case you might want to consider using Cloud Services (PaaS). With Cloud Services you simply build your application, package it, and deploy it. Then, using the portal or the management API, you can simply configure the number of instances. There is even a command-line tool (csmanage.exe) which allows you to change the number of instances through the service configuration.
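For completeness, changing the instance count from code boils down to the Change Deployment Configuration operation: you POST a new .cscfg (with the desired <Instances count="..."/> per role) to the deployment slot. A rough sketch with HttpWebRequest; the subscription ID, service name, management certificate and the x-ms-version value are all placeholders you would substitute for your own deployment:

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

public static class InstanceScaler
{
    public static void ChangeConfiguration(
        string subscriptionId, string serviceName, string cscfgXml, X509Certificate2 managementCert)
    {
        var uri = string.Format(
            "https://management.core.windows.net/{0}/services/hostedservices/{1}/deploymentslots/production/?comp=config",
            subscriptionId, serviceName);

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("x-ms-version", "2012-03-01"); // version value may need adjusting
        request.ClientCertificates.Add(managementCert);    // management certificate auth

        // The new .cscfg (containing the desired <Instances count="..."/>) goes in as base64.
        var body = "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
                   "<Configuration>" + Convert.ToBase64String(Encoding.UTF8.GetBytes(cscfgXml)) + "</Configuration>" +
                   "</ChangeConfiguration>";

        using (var writer = new StreamWriter(request.GetRequestStream()))
        {
            writer.Write(body);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // 202 Accepted means the configuration change was queued.
            Console.WriteLine("Status: " + response.StatusCode);
        }
    }
}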