I am trying to implement SignalR hubs on my REST service (ASP.NET Web API) hosted on Azure. I've been reading some common material related to SignalR and came across the fact that it is server-bound. You can check here. That means that in order to scale it across multiple server instances I have to do some additional work. So I started to ask myself: "How many instances of my REST service are currently running on Azure? How do I know that?"
So, what I did was navigate to the Azure portal and open my service > Process explorer.
Does that mean that my web app scales automatically and I currently have 2 instances of my Web API running? I think it clearly says that there's currently only one instance of it, running 2 processes, but how do I know if it will scale at some time in the future?
No, your photo shows your app process and Kudu, which is a management interface you can access at https://yourappname.scm.azurewebsites.net.
You can see the instance count in your app's App Service Plan. If it is on Free or Shared, there can only be one. If it is Basic/Standard/Premium, it is one by default. If you haven't set up auto-scale, it won't scale to more than 1 instance unless you tell it to.
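If you do scale the plan out later, SignalR will need a backplane so that messages reach clients connected to other instances. Below is a minimal sketch assuming the Microsoft.AspNet.SignalR.ServiceBus scaleout package; the connection string name ("ServiceBusBackplane") and topic prefix ("sr-backplane") are placeholders, not values from your app.

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Hypothetical connection string name; pull yours from config.
        string connectionString =
            System.Configuration.ConfigurationManager
                .ConnectionStrings["ServiceBusBackplane"].ConnectionString;

        // Route SignalR messages through an Azure Service Bus topic so that
        // every instance of the scaled-out web app sees them.
        GlobalHost.DependencyResolver.UseServiceBus(connectionString, "sr-backplane");

        app.MapSignalR();
    }
}

Until you actually scale beyond one instance, this adds nothing but a small latency cost, so it is safe to wire up ahead of time.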
I am looking into Web Apps monitoring in Microsoft Azure and I can see a variety of options in the portal. I have some questions about them, which I will put forward one by one. The question may be a bit long, so apologies in advance :-)
1. Process Explorer
Here we can find process details per instance for the processes running for my Web App. In case of scale-out we will also see multiple instances. I want to know why we are seeing 2 processes per instance and what the significance of each process is.
2. Metrics Per Instance (Apps)
While looking at this report, I can see 2 different tabs (see image), and I am unable to map them to the instances I have in my web app.
2.A) Is it true that if I have multiple deployment slots / scaled-out instances I will see that many tabs in the report?
2.B) Is there a way by which I can map these to my Web App instances in Process Explorer?
3. Metrics Per Instance (App Service Plan)
Here again we have two different indicators, the same as in Apps. Can someone please help me decipher these?
Can you please help me out with these reports? They seem quite confusing, and I am unable to map them to my instances and deployment slots in relation to the App Service Plan.
Once again apologies for a long question.
Thanks in Advance,
Mayank
Looks like no one has answered this in a long time. Let me see if I can explain this better.
The blade that you are talking about is accessible under the "Diagnose and solve problems" option of an App Service Web App. A lot of changes have been made to this feature in the last few months. Read more about it here: App Lens - Azure App Service
1. Why are we seeing 2 processes per instance, and what is the significance of each process?
In Azure App Service, for every web app another companion site is provisioned. This site is known as Kudu. So one w3wp.exe is the process hosting your code, and the second w3wp.exe is the process hosting Kudu; that process has an SCM tag appended to it. You can read more about it here: Project Kudu - GitHub
2. Is it true that if I have multiple deployment slots / scaled-out instances I will see that many tabs in the report? Is there a way by which I can map these to my Web App instances in Process Explorer?
To answer the first part: YES, the tabs correspond to the number of instances the App Service Plan is scaled out to. So if your web app is scaled out to 7 instances, you will see 7 tabs in the report.
There is no direct way to correlate the instance names to Process Explorer, but there is an alternative. I have a blog post describing how you can connect to the Kudu site of a web app on a specific instance. See this: Connect to Kudu site of a specific instance
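As a rough illustration of what you can do once you can reach Kudu, the sketch below calls Kudu's /api/processes endpoint with the site's deployment credentials to list the processes on whichever instance serves the request. The site name and credentials are placeholders; adjust them for your app.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class KuduProcessList
{
    public static async Task<string> GetProcessesAsync()
    {
        // Placeholder site name and deployment (publish profile) credentials.
        var siteName = "yourappname";
        var userName = "$yourappname";
        var password = "publish-profile-password";

        using (var client = new HttpClient())
        {
            var token = Convert.ToBase64String(
                Encoding.UTF8.GetBytes($"{userName}:{password}"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", token);

            // Kudu's REST API returns the processes running on the instance
            // that handled this request (your w3wp.exe plus the SCM one).
            return await client.GetStringAsync(
                $"https://{siteName}.scm.azurewebsites.net/api/processes");
        }
    }
}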
3. Metrics Per Instance (App Service Plan): here again we have two different indicators, the same as in Apps. Can someone please help me decipher these?
As the name says, Metrics per Instance (App Service Plan) displays data for the entire VM, while Metrics per Instance (Apps) displays data for a specific web app or process (w3wp.exe). In Azure App Service you can provision several web apps inside a VM, so this view provides a holistic view of the overall usage of the VM. This will help you determine whether you need to scale out or scale up.
I hope this answers this question.
We have an ASP.NET MVC application which communicates via WCF with a middle-tier application. We will be rewriting this and want to target Azure.
It's split into 3 tiers (web, business, database), which are all on separate machines, as some of the business processes can take a few seconds. When the web tier calls the middle tier it must wait for a response before returning to the user, i.e. something like a message queue isn't appropriate here.
I'm thinking for the new architecture we'd have:
Client: AngularJS
Web: Probably Asp MVC controllers in a Web App
Middle tier: ?
Database: Azure SQL Database
The middle tier is where it gets confusing: what is this in terms of Web Apps, and how does it communicate with the web tier? I think we'd prefer an RPC approach rather than a REST-based one if possible. We just want to send serialized classes back and forth.
What about scalability? We're presently assuming that the web tier will need to be stateless.
Since Azure Web Apps is a platform that builds on top of IIS, you can run an IIS-hosted WCF service on Web Apps just fine.
As for scalability, Azure App Service allows you to scale horizontally manually or automatically.
If the Web and Middle tier share scaling requirements, you can place both on the same App Service Plan. Then they share the instances and scale simultaneously. If however their scaling requirements are quite different, I recommend you put them in their own App Service Plans. Then one's scaling doesn't affect the other.
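For the RPC-style, serialized-classes requirement, consuming that IIS-hosted WCF service through a ChannelFactory is one option. Below is a minimal sketch; the contract, the middle-tier host name (middletier.azurewebsites.net) and the .svc path are assumptions for illustration, not anything from your existing solution.

using System.Runtime.Serialization;
using System.ServiceModel;

// Assumed service contract shared between the web and middle tiers.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderResult ProcessOrder(OrderRequest request);
}

[DataContract]
public class OrderRequest
{
    [DataMember]
    public int OrderId { get; set; }
}

[DataContract]
public class OrderResult
{
    [DataMember]
    public bool Succeeded { get; set; }
}

public class OrderServiceClient
{
    public OrderResult ProcessOrder(OrderRequest request)
    {
        // BasicHttpBinding over HTTPS; the endpoint address is a placeholder
        // for wherever the middle-tier Web App ends up being hosted.
        var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
        var address = new EndpointAddress("https://middletier.azurewebsites.net/OrderService.svc");

        var factory = new ChannelFactory<IOrderService>(binding, address);
        IOrderService channel = factory.CreateChannel();
        try
        {
            // The request/response classes travel as serialized data contracts,
            // which covers the "send serialized classes back and forth" part.
            return channel.ProcessOrder(request);
        }
        finally
        {
            ((IClientChannel)channel).Dispose();
        }
    }
}

Because the client holds no per-user state, this fits the stateless web tier you are assuming, and each tier can still scale independently on its own App Service Plan.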
I'm currently developing an SOA-based architecture in Azure, using disparate Web API services (they'd probably qualify as microservices, but I'm hesitant to use the term).
I have a service which is triggered by the Azure Scheduler. It does some "stuff" and then needs to call another Web API service (via HttpClient) to trigger something else. To do this, I need to know the URI of the second service. When running locally this is fine, as it is something like
POST http://localhost:1234/2ndService/api/action
However, when I deploy to Azure (using Internal Only as the access level), it gets an obfuscated URI, such as http://microsoft-apiapp8cf3d453-39d8-4b3b-ad00-e9d8008a9b58, which I obviously can't guess at deploy time.
Any ideas on how to solve this problem? Or have I made a fundamental error here?
Instead of relying on public HTTP endpoints, have you considered passing messages via Azure Storage queues? It's very simple to do and is going to be more robust, since you can take advantage of built-in features like guaranteed message delivery.
The overall idea is that Service A does some "stuff" then puts a message on queue ONE. Service B continuously reads from queue ONE until it picks up a new message from Service A (or any other service for that matter) and then does its "STUFF". You can continue to chain calls like this to other services that need to be notified.
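A rough sketch of that pattern with the Azure Storage SDK is below; the queue name, connection string and payload are placeholders.

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueueChaining
{
    // Placeholder connection string; in a Web/API App this would live in app settings.
    private const string ConnectionString = "UseDevelopmentStorage=true";

    // Service A: done with its "stuff", notify the next service.
    public static void NotifyNextService(string payload)
    {
        var account = CloudStorageAccount.Parse(ConnectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("queue-one");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(payload));
    }

    // Service B: poll the queue and process whatever Service A (or anyone else) dropped in.
    public static void ProcessPendingWork()
    {
        var account = CloudStorageAccount.Parse(ConnectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("queue-one");
        queue.CreateIfNotExists();

        CloudQueueMessage message;
        while ((message = queue.GetMessage()) != null)
        {
            DoStuff(message.AsString);     // Service B's own "STUFF"
            queue.DeleteMessage(message);  // remove it once handled
        }
    }

    private static void DoStuff(string payload) { /* ... */ }
}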
If you want a more elegant solution you can look at using Service Bus Topics but the concept is basically the same.
Also, since you mentioned that your architecture is much like microservices, you can check out the new Service Fabric which is designed for your scenario.
In the case of Azure Web Apps, you can always see such properties by going to the web app dashboard, then Properties. When deploying from Visual Studio you can set the URL as you want - I just checked, and it works fine.
It's not very clear what technology you are using - is it an IaaS VM? Is it Web Apps?
From my standpoint, each service should be deployed as a separate Web App (or API App, if you want). Each Web App has its own name, as in yourwebapp.azurewebsites.net, so once you have provisioned Web App no. 1 in Azure you know its address and can call it from Web App no. 2.
In all the cases, you should have fully qualified domain names, and not local/internal ones.
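One way to keep the local and Azure addresses out of your code is to put the second service's base URI in an app setting and override it per environment in the portal. A minimal sketch is below; the setting name "SecondServiceBaseUrl" and the /api/action route are placeholders.

using System;
using System.Configuration;
using System.Net.Http;
using System.Threading.Tasks;

public class SecondServiceCaller
{
    public async Task TriggerAsync()
    {
        // Locally this would be http://localhost:1234/2ndService;
        // in Azure the app setting is overridden with the deployed FQDN.
        var baseUrl = ConfigurationManager.AppSettings["SecondServiceBaseUrl"];

        using (var client = new HttpClient())
        {
            var response = await client.PostAsync(baseUrl.TrimEnd('/') + "/api/action", null);
            response.EnsureSuccessStatusCode();
        }
    }
}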
We are thinking of using Windows Azure for simulation: ~100 VM nodes, each working on its own problem set and reporting the result back to a master node.
I have created VM instances from the web UI. In order for this to work, we would need to use the Azure API to bring servers up and shut them down once they are done.
Does anyone have any experience with something like this? I am looking for advice, gotchas, etc.
Thanks.
You sure can do it, and I have helped others make it happen on hundreds of nodes. Take a look at the Windows Azure REST API to configure your role as described here. While others may have other ideas, I think the general steps would be as below:
Create a master machine or a web role to manage your roles using the REST API
Create a worker role instance and clone multiple instances from it as needed
Use the REST API to start and shut down worker roles and to update the instance count when needed (see the sketch after this list)
Use the Azure Bootstrapper to bootstrap the VMs depending on your requirements
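For the start/stop and instance-count part, here is a rough sketch of calling the Service Management REST API's change-configuration operation. It assumes a management certificate, a pre-edited ServiceConfiguration.cscfg whose Instances count attribute already holds the number of workers you want, and placeholder subscription/service names.

using System;
using System.IO;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

public class InstanceScaler
{
    // Placeholders: your subscription ID and cloud service name.
    private const string SubscriptionId = "00000000-0000-0000-0000-000000000000";
    private const string ServiceName = "my-simulation-service";

    public async Task ApplyConfigurationAsync(X509Certificate2 managementCert, string cscfgPath)
    {
        // The .cscfg already contains <Instances count="100" /> (or whatever you need);
        // the API just wants it base64-encoded inside a ChangeConfiguration body.
        var configXml = File.ReadAllText(cscfgPath);
        var body =
            "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
            "<Configuration>" +
            Convert.ToBase64String(Encoding.UTF8.GetBytes(configXml)) +
            "</Configuration></ChangeConfiguration>";

        var handler = new WebRequestHandler();
        handler.ClientCertificates.Add(managementCert);   // auth via management certificate

        using (var client = new HttpClient(handler))
        {
            client.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01");

            var url = $"https://management.core.windows.net/{SubscriptionId}" +
                      $"/services/hostedservices/{ServiceName}/deploymentslots/production/?comp=config";

            var response = await client.PostAsync(
                url, new StringContent(body, Encoding.UTF8, "application/xml"));
            response.EnsureSuccessStatusCode();
        }
    }
}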
The Azure REST-based Service Management API can be used from a web app or a standalone app, so you can also have a web role make this happen from anywhere in the world. This way you don't need any on-premises components at all, as it will be a totally cloud-based solution. If you need any help creating the web role I sure can help.
You can provision Virtual Machines using the Service Management REST API (there's also a managed API on NuGet).
But in your case you might want to consider using Cloud Services (PaaS). With Cloud Services you simply build your application, package it and deploy it. Then, using the portal or the management API, you can simply configure the number of instances. There is even a command-line tool (csmanage.exe) which allows you to change the number of instances through the service configuration.
I'm curious to know if this is possible, and if so, is it a good or bad idea?
We are developing an Azure application that is largely centered around worker roles that receive their work on a CloudQueue, and put the results in a CloudBlob, that the client then downloads. The web interface itself is a dead-simple ASP.NET MVC site that throws jobs in the CloudQueue, and builds URLs to download CloudBlobs.
Currently we accomplish this by having an Azure Cloud Project in our solution, which has a Web Role with the UI and Worker Roles with the actual work.
Could we use Azure Websites to publish and host the UI, which calls back to our Worker Roles? The Azure DLLs are just regular old .NET libraries, so I'm assuming Azure Websites won't have a problem with them. Then, when we want to update the UI, we just publish with Visual Studio. And when we want to update the Worker Role - which is 300MB+ and has a bunch of nasty dependencies like Crystal Reports - we can build the cloud bundle and update the Cloud Service through the Azure management portal.
It seems to me like doing this would make it easier to update the UI. I think it would also be cheaper to host, as we won't have to buy a bunch of instances for the Web Role.
If your question is "Could we use Windows Azure Websites", then based on your application architecture you sure can use Azure Websites to deploy your front end and configure the networking properly so you can continue to access the other Azure Storage services. As you are mostly using Blob and Queue, you can continue to use the HTTP/HTTPS settings in Azure Websites. You can keep the Worker Role as it is; however, if it is very complex to deploy, using a Windows Azure VM may be another direction to go.
I would say website deployment could be easier if your web app doesn't need anything complex configured at the web-server level, as Websites may not match the web-server-level configuration you get with a Web Role or an Azure VM. Answering "easier and cheaper" is very subjective, as it all depends on load and distribution, so you would have to try it and evaluate.
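Since the MVC site's job is to drop work on a CloudQueue and hand out download URLs for CloudBlobs, both of those pieces run unchanged from an Azure Website. As a rough illustration, the sketch below builds a time-limited SAS download URL for a result blob; the container name, blob naming convention and connection string are placeholders.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ResultLinks
{
    // Placeholder connection string; in a Website this comes from the portal's connection strings.
    private const string ConnectionString = "UseDevelopmentStorage=true";

    // Build a read-only download URL for a finished job's result blob,
    // valid for one hour, without making the container public.
    public static string BuildDownloadUrl(string jobId)
    {
        var account = CloudStorageAccount.Parse(ConnectionString);
        var container = account.CreateCloudBlobClient().GetContainerReference("results");
        CloudBlockBlob blob = container.GetBlockBlobReference(jobId + ".zip");

        var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Read,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddHours(1)
        });

        return blob.Uri + sas;
    }
}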