just starting to explore Azure and I am still a bit confused regarding the purposes of web roles vs worker roles. In the solution I'm working on, mobile apps (iPhone, Android, Windows, etc.) will be accessing our server product via a REST API. So there is really no public-facing web site for our service (as in web pages).
This made me think that I don't need a web role but instead can have one or more worker roles listening on our HTTP endpoints. I have created a prototype along these lines. When I do an HTTP POST from a mobile device to the endpoint, I get no response back, and I see nothing in the Azure logs to indicate that my worker role was started, is running, or is responding to it.
Is this an appropriate approach? Is there something I need to do in setup code because I don't have a web role? I read in another thread that web roles run in IIS but worker roles don't.
Thanks for bearing with me. I am still getting to grips with Azure and so have a little difficulty formulating the right question.
You don't need to have a web role in your Azure deployment. As you read, a web role has IIS, and your web site is hosted in it. A worker role is basically a plain old W2K8 server without IIS. Honestly, I haven't RDP'd into a worker role instance, so I'm not 100% sure whether it has IIS installed or not.
But you don't need a web role in order to expose a WCF service. Here's a nice example (although the background color needs some work) that shows you how to do this.
Good luck! I hope this helps.
Adding to what David Hoerster said: You can host up to 25 externally-facing endpoints (each with its own port number) on any role type, with each endpoint being http, https, or tcp. With a Web Role and IIS, a web application typically grabs an endpoint mapped to port 80. In your case, you'll be creating your own endpoints on your specific ports. You're responsible for creating your ServiceHost (or whatever you're using to host your service) and binding it to one of your endpoints. To do this, you'll need to either map each endpoint explicitly to a specific internally-facing port, or inspect the endpoint's properties to discover which port has been dynamically assigned to it, for you to bind to (might this be the issue you're running into with your prototype code?).
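To make the port-discovery part concrete, here is a minimal sketch (not the asker's actual code) of looking up an input endpoint by name via the role environment and self-hosting a REST-style WCF service on it from a worker role. The endpoint name "RestIn" and the IMyRestService / MyRestService names are illustrative assumptions.

```csharp
// A minimal sketch, assuming an input endpoint named "RestIn" is declared in
// ServiceDefinition.csdef. The IMyRestService / MyRestService names are
// illustrative, not taken from the original question.
using System;
using System.ServiceModel;
using System.ServiceModel.Web;
using Microsoft.WindowsAzure.ServiceRuntime;

[ServiceContract]
public interface IMyRestService
{
    [OperationContract, WebGet(UriTemplate = "ping")]
    string Ping();
}

public class MyRestService : IMyRestService
{
    public string Ping() { return "pong"; }
}

public static class EndpointHostingSketch
{
    public static WebServiceHost StartHost()
    {
        // Ask the role environment which IP address/port was assigned to the endpoint.
        var ipEndpoint = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["RestIn"].IPEndpoint;

        var baseAddress = new Uri(string.Format(
            "http://{0}:{1}/api", ipEndpoint.Address, ipEndpoint.Port));

        // WebServiceHost gives REST-style (webHttpBinding) hosting without IIS.
        var host = new WebServiceHost(typeof(MyRestService), baseAddress);
        host.AddServiceEndpoint(typeof(IMyRestService), new WebHttpBinding(), string.Empty);
        host.Open(); // call this from the worker role's OnStart/Run
        return host;
    }
}
```

If the service host is never bound to the endpoint the role environment actually assigned, requests from outside will simply go unanswered, which matches the symptom described in the question.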
If you're looking for the benefits IIS offers when hosting your endpoint, you're better off with a Web Role, since a Web Role enables IIS by default (and it's easy to add WCF services to a Web Role from Visual Studio).
Even if you were to self-host your endpoints, you could still use a Web Role, but now you'd be carrying the extra memory baggage of a running, yet unused, IIS service.
I have a REST API developed with Entity Framework Core 3.1 in C#, and I need to deploy the application to a virtual machine in Azure, but it does not work. Most of the tutorials I have followed only cover creating the virtual machine and publishing a simple web application. Any guide, help, or tutorial?
Generally the error is 500 (Internal Server Error), along with problems in the web.config.
You need to make sure that external requests can reach and be processed by the web server (typically IIS) running inside the VM. For that you need to open firewall ports to allow inbound traffic both within the VM itself and through the VM's network interface (found on the Networking tab of the VM in the portal).
An API is technically deployed as part of a web application. Hence the following links would help.
Link 1
Link 2 (Note: Video has no voice)
That being said, deploying your API as an App Service in Azure (PaaS) is a much better approach than using VMs (unless your API has specific requirements that mean it must be deployed on a VM). App Service also makes setting up other associated services, e.g. logging and monitoring, authentication, etc., much easier.
I have a web application that was being developed using Web Roles in Azure. It is a relatively complex application in which clients communicate with each other via the web server. Client to server communication is via SignalR and within server instances Web Api is used.
It was critical that it was tested against multiple instances of web roles, since all of the plumbing potentially needed to communicate across the various web role instances.
This was easy to do in web roles since in Visual Studio's project properties you would simply up the instance count and the Azure Compute Emulator would open a bunch of instances for you.
At a recent Microsoft technical briefing it was suggested that web roles were being superseded by Web Apps in Azure App Service. Indeed, on the surface these appeared to be a better fit for my problem, and I have been investigating this as an architecture.
The problem I have found is how to simulate multiple instances. Web Apps in development spin up in a single IIS Express instance and thus all have the same IP address on my development computer. Web roles spin up in different instances and all have different IP addresses, which makes testing easy. From what I understand, in production, web apps configured to have multiple instances will get different IP addresses (and/or ports), since they may be running on different servers.
So how do I test multiple instances of Web Apps in the Azure App Service that need to cross communicate in development?
...or am I just missing something big here?
Thanks in advance.
Dave A
You can use the ARRAffinity cookie value to specify which instance you want to hit, which lets you direct a request to any particular instance.
You can find more details here: http://blog.amitapple.com/post/2014/03/access-specific-instance/#.VhLIGXmFMis
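For illustration, here is a hedged sketch of the approach described in the linked post: sending the ARRAffinity cookie so the load balancer routes the request to a particular instance. The site URL, request path, and instance id value are placeholders you would replace with your own.

```csharp
// A hedged sketch: the site URL, path, and instance id are placeholders.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class ArrAffinitySketch
{
    public static async Task<string> CallSpecificInstanceAsync(string instanceId)
    {
        var siteUri = new Uri("https://yoursite.azurewebsites.net"); // placeholder site
        var cookies = new CookieContainer();
        // ARR routes the request to the instance whose id matches this cookie value.
        cookies.Add(siteUri, new Cookie("ARRAffinity", instanceId));

        using (var handler = new HttpClientHandler { CookieContainer = cookies })
        using (var client = new HttpClient(handler))
        {
            return await client.GetStringAsync(new Uri(siteUri, "/api/status")); // placeholder path
        }
    }
}
```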
Could you run full IIS (w3wp) locally for testing? If so, you could create multiple sites or applications using different application pools, and hence processes.
I found this. http://blog.tylerdoerksen.com/2013/11/01/azure-websites-vs-cloud-services/
In short, given that I need some internal communication, Web Apps in Azure App Service should not be used.
I'm planning an application that has a mobile app as a front end (and perhaps a web front end also that performs a different purpose). Something like Runkeeper, or Runtastic, if you're familiar with those apps. The mobile device is the primary method of user interaction, and the web site has stats and dashboards that the users can view afterwards.
I would like the main application to reside in Windows Azure. I'm confused about how to architect the application though - should the business logic reside in a web role, or a worker role? If the main user interface is a mobile app, does it connect to the worker role to persist or retrieve data, or to a web role, or neither? I understand a typical scenario where a web role provides a user interface which can persist data directly to storage or pass data to queues or tables to be picked up by worker roles, but the presence of the mobile app throws me off.
Any help? Thanks!
Andy's answer is great, but let me add a different flavor. The only difference between a web role and a worker role is that the web role automatically has IIS turned on and configured. A good rule of thumb is that if you want IIS, use a web role. If you don't want IIS, use a worker role.
For hosting a server component for mobile apps to connect to, I think the simplest thing that would work would be a web role hosting an ASP.NET web application. Web applications can be used for services as well as web front end (HTML) web sites.
ASP.NET MVC and Web API make setting up web services really easy, and it's easy to work with non-HTML data formats such as JSON or XML. Your mobile app could communicate with the web app using a REST JSON API, or you could use XML/SOAP if you wanted to, or whatever format you want. REST APIs with JSON as the transfer format are probably the most popular at the moment. One way to think about a web app is that it's just a way to send and receive data from clients. If the client is a web browser, you can serve up your content as HTML pages, or if your client is a mobile app, you can serve up your data as JSON and let the client display it however it needs to. Basically, your web app can be both your web site (HTML) and your "API" for non-web-browser clients.
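As a rough illustration (not prescriptive), a Web API controller that serves JSON to a mobile client can be as small as the following; the route, model, and sample data are invented for the example.

```csharp
// A rough sketch only; the route, model, and sample data are invented.
using System.Collections.Generic;
using System.Web.Http;

public class RunStat
{
    public string UserId { get; set; }
    public double DistanceKm { get; set; }
}

public class StatsController : ApiController
{
    // GET api/stats — Web API content negotiation returns JSON (or XML)
    // depending on the client's Accept header.
    public IEnumerable<RunStat> Get()
    {
        return new[]
        {
            new RunStat { UserId = "demo-user", DistanceKm = 5.2 } // sample data
        };
    }
}
```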
You can think of worker roles sort of like Windows Services. They are primarily used for doing back-end processing, and things like that. A worker role might provide some capability to host a public facing API, but you would have to manage connections, message pipelines, recycling, and all that yourself; whereas, a web role would have a web server (IIS) provided for you to manage connections, etc. If you are going to introduce things like message queues, it would make sense to have the public facing API be a web role, and the message processing component a worker role. The web app could receive the message from the client via a REST JSON API, and then pass the message off to a queue, where the worker role picks it up. Introducing queues and worker roles makes sense if you have heavy-duty server-side business logic that can be processed in the background without impacting the client.
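If you do introduce a queue, the hand-off might look roughly like this sketch using the classic Azure Storage client library; the connection string, queue name, and payload are placeholders.

```csharp
// A hedged sketch of the web-role-to-worker-role queue hand-off.
// Connection string, queue name, and message content are placeholders.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class QueueHandOffSketch
{
    const string ConnectionString = "UseDevelopmentStorage=true"; // placeholder
    const string QueueName = "work-items";                        // placeholder

    // Web role side: receive the REST call, drop a message on the queue, return quickly.
    public static void Enqueue(string payload)
    {
        GetQueue().AddMessage(new CloudQueueMessage(payload));
    }

    // Worker role side: poll the queue and process messages in the background.
    public static void ProcessOne()
    {
        var queue = GetQueue();
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            // ... do the heavy-duty processing here ...
            queue.DeleteMessage(message); // remove it once processing succeeds
        }
    }

    static CloudQueue GetQueue()
    {
        var account = CloudStorageAccount.Parse(ConnectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference(QueueName);
        queue.CreateIfNotExists();
        return queue;
    }
}
```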
I have an application that includes multiple hosted services in Azure. Two are web roles, one is a worker role. The problem is, two of the roles now need to communicate. One is a web role that serves as the admin interface; the other is the worker role. The admin interface needs to issue commands, like pausing any running jobs, reporting status, etc. The second web role is just a site, unrelated to the first two.
(Just to preface, I want to make sure my use of Azure terms are correct):
Hosted Service: An Azure 'application'. Multiple roles with two deployments, production and staging
Deployment: A specific instance of all the roles, either in production or staging, with a single external endpoint (*.cloudapp.net)
Role: A single 'job', either a web role or a worker role.
Instance: The VMs that service a role
Also to verify: Is it possible to add roles to an existing hosted service? That is, if I deploy 2 roles from one solution, can I add a third role in another deployment from a different solution?
Because each role is in its own hosted service, it presents some challenges. Here's my understanding of the choices for how they can communicate:
Service Bus: This seems to be the best from an architecture standpoint. Each hosted service can connect a WCF service to the service bus, and admin can issue commands to the worker role. The downside is this is pretty cost prohibitive.
Internal endpoints: This seems best if cost is factored in. The downside is that you have to deploy all the roles at once, and the web roles cannot have unique addresses. The only way to access both web roles externally is with port forwarding. As far as I'm aware, it's not possible to deploy 2 roles from one solution and 1 role from another?
External WCF service: Each component can be in individual projects and individual hosted services. The downside is there's now an externally visible service for administration.
Queue/Table storage: Admin can write commands to an Azure queue, and the worker roles can write their responses to table storage. This seems fine for generating reports, but not great for issuing synchronous commands.
Should multiple roles that all service "the application" all go into the same Azure hosted service? If from a logical standpoint it makes the most sense, then I'd be happy to go with #2 and just deal with port forwarding.
First off, your definitions look pretty good and I think you understand the problem pretty well.
Also, within each deployment, each external endpoint can only be assigned to one role. So if you want to run two sites on port 80, they need to be in the same role. This is just like setting up two sites in IIS on the same port (which is exactly what you're working with); the sites are distinguished using host headers. If you don't want to go to that effort, or if you want to deploy the sites separately, then you'll want to put your stand-alone site in its own service/cloud project.
For the communication part, the one option that you've missed off is service bus queues. Microsoft have released a library using service bus queues that is specifically designed for inter-role communication.
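As a rough sketch of the idea (using plain Service Bus queues rather than that specific inter-role library), the admin web role could send commands and the worker role could receive them along these lines; the connection string and queue name are placeholders.

```csharp
// A hedged sketch with Microsoft.ServiceBus.Messaging (WindowsAzure.ServiceBus package).
// Connection string and queue name are placeholders, not real values.
using System;
using Microsoft.ServiceBus.Messaging;

public static class ServiceBusCommandSketch
{
    const string ConnectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."; // placeholder
    const string QueueName = "admin-commands"; // placeholder

    // Admin web role: send a command such as "pause-jobs".
    public static void SendCommand(string command)
    {
        var client = QueueClient.CreateFromConnectionString(ConnectionString, QueueName);
        client.Send(new BrokeredMessage(command));
        client.Close();
    }

    // Worker role: poll for commands and act on them.
    public static void ReceiveCommand()
    {
        var client = QueueClient.CreateFromConnectionString(ConnectionString, QueueName);
        BrokeredMessage message = client.Receive(TimeSpan.FromSeconds(5));
        if (message != null)
        {
            string command = message.GetBody<string>();
            // ... pause jobs, report status, etc. ...
            message.Complete(); // mark the message as processed
        }
        client.Close();
    }
}
```

Note that a queue delivers each message to a single receiver, so this pattern fits commands handled by any one instance; fanning a command out to every instance needs a different mechanism (e.g. topics/subscriptions), which ties into the caveats below.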
Other than that, the extra comments on your points:
You're right that internal endpoints are the cheapest way to go, but you will be rolling it all yourself. Of course, you could set up WCF services to listen on these internal endpoints.
An external WCF service might work OK, but if you have more than one instance of your role, all WCF calls will go through the load balancer and the message will only be sent to one of the instances. You would need to make multiple calls to make sure the message was received by all instances and even then you couldn't be sure it had worked without some other feedback method.
Storage queues suffer from a similar issue. If you have two instances and want them both to receive the same message, there's no way to guarantee that this will happen.
I have 2 web applications running in a web role, and I only run a single instance in the Azure cloud. I would like to send and receive notifications between these 2 applications, and no outsider should have access to them.
That means web services in both of them are out, unless there is a way to block outsiders from accessing a web service so that only a request from the same system would succeed (maybe a VIP and request IP comparison would do; anything beyond that?).
File system watchers: create a LocalStorage resource, use it in both web apps, and have webappA and webappB watch for each other's files.
Use Azure Storage Queues.
MSMQ: not interested, as it's not supported in Azure.
Could you please list the other options available to me in an Azure web role? Thanks in advance.
Note: Please avoid suggesting Internal Endpoint as I am running only a single instance with 2 web applications running in it.
You can set up "private" web services to listen on Internal endpoints. These are not accessible via the outside world. You could have a WebAppOne endpoint and WebAppTwo endpoint, both marked Internal. You then just query the role environment to discover the assigned port for each, and fire up your ServiceHost.
Or... you could use a queue to pass information, as long as:
You're ok with it being asynchronous
You're ok with messages being looked at "at least" once
You're ok with messages possibly being looked at out of order
Or... your apps could write information to an Azure table. No need to expose the table to the outside world.
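For example, a minimal sketch of that table-based hand-off using the classic storage client might look like the following; the table name, entity shape, and connection string are assumptions for illustration.

```csharp
// A hedged sketch: passing notifications between the two web apps via an Azure
// table that is never exposed publicly. Table name, entity shape, and connection
// string are placeholders.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class NotificationEntity : TableEntity
{
    public NotificationEntity() { }
    public NotificationEntity(string fromApp, string message)
    {
        PartitionKey = fromApp;                 // which web app wrote the notification
        RowKey = Guid.NewGuid().ToString();     // unique per notification
        Message = message;
    }
    public string Message { get; set; }
}

public static class TableNotificationSketch
{
    const string ConnectionString = "UseDevelopmentStorage=true"; // placeholder

    // Called by the sending web app.
    public static void Notify(string fromApp, string message)
    {
        var table = GetTable();
        table.Execute(TableOperation.Insert(new NotificationEntity(fromApp, message)));
    }

    // Called by the other web app to read notifications from a given sender.
    public static void ReadFrom(string fromApp)
    {
        var query = new TableQuery<NotificationEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, fromApp));
        foreach (var entity in GetTable().ExecuteQuery(query))
        {
            // ... process entity.Message ...
        }
    }

    static CloudTable GetTable()
    {
        var table = CloudStorageAccount.Parse(ConnectionString)
            .CreateCloudTableClient().GetTableReference("notifications"); // placeholder name
        table.CreateIfNotExists();
        return table;
    }
}
```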