When deploying a multi-instance WebRole to the Windows Azure Emulator, the emulator runs multiple IIS Express instances of the WebRole, each on a reserved local IP, such as:
127.255.0.1
127.255.0.2
127.255.0.3
The problem is that I want to access the WebRole as if it were really deployed on Azure; I need to check that session state is persisted between instances.
Since my session ID is stored in a cookie, each time I connect to a different instance I need to manually 'inject' the cookie into the request to check the session data (the browser treats the IP of each instance as a different domain).
Is there a way (on a Windows 7 machine) to use a hostname that randomly resolves to one of those IPs?
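For reference, the manual check looks roughly like this. A minimal sketch, assuming Node 18+ for the built-in fetch; the cookie name, port, and /session path are placeholders for whatever the app actually uses:

    // Replay the same session cookie against each emulator instance and
    // compare the session data each one returns.
    const instances = ["127.255.0.1", "127.255.0.2", "127.255.0.3"];

    // Placeholder cookie; copy the real value from the browser's dev tools.
    const sessionCookie = "ASP.NET_SessionId=<your-session-id>";

    async function checkSessionPersistence(): Promise<void> {
      for (const ip of instances) {
        // Inject the cookie manually, since the browser won't send it
        // across what it considers different origins.
        const res = await fetch(`http://${ip}:80/session`, {
          headers: { Cookie: sessionCookie },
        });
        console.log(ip, res.status, await res.text());
      }
    }

    checkSessionPersistence().catch(console.error);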
Well, apparently the emulator does load-balance all requests between instances:
Clicking 'Debug' on the cloud project opens a web page at the IP of the virtual load balancer (usually 127.0.0.1:80, if not already taken).
Yet, there were two things that misled me in the first place:
1. The list of multiple IIS Express instances, each with its own binding (image attached).
2. Implicit affinity: I made my web application output the instance ID and kept getting the same instance ID every time. The reason for that, I guess, is the affinity the emulator enforces (probably using cookie comparison).
Conclusion:
If you want to load-balance manually or control the affinity yourself, you can leverage IIS Server Farming capabilities (as I did eventually) to emulate load balancing; see the sketch below.
(Apache/Nginx as some kind of 'reverse proxy' is also a good option, but I preferred to stick with products that are already installed and in use.)
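To give a feel for what that front end does, here is a minimal round-robin reverse-proxy sketch in Node. This only illustrates the idea (it is not ARR or Server Farming itself), and the backend port is an assumption; adjust it to whatever the emulator assigned:

    import http from "http";

    // Emulator instance IPs from the question; port 80 is a placeholder.
    const backends = ["127.255.0.1", "127.255.0.2", "127.255.0.3"];
    let next = 0;

    http.createServer((clientReq, clientRes) => {
      // Pick the next backend in round-robin order.
      const target = backends[next++ % backends.length];

      // Forward the original request (method, path, headers) to the
      // chosen backend and pipe the response straight back.
      const proxyReq = http.request(
        {
          host: target,
          port: 80,
          path: clientReq.url,
          method: clientReq.method,
          headers: clientReq.headers,
        },
        (proxyRes) => {
          clientRes.writeHead(proxyRes.statusCode ?? 502, proxyRes.headers);
          proxyRes.pipe(clientRes);
        }
      );
      proxyReq.on("error", () => {
        clientRes.writeHead(502);
        clientRes.end("backend unavailable");
      });
      clientReq.pipe(proxyReq);
    }).listen(8080);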
I have some App Services, and back-end processes that target the machine name (hostname) where the App Service is hosted. I realized that when I go to the console in the App Service, the hostname keeps changing for some reason.
I would like to know if there is a way to set a static hostname.
Thanks in advance
The machine name changes because you have multiple instances (machines) of your web app. If you have only one instance, the machine name will not change.
If you want to keep multiple instances, you can disable the ARR affinity feature of the web app.
Reference:
Disable Session affinity cookie (ARR cookie) for Azure web apps
Are you running on multiple instances? (Check the Scale-out blade of the web app.) When you try to access the Kudu site, it can go to the Kudu of any of these instances, which is why you'd see the hostname changing. You can go to a specific instance using the ARR affinity cookie, but you wouldn't have control over the hostname itself.
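To make the cookie mechanics concrete, here is a rough sketch of pinning requests to one instance. The site URL is a placeholder, and the code simply replays whatever ARRAffinity value the first response set; it assumes Node 18+ for the built-in fetch:

    // First request: no cookie, so the load balancer picks any instance
    // and returns an ARRAffinity cookie identifying it.
    const site = "https://<your-app>.azurewebsites.net/"; // placeholder

    async function demonstrateAffinity(): Promise<void> {
      const first = await fetch(site);
      const setCookie = first.headers.get("set-cookie") ?? "";
      const affinity = setCookie.match(/ARRAffinity=[^;]+/)?.[0];
      console.log("pinned to:", affinity);

      // Requests that replay the cookie stick to that instance;
      // requests without it may land on any instance.
      const pinned = await fetch(site, { headers: { Cookie: affinity ?? "" } });
      console.log("pinned request status:", pinned.status);
    }

    demonstrateAffinity().catch(console.error);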
I just created my first Azure SQL Server and Database and now am trying to configure the firewall so that my web application can connect and make changes to the single database on the server.
I see that you can allow all clients to connect by adding a rule that spans the full range 0.0.0.0 to 255.255.255.255.
However, is this bad practice? I see my client IP address listed in the Azure portal. Can I get clarification that I should allow just that single IP address access for now, and then later, when I publish my web application to Azure, restrict access to the IP address where that web app lives (assuming that people can only make database changes through my front-end application)?
Thanks
Yes, this is bad practice. There are other layers they'd have to get through (e.g., the server login), but this opens the front door and allows anyone to pick away at it at their leisure.
If you have a web server hosting your web application (which you must), whitelist that server's IP address (and perhaps your own, for development/admin purposes). Allowing all IPs is not considered good practice, no.
One particular case where you may really want to allow this is if you are distributing a desktop application to unknown clients that must connect to the backend. It becomes extremely enticing at that point, but even so, the recommended practice (or at least, my recommended practice) would be to use a web service that accepts an application registration on startup of the app and registers the client IP temporarily, and then have a background process (think WebJobs) flush the firewall rules, say, every night or so. (For a more elaborate setup, accept the registration with a timeout and use a continuous WebJob to poll for the timeout and refresh/revoke as necessary.)
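A rough sketch of that register/flush pair, assuming the @azure/arm-sql and @azure/identity packages; all resource names are placeholders, and the "client-" rule-naming convention is just something made up for this example:

    import { DefaultAzureCredential } from "@azure/identity";
    import { SqlManagementClient } from "@azure/arm-sql";

    // Placeholder names; substitute your own subscription and resources.
    const subscriptionId = "<subscription-id>";
    const resourceGroup = "<resource-group>";
    const serverName = "<sql-server-name>";

    const client = new SqlManagementClient(new DefaultAzureCredential(), subscriptionId);

    // Called by the registration web service when a desktop client starts:
    // temporarily whitelist that client's IP as a server firewall rule.
    async function registerClient(clientIp: string): Promise<void> {
      await client.firewallRules.createOrUpdate(resourceGroup, serverName, `client-${clientIp}`, {
        startIpAddress: clientIp,
        endIpAddress: clientIp,
      });
    }

    // Called by the nightly WebJob: flush all temporary client rules.
    async function flushClientRules(): Promise<void> {
      for await (const rule of client.firewallRules.listByServer(resourceGroup, serverName)) {
        if (rule.name?.startsWith("client-")) {
          await client.firewallRules.delete(resourceGroup, serverName, rule.name);
        }
      }
    }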
We deployed a Node.js Azure Web App and defined a minimum of 2 instances (for scalability and high availability).
It seems like the load balancer is distributing the load between the instances, but it doesn't react to an instance error (crash); it insists on balancing the load between all the instances, including the one that crashed.
Is there a way to set up a failover mechanism for high availability?
The load balancer used by Azure App Service will continue to send requests to individual web servers as long as the underlying virtual machines are up and running.
To work around the issue you are running into, you can try configuring the "auto-heal" feature. If the scenario is that the app gets "stuck" in a permanently broken state, auto-heal rules can be configured to automatically restart the app.
More details on auto-heal here:
Auto-heal for Azure Web Sites
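As an illustration, an auto-heal rule in the site configuration looks roughly like this. The shape mirrors the ARM siteConfig properties described in the auto-heal documentation; the thresholds are placeholders:

    // Hypothetical autoHeal fragment of an App Service site configuration.
    const autoHealConfig = {
      autoHealEnabled: true,
      autoHealRules: {
        triggers: {
          // Recycle the worker if the app returns fifty 500s within five minutes.
          statusCodes: [{ status: 500, count: 50, timeInterval: "00:05:00" }],
        },
        actions: {
          actionType: "Recycle",
          // Don't recycle a worker that only just started.
          minProcessExecutionTime: "00:01:00",
        },
      },
    };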
If I deploy a web app (formerly known as an Azure WebSite) to an App Hosting Plan in Azure with a couple of instances (scale = 2), will the load balancer in front of the instances care if any of the instances is unhealthy?
I'm troubleshooting an issue that sometimes causes my site to return an HTTP 503 ~50% of the time. My thinking here is that one of my two instances has failed but the load balancer hasn't noticed.
If the load balancer does care, what does it look for? I can't find any way to specify a ping URL, for instance.
Note: this question has nothing to do with Traffic Manager.
Yes, Azure Web Apps monitors the health of the workers by making internal requests to them and verifying that they're healthy.
However, we don't check status codes that the web app returns to user requests (like 500, etc.), since that could easily be an app-specific issue rather than a problem with the machine.
So the answer you're looking for is: we continuously test whether or not the instances (VMs) are healthy and take them down if they're not. However, those tests do not rely on error codes the customer's site returns.
I'm designing a custom client server tcp/ip app. The networking requirements for the app are:
Be able to speak a custom application-layer protocol through a secure TCP/IP channel (opened on a designated port).
The client-server connection/channel needs to remain persistent.
If multiple instances of the server-side app are running, be able to dispatch the client connection to a specific instance of the server-side app (based on a server-side unique ID).
One of the design goals is to make the app scale, so load balancing is particularly important. I've been researching the load-balancing capabilities of EC2 and Windows Azure. I believe requirement 1 is supported by most offerings today. However, I'm not so sure about requirements 2 and 3. In particular:
Do any of these services (EC2, Azure) allow the app to influence the load-balancing policy by specifying additional application-layer requirements? Azure, for example, uses round-robin job allocation for cloud services, but requirement 3 above clearly needs to be factored into the load-balancing decision, i.e. forward based on the unique ID, but fall back to round-robin allocation if the unique ID is not found at any of the server-side instances.
Does the load balancer work with persistent connections, per requirement 2? My understanding from Azure is that you can specify a public and private port pair as an endpoint, so the load balancer monitors the public port and forwards the connection request to the private port of some running instance; basically, you can do whatever you want with that connection thereafter. Is this the correct understanding?
Any help would be appreciated.
Windows Azure has input endpoints on a hosted service, which are public-facing ports. If you have one or more instances of a VM (Web or Worker role), the traffic will be distributed amongst the instances; you cannot choose which instance to route to (i.e. you must support a stateless app model).
If you wanted to enforce a sticky-session model, you'd need to run your own front-end load balancer (in a Web/Worker role). For instance, you could use IIS + ARR (Application Request Routing), or maybe nginx or another server supporting this.
What I said above also applies to Windows Azure IaaS (Virtual Machines). In this case, you create load-balanced endpoints. But you also have the option of non-load-balanced endpoints: maybe 3 servers, each with a unique port number. This bypasses any type of load balancing but gives direct access to each virtual machine. You could also just run a single virtual machine running a load balancer (again, nginx, IIS + ARR, etc.) which then routes traffic to one of several app-server virtual machines (accessed via direct communication between the load-balancer VM and the app-server VMs).
Note: the public-to-private port mapping does not let you do any load balancing. This is more of a convenience to you: sometimes you'll run software that absolutely has to listen on a specific port, regardless of the port you want your clients to visit.
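To sketch what such a custom front end could look like for requirement 3, here is a minimal Node TCP router. It assumes the client sends its target server ID as a newline-terminated first line (and that this line arrives in the first chunk); the backend addresses are placeholders, and TLS termination is omitted for brevity:

    import net from "net";

    // Hypothetical backend registry: server ID -> address. Unknown IDs
    // fall back to round-robin over the pool, per requirement 3.
    const backends: Record<string, { host: string; port: number }> = {
      "server-a": { host: "10.0.0.4", port: 9000 },
      "server-b": { host: "10.0.0.5", port: 9000 },
    };
    const pool = Object.values(backends);
    let next = 0;

    net.createServer((client) => {
      client.once("data", (chunk) => {
        const newline = chunk.indexOf("\n");
        if (newline === -1) {
          // Sketch-level simplification: require the ID in the first chunk.
          client.destroy();
          return;
        }
        const id = chunk.subarray(0, newline).toString().trim();

        // Route by ID when known, otherwise round-robin.
        const target = backends[id] ?? pool[next++ % pool.length];
        const upstream = net.connect(target.port, target.host, () => {
          // Forward any bytes that arrived after the ID line, then pipe
          // both directions for the lifetime of the persistent connection.
          upstream.write(chunk.subarray(newline + 1));
          client.pipe(upstream).pipe(client);
        });
        upstream.on("error", () => client.destroy());
      });
    }).listen(8443);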