We have a virtual machine on Azure on which a web service is running. For the last few days there have been constantly over 1,000 requests from a single IP to the VM, due to which the VM responds very slowly or sometimes stops. Is there any feature in the Azure portal to limit an IP address's access to the VM after a certain threshold, or any other option?
Azure API Management is a managed service that allows you to throttle particular clients and also provides a lot of security and other features.
https://learn.microsoft.com/en-us/azure/api-management/api-management-key-concepts
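On the Azure side, API Management (or an endpoint ACL) is the cleaner fix. But if you control the service code, an application-level throttle can also cap a single IP. A minimal sketch, assuming a hypothetical limit of 1,000 requests per 60-second sliding window per client IP (tune both numbers for your service):

```python
import time
from collections import defaultdict, deque

# Hypothetical limits; tune for your service.
MAX_REQUESTS = 1000   # allowed requests per window, per client IP
WINDOW_SECONDS = 60   # sliding-window length in seconds

_history = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(client_ip, now=None):
    """Return True if this request is within the per-IP limit."""
    now = time.time() if now is None else now
    q = _history[client_ip]
    # Drop timestamps that have fallen out of the window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_REQUESTS:
        return False  # over the limit: reject (e.g. respond 429)
    q.append(now)
    return True
```

Call `allow_request()` at the top of your request handler and return HTTP 429 when it comes back `False`. Note this only protects the application layer; the abusive traffic still reaches the VM, so an Azure-side block is still worth doing.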
Related
Is there any limit to send/receive an API response service from windows Azure Virtual Machine?
As far as I know, there isn't any limit on sending/receiving an API response from an Azure VM. A VM created in Azure works as a server, the same as an on-premises server.
For the Azure VM limits, refer to the link.
An Azure VM's performance varies according to its VM size.
No, I think there isn't any limit as such. VMs are billed on capacity and number of hours run.
No limit, but you do need to pay for bandwidth after a certain point. Check here: https://azure.microsoft.com/en-us/pricing/details/bandwidth/.
The first 5 Gigabytes per month of outbound traffic are free.
After that you have to start paying something. Your VM will usually cost way more than the bandwidth (license, VM size, storage).
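To get a feel for the math, a minimal sketch of how the billing works; the per-GB rate below is a hypothetical placeholder, so check the pricing page for current numbers:

```python
FREE_GB = 5.0        # first 5 GB of outbound traffic per month are free
RATE_PER_GB = 0.08   # hypothetical USD rate; see the Azure pricing page

def monthly_egress_cost(outbound_gb):
    """Outbound bandwidth cost: nothing up to the free tier, flat rate after."""
    billable = max(0.0, outbound_gb - FREE_GB)
    return billable * RATE_PER_GB
```

So 105 GB of outbound traffic in a month would bill 100 GB at whatever the current rate is; inbound traffic isn't charged this way.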
We have a third-party product that runs as a Windows service and is exposed as a web service. The goal is to dynamically provision service instances during business peak hours.
Just to run the thought by you:
- I've already deployed the service on multiple VMs, configured the VMs in the same cloud service availability set, and configured Azure to turn VM instances on/off based on CPU use
- I plan to configure a separate VM, run IIS ARR there, and point it to the endpoints on the VMs configured above, with the hope that ARR balances requests to the back-end VMs dynamically
Will this work? What's the best practice for the IaaS scale? Any thoughts? Truly appreciate the input.
If I have understood correctly, you just need to use the built-in load balancer of the cloud service. Create a load-balanced set for your endpoint. For example, if you want to balance the incoming traffic to port 80 in your application, all you have to do is create an LB-set for this port and configure this set on all the VMs in the Cloud Service.
The Azure Load Balancer randomly distributes a specific type of
incoming traffic across multiple virtual machines or services in a
configuration known as a load-balanced set. For example, you can
spread the load of web request traffic across multiple web servers or
web roles.
Configure a load-balanced set
Azure load balancing for virtual machines
No matter whether VMs are up or down: once a VM turns on, and if its endpoint is configured in the same LB-set, it will automatically start responding to requests once port 80 is online (IIS started and returning STATUS 200 OK, for example). So, answering your question: yes, it will work with auto-scale or with manually turning VMs on/off.
I have 2 Azure VMs (Linux) being load balanced by a public Azure Cloud Service. Both instances show in the Azure Management portal for the same cloud service. I want to take down one instance and perform some maintenance. However, since the instance still shows even though the VM has been shut down, the Cloud Service is still directing traffic to it. How do I delete an instance from the Cloud Service, or stop the Cloud Service from directing traffic to a particular VM instance? And afterwards, how does one re-associate an existing VM with that service? (i.e. change from one Cloud Service to another).
Note: SSH into the VM works, but other ports used by the VM are not working, acting as if they are trying to go to the other VM, even though the correct endpoints are created on the active VM.
The purpose of a port probe in a load-balanced set is for the load balancer to be able to detect whether or not a VM is able to accept traffic. When configuring the load-balanced endpoint you can specify a webpage or a TCP endpoint for the probe - and this should be present on each instance. Traffic will be directed to the VM as long as the webpage returns 200 OK or the TCP endpoint accepts the connection when the load balancer probes. You can specify the time interval between probes and the number of probes that must fail before the endpoint is deemed dead and should be taken out of rotation (defaults are every 15 seconds and 2 probes).
You can take a VM out of load-balancer rotation by ensuring that the configured probe page returns something other than 200 OK, and then bring it back into rotation by having it once again return a 200 OK.
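That pattern can be sketched as a tiny probe endpoint with a maintenance switch. The `/health` path and port here are assumptions; point the load-balanced endpoint's probe at wherever you actually serve it:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

PROBE_PATH = "/health"                 # hypothetical probe path for the LB-set
maintenance_mode = threading.Event()   # set() to drain this VM from rotation

class ProbeHandler(BaseHTTPRequestHandler):
    """Answers the load balancer's HTTP probe."""
    def do_GET(self):
        if self.path != PROBE_PATH:
            self.send_response(404)
            self.end_headers()
            return
        if maintenance_mode.is_set():
            # Non-200: the LB drops this VM after the configured failure count.
            self.send_response(503)
        else:
            # 200 OK keeps this instance in rotation.
            self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep per-probe log lines out of the console

def make_probe_server(port=8080):
    return HTTPServer(("", port), ProbeHandler)
```

Run it with `make_probe_server().serve_forever()`, and call `maintenance_mode.set()` before maintenance. With the default probe settings quoted above (every 15 seconds, 2 failures), expect up to roughly 30 seconds before the load balancer actually stops sending traffic.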
When I have needed to keep my web service running and returning a status of 200, I have had to resort to removing the endpoint from the load-balanced set. It is pretty simple to do, but it usually takes a minute for the web portal to remove the endpoint, and then again once you recreate the endpoint to put it back in the set.
I want to move my web role to a smaller VM size for cost saving purposes.
I changed the vmsize attribute in WebRole in the ServiceDefinition.csdef accordingly. On publishing I received the following error:
Total requested resources are too large for the specified VM size
So I then reduced the size of the local storage resources in the ServiceDefinition.csdef. Then I got the error while publishing:
The size of local resources cannot be reduced. Affected local resource
is DataFiles in role Website.
From what I have read online, I will need to delete the deployment and republish it. But this will assign a new IP to my cloud service. I can't have this happen.
Is there another solution to my problem?
To add on to what sharptooth said....
In your specific case you should deploy to the staging slot and then perform a VIP swap. This will leave you with your original IP address, and will put your new hosted service (with the smaller VM size) in the production slot. You can then delete your staging slot (your old service with the larger VM size).
If you can't do a VIP swap then you can deploy your updated application to a new hosted service which will result in a new IP address. You can then update whatever is dependent on the IP address (firewalls, whitelists, etc) to the new hosted service's IP address, then once everything is working correctly you can update your cname/arecord to the new hosted service and then delete the old hosted service.
However, while you can't do it for your scenario, an in-place upgrade is a better upgrade option than VIP swap whenever possible. With the VIP swap you have the potential to momentarily lose connectivity to external resources that rely on your public IP address. The issue is that outbound traffic can fail if connecting to a resource which does IP address whitelisting, which for most services effectively means that they are down.
Normally, outbound traffic (ie. a call to SQL Azure) is SNATed from the DIP to the VIP. If the resource being called (ie. SQL Azure) does IP whitelisting then this is no problem because the traffic will be coming from the VIP which is a known good IP address. During the VIP swap there is a short period of time, typically just a few seconds but in some cases can be a couple minutes or more, where the SNAT is in flux and does not happen. This means that traffic from an Azure VM appears to be coming from the DIP which will cause the connection to be blocked because the DIP IP address is not in the whitelist.
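Since that SNAT gap is transient (typically seconds, occasionally a couple of minutes), outbound callers can often ride it out by retrying with backoff rather than failing hard. A sketch, with hypothetical attempt counts and delays; wrap whatever outbound call is affected (the SQL connection, an HTTP request, etc.):

```python
import time

def call_with_retry(op, attempts=5, base_delay=2.0):
    """Retry a connection-refused-prone outbound call with exponential backoff.

    During a VIP swap the SNAT flux means outbound connections can be
    rejected by an IP-whitelisting service for a short window; retrying
    for a couple of minutes usually rides it out.
    """
    for attempt in range(attempts):
        try:
            return op()
        except (ConnectionError, OSError):
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```

This doesn't remove the outage window, it just keeps requests queued instead of erroring; an in-place upgrade, where possible, avoids the window entirely.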
I have a Cloud Service that is connected to a LAN through a virtual network. I have a web role that machines on the LAN will be hitting for tasks like telling the cloud service that data needs to be refreshed. Is it possible to have an endpoint that's load-balanced, but that only accepts traffic through the virtual network?
Well... you have a few things to think about.
You could set up your own load balancer in a separate role, which then does the load balancing. You'd probably want two instances to deal with high availability, and if there was any stateful/sticky-session data you'd need to sync it between your two load balancers. OR...
Now: If your code needing load-balancing lived in a Virtual Machine, rather than in a web/worker role, you could take advantage of the brand-new IP-level endpoint ACL feature introduced at TechEd. With this feature, you can have an endpoint that allows/blocks traffic based on source IP address. So you could have a load-balanced endpoint balancing traffic between a few virtual machines, and you could then limit access to, say, your LAN machines, and even add your existing Cloud Service (web/worker) VIP so that your web and worker role instances could access the service, all through the endpoint without going through the VPN. This way, you'd get to take advantage of Azure's built-in load balancer, while at the same time providing secure access for your app's services.
You can see more details of endpoint ACLs here.
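The ACL itself is enforced by Azure before traffic ever reaches your VM, but the allow/deny logic it applies is easy to picture. A sketch using hypothetical ranges (a LAN range reached via the VPN, plus the cloud service VIP as a /32):

```python
import ipaddress

# Hypothetical permit list: the on-premises LAN plus the cloud service VIP.
ALLOWED = [
    ipaddress.ip_network("10.0.0.0/16"),     # LAN machines via the VPN
    ipaddress.ip_network("203.0.113.7/32"),  # web/worker role VIP (example)
]

def is_allowed(source_ip):
    """Mimic an endpoint ACL: permit listed ranges, deny everything else."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED)
```

Deny-by-default with a short permit list is exactly the shape you'd configure on the endpoint; the real ACL just does this check at the load balancer instead of on your VM.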
No. The load balancer for a cloud service is public only. You can't predict the IP addresses of the individual instances on the virtual network, so you can't even hook them into your own load balancer. Yes, you can do it with VMs (as David recommends) — but then you're doing old-school IIS, not a cloud service. I went through this in November 2012 and was unable to find a decent solution.