Here's what we're trying to do:
We're trying to run several app servers with Meteor and MongoDB servers on Azure VMs. They are bundled in one cloud service with endpoint sets that balance the load.
We set them up via the Management Console (Ubuntu VMs) and then deploy Meteor and the content via Meteor Up.
Now we want to add HTTPS. Initially we thought this would be handled by the load balancer by mapping the external port 443 to the internal port 80, since there are options to upload certificates.
It seems there is no way to configure it like that except for deployments made with Visual Studio (and those seem to require at least some web roles).
Here is where we struggle:
Using HTTPS seems to be tied to deploying an app developed in Visual Studio and/or running on Windows VMs.
That's the question:
Is it possible to use load-balanced HTTPS on Azure with Linux VMs?
PS: This is the article that made me think the load balancer might do SSL encryption by itself: https://msdn.microsoft.com/en-us/library/azure/ff795779.aspx
AFAIK, the Azure load balancer only spreads the traffic across the instances' internal endpoints; it does not encrypt or decrypt SSL traffic itself (see: https://azure.microsoft.com/en-us/documentation/articles/load-balancer-overview/).
Even for deployments made with Visual Studio, I believe the mechanism behind it is IIS on each web role doing the encryption; it is not the load balancer's job.
So you should terminate SSL on the web server of each VM, for example with nginx, as sketched below.
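A minimal nginx server block could terminate SSL and pass the traffic on to the app. This is only a sketch; the domain, certificate paths, and the local port the Meteor app listens on are assumptions you would replace with your own values:

    server {
        listen 443 ssl;
        server_name example.com;                            # assumption: your domain
        ssl_certificate     /etc/nginx/ssl/example.crt;     # assumption: your certificate
        ssl_certificate_key /etc/nginx/ssl/example.key;     # assumption: your private key

        location / {
            proxy_pass http://127.0.0.1:80;                 # assumption: Meteor bound to local port 80
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;         # keep WebSockets (DDP) working
            proxy_set_header Connection "upgrade";
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }

The load-balanced endpoint set would then map external port 443 to port 443 on each VM, and nginx on every VM handles the decryption.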
Related
I have a web application that currently runs on IIS across 3 Azure VMs. I have been working to make my application App Service friendly, but I would like to test the migration to App Service in a safe, controlled environment.
Would it be possible to spin up the App Service and use an Azure Load Balancer to redirect a percentage of traffic off the VMs and onto the App Service?
Is there any other technology that would help me get there?
You might be able to achieve this if you are using an App Service Environment and an internal load balancer
https://learn.microsoft.com/en-us/azure/app-service/environment/app-service-environment-with-internal-load-balancer
However, based on your description of your current setup, I don't believe there is an ideal solution for this, as a standard load balancer only allows backend ports to map to VMs. Using an Application Gateway might be another option as well:
https://learn.microsoft.com/en-us/azure/application-gateway/
I would suggest you make use of the deployment and production slots that come with a Web App. Once you have the web app running in the staging slot, test the site to ensure everything works as expected. Once it does, swap it into the production slot and reroute all traffic from the VMs to the App Service.
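If it helps, the slot swap can also be scripted with the Azure CLI. This is just a sketch; the resource group, app, and slot names are placeholders:

    az webapp deployment slot create --resource-group MyResourceGroup --name MyWebApp --slot staging
    az webapp deployment slot swap --resource-group MyResourceGroup --name MyWebApp --slot staging --target-slot production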
All in all, running an app on a Web App is quite simple. Microsoft takes away the need to manage the VM settings, so you can simply deploy and run. I don't see you having any issues with the migration; the likelihood of problems is small. You can also minimize the risk by performing the migration during off hours in case you need to make any changes.
There is also some Web App migration guidance you might find useful
https://learn.microsoft.com/en-us/dotnet/azure/dotnet-howto-choose-migration?view=azure-dotnet
We have a web API that we created using .NET Core 2.1. We have two servers which sit behind a NetScaler load balancer. On these servers we host our web APIs on port 5000. We also have a DNS hostname that is mapped to our load balancer. How can we do hostname binding for our Windows service? If we deploy using IIS we can specify this binding there, but how can we achieve this when the API is deployed as a Windows service?
The best way is to use the UseUrls extension method on IWebHostBuilder with the special binding syntax http://+:5000. With this binding the application will listen on http://localhost:5000 as well as http://{hostname}:5000.
So the code in your case would be:
WebHost.CreateDefaultBuilder(args)
    .UseUrls("http://+:5000"); // "+" accepts requests for any host name on port 5000
You have to ensure that the account running the Windows service has the appropriate permissions to bind to that URL!
EDIT:
As @Lex Li mentioned in the comments, the last part about permissions is not completely correct: you can reserve the URL during the installation process. When done this way, the account running the Windows service does not need elevated permissions.
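For example, the installer could reserve the URL with netsh (run once as an administrator). The account name below is an assumption and should be replaced with the identity the service actually runs under:

    netsh http add urlacl url=http://+:5000/ user="NT AUTHORITY\NETWORK SERVICE"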
Details
Create proxies using Azure for a bot that I have developed.
I'm creating a bot which uses proxies to buy merchandise from sites on the back end, and I want to generate some proxies using Azure to mask my local IP. There are various services that offer proxies (data center and residential) for a small fee.
However, I would like to set up the proxies myself using Azure. How can I create proxies and use them in my local application so that websites believe the request is coming from the proxy server?
I created a few VMs (two Linux and two Windows based). I tried using Squid to turn a VM into a proxy, but since I do not have much knowledge of Linux I am facing multiple issues. Also, I do not understand how to proceed on the Windows machines.
You can use Azure Functions Proxies to put a proxy in front of App Services and on-premises servers:
Announcement:
https://azure.microsoft.com/en-us/updates/announcing-azure-functions-proxies-in-public-preview/
How To, Sample:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-proxies
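A Functions proxy is defined in a proxies.json file in the function app root. The sketch below is illustrative only; the proxy name, route, and backend URL are placeholders:

    {
      "$schema": "http://json.schemastore.org/proxies",
      "proxies": {
        "exampleProxy": {
          "matchCondition": {
            "methods": [ "GET" ],
            "route": "/api/{*path}"
          },
          "backendUri": "https://example-backend.azurewebsites.net/api/{path}"
        }
      }
    }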
As developers we wrote microservices on Azure Service Fabric, and we can run them in Azure as a kind of PaaS offering for many customers. But some of our customers do not want to run in the cloud, as their databases are on-premises and will not be reachable from outside, not even through a DMZ. That's fine; we promised to support it, since Azure Service Fabric can be installed as a cluster on-premises.
We have an API gateway microservice running inside the cluster on every virtual machine; it uses the name resolver, and requests are routed and distributed accordingly. The API that the gateway provides is the entry point for another piece of client software which our customers use; that software runs outside the cluster and has to send requests to the API.
I suggested using a load balancer like HAProxy or nginx on a separate machine (or machines) that the client software sends its requests to; the reverse proxy would then forward them to an available machine inside the cluster.
It seems that is not what our customer wants; another machine as a load balancer is not an option. They suggest making the client software smarter, so it figures out which host to go to. In other words: we should write our own fail-over/load-balancing logic inside the client software.
What other options do we have?
Install the Network Load Balancing feature on each of the virtual machines to give the cluster a single IP address. Is this even possible? Something like https://www.poweradmin.com/blog/configuring-network-load-balancing-in-windows-server/
Suggest an API gateway outside the cluster, like Kong: https://getkong.org/
Something else?
PS: The client applications do not send many requests per second, maybe a few per minute.
We had a very similar problem: many services and a Service Fabric cluster that runs on-premises. When it was time to add a load balancer, we installed IIS on the same machines where the Service Fabric cluster runs. Since IIS is a good load balancer, we use IIS as a reverse proxy only for the API gateway. Kestrel hosting is used for the other services, which communicate over HTTP. The API gateway microservice is the single entry point for all clients and always has a static URI inside SF; we used that URI to configure IIS.
If you cannot use IIS, then look at using nginx as an HTTP load balancer; a sketch follows below.
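A minimal nginx load-balancing sketch for the API gateway could look like this. The node addresses and the gateway's static port are assumptions:

    upstream sf_api_gateway {
        server 10.0.0.4:8080;   # assumption: cluster node addresses and
        server 10.0.0.5:8080;   # the static port of the API gateway service
        server 10.0.0.6:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://sf_api_gateway;
            proxy_set_header Host $host;
        }
    }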
You don't need another machine just for HTTP forwarding. Just use/run it as a service on the cluster.
Did you consider using the built-in reverse proxy of Service Fabric? It runs on all nodes and forwards HTTP calls to services inside the cluster.
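With the reverse proxy enabled, clients address services through a URI scheme along these lines (19081 is the usual default port; the application and service names are placeholders):

    http://<node-address>:19081/MyApplication/MyApiGatewayService/api/orders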
You can also run nginx as a guest executable or inside a Container on the cluster.
We also faced the same situation when we started working with a Service Fabric cluster. We configured Application Gateway as the proxy, but it did not provide functionality such as HTTP to HTTPS redirection.
For that reason, we configured nginx instead of Azure Application Gateway as the proxy in front of the Service Fabric application.
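For the redirection part, a small nginx server block along these lines is enough; the domain is a placeholder:

    server {
        listen 80;
        server_name example.com;               # assumption: your domain
        return 301 https://$host$request_uri;  # send all HTTP traffic to HTTPS
    }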
Is it possible to use Web Deploy (WMSvc) across domains? That is, can I deploy from my dev box/build server in one domain onto a web server in another? I am able to do this inside the same domain, so I know that I have the Web Deployment service configured properly. However, from another domain I can't even get https://servername.domain.com:8172/msdeploy.axd to challenge for credentials.
The short answer is yes.
WMSvc listens on port 8172 and uses HTTPS. As long as you have a direct way to get from one network to the other over that port, it will work.
We run all of our web servers in a DMZ, which is an isolated network with separate DNS, Active Directory servers, etc. I can deploy directly from my build server (on the *.hq network) to the *.dmz.com server over port 8172.
However, I did have to communicate this requirement to the networking group so that they could allow port 8172 through our firewall. Also, I wasn't able to set up Web Deploy with automatic Windows auth, because the two networks had different domains and different sets of users.
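For reference, a cross-domain deployment from the command line looks roughly like this. The site name, content path, server name, and credentials are placeholders, and -allowUntrusted is only needed if the WMSvc certificate is self-signed:

    msdeploy.exe -verb:sync ^
        -source:contentPath="C:\build\MySite" ^
        -dest:contentPath="Default Web Site/MySite",computerName="https://servername.dmz.com:8172/msdeploy.axd?site=Default Web Site",userName="deployuser",password="secret",authType="Basic" ^
        -allowUntrusted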