I need to configure an Orleans cluster so that an Azure App Service can connect to it. The problem is that networking is my weakest point ;).
I have configured an Orleans silo using an Azure Worker Role (4 instances), listening on the default ports:
.ConfigureEndpoints(siloPort: 11111, gatewayPort: 30000)
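For context, that call sits in my silo host builder, roughly like this (the cluster IDs and clustering setup here are illustrative placeholders, not my exact code):

    using Orleans.Configuration;
    using Orleans.Hosting;

    var builder = new SiloHostBuilder()
        // Cluster identity; these values are placeholders.
        .Configure<ClusterOptions>(options =>
        {
            options.ClusterId = "dev";
            options.ServiceId = "OrleansService";
        })
        // Silo-to-silo traffic on 11111, client/gateway traffic on 30000.
        .ConfigureEndpoints(siloPort: 11111, gatewayPort: 30000);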
I've assigned the Worker Role to an Azure VNET (Classic) with these settings:
Address Range 10.0.0.0/24
Subnet-1 10.0.0.0/27 (the Worker Role is assigned here, as part of a network security group)
Point to Site range 10.0.1.0/24
GatewaySubnet 10.0.0.32/29 (added to the same network security group)
I can see that the 4 instances get proper IPs in Subnet-1: 10.0.0.4 to 10.0.0.7.
The App Service is connected to this VNET (via Point to Site; "Certificates in sync") and reports:
IP ADDRESSES ROUTED TO VNET
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
I can see that the App Service tries to connect to 10.0.0.7:30000.
I verified, both via application diagnostics and with tcpping, that 10.0.0.7:30000 is not reachable from the application (Could not connect to 10.0.0.7:30000: AccessDenied).
I am definitely missing something elementary here; I haven't configured IPs in a decade!
(This is similar to Vnet between Virtual Machine and App Service in Azure, but in this case I do want to configure the VNet, and I have a specific practical issue.)
For the networking, I suggest verifying the following things:
Confirm that you have integrated your app into a Classic VNET and enabled Point-to-Site in the Classic VNet, as described in this DOC.
Confirm that the desired ports on the Orleans cluster are listening. You can go through this website to troubleshoot on the Orleans cluster side.
Check that the firewall (VM or host level) and NSG rules allow the desired ports. Get more details from this.
For more reference, see Create a VNET and access an Azure VM hosted within it from an App Services Web App.
After going through all the documents Nancy provided in detail, I ended up connecting to one of the cloud service VMs (a silo instance); I first had to allow that through the NSG. I checked with netstat -aon that the service was listening on the expected ports, and I could ping the other instances of the service.
Then I downloaded tcping and tried to connect from that instance to the expected ports on the others. It was blocked. Since I was testing from within the silo cluster itself, I could now pinpoint the problem to "Firewall (VM or host level)", one of the possible issues Nancy mentioned.
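For reference, the checks from inside one of the VMs looked roughly like this (10.0.0.5 standing in for one of the other instances):

    :: Confirm the silo port is listening locally.
    netstat -aon | findstr :11111
    :: Test silo-to-silo reachability from this instance.
    tcping 10.0.0.5 11111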
The solution was to declare the endpoints in the cloud service definition (csdef); without that, the VM firewall was blocking access to these ports. I naively thought it was enough to configure them at the SiloBuilder level, but SiloBuilder works at the application layer; it doesn't update the VM it's running on.
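For anyone hitting the same thing, the endpoint declarations in ServiceDefinition.csdef look roughly like this (service, role, and endpoint names are illustrative, not my exact definition):

    <ServiceDefinition name="OrleansService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WorkerRole name="OrleansSiloRole" vmsize="Small">
        <Endpoints>
          <!-- Silo-to-silo communication port -->
          <InternalEndpoint name="OrleansSiloEndpoint" protocol="tcp" port="11111" />
          <!-- Gateway port the App Service client connects to -->
          <InternalEndpoint name="OrleansProxyEndpoint" protocol="tcp" port="30000" />
        </Endpoints>
      </WorkerRole>
    </ServiceDefinition>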
The result is that netstat -aon now showed the service connections to 11111 as "established", not just "listening", and the VM's firewall showed the new rules. The worker role instances could connect to each other.
Still, the App Service (web app) couldn't connect to the host:port of any of the worker roles. I tried removing the NSG, but that caused the instances to lose sight of each other again, so I reassigned the NSG to Subnet-1 and the GatewaySubnet.
The final thing I tried was to disconnect the App Service from the VNET and re-connect it. I ran into other (unrelated) errors at that step; I will update the post when I sort them out.
Related
Currently we have the following scenario:
We have established our network connection from on-premises to the Azure Kubernetes cluster (a private cluster!) without any problems.
Ports that are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and test different configurations.
To set up our AKS, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0/21
Service CIDR: 10.2.8.0/23
So our pods get their IPs from the virtual network subnet, and the services get theirs from the Service CIDR. So far so good. A route table associated with the virtual network's subnet forwards all traffic to our firewall and vice versa; interacting with the virtual network works without any issue. The network team (which is new to Azure cloud stuff as well) says that connecting to and accessing the Service CIDR should work.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are successfully created. The pod can be pinged and "accessed", but the service, which makes the dashboard reachable on port 443, cannot be accessed, e.g. at https://10.2.8.42/.
My workaround so far is to give the Kubernetes dashboard Service (type: ClusterIP) an external IP from the virtual network. This sounds all great, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
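For illustration, the workaround looks roughly like this (the external IP is just an example address from our virtual network range; ports and labels follow the dashboard manifest and may differ per version):

    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: ClusterIP
      # Workaround: expose the service on a vnet address instead of
      # relying on the (cluster-internal) Service CIDR.
      externalIPs:
        - 10.2.1.100
      ports:
        - port: 443
          targetPort: 8443
      selector:
        k8s-app: kubernetes-dashboard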
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.
I have a container (Linux, .NET Core) running in Azure. The application reads from Azure Service Bus and writes information to a database on-premises.
The connection to ASB works fine, but when the application tries to connect to SQL Server, I get a timeout. Initially I was running the container with no network setup (the 'None' option). Then I switched to Public, and it now gives me an IP address.
My infrastructure team added this IP to our firewall, but either Azure is connecting from a different IP address or the connection never leaves the Docker environment.
P.S.: I have an App Service running (a .NET Core API) and it connects to the same SQL Server (same IP address) correctly.
Suggestions?
Since the outbound IP address of an Azure container group is picked at random from the Azure cloud IP list, you cannot simply add its IP to the firewall. You can vote up this feature request, which asks for the container group's exposed public IP to also be used for outbound traffic.
Currently, you can deploy container instances into an Azure virtual network; the container can then communicate with on-premises resources through a VPN gateway or ExpressRoute. For more details, see enable containers to use Azure Virtual Network capabilities.
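A rough sketch of that deployment with the Azure CLI (all resource names are placeholders):

    az container create \
      --resource-group myResourceGroup \
      --name myContainerGroup \
      --image myregistry.azurecr.io/myapp:latest \
      --vnet myVNet \
      --subnet myContainerSubnet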
I need to close port 8010 on one of my App Services on Azure. Is it possible to configure ports on an App Service?
In the App Service shared tenant environment, it is not possible to block specific ports because of the nature of the infrastructure.
But in an App Service Environment (ASE), you have full control over inbound and outbound traffic. You can use Network Security Groups to restrict or block specific ports.
An ASE always exists in a virtual network, and more precisely, within a subnet of a virtual network. You can use the security features of virtual networks to control inbound and outbound network communications for your apps.
So you need to create an App Service and a virtual network, then deploy the App Service into the virtual network (the VNet's NSG rules then apply).
Follow these steps:
1. Create a virtual network and make sure it is in the same location as your App Service.
2. Create a network security group (also in the same location; on my side everything is in 'Central US').
3. Add security rules to your NSG, inbound or outbound.
4. Create a subnet in the VNet and associate it with the network security group you created above.
5. Deploy your App Service into the subnet of the virtual network you created above.
In the end, the App Service is deployed into a subnet of the virtual network, and that subnet uses the network security group that blocks the specific ports, so your App Service has those ports blocked as well.
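As a concrete example, step 3 could be done with the Azure CLI roughly like this (resource names are placeholders; adjust the direction to your needs):

    az network nsg rule create \
      --resource-group myResourceGroup \
      --nsg-name myNsg \
      --name DenyPort8010 \
      --priority 100 \
      --direction Inbound \
      --access Deny \
      --protocol Tcp \
      --destination-port-ranges 8010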
Please let me know if you have more problems.
I'm trying to build a simple two-tier WordPress environment on CentOS 7.2 in Azure.
I've defined a virtual network, have connected it to my home-lab via IPsec VPN, and I've defined several subnets in Azure (for Web tier, SQL tier, and utility tier role segregation using Network Security Groups).
I have two web-tier VMs, both members of the same Availability Set, and both on the web-tier subnet. They have outbound internet access, I can SSH to them from my home-lab, and they seem fine operationally to me: httpd is listening on 80/tcp, and I can hit the web pages from my home-lab network by visiting each web server directly on its 192.168.x address.
I should mention my web servers DO NOT have public IPs assigned, but I can't see this being an issue... they're intended to be behind the load balancer.
So, I've created a Load Balancer, and:
assigned a public IP to the LB
added a backend pool (selected my availability set, and chose my two web servers)
added a probe (http probing the two web servers)
added a load balancer rule
Notice I did NOT add an inbound NAT rule. I can't figure out what that's for, or if I need it.
On my web tier, I run tcpdump on port 80 and see the probes. In the httpd logs, I see 200 success messages for the probes. I go to a web browser, hit the external VIP I assigned to the LB, and nothing: it just times out. I cannot connect to the LB VIP.
What am I missing? What are the NAT rules about?
Any help would be appreciated. All I can find online are examples doing this in powershell etc.. and I'm using the Azure web interface.
Thanks!
Found the issue: I needed the NSG to allow not just AzureLoadBalancer but also "Internet" to hit port 80/tcp. Should have thought of that sooner.
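I did this in the web interface, but for reference the equivalent rule in CLI form is roughly (names are placeholders):

    az network nsg rule create \
      --resource-group myResourceGroup \
      --nsg-name web-tier-nsg \
      --name AllowInternetHttp \
      --priority 110 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --source-address-prefixes Internet \
      --destination-port-ranges 80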
I have a .NET Web API deployed as a Web App and am trying to connect it to a MySQL database on a VM in a virtual network, but it's responding with a 500 internal server error.
My VNET just consists of one VM with no DNS or site-to-site configuration.
The preview portal says VNET Integration is connected, my certificates are in sync and the gateway is online.
I gave my VM a static IP address, which I'm using in my web.config connection string, thinking requests would be routed through the gateway to the VM, but according to my MySQL general log there aren't any connection attempts to the MySQL server.
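The connection string looks roughly like this (the IP, database name, and credentials here are placeholders):

    <connectionStrings>
      <!-- 10.0.1.5 stands in for the VM's static IP -->
      <add name="DefaultConnection"
           connectionString="Server=10.0.1.5;Port=3306;Database=mydb;Uid=webuser;Pwd=secret;"
           providerName="MySql.Data.MySqlClient" />
    </connectionStrings>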
The address I gave my VM is within the range of addresses being routed to the VNET, and I set up an endpoint on the VM for the port I'm trying to reach MySQL on, with an access rule that allows all connections, so I'm not sure why the connection doesn't appear to be getting through the gateway to my VM.
You may check this link, which provides instructions on how to connect an Azure App Service Web App with an Azure Virtual Network, so that it can use resources visible within the network itself:
https://azure.microsoft.com/en-us/documentation/articles/web-sites-integrate-with-vnet/
App Service supports three ways to connect to VNETs.
ASE (App Service Environment) - a dedicated Cloud Service that includes all the needed pieces for App Service and as such can be joined to a VNET. A good starting point on ASE is this blog (https://azure.microsoft.com/en-us/blog/introducing-app-service-environment/).
Hybrid Connections - an agent-based way to punch an application-specific "wormhole" through network boundaries (https://azure.microsoft.com/en-us/documentation/articles/integration-hybrid-connection-overview/)
Virtual Networks - a way to "dial up" from an App Service app into a network (https://azure.microsoft.com/en-us/documentation/articles/web-sites-integrate-with-vnet/)