Connecting to Azure VM from Azure App Service

I have an Azure virtual machine, on which a process listens on a certain port. A Node.js application on my local computer is able to connect to this process using the VM's public IP address. But the same Node.js application, deployed as an app service on Azure, is apparently not able to connect using any IP address, despite the fact that the VM allows all incoming traffic on all ports.
(Details: the VM process is running "q" (kdb+), and the Node.js application uses the "node-q" package to connect to it. Both the Azure VM and the Azure App Service are Linux, while the local machine running the application is Windows. The Azure App Service is able to connect to my Azure SQL database.)
Any insights into this problem would be appreciated.

There are many possible reasons for a Bad Gateway (502) error; you could verify these factors on your side:
Azure VM side. Make sure the Azure VM is running and the process port is listening when you request a connection from the application. You could run sudo netstat -plnt on the Linux VM to check the listening ports. A server can also become unresponsive if it has exhausted its memory, for example because of a surge of visitors or a DDoS attack.
Firewall blocks the request. You should allow all incoming traffic, or at least traffic from the Azure web app's outbound IP addresses, on this listening port on the VM. In this scenario, verify the Network Security Group configuration for the VM, and the firewall inside the VM if you have one configured. You can find the NSG settings in the Azure portal under Virtual machine > Settings > Networking > Inbound port rules. A connectivity probe you can run from the App Service console is sketched after this list.
Faulty programming. This seems unlikely here, since the Node.js application works locally.
Temporary issue. Sometimes there is no real problem, but your browser thinks there is one because of a browser issue, a problem with your networking equipment, or some other reason. You could refresh the browser or clear the cache and cookies to get back the page you are looking for. For more details, refer to guides on fixing 502 errors.
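If the NSG and VM firewall look correct, a quick way to tell a dropped connection (a timeout, typical of a blocking firewall) apart from a refused one (nothing listening on the port) is a raw TCP probe run from the App Service's SSH or Kudu console. The following is only a minimal sketch using Node's built-in net module; the IP address and port are placeholders for your VM's public address and the kdb+ listening port.

    // Minimal TCP reachability probe using only Node's built-in "net" module.
    // Run from the App Service console; replace VM_IP and PORT with real values.
    const net = require("net");

    const VM_IP = "203.0.113.10"; // placeholder: the VM's public IP
    const PORT = 5000;            // placeholder: the kdb+ listening port

    const socket = net.createConnection({ host: VM_IP, port: PORT, timeout: 5000 });

    socket.on("connect", () => {
      console.log("TCP connection succeeded - the port is reachable.");
      socket.end();
    });

    socket.on("timeout", () => {
      console.error("Timed out - traffic is likely being dropped by an NSG or firewall.");
      socket.destroy();
    });

    socket.on("error", (err) => {
      // ECONNREFUSED usually means the host is reachable but nothing is listening.
      console.error("Connection error:", err.code, err.message);
    });

If the probe connects but node-q still fails, the problem is more likely in the application or its deployment than in the network path.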
If you still have any question, feel free to let me know.

It was a faulty deployment. I didn't include all dependencies in the upload to Azure. Thank you.

Related

Azure calls working on local but not on production server - Firewall settings?

We have an Azure setup where we use Azure as our proxy for sending data to our apps via Azure Functions.
We are having issues because our local development Windows environments send the calls to Azure successfully; we verify this by logging into the Azure portal and watching the traffic for those calls in the Azure Function console. When we run our code on our local machines, we see the traffic and the calls being made, BUT when we try the same calls from our production server environment (hosted onsite, Windows Server 2016), we can't see any traffic come through for our Azure calls.
I am trying to chase down whether it is the firewall on the production server machine, and to see if there are any outbound firewall rules that need to be opened up or added in order to talk to Azure, but my searches haven't turned up anything about a local machine talking to Azure. Most of the articles that come up are about setting up a firewall on Azure, not about local firewall rules for reaching Azure.
The application we are running is an onsite IIS-hosted website with calls out to Azure.
Does anyone have any pointers on where or what I should be looking at to see if there is any communication going from our production server to Azure? Which logs, rules, anything that could point us in a direction? I feel I have looked in most places: IIS logs, application logs (we just write a log entry saying that the call was sent).
But if there is a specific firewall setting on the production server that I need to add, I don't know what that would be, and if anyone does know it would be very helpful.
UPDATE:
We have so far found that we can hit the functions that allow GET requests through a browser. The issue seems to be either IIS, a permission problem with IIS, or the application itself. We even set the permissions on the application's folder on our server to "Everyone" just to see what would happen, and we still have not had any luck. The calls we are making are actually POSTs to the Azure Function. We don't have Postman on the machine.
Assuming you're calling an Azure Function which is not running on an App Service Environment, or behind API Management or similar, then the only place you can restrict access is on the networking tab of the Function's settings. If you don't have this configured, then the Function is not where the issue is.
If traffic outbound from your on-prem server is being blocked, then you will need to talk to your IT team to get that opened up. You don't mention how you're calling your function, but if it is an HTTP trigger, then you would need port 443 open outbound. A minimal test you can run from the server is sketched below.
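Since Postman isn't available on the machine, a hedged sketch like the following can stand in for it; run it on the production server to see whether a POST to the Function over 443 leaves the box at all. The host name, route and function key are placeholders, and this assumes Node.js is available on the server; any client that can POST over HTTPS would show the same thing.

    // Minimal POST to an HTTP-triggered Azure Function using Node's built-in
    // "https" module. Replace the hostname, path and key with your own values.
    const https = require("https");

    const body = JSON.stringify({ ping: "from-production-server" });

    const req = https.request(
      {
        hostname: "myfuncapp.azurewebsites.net",   // placeholder Function App host
        path: "/api/MyFunction?code=FUNCTION_KEY", // placeholder route and key
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "Content-Length": Buffer.byteLength(body),
        },
      },
      (res) => {
        console.log("Status:", res.statusCode);
        res.on("data", (chunk) => process.stdout.write(chunk));
      }
    );

    req.on("error", (err) => {
      // A timeout or connection error here points at outbound 443 being blocked.
      console.error("Request failed:", err.message);
    });

    req.write(body);
    req.end();

If this succeeds from a console on the server but the IIS-hosted site still can't reach the Function, the blocker is more likely the application pool identity or a proxy setting than the firewall.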

How can I diagnose a connection failure to my Load-balanced Service Fabric Cluster in Azure?

I'm taking my first foray into Azure Service Fabric, using a cluster hosted in Azure. I've successfully deployed my cluster via ARM template, which includes the cluster manager resource, VMs for hosting Service Fabric, a Load Balancer, an IP address and several storage accounts. I've successfully configured the certificate for the management interface, and I've written and deployed an application to my cluster. However, when I try to connect to my API via Postman (or even via a browser, e.g. Chrome), the connection invariably times out and never gets a response. I've double-checked all of my settings for the Load Balancer, and traffic should be getting through, since my load-balancing rules use the same front-end and back-end port, which is the port my API listens on in Service Fabric. Can anyone provide some tips for how to troubleshoot this situation and find out where exactly the connection problem lies?
To clarify, I've examined the documentation here, here and here
Have you tried logging in to one of your service fabric nodes via remote desktop and calling your API directly from the VM? I have found that if I can confirm it's working directly on a node, the issue likely lies within the LB or potentially an NSG.
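Building on that suggestion, a small probe that hits the API both directly on the node and through the load balancer can show which leg is failing. This is only a sketch under assumed values: the service port, public DNS name and route below are placeholders for your own cluster and API.

    // Compare the API response from the node itself vs. through the load balancer.
    // Run on a Service Fabric node after connecting via remote desktop.
    const http = require("http");

    const targets = [
      { name: "direct (node)", host: "localhost", port: 8080 },                    // placeholder service port
      { name: "via LB", host: "mycluster.eastus.cloudapp.azure.com", port: 8080 }, // placeholder public address
    ];

    for (const t of targets) {
      const req = http.get(
        { host: t.host, port: t.port, path: "/api/values", timeout: 5000 },        // placeholder route
        (res) => {
          console.log(`${t.name}: HTTP ${res.statusCode}`);
          res.resume();
        }
      );
      req.on("timeout", () => {
        console.log(`${t.name}: timed out`);
        req.destroy();
      });
      req.on("error", (err) => console.log(`${t.name}: ${err.code || err.message}`));
    }

If the direct call answers but the load-balanced one times out, the load-balancing rule, health probe or NSG is the place to look; if neither answers, the service itself isn't listening where you expect.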

SMB access to on-premise resource from Azure Web App via Virtual Network

We have a setup where we have both VMs and Web Apps in Azure connected to our on-premise resources via a point-to-site virtual network.
We have a folder on-premises with access open to Everyone (both on the share and in NTFS), and the Azure VMs that are on that virtual network are able to browse to the share without difficulty.
The web apps, however, are not able to access it.
I'm assuming the following line in this article explains the reason, but I'm looking to confirm this is not possible:
The work required to secure your networks to only the web apps that need access prevents being able to create SMB connections. While you can access remote resources this does not include being able to mount a remote drive.
Here is what comes out of the logs from the website's attempt to access it:
Taking the C# code out of the picture, here is an attempt to get the directory listing from the PowerShell console on the web app:
I've also tried this with Hybrid Connections and am getting closer: once it's set up and attached to the Web App, I'm able to tcping the SMB port from the PowerShell console (which is further than I get when using the VNet), but it's still unable to list a directory:
Any thoughts? Anyone doing anything similar?
The tcping result is actually misleading: you are really pinging a local port hosted on your web app (which is why the tcping results are ~1 ms). Tcping doesn't actually test the full tunnel for Hybrid Connections, because the tunnel is a TCP-level data relay only (that is, it does not send TCP headers, etc., over the tunnel, only payload), and tcping does not send any data; it only verifies that the TCP handshake succeeded. A probe that pushes actual payload through is sketched below.
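A rough way to exercise the tunnel beyond the handshake is to connect and actually write a few bytes, since the relay only carries payload. This is a hedged sketch using Node's net module, assuming Node is available in the web app's console; the hostname and port are placeholders for the Hybrid Connection endpoint configured on the web app. Note that an SMB server will not answer junk bytes, so the useful signal is whether the write triggers an error or reset rather than whether data comes back.

    // Connect to the Hybrid Connection endpoint and push a few bytes, instead of
    // only completing the TCP handshake (which is all tcping verifies).
    // Hostname and port are placeholders for the endpoint configured on the web app.
    const net = require("net");

    const socket = net.createConnection({ host: "onprem-fileserver", port: 445, timeout: 10000 });

    socket.on("connect", () => {
      console.log("Handshake OK (this is as far as tcping gets); writing payload...");
      socket.write(Buffer.from("probe"));
    });

    socket.on("data", (chunk) => {
      console.log("Received", chunk.length, "bytes back through the relay.");
      socket.end();
    });

    socket.on("timeout", () => {
      console.log("No response - the far end may be unreachable, or it is ignoring the payload.");
      socket.destroy();
    });

    socket.on("error", (err) => {
      // A reset here often means the relay could not reach the on-premises endpoint.
      console.error("Socket error:", err.code, err.message);
    });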
Unfortunately, the article is correct - SMB will not work at all in your Web App. There are security layers in place that will block the attempt.

WorkerRole in Azure Cloud Service net connection

This afternoon I uploaded my WorkerRole to a Cloud Service on Azure; this service runs on a VM with Windows Server 2012. I have realized that the WorkerRole can't run queries against the databases (BigQuery, T-SQL). When I read the service log on the VM, I saw the following error:
The VM and host networking components failed to negotiate protocol version '5.0'
I think that Hyper-V-vsc has something to do with it. Does anybody know what is happening?
Thanks,
Roger
The first thing I would check is to make sure the databases you are trying to connect to have whitelisted the VIP of the cloud service you're connecting from. And if you haven't already, remote into an instance of the worker and try reaching the DBs with as thin a client as you can.
In my experience, these issues are usually on the DB end. Azure doesn't do much to block outbound connections; those that fail are usually more a matter of protocol (UDP multicast, for example).

Allowing only local network connections to a Windows Azure VM?

I am trying to lock down a virtual machine that acts as an app server for a web application. I have two VMs: one for the app server and another running the web server. I have to open a lot of ports to allow the web server to talk to some WCF services, but I only want to allow those connections from the web server, not from anything outside of that network. I have to add endpoints in order for the web server to access the WCF services, but this also makes them accessible on the public IP. How can I allow this traffic only on the local network?
For Virtual Machines, the only way of accessing ports from outside the hosted service is by defining input endpoints (with or without load-balancing across a set of machines). In your case, you'd just open, say, 80 and 443, specifically for your web server (e.g. not load-balanced). This is considered a port-forwarded endpoint, since traffic on these two ports gets forwarded directly to your web server. For more clarity around port-forwarded endpoints, I suggest Michael Washam's blog post, here.
At this point, you'd open various other ports on your app server (through its firewall config), and now your web server can talk to the app server, yet the outside world won't be able to reach the app server. Note: I'm assuming you placed your web server and app server in the same hosted service. Otherwise, you'd need to find a different way to connect between web and app servers, such as configuring a Virtual Network.
EDIT 6/5/2013 You can now enable ACLs on input endpoints, allowing (or blocking) IP ranges. Today ACLs may only be managed through PowerShell, with the June 2013 update. See this post to learn more.
Machines that exist on the same virtual network will be able to talk to each other as long as the local firewall has been opened on those ports. The problem turned out to be with the configuration in my application, not with the endpoint setup. I also didn't have the correct ports open. Now it works like a charm.
