I have been running a single-container .NET Core web app on Azure using their preview single-container service for a couple of months.
It runs fine and Azure diligently proxies requests to the container from the web as expected - albeit with an important caveat.
What it doesn't do, to my chagrin, is pass the requester's IP address to the container directly; instead, it forwards it in an additional HTTP header. This means that the backend code, which uses Application Insights for telemetry, captures the proxy's IP address (0.0.0.0) instead of the originating client's IP address.
This is clearly an oversight and is addressable via a pull request (since the Application Insights code is open source); the problem I have is that I'm missing important IP address information.
Does anyone know if there's a way to obtain this retrospective request/log data (at host level) from the Azure Portal?
Related
I have an app service published on Azure that makes requests to another server. Sometimes the client's server blocks me due to too many requests, and it takes a while to return to normal. Is there any way to change the outgoing IP address of my requests without using a third-party proxy service?
Hello, I was wondering if someone could answer this question for me:
Is there a way for me to view logs of incoming requests and their IP addresses?
Here is the scenario:
We have multiple host machines that every 5 minutes submit data into our .NET web application via a simple MVC controller. One of the machines' configurations points to the correct domain, but the wrong controller name.
So every 5 minutes this generates a 404 error in the Azure Portal. I would like to identify which machine is configured incorrectly by finding the IP address of the incoming request that is causing this issue. We are running a .NET web application with 12 VM instances, and I have checked the Application Insights/Logs section but cannot find any references to the IP address.
Is there any way to track it via the Azure Portal?
Thanks in advance.
As long as the Application Insights .NET or .NET Core SDK is installed and configured on the server to log requests, you can create or update the Application Insights resource on Azure so that it records the client's IP address.
You may currently be seeing the IP 0.0.0.0 in logs, which is the default:
This behavior is by design to help avoid unnecessary collection of personal data. Whenever possible, we recommend avoiding the collection of personal data.
From the same article, you can see the setting to configure, as follows (shortened for brevity):
{
  // ...
  "properties": {
    // ...
    "DisableIpMasking": true
  }
}
After this setting is configured, logs will begin showing the client IP addresses when queried in Application Insights.
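If it helps to pull that data programmatically instead of through the portal's Logs blade, here is a minimal sketch using the Application Insights REST API (Node 18+ for the built-in fetch). The APP_ID and API_KEY values are placeholders taken from the resource's API Access blade, and the client_IP column will only contain real addresses once DisableIpMasking is in effect.

// Sketch: query recent request telemetry, including client IPs, via the
// Application Insights REST API. APP_ID and API_KEY are placeholders from
// the resource's "API Access" blade.
const APP_ID = "<application-insights-app-id>";
const API_KEY = "<api-key>";

const query =
  "requests | where timestamp > ago(1d) | project timestamp, name, resultCode, client_IP";

async function main(): Promise<void> {
  const url =
    `https://api.applicationinsights.io/v1/apps/${APP_ID}/query` +
    `?query=${encodeURIComponent(query)}`;
  const res = await fetch(url, { headers: { "x-api-key": API_KEY } });
  if (!res.ok) {
    throw new Error(`Query failed: ${res.status} ${res.statusText}`);
  }
  const body: any = await res.json();
  // Results come back as tables; the first table holds the query rows.
  console.log(JSON.stringify(body.tables?.[0]?.rows, null, 2));
}

main().catch((err) => console.error(err));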
I have Azure App Services behind the Azure Application Gateway/Firewall. There are a few applications that talk to each other. Do those applications talk internally (using xxx.azurewebsites.net) or do they talk over the public domain (mydomain.com)?
Also, how can I check this in the logs?
Current configuration:
HTTP settings: "Pick hostname from backend address" is checked.
Probes: "Pick hostname from backend HTTP settings" is checked.
To answer your question: no, if your applications are inside Azure's network, traffic usually won't go through the public domain. But it will go through the firewall/gateway and follow the same networking restrictions you have defined.
Which logs do you want to check? If you want to see the application event logs, you can do it using SCM (the Kudu console). You can access it via Diagnostics/Advanced Tools in your Azure App Service.
You can enable access logs on the Application Gateway to see all the requests that hit it. They include a hostname field where you can check how the site is being accessed.
Let me know if you have any further questions.
I have an Azure virtual machine, on which a process listens on a certain port. A Node.js application on my local computer is able to connect to this process using the VM's public IP address. But the same Node.js application, deployed as an app service on Azure, is apparently not able to connect using any IP address, despite the fact that the VM allows all incoming traffic on all ports.
(Details: The VM process is running "q" (kdb+), and the Node.js application is using the "node-q" package to connect to it. Both the Azure VM and the Azure app service are Linux, but the local version of the app service is on Windows. The Azure app service is able to connect to my Azure SQL database.)
Any insights into this problem would be appreciated.
There are many possible reasons for a bad gateway error; you could verify these factors on your side:
Azure VM side. Make sure the Azure VM is running and that the process is listening on the port when you request a connection from an application. You can run sudo netstat -plnt on a Linux VM to check the listening ports. Also, a server can crash if it has exhausted its memory, due to a surge of visitors or a DDoS attack.
Firewall blocking the request. You should allow all incoming traffic, or at least the Azure web app's outbound traffic, on this listening port on the VM. In this scenario, verify the Network Security Group configuration for the VM and any firewall inside the VM. You can find the NSG settings under Virtual machine > Settings > Networking > Inbound port rules in the Azure portal. (A quick connectivity check is sketched after this list.)
Faulty programming. This seems less likely, since the Node.js application works locally.
Temporary issue. Sometimes there is no real issue, but your browser thinks there is one because of a problem with the browser itself, your networking equipment, or some other transient cause. You could refresh your web browser or clear the cache and cookies to get back the page you are looking for. For more details, you can refer to guidance on fixing 502 errors.
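To quickly separate the first two factors (no listener vs. firewall/NSG) from an application problem, a minimal sketch like the following could help; HOST and PORT are placeholders for the VM's public IP and the listening port. Run it locally first and then from the App Service's console (Kudu/SSH): if it connects locally but times out from the App Service, the problem is almost certainly network configuration rather than code.

// Sketch: raw TCP reachability check for the VM's listening port.
// HOST and PORT are placeholders; run it locally and again from the
// App Service console to compare results.
import * as net from "net";

const HOST = "<vm-public-ip>";
const PORT = 5001; // placeholder: the port your process listens on

const socket = net.createConnection({ host: HOST, port: PORT, timeout: 5000 });

socket.on("connect", () => {
  console.log(`Connected to ${HOST}:${PORT} - the port is reachable.`);
  socket.end();
});

socket.on("timeout", () => {
  console.error("Connection timed out - likely NSG/firewall or no listener.");
  socket.destroy();
});

socket.on("error", (err) => {
  console.error(`Connection failed: ${err.message}`);
});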
If you still have any questions, feel free to let me know.
It was a faulty deployment. I didn't include all the dependencies in the upload to Azure. Thank you.
I'm taking my first foray into Azure Service Fabric using a cluster hosted in Azure. I've successfully deployed my cluster via ARM template, which includes the cluster manager resource, VMs for hosting Service Fabric, a load balancer, an IP address and several storage accounts. I've successfully configured the certificate for the management interface, and I've successfully written and deployed an application to my cluster. However, when I try to connect to my API via Postman (or even via a browser, e.g. Chrome) the connection invariably times out and does not get a response. I've double-checked all of my settings for the load balancer, and traffic should be getting through since I've configured my load-balancing rules so that the frontend and backend use the same port as my API in Service Fabric. Can anyone provide me with some tips for how to troubleshoot this situation and find out where exactly the connection problem lies?
To clarify, I've examined the documentation here, here and here
Have you tried logging in to one of your Service Fabric nodes via remote desktop and calling your API directly from the VM? I have found that if I can confirm it's working directly on a node, the issue likely lies with the LB or potentially an NSG.
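To make that local check concrete, you can hit the service on its listening port from the node with a browser, curl, or PowerShell; if Node happens to be installed, a tiny sketch like the one below does the same thing. The port and path here are placeholders for whatever endpoint your API actually exposes.

// Sketch: call the API locally from a Service Fabric node.
// PORT and PATH are placeholders for the service's actual endpoint.
import * as http from "http";

const PORT = 8080; // placeholder: the port your service listens on
const PATH = "/api/values"; // placeholder: any known endpoint

http
  .get({ host: "localhost", port: PORT, path: PATH }, (res) => {
    console.log(`Status: ${res.statusCode}`);
    res.resume(); // drain the response so the socket is released
  })
  .on("error", (err) => {
    console.error(`Local call failed: ${err.message}`);
  });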