Host name resolution error (DNS) while accessing an App Service - Azure

Recently, I have noticed that many requests to my app service fail due to a DNS issue (on a daily basis).
I use the app service to run a web service for my platform (the app service is located in UK West).
I get the same errors from both the Android and iOS applications used by my users.
The error says: "unable to resolve host ***.azurewebsites.net"
In addition, I would like to mention that I created a new app service (located in West Europe), but I still
get the same errors.
The errors seem to occur randomly, at different times and from different devices.
Update
I created a new app service and added some simple logic on the client side that switches between the two app services when a DNS error is detected.
After exploring my logs, I noticed that switching to the secondary app service sometimes succeeds (no DNS issue), and sometimes
the DNS error keeps occurring.
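For reference, a minimal sketch of this kind of client-side fallback, written here in Python only to illustrate the idea (the hostnames are placeholders, and the real clients are mobile apps):

import socket
import requests

PRIMARY = "myapp-ukwest.azurewebsites.net"        # placeholder for the original app service
SECONDARY = "myapp-westeurope.azurewebsites.net"  # placeholder for the secondary app service

def call_api(path):
    # Try the primary host first and fall back to the secondary when DNS fails.
    last_error = None
    for host in (PRIMARY, SECONDARY):
        try:
            socket.gethostbyname(host)  # raises socket.gaierror if the name cannot be resolved
            return requests.get(f"https://{host}{path}", timeout=10)
        except (socket.gaierror, requests.exceptions.ConnectionError) as exc:
            last_error = exc  # DNS or connection failure: try the next host
    raise RuntimeError(f"both hosts failed: {last_error}")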

Related

Unhealthy backend after scaling up App Service plan

I have an application gateway running with a web application in an App Service plan. The application gateway listens and passes requests to the backend, which is the web app. There is a health probe implemented that works fine.
The web app was reachable without issue until I scaled up the App Service plan. Suddenly the health probe timed out reaching the backend and I got a 502 Bad Gateway error in the browser when trying to reach the web application. Hours later, the website was suddenly back and the backend was healthy again. I was under the impression that you could scale the App Service plan up and down without any noticeable effect on the website, but it seems the gateway was not playing along.
Did I configure something wrong or should this work like I assumed?
I tried to reproduce the same in my environment by creating an App Service running behind an application gateway, and I also got a 502 error.
The number of TCP connections allowed differs by plan tier (an older Standard plan allows roughly double). While scaling up and down in App Service, try to remain in the same tier so that the inbound IP stays the same; wait for some time and then scale back.
Try updating the default setting under Configuration -> General settings -> ARR affinity: Off. This applies when your application isn't stateful, or when session state is kept in a remote service such as a cache or database. Also try running your application with a minimum of 2-3 instances to protect against failure.
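A quick way to confirm that ARR affinity is really off is to check whether responses still set the ARRAffinity cookie (a small sketch, using a placeholder URL):

import requests

# When ARR affinity is enabled, App Service adds an ARRAffinity cookie that pins
# the client to a single instance; once it is off, the cookie is no longer set.
resp = requests.get("https://myapp.azurewebsites.net/", timeout=10)  # placeholder URL
print("ARRAffinity" in resp.cookies)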
You can also make use of App Service diagnostics, which gives you the right information to troubleshoot issues more easily.
For Reference:
Get started with autoscale in Azure - Azure Monitor | Microsoft
Guide to Running Healthy Apps - Azure App Service
I got the same error in the application gateway as well. To avoid the issue, in your virtual network go to Service endpoints -> Add endpoint -> Microsoft.Web on the default subnet.

Simple App Service cannot access Azure SQL after upgrading service tier ("no such host is known")

I upgraded the service tier on only our App Service app (not the other resources). A day after doing this, the app would no longer start up, giving the following:
HTTP Error 500.30 - ASP.NET Core app failed to start
If I connect to the App Service portal and use Kudu to start it via a debug cmd prompt, I see
Microsoft.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No such host is known.)
System.ComponentModel.Win32Exception (11001): No such host is known.
Storage queue startup also fails with my queue hostname. In Kudu I also tried:
C:\home\site\wwwroot>nameresolver mydatabase.database.windows.net
Server: Default
Can't find mydatabase.database.windows.net: Non-existent domain
(not my actual server name...) but the same thing happens for google.com or any other hostname.
also:
C:\home\site\wwwroot>ping 142.251.32.142
Unable to contact IP driver. General failure.
but this seems to be expected?
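(For what it's worth, ICMP ping is blocked in the App Service sandbox, so the ping failure itself is expected; the Kudu console's tcpping tool, e.g. tcpping mydatabase.database.windows.net:1433, is the usual way to test TCP reachability there. A rough Python equivalent of the two checks, using the same placeholder server name:)

import socket

host = "mydatabase.database.windows.net"  # placeholder name, as above

try:
    # Same step nameresolver performs: resolve the name to IP addresses.
    addresses = {info[4][0] for info in socket.getaddrinfo(host, 1433)}
    print("resolved to:", addresses)
    # Same idea as tcpping: open a TCP connection to the SQL port.
    socket.create_connection((host, 1433), timeout=5).close()
    print("tcp connect ok")
except socket.gaierror as exc:
    print("dns failure:", exc)
except OSError as exc:
    print("tcp connect failed:", exc)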
I am not using any fancy VPNs, private networks, groups or whatnot - this is as plain vanilla as I could make it. This worked for months before the upgrade. It works when run locally. My firewall for the database allows all Azure services and my work/remote IP. I tried connection strings in the App Service configuration only, as well as only in appsettings.json. The connection string works in VS 2019, 2022, and SQL Server Management Studio, as well as the query tool within the Azure portal itself. I've restarted the App Service many times. An OpenDNS cache check shows multiple different IPs for my database server, but is this expected across regions? Why is the DNS apparently broken? How can I get this back to a functional state?
FYI - MS support contacted me and the service started working again with no changes after 6 days. So - there is no answer.

Azure App Gateway Back-End Health State Flipping

I have an Azure App Gateway connected to 3 different App Service apps, all running as part of the same App Service Plan (3 different back-end pools). In the Backend Health section of the AG, one of the apps/pools is constantly flipping between Healthy and Unknown states. I have checked the entire network configuration according to this article and everything seems to be configured properly.
I have configured IP restrictions on the App Services according to this article, specifying the subnet the AG resides in as allowed. I have also temporarily allowed my IP address, and every time the health for the one app goes to "unknown", I am still able to access the App Service using its native .azurewebsites.net URL locally on my machine.
Any ideas how I can troubleshoot this?
Please check whether the points below help to work around the issue.
As an initial workaround, try restarting the application gateway after the backend is deployed.
Also check this discussion on the GitHub issue: sometimes the application gateway will cache the response indefinitely, and the fix may be "Dynamic DNS", which ensures that the "non-existent domain" response is not cached on the application gateway. Also check for the fix using v16.
Also check this similar issue, which suggests using custom domain names, since the request looks for some domain.

Azure web app is 503 Service Unavailable. How do I get it back running?

Our website has been hosted on Azure for a few years. Tonight it is throwing 503 Service unavailable errors. I cannot even load a url to a .jpg file. I have restarted the app and still nothing loads from the website. I cannot buy Azure support because I have bought and cancelled Azure support in the past. We are a 3 person business and depend on our small website and it is down and I don't know what to do. None of the trace logs make any sense to me.
I think a 503 could mean that you reached a quota and Azure now responds with a 503 for requests. So I would check the Quotas section within your App Service Plan.
Also check:
Troubleshoot HTTP errors of "502 bad gateway" and "503 service unavailable" in Azure App Service
There are several things you can do to help remedy the situation.
Restart the application (please indicate what it is, as that will help us).
Restart the instance that the application is running on.
Restore from a previous working backup of the site.
You should also add more information to your post so we can help, like what application you are using, e.g. Apache, Nginx, etc.
I've also had a similar problem. I had two deployment slots: the first slot (production) had the latest code, while in the second slot I had missed deploying the latest code, and traffic was configured 60-40, which gave me a hard time finding the issue.
Once I set 100% of the traffic to the production slot, it started working.
Just thought to share this in case it's useful if you come across the same thing in the future.
For me it was "Path mappings" in "Configuration".
As soon as I added a new Azure Storage mount, the application broke.
Setting my storage account's Networking to "Enabled from all networks" fixed the issue.
For us it was a result of the remote debugger. Disabling the remote debugger and restarting the App Service fixed the 503 error. I think one dev was remote-debugging while another was deploying the app, and that seems to have caused an issue under the hood of the App Service that broke port binding (we were seeing a stack trace in the logs about failing to bind to a port).

Obtain Virtual IP - Azure App Service Environment

I am trying to set up IP-based SSL instead of SNI SSL on an Azure Web App.
The App Service Plan is Standard S1, but unfortunately I am getting the following error message:
There are no IP addresses in the App Service Environment that are available to be assigned to your app.
What are the possible options?
I believe moving the current Web App to a different App Service Plan in a different resource group would solve this issue. I have already tried moving the App Service Plan to a different resource group, but that failed.
Note: Clicking the scale-up button doesn't work and shows a JavaScript error in the Chrome console.
Byron is correct that this is a bug in the UX.
A fix has been made and should be live later today.
Your app is being hosted in an App Service Environment.
Looks like the scale-up button is not working, and that is probably a bug in the UX.
As a workaround, you should be able to go directly to the App Service Environment that is hosting your app and perform the scale operation there.
Once the scale operation in the App Service Environment is done and the new IP address is added, you should be able to come back to the SSL binding UX in the app and try this again.
