I want to deploy a simple FastAPI app to an Azure App Service, but I keep getting an error message.
This is my API:

from fastapi import FastAPI

app = FastAPI()

@app.get('/')
async def welcome():
    return {'message': 'Welcome to My website!'}
The API works just fine on my local machine. The command I use in my VS Code terminal is "uvicorn main:app".
To deploy my app, I have a startup.sh that contains a single command:
gunicorn -w 2 -k uvicorn.workers.UvicornWorker main:app
I've set up the App Service configuration and pricing tier, and I don't see any problem in the pipeline.
Alternatively, I've tried each of the following lines in startup.sh:
python -m uvicorn main:app
gunicorn --bind=0.0.0.0 --timeout 600 main:app
But all of them failed. Help is much appreciated!
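For reference, a single startup.sh command combining the uvicorn worker with an explicit bind address would look roughly like this (the port is an assumption; on Linux App Service the expected port is also exposed via the PORT environment variable):

```sh
# Sketch only: bind explicitly so the container listens where App Service expects.
# Port 8000 is an assumption; check the PORT environment variable in your plan.
gunicorn -w 2 -k uvicorn.workers.UvicornWorker --bind=0.0.0.0:8000 main:app
```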
Great to know that upgrading the App Service plan resolved the issue.
In general, the first step in troubleshooting is to use App Service Diagnostics:
In the Azure portal for your web app, select Diagnose and solve problems from the left menu.
Select Availability and Performance.
Examine the information in the Application Logs, Container Crash, and Container Issues options, where the most common issues will appear.
Next, examine both the deployment logs and the app logs for any error messages. These logs often identify specific issues that can prevent app deployment or app startup.
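Beyond the portal, the same logs can be streamed from the Azure CLI; a sketch with placeholder names:

```sh
# Placeholder app/resource-group names; enable file-system logging, then stream it.
az webapp log config --name <app-name> --resource-group <rg> \
    --application-logging filesystem --docker-container-logging filesystem
az webapp log tail --name <app-name> --resource-group <rg>
```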
My issue:
When I try to access the main URL for my web app, Azure replies with '403 - You do not have permission to view this directory or page'.
Context:
I have deployed a Python web app to Azure using a Pipeline/Release on DevOps (the Azure Web App Deploy task seems to run successfully with the artifact generated by the Pipeline). I have previously deployed Python Function Apps successfully with a similar pipeline (different app type and SKU, of course).
The Kudu SCM page (e.g., myapp.scm.azurewebsites.net) works.
All logs seem to indicate the web app deployment was successful. If I use CMD or PowerShell from the SCM, I can see my app.py (for Flask) is in the correct location. The deployment installed my requirements, including Flask, under site-packages.
The app runs quite successfully on my local machine via 'flask run', after I activate the virtual environment.
Yet when I try to connect to myapp.azurewebsites.net, I get a 403 on the plain route. Anything after it, like /test or /myapi, returns a 404.
Something I do not see in any of the logs I can access via Kudu is any mention of 'gunicorn', which I believe is what Azure uses by default. I just want to see some log output somewhere showing that Flask, gunicorn, or something has successfully loaded app.py and is listening for incoming connections.
Maybe you do not know why I would get 403s, but you might know where I should be seeing the aforementioned logs.
TIA for any suggestions.
EDIT:
Something to add: if I enable logs and connect to the log stream, I do see logs generated as I access Kudu. This suggests some application and web server are running, at least for whatever container runs that side of things.
It even notes the failed connections from Postman for the actual myapp.azurewebsites.net, but shows nothing beyond a line indicating a 403.
My app has been stripped down to the most bare app.py with no includes other than Flask and routes which simply return a string. Most includes in requirements.txt have also been stripped out.
Still same issue.
I do have an answer, after a couple of days' worth of pulling my hair out.
It turns out the 403s were not actually a permissions issue. Running:
az webapp list-runtimes --os windows
shows no runtimes available for a Python/Flask web app on Windows. This is why I could not find any gunicorn or Flask logs: neither was set up. Azure deployed the artifact's zip and called it a day.
To rectify this, the DevOps Pipeline/Release must run on Linux. The Azure Web App Deploy task, when set to "Web App on Linux", will have Python runtime stacks available. Once selected, these will allow for a startup command to be specified. (Such as flask run --host=0.0.0.0 --port=8000)
Furthermore, in azuredeploy.json, the "Microsoft.Web/serverfarms" resource must have a "kind" that includes "linux". It also requires:
"properties": {"reserved": true}
Once deployed, the logs indicate that Docker is set to an internal port of 8000, while the default 'flask run' that gets executed would use 5000.
Ideally, use gunicorn with port mapping; but to get things going, tell Flask to use port 8000.
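As a sketch, a startup command covering the gunicorn route would look like this (the module path app:app is an assumption based on the app.py mentioned above):

```sh
# Assumed module path app:app; bind gunicorn to the internal port Docker expects.
gunicorn --bind=0.0.0.0:8000 app:app
```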
I have an Azure Container App with a simple Node.js API service. I need to read the logs of this application, just to see my console.log('Hi there!').
The Container App's Monitoring Logs offer a huge list of different queries. Which one do I need to use to see my console output? Or can someone provide a simple query to fetch my logs?
p.s. I want to see the same logs I can see with the command:
az containerapp logs show -n <containerName> -g <resourceGroup>
I have tried to reproduce the issue by deploying a sample app in an App Service container in Azure.
To view your application's console logs, go to Revision Management, click on your app, and select Console logs (view details).
Running the console-logs query there shows the logs generated by your application.
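If the app's logs are flowing into Log Analytics, a minimal query along these lines should surface the console output (the table and column names are the defaults for Container Apps; adjust if your environment differs):

```kusto
ContainerAppConsoleLogs_CL
| where ContainerAppName_s == "<containerName>"
| project TimeGenerated, Log_s
| order by TimeGenerated desc
```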
Publish-AzWebApp is not working for a Linux Function App
Publish-AzWebApp -ResourceGroupName Default-Web-WestUS -Name MyApp -ArchivePath C:\\project\\app.zip
I am using the above command in PowerShell, running it in a CI/CD process, but it is not deploying the files to the Function App (deployed on App Service) running on Linux.
Getting the following error:
Service unavailable
It is working fine for Windows.
The following reasons can result in a service unavailable error:
The function host is down or restarting.
A platform issue resulting from the backend server not being available/allocated.
A memory leak in the code causing the backend server to return a service unavailable error.
Use the "Diagnose and solve problems" blade in the Function App to select the "Function app down or reporting errors" detector.
This detector displays diagnostic information about the Function App and its infrastructure, which gives some insight into issues with function hosts. Also look under the Web app restarted section to determine whether any platform-related issues contributed to the service unavailable error.
Since you are running on the Linux platform, you will find information about container recycles in the Web app restarted detector.
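As a possible workaround on Linux, the zip can also be pushed with the Azure CLI instead of Publish-AzWebApp; a sketch using the names from the question:

```sh
# Zip deploy via the Azure CLI as an alternative to Publish-AzWebApp.
# Resource names taken from the question; adjust the zip path for your shell.
az functionapp deployment source config-zip \
    --resource-group Default-Web-WestUS --name MyApp --src app.zip
```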
I'm trying to deploy the simple NodeJS hello-world functions the Serverless Framework provides to my Azure free-tier account, from an Azure DevOps Build Pipeline, using the Service Principal credentials I created when making the deployment from my desktop originally.
I've used several of the Build Agent and Task combinations, including Windows and Ubuntu agents as well as Bash, Command Line, Azure PowerShell, and Azure CLI tasks with the DevOps-provided link to the Service Principal credentials. I've made sure to add them as pipeline variables so that they are included in the tasks' environment variables, and I've confirmed that they are there when the tasks run. I also make sure that the Azure CLI is installed, logged in, and has the subscription set.
No matter what settings/permissions I tweak or new configurations I try, when the task runs successfully to the point where the Serverless Framework attempts the deployment, it always tries to get me to use a browser to authenticate my account. This obviously defeats the purpose of a CI/CD pipeline, and even if I do use a browser to authenticate, the process just hangs there.
The sample code and deployment works on my desktop, so I know the credentials work. I believe I've emulated each step I take on my desktop in the Build Pipeline, yet while my desktop deploys without browser authentication the build always requests it. Does anyone have experience in this manner and know what step/configuration I'm missing?
To look at the sample code and process look here or run these steps:
serverless create -t azure-nodejs -p testApp
cd .\testApp\
Change Node Runtime and Region in serverless.yml (nodejs12.x not supported & no free tier in West US)
serverless deploy
Here's the link I used to get this working on my desktop: link
Edit: Here is the default serverless.yml created by the steps above:
service: azure-serverless-helloworld

provider:
  name: azure
  region: East US
  runtime: nodejs8.10
  environment:
    VARIABLE_FOO: 'foo'

plugins:
  - serverless-azure-functions

package:
  exclude:
    - local.settings.json
    - .vscode/**
    - index.html

functions:
  hello:
    handler: src/handlers/hello.sayHello
    events:
      - http: true
        x-azure-settings:
          methods:
            - GET
          authLevel: anonymous
  goodbye:
    handler: src/handlers/goodbye.sayGoodbye
    events:
      - http: true
        x-azure-settings:
          methods:
            - GET
          authLevel: anonymous
You can try the steps below: run sls package in a Command Line task to create a deployment package, then use the Azure Function App task to deploy it to Azure.
1. Install a specific Node.js version using the Node.js tool installer task.
2. Install serverless using an npm task with a custom command.
3. Use an npm task to run the install command to install dependencies.
4. Use a Command Line task to run sls package to create the deployment package.
5. Use the Azure Function App deploy task to deploy the deployment package.
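Steps 2-4 above boil down to commands along these lines (assuming the default project layout, with the Azure Function App task handling the actual deployment afterwards):

```sh
# Sketch of steps 2-4; run after the Node.js tool installer task.
npm install -g serverless   # step 2: install the Serverless Framework CLI
npm install                 # step 3: install project dependencies
sls package                 # step 4: create the deployment package
```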
Right now the Serverless Framework thinks you're trying to deploy your application using the Serverless Dashboard (which does not yet support Azure).
I'm not sure, because you haven't posted your serverless.yml file, but I think you'll need to remove the app and org attributes from your serverless.yml configuration. Then it will stop asking you to log in.
Using the serverless framework to deploy a function through DevOps gave me the same issue.
The problem is that the sls deploy command will build, package, and deploy the code, but it will ask you for credentials each time you run the pipeline.
I solved this by using the serverless package command in the build task, and then deploying the zip it generated with a normal web app deploy task.
I am unable to run the Docker image dpage/pgadmin4 (available on Docker Hub) on an Azure Web App (Linux).
I have installed Docker on my Linux machine and was able to run that Docker image locally. Then I created a Web App in Azure with the options given below:
OS: Linux
Publish: Docker Image
App service plan: Linux app service
After creating web app, I added two env variables in App Settings section:
PGADMIN_DEFAULT_EMAIL : user@domain.com
PGADMIN_DEFAULT_PASSWORD : SuperSecret
Finally the login screen is visible, but when I enter the above credentials it doesn't work and keeps redirecting to the login page.
Update: if login works properly, the pgAdmin initial screen appears.
After several retries I once got a message (CSRF token invalid) displayed in the top-right corner of the login screen.
For CSRF to work properly there must be some server-side state, so I activated "ARR affinity" under "General Settings" in the Azure "Configuration" blade.
I also noticed in the examples in the documentation the two environment variables PGADMIN_CONFIG_CONSOLE_LOG_LEVEL (set to '10' in the example) and PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION (set to 'True' in the example).
After enabling ARR affinity and setting PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION to False, the login started to work. I have no idea what PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION actually does, so please take that with caution.
If that's not working for you, setting PGADMIN_CONFIG_CONSOLE_LOG_LEVEL to 10 and enabling console debug logging may give you a clue about what's happening.
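For completeness, the PGADMIN_CONFIG_* settings mentioned above can also be applied from the Azure CLI (placeholder names; see the caveat about ENHANCED_COOKIE_PROTECTION):

```sh
# Placeholder app/resource-group names; apply the pgAdmin settings discussed above.
az webapp config appsettings set --resource-group <rg> --name <app-name> \
    --settings PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION=False \
               PGADMIN_CONFIG_CONSOLE_LOG_LEVEL=10
```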
For your issue, I ran a test and found something really strange. When I deploy the Docker image dpage/pgadmin4 to the Azure Web App for Containers service through the Azure CLI and set the app settings, there is no problem logging in with the user and password. But when I deploy it through the Azure portal, I hit the same issue as you.
I'm not sure of the reason, but the solution is to set the environment variables PGADMIN_DEFAULT_EMAIL and PGADMIN_DEFAULT_PASSWORD through the Azure CLI, like below:
az webapp config appsettings set --resource-group <resource-group-name> --name <app-name> --settings PGADMIN_DEFAULT_EMAIL="user@domain.com" PGADMIN_DEFAULT_PASSWORD="SuperSecret"
If you really want to know the reason, you can send feedback to Microsoft. Maybe it's a bug or some special setting.
Update: I verified this with a test on my side.