Azure Functions developing locally - cannot register EventHub triggered function

I want to develop my Azure Function App locally and later publish it to the Azure Portal.
I am using the Azure Functions Core Tools command line, and all my functions are in Node.js.
Currently, I managed to download my functions locally and fetch their settings with the command:
func azure functionapp fetch-app-settings
So after that my local.settings.json has the correct settings values. When I make any changes, I am also able to publish them successfully to the Azure Portal.
The problem now is that I have two functions in my app: one is HTTP-triggered and the second is EventHub-triggered.
When I try to run the host locally with:
func host start
I get the following output in the console:
[10.12.2017 13:03:47] Found the following functions:
[10.12.2017 13:03:47] Host.Functions.HttpTriggerJS1
[10.12.2017 13:03:47]
[10.12.2017 13:03:47] Job host started
[10.12.2017 13:03:47] The following 1 functions are in error:
[10.12.2017 13:03:47] EventHubTriggerJS1: The binding type 'eventHubTrigger' is not registered. Please ensure the type is correct and the binding extension is installed.
And when I try to run this EventHubTriggerJS1 function locally with curl:
curl --request POST -H "Content-Type:application/json" --data '{"input":"sample queue data"}' http://localhost:7071/admin/functions/EventHubTriggerJS1
nothing happens, so I guess the problem is the registration of this trigger.
The HttpTriggerJS1 function runs perfectly; I can access it under
http://localhost:7071/api/HttpTriggerJS1
So, do you have any idea where the configuration problem might be? BTW, is it possible to run a function locally and have it connect to the remote Event Hub in the portal?

I was unable to reproduce your error on the version 1.0 runtime.
I reproduced the error in 2.0. I believe 2.0 does not support Event Hubs yet; see the functional gaps in the known issues list:
https://github.com/Azure/azure-webjobs-sdk-script/wiki/Azure-Functions-runtime-2.0-known-issues#functional-gaps
Try installing the extension:
func extensions install --package Microsoft.Azure.WebJobs.Extensions.EventHubs -v 3.0.0-beta4
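Once the extension is installed, the eventHubTrigger binding type should register. For reference, a minimal sketch of what the function.json for an Event Hub trigger usually looks like; the event hub name, consumer group, and connection-setting name below are placeholders, not values from your app:
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "name": "eventHubMessages",
      "direction": "in",
      "eventHubName": "my-event-hub",
      "consumerGroup": "$Default",
      "connection": "MyEventHubConnectionSetting"
    }
  ]
}
And to your last question: yes, a locally running function can consume the remote Event Hub, as long as the setting named by "connection" resolves to the Event Hub connection string in your local.settings.json, which fetch-app-settings should already have populated.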
Can you provide more detail about your functions and the steps you took to create them?
Was HttpTriggerJS1 created locally and then published to the portal following the steps outlined in https://learn.microsoft.com/en-us/azure/azure-functions/functions-run-local?
Was EventHubTriggerJS1 created in the portal? In the same Function App?
Do not mix local development with portal development in the same function app. When you create and publish functions from a local project, you should not try to maintain or modify project code in the portal.

Related

Deployed Azure WebApp gives 403

My issue:
When I try to access the main URL for my web app, Azure replies with a '403 - You do not have permission to view this directory or page'.
Context:
I have deployed a Python webapp to Azure using the Pipeline/Release on DevOps (Azure Web App Deploy task seems to run successfully with the artifact generated by the Pipeline). I have previously deployed Python Function Apps successfully with a similar pipeline (different app type of course, and sku).
The Kudu SCM page works, e.g. myapp.scm.azurewebsites.net.
All logs seem to indicate the webapp deployment was successful. If I use CMD or PowerShell from the SCM site, I can see my app.py (for Flask) is in the correct location. The deployment has my requirements installed under site-packages, including Flask.
The app runs quite successfully on my local machine via 'flask run', after I activate the virtual environment.
Yet when I try to connect to myapp.azurewebsites.net, I get a 403 on the plain route. Anything after it, like /test or /myapi, returns a 404.
Something I do not see in any of the logs I can access via Kudu is mention of 'gunicorn', which I believe is what Azure uses by default. I just want to see some kind of log output somewhere to show that flask or gunicorn or something has successfully loaded app.py and is listening for incoming connections.
Maybe you do not know why I would get 403's, but you might know where I should be seeing the aforementioned logs.
TIA for any suggestions.
EDIT:
Something to add: if I enable logs and connect to the log stream, then I do see logs generated as I access Kudu. This suggests some application and web server are running - at least for whatever container runs that side of things.
It even notes the failed connections from Postman for the actual myapp.azurewebsites.net, but has nothing other than a line indicating that there is a 403.
My app has been stripped down to the most bare app.py with no includes other than Flask and routes which simply return a string. Most includes in requirements.txt have also been stripped out.
Still same issue.
I do have an answer, after a couple of days' worth of pulling my hair out.
Turns out that the 403s were not actually a permissions issue.
az webapp list-runtimes --os windows
The list shows no runtimes available for a Python/Flask web app on Windows. This is why I could not find any gunicorn or Flask logs - neither was set up. Azure deployed the artifact's zip and called it a day.
To rectify this, the DevOps Pipeline/Release must run on Linux. The Azure Web App Deploy task, when set to "Web App on Linux", will have Python runtime stacks available. Once selected, these will allow for a startup command to be specified. (Such as flask run --host=0.0.0.0 --port=8000)
Furthermore, in azuredeploy.json the "Microsoft.Web/serverfarms" resource must have a "kind" specified that includes "linux". It also requires:
"properties": { "reserved": true }
Once deployed, the logs indicate that Docker maps the container to an internal port of 8000, while the default 'flask run' that gets executed listens on 5000.
Ideally, use gunicorn with proper port mapping; but to get things going, tell Flask to use port 8000 (see the example below).
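For instance, assuming the Flask application object is named app inside app.py, a startup command along these lines lets gunicorn serve on the expected port:
gunicorn --bind=0.0.0.0:8000 app:app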

AzureBlobCredentialMissing Error only occurs when triggered, versus no error in Debug

I get the following error in a pipeline whose first activity is a lookup on a storage container to get the contents of a file. When I test the connections, linked service, and datasets, or debug the pipeline, I do not receive any errors. However, when the pipeline is triggered by the storage event, it throws this error:
ErrorCode=AzureBlobCredentialMissing,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Please provide either connectionString or sasUri or serviceEndpoint to connect to Blob.,Source=Microsoft.DataTransfer.ClientLibrary,'
In your scenario, the debug runs succeed but the trigger runs fail. This makes me assume that your dev changes have not been published, which is why the trigger runs fail: debug runs use your current development version, while trigger runs use the most recently published version. In simple terms, the most recent published version of your linked service is different from your development version, which hasn't been published.
If you are using source control, then I would recommend following this tutorial for best practices: Automated publishing for continuous integration and delivery.
If you are using CI/CD, then the issue might indeed be caused by the DevOps pipeline not overriding the linked service parameters. Try redeploying the resource by following the steps below and it should work as expected. (The linked service parameters have to be overwritten in the Azure Resource Manager template deployment.)
For example, if your linked service takes its connection information from template parameters, then you will still have to add those values into the overrideParameters section of the AzureResourceManagerTemplateDeployment task.
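A sketch of what that can look like in the pipeline YAML; the factory name and the linked service parameter name here are hypothetical, so use the parameter names from your own ARM template:
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: '<service-connection>'
    resourceGroupName: '<resource-group>'
    location: '<location>'
    csmFile: '$(System.DefaultWorkingDirectory)/ARMTemplateForFactory.json'
    csmParametersFile: '$(System.DefaultWorkingDirectory)/ARMTemplateParametersForFactory.json'
    # hypothetical parameter names; match them to your exported factory template
    overrideParameters: '-factoryName "<data-factory-name>" -LS_AzureBlob_connectionString "$(BlobConnectionString)"'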

Azure Function publish - "Timed out waiting for SCM to update the Environment Settings"

I've deployed and published several Function Apps without issues over the last 12 months. However, as of this week, when publishing a Function App from PowerShell using the following command:
func azure functionapp publish <functionAppName> --java
I will receive the following error after a few minutes: "Timed out waiting for SCM to update the Environment Settings"
Similarly, I'm also unable to deploy any Function Apps using:
mvn azure-functions:deploy
In the Function App activity log, the following error is logged for both cases:
Operation name: Sync Web Apps Function Triggers.
Status: Failed.
Error code: BadRequest (HTTP Status Code: 400)
Message: Encountered an error (InternalServerError) from host runtime.
So far I've created the Application setting WEBSITE_WEBDEPLOY_USE_SCM (value: true) based on feedback in another topic, which unfortunately hasn't helped. Other than that I've not been able to find much other information on this issue.
Does anyone have any thoughts?
I resolved this issue myself: the application setting WEBSITE_CONTENTAZUREFILECONNECTIONSTRING contained an outdated storage account key.
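If you hit the same thing, a sketch of refreshing that setting from the CLI; the resource group, function app, and storage account names are placeholders:
# Get the current key for the storage account backing the function app
az storage account keys list -g <resource-group> -n <storageaccount> --query "[0].value" -o tsv
# Update the app setting with a connection string that uses the current key
az functionapp config appsettings set -g <resource-group> -n <functionAppName> --settings "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=DefaultEndpointsProtocol=https;AccountName=<storageaccount>;AccountKey=<key>;EndpointSuffix=core.windows.net"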

Azure function error: app does not support remote build as it was created before August 1st, 2019

I am trying to deploy my Azure Function with VS Code using func azure functionapp publish nhtsa --build remote and I am getting the error below.
Remote build is a new feature added to function apps.
Your function app does not support remote build as it was created before August 1st, 2019.
Please use '--build local' or '--build-native-deps'.
For more information, please visit https://aka.ms/remotebuild
I thought it was because of the storage account access tier and access level, so I changed my storage account tier to cool and the container access to public, and deployed the function again, but I'm still getting the error.
Any idea how I can resolve this issue?
Thanks
As the error says, remote build is not supported for function apps created before August 1, 2019 (see the documentation):
If you're having issues with remote build, it might be because your app was created before the feature was made available (August 1, 2019). Try creating a new function app, or running az functionapp update -g <RESOURCE_GROUP_NAME> -n <APP_NAME> to update your function app. This command might take two tries to succeed.
I had the same issue; the solution was to fix the "defaultAction".
It should be "Allow".
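Assuming "defaultAction" here refers to the default network access rule on the function app's storage account (a common cause of remote-build failures), a one-line sketch of the fix:
az storage account update -g <resource-group> -n <storageaccount> --default-action Allow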

How do I use the Serverless Framework in an Azure DevOps Build Pipeline without Browser Authentication?

I'm trying to deploy the simple NodeJS hello-world functions the Serverless Framework provides to my Azure free-tier account, from an Azure DevOps Build Pipeline, using the Service Principal credentials I created when making the deployment from my desktop originally.
I've used several of the Build Agent and Task combinations, including Windows and Ubuntu agents as well as Bash, Command Line, Azure PowerShell, and Azure CLI tasks with the DevOps-provided link to the Service Principal credentials. I've made sure to add them as pipeline variables so that they are included in the tasks' environment variables, and I've confirmed that they are there when the tasks run. I also make sure that the Azure CLI is installed and logged in with the subscription set.
No matter what settings/permissions I tweak or new configurations I try, when the task runs successfully to the point where the Serverless Framework attempts the deployment, it always tries to get me to use a browser to authenticate my account. This obviously defeats the purpose of a CI/CD pipeline, and even if I do use a browser to authenticate, the process just hangs there.
The sample code and deployment works on my desktop, so I know the credentials work. I believe I've emulated each step I take on my desktop in the Build Pipeline, yet while my desktop deploys without browser authentication the build always requests it. Does anyone have experience in this manner and know what step/configuration I'm missing?
To look at the sample code and process look here or run these steps:
serverless create -t azure-nodejs -p testApp
cd .\testApp\
Change Node Runtime and Region in serverless.yml (nodejs12.x not supported & no free tier in West US)
serverless deploy
Here's the link I used to get this working on my desktop: link
Edit: Here is the default serverless.yml created by the steps above:
service: azure-serverless-helloworld

provider:
  name: azure
  region: East US
  runtime: nodejs8.10
  environment:
    VARIABLE_FOO: 'foo'

plugins:
  - serverless-azure-functions

package:
  exclude:
    - local.settings.json
    - .vscode/**
    - index.html

functions:
  hello:
    handler: src/handlers/hello.sayHello
    events:
      - http: true
        x-azure-settings:
          methods:
            - GET
          authLevel: anonymous
  goodbye:
    handler: src/handlers/goodbye.sayGoodbye
    events:
      - http: true
        x-azure-settings:
          methods:
            - GET
          authLevel: anonymous
You can try the steps below: run sls package in a Command Line task to create a deployment package, and then use the Azure Function App task to deploy it to Azure (a YAML sketch of these steps follows the list).
1. Install a specific Node.js version using the Node.js tool installer task.
2. Install serverless using an npm task that runs a custom command.
3. Use an npm task to run the install command and install the dependencies.
4. Use a Command Line task to run sls package and create the deployment package.
5. Use the Azure Function App deploy task to deploy the deployment package.
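A sketch of those five steps as pipeline YAML, assuming the standard NodeTool, script, and AzureFunctionApp tasks; the Node version, service connection, app name, and package path are placeholders:
steps:
  - task: NodeTool@0                      # 1. install a specific Node.js version
    inputs:
      versionSpec: '8.x'
  - script: npm install -g serverless     # 2. install the Serverless Framework
    displayName: 'Install serverless'
  - script: npm install                   # 3. install project dependencies
    displayName: 'Install dependencies'
  - script: sls package                   # 4. create the deployment package
    displayName: 'Package'
  - task: AzureFunctionApp@1              # 5. deploy the generated package
    inputs:
      azureSubscription: '<service-connection>'
      appType: 'functionApp'
      appName: '<functionAppName>'
      package: '$(System.DefaultWorkingDirectory)/.serverless/<generated-package>.zip'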
Right now the Serverless Framework thinks you're trying to deploy your application using the Serverless Dashboard (which does not yet support Azure).
I'm not sure, because you hadn't posted your serverless.yml file, but I think you'll need to remove the app and org attributes from your serverless.yml configuration file. Then it will stop asking you to log in.
Using the Serverless Framework to deploy a function through DevOps gave me the same issue.
The problem is that the sls deploy command will build, package, and deploy the code, but will ask you for credentials each time you run the pipeline.
I solved this by running serverless package in the build task; after that, I deployed the zip the command generated with a normal web app deploy task.
