Scheduling an httpTrigger function - azure

I'm trying to set the schedule field in function.json on my "httpTrigger" type function, but the timer functionality doesn't seem to run. My goal is to have a single function that can be both scheduled and started manually if needed, without having to add another function just for scheduling.
{
  "disabled": false,
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get"
      ],
      "schedule": "0 0 * * * *"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

The only trigger type that you can schedule is timerTrigger.
So, if you need a piece of code to both run on a schedule and be callable from a nice URL, you will have to create two Functions with two trigger types.
If you don't need a nice HTTP URL, you can call a timer Function manually, see this answer.
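For reference, a timer-triggered function can be invoked on demand through the Functions host's admin API. The sketch below is a minimal Python illustration, assuming a hypothetical function app, a function named MyTimerFunction and a placeholder master key; adjust all three for your app.
import requests

# Hypothetical values - replace with your own function app, function name and master key.
FUNCTION_APP = "https://my-function-app.azurewebsites.net"
FUNCTION_NAME = "MyTimerFunction"
MASTER_KEY = "<master-key>"

# POSTing to the admin endpoint triggers a non-HTTP function manually.
resp = requests.post(
    f"{FUNCTION_APP}/admin/functions/{FUNCTION_NAME}",
    headers={"x-functions-key": MASTER_KEY, "Content-Type": "application/json"},
    json={"input": ""},  # payload passed to the function; empty is fine for a timer trigger
)
print(resp.status_code)  # 202 Accepted means the invocation was queued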

My goal is to have a single function that can be both scheduled and started manually if needed, without having to add another function just for scheduling.
As other members mentioned, HttpTrigger does not support a schedule. For your requirement, you could create another function or invoke your function programmatically, as Mikhail answered.
Per my understanding, you could also leverage Azure Scheduler as a simpler approach to schedule your Azure Function, by calling one of the following on a schedule:
GET https://{your-function-name}.azurewebsites.net/api/HttpTriggerCSharp1?code={your-function-key}
Or
GET https://{your-function-name}.azurewebsites.net/api/HttpTriggerCSharp1
Header x-functions-key:{your-function-key}
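Whichever scheduler you use, the call itself is just an HTTP request. A minimal Python sketch, assuming a hypothetical function app name and a placeholder function key:
import requests

# Hypothetical values - replace with your own app name, function name and key.
URL = "https://my-function-app.azurewebsites.net/api/HttpTriggerCSharp1"
FUNCTION_KEY = "<function-key>"

# Option 1: pass the key as the "code" query string parameter.
resp = requests.get(URL, params={"code": FUNCTION_KEY})

# Option 2: pass the key in the x-functions-key header instead.
resp = requests.get(URL, headers={"x-functions-key": FUNCTION_KEY})

print(resp.status_code, resp.text)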

Related

Azure Data Factory Pipelines API: Cannot apply filters

The issue might be trivial, but I cannot find out what I am doing wrong. I am trying to check whether there are any "in progress" runs of a specific pipeline within my data factory. The call below gives me a full list of all runs in my ADF (correct).
However, the moment I add either a filter on a pipeline name or a filter on status, the results are empty, even though there are valid runs that should be returned.
Assuming we are talking about the same API, the documentation says that lastUpdatedAfter and lastUpdatedBefore are required fields in the request body.
Take note of the format of these dates in the example Microsoft provides:
{
  "lastUpdatedAfter": "2018-06-16T00:36:44.3345758Z",
  "lastUpdatedBefore": "2018-06-16T00:49:48.3686473Z",
  "filters": [
    {
      "operand": "PipelineName",
      "operator": "Equals",
      "values": [
        "examplePipeline"
      ]
    }
  ]
}
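As a rough illustration, the full request could look like the Python sketch below. It is a sketch only, assuming the Pipeline Runs - Query By Factory REST endpoint, placeholder subscription, resource group, factory and bearer-token values, and a status filter using the operand "Status" with the value "InProgress".
import requests

# Hypothetical placeholders - substitute your own values.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY = "<factory-name>"
TOKEN = "<aad-bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
    f"/factories/{FACTORY}/queryPipelineRuns?api-version=2018-06-01"
)

body = {
    # Both date bounds are required; note the ISO 8601 / UTC format.
    "lastUpdatedAfter": "2018-06-16T00:00:00.0000000Z",
    "lastUpdatedBefore": "2018-06-17T00:00:00.0000000Z",
    "filters": [
        {"operand": "PipelineName", "operator": "Equals", "values": ["examplePipeline"]},
        {"operand": "Status", "operator": "Equals", "values": ["InProgress"]},
    ],
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
print(resp.json())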

Using parameters to locate file during trigger creation in Azure Data Factory

I am trying to create a trigger that I will use for starting a pipeline in ADF:
The folder I want to set my trigger on can have different paths:
202001/Test/TriggerFolder
202002/Test/TriggerFolder
202003/Test/TriggerFolder
etc..
Therefore, in my Blob path begins with I would like to use a parameter (that I will set somewhere else through another pipeline) that tells the trigger where to look, instead of having a static file name.
Unfortunately it doesn't seem to give me the chance to add dynamic content as, for example, in a DataSet.
If there is really no way, perhaps because a trigger is something instantiated only once, is there a way to create a trigger as a step during a pipeline?
Thank you!
It is possible to pass a parameter through the ARM template of the Azure Data Factory. During deployment, this parameter can be supplied with the necessary value. Below is example code for it.
Sample Code:
{
  "name": "[concat(parameters('factoryName'), '/trigger1')]",
  "type": "Microsoft.DataFactory/factories/triggers",
  "apiVersion": "2018-06-01",
  "properties": {
    "annotations": [],
    "runtimeState": "Stopped",
    "pipelines": [],
    "type": "BlobEventsTrigger",
    "typeProperties": {
      "blobPathBeginsWith": "[parameters('trigger1_properties_typeProperties_blobPathBeginsWith')]",
      "ignoreEmptyBlobs": true,
      "scope": "[parameters('trigger1_properties_typeProperties_scope')]",
      "events": [
        "Microsoft.Storage.BlobCreated"
      ]
    }
  }
}
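If you would rather set the dynamic path outside of an ARM deployment (for example from a script or a step that runs custom code), one option is the Data Factory management REST API. The sketch below is an assumption-laden Python illustration using the Triggers - Create Or Update endpoint with placeholder subscription, resource group, factory, container, storage-account and token values; it mirrors the ARM resource above rather than prescribing the only way to do this.
import requests

# Hypothetical placeholders - replace with your own values.
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY = "<factory-name>"
STORAGE_ID = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
              "/providers/Microsoft.Storage/storageAccounts/<storage-account>")
TOKEN = "<aad-bearer-token>"

month = "202001"  # the dynamic part of the path, computed at deployment time

url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
    f"/factories/{FACTORY}/triggers/trigger1?api-version=2018-06-01"
)

body = {
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            # Blob event triggers expect paths in the form /<container>/blobs/<folder path>.
            "blobPathBeginsWith": f"/mycontainer/blobs/{month}/Test/TriggerFolder/",
            "ignoreEmptyBlobs": True,
            "scope": STORAGE_ID,
            "events": ["Microsoft.Storage.BlobCreated"],
        },
        "pipelines": [],
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
print(resp.status_code)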

Azure TimerTrigger Multiple instances with different configuration

Using VS Code + the "Azure Functions" extension, I generated the default Python 3.7 timer trigger function with the following settings:
// functions.json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "mytimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 */6 * * *"
    }
  ]
}
I have also set up two environment variables, "USER" and "PASSWORD", in the Configuration of the app service:
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "****************",
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "USER": "********",
    "PASSWORD": "*********"
  }
}
Goal:
I want to run two instances of the same function, but with two different configurations, i.e. different users and passwords.
Problem:
I believe that the Configuration/App Settings might not be sufficient for this. I can't find a way to run the function twice with different parameters.
Question: What options do I have to reach my goal? One idea I had was to put the user/password into functions.json, but I could not figure out how to access that information from within the function.
You have two options:
1. Read a custom JSON file (not necessarily function.json): you can add a custom JSON file to the function app, read the values you want according to the hierarchy of that JSON file, and then use those values in the trigger.
2. Use a deployment slot. (This is the official method; I think it is completely suitable for your current needs.) In the newly created slot you can use completely different environment variables in the Configuration settings.
This is the doc:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-deployment-slots
I'd probably do it by having a single setting that holds a JSON array, viz
"Credentials": "[{'username':'***','password':'***'},{'username':'******','password':'******'}]"
Then, assuming you want to process them all at the same time, make a single function that parses the array and iterates over each username and password.
If you need to run them on different schedules, create a shared Python function DoTheThing(credentialIndex) that actually does the work and then multiple Azure Functions that simply call DoTheThing(0), DoTheThing(1), ...
(Security note: not immediately relevant to the problem at hand, but secrets are best kept in a secret store such as Key Vault rather than directly in the settings)
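A minimal sketch of that approach, assuming a hypothetical "Credentials" app setting shaped like the array above (stored as standard double-quoted JSON) and a placeholder do_the_thing worker:
import json
import os

def do_the_thing(username, password):
    # Placeholder for the actual work done per credential pair.
    print(f"Running as {username}")

# Read and parse the JSON array stored in the "Credentials" app setting.
credentials = json.loads(os.environ["Credentials"])

for cred in credentials:
    do_the_thing(cred["username"], cred["password"])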
EDIT/SOLUTION:
I ended up having the following keys in my environment variables:
"USERS": "[\"UserA\", \"UserB\"]"
"UserA_USER": "Username1"
"UserA_PW": "Password1"
"UserB_USER": "Username2"
"UserB_PW": "Password2"
Then I iterated over the USERS array and retrieved the keys for each user like so:
import os
import json

# Parse the list of user prefixes from the USERS app setting.
users = json.loads(os.environ["USERS"])

for u in users:
    # Look up this user's credentials from the corresponding app settings.
    user = os.environ[u + "_USER"]
    pw = os.environ[u + "_PW"]
    doStuff(user, pw)

Is it possible to create a blob triggered azure function with file name pattern?

I am developing a blob triggered azure function. Following is the configuration of my "function.json" file:
{
  "disabled": false,
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "input/{name}",
      "connection": "BlobConnectionString"
    }
  ]
}
My function is working fine. It is triggered for all files in the "input" container. Now I want to filter files by their naming pattern. For example: I want to trigger my Azure function only for those files which contain "~123~" in their name.
Is it possible to do this with some change to the "path" property of the "function.json" file?
If yes, then what should be the value of the "path" property?
If not, please let me know if there is any other workaround possible.
Thanks,
input/{prefix}~123~{suffix} should work. In the function method signature, bind to prefix and suffix instead of name to reconstruct the blob name if needed.
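As a rough Python illustration (the question does not state the language, so Python here is an assumption): with "path" set to "input/{prefix}~123~{suffix}" in function.json, only matching blobs trigger the function, and the full blob name is still available on the input stream if you need to inspect it.
import logging
import azure.functions as func

def main(myBlob: func.InputStream):
    # myBlob.name holds the full blob path, e.g. "input/abc~123~def.csv",
    # so only blobs matching the input/{prefix}~123~{suffix} pattern arrive here.
    logging.info("Processing blob: %s (%s bytes)", myBlob.name, myBlob.length)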

In Azure Functions, using a Bash script, is it possible to access properties from the queue message trigger?

Using Azure Functions, I'd like to use the properties from a dequeued message as arguments in my Bash script. Is this possible? And if so, how? It seems documentation on Bash Azure Functions is a bit sparse.
I have looked at:
This documentation on binding to custom input properties. It gives C#/JavaScript examples, but no Bash samples.
And this GitHub sample with a similar Batch function.
However, after trying to apply the similar concepts to my function, I've come up short.
Here is my setup:
Functions.json
{
  "bindings": [
    {
      "name": "inputMessage",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "some-queue",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "disabled": false
}
Run.sh
echo "My name is $FirstName $LastName"
Sample Queue Message
{
  "FirstName": "John",
  "LastName": "Doe"
}
Actual Result
My name is:
What I'm hoping for
My name is: John Doe
Any thoughts on how to accomplish this, either by updating Functions.json or Run.sh?
For the Bash queue trigger, the queue message comes in as a plain string, and you need to parse the JSON yourself in run.sh. Note that the Bash queue trigger is experimental. I think it's not easy to implement a JSON parser in Bash, as you can't install third-party tools like jq in the Function App sandbox.
You can easily extract JSON properties from the queue message using other languages (JS/C#/PowerShell).
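If switching languages is an option, a minimal Python sketch of the same function (assuming the binding name inputMessage from Functions.json and the queueTrigger binding shown above) could look like this:
import json
import logging
import azure.functions as func

def main(inputMessage: func.QueueMessage):
    # The queue message body arrives as bytes; decode it and parse the JSON payload.
    data = json.loads(inputMessage.get_body().decode("utf-8"))
    first = data.get("FirstName", "")
    last = data.get("LastName", "")
    logging.info("My name is: %s %s", first, last)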
