Handling optional parameters in Azure function table bindings - azure

I'm trying to use a filter on an Azure function to look up a row in a table based on an optional route parameter. If the parameter isn't provided, or doesn't match a row, a default row should be returned. This partly works: if I provide a matching value or a wrong value, I get a row back as expected. But if I provide no value at all, I get the following error:
Exception while executing function: Functions.link. Microsoft.Azure.WebJobs.Extensions.Storage: Object reference not set to an instance of an object.
The three key bits (AFAIU) are:
httpTrigger route: "link/{ref?}"
Table filter: "Email eq '{ref}' or Default eq true"
Endpoint URL: https://[subdomain].azurewebsites.net/api/link/(ref)
Is there some way to construct the filter or route so that I get the second clause of the filter when the optional parameter is not provided? Or is there a better way to do this?
My full function.json looks like this:
{
  "bindings": [
    {
      "authLevel": "anonymous",
      "type": "httpTrigger",
      "route": "link/{ref?}",
      "direction": "in",
      "name": "req",
      "methods": [
        "get"
      ]
    },
    {
      "name": "emailRule",
      "type": "table",
      "take": "1",
      "filter": "Email eq '{ref}' or Default eq true",
      "tableName": "RedirectRules",
      "connection": "TestStorageConnectionAppSetting",
      "direction": "in"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}

Try using this route: "route": "link/{ref=''}". That way you don't get a null value but always an empty string.
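In function.json that would mean changing only the httpTrigger binding's route; the table filter can stay as it is, since an empty {ref} fails the first clause and falls through to Default eq true (assuming no row has an empty Email). A minimal sketch of the adjusted trigger binding:
{
  "authLevel": "anonymous",
  "type": "httpTrigger",
  "route": "link/{ref=''}",
  "direction": "in",
  "name": "req",
  "methods": [
    "get"
  ]
}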

Related

Using HTTP header value in Azure Function binding

I am trying to read the api_key from the headers and pass it as a binding expression, as shown below:
{
  "type": "table",
  "direction": "in",
  "name": "subscriptions",
  "tableName": "subscriptions",
  "partitionKey": "{api_key}",
  "take": "50",
  "connection": "learnbindingslab1_STORAGE"
}
What would be the correct expression to get the api_key from the request headers?
It seems you can use any value under context.bindingData, and you can access nested data with dot notation - so in this case given:
context.bindingData =
{
  "headers": {
    "some-header": "value"
  }
}
You can use it as a binding expression in Id/PartitionKey/sqlQuery with {headers.some-header}.
{
  "Id": "{headers.some-id}",
  "PartitionKey": "{headers.some-key}",
  "sqlQuery": "SELECT * FROM O WHERE O.key = {headers.some-value}"
}
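Applied to the binding from the question, that would look roughly like this (a sketch, assuming the incoming request header is literally named api_key):
{
  "type": "table",
  "direction": "in",
  "name": "subscriptions",
  "tableName": "subscriptions",
  "partitionKey": "{headers.api_key}",
  "take": "50",
  "connection": "learnbindingslab1_STORAGE"
}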

How to insert data into a BigQuery table with custom fields using NodeJS?

I'm using the npm BigQuery module for inserting data into BigQuery. I have a custom field, say params, which is of type RECORD and accepts any int, float, or string value as a key-value pair. How can I insert into such fields?
I looked into this, but could not find anything useful:
[https://cloud.google.com/nodejs/docs/reference/bigquery/1.3.x/Table#insert]
If I understand correctly, you are asking for a map whose values can be of ANY TYPE, which is not supported in BigQuery.
You can instead model the map with one value column per type, using a REPEATED RECORD like the schema below.
Your insert code then needs to pick the correct *_value field to set for each entry.
{
  "name": "map_field",
  "type": "RECORD",
  "mode": "REPEATED",
  "fields": [
    {
      "name": "key",
      "type": "STRING"
    },
    {
      "name": "int_value",
      "type": "INTEGER"
    },
    {
      "name": "string_value",
      "type": "STRING"
    },
    {
      "name": "float_value",
      "type": "FLOAT"
    }
  ]
}
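With that schema, an insert from Node.js might look roughly like this (a sketch only: my_dataset and my_table are hypothetical names, and the client is the 1.x @google-cloud/bigquery package referenced in the question):
const BigQuery = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

// Each entry of the original "params" map becomes one element of the
// repeated record, setting only the *_value column that matches its type.
const rows = [
  {
    map_field: [
      { key: 'retries', int_value: 3 },
      { key: 'label', string_value: 'example' },
      { key: 'ratio', float_value: 0.5 }
    ]
  }
];

bigquery
  .dataset('my_dataset')   // hypothetical dataset name
  .table('my_table')       // hypothetical table name
  .insert(rows)
  .then(() => console.log('Inserted'))
  .catch(err => console.error('Insert failed', err));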

Azure Data Factory v2 using utcnow() as a pipeline parameter

For context, I currently have a Data Factory v2 pipeline with a ForEach Activity that calls a Copy Activity. The Copy Activity simply copies data from an FTP server to a blob storage container.
Here is the pipeline JSON file:
{
  "name": "pipeline1",
  "properties": {
    "activities": [
      {
        "name": "ForEach1",
        "type": "ForEach",
        "typeProperties": {
          "items": {
            "value": "#pipeline().parameters.InputParams",
            "type": "Expression"
          },
          "isSequential": true,
          "activities": [
            {
              "name": "Copy1",
              "type": "Copy",
              "policy": {
                "timeout": "7.00:00:00",
                "retry": 0,
                "retryIntervalInSeconds": 30,
                "secureOutput": false
              },
              "typeProperties": {
                "source": {
                  "type": "FileSystemSource",
                  "recursive": true
                },
                "sink": {
                  "type": "BlobSink"
                },
                "enableStaging": false,
                "cloudDataMovementUnits": 0
              },
              "inputs": [
                {
                  "referenceName": "FtpDataset",
                  "type": "DatasetReference",
                  "parameters": {
                    "FtpFileName": "#item().FtpFileName",
                    "FtpFolderPath": "#item().FtpFolderPath"
                  }
                }
              ],
              "outputs": [
                {
                  "referenceName": "BlobDataset",
                  "type": "DatasetReference",
                  "parameters": {
                    "BlobFileName": "#item().BlobFileName",
                    "BlobFolderPath": "#item().BlobFolderPath"
                  }
                }
              ]
            }
          ]
        }
      }
    ],
    "parameters": {
      "InputParams": {
        "type": "Array",
        "defaultValue": [
          {
            "FtpFolderPath": "/Folder1/",
            "FtpFileName": "#concat('File_',formatDateTime(utcnow(), 'yyyyMMdd'), '.txt')",
            "BlobFolderPath": "blobfolderpath",
            "BlobFileName": "blobfile1"
          },
          {
            "FtpFolderPath": "/Folder2/",
            "FtpFileName": "#concat('File_',formatDateTime(utcnow(), 'yyyyMMdd'), '.txt')",
            "BlobFolderPath": "blobfolderpath",
            "BlobFileName": "blobfile2"
          }
        ]
      }
    }
  },
  "type": "Microsoft.DataFactory/factories/pipelines"
}
The issue I am having is that when specifying pipeline parameters, it seems I cannot use system variables and functions the same way I can when, for example, specifying folder paths for a blob storage dataset.
The consequence of this is that formatDateTime(utcnow(), 'yyyyMMdd') is not interpreted as a function call but rather as the literal string formatDateTime(utcnow(), 'yyyyMMdd').
To counter this, I am guessing I should be using a trigger to execute my pipeline and pass the trigger's execution time as a parameter to the pipeline, like trigger().startTime, but is this the only way? Am I simply doing something wrong in my pipeline's JSON?
This should work:
File_#{formatDateTime(utcnow(), 'yyyyMMdd')}
Or complex paths as well:
rootfolder/subfolder/#{formatDateTime(utcnow(),'yyyy')}/#{formatDateTime(utcnow(),'MM')}/#{formatDateTime(utcnow(),'dd')}/#{formatDateTime(utcnow(),'HH')}
You can't put a dynamic expression in the default value. You should define the expression either when you create a trigger, or when you set the dataset parameters in the copy activity's source/sink.
So one option is to create the FtpFileName dataset property with some default value in the source dataset, and then specify the dynamic expression for it in the copy activity's source settings, as sketched below.
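For example, the copy activity's inputs could pass the expression directly as the dataset parameter (a sketch of just the relevant fragment, keeping the rest of the pipeline as in the question):
"inputs": [
  {
    "referenceName": "FtpDataset",
    "type": "DatasetReference",
    "parameters": {
      "FtpFileName": "File_#{formatDateTime(utcnow(), 'yyyyMMdd')}.txt",
      "FtpFolderPath": "#item().FtpFolderPath"
    }
  }
]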
Another way is to define a pipeline parameter and then assign the dynamic expression to that parameter when you define a trigger. Hope this is a clear answer to you. :)
Default values of parameters cannot be expressions; they must be literal strings.
You could use a trigger to achieve this, as sketched below, or you could extract the common part of your expressions and just put literal values into the ForEach items.
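A rough sketch of the trigger approach (the trigger name, schedule, and the FileDate pipeline parameter are assumptions for illustration; the pipeline would then build its file names from that parameter):
{
  "name": "DailyTrigger",
  "properties": {
    "type": "ScheduleTrigger",
    "typeProperties": {
      "recurrence": {
        "frequency": "Day",
        "interval": 1,
        "startTime": "2018-06-01T00:00:00Z"
      }
    },
    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "pipeline1",
          "type": "PipelineReference"
        },
        "parameters": {
          "FileDate": "@{formatDateTime(trigger().scheduledTime, 'yyyyMMdd')}"
        }
      }
    ]
  }
}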

Are Azure Function Bindings Executed in Order?

In a given Azure Function, I can have 1 or more output bindings. For example, I might have a blob storage output (writing a file blob to storage) and a queue output (pushing a message into a queue).
For example, if I have this very simple Azure function (written in Node.js)...
module.exports = function (context, req) {
    context.log('START: Multi-output function.');
    context.bindings.outputBlob = "blob-contents";
    context.bindings.outputQueueItem = "{'message': 'hello!'}";
    context.done();
};
... with the output bindings set up in function.json as follows...
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "blob",
      "name": "outputBlob",
      "path": "outcontainer/{rand-guid}",
      "connection": "AzureWebJobsDashboard",
      "direction": "out"
    },
    {
      "type": "queue",
      "name": "outputQueueItem",
      "queueName": "outqueue",
      "connection": "AzureWebJobsDashboard",
      "direction": "out"
    }
  ],
  "disabled": false
}
... when do the two output bindings actually fire, and in which order?
For the when part of the question:
Do they fire at the point where the function sets the output binding? (e.g. the line of code that sets context.bindings.outputBlob)
Do they fire at/after context.done()?
For the order part of the question:
Do they fire in the order they're seen in the code?
Do they fire in the order they're seen in function.json ?
Output bindings fire after the function execution is completed - after context.done().
The order that you set them in the code has no influence on binding executions.
If you can, treat the actual execution order as an implementation detail and do not rely on it. Having said that, if I'm not mistaken, the actual order will be:
Execute all non-queue bindings in order of function.json
Then, execute all queue bindings in order of function.json
UPDATE: based on this issue and this issue I conclude that order is not guaranteed at the moment.
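To make the timing concrete, here is a minimal sketch reusing the question's binding names: the assignments only populate an in-memory object, and the host flushes both outputs after context.done(), so the order of the assignments in code does not matter.
module.exports = function (context, req) {
    // The order of these two assignments has no effect on when
    // (or in what order) the outputs are actually written.
    context.bindings.outputQueueItem = "{'message': 'hello!'}";
    context.bindings.outputBlob = "blob-contents";

    // Both output bindings are flushed by the host after this call;
    // their relative execution order is an implementation detail.
    context.done();
};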

Error indexing method FunctionName does not resolve to a value

Does anyone know why I'm getting this error?
Here is my SendGrid output binding:
{
  "bindings": [
    {
      "name": "docId",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "movedocument",
      "connection": "cpoffice365_STORAGE"
    },
    {
      "type": "sendGrid",
      "name": "message",
      "apiKey": "<MYSENDGRIDKEY>",
      "from": "<MYFROMEMAIL>",
      "direction": "out"
    }
  ],
  "disabled": false
}
My code compiles, but then it throws this error in the log:
Microsoft.Azure.WebJobs.Host: Error indexing method 'Functions.<MYFUNCTIONAME>'. Microsoft.Azure.WebJobs.Host: '<MYSENDGRIDKEY>' does not resolve to a value.
Statto,
Please make sure you define an app setting (Function app settings > App Settings) with a name matching what you've used in your binding configuration, where the value is your SendGrid key.
The binding configuration expects that to be an app setting name, not the actual key, as sketched below.
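For example (a minimal sketch; the app setting name SendGridApiKey is an assumption, use whatever setting name you created):
{
  "type": "sendGrid",
  "name": "message",
  "apiKey": "SendGridApiKey",
  "from": "<MYFROMEMAIL>",
  "direction": "out"
}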
Thanks!
