Azure durable functions: Get execution time

I am building an Azure durable function which has many activities/Azure functions to be called as part of a job's execution.
I have a requirement to view the total execution time taken to complete an orchestration instance. Is there any way to read the total execution time taken to run an instance of a durable function orchestration?

You can calculate the total execution time for an orchestration by calling the statusQueryGetUri endpoint returned when the orchestration instance is started. The call URI should look like this:
http://<functionappname>.azurewebsites.net/runtime/webhooks/durabletask/instances/<instanceId>?taskHub=<taskHub>&connection=<connectionName>&code=<systemKey>&showHistory=true
The response should look like this:
{
    "createdTime": "2018-02-28T05:18:49Z",
    "historyEvents": [
        ...
    ],
    "input": null,
    "customStatus": { "nextActions": ["A", "B", "C"], "foo": 2 },
    "lastUpdatedTime": "2018-02-28T05:18:54Z",
    "output": [
        ...
    ],
    "runtimeStatus": "Completed"
}
The duration can be determined by polling the status URI until runtimeStatus reaches one of the terminal states (Failed, Canceled, Terminated, or Completed), and then subtracting createdTime from lastUpdatedTime.
The following TypeScript snippet shows how the above JSON response (parsed into the status variable) could be processed to show the duration as hh:mm:ss:
interface DurableFunctionJob {
    createdTime: string;
    lastUpdatedTime: string;
}

function formatDuration(status: DurableFunctionJob): string {
    const diff: number = new Date(status.lastUpdatedTime).getTime()
        - new Date(status.createdTime).getTime(); // elapsed milliseconds
    return new Date(diff).toISOString().substring(11, 19); // hh:mm:ss
}
For the example response, this function outputs 00:00:05.
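If you also want to do the polling in code, here is a minimal TypeScript sketch (an assumption on my part: Node 18+ so the global fetch is available; statusQueryGetUri is the URL shown above):
const TERMINAL_STATES = ["Completed", "Failed", "Canceled", "Terminated"];

async function waitForOrchestration(statusQueryGetUri: string): Promise<DurableFunctionJob> {
    while (true) {
        const response = await fetch(statusQueryGetUri);
        const status = await response.json();
        // Stop polling once the orchestration reaches a terminal state
        if (TERMINAL_STATES.includes(status.runtimeStatus)) {
            return status;
        }
        await new Promise(resolve => setTimeout(resolve, 5000)); // wait 5 s between polls
    }
}
Passing the returned status to formatDuration then yields the total duration.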

Related

Regarding Oracle Node as a NEAR Protocol Contract implemented in Rust

Recently, I have been working through the implementation in this repository:
https://github.com/smartcontractkit/near-protocol-contracts
Everything in it is working fine, but here is what I want to ask:
When I executed the request command:
near call oracle.$NEAR_ACCT request '{"payment": "10", "spec_id": "dW5pcXVlIHNwZWMgaWQ=", "callback_address": "client.'$NEAR_ACCT'", "callback_method": "token_price_callback", "nonce": "11", "data_version": "1", "data": "eyJnZXQiOiJodHRwczovL21pbi1hcGkuY3J5cHRvY29tcGFyZS5jb20vZGF0YS9wcmljZT9mc3ltPUVUSCZ0c3ltcz1VU0QiLCJwYXRoIjoiVVNEIiwidGltZXMiOjEwMH0="}' --accountId client.$NEAR_ACCT --gas 300000000000000
The resultant transaction was:
https://explorer.testnet.near.org/transactions/Gr4ddg77Hj1KN2EB3W7vErc6aaDq8sNPfo6KnQWkN9rm
And then, I executed the fulfill_request command:
near call oracle.$NEAR_ACCT fulfill_request '{"account": "client.'$NEAR_ACCT'", "nonce": "11", "data": "Nzg2"}' --accountId oracle-node.$NEAR_ACCT --gas 300000000000000
Then, the resultant transaction was:
https://explorer.testnet.near.org/transactions/39XZF81s9vGDzbUkobQZJufGxAX7wNrPT336S8TEnk29
NOTE:
As you can see, the data parameter passed in the first command (request) is:
eyJnZXQiOiJodHRwczovL21pbi1hcGkuY3J5cHRvY29tcGFyZS5jb20vZGF0YS9wcmljZT9mc3ltPUVUSCZ0c3ltcz1VU0QiLCJwYXRoIjoiVVNEIiwidGltZXMiOjEwMH0=
Base64-decoding it yields:
{"get":"https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD","path":"USD","times":100}
Similarly, in the second command (fulfill_request), I passed the data parameter as:
Nzg2
which base64-decodes to:
786
That is also the result of the second transaction, which you can see about midway down https://explorer.testnet.near.org/transactions/39XZF81s9vGDzbUkobQZJufGxAX7wNrPT336S8TEnk29:
Client contract received price: "786"
So, basically, what I want is to get the actual response of https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD as the result of the fulfill_request command, instead of the hard-coded 786.

JSR223 PostProcessor error logs due to same json path in previous multiple response model

I have seen the error below in the JSR223 PostProcessor logs, despite the script running successfully and the JMeter steps having no issues.
Caused by: java.lang.NullPointerException: Cannot get property 'result' on null object
My JSR223 PostProcessor script (Groovy 3.0.7) looks like this:
import groovy.json.JsonSlurper;
// Parse the previous sampler's response and store the first account number
String json = prev.getResponseDataAsString();
def root = new JsonSlurper().parseText(json);
def C_accountNumber = root.myAccount.result[0].accountNumber;
vars.put("C_accountNumber", C_accountNumber);
The response immediately preceding the script looks like this:
{
    "myAccount": {
        "result": [
            {
                "accountNumber": "Something",
                "accountAddress": "Something"
            }
        ]
    }
}
However, I noticed that several responses from earlier steps, well before the above script, contain exactly the same path (myAccount.result[0]). I am guessing this causes the issue, because at least one of those earlier responses has myAccount.result[0] but no accountNumber.
e.g.
{
    "myAccount": {
        "result": [
            {
                "country": "Something",
                "address": "Something"
            }
        ]
    }
}
My understanding is that the line String json = prev.getResponseDataAsString(); in the Groovy script is run against all the previous responses, not just the immediately preceding one. Is that correct?
Is there a way to get rid of these errors in the log?
Your PostProcessor should be defined as a child of the HTTP request.
A JMeter Post-Processor executes within a scope; to run it for a specific HTTP request, add it as a child of that request.
A Post-Processor executes some action after a Sampler request has been made. If a Post-Processor is attached to a Sampler element, it will execute just after that sampler element runs.
Be aware of the JMeter Scoping Rules:
If you put the PostProcessor at the same level as several Samplers, it will be applied to all of them.
So if you want to execute the PostProcessor after a certain sampler only, you need to make the PostProcessor a child of that particular Sampler, as in the outline below.
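A hypothetical test plan outline (the sampler names are illustrative):

Test Plan
    Thread Group
        HTTP Request 1              (earlier response: myAccount.result[0] without accountNumber)
        HTTP Request 2              (response containing myAccount.result[0].accountNumber)
            JSR223 PostProcessor    (scoped as a child: runs only after HTTP Request 2)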

What is the best way to queue AWS Step Function executions?

I am triggering a step function through an Express route in a Node app. This step function interacts with the Ethereum blockchain and is thus highly asynchronous. There is also a possibility of transactions failing if multiple attempts are made at once.
As such, I want to queue up executions of this step function, but oddly there doesn't seem to be a straightforward way to do so.
What is the best way to do this?
You can go with a Map state in Step Functions.
Step Functions provides a built-in way to process a given list of items either in parallel or one at a time. Below is an example:
"Validate-All": {
"Type": "Map",
"InputPath": "$.detail",
"ItemsPath": "$.shipped",
"MaxConcurrency": 0,
"Iterator": {
"StartAt": "Validate",
"States": {
"Validate": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:123456789012:function:ship-val",
"End": true
}
}
},
"ResultPath": "$.detail.shipped",
"End": true
}
You need to change MaxConcurrency to 1 so that only one iteration runs at a time; the Map state keeps going until the items selected by ItemsPath are exhausted. ItemsPath is expected to point at a list, so you can queue your items up in that list and start the state machine.
You can read more in the Map state documentation.
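If you need to start such an execution from the Express/Node app, here is a minimal sketch using the AWS SDK for JavaScript v3 (the state machine ARN is a placeholder, and the input shape matches the Map example above):

import { SFNClient, StartExecutionCommand } from "@aws-sdk/client-sfn";

const sfn = new SFNClient({ region: "us-east-1" });

// Start one execution whose input carries the whole work list; the Map state
// with MaxConcurrency set to 1 then processes the items one at a time.
async function enqueueItems(items: string[]): Promise<void> {
    await sfn.send(new StartExecutionCommand({
        stateMachineArn: "arn:aws:states:us-east-1:123456789012:stateMachine:validate-all", // placeholder ARN
        input: JSON.stringify({ detail: { shipped: items } }),
    }));
}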

Azure Functions table binding not being created when developing locally

I'm attempting to use an output table binding with an Azure Function V2 (node).
I have added the table binding to function.json, as described in the documentation.
{
    "tableName": "Person",
    "connection": "MyStorageConnectionAppSetting",
    "name": "tableBinding",
    "type": "table",
    "direction": "out"
}
And then I am attempting to insert some content into that table, again using the example described in the documentation.
for (var i = 1; i < 10; i++) {
    context.bindings.tableBinding.push({
        PartitionKey: "Test",
        RowKey: i.toString(),
        Name: "Name " + i
    });
}
To confirm: I have also added a setting called MyStorageConnectionAppSetting to local.settings.json, with a valid Storage Account connection string as its value.
Sadly though, this is failing and I'm seeing the following error:
System.Private.CoreLib: Exception while executing function: Functions.config. System.Private.CoreLib: Result: Failure
Exception: TypeError: Cannot read property 'push' of undefined
It seems that the binding object has not been created as expected, but I have no idea why.
The package Microsoft.Azure.WebJobs.Extensions.Storage is included in extensions.csproj, and the Function App starts just fine when I call func start.
Although I believe no connection to the Storage Account is taking place, I did try running my function both when the table existed and when it didn't.
Make sure the parameter has been initialized before use. The output binding is undefined unless it is initialized or assigned a value first:
context.bindings.tableBinding = [];
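With that in place, the loop from the question works unchanged:

context.bindings.tableBinding = []; // initialize the output binding first
for (var i = 1; i < 10; i++) {
    context.bindings.tableBinding.push({
        PartitionKey: "Test",
        RowKey: i.toString(),
        Name: "Name " + i
    });
}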

Azure Stream Analytics: Specified cast is not valid

I'm having a hard time tracking down a casting error in Azure Stream Analytics. The input data is coming from an Azure IoT Hub. Here's my query code:
-- Create average data from raw data
WITH AverageSensorData AS
(
    SELECT
        [Node],
        [SensorType],
        udf.round(AVG([Value]), 2) AS [Value],
        MAX(TRY_CAST([Timestamp] AS DateTime)) AS [Timestamp]
    FROM [SensorData]
    WHERE TRY_CAST([Timestamp] AS DateTime) IS NOT NULL
    GROUP BY
        [Node],
        [SensorType],
        TumblingWindow(minute, 2)
)
-- Insert average data into PowerBI-Output
SELECT
    [Node],
    [SensorType]
    [Value],
    [Timestamp],
    DATETIMEFROMPARTS(
        DATEPART(yy, [Timestamp]),
        DATEPART(mm, [Timestamp]),
        DATEPART(dd, [Timestamp]),
        0, 0, 0, 0) AS [Date]
INTO [SensorDataAveragePowerBI]
FROM [AverageSensorData]
While this runs fine most of the time (at least for a couple of hundred or thousand input entities), it eventually fails. After turning on diagnostic logs, I was able to find the following error message in the corresponding execution log (it was actually in JSON format; I cleaned it up a little for readability):
Message: Runtime exception occurred while processing events, - Specified cast is not valid. OutputSourceAlias: averagesensordata; Type: SqlRuntimeError, Correlation ID: 49938626-c1a3-4f19-b18d-ee2c5a5332f9
And here's some JSON input that probably caused the error:
[
    {
        "Date": "2017-04-27T00:00:00.0000000",
        "EventEnqueuedUtcTime": "2017-04-27T07:53:52.3900000Z",
        "EventProcessedUtcTime": "2017-04-27T07:53:50.6877268Z",
        "IoTHub": {
            /* Some more data that is not being used */
        },
        "Node": "IoT_Lab",
        "PartitionId": 0,
        "SensorType": "temperature",
        "Timestamp": "2017-04-27T09:53:50.0000000",
        "Value": 21.06
    },
    {
        "Date": "2017-04-27T00:00:00.0000000",
        "EventEnqueuedUtcTime": "2017-04-27T07:53:53.6300000Z",
        "EventProcessedUtcTime": "2017-04-27T07:53:52.0157515Z",
        "IoTHub": {
            /* Some more data that is not being used */
        },
        "Node": "IT_Services",
        "PartitionId": 2,
        "SensorType": "temperature",
        "Timestamp": "2017-04-27T09:53:52.0000000",
        "Value": 27.0
    }
]
The first entry was the last one to go through, so the second one might have been the one breaking everything. I'm not sure, though, and I don't see any suspicious values here. If I upload this as test data within the Azure portal, no errors are raised.
The query above explicitly uses casting for the [Timestamp] column. But since I'm using TRY_CAST, I wouldn't expect any casting errors:
Returns a value cast to the specified data type if the cast succeeds; otherwise, returns null.
As I said, the error only appears once in a while (sometimes after 20 minutes, sometimes after a couple of hours) and cannot be reproduced on demand. Does anyone have an idea where the error originates, or whether there is a chance of getting more detailed error information?
Thanks a lot in advance.
UPDATE: Here's the source of the udf.round function:
function main(value, decimalPlaces) {
    // Only round actual numbers
    if (typeof(value) === 'number') {
        var decimalMultiplier = 1;
        if (decimalPlaces) {
            decimalMultiplier = Math.pow(10, decimalPlaces);
        }
        return Math.round(value * decimalMultiplier) / decimalMultiplier;
    }
    // Anything else (e.g. a string) is returned unchanged
    return value;
}
Unfortunately, it's been a while since I wrote this, so I'm not a hundred percent sure why I wrote exactly this code. One thing I do remember, though, is that all the messages I analyzed always contained a valid number in the respective field. I still think there's a good chance this function is responsible for my problem.
