How to call an API on an interval in Superblocks

I tried Superblocks and wanted to call an API for a Grid Component on an interval. My Grid Component pulls data from a Google Sheet, and I wanted it to update every 2 seconds, but I wasn't able to find an auto-refresh feature to poll the API on that interval. Is there a way to do this in Superblocks?

In Superblocks you can use Timers to call actions on a pre-defined interval.
In this case you can create a Timer, set the desired interval in milliseconds, choose Run APIs as the Trigger, and pick the API that polls the data from your Google Sheet from the dropdown.
The Trigger then calls your API on the specified interval and auto-refreshes your Grid Component with the updated data.

Related

Executing Azure Function back to back

If I schedule a timer-triggered Azure function to run every second and my function takes 2 seconds to execute, will I just get back-to-back executions, or will some execution queue eventually overflow?
Background:
We have a timer-triggered Azure function that currently executes every 30 seconds and checks for new rows in a database table. If there are new rows, the data will be processed and the rows will be marked as handled.
If there are no new rows the execution is very fast. If there are 500 new rows (which is the max we are fetching at the moment) the execution takes about 20-25 seconds.
We would like to decrease the interval to one second to reduce the latency of row processing.
Update: I want back-to-back executions and I want to avoid overlapping executions.
Multiple Azure functions can run concurrently. This means you can trigger the function again while a previously triggered execution is still running; both will run concurrently. Executions will only queue up if you set options to run only one function at a time on one instance, but it doesn't look like you want that.
With concurrency, this means that two executions can read the same table in the DB at the same time. So you should read your table with the UPDLOCK hint LINK. This will prevent a subsequently triggered execution from reading the same rows that were read by the previous one.
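As a rough sketch of that pattern (the table and column names, the SqlConnectionString app setting, and the every-second schedule are illustrative assumptions, not details from the question), a timer-triggered function could claim and process rows like this:
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Logging;

public static class ProcessNewRows
{
    // "*/1 * * * * *" is a six-field NCRONTAB expression: fire every second.
    [FunctionName("ProcessNewRows")]
    public static void Run([TimerTrigger("*/1 * * * * *")] TimerInfo timer, ILogger log)
    {
        // Atomically claim up to 500 unhandled rows and return them.
        // UPDLOCK serializes competing readers on the claimed rows, and
        // READPAST lets an overlapping execution skip them instead of
        // blocking or re-reading them.
        const string claimSql = @"
            WITH batch AS (
                SELECT TOP (500) Id, Payload, Handled
                FROM dbo.IncomingRows WITH (UPDLOCK, READPAST)
                WHERE Handled = 0
                ORDER BY Id)
            UPDATE batch SET Handled = 1
            OUTPUT inserted.Id, inserted.Payload;";

        using var conn = new SqlConnection(Environment.GetEnvironmentVariable("SqlConnectionString"));
        conn.Open();
        using var cmd = new SqlCommand(claimSql, conn);
        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            // Rows are marked handled before processing; if processing can
            // fail, mark them in a second step inside a transaction instead.
            log.LogInformation("Processing row {Id}", reader.GetInt32(0));
        }
    }
}
Because the claim is a single atomic UPDATE, overlapping executions can never pick up the same rows.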
In short, the answer to your question is neither: if your function executions overlap, by default you will get multiple executions running at the same time. LINK
To achieve back-to-back execution for timer triggers, set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT and FUNCTIONS_WORKER_PROCESS_COUNT to 1 in the application settings configuration. This will ensure only one function execution runs at a time. See this LINK.
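For example, applied with the Azure CLI (the app name and resource group are placeholders):
az functionapp config appsettings set --name <app-name> --resource-group <resource-group> --settings WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=1 FUNCTIONS_WORKER_PROCESS_COUNT=1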

Tracking a counter value in application insights

I'm trying to use application insights to keep track of a counter of number of active streams in my application. I have 2 goals to achieve:
Show the current (or at least recent) number of active streams in a dashboard
Activate a kind of warning if the number exceeds a certain limit.
These streams can be quite long lived, and sometimes brief. So the number can sometimes change say 100 times a second, and sometimes remain unchanged for many hours.
I have been trying to track this active streams count as an application insights metric.
I'm incrementing a counter in my application when a new stream opens, and decrementing it when one closes. On each change I use the telemetry client something like this:
// GetMetric returns a locally pre-aggregating metric; TrackValue feeds
// the aggregator rather than sending the value immediately.
var myMetric = myTelemetryClient.GetMetric("Metricname");
myMetric.TrackValue(myCount);
When I query my metric values with Kusto, I see that because of these clusters of activity within a 10-sec period, my metric values get aggregated. For the purposes of my alarm I can live with that, as I can look at the max value of the aggregate. But I can't present a dashboard of the number of active streams, as I have no way of knowing the number of active streams between my measurement points. I know the min, max, and average of each aggregate period, but I don't know its last value, and since that can be anywhere between 0 and 1000, it's no help.
Since the solution I have doesn't serve my needs, I thought of a couple of changes:
Adding a scheduled pump to my counter component, which will send the current counter value once every, say, 5 minutes. But I don't like that I then have to add a thread for each of these counters.
Adding a timer to send the current value once, 5 minutes after the last change, with the countdown reset each time the counter changes. This has the same problem as above, and does an excessive amount of work resetting the countdown when the counter could be changing thousands of times a second.
In the end, I don't think my needs are all that exotic, so I wonder if I'm using app insights incorrectly.
Is there some way I can change the metric's behavior to suit my purposes? I appreciate that it's pre-aggregating before sending data in order to reduce ingest costs, but it's preventing me from solving a simple problem.
Is a metric even the right way to do this? Are there alternative approaches within app insights?
You can use TrackMetric instead of the GetMetric ceremony to track individual values without aggregation. From the docs:
Microsoft.ApplicationInsights.TelemetryClient.TrackMetric is not the preferred method for sending metrics. Metrics should always be pre-aggregated across a time period before being sent. Use one of the GetMetric(..) overloads to get a metric object for accessing SDK pre-aggregation capabilities. If you are implementing your own pre-aggregation logic, you can use the TrackMetric() method to send the resulting aggregates.
But you can also use events as described next:
If your application requires sending a separate telemetry item at every occasion without aggregation across time, you likely have a use case for event telemetry; see TelemetryClient.TrackEvent (Microsoft.ApplicationInsights.DataContracts.EventTelemetry).
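As a minimal sketch of both options, reusing myTelemetryClient and myCount from the question (the metric and event names here are made up):
using Microsoft.ApplicationInsights.DataContracts;

// Option 1: TrackMetric sends the value as-is, with no local pre-aggregation.
myTelemetryClient.TrackMetric(new MetricTelemetry("ActiveStreams", myCount));

// Option 2: emit one event per change and carry the count as a measurement;
// the latest value can then be read in Kusto, e.g.
// customEvents | extend v = todouble(customMeasurements.ActiveStreams)
var change = new EventTelemetry("StreamCountChanged");
change.Metrics["ActiveStreams"] = myCount;
myTelemetryClient.TrackEvent(change);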

Azure event grid to check

I have a requirement to move a file in ADLS Gen 2 from path (directory) 'A' to directory 'B' or 'C' based on 2 conditions: move to 'C' if the file is not a csv or the file size is 0, else move to 'B'.
I am planning to use Event Grid (triggered as soon as a file lands in location 'A') + an Azure Function (for the checks and the move to location 'B' or 'C').
If there are 100 files landing per day, this approach will trigger the Azure Function 100 times.
Is there a better way to do this - can these smarts be built using just one service (such as Event Hub instead of Event Grid + Function) so that there is less overhead to maintain?
Thanks for your time.
If you want low effort then try Logic Apps.
What you want is to create a Logic App with a Blob Trigger that fires when there are new blobs. That takes care of the trigger.
For the action, you can use the "copy blob" action if you like. I'm not sure whether a "move blob" action is supported, but if not, and "copy blob" isn't good enough for you, you can provide a custom JS snippet action as inline code.
Couple of notes:
If your Azure Functions are called only 100 times a day and they are only doing a small check and then moving the blob, then under the consumption plan you'll probably pay less than $1 US per month.
With Azure Functions you'll have a lot more control, but it'll take you a lot longer (compared to Logic Apps) to develop/operate.
can these smarts be built using just one service
Of course - you can directly use the blob trigger of an Azure Function.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-blob-trigger?tabs=csharp
If there are 100 files landing per day, this approach will trigger the Azure Function 100 times.
You can use an Azure Function to do a daily check instead of using Event Grid to trigger the function (TimerTrigger).
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-timer?tabs=csharp
Just put the logic in the body of the function.
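As a rough sketch of the blob-trigger route (container names "a", "b", and "c" and the reuse of the AzureWebJobsStorage connection are assumptions; on ADLS Gen 2 with a hierarchical namespace, a DataLakeFileClient rename would be a true move):
using System;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class RouteLandedFile
{
    // The BlobClient binding type requires Microsoft.Azure.WebJobs.Extensions.Storage.Blobs 5.x.
    [FunctionName("RouteLandedFile")]
    public static async Task Run(
        [BlobTrigger("a/{name}")] BlobClient source, string name, ILogger log)
    {
        var props = await source.GetPropertiesAsync();
        bool isCsv = Path.GetExtension(name).Equals(".csv", StringComparison.OrdinalIgnoreCase);
        // Route to 'c' when the file is not a csv or is empty, else to 'b'.
        string destContainer = (!isCsv || props.Value.ContentLength == 0) ? "c" : "b";

        var service = new BlobServiceClient(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        var dest = service.GetBlobContainerClient(destContainer).GetBlobClient(name);

        using var input = await source.OpenReadAsync();
        await dest.UploadAsync(input, overwrite: true);
        await source.DeleteAsync(); // complete the "move" by removing the original
    }
}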

How to combine data from two endpoints that don't need to be called the same number of times? [nodejs]

I'm creating an endpoint that gives the current traffic conditions and the temperature. It combines data from two endpoints:
GET current traffic statistics (at every request)
GET current temperature (every 3 hours)
A simple solution would be to chain two promises together, but I don't need to call 2. at every request. How could I structure my code to hold the data for 2. and refresh it periodically?
Create a temperature module that holds the current temperature value and uses setInterval to update that value every 3 hours.
In your endpoint, make the request for traffic data and read the cached value from the temperature module.
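The answer describes the Node shape (a module-level cache refreshed by setInterval); to keep this page's examples in one language, here is the same cache-and-refresh pattern sketched in C#, with a placeholder temperature URL:
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public sealed class TemperatureCache
{
    private static readonly HttpClient Http = new HttpClient();
    private readonly Timer _timer; // keep a reference so the timer isn't collected
    private volatile string _current = "";

    public TemperatureCache()
    {
        // Refresh immediately, then every 3 hours; request handlers never
        // wait on this call - they just read the cached value.
        _timer = new Timer(async _ => await RefreshAsync(), null,
                           TimeSpan.Zero, TimeSpan.FromHours(3));
    }

    public string Current => _current;

    private async Task RefreshAsync()
    {
        try { _current = await Http.GetStringAsync("https://example.com/temperature"); } // placeholder URL
        catch (HttpRequestException) { /* keep the last good value on a transient failure */ }
    }
}
Your traffic endpoint then calls the traffic API on every request and combines the response with Current.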

Oracle responsys - send record to table in warehouse on event

I'm trying to figure out how to send Responsys records to a table in our data warehouse (MS SQL) in real time, when triggered to do so by an interaction event.
The use case is:
- Mass email is sent
- Customer X interacts with email (e.g. open, click)
- Responsys sends the contact along with a unique identifier (let's call it 'customer_key') and phone number to the table in the warehouse, within several minutes of the customer interaction
Once it's in the table, I can pass it to our third-party call centre platform.
Any help would be greatly appreciated!
Thanks
Alex
From what I know of Responsys, the most often you can download interaction data is 6 times a day, via the Export Event Data Feed.
If you need it more often than that, I think you will need to set up a filter in Responsys that checks user interactions in the last 15 mins, and then schedule a download for each 15-min interval via Connect.
It would have to be 15 mins, as you can only schedule a custom download within a 15-min window on Responsys.
You'd then need to automate downloading the file, then loading/importing it.
I highly doubt this is responsive enough for you however!
