Prometheus-net : callback to update a gauge when /metrics endpoint is called? - prometheus-net

I'm not sure I understood correctly how prometheus-net should be used, but since my gauge is time related I need to be able to update its value when data is requested via the /metrics endpoint. I thought I might use a timer to update the gauge, but the ideal thing would be a callback exposed by prometheus-net so that whatever needs updating can be updated just before data is returned.

If anyone needs it, prometheus-net provides the AddBeforeCollectCallback callback for collecting or updating your metrics, gauges, etc. just before Prometheus collects data from you.
From the prometheus-net documentation:
In some scenarios you may want to only collect data when it is requested by Prometheus. To easily implement this scenario prometheus-net enables you to register a callback before every collection occurs. Register your callback using Metrics.DefaultRegistry.AddBeforeCollectCallback().
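For illustration, here is a minimal sketch of wiring that up (the gauge name, the uptime calculation and the port are just placeholders for whatever time-related value and exporter you actually use):

using System;
using Prometheus;

class Program
{
    static void Main()
    {
        var started = DateTime.UtcNow;

        // Placeholder time-related gauge; substitute your own metric.
        var uptimeGauge = Metrics.CreateGauge(
            "myapp_uptime_seconds", "Seconds elapsed since the application started.");

        // Runs right before every collection, i.e. each time Prometheus scrapes /metrics,
        // so the gauge is refreshed on demand instead of on a timer.
        Metrics.DefaultRegistry.AddBeforeCollectCallback(() =>
        {
            uptimeGauge.Set((DateTime.UtcNow - started).TotalSeconds);
        });

        // Expose /metrics; any of the usual prometheus-net exporters works the same way.
        var server = new MetricServer(port: 1234);
        server.Start();

        Console.ReadLine();
    }
}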

Related

Enqueueing a message to Azure Storage in an Azure function without changing the output

I have a custom handler written in Go running as an Azure Function. It has an endpoint with two methods:
POST /entities
PUT /entities
It was easy to make my application run as an Azure function: I added "enableForwardingHttpRequest": true to host.json, and it just works.
What I need to achieve: life happened and now I need to enqueue a message when my entities change, so it will trigger another function that uses a queueTrigger to perform some async stuff.
What I tried: The only way I found so far was to disable enableForwardingHttpRequest, change all my endpoints to accept the Azure Function's raw JSON input and output, and then output a message in one of the fields (as documented here).
It sounds like a huge change to perform something simple... Is there a way I can enqueue a message without having to change the way my application handles requests?
As per this GitHub document, as of now custom handlers for Go in Azure Functions have a bug which still needs to be fixed.

Is it possible to add a callback method in the backend to perform once a query finishes?

I'm setting up a service that will email a user the data generated by a Cubejs query. I'd like to have Cubejs notify the email-sending service (perhaps through SNS) that new data is available for sending: Is this possible? Are there better options for allowing asynchronous access to query results?
Perhaps you could look into WebSocketTransport, part of the real-time data fetch mechanism?

Modify a CloudFront request before logging?

I'm building an ELK stack (for the first time) to track end-user REST API usage for a CloudFront distribution (in front of an S3 origin). Users pass a refresh token as part of their request and I was hoping to use this token to identify which users were making which request. Unfortunately, it looks like CloudFront access logs are missing some header information (particularly Authorization/Accept in my use case). This leaves me with three questions:
Is there a way to tell CloudFront to log additional items? It appears the answer is no.
As an alternative strategy, I tried modifying the request object with lambda#edge (in Viewer Request) to move the header information into the query string (so that it would get logged), but any manipulation in lambda#edge does not seem to be reflected in the log (though it is reflected in the Origin Request function). Should this be possible?
If doing what I want is impossible, I think the alternative approach is to forgo CloudFront logs completely and just fire an HTTP request to logstash with every user request, but I feel like this could be easy to overload.
Thanks
After a few days of research and reaching out to Amazon, I was finally able to answer my own questions:
CloudFront logs can't be customized, they are what they are.
See 1.
It turns out that customization is the wrong approach. What I really need to do is aggregate two separate logs that have the information I need into a single logstash entry. It turns out that the Viewer Response lambda#edge function contains a requestId property (actually event.Records[0].cf.config.requestId) which matches the CloudFront log's x-edge-request-id column. So while I haven't finished implementing it yet, these two columns can be used in the logstash config for aggregation. I just need to make sure I set up a Viewer Response event that logs out a consistent format that I can then parse with logstash. I'm using the logstash-input-cloudwatch_logs plugin to retrieve the CloudWatch logs.

Cordova Offline Sync - multiple calls to API on Pull - JS Library

We are observing three API calls when we execute an offline sync selective pull query:
GET domain/tables/Events?$filter=updatedAt%20ge%20datetimeoffset'1969-12-30T22:00:00.000Z'
GET domain/tables/Events?$filter=updatedAt%20ge%20datetimeoffset'2017-06-27T22:00:00.000Z' (current datetime)
GET domain/tables/Events?$filter=updatedAt%20ge%20datetimeoffset'2017-06-27T22:00:00.000Z'&$skip=1
These 3 calls happen every time a pull is done; can anyone explain why this happens? The selective sync query is created in the following format:
syncContext
.pull(new WindowsAzure.Query('Events'), 'eventspull')
.then(function() { /* pull complete */ });
We are using the latest version of the following JavaScript offline library: https://zumo.blob.core.windows.net/sdk/azure-mobile-apps-client.js
These 3 calls happen every time a pull is done, can anyone explain why this happens?
This happens because the "pull" function pulls one page from the server table at a time. You can check out the source code here for details.
Let’s say you have thousands of records. If you execute the query without paging, then it is likely you will tie up your client process on the phone for a considerable period of time as you receive and process the data. To alleviate that and allow your mobile application to remain responsive, the client SDK implements paging. By default, 50 records will be requested for each paged operation. In reality, this means that you will see one more request than you expect.
For more info, please refer to Understanding offline sync.

Is there a way to run custom code on Azure Cache expiration? (where last cached value is accessible)

What I mean is a kind of event or callback which is called when some cached value is expiring. Supposedly this callback should be given the currently cached value, for example, to store it somewhere else apart from the cache.
To find such a way, I have reviewed the Notifications option, but it looks like notifications are applicable to explicit actions on the cache, like adding or removing, whereas expiration is something that occurs implicitly. I found out that none of these callbacks is called even many minutes after a cache value has expired and become null, while they are called normally within the polling interval if I call DataCache.Remove explicitly (wrong, see update below).
I find this behavior strange, as ASP.NET has such a callback. You can even find an explanation of how to utilize it here on SO.
Also, I tried DataCache Events. It is written in MSDN that, literally,
This API supports the .NET Framework infrastructure and is not intended to be used directly from your code.
Nevertheless, I created a handler for these events to see if I could test their args, like CacheOperationStartedEventArgs.OperationType == CacheOperationType.ClearCache, but it seemed to be in vain.
At the moment, I have started to think about workarounds for the lack of the required callback, so suggestions on how to implement them are welcome too.
UPDATE. After more attentive and patient testing, I found out that a notification with DataCacheOperations.ReplaceItem is sent after expiration. Regrettably, I did not find a way to get the value that was cached before the expiration occurred.
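For reference, a rough sketch of how that ReplaceItem notification can be subscribed to (the factory and cache setup here are generic assumptions from Microsoft.ApplicationServer.Caching, and notifications must be enabled on the cache; as noted above, the expired value itself is no longer retrievable inside the callback):

using System;
using Microsoft.ApplicationServer.Caching;

class CacheExpirationWatcher
{
    static void Main()
    {
        // Uses whatever cache the default configuration points at.
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        // Fires for ReplaceItem operations, which (per the update above) also
        // covers the replacement that happens when an item expires.
        DataCacheNotificationDescriptor descriptor = cache.AddCacheLevelCallback(
            DataCacheOperations.ReplaceItem,
            (cacheName, regionName, key, version, operation, nd) =>
            {
                // Only the key and the operation are available here; the old value is already gone.
                Console.WriteLine("Item '" + key + "' was replaced (" + operation + ").");
            });

        Console.ReadLine();

        // Remove the subscription when it is no longer needed.
        cache.RemoveCallback(descriptor);
    }
}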

Resources