We use New Relic, and I was tasked with figuring out a way to send an email every morning with the results of our overnight load and performance testing. I need to be able to send a snapshot of the time frame during which the tests ran, showing things like throughput, web transaction time, top DB calls, etc.
You can generate dynamic permalinks for any given time window in New Relic by using UNIX time. For example:
https://rpm.newrelic.com/accounts/<your_account_id>/applications/<your_application_id>?tw%5Bend%5D=1501877076&tw%5Bstart%5D=1501875276
Adjust the tw[start] and tw[end] values (UNIX timestamps, in seconds) to the time range you want to return.
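If you're scripting the morning email, those values are just seconds since the epoch, so they can be computed for each night's test window. A minimal sketch in Python (the account and application IDs stay as placeholders, and the example window is arbitrary):

from datetime import datetime, timezone

# Example window: 01:00-03:00 UTC on the morning of the test run (adjust as needed)
start = datetime(2017, 8, 4, 1, 0, tzinfo=timezone.utc)
end = datetime(2017, 8, 4, 3, 0, tzinfo=timezone.utc)

url = (
    "https://rpm.newrelic.com/accounts/<your_account_id>"
    "/applications/<your_application_id>"
    f"?tw%5Bend%5D={int(end.timestamp())}&tw%5Bstart%5D={int(start.timestamp())}"
)
print(url)  # embed this link in the morning email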
I'm trying to use Application Insights to keep track of a counter of the number of active streams in my application. I have two goals:
Show the current (or at least recent) number of active streams in a dashboard
Activate a kind of warning if the number exceeds a certain limit.
These streams can be quite long-lived, and sometimes brief. So the number can sometimes change, say, 100 times a second, and sometimes remain unchanged for many hours.
I have been trying to track this active streams count as an application insights metric.
I'm incrementing a counter in my application when a new stream opens, and decrementing it when one closes. On each change I use the telemetry client, something like this:
var myMetric = myTelemetryClient.GetMetric("Metricname");
myMetric.TrackValue(myCount);
When I query my metric values with Kusto, I see that because of these clusters of activity within a 10-second period, my metric values get aggregated. For the purposes of my alarm I can live with that, as I can look at the max value of the aggregate. But I can't present a dashboard of the number of active streams, because I have no way of knowing the number of active streams between my measurement points. I know the min, max, and average, but I don't know the last value of the aggregation period, and since it can be anywhere between 0 and 1000, it's no help.
Since the solution I have doesn't serve my needs, I thought of a couple of changes:
Adding a scheduled pump to my counter component, which would send the current counter value once every, say, 5 minutes. But I don't like that I then have to add a thread for each of these counters.
Adding a timer that sends the current value once, 5 minutes after the last change, with the countdown reset each time the counter changes. This has the same problem as above, and does an excessive amount of work resetting the timer when the counter could be changing thousands of times a second.
In the end, I don't think my needs are all that exotic, so I wonder if I'm using app insights incorrectly.
Is there some way I can change the metric's behavior to suit my purposes? I appreciate that it's pre-aggregating before sending data in order to reduce ingest costs, but it's preventing me from solving a simple problem.
Is a metric even the right way to do this? Are there alternative approaches within app insights?
You can use TrackMetric instead of the GetMetric ceremony to track individual values without aggregation. From the docs:
Microsoft.ApplicationInsights.TelemetryClient.TrackMetric is not the preferred method for sending metrics. Metrics should always be pre-aggregated across a time period before being sent. Use one of the GetMetric(..) overloads to get a metric object for accessing SDK pre-aggregation capabilities. If you are implementing your own pre-aggregation logic, you can use the TrackMetric() method to send the resulting aggregates.
But you can also use events as described next:
If your application requires sending a separate telemetry item at every occasion without aggregation across time, you likely have a use case for event telemetry; see TelemetryClient.TrackEvent (Microsoft.ApplicationInsights.DataContracts.EventTelemetry).
So I have a database of users with a reminderTime field, which is currently just a string that looks like 07:00 and is a UTC time.
In the future, reminderTime will hold multiple strings, each corresponding to a time at which the user should receive a notification.
So imagine you log into an app, set multiple reminders like 07:00, 15:00, 23:30, and send them to the server. The server saves them in the database, runs a task, and sends a notification at 07:00, then at 15:00, and so on. Later the user may decide they no longer want to receive notifications at 15:00, or want to change that time to 15:30, and we should adapt to that.
And each user has a timezone, but I guess since reminderTime is already in UTC I can just create the task without looking at the timezone.
Currently I store reminderTime as a number: after the client sends 07:00 I convert it to seconds, but as I understand it I could change that and stick with a string.
All my tasks run with the Bull queue library and Redis. As I understand it, the most scalable approach is to take reminderTime, create a notification for each day at the given time, and run the task; the only question is whether I should save them to my database or add a job to a Bull queue. The same goes for multiple times.
I also don't understand how I should change already-created jobs inside Bull so that the time is different, and so on.
Maybe I could just create, say, 1000 records in my database with the times at which the user should receive a notification. Then I create a repeatable job that runs every 5 minutes, takes all the notifications that should be sent in the next couple of hours, adds them to a Bull queue, and marks them as sent.
So basically you get the idea; maybe it could be done a little better.
Unless you really have a lot of users, you could simply create a schedule-like table in your DB, which is just a list of user_id | notify_at records. Then run a periodic task every 1-5 minutes that selects all the records where notify_at is less than the current time.
Add a notified flag if you want to send notifications more than once a day, so that ones which were already sent are ignored. There is no need to create thousands of records for every day; you can just reset that flag once a day, e.g. at 00:00.
It's OK that your users won't all receive their notifications at exactly the same time; there may be small delays.
The solution you suggested is pretty much fine :)
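For illustration, here is a rough sketch of that polling job in Python (assuming PyMySQL and a hypothetical reminders table with user_id, notify_at and notified columns, where notify_at is a UTC time of day; the driver, table and send_notification callback are all placeholders):

from datetime import datetime, timezone
import pymysql  # assumption: any MySQL driver works the same way

def poll_and_notify(conn, send_notification):
    # Run this every 1-5 minutes (cron, a Bull repeatable job, etc.)
    now_utc = datetime.now(timezone.utc).time()
    with conn.cursor() as cur:
        # Everything that is due and not yet sent today
        cur.execute(
            "SELECT user_id, notify_at FROM reminders "
            "WHERE notify_at <= %s AND notified = 0",
            (now_utc,),
        )
        for user_id, notify_at in cur.fetchall():
            send_notification(user_id)
            cur.execute(
                "UPDATE reminders SET notified = 1 "
                "WHERE user_id = %s AND notify_at = %s",
                (user_id, notify_at),
            )
    conn.commit()

A second scheduled job can then reset the flag once a day (UPDATE reminders SET notified = 0), as described above.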
Currently I'm working on a project that requires the user to select a window of time to be used as the valid window within which to trigger an event. This window is selected by the user as a start time (24-hour time), an end time (24-hour time), and a timezone. My goal is then to convert these times into UTC, based on the offset of the provided timezone, and save them into MySQL.
The main problem is that I have set up the entire flow to deal with time-only data types, from the mobile app all the way back to the MySQL database. I have been trying to figure out a solution that won't require changing all those data types to include both date and time, which would require changes in many parts of the project.
Can I make this calculation without dealing with the date? I don't believe I can, as timezone offsets range from -12:00 to +14:00, which would push some windows into the next or previous day when converted to UTC.
Is the correct approach to add in the date component and then continue to update it as time progresses? I also want to ensure daylight saving time doesn't create errors.
Ultimately I would like to take the best approach, so if I have to change a lot now, I'd rather do that than deal with a headache later. Any thoughts would be greatly appreciated!
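For reference, a minimal sketch of the date-aware conversion being considered, in Python with zoneinfo (function name and example values are only illustrative; zoneinfo applies the correct offset for the given date, so daylight saving time is handled for you):

from datetime import datetime, date, time, timedelta, timezone
from zoneinfo import ZoneInfo

def window_to_utc(start_hhmm, end_hhmm, tz_name, on_date):
    # Convert a local start/end window on a given date to UTC datetimes
    tz = ZoneInfo(tz_name)
    start_local = datetime.combine(on_date, time.fromisoformat(start_hhmm), tzinfo=tz)
    end_local = datetime.combine(on_date, time.fromisoformat(end_hhmm), tzinfo=tz)
    if end_local <= start_local:      # window crosses local midnight
        end_local += timedelta(days=1)
    return start_local.astimezone(timezone.utc), end_local.astimezone(timezone.utc)

# 22:00-02:00 in New York on 15 Jan (EST, UTC-5) becomes 03:00-07:00 UTC
print(window_to_utc("22:00", "02:00", "America/New_York", date(2023, 1, 15)))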
I have a question regarding the Python API of Interactive Brokers.
Can multiple asset and stock contracts be passed to the reqMktData() function to obtain their last prices? (I can set snapshot=True in reqMktData to get the last price. You can assume that I have subscribed to the appropriate data services.)
To put things in perspective, this is what I am trying to do:
1) Call reqMktData, get last prices for multiple assets.
2) Feed the data into my prediction engine, and do something
3) Go to step 1.
When I contacted Interactive Brokers, they said:
"Only one contract can be passed to reqMktData() at one time, so there is no bulk request feature in requesting real time data."
Obviously one way to get around this is to use a loop, but that is too slow. Another way is multithreading, but that is a lot of work, plus I can't afford the extra expense of a new computer. I am not interested in either one.
Any suggestions?
You can only specify one contract in each reqMktData call, so there is no choice but to use a loop of some type. Speed shouldn't be an issue, as you can make up to 50 requests per second, maybe even more for snapshots.
If speed is a problem, it could be that you want too much data (more than 50 requests/second) or that you're using an old version of the IB Python API; check connection.py for lock.acquire calls (I've deleted all of them). Also, if there has been no trade for more than 10 seconds, IB will wait for a trade before sending a snapshot, so test with active symbols.
However, what you should really do is request live streaming data by setting snapshot to False and just keep track of the last price in the stream. You can stream up to 100 tickers with the default minimums, and you keep them separate by using unique ticker ids.
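To make that concrete, here is a minimal sketch using the official ibapi package (host, port, client id and the symbol list are placeholders): subscribe once per contract with snapshot=False and record the last trade price from the tickPrice callback.

import threading
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract

class LastPriceApp(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        self.last_prices = {}        # reqId -> last traded price

    def tickPrice(self, reqId, tickType, price, attrib):
        if tickType == 4:            # 4 = LAST
            self.last_prices[reqId] = price

def stock_contract(symbol):
    c = Contract()
    c.symbol = symbol
    c.secType = "STK"
    c.exchange = "SMART"
    c.currency = "USD"
    return c

app = LastPriceApp()
app.connect("127.0.0.1", 7497, clientId=1)     # e.g. a paper-trading TWS session
threading.Thread(target=app.run, daemon=True).start()

for req_id, symbol in enumerate(["AAPL", "MSFT", "IBM"], start=1):
    # snapshot=False -> continuous stream; each contract gets its own ticker id
    app.reqMktData(req_id, stock_contract(symbol), "", False, False, [])

# app.last_prices now updates continuously; the prediction loop can read from it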
I'm trying to figure out how to send Responsys records to a table in our data warehouse (MS SQL) in near real time, triggered by an interaction event.
The use case is:
- A mass email is sent
- Customer X interacts with the email (e.g. open, click)
- Responsys sends the contact, along with a unique identifier (let's call it 'customer_key') and phone number, to the table in the warehouse within several minutes of the customer interaction
Once in the table, I can pass it to our third-party call centre platform.
Any help would be greatly appreciated!
Thanks
Alex
From what I know of Responsys, the most often you can download interaction data is 6 times a day, via the Export Event Data Feed.
If you need it more often than that, I think you will need to set up a filter in Responsys that checks for user interactions in the last 15 minutes, and then schedule a download for each 15-minute interval via Connect.
It would have to be 15 minutes, as you can only schedule a custom download within a 15-minute window in Responsys.
You'd then need to automate downloading the file, loading it, and importing it.
I highly doubt this is responsive enough for you however!
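If you do go the export route, the download/load/import step can be scripted. A rough sketch in Python, assuming the Connect job drops a CSV on an SFTP server and the warehouse table already exists (all hostnames, credentials, paths and table/column names are placeholders):

import csv
import paramiko   # SFTP download
import pyodbc     # MS SQL load

# Fetch the latest export file from the Connect file drop
transport = paramiko.Transport(("files.example.com", 22))
transport.connect(username="responsys_user", password="secret")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.get("/exports/interactions_latest.csv", "interactions_latest.csv")
sftp.close()
transport.close()

# Load customer_key and phone into the warehouse table
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=warehouse.example.com;DATABASE=dw;UID=loader;PWD=secret"
)
with open("interactions_latest.csv", newline="") as f:
    rows = [(r["customer_key"], r["phone"]) for r in csv.DictReader(f)]
cur = conn.cursor()
cur.executemany(
    "INSERT INTO dbo.call_centre_contacts (customer_key, phone) VALUES (?, ?)", rows
)
conn.commit()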