Oracle Responsys - send record to table in warehouse on event

I'm trying to figure out how to send Responsys record/s to a table in our data warehouse (MS SQL) in real time, when triggered to do so from an interaction event.
Use case is-
- Mass email is sent
- Customer X interacts with email (e.g. open, click)
- Responsys sends contact along with unique identifier (let's call it 'customer_key') and phone number to the table in the warehouse, within several minutes of customer interaction
Once in the table I can pass to our third party call centre platform.
Any help would be greatly appreciated!
Thanks
Alex

From what I know of Responsys, the most often you can download interaction data is six times a day, via the Export Event Data Feed.
If you need it more often than that, I think you will need to set up a filter in Responsys that checks user interactions in the last 15 minutes, and then schedule a download for each 15-minute interval via Connect.
It would have to be 15 minutes, as Responsys only lets you schedule a custom download within a 15-minute window.
You'd then need to automate downloading, loading and importing the file.
I highly doubt this is responsive enough for you, however!
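If you do go the scheduled-export route, the parsing step before loading into MS SQL might look something like this minimal sketch. The column names (CUSTOMER_KEY, PHONE, EVENT_TYPE) are assumptions about your feed layout; adjust them to match your actual Export Event Data Feed:

```javascript
// Parse a Responsys export feed CSV into plain records, ready to bulk-load
// into the warehouse table. Column names here are assumed, not guaranteed.
function parseEventFeed(csv) {
  const [headerLine, ...rows] = csv.trim().split('\n');
  const headers = headerLine.split(',');
  return rows.map((row) => {
    const values = row.split(',');
    // Pair each header with its value for this row
    return Object.fromEntries(headers.map((h, i) => [h, values[i]]));
  });
}

// Example: two events from a hypothetical feed file
const feed = 'CUSTOMER_KEY,PHONE,EVENT_TYPE\nabc123,5551234,CLICK\ndef456,5559876,OPEN';
const records = parseEventFeed(feed);
// records[0] => { CUSTOMER_KEY: 'abc123', PHONE: '5551234', EVENT_TYPE: 'CLICK' }
```

From there you would insert the records into the warehouse table with whatever SQL Server client you use. Note this sketch does not handle quoted fields containing commas; a real feed would warrant a proper CSV parser.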


Rally CumulativeFlowData endpoint showing data > 24hours old

Using the CumulativeFlowData diagram available in Rally, we can see a chart that includes data up to the current day. When pulling data from the Rally API, we can only get data up to 24 hours earlier.
We have tried pulling CumulativeFlow data by both Iteration Id and Created date and both only provide data with a 24 hour delay.
Does anyone know why there is such a big lag with the data being available via the API?
https://rally1.rallydev.com/slm/webservice/v2.0/iterationcumulativeflowdata?workspace=<<myworkspace>>&project=https://rally1.rallydev.com/slm/webservice/v2.0/project/<<myprojectId>>&query=(IterationObjectID = <<myIterationId>>)&fetch=CreationDate,CardEstimateTotal,CardState&start=1&pagesize=200
E.g. (screenshots): the Rally view of the CFD, and the same data as retrieved from the API.

SharePoint list update item takes more than 5 seconds

We have SharePoint 2016 hosted on-premises with a minimum set of services running on the server. Resource utilization is very low and the user base is around 100. There are no workflows or any other resource-consuming services running.
We use a list to store and update information for certain users, via a form for the end user. Recently, the time taken for a list data update has increased to over 6 seconds.
Example:
https://sitename_url/_api/web/lists/GetByTitle('WFListInfo')/items(15207)
This list has about 15 fields, mostly single-line text, number, or DateTime.
The indexing is set to automatic.
As part of the review, we conducted a few checks and DB indexing on our cluster, however there is no improvement.
Looking forward to any help / suggestions. Thank you.

Running a repetitive task in Node.js for each row in a postgres table on a different interval for each row

What would be a good approach to running a repetitive task for each row in a large Postgres table, on a different per-row interval, in Node.js?
To give you some more context, here's a quick description of the application:
It's a chat based customer support app.
It consists of teams, which can be either a client team or a support team. Teams have users, which can be either client users or support users.
Client users send messages to a support team and wait for one of that team's users to answer their question.
When there's an unanswered client message waiting for a response, every agent for the receiving support team will receive a notification every n seconds (n being set on a per-team basis by the team admin).
So this task needs to infinitely loop through the rows in the teams table and send notifications if:
The team has messages waiting to be answered.
N seconds have passed since the last notification was sent (N being the number of seconds set by the team admin).
There might be a better approach to this condition altogether.
So my questions are:
What is an efficient way to infinitely loop through a Postgres table with no upper limit on the number of rows?
Should I load 1 row at a time? Several at a time?
What would be a good way to do this in Node?
I'm using Knex. Does Knex provide a mechanism for lazy loading a table and iterating through the rows?
A) A repetitive task in Node can be run with the built-in function setInterval.
// run intervalFnc() every 5 seconds
const timerId = setInterval(intervalFnc, 5000);
function intervalFnc() { console.log("Hello"); }
// to stop running it:
clearInterval(timerId);
Then your interval function can do the actual work. An alternative would be to use cron (Linux) or some other OS process scheduler to trigger the function. I would use setInterval if you want to run it every minute, and a cron job if you want to run it every hour (in between those, it becomes more debatable).
B) An efficient way...
B-1) Retrieving a block of records from a DB will be more efficient than one at a time. Knex has .offset and .limit clauses to choose a group of records to retrieve. A sample from the knex doc:
knex.select('*').from('users').limit(10).offset(30)
B-2) Database indexed access is important for performance if your tables are very large. I would recommend including a status flag field in your table to note which records are 'in-process', and also a "next-review-timestamp" field, with both fields indexed. Retrieve the records that have status_flag='in-process' AND next_review_timestamp <= now(). Sample:
knex('users').where('status_flag', 'in-process').whereRaw('next_review_timestamp <= now()')
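Putting A and B together, the polling loop might look like the sketch below. The table and column names (teams, has_pending_messages, next_review_timestamp, notify_interval_seconds) and the sendNotifications helper are assumptions for illustration; the Knex connection setup is omitted:

```javascript
// Compute the next review time for a team after a notification is sent.
// `intervalSeconds` is the per-team N set by the team admin.
function nextReviewTime(now, intervalSeconds) {
  return new Date(now.getTime() + intervalSeconds * 1000);
}

// Hypothetical polling loop (names are assumptions, not your real schema):
// setInterval(async () => {
//   const due = await knex('teams')
//     .where('has_pending_messages', true)
//     .whereRaw('next_review_timestamp <= now()');
//   for (const team of due) {
//     await sendNotifications(team); // your notification logic
//     await knex('teams').where('id', team.id).update({
//       next_review_timestamp: nextReviewTime(new Date(), team.notify_interval_seconds),
//     });
//   }
// }, 5000);

const next = nextReviewTime(new Date('2024-01-01T00:00:00Z'), 60);
// next => 2024-01-01T00:01:00Z
```

With the indexed next_review_timestamp column, each poll only touches the teams that are actually due, so the loop stays cheap even with no upper limit on the number of rows.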
Hope this helps!

Send multiple notifications to a user on specific time

So I have a database of users with a reminderTime field, which is currently just a string like 07:00, in UTC.
In the future, reminderTime will hold multiple strings corresponding to the times at which the user should receive a notification.
So imagine you log into the app, set multiple reminders, say 07:00, 15:00, 23:30, and send them to the server. The server saves them in the database, runs a task, and sends a notification at 07:00, then at 15:00, and so on. Later the user decides he no longer wants to receive notifications at 15:00, or changes it to 15:30, and we should adapt to that.
Each user also has a timezone, but since reminderTime is already in UTC, I guess I can create the task without looking at the timezone.
Currently I store reminderTime as a number; after the client sends 07:00 I convert it to seconds, but as I understand it I can change that and stick with the string.
All my tasks run with the Bull queue library and Redis. As I understand it, the most scalable approach is to take reminderTime, create notifications for each day at the given time, and run the task; the only question is whether I should save them in my database or add tasks to a Bull queue. The same goes for multiple times.
I don't understand how I should change already-created tasks inside Bull so that the time is different, and so on.
Maybe I could just create, say, 1000 records in my database for the times a user should receive a notification. Then I'd create a repeatable job that runs every 5 minutes, takes all of the notifications that should be sent in the next couple of hours, adds them to a Bull queue, and marks them as sent.
So basically you get the idea; maybe it could be done a little better.
Unless you have really a lot of users, you could simply create a schedule-like table in your DB, which is just a list of user_id | notify_at records. Then run a periodic task every 1-5 minutes which compares the current time and selects all the records where notify_at is less than the current time.
Add a notified flag if you want to send notifications more than once a day, to ignore the ones that were already sent. There is no need to create thousands of records for every day; you can just reset that flag once a day, e.g. at 00:00.
It's OK that your users won't all receive their notifications at exactly the same time; there may be small delays.
The solution you suggested is pretty much fine :)
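The due-selection step of that schedule-table approach can be sketched as a pure function (a minimal sketch; the row shape user_id | notify_at | notified follows the answer above, and the actual send/DB-update steps are left out):

```javascript
// Decide which schedule rows are due: notify_at (a UTC "HH:MM" string)
// has passed, and the row hasn't been marked notified yet today.
function dueNotifications(rows, nowUtc) {
  const current = nowUtc.getUTCHours() * 60 + nowUtc.getUTCMinutes();
  return rows.filter((r) => {
    const [h, m] = r.notify_at.split(':').map(Number);
    return !r.notified && h * 60 + m <= current;
  });
}

const rows = [
  { user_id: 1, notify_at: '07:00', notified: false },
  { user_id: 1, notify_at: '15:00', notified: true },  // already sent
  { user_id: 2, notify_at: '23:30', notified: false }, // not yet due
];
const due = dueNotifications(rows, new Date('2024-01-01T15:05:00Z'));
// due => [{ user_id: 1, notify_at: '07:00', notified: false }]
```

The periodic task would call something like this, send a notification for each returned row, set notified = true, and reset the flags once a day.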

How to send an email with snapshot of New Relic chart data

We use New Relic, and I was tasked with figuring out a way to send an email every morning with the results of our overnight load and performance testing. I need to be able to send just a snapshot of the time frame in which the tests are run, showing things like throughput, web transaction time, top DB calls, etc.
You can generate dynamic permalinks for any given time window in New Relic by using UNIX time. For example:
https://rpm.newrelic.com/accounts/<your_account_id>/applications/<your_application_id>?tw%5Bend%5D=1501877076&tw%5Bstart%5D=1501875276
Adjust the tw[end] and tw[start] values to the time range you want to return.
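As a sketch, building such a permalink from JavaScript Date objects might look like this (the account and application IDs are placeholders you would substitute with your own):

```javascript
// Build a New Relic permalink for a fixed time window.
// tw[start] and tw[end] are UNIX timestamps in seconds ("%5B"/"%5D" are
// the URL-encoded square brackets).
function newRelicPermalink(accountId, appId, startDate, endDate) {
  const start = Math.floor(startDate.getTime() / 1000);
  const end = Math.floor(endDate.getTime() / 1000);
  return `https://rpm.newrelic.com/accounts/${accountId}/applications/${appId}` +
         `?tw%5Bend%5D=${end}&tw%5Bstart%5D=${start}`;
}

// A 30-minute window matching the example timestamps above
const url = newRelicPermalink(12345, 67890,
  new Date('2017-08-04T19:34:36Z'), new Date('2017-08-04T20:04:36Z'));
// url => ...?tw%5Bend%5D=1501877076&tw%5Bstart%5D=1501875276
```

A scheduled job could compute the window from the test start/end times each morning and drop the resulting link into the email.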
