I have an application that is gathering data from another application; both are in NodeJS.
I was wondering how I can trigger sending the data to a third application under certain conditions, for example every 10 minutes if there's data in a bucket, or as soon as I have 20 elements to send.
And if the call to the third application fails, how can I retry it after 10-15 minutes?
EDIT:
The behaviour should be something like:
if at least 1 item has been posted (axios.post) AND [10 minutes have passed OR 10 more items have been posted] SUBMIT to App n.3
What can help me do this? Can I keep the values saved until those requirements are satisfied?
Thank you <3
You can use a package like node-schedule, which is a popular way to schedule tasks. When the callback runs, check whether there is enough data (posts) to send.
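A minimal sketch of that idea, assuming an in-memory buffer array and a hypothetical sendToThirdApp() helper that wraps the outgoing axios.post call:

const schedule = require('node-schedule');

const buffer = []; // items received from the other application

// hypothetical helper that forwards the buffered items to App n.3
async function sendToThirdApp(items) {
  // e.g. await axios.post('https://app3.example.com/data', items);
}

async function flush() {
  if (buffer.length === 0) return;                 // nothing to send
  const batch = buffer.splice(0, buffer.length);   // empty the buffer
  try {
    await sendToThirdApp(batch);
  } catch (err) {
    buffer.unshift(...batch);                      // put the items back so they get retried
  }
}

// every 10 minutes: send whatever is waiting (this is also the retry path)
schedule.scheduleJob('*/10 * * * *', flush);

// call this whenever new data arrives; flush early once 20 items are waiting
function onDataReceived(item) {
  buffer.push(item);
  if (buffer.length >= 20) flush();
}

Because a failed batch is pushed back into the buffer, the next 10-minute tick retries it automatically, which roughly covers the "repeat after 10-15 mins" requirement.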
If I schedule a timer triggered Azure function to run every second and my function is taking 2 seconds to execute, will I just get back-to-back executions or will some execution queue eventually overflow?
Background:
We have a timer-triggered Azure function that is currently executing every 30 seconds and is checking for new rows in a database table. If there are new rows, the data will be processed and the rows will be marked as handled.
If there are no new rows the execution is very fast. If there are 500 new rows (which is the max we are fetching at the moment) the execution takes about 20-25 seconds.
We would like to decrease the interval to one second to reduce the latency of row processing.
Update: I want back-to-back executions and I want to avoid overlapping executions.
Multiple Azure functions can run concurrently. This means the function can be triggered again while the previously triggered execution is still running; both will run concurrently. Executions will only queue up if you set options to run only one function at a time on one instance, but it doesn't look like you want that.
With concurrency, two executions may read the same table in the DB at the same time, so you should read your table with the UPDLOCK option LINK. This prevents a subsequently triggered execution from reading the same rows that were already read by the previous one.
In short, the answer to your question is neither. If your functions overlap, by default, you will get multiple functions running at the same time. LINK
To achieve back-to-back execution for timer triggers, set WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT and FUNCTIONS_WORKER_PROCESS_COUNT to 1 in the application settings configuration. This will ensure only one function execution runs at a time. See this LINK.
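For reference, these are plain key/value application settings on the Function App; one way to set them (assuming the Azure CLI, with placeholder app and resource-group names) would be:

az functionapp config appsettings set \
  --name <your-function-app> \
  --resource-group <your-resource-group> \
  --settings WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=1 FUNCTIONS_WORKER_PROCESS_COUNT=1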
What would be a good approach to running a repetitive task for each row in a large Postgres DB table, on a different per-row interval, in Node.js?
To give you some more context, here's a quick description of the application:
It's a chat based customer support app.
It consists of teams, which can be either a client team or a support team. Teams have users, which can be either client users or support users.
Client users send messages to a support team and wait for one of that team's users to answer their question.
When there's an unanswered client message waiting for a response, every agent for the receiving support team will receive a notification every n seconds (n being set on a per-team basis by the team admin).
So this task needs to infinitely loop through the rows in the teams table and send notifications if:
The team has messages waiting to be answered.
N seconds have passed since the last notification was sent (N being the number of seconds set by the team admin).
There might be a better approach to this condition altogether.
So my questions are:
What is an efficient way to infinitely loop through a Postgres table with no upper limit on the number of rows?
Should I load 1 row at a time? Several at a time?
What would be a good way to do this in Node?
I'm using Knex. Does Knex provide a mechanism for lazy loading a table and iterating through the rows?
A) Running a repetitive task in Node can be done with the built-in JS function setInterval.
// run the intervalFnc() every 5 seconds
const timerId = setInterval(intervalFnc, 5000);
function intervalFnc() { console.log("Hello"); }
// to quit running it:
clearInterval(timerId);
Then your interval function can do the actual work. An alternative would be to use cron (Linux), or some other OS process scheduler, to trigger the function. I would use setInterval if you want to run the task every minute, and a cron job if you want to run it every hour (anything in between is more debatable).
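For the cron route, an equivalent crontab entry that runs a Node script every minute might look like this (the script path is just a placeholder):

* * * * * /usr/bin/node /path/to/check-teams.js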
B) An efficient way...
B-1) Retrieving a block of records from a DB will be more efficient than one at a time. Knex has .offset and .limit clauses to choose a group of records to retrieve. A sample from the knex doc:
knex.select('*').from('users').limit(10).offset(30)
B-2) Database indexed access is important for performance if your tables are very large. I would recommend including a status-flag field in your table to note which records are 'in-process', and also a "next-review-timestamp" field, with both fields indexed. Retrieve the records that have status_flag='in-process' AND next_review_timestamp <= now(). Sample:
knex('users').where('status_flag', 'in-process').whereRaw('next_review_timestamp <= now()')
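Putting A) and B-2) together, a rough sketch of the polling loop could look like the following (the teams table layout and column names here are assumptions based on the description above, not your actual schema):

const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });

const BATCH_SIZE = 100;

// hypothetical helper: notify every agent of the given support team
async function sendNotificationsForTeam(team) { /* ... */ }

async function notifyDueTeams() {
  // fetch a batch of teams that have waiting messages and whose interval has elapsed
  const dueTeams = await knex('teams')
    .where('has_unanswered_messages', true)
    .whereRaw('next_notification_at <= now()')
    .limit(BATCH_SIZE);

  for (const team of dueTeams) {
    await sendNotificationsForTeam(team);
    // push the next notification out by the team's own interval (in seconds)
    await knex('teams')
      .where('id', team.id)
      .update({
        next_notification_at: knex.raw("now() + (notification_interval * interval '1 second')"),
      });
  }
}

// check for due teams every 5 seconds
setInterval(() => notifyDueTeams().catch(console.error), 5000);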
Hope this helps!
I want to make schedules that depend on entries in a database to schedule cron jobs. For example, if there's an entry in the database with a timestamp of 2:00 PM on the 3rd of Apr, I want to send a mail to users on the 2nd of Apr, and also send notifications at 1:55 PM on the 3rd of Apr.
So this means I have to look into the database, find the entries after the current timestamp, see if they meet the criteria for notification (like 5 minutes before the timestamp, or 1 day before it) and send the notification or mail. I'm only worried that polling every minute seems like too much overhead. Are the AWS web workers built for this sort of thing?
Any suggestions on how this can be accomplished?
I don't think crontab will be the best choice but if you're familiar with it, it's fine.
First you should estimate how frequently your entries are created. If it's, say, only a couple of hundred a day, my suggestion is to create the cron job right after the entry is created. But if it's more than a hundred a minute, polling will be fine.
There are also side effects to handle, though, like canceling or updating the cron job.
I think it's better to use a proper MQ.
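If you go the "create the job right after the entry is created" route, here is a rough sketch using the node-schedule package as an in-process alternative to an OS crontab entry (the entry fields and the sendMail/sendNotification helpers are assumptions):

const schedule = require('node-schedule');

// hypothetical helpers
async function sendMail(entry) { /* ... */ }
async function sendNotification(entry) { /* ... */ }

// call this right after an entry is saved to the database
function scheduleEntryJobs(entry) {
  const eventTime = new Date(entry.timestamp);                            // e.g. 2:00 PM, 3rd of Apr

  const mailTime = new Date(eventTime.getTime() - 24 * 60 * 60 * 1000);   // 1 day before
  const notifyTime = new Date(eventTime.getTime() - 5 * 60 * 1000);       // 5 minutes before

  schedule.scheduleJob(mailTime, () => sendMail(entry));
  schedule.scheduleJob(notifyTime, () => sendNotification(entry));
}

Keep in mind these jobs only live in the Node process's memory, so they are lost on a restart; that is one reason a proper MQ (or another persistent scheduler) can be a better fit.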
In my Node application with MongoDB I have a feature where users can post books for rent and other users can request them with a "whenDate". One post is mapped to only one book.
Consider a user who requests a book for 1 week, 5 days from now. In this case I want to lock the book for that week so that no one else can request it during that period.
1) How can I arrange in NodeJS for a function to be executed after some time, considering that I will have many of them? In the above case this function would be executed after 5 days, to lock the particular book document. Please consider question 2 as well.
2) I don't want these timers to be lost if I restart my application. How can I achieve this?
Thanks in advance.
You can use the TTL feature in MongoDB to discard records automatically after their time to live.
Let's say you keep a collection with the booking requests and set the TTL according to the booking duration. MongoDB can then remove these booking records once the TTL is reached, so your Node.js application does not need to trigger any job.
Refer: https://docs.mongodb.com/manual/tutorial/expire-data/
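A minimal sketch with the official mongodb Node driver (the database, collection, and field names are assumptions):

const { MongoClient } = require('mongodb');

async function main() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const bookings = client.db('library').collection('bookings');

  // expire each document at the exact time stored in its `expireAt` field
  await bookings.createIndex({ expireAt: 1 }, { expireAfterSeconds: 0 });

  // lock the book for a week, starting 5 days from now
  const start = new Date(Date.now() + 5 * 24 * 60 * 60 * 1000);
  const end = new Date(start.getTime() + 7 * 24 * 60 * 60 * 1000);
  await bookings.insertOne({ bookId: 'some-book-id', lockedFrom: start, expireAt: end });
}

main().catch(console.error);

Note that MongoDB's TTL monitor only runs about once a minute, so expired documents can linger slightly past their expireAt time. Because the lock lives in the database rather than in an in-memory timer, it also survives application restarts, which addresses question 2.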
I currently run a service using beanstalkd and node.js.
When a job fails, I would like to retry it n times before giving up on the job.
If the job succeeds, I want to run the same job 10 times.
So what is the best practice: store the error and success counts in MongoDB along with the jobId, or delete the job and put a new one with the error and success counts in the body?
I don't know if I'm being clear, so tell me. Thanks a lot.
There is a stats-job <id>\r\n command, which should also be available via the API library, that returns, among other things, how many times the specific job has been reserved, released, buried, and so on.
This allows for a number of retries of failed jobs by checking the previous reservations/releases.
To run the same job multiple times, I would personally create either one additional job with a success count that is then incremented (into another new job), or all nine new jobs at once, with optional delays before they start.
You have a couple of ways to do this:
you can release the job, and obtain from stats the number of reserves
you can put a new job with a retry count, and keep track of history in the data payload
You should do the latter, and then you don't need MongoDB as a second dependency.
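A rough sketch of that second option, with the counters carried in the job payload (putJob, reserveJob, deleteJob, and doWork are hypothetical wrappers around whatever beanstalkd client you use; the point here is only the bookkeeping logic):

const MAX_RETRIES = 3;   // give up after this many failed attempts
const TOTAL_RUNS = 10;   // re-run a successful job this many times

async function worker() {
  const job = await reserveJob();                  // -> { id, payload }
  const data = JSON.parse(job.payload);            // { task, errorCount, successCount }

  try {
    await doWork(data.task);                       // the actual job logic
    data.successCount += 1;
    if (data.successCount < TOTAL_RUNS) {
      await putJob(JSON.stringify(data));          // queue the next run
    }
  } catch (err) {
    data.errorCount += 1;
    if (data.errorCount < MAX_RETRIES) {
      await putJob(JSON.stringify(data));          // put it back to retry
    }
    // otherwise give up: do not re-queue
  } finally {
    await deleteJob(job.id);                       // remove the job we just handled
  }
}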