I am integrating Asana project metrics with our help desk dashboard. I would like to show 3 numbers for each project:
- Total tasks in project
- Total completed tasks in project
- Total incomplete tasks in project
When I call the project/tasks API, I want to simply get a count, rather than having to retrieve all the pages and programmatically count the tasks. Is there any parameter for the API calls that just gets me a count of how many tasks match the criteria?
Thanks,
Craig
Unfortunately, the Asana API doesn't currently have the kind of filtering that lets you query for a subset of tasks matching an arbitrary condition you specify (i.e. "only the tasks where completed=true"). We also don't have an easy way to get only the completed tasks. You can get all incomplete tasks fairly easily by specifying completed_since=now on the tasks query endpoint - which is admittedly a bit strange, but works - but its converse (getting only the completed tasks) isn't supported.
We are evaluating use cases for more filtering options, so you might see it at some point! For now, however, the only way to go about this is to get all of the tasks for a project and count them on your side.
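If it helps, here is a minimal sketch of the count-it-yourself approach in Node (18+, for the built-in fetch). It assumes the GET /projects/{project_gid}/tasks endpoint with opt_fields=completed and offset-based pagination; the token and project GID are placeholders.

// Page through a project's tasks and tally completed vs. incomplete.
// ASANA_TOKEN and the project GID passed in below are placeholders.
const ASANA_TOKEN = process.env.ASANA_TOKEN;

async function countProjectTasks(projectGid) {
  let completed = 0;
  let incomplete = 0;
  let offset;

  do {
    const url = new URL(`https://app.asana.com/api/1.0/projects/${projectGid}/tasks`);
    url.searchParams.set('opt_fields', 'completed');
    url.searchParams.set('limit', '100');
    if (offset) url.searchParams.set('offset', offset);

    const res = await fetch(url, { headers: { Authorization: `Bearer ${ASANA_TOKEN}` } });
    const body = await res.json();

    for (const task of body.data) {
      if (task.completed) completed++; else incomplete++;
    }
    // next_page is null on the last page; otherwise it carries the offset token.
    offset = body.next_page && body.next_page.offset;
  } while (offset);

  return { total: completed + incomplete, completed, incomplete };
}

countProjectTasks('1200000000000000').then(console.log);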
What would be a good approach to running a repetitive task for each row in a large Postgres table, on a different per-row interval, in Node.js?
To give you some more context, here's a quick description of the application:
It's a chat based customer support app.
It consists of teams, which can be either a client team or a support team. Teams have users, which can be either client users or support users.
Client users send messages to a support team and wait for one of that team's users to answer their question.
When there's an unanswered client message waiting for a response, every agent for the receiving support team will receive a notification every n seconds (n being set on a per-team basis by the team admin).
So this task needs to infinitely loop through the rows in the teams table and send notifications if:
The team has messages waiting to be answered.
N seconds have passed since the last notification was sent (N being the number of seconds set by the team admin).
There might be a better approach to this condition altogether.
So my questions are:
What is an efficient way to infinitely loop through a Postgres table with no upper limit on the number of rows?
Should I load 1 row at a time? Several at a time?
What would be a good way to do this in Node?
I'm using Knex. Does Knex provide a mechanism for lazy loading a table and iterating through the rows?
A) Running a repetitive task in Node can be done with the JS built-in function setInterval.
// run intervalFnc() every 5 seconds
const timerId = setInterval(intervalFnc, 5000);
function intervalFnc() { console.log("Hello"); }
// to stop running it:
clearInterval(timerId);
Then your interval function can do the actual work. An alternative would be to use cron (Linux) or some other OS process scheduler to trigger the function. I would use setInterval if you want to run the task every minute, and a cron job if you want to run it every hour (anything in between is more debatable).
B) An efficient way...
B-1) Retrieving a block of records from the database will be more efficient than fetching them one at a time. Knex has .limit and .offset clauses to choose a group of records to retrieve. A sample from the Knex docs:
knex.select('*').from('users').limit(10).offset(30)
B-2) Indexed database access is important for performance if your tables are very large. I would recommend including a status flag field in your table to mark which records are 'in-process', along with a "next_review_timestamp" field, with both fields indexed. Retrieve the records that have status_flag = 'in-process' AND next_review_timestamp <= now(). Sample:
knex('users').where('status_flag', 'in-process').whereRaw('next_review_timestamp <= now()')
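To tie B-1 and B-2 together, here's a rough sketch assuming a teams table with the (hypothetical) status_flag, next_review_timestamp, and notify_interval_seconds columns described above; the notification call is a stub you would replace with your own logic.

// Every 30 seconds, pull batches of teams that are due and notify their agents.
// Table and column names here are illustrative, not from a real schema.
const knex = require('knex')({ client: 'pg', connection: process.env.DATABASE_URL });
const BATCH_SIZE = 100;

async function processDueTeams() {
  while (true) {
    // Processed rows drop out of this filter once next_review_timestamp is pushed
    // forward, so we can keep re-querying the first BATCH_SIZE matches until none remain.
    const teams = await knex('teams')
      .where('status_flag', 'in-process')
      .whereRaw('next_review_timestamp <= now()')
      .orderBy('id')
      .limit(BATCH_SIZE);

    if (teams.length === 0) break;

    for (const team of teams) {
      await sendNotificationsForTeam(team); // your app-specific notification logic
      // Push the next review out by this team's own interval.
      await knex('teams')
        .where('id', team.id)
        .update({ next_review_timestamp: knex.raw("now() + notify_interval_seconds * interval '1 second'") });
    }
  }
}

async function sendNotificationsForTeam(team) {
  console.log(`notify agents of team ${team.id}`);
}

setInterval(() => processDueTeams().catch(console.error), 30 * 1000);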
Hope this helps!
I currently run a service using beanstalkd and Node.js.
When a job fails, I would like to retry it n times before giving up on it.
If the job succeeds, I want to run the same job 10 times.
So what is the best practice: store the jobId with the error and success counts in MongoDB, or delete the job and put a new one with the error and success counts in the body?
I don't know if I'm being clear, so please tell me. Thanks a lot.
There is a stats-job <id>\r\n command - which should also be available via the API library - that returns, among other things, how many times the specific job has been reserved, released, buried, and so on.
This allows a set number of retries of failed jobs by checking the previous reserves/releases.
To run the same job multiple times, I would personally either create one additional job with a success count that gets incremented each time (into another new job), or create all nine new jobs up front, with optional delays before they start.
You have a couple of ways to do this:
you can release the job, and obtain from stats the number of reserves
you can put a new job with a retry count, and keep track of history in the data payload
You should do the latter, and then you don't need MongoDB as a second dependency.
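Here is a rough sketch of that payload-based approach. The put/destroy/bury functions stand in for whatever your beanstalkd client exposes for those protocol commands (names vary by library), and doWork is your actual job logic.

// Keep retry/success counters in the job body itself, so no extra datastore is needed.
const MAX_RETRIES = 3;
const MAX_RUNS = 10;

async function handleReservedJob(jobId, rawBody, { put, destroy, bury, doWork }) {
  // e.g. rawBody = '{"task":{...},"retries":0,"runs":0}'
  const body = JSON.parse(rawBody);

  try {
    await doWork(body.task);

    if (body.runs + 1 < MAX_RUNS) {
      // Success: enqueue the same work again with the run counter bumped.
      await put(JSON.stringify({ ...body, retries: 0, runs: body.runs + 1 }));
    }
    await destroy(jobId); // done with this particular job
  } catch (err) {
    if (body.retries + 1 < MAX_RETRIES) {
      // Failure: put a fresh copy with the retry counter bumped, then drop this one.
      await put(JSON.stringify({ ...body, retries: body.retries + 1 }));
      await destroy(jobId);
    } else {
      // Out of retries: bury the job so it can be inspected later.
      await bury(jobId);
    }
  }
}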
When I run qacct with the job ID after it has finished, I get two results:
the one I ran and an older job with the same job ID.
How can I delete the history from qacct?
Does anyone know how to solve this?
Thanks
Tsvi
Grid Engine (or SGE) has job IDs in the range 0..99999. These may roll over quickly in some clusters, so people may be interested in finding statistics of older jobs with the same ID. You can identify your jobs by also knowing the approximate submit time.
Anyway, if you want to eliminate the duplicate job IDs from qacct, you can rotate the accounting file (//common/accounting) using utilities like logchecker.sh.
Check the man page or this grid engine online documentation:
http://gridscheduler.sourceforge.net/howto/rotatelogs.html
In my webapp, users can create recurring invoices that need to be generated and sent out at certain dates each month. For example, an invoice might need to be sent out on the 5th of every month.
I am using Kue to process all my background jobs so I want to do it in this case as well.
My current solution is to use setInterval() to create a processRecurringInvoices job every hour. This job will then find all recurring invoices from database and create a separate generateInvoice job for each recurring invoice.
The generateInvoice job will then actually generate the invoice, and if needed, will also in turn create a sendInvoiceToEmail job that will email the invoice.
At the moment this solution looks good to me, because it has a nice separation of concerns, however, I have the following questions:
I am not sure if I should wait for all the 'child' jobs to complete before I call done() on the main processRecurringInvoices job?
Where should I handle errors? Should i pass them back to the processRecurringInvoices job or should I handle them separately for each job?
How can I make sure that, if processing takes extra long (more than an hour) and either processRecurringInvoices or any of the child jobs are still running, the processRecurringInvoices job is not created again? Kind of like a unique job, or mutual exclusion?
Instead of "processRecurringInvoices" it might be easier to think of it as a job that initiates other, separate invoice-processing jobs. Thinking of it this way, once the invoice processing jobs have been enqueued, you can safely call done() on the job that kicks them all off.
Thinking of the problem in the way described in question 1, errors should be handled within each of the individual invoice-processing jobs. If an error occurred while finding potential invoice jobs, that would probably be handled in the processRecurringInvoices job.
You can use kue.Job.rangeByType() to search for currently active jobs. If a job is active, you can skip kicking it off again.
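For example, a rough sketch of that check (the 'processRecurringInvoices' job type name is just illustrative):

// Before enqueueing a new run, check whether one is still active and skip this tick if so.
const kue = require('kue');
const queue = kue.createQueue();

function enqueueRecurringInvoiceRunIfIdle() {
  kue.Job.rangeByType('processRecurringInvoices', 'active', 0, 1, 'asc', (err, jobs) => {
    if (err) return console.error(err);
    if (jobs.length > 0) return console.log('previous run still active, skipping this tick');
    queue.create('processRecurringInvoices', {}).save();
  });
}

setInterval(enqueueRecurringInvoiceRunIfIdle, 60 * 60 * 1000); // hourly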
There is no limit on the code; it doesn't matter how long it is.
I want to do this because I was given the task by my company and I have to write a script for it. Any ideas regarding this are welcome.
nlapiSearchRecord(type, id, filters, columns)
Note: This API returns 1000 results at a time, so if the saved search has more than 1000 results, you will need to run the sorted search in a loop and concatenate with the results of the previous search.
or nlapiLoadSearch(type, id)
Note: This returns 4000 results at a time.
These APIs let you fetch the results of a saved search; a looping sketch follows below.
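For example, a rough sketch of the nlapiSearchRecord loop (SuiteScript 1.0), sorting on internal ID and filtering past the last ID already seen; the record type and saved search ID are placeholders:

// Page through a saved search 1000 results at a time.
function getAllResults() {
  var allResults = [];
  var lastId = 0;

  while (true) {
    var filters = [new nlobjSearchFilter('internalidnumber', null, 'greaterthan', lastId)];
    var columns = [new nlobjSearchColumn('internalid').setSort()];

    var results = nlapiSearchRecord('transaction', 'customsearch_my_saved_search', filters, columns);
    if (!results || results.length === 0) break;

    allResults = allResults.concat(results);
    lastId = parseInt(results[results.length - 1].getId(), 10);

    if (results.length < 1000) break; // last page reached
  }
  return allResults;
}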
Regarding the API limit: a RESTlet allows 5000 usage units, which is enough for most purposes. If you exceed this limit, you can make use of the API
nlapiYieldScript();
which creates a resume point; the script then resumes from that point.
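For example, a rough sketch of the yield pattern inside a long loop (assuming a scheduled script; ids and processRecord are placeholders for your own data and per-record work):

// Set a recovery point and yield when remaining governance units run low;
// the script resumes from the recovery point when it is rescheduled.
for (var i = 0; i < ids.length; i++) {
  processRecord(ids[i]); // your per-record work

  if (nlapiGetContext().getRemainingUsage() < 200) {
    nlapiSetRecoveryPoint();        // mark where to resume
    var state = nlapiYieldScript(); // give up the queue and reschedule
    if (state.status === 'FAILURE') {
      nlapiLogExecution('ERROR', 'Yield failed', state.reason);
      break;
    }
  }
}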
** If any further clarification is needed please ask **
Cheers!!!!