Node.js resource manager for parallel queries

Can anyone recommend a resource manager library for Node?
There is a particular resource that is drawn from a renewable pool.
Say, for example, Disneyland allows no more than 1000 people inside at a time, and we sell entrance tickets online.
Suppose all the spots are taken; a ticket can only be sold once someone leaves.
Purchase requests arrive simultaneously, e.g. one person wants to buy 5 tickets for their friends while another wants to buy 10.
What resource manager library can I use in this case?
I expect 2 million requests per hour (these are not actually tickets, of course).
I know that the best solution would be Amazon Simple Queue Service.
But my budget is minimal; is there some other solution I can use?
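For illustration only, not a specific library recommendation: the pattern described here is essentially a counting semaphore over a pool of 1000 permits, with waiting purchases served in FIFO order. The sketch below is a minimal in-process version; `CountingSemaphore` and `buyTickets` are hypothetical names, and with multiple Node processes the counter would have to live in shared storage (e.g. Redis) instead.

```js
// Minimal counting semaphore: `capacity` permits, waiters served FIFO.
// Illustrative sketch only; names like `buyTickets` are hypothetical.
class CountingSemaphore {
  constructor(capacity) {
    this.available = capacity;
    this.waiters = []; // FIFO queue of { count, resolve }
  }

  acquire(count) {
    return new Promise((resolve) => {
      this.waiters.push({ count, resolve });
      this.drain();
    });
  }

  release(count) {
    this.available += count;
    this.drain();
  }

  drain() {
    // Serve waiters strictly in order so large requests are not starved.
    while (this.waiters.length && this.waiters[0].count <= this.available) {
      const { count, resolve } = this.waiters.shift();
      this.available -= count;
      resolve();
    }
  }
}

// Usage: the park holds 1000 visitors; each purchase "borrows" seats
// until the corresponding visitors leave.
const park = new CountingSemaphore(1000);

async function buyTickets(n) {
  await park.acquire(n);        // resolves once n seats are free
  return () => park.release(n); // call this when those visitors leave
}
```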

Related

Azure Service Bus - is it a good solution for peer-to-peer messaging platform?

We are designing a system where users can exchange "messages" (let's say XML files for simplicity's sake). This system is peer-to-peer by design, meaning only directed messages are supported: User A can only send a message to User B; it is not possible to send messages to "groups" of users, etc. FIFO order is a mandatory requirement as well.
This must be a reliable solution, so we started looking into Azure and its services, and Service Bus does look like the right solution to me. It offers all the bells and whistles we are looking for:
FIFO order is guaranteed
Dead-letter queue with timeouts
Geo-redundancy
Transactions
and so on
So naturally, I started playing with it. The first idea I had was to give each user of my system a QUEUE from the Service Bus, which would act as an INBOX for them. Other users send messages to that user (using a unique USER_ID as the queue ID, for example), messages accumulate in the queue, and when the user decides to check the inbox, they get all the messages in the correct order. This way we "outsource" all the routing, security, etc. to the Service Bus itself, thus considerably simplifying the app logic.
But there is a serious caveat in this approach - Service Bus supports only up to 10,000 queues: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted#capacity-and-quotas and the number of users in my system can reach tens of thousands (but max out at 100,000 or so). So I'm somewhat in the range but not really. Therefore, I have questions:
Is there a flaw in my approach? Overall, is it a good idea to give each user an exclusive queue? Or should I instead implement some kind of metadata and route messages based on it?
Am I looking at the right solution? I want to use SaaS as much as possible, so I don't want to start building RabbitMQ clusters on VMs, etc., but are there built-in alternatives? Maybe a different approach should be considered?
As for the numbers, I'm looking to start with 2,000 users and 200,000 messages a day - not a high load by any means. But if things work out, I see how these numbers can increase by 20x - 30x (but no more).
I would appreciate any opinions on this. Thank you.
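One alternative worth evaluating against the per-user-queue idea is a single sessions-enabled queue where the recipient's USER_ID is the session ID: sessions keep per-user FIFO ordering while a single queue stays far under the 10,000-queue quota. A rough sketch assuming the @azure/service-bus v7 SDK; the connection string, queue name, and user IDs are placeholders:

```js
// Sketch only: one sessions-enabled queue, session ID = recipient's user ID.
// Connection string, queue name, and IDs below are placeholders.
const { ServiceBusClient } = require("@azure/service-bus");

const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION_STRING);
const QUEUE = "user-inbox"; // a single queue with sessions enabled

// Sender: route a message to a user by stamping their ID as the session ID.
async function sendToUser(recipientId, xmlPayload) {
  const sender = client.createSender(QUEUE);
  await sender.sendMessages({ body: xmlPayload, sessionId: recipientId });
  await sender.close();
}

// Receiver: a user drains their "inbox" by accepting their own session.
async function readInbox(userId) {
  const receiver = await client.acceptSession(QUEUE, userId);
  const messages = await receiver.receiveMessages(50, { maxWaitTimeInMs: 5000 });
  for (const msg of messages) {
    // ...process msg.body in FIFO order...
    await receiver.completeMessage(msg);
  }
  await receiver.close();
}
```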

How to handle multiple API Requests

I am working with the Google Admin SDK to create Google Groups for my organization. I can't add members to a group while creating it; ideally, when I create a new group I'd like to add roughly 60 members. In addition, the ability to add members in bulk (a batch request) after the group is created was deprecated in August 2020. Right now, after I create a group I need to make a web request to the API to add each member of the group individually (about 60 members).
I am using Node.js and Express. Is there a good way to handle 60 web requests to an API? I don't know how taxing this will be on my server. If anyone has any resources to share where I can learn about the impact this would have on a Node.js server, that would be great.
Fortunately, these groups aren't created often, maybe 15 a month.
One idea I have is to offload the work to something like a cloud function, so my Node server makes one request to the cloud function, and the cloud function makes all the additional requests to add members to the group. I'm not 100% sure this is necessary, and I'm curious about other approaches.
Limits and Quotas
Note that adding group members may take 10 minutes to propagate.
The rate limit for the Directory API is 3,000 queries per 100 seconds per IP address, which works out to around 30 per second. 60 requests is not a large number, but if you try to send them all within a few milliseconds the system may extrapolate the rate and deem it over the limit. I wouldn't expect that to happen, though it's probably best to test it on your end with your own system and connection.
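To put that limit in context, a paced loop like the following stays far below 30 requests per second. This is a rough sketch with the googleapis Node client; the auth setup, group key, and member emails are placeholders:

```js
// Sketch: add ~60 members one at a time with a small delay between calls
// so the burst cannot be read as exceeding ~30 requests/second.
// The auth object, group key, and emails are placeholders.
const { google } = require("googleapis");

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function addMembers(auth, groupKey, emails) {
  const admin = google.admin({ version: "directory_v1", auth });
  for (const email of emails) {
    await admin.members.insert({
      groupKey,
      requestBody: { email, role: "MEMBER" },
    });
    await sleep(100); // ~10 requests/second, well under the quota
  }
}
```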
Exponential Backoff
If you do need to make many requests, this is the method Google recommends. It involves repeating a request when it fails and exponentially increasing the wait time between retries, up to 16 seconds. You can always implement a longer wait before retrying; it's only 60 requests, after all.
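A minimal backoff wrapper along those lines might look like the following. The retryable status codes and the 16-second cap are illustrative, and `withBackoff` is a hypothetical helper, not part of any Google library:

```js
// Sketch: retry a failing request with exponential backoff plus jitter,
// doubling the wait (1s, 2s, 4s, 8s, 16s) before giving up.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function withBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = err.code || (err.response && err.response.status);
      const retryable = status === 403 || status === 429 || status >= 500;
      if (!retryable || attempt >= maxRetries) throw err;
      const waitMs = Math.min(2 ** attempt * 1000, 16000) + Math.random() * 1000;
      await sleep(waitMs);
    }
  }
}

// Usage with the insert call from the previous sketch:
// await withBackoff(() => admin.members.insert({ groupKey, requestBody: { email } }));
```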
Batch Requests
The previously mentioned methods should work without issue for you; since there are only 60 requests to make, it won't put any "stress" on the system as such. That said, the most performant way to handle many requests to the Directory API is a batch request, which lets you bundle all your member calls into one batch of up to 1,000 calls. This also gives you a nice cushion in case you need to increase your request volume in the future!
EDIT - I am sorry, I missed that you mentioned that batching is deprecated. Only global batching is deprecated; if you send a batch request to a specific API, batching is still supported. What you can no longer do is send a single batch request spanning different APIs, like Directory and Sheets in one.
References
Limits and Quotas
Exponential Backoff
Batch Requests

Is API Polling more efficient than WebSockets for infrequent updates?

I am working on building a trading algorithm for cryptocurrencies and have added several different exchanges.
For the last 4 exchanges I added, I use each exchange's API endpoint to retrieve ticker prices for all coins (I make one API call every 5 minutes per exchange).
I started working on implementing Coinbase Pro which does not have an endpoint to import all ticker prices at once.
Based on my current coding experience I came up with two options...
Option A: make 30 API calls once every 5 minutes to import the price data,
or
Option B: connect via WebSockets on 30 different channels, update a dictionary with the most recent prices, and write to SQL at 5-minute intervals.
My concerns:
Coinbase is very active, and there seem to be multiple updates per second even when connected to only 2 WebSocket channels.
Is it wasteful to use WebSockets in this scenario?
I am not building a high-frequency trading strategy, so minor latency is not a concern. At the same time, I am concerned that due to HTTP hang-ups/timeouts there could be a scenario where 30 requests take over a minute.
As well, if API polling is the best solution, will I be able to reduce latency using grequests (for async HTTP requests), urllib3 (connection pooling), or httplib (by keeping connections open)?
I apologize if this is a dumb question; I have only a year of coding experience and I cannot seem to find the right solution for this situation on Google/Stack Overflow.
Any advice would be greatly appreciated!
best regards,
Slurpgoose
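For reference, Option A in Node can look something like the sketch below, which fires the 30 requests concurrently each cycle so one slow response doesn't stretch the whole poll past a minute. It assumes Node 18+ for the built-in fetch; the product list and the (since retired) Coinbase Pro ticker URL are illustrative:

```js
// Sketch: poll ~30 ticker endpoints concurrently every 5 minutes.
// PRODUCTS and the storage step are placeholders; the ticker URL is the
// Coinbase Pro REST endpoint as it existed at the time of the question.
const PRODUCTS = ["BTC-USD", "ETH-USD" /* ...~30 pairs... */];

async function fetchTicker(productId) {
  const res = await fetch(`https://api.pro.coinbase.com/products/${productId}/ticker`);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${productId}`);
  return { productId, ...(await res.json()) };
}

async function pollOnce() {
  // allSettled: one hung or failed request doesn't sink the whole cycle.
  const results = await Promise.allSettled(PRODUCTS.map(fetchTicker));
  const ok = results.filter((r) => r.status === "fulfilled").map((r) => r.value);
  // ...write `ok` to SQL here...
  return ok;
}

setInterval(pollOnce, 5 * 60 * 1000);
```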

Number of channels and billing

I am looking at building an app that monitors the public transport buses for a major city:
I did a quick prototype using PubNub. The buses have a phone transmitting GPS signals to a channel, and bus users have phones subscribed to channels. I have some questions:
I am planning for each bus route to have its own channel. The city has 50 routes, so there will be 50 channels. Does this adhere to best practice?
Is there an API to list channels?
I am sending a message to a channel every second. Assume there are 50 routes with 5 buses each, running 24 hours. That is 21,600,000 messages daily. What will I be charged for a day?
Does your Android client open a network connection every time publish is called? I want to minimize the bandwidth used by the phone that is transmitting the GPS signal.
Bus users may want to see the locations of multiple buses. I know the best practice is to subscribe to one public and one private channel. What is the best way to do this?
I would appreciate if you could answer the above questions.
Full disclosure up front: I work for PubNub Customer Success, so responses to pricing-related questions are informational in nature only and not to be construed as promotional. The asker specifically mentions PubNub, and the information provided below is publicly available from the PubNub website.
Anant, also as an FYI, Stack Overflow would normally ask that each of these questions be asked as a separate thread. Moving forward, please do your best to adhere to community guidelines.
1 Every implementation will differ in its specific architecture and design-pattern strategy, though your proposed approach seems a sensible use of channels. PubNub does not limit the total number of channels in use; however, as a practical limitation, for most mobile development frameworks subscribing to more than 50 channels simultaneously is around the upper limit. Add more than that and both iOS and Android will begin to exhibit performance limitations. If new bus lines are added, the subscriptions can be managed so that users only subscribe to nearby routes, etc.
1 (the second, indented question) Yes, that can be done with the here_now API.
2 PubNub charges $1 per million messages (without SSL enabled), so based on your hypothetical figures your message charges would be roughly $21.60 per day. That being said, there is significant room here for design-pattern optimization so that buses only publish a new location when there is a change; repeated publishes while the bus is standing still are unnecessary. This optimization on its own will bring the message usage down significantly, and there are other strategies that can be used to optimize further depending on your specific implementation. If you anticipate needing more than 1 billion messages per month, a deployment to Global Cloud would make sense so as to avail yourself of volume-discount pricing not otherwise available on Go Cloud.
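To make that publish-on-change optimization concrete, here is a rough sketch with the PubNub JavaScript SDK; the keys, channel name, 20-metre threshold, and `onGpsFix` hook are placeholders:

```js
// Sketch: publish a bus position only when it has moved a meaningful amount.
// Keys, channel naming, and the 20 m threshold are placeholders.
const PubNub = require("pubnub");

const pubnub = new PubNub({
  publishKey: "YOUR_PUB_KEY",
  subscribeKey: "YOUR_SUB_KEY",
  uuid: "bus-42",
});

let lastSent = null;
const MIN_MOVE_METERS = 20;

function movedEnough(a, b) {
  // Cheap equirectangular approximation; fine at city scale.
  const dLat = (b.lat - a.lat) * 111320;
  const dLon = (b.lon - a.lon) * 111320 * Math.cos((a.lat * Math.PI) / 180);
  return Math.hypot(dLat, dLon) >= MIN_MOVE_METERS;
}

async function onGpsFix(position) { // called once per second by the GPS layer
  if (lastSent && !movedEnough(lastSent, position)) return; // bus is idle: skip
  await pubnub.publish({ channel: "route-7", message: position });
  lastSent = position;
}
```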
3 Rather than opening a new connection with every publish, PubNub keeps an active socket connection open until unsubscribed or disconnected via loss of network connection/app force close. The bandwidth utilization to keep this connection active over a period of several hours and absent any other publish/subscribe activity typically measures less than 1K depending on your configuration parameters. Android supports background threading so even when the app is not in focus the connection can remain open to facilitate data push alerts which can be used to prompt the user to bring the app back into the foreground to review any updated information.
4 This question is not clear. Assuming the bus locations are published to the public channel, what purpose would the private channel serve? If you meant a private channel to receive alerts for the arrival of the user's selected bus, then yes, that would be an appropriate implementation strategy. Please clarify if you meant something different.

Is running 2 batch processes on 2 separate threads possible and permissible with Intuit?

I am building an application that communicates with the QuickBooks server and adds things like customers and check expenses, and I want it to be as efficient as possible in terms of performance. For example, my intention is to run all customer additions (one batch process) on one thread and all check expenses or bills (another batch process) on a second thread, which is logically possible since the two procedures don't interfere and are not related to one another.
My question is: would such a design approach be permitted by Intuit? My concern is about any limitations on communication with their servers.
On the docs site, the following throttling policy is mentioned:
What are the throttling limits based on QB accounts, OAuth client, and RealmId at any given time?
EDIT: The following line is no longer valid; the FAQ page has been updated.
Apart from an upper limit set that ensures no more than 10 requests in progress at any given time;
EDIT:
We have a throttling policy across all IDS APIs that permits 500 requests/minute per AuthId and per RealmId. The policy permits 200 requests/minute per AuthId for reports endpoints.
Ref - https://developer.intuit.com/docs/0025_quickbooksapi/0058_faq
So, as long as you stay within the above throttling limits, parallel processing using multiple threads is not an issue.
Please note - you can't create multiple name entities (e.g. Vendor, Employee, and Customer) using parallel threads. The service puts a lock across these three entities to ensure a unique name is used when creating a new entity.
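If you do run the two batch processes in parallel, a shared limiter keeps their combined traffic under the 500 requests/minute per RealmId policy. A sketch in Node; the `callQuickBooks` stub stands in for whatever client call you actually make:

```js
// Sketch: a shared sliding-window limiter so parallel workers together stay
// under 500 requests/minute for one realm. `callQuickBooks` is a placeholder.
class RateLimiter {
  constructor(maxPerWindow, windowMs) {
    this.maxPerWindow = maxPerWindow;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  async wait() {
    for (;;) {
      const now = Date.now();
      this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
      if (this.timestamps.length < this.maxPerWindow) {
        this.timestamps.push(now);
        return;
      }
      // Sleep until the oldest request falls out of the 60-second window.
      const waitMs = this.windowMs - (now - this.timestamps[0]);
      await new Promise((r) => setTimeout(r, waitMs));
    }
  }
}

async function callQuickBooks(payload) {
  // Placeholder: substitute your actual QuickBooks API request here.
  return payload;
}

const realmLimiter = new RateLimiter(500, 60000); // 500 requests per minute

async function throttledRequest(payload) {
  await realmLimiter.wait();
  return callQuickBooks(payload);
}
```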
Thanks
