I recently developed a Teams Node.js app for one of my clients. I'm very new to Node development, so I need some help with the following.
The app will be used by hundreds of users. I haven't done anything special to handle concurrency. I use a combination of node and express. Will these two handle concurrency and multi-threading automatically?
The app makes heavy use of Microsoft Graph, and occasionally, for no apparent reason, Graph calls fail with 503 errors. Upon retrying after some time, they start working again, but this behavior has been repeating of late. What could be the problem?
I've hosted the app on Azure App Service and set it to auto-scale. Is there any other configuration required to handle additional users?
I've started to port a web app backend to Google App Engine for scaling. But I'm completely new to GAE and still reading up on the concepts. Steep learning curve.
I'm 95% certain that at some point hundreds of thousands, and eventually many millions, of users will start using the web app through a GUI app that I'm writing. They will be global users, so at some point in the future I'm expecting a relatively steady flow of connection requests.
The GAE Standard Environment comes to mind for scaling.
But I also want the GUI app to react when user-related data changes in the backend. That suggests web sockets, which aren't supported in the Standard Environment but are in the Flexible Environment.
Here's my idea: The main backend happens in a Standard app, but the GUI listens to update notifications from a Flexible app through web sockets. The Standard app calls the Flexible app after noteworthy data changes have occurred, and the Flexible app notifies the GUI.
But is that even possible? Because sibling Flexible instances aren't aware of each other (or are they?), how can I trigger the persistent connections held by the Flexible instance with an incoming call from the Standard app to send out a notification?
(The same question goes for the case where I have only one Flexible app and no Standard app, because the situation is kind of the same.)
I'm assuming that the Flexible app can access the same datastore that the Standard app can. Didn't look this one up.
Please also comment on whether a Standard app is even a good idea in this case, or whether I should just go with Flexible. These concepts are really new to me.
Also: Are there limits to the number of persistent connections held by a Flexible app? Or will it simply start another instance if a limit is reached?
Which of the two environments ends up cheaper in the long run?
Many thanks.
You can only have one App Engine application per project; however, that application can contain multiple services, and each service can run in either the flexible or the standard environment.
Whether standard is a good idea depends on your architecture. I'm pretty sure you've looked at the comparison chart; from experience, if your app can work within all the restrictions (limited language runtimes, no background processes, no SSH debugging, among others), I would definitely go for standard, since it performs very well under traffic spikes and spins up new instances in just seconds. Keep in mind that automatic scaling is needed for the best performance results.
There are multiple ways to connect flexible and standard services. One is to simply send an HTTP request from one service to another, but there are other options using GCP services such as Pub/Sub.
In the standard environment, you can also pass requests between services and from services to external endpoints using the URL Fetch API. Additionally, services in the standard environment that reside within the same GCP project can also use one of the App Engine APIs for the following tasks: share a single memcache instance, or collaborate by assigning work between services through Task Queues.
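To make this concrete for the websocket part of your question, here is a minimal sketch, not a definitive implementation: it assumes the Flexible service runs Node with Express and socket.io, and the /notify route and the data-changed event name are made up for illustration. The Standard app (or a Pub/Sub push subscription) calls /notify after a noteworthy data change, and the Flex service pushes the update out over the websockets it holds. With more than one Flex instance you would fan the notification out through Pub/Sub or the socket.io Redis adapter, so every instance can reach the clients connected to it.

```js
// Hypothetical Flexible service that holds the GUI websocket connections.
const express = require('express');
const http = require('http');

const app = express();
app.use(express.json());

const server = http.createServer(app);
const io = require('socket.io')(server);

io.on('connection', (socket) => {
  console.log('GUI client connected:', socket.id);
});

// Called by the Standard app (or a Pub/Sub push) after data changes.
app.post('/notify', (req, res) => {
  io.emit('data-changed', req.body); // broadcast to all connected GUIs
  res.sendStatus(204);
});

server.listen(process.env.PORT || 8080);
```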
Regarding Datastore, you can access the same datastore from different services; there is a quickstart for flex and a quickstart for standard.
Which of the two environments ends up cheaper in the long run?
Standard pricing is based on instance hours
Flexible pricing is based on usage of vCPU, memory, and persistent disks
If your service runs very high-performance processes over short periods of time, standard will probably be cheaper; however, if you run low-performance processes over long periods of time, flex will be cheaper. But again, it depends on each use case.
I currently manage a cluster of VMs on a number of dedicated hosts to provide Apache, nginx and Node live and development servers. This of course requires constant and time-consuming maintenance to ensure security and reliability. I've found more time is spent looking after this platform than coding new and exciting projects. So I've been looking into Google App Engine to remove the need to manage any VMs, but I'm struggling to work out how to get it to function for me!
Currently I develop mostly in Angular (v4-5) for my frontend and Node.js for the backend. My development nginx server serves my Angular apps, routing to ng-serve and to a separate VM that runs my Node apps. I use PM2 to manage the apps on both servers.
This works great! I can code locally, push my changes via an rsync script to the servers, and the apps restart with the changes applied. More importantly, I can effectively code across the front and backend! When ready, I can comfortably switch the code to the live servers with little effort - nice!
This is where I am struggling...
I can't seem to work out how I would develop and publish versions of both the front and backend code in one App Engine project.
Is this possible? How would I go about deploying/publishing both aspects?
Would I be better having two projects such as example.com & api.example.com? If so, can I get the two projects to talk to one another when developing?
I have created (and can create) an angular/nodejs app in App Engine, but I can't work out the basics of front and backend development in this managed service.
I'd like to use the great features of App Engine such as versioning, easy scaling and, importantly, easy deployment of apps and updates. I'd also like to move all my websites, including some older ones in PHP, to App Engine.
Any help surrounding this would be much appreciated. Thanks!
As #Yandrak3 suggested, a microservices architecture is what you need. Keep in mind, though, that that document relates to the App Engine Standard environment, which does not support Node.js as a runtime environment. Still, keep the microservices architecture in mind when deploying to App Engine Flexible.
On backend and frontend
Frontend and backend are no longer used to describe the presentation layer and the data access layer of an App Engine application. The only reference in the documentation is here. The (VM) instances managing a service of your app which are configured with automatic scaling are considered part of the frontend infrastructure, while the ones configured with manual scaling are considered backend infrastructure.
The reason for this is that automatic scaling is one of App Engine's great features (the "easy scaling" you mentioned): it automatically scales your app's frontend with the number of external requests coming into your app.
Manual scaling is more suited for backend operations, where you might want to run operations that depend on the state of memory over time, or other scenarios. You can find more information on scaling types here. Keep in mind that this latter document is under the App Engine Standard documentation and it includes basic scaling, a feature not available in the App Engine Flexible environment.
On services and versioning
In your case, the frontend and backend modules of your application will become two separate services in App Engine Flex. For each service you can deploy multiple versions; more is explained here.
Communication between services, in this case between your frontend and backend, can be done through HTTP requests between them.
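As an illustration, a call from the frontend service to the backend service could look roughly like this. It assumes the node-fetch package, a service named backend, and MY_PROJECT as a placeholder project id; App Engine routes https://SERVICE-dot-PROJECT.appspot.com to the named service.

```js
// Hypothetical request from the frontend service to the backend service.
const fetch = require('node-fetch');

async function getReport() {
  const res = await fetch('https://backend-dot-MY_PROJECT.appspot.com/api/report');
  if (!res.ok) throw new Error(`backend responded with ${res.status}`);
  return res.json();
}
```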
If the next question is how HTTP requests from users reach the appropriate version of a service (or a service), check this document.
To deploy multiple services, you will use the same commands and separate each deployment and service through its own configuration file, app.yaml.
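As a sketch of what that might look like (service names, instance counts and paths are illustrative, not prescriptive), two flexible-environment services could be configured like this:

```yaml
# frontend/app.yaml - the default service, scaled automatically with traffic
runtime: nodejs
env: flex
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 5
```

```yaml
# backend/app.yaml - a named service with a fixed instance count
service: backend
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
```

Both can then be deployed together with gcloud app deploy frontend/app.yaml backend/app.yaml, and the --version flag of gcloud app deploy lets you pick the version id for each deployment.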
Your question requires a response with a pretty wide (and deep) spectrum of concepts. Hopefully, this answer is good to start with.
I am planning to build a nodejs application as back end of a web application.
This server application will provide a REST API, a websockets service, and a process that scrapes some sites with a headless browser (like zombie.js) every n hours to feed a database.
I would like to ask whether it's a good approach to build all of this in one Node.js instance, or whether it's better to use a separate Node.js application for each task.
If you have a small application (which does not require scaling in the future), you can keep all the pieces (REST, sockets, scraping) in the same project.
Note: if you process HTML content after scraping, that processing runs synchronously and takes some time, so the event loop will be blocked while it runs and the REST API will not handle any requests. In that scenario, you can keep the REST and socket parts together in one project and the scraping in another project.
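For illustration, here is a minimal sketch of that split using Node's child_process module; the file names, URL and 6-hour interval are placeholders.

```js
// scraper.js - runs in its own process so its synchronous HTML parsing
// cannot block the API's event loop.
process.on('message', (url) => {
  // ...fetch the page and do the heavy, blocking parsing here...
  const result = { url, scrapedAt: new Date().toISOString() };
  process.send(result);
});
```

```js
// api.js - the REST/websocket process forks the scraper and exchanges messages.
const { fork } = require('child_process');

const scraper = fork('./scraper.js');
scraper.on('message', (result) => {
  console.log('scrape finished:', result); // e.g. write it to the database here
});

// Trigger a scrape every 6 hours (purely illustrative).
setInterval(() => scraper.send('https://example.com'), 6 * 60 * 60 * 1000);
```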
If you are planning to scale in the near future, I suggest keeping separate instances, considering the benefits of SOA for scaling.
In my opinion, the better way is to use different Node.js applications. The first one will serve your API. Another one will work as a socket server on a different port.
As for the scraper, it can be a PHP (or Node.js) script that runs as a cron job. For how to set up a cron job, you can check this question:
https://askubuntu.com/questions/2368/how-do-i-set-up-a-cron-job
or this tutorial for Ubuntu server: https://help.ubuntu.com/community/CronHowto
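For example, a crontab entry that runs a Node scraper every 6 hours might look like this (the paths and schedule are only placeholders):

```
0 */6 * * * /usr/bin/node /home/app/scraper/scrape.js >> /var/log/scraper.log 2>&1
```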
I think that would be the best approach.
I am trying to plan out the best way to approach developing a node application, and am not sure what would provide the best performance. A little bit of info on the overall plan: the entire project will involve a web app as well as a 'bot' app. The bot app in question is node-steam, which is quite a substantial application on its own. My question is whether I should run two separate node processes for each app (one for web server and one for node-steam), or code them into one combined node process?
Also please note that I will need the web app to be able to communicate with node-steam. I am planning on integrating socket.io into node-steam to make calls to it from web app actions. Is this the best approach if I keep the apps as separate node processes?
EDIT: When I refer to letting the web app communicate with node-steam, I mean that there are functions that need to be triggered in node-steam when a user does something in the web app (namely, performs specific actions in the browser), so I am planning on doing this via a socket directly to the node-steam app, rather than going through the web app and then routing the calls on to node-steam. As far as I can tell, this is the simplest way of doing it.
Any guidance is appreciated.
Thank you
If the bot app is quite substantial, you should code it for scalability. You will probably be spawning worker threads, or even scaling across multiple nodes, in no time for the bot app alone.
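As a rough sketch of the separate-process approach with socket.io (the port, event names and handler bodies are made up for illustration and are not part of node-steam's API):

```js
// bot.js - the node-steam process exposes a socket.io server so the web app
// can trigger bot actions without sharing a process with it.
const io = require('socket.io')(3001);

io.on('connection', (socket) => {
  socket.on('trade-request', (payload) => {
    // ...call into node-steam here...
    socket.emit('trade-result', { ok: true, payload });
  });
});
```

```js
// web.js - the web app connects as a client and forwards user actions.
const botSocket = require('socket.io-client')('http://localhost:3001');

function requestTrade(steamId, items) {
  botSocket.emit('trade-request', { steamId, items });
}

botSocket.on('trade-result', (result) => console.log('bot replied:', result));
```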
I'm making an app that will have:
iOS and Android apps
A web-based "dashboard" to display data gathered from the mobile apps
The app requires that end-users create an account with us (we mostly likely will NOT use Facebook/Twitter logins).
Everything is/will be hosted on AWS using EC2/RDS/S3 (All encapsulated in Elastic Beanstalk)
| Web Browser | <----> | sails.js app | <-------> | actionhero.js API |
                                                            ⬆︎
| Mobile app(s) | <-----------------------------------------/
So far, I've built most of the backing API in actionhero.js, hosted on AWS.
It made sense to me to separate the API and the web app, because the web app is only for a small subset of users -- I'd expect 50x the traffic from our mobile apps over the web app. We could scale the API to serve the mobile users without unnecessarily scaling the sails.js app.
My questions are:
(biggest unknown) How should I handle authentication? The sails.js app needs to be able to make requests to the API, and so do the mobile applications.
I was looking at the oauth2orize node module for creating our own Auth server, but it is designed for Connect/Express, so I don't think I could leverage it in the actionhero.js-based API.
If the solution is to create an OAuth server, am I supposed to host that on its own EC2 instance?
(AWS-specific question) I don't fully understand the use case for creating what AWS describes as a "worker tier" environment. Would there be a reason that the API would fall into that category?
If I want to run a data querying and aggregation task, I would create a separate node process for that, correct? If so, would that background worker have to exist on its own EC2 instance?
Sails.js and Actionhero.js both provide heavy support for socket.io. Should communication between the Sails app and my API happen over a persistent WebSocket connection? Will that scale if I need to create new instances in the future?
This seems like a fairly typical pattern; I'd like to hear if there are any big red flags in this design, before I paint myself into a corner. :-) THANKS!
Bonus question (specific to AWS Elastic Beanstalk)
Will I create separate "Applications" for the sails.js server and the API server? It looks like that's the only way to set it up, anyhow, but I want to make sure.
We have used Node and Beanstalk for a couple of applications now. For authentication, you can create an account for the user when they first access the app, and store the account id on the device. If you want them to be able to log in from multiple devices, you'll need to provide some way for them to identify themselves, either id/password or using Facebook. It's not that tough to set up. Use a session to allow them to log in and stay logged in. We generally just store the user id in the session.
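As a minimal sketch of that session approach (written Express-style; sails.js exposes req.session the same way, and the secret, routes and user lookup here are placeholders):

```js
const express = require('express');
const session = require('express-session');

const app = express();
app.use(express.json());
app.use(session({ secret: 'replace-me', resave: false, saveUninitialized: false }));

app.post('/login', (req, res) => {
  const user = { id: 123 };        // look up or create the account here
  req.session.userId = user.id;    // keep just the user id in the session
  res.json({ ok: true });
});

app.get('/me', (req, res) => {
  if (!req.session.userId) return res.status(401).end();
  res.json({ userId: req.session.userId });
});

app.listen(3000);
```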
A worker tier is for something you want to decouple from your app, something you want to do without needing to know right away whether it succeeded or failed. A notification server is a prime example: you send the info for the notification into an SQS queue, which then gets delivered to the worker tier, which does the work. We are just trying to figure this out now.
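For illustration, handing a notification job to the worker tier could look roughly like this with the aws-sdk SQS client (the queue URL, region and payload are placeholders); Elastic Beanstalk then delivers each queue message to the worker environment's HTTP endpoint.

```js
const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: 'us-east-1' });

// Fire-and-forget: the web tier doesn't wait to find out if the notification worked.
async function queueNotification(userId, text) {
  await sqs.sendMessage({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/notifications',
    MessageBody: JSON.stringify({ userId, text }),
  }).promise();
}
```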
A big aggregation process, yes, I'd take it elsewhere, so it's not eating up your production server(s). You might want to do some data aggregation on an ongoing basis, as transactions are saved, so it accumulates. Big rollups after the fact can be time-consuming and fragile.
Sounds like yes, they would be separate applications.
A good tip: we use grunt to create the zip files for the app. It's a Node-based build tool. We check the latest code out of SVN, clean it up by doing things like removing .svn directories, apply our configuration to the config files with simple string replacement, then zip up the resulting output. This then gets loaded into Beanstalk. This takes all the guesswork and time out of doing a new deployment. We can get a new build up in minutes that way.
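A Gruntfile for that flow might look roughly like this; it's only a sketch covering the clean and zip steps, and leaves out the SVN checkout and the config string replacement.

```js
// Gruntfile.js - assumes grunt-contrib-clean and grunt-contrib-compress.
module.exports = function (grunt) {
  grunt.initConfig({
    clean: { build: ['build/**/.svn'] },          // drop .svn directories
    compress: {
      bundle: {
        options: { archive: 'dist/app.zip' },     // the zip uploaded to Beanstalk
        files: [{ expand: true, cwd: 'build/', src: ['**/*'], dest: '' }],
      },
    },
  });

  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-contrib-compress');
  grunt.registerTask('default', ['clean', 'compress']);
};
```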
Beanstalk can be very frustrating. When it fails, it's not very good at telling you why.