How to deploy and run a multi-language, multi-process app on Heroku - node.js

I want to build an app with a potentially large amount of I/O- and computation-heavy logic.
I've learned that one way to tackle this is to use node.js as a client-facing service, pass client requests on to another backend service for the heavy computation, and then return the result asynchronously.
This means I'll have at least two processes in my app: a node.js server and a backend service written in another language.
I also want to use Heroku, but how do I deploy such an architecture there? Do I have to confine all web and worker processes to the same box? For example, if I have 10 dynos and I want 2 of them to run the node web server and the other 8 to run my backend service as workers, how do I deploy that?

First, if you really need multiple languages, you can use heroku-buildpack-multi.
Actually, both your web dyno and your worker dynos could be node.js, if that works for you. In that case, you could use the exact setup described here.
At any rate, you can separately scale as many web and worker dynos as you want with heroku ps:scale, as described in Heroku scaling. See also What is a Heroku Dyno?
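For example, a rough sketch assuming a Node web process and a Java worker (the process names, the second buildpack, and the jar path are all illustrative):

# .buildpacks (read by heroku-buildpack-multi)
https://github.com/heroku/heroku-buildpack-nodejs
https://github.com/heroku/heroku-buildpack-java

# Procfile - one line per process type
web: node server.js
worker: java -jar target/compute-worker.jar

# scale the two process types independently
heroku ps:scale web=2 worker=8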

You should check out the MEAN stack.
http://mean.io/#!/
Mongo (or any database really; for Heroku I recommend Postgres if you don't want a NoSQL database).
Express - this is your backend that handles communication from your front-end to your database.
Angular - a super powerful front-end Single Page Application framework with a HUGE community and tons of add-ons and goodies.
Node - the server runtime that makes this all possible.
It's all written in JavaScript.
Heroku itself is a platform, similar to an AWS server, that has Node on it and lets you run Express. You build your app, then use the Heroku command line to deploy it to their servers after you have created your account. Heroku will also host your database via Heroku add-ons.
https://devcenter.heroku.com/articles/getting-started-with-nodejs
Dynos will dynamically handle your processes as needed. Node isn't run as a service so much as it runs your backend (Express). Think of Node like Apache: it's what runs your server. Express is the backend running on that server, and because it handles requests asynchronously you can have as many requests in flight as you want; that's where the dynos start to load-balance for you.
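To make that concrete, here is a minimal sketch of an Express backend as it would run on a dyno (the route and data are purely illustrative):

// server.js
const express = require('express');
const app = express();

// An async handler: the event loop stays free while I/O is pending
app.get('/api/items', async (req, res) => {
  // a real handler would await a database query here
  const items = await Promise.resolve([{ id: 1, name: 'example' }]);
  res.json(items);
});

// Heroku tells your app which port to bind to via the PORT env var
app.listen(process.env.PORT || 3000);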

Related

What is the recommended way to run a React app and a Node.js backend in the same container using Cloud Run

I would like to containerize a React app and a Node.js backend API that are currently running on a GCE VM. The majority of the information available online leads me to believe that I need to deploy them as separate containers on Cloud Run. However, as Martin points out in the video - https://www.youtube.com/watch?v=WHH7eQLbG_s - Simplify your web apps on Google Cloud Run, one could use a single Cloud Run service.
Is this possible, and if so, is there any info available online relating to this?
Cloud Run can expose only one port. Therefore, if you want to expose 2 different things (i.e. the frontend and the backend), you need a single entrypoint.
You can serve your static pages from your NodeJS backend server, or package an NGINX server in your container that routes requests for static content to the React app and API requests to the NodeJS backend.
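A minimal sketch of the first option, serving the React build and the API from one Express entrypoint (the build directory and the sample route are assumptions):

const express = require('express');
const path = require('path');
const app = express();

// API routes go first so the static handler doesn't shadow them
app.get('/api/hello', (req, res) => res.json({ msg: 'hello' }));

// Serve the compiled React app (assumes the build output lives in ./build)
app.use(express.static(path.join(__dirname, 'build')));

// Fall back to index.html so client-side routing keeps working
app.use((req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

// Cloud Run injects the port to listen on via the PORT env var
app.listen(process.env.PORT || 8080);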
About Martin's video, you can also read my comments there. It's better to deploy 2 different things and put a Load Balancer in front of both (similar to your NGINX, in fact, but decoupled and therefore more scalable and easier to update). I personally recommend App Engine Standard for the static content (because serving that content costs no processing, and it's the cheapest!) and Cloud Run for the backend.

Deploy node.js in production

What are the best practices for deploying a Node.js application in production?
I would like to know how Node.js APIs are deployed to production today; currently my application is in Docker and running locally.
I wonder if I should use NGINX inside the container and put my server behind it, or just upload the Node image that is already running today.
*I need load balancing
There are few main types of deployment that are popular today.
Using platform as a service like Heroku
Using a VPS like AWS, Digital Ocean etc.
Using a dedicated server
This list is in order of growing difficulty and control. It's easiest with a PaaS, but you get more control with a dedicated server - though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS, running a dedicated Node server with a MongoDB server behind a NAT Gateway. (Obviously this is a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which NPM libraries you will need for production, how you handle environment variables, and how you cluster across cores.
I would suggest, very strongly, using a tool like PM2 to handle clusters, server shutdowns and restarts, and logs. (Workers and slaves too, if you need them and code for them.)
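For instance, a rough PM2 sketch (the app name and entry file are assumptions):

// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'api',
    script: './server.js',
    exec_mode: 'cluster',  // PM2 load-balances requests across the workers
    instances: 'max',      // one worker per CPU core
    env_production: { NODE_ENV: 'production' }
  }]
};

// then: pm2 start ecosystem.config.js --env production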
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a Gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.

running nodejs app inside go

I have a requirement: is there a way to run Node.js apps inside Go? I need to wrap the Node.js app inside a Go application so that the end result is a single Go binary that starts the Node.js server and can then call its REST endpoints. I need to encapsulate in the Go binary the entire Node.js application with node_modules and, if necessary, the Node.js runtime.
Well, you could make a Go program that includes, e.g., a zipped Node application that it extracts and starts, but it would be very hard to do well - you would have huge binaries, delays in extracting files, potential portability problems, etc. Usually, when you want to call REST endpoints, you host your Node app on some server and let the client app (the Go app in your example) connect to it. The advantages are that it is much faster, the app is much smaller, you don't have portability issues with Node binaries and add-ons, and you can quickly update your backend any time you want.
It would be a very bad idea to embed a Node.js app in your Go binary, for various reasons such as size, pushing security updates, etc.
However, if you feel strongly that they should be together, you could easily create a Docker container with these two (a Go server + a Node app) and launch them via Docker. You can set the entrypoint to a supervisord daemon so that both the Node server and the Go server are brought up when your container runs.
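A rough sketch of such a supervisord config (the binary names and paths are made up):

; supervisord.conf
[supervisord]
nodaemon=true              ; keep supervisord in the foreground for Docker

[program:goserver]
command=/app/go-server
autorestart=true

[program:nodeapp]
command=node /app/node/server.js
autorestart=true

The Dockerfile would then end with something like: ENTRYPOINT ["supervisord", "-c", "/etc/supervisord.conf"]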
If you are planning to deploy via Kubernetes, you can instead create two individual Docker containers (one for the Go server, one for the Node server) but always deploy them together as a pod.
There are multiple projects for embedding binary files and/or file system data into your Go application.
Look at 'Alternatives' section of project 'vfsgen':
https://github.com/shurcooL/vfsgen#alternatives

How can I deploy a web process and a worker process with Elastic Beanstalk (node.js)?

My heroku Procfile looks like:
web: coffee server.coffee
scorebot: npm run score
So within the same codebase, I have 2 different types of processes that will run. Is there an equivalent to this with Elastic Beanstalk?
Generally speaking, Amazon gives you much more control than Heroku. With great power comes great responsibility. That means that with the increased power come increased configuration steps. Amazon performs optimizations (both technical and billing-related) based on what tasks you're performing. You configure web and worker environments separately and deploy to them separately. Heroku does this for you, but in some cases you may not want to deploy both at once; Amazon leaves that configuration up to you.
Now, don't get me wrong, you might see this as a feature of Heroku, but in advanced configurations you might have entire teams working on and redeploying workers independently of your web tier. This means that the default on Amazon is basically that you set up two completely separate apps that might happen to share source code (but don't have to).
Basically, the answer to your question is no, there is nothing that will let you do what you're asking in as simple a manner as with Heroku. That doesn't mean it is impossible; it just means you need to set up the environments yourself instead of Heroku doing it for you.
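As a rough sketch with the EB CLI (the application and environment names are made up), you would create and deploy the two tiers yourself:

eb init my-app --platform node.js
eb create my-app-web                   # standard web server environment
eb create my-app-worker --tier worker  # worker tier environment, fed by an SQS queue
eb deploy my-app-web                   # each environment is deployed on its own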
For more info see:
Worker "dyno" in AWS Elastic Beanstalk
http://colintoh.com/blog/configure-worker-for-aws-elastic-beanstalk

Redis deployment configuration - master slave replication

Currently I have two servers on which I have deployed node.js/Express-based web service APIs. I am using Redis for caching JSON strings.
What would be the best option for deploying this setup to production? I see it advised here to go with a dedicated server for Redis. OK, I'll take that and use a dedicated server to run the Redis master. Can I use the existing app servers as slave nodes? Note: these app servers are running a Node/Express application.
What other options do I have?
You can.
It all depends on the load those other servers have; it's a problem of resource sharing. To be honest, my main issue with your architecture is not dedicated vs. non-dedicated servers; it's the fact that you are placing a Redis server (master or not) on a host that will most likely be facing the internet (the ExpressJS app), meaning it's quite exposed.
If you can simulate HTTP load on your Node/ExpressJS servers, compare some benchmark runs on your dedicated server vs. the non-dedicated ones:
On a running Redis server, type:
redis-benchmark -q -n 100000
If the app servers are being hammered and frequently using all cores, you should see a substantial difference in the benchmarks.
My suggestion is to go ahead with your first setup and add monitoring for Redis response times, and only act when you have to - which might be now, if the benchmarks show very poor results.
As a side note, consider not sharing hosts between services you expose to the internet and services that perform internal functions in your application.
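If you do reuse the app servers as slaves, a rough sketch of the slave-side redis.conf (the IPs are made up):

# point this instance at the dedicated master
slaveof 10.0.0.5 6379
# slaves should never take writes from the local app
slave-read-only yes
# listen on the private interface only, never the public one
bind 127.0.0.1 10.0.0.21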
