In my company we don't use AWS, Azure, or any other cloud provider yet, so we deploy our Node.js applications (based on Express.js) on physical or virtual machines (Linux).
However, is it possible to use the Serverless Framework just like an Express.js server?
I saw the serverless-offline plugin, and it launches a server (endpoints, functions) that I can access.
But can I use Serverless this way in production?
No, the Serverless Framework is really only useful if you're deploying to a cloud (and mostly just AWS). serverless-offline will run a small Node.js server, but it's emulating AWS Lambda. So if you'll never use Lambda, there's no real point in emulating it.
In your case, just run a regular Node.js server.
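For completeness, a minimal sketch of such a plain Express server, started with node directly and no Serverless Framework involved (the port and route are placeholder assumptions):

```js
// server.js -- a minimal plain Express server; no Serverless Framework needed.
// The port and the /health route are illustrative choices, not requirements.
const express = require('express');

const app = express();
const PORT = process.env.PORT || 3000;

app.get('/health', (req, res) => {
  res.json({ status: 'ok' });
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
```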
The Serverless Framework is not designed for such a use case; it is a tool for deployments to cloud environments. Offline mode is just a simulation that is meant to run in a local environment, not in production.
Another option would be to use a process manager like PM2. To deploy to the virtual machine you can use tools like Ansible, and then PM2 will take care of the runtime.
It also has some neat features: for example, if your server crashes, it can automatically revive it; cluster mode can run multiple Node.js instances in the virtual machine to utilize all the CPU cores, which can boost your performance if you run a stateless cluster. It covers more than serverless-offline is designed for, and if you run without containers this would be my next best bet.
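As a sketch of what that looks like, a hypothetical PM2 ecosystem file with cluster mode enabled (the app name, script path, and port are assumptions):

```js
// ecosystem.config.js -- app name, script path, and port are assumptions for illustration.
module.exports = {
  apps: [
    {
      name: 'my-express-app',   // hypothetical app name
      script: './server.js',    // entry point of the Express app
      exec_mode: 'cluster',     // cluster mode: load-balance across workers
      instances: 'max',         // one worker per available CPU core
      env: {
        NODE_ENV: 'production',
        PORT: 3000,
      },
    },
  ],
};
```

Start it with pm2 start ecosystem.config.js; running pm2 startup followed by pm2 save makes the processes come back after a reboot.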
I am working on an app that needs to be self-hosted on a Windows 10 PC where all the clients are inside a company network. I am using docker-compose for the various microservices and I am considering JHipster for an API Gateway and Registry service.
As I am reading the documentation, there is a line in the JHipster docs (https://www.jhipster.tech/jhipster-registry/) that says "You can run a JHipster Registry instance in the cloud. This is mandatory in production, but this can also be useful in development". I am not a cloud expert, so I am not sure what is different in the environments that would cause the Registry app issues when running on a local PC. Or perhaps there is a way to give the local Windows PC a 'cloud' environment.
Thanks for any insight you have for me.
I think that this sentence does not make sense.
Of course, you must run a registry if you generated apps with this option, but that does not mean you must run it in the cloud. The same doc shows you many alternative ways to run it.
Running all your microservices on a single PC is a bit strange; it defeats the purpose of a microservice architecture because you have a single point of failure that can't scale. Basically, you are paying for extra complexity without getting the benefits, where a monolith would be so much simpler and more productive.
How many developers are working on your app?
Does reSolveJS generally run as a single Node.js application on the server for production deployment?
Of course, the event store and read models may be separate applications (e.g., databases), but are the CQRS read-side and write-side handled in the same Node.js application?
If so, can / could these be split to enable them to scale separately, given the premise of CQRS is that the read-side is usually much more active than the write-side?
The reSolve Cloud Platform may alleviate these concerns, given the use of Lambdas that can scale naturally. Perhaps this is the recommended production deployment option?
That is, develop and test as a monolith (a single Node.js application) and deploy to the reSolve Cloud Platform in production to allow scaling?
Thanks again for developing and sharing an innovative platform.
Cheers,
Ashley.
A reSolve app can be scaled like any other Node.js app, using containers or any other scaling mechanism.
So several instances can work with the same event store, and it is possible to configure several instances to work with the same read database, or for every instance to have its own read database.
reSolve config logic is specified in the app's run.js code, so you can extend it to have different configurations for different instance types.
Or you can have the same code in all instances and just route commands and queries to different instance pools.
Of course, reSolve Cloud frees you from these worries; in that case you use local reSolve as your dev and test environment and deploy there.
Please note that reSolve Cloud is not yet publicly released. Also, local reSolve does not have all the required database adapters at the moment, so those have yet to be written.
I'm looking for any method or tutorial to deploy a Node.js Express app on a local server/computer. This will be for a production environment. All I read about solutions like Zeit Now, localtunnel, forever, PM2 and similar is that they aren't recommended for production environments. The idea is to have a public web app without hosting. I need the method to allow keeping more than one node/web app active at the same time.
When people say a component is not recommended for production, it does not mean that it is not stable. Most of the time it means that it is not a full-blown solution that considers all the aspects of a production deployment:
scalability
fail-over
security
configurability
automation
etc.
If you are trying to build a solution that has precise requirements (requests per second, media streaming, etc.), you should include them in your question as well to make it concrete. If this is not the case, you just have to install a basic setup that runs your configuration and fix bottlenecks as they appear. Don't try to build a theoretically correct solution now.
A couple of examples:
A classical setup (goes well with Do-It-Yourself deployments)
install Git + (Node.js and NPM) + (Forever or equivalent) + your database (e.g. MongoDB) + (NGINX or HAProxy) on your favourite/accepted Linux distribution
clone each Node.js app in its own directory
install cronjobs for basic monitoring and maintenance
add scripts to dynamically remove/add NGINX web server configurations based on deleted/added Node.js apps (a sketch of such a script follows this list)
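As a rough illustration of that last item, a sketch of such a script in Node.js; the directory layout, port scheme, domain names, and config path are all assumptions to adapt:

```js
// regenerate-nginx.js -- a rough sketch (needs root); all paths, ports, and the domain
// scheme are assumptions. It scans a directory of deployed Node.js apps and writes one
// NGINX server block per app, then validates and reloads NGINX.
const fs = require('fs');
const { execSync } = require('child_process');

const APPS_DIR = '/srv/node-apps';                     // hypothetical: one subdirectory per app
const NGINX_CONF = '/etc/nginx/conf.d/node-apps.conf'; // hypothetical generated config file

const blocks = fs.readdirSync(APPS_DIR).map((name, i) => {
  const port = 3000 + i;                               // hypothetical port assignment scheme
  return `
server {
    listen 80;
    server_name ${name}.example.com;
    location / {
        proxy_pass http://127.0.0.1:${port};
        proxy_set_header Host $host;
    }
}`;
});

fs.writeFileSync(NGINX_CONF, blocks.join('\n'));
execSync('nginx -t && systemctl reload nginx');        // validate config, then reload NGINX
```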
A more modern setup (goes well with AWS/GCE deployments but also possible locally with tools like skaffold)
install a Kubernetes cluster on a couple of machines
prepare a base Docker container image that matches all your Node.js applications
if required, add a Dockerfile to each Node.js application to build one Docker image per application based on the base Docker container image
add a new deployment for each of your Node.js applications
Kubernetes will handle the "keep-alive" for you (see the health-check sketch after this list)
fill in the plumbing between your server network (DNS, IP, ports) and the IPs provided to you by Kubernetes (NGINX or HAProxy would also fill in this hole)
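On the application side, that "keep-alive" usually boils down to a probe endpoint and a clean shutdown on SIGTERM. A minimal sketch, assuming Express and a /healthz route (both illustrative choices, not requirements):

```js
// A sketch of the application-side half of Kubernetes' "keep-alive": a health endpoint
// for liveness/readiness probes and a graceful shutdown on SIGTERM. Port and path are
// assumptions; point the deployment's probes at whatever path you choose.
const express = require('express');

const app = express();
const server = app.listen(process.env.PORT || 3000);

// Probe endpoint: livenessProbe/readinessProbe hit this path.
app.get('/healthz', (req, res) => res.status(200).send('ok'));

// Kubernetes sends SIGTERM before killing the pod; stop accepting connections and drain.
process.on('SIGTERM', () => {
  server.close(() => process.exit(0));
});
```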
What are the best practices for deploying a Node.js application in production?
I would like to know how Node.js APIs are being deployed to production today; currently my application is in Docker and running locally.
I wonder if I should run NGINX inside the container and put my server behind it, or just push the Node image that is already running today.
* I need load balancing
There are a few main types of deployment that are popular today.
Using a platform as a service (PaaS) like Heroku
Using a VPS from a provider like AWS, DigitalOcean, etc.
Using a dedicated server
This list is in order of growing difficulty and control. So it's easiest with a PaaS, but you get more control with a dedicated server - though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS, using a NAT gateway with a dedicated Node server and a MongoDB server behind the gateway. (Obviously this is a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which npm libraries you will need in production, how you handle environment variables, and how you cluster across CPU cores.
I would suggest, very strongly, using a tool like PM2 to handle clusters, server shutdowns and restarts, and logs (and workers, if you need them and code for them).
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.
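PM2 automates per-core clustering for you, but for reference, a rough sketch of what that amounts to with Node's built-in cluster module (the trivial request handler and the port are assumptions):

```js
// A rough sketch of per-core clustering with Node's built-in cluster module --
// essentially what PM2's cluster mode automates. Port and handler are placeholders.
const cluster = require('cluster');
const os = require('os');
const http = require('http');

if (cluster.isMaster) {
  // Fork one worker per CPU core and revive any worker that dies.
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', () => cluster.fork());
} else {
  // Workers share the same port; the cluster module distributes connections.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(process.env.PORT || 3000);
}
```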
Are there any Linux scripts for uploading a Node.js app to my own Linux server?
Something like AppFog or Heroku. I have a dedicated Linux server and I am working on Linux too.
I want to upload my Node.js application to the server and restart Node.js with one shell command.
I can write a script, but maybe I don't need to reinvent the wheel?
Popular choices using SSH (a minimal deploy sketch follows this list):
rsync
fabric
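Either of these can be wrapped so that a single command does the whole round trip. A hedged sketch of an rsync-over-SSH deploy driven from Node; the host, paths, and the PM2 restart step are assumptions, not a prescribed setup:

```js
// deploy.js -- run with `node deploy.js`. A sketch of a one-command deploy: rsync the code
// over SSH, install production dependencies remotely, then restart via PM2.
// HOST, REMOTE_DIR, and the ecosystem file are hypothetical; adapt to your server.
const { execSync } = require('child_process');

const HOST = 'deploy@my-server.example.com';   // hypothetical SSH target
const REMOTE_DIR = '/srv/my-app';              // hypothetical remote app directory

// Copy the app (excluding node_modules) to the server.
execSync(`rsync -az --delete --exclude node_modules ./ ${HOST}:${REMOTE_DIR}`, { stdio: 'inherit' });

// Install deps and (re)start the process on the remote machine.
execSync(
  `ssh ${HOST} "cd ${REMOTE_DIR} && npm install --production && pm2 startOrRestart ecosystem.config.js"`,
  { stdio: 'inherit' }
);
```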
For serious stuff you really should look at configuration management and server provisioning applications like (in no particular order):
Chef
Puppet
Ansible (+1 for the name, "Ender's Game" is one of my favorite books)
Most revision control systems allow for "before/after-commit" hooks; sometimes I use these hooks to run tests before commits and to automatically deploy to the acceptance environment after commits.
See also Jenkins CI (Continuous Integration is a hot topic).
I use fleet from substack to manage deployment. Fleet is a git-based tool that allows you to deploy code and manage your node processes running on remote servers.
Adding in seaport and either bouncy or node-http-proxy is a great way to build an application that is made up of lots of small components that work together.
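For instance, a minimal sketch of fronting two hypothetical components with node-http-proxy; the path-to-port mapping is made up, and in practice seaport could hand out the ports instead of hard-coding them:

```js
// A minimal sketch of routing requests to small backend components with node-http-proxy.
// The components, paths, and ports below are invented for illustration.
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({});

// Hypothetical components, each a small Node.js process on its own port.
const routes = {
  '/auth': 'http://127.0.0.1:3001',
  '/api':  'http://127.0.0.1:3002',
};

http.createServer((req, res) => {
  const prefix = Object.keys(routes).find((p) => req.url.startsWith(p));
  if (!prefix) {
    res.statusCode = 404;
    return res.end('no such component\n');
  }
  // Forward the request; report a 502 if the component is down.
  proxy.web(req, res, { target: routes[prefix] }, () => {
    res.statusCode = 502;
    res.end('component unavailable\n');
  });
}).listen(8080);
```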