I have a single EC2 instance on AWS, running an HTTPS server with Node.js.
I'm starting my Node.js server from /etc/rc.local, so it will start automatically on every boot.
I have 2 questions:
Is there a better way to start an https server listening on port 443 without using sudo path/to/node myScript.js? What risks do I have if I run this process as root?
Where do I see my logs? When running the script from the shell, I see the output of the process, but now that it runs from rc.local, how do I access the output of the server?
Thanks!
Starting the application using sudo is definitely not good practice. You should not run a publicly accessible service with root privileges: if there is a flaw in your application and someone finds it, they may be able to reach other services on the machine.
Your application should listen on a non-privileged port (e.g. 5000), with nginx or Apache in front as a reverse proxy that forwards traffic internally to your application running on port 5000. pm2 suggests a setup like that as well: http://pm2.keymetrics.io/docs/tutorials/pm2-nginx-production-setup. Searching online you will find tutorials on how to configure nginx for HTTPS and how to redirect all HTTP traffic to HTTPS. Your application should not need to be aware of SSL certificates etc.

Remember that the pm2 module should be installed locally within your project, and you should take advantage of the package.json: there you can define a task that boots your application in production using the local pm2 module. The advantage is that you don't have to install pm2 globally, so you won't run into permission and superuser problems again.
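For example, the relevant part of such a package.json might look like this (a hedged sketch; the script names, entry file, app name, and version are assumptions, not from the original post):

```json
{
  "scripts": {
    "start": "node server.js",
    "production": "pm2 start server.js --name my-app"
  },
  "dependencies": {
    "pm2": "^5.3.0"
  }
}
```

Because `npm run` puts node_modules/.bin on the PATH, `npm run production` picks up the locally installed pm2 without needing a global install.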
I don't think the log is saved anywhere unless you tell it to happen in the rc.local script. How do you spawn the process there? Something like this should redirect the output to a file:
path/to/node myScript.js >> /var/log/my-app.rc.local.log 2>&1 & # append stdout and stderr to a log file, and background the process so rc.local doesn't block
Don't you use a logger in your application, though? I would suggest picking one (there are a lot available, like bunyan, winston etc.) and substituting all of your console.log calls with the logger. Then you can define explicitly in your application where the logs are saved, you can have different log levels, and more features in general.
Not a direct answer, more of a small return on experience here.
We have a heavily used Node.js app in production on AWS, on a non-Docker setup (for now ;) ).
We have a dedicated user to run the node app. If you start your node process as root, it runs with root privileges, and that's not a safe thing to do.
To run the app we use pm2 as a process manager; it allows restarting the node process when it fails (and it will), and scaling the number of workers to match the number of cores of your EC2 instance. You also have access to the logs of all the workers using ./path/to/node ./node_modules/.bin/pm2 logs, and can send them to whatever you want (from ELK to Slack).
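That worker scaling can be captured in a pm2 ecosystem file; a hedged sketch (the file name, app name, script path, and log paths are assumptions, not from the original setup):

```javascript
// ecosystem.config.js — sketch of a pm2 config; names and paths
// below are assumptions, not taken from the original setup.
const config = {
  apps: [{
    name: 'my-app',
    script: './server.js',
    instances: 'max',       // one worker per CPU core
    exec_mode: 'cluster',   // workers share the same port
    autorestart: true,      // restart the process when it fails
    out_file: '/var/log/my-app.out.log',
    error_file: '/var/log/my-app.err.log'
  }]
};

module.exports = config;
```

With this in place, `pm2 start ecosystem.config.js` boots the cluster and `pm2 logs` tails all workers at once.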
My2cents.
Sorry if this is the wrong question for this forum, but I am simply stuck and need some advice. I have a shared hosting service and a cloud-based hosting server with Node.js installed. I want to host my website as normal, but I also want to add real-time chat and location tracking using Node.js. I am confused by what I am reading in several places, because Node.js is itself a server but is not designed to host websites? So I have to run 2 different servers, one for the website and one to run Node.js? When I set up the cloud one with a Node.js script running, I can no longer access the webpages.
What's the best way for me to achieve this, as I am just going round in circles? Also, is there a way I can set up a server on my PC and run and test both of these together beforehand, so I see what is needed and get it working? It would stop me ordering servers I don't need.
Many thanks for any help or advice.
Node can serve webpages using a framework like Express, but it can cause conflicts if run on the same port as another webserver program (Apache, etc.). One solution could be to serve your webpages through your webserver on port 80 (or 443 for HTTPS) and run your Node server on a different port in order to send information back and forth.
There are a number of ways you can achieve this but here is one popular approach.
You can use NGINX as your front facing web server and proxy the requests to your backend Node service.
In NGINX, for example, you will configure your upstream service as follows:
upstream lucyservice {
    server 127.0.0.1:8000;
    keepalive 64;
}
The 8000 you see above is just an example; you may be running your Node service on a different port.
Further in your config (in the server config section) you will proxy the requests to your service as follows:
location / {
    proxy_pass http://lucyservice;
}
Your Node service can run under a process manager like forever or pm2. You can have multiple Node services running in a cluster, depending on how many processors your machine has.
So to recap: your front-facing web server handles all traffic on port 80 (HTTP) and/or 443 (HTTPS) and proxies the requests to your Node service running on whatever port(s) you define. All of this can happen on one single server, or multiple if you need/desire.
I have two apps
/app1
/app2
I am running both on same port
using
app
  .use('/app1', require('./app1/app').app)
  .use('/app2', require('./app2/app').app)
  .listen(80);
How can I restart only one app, without affecting the other?
Thanks
You are not running two apps on one port; you are running one app (one Node.js process) that handles several API paths. So if by "restart" you mean stopping and then starting the process, it is impossible to restart only one of them.
So I have an ec2 instance running a node app on port 3000, very typical setup. However I now need to run additional apps on this server, which currently are running on their own servers, also on port 3000. So I need to migrate them all to one server, and presumably run them on different ports.
So if I want to run node apps and have them on 3000, 3010, 3020, etc, how do I do this the right way?
You need to authorize inbound traffic to your EC2 instance via the AWS Console or the API. Here is a good description of how to do that:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
Since authorizing is normally a one-off, it's probably better to do it through the AWS Console. However, if one of your requirements is to spin up Node apps on different ports in an automated fashion, you'll probably want to look at this:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#authorizeSecurityGroupIngress-property
So I created a node app that uploads pictures, and the app works locally: I can upload stuff from all of my home devices and it ends up in my designated upload folder. The next step is to go global, so I moved the app to an FTP server and... I don't know how to start it. I can't go
node server.js
like I do on my PC in cmd, can I? I open my index page, but when I upload something I get: Server responded with 0 code. Just like when I open my index.html without starting the node app through cmd on my PC. I'm a front-end guy, I don't know almost anything about servers, and I've searched quite a bit around the internet, but to little avail.
One fairly quick way to get this set up would be to sign up for a virtual server on Amazon using their EC2 server instances. Just choose the basic instance (whichever one is free for the first year), then install Node and run npm install in your root directory once you have uploaded the files. If you want this site accessible with your own domain, you will also have to set up an Elastic IP address, also available via Amazon (AWS). Furthermore, if you want your URL accessible via the standard port 80 (meaning that you don't have to type your url:[port number]/path), you might want to look into setting up a reverse proxy using something like nginx.
I know this sounds like a lot, and I won't lie to you, it is kind of complicated, but there is a lot more to getting a node application up and running than you might expect.
Before a node app can run on a server you need to make sure that:
Node is installed
npm install has run successfully, or all dependencies have been transferred to the right place in the app directory
The port of the node server should be reachable, so the routing must be set up correctly.
Also, you can't start a program from an ftp prompt usually.
I've created a simple stack in AWS OpsWorks consisting of a Node app server layer and an Elastic Load Balancer. I'm trying to get my application to start on the Deploy lifecycle event; in other words, at some point I need the server to run node start
I have the built-in Chef recipes, summarized by life-cycle event below:
Setup: opsworks_nodejs
Configure: opsworks_nodejs::configure
Deploy: opsworks_nodejs, deploy::nodejs
But when I SSH into my instance and check for running node processes, nothing comes up. I'm diving into the individual recipes now, but would appreciate any help or guidance on this task.
If you're running the default OpsWorks Chef recipes, you must make sure that your main app file is named server.js and that it listens on port 80 or 443.
See here for additional information - http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-node.html