Using AWS OpsWorks, how do I start my node application?

I've created a simple stack in AWS OpsWorks consisting of a Node app server layer and an Elastic Load Balancer -- I'm trying to get my application to kick off on the Deploy lifecycle event. In other words, at some point I need the server to run node start.
I have the built-in Chef recipes, summarized by lifecycle event below:
Setup: opsworks_nodejs
Configure: opsworks_nodejs::configure
Deploy: opsworks_nodejs, deploy::nodejs
But when I SSH into my instance and check for running node processes, nothing comes up. I'm diving into the individual recipes now, but would appreciate any help or guidance on this task.

If you're running the default OpsWorks Chef recipes, you must make sure that your main app file is named server.js and that it listens on port 80 or 443.
See here for additional information - http://docs.aws.amazon.com/opsworks/latest/userguide/workinglayers-node.html
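
For reference, a minimal server.js that satisfies those defaults (port 80 per the linked docs):

const http = require('http');

// The default OpsWorks Node.js recipes look for server.js and expect it
// to listen on port 80 (or 443 for SSL).
http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from OpsWorks\n');
}).listen(80);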

Related

Run two applications in the same AWS Cloud9 instance (front-end and API)

I'm trying to run json-server on port 8082 to serve some mock data to the front-end I'm developing using create-react-app (via yarn start to get the dev server on port 8080). Although both servers run correctly in their respective terminals, I can only access the first one I run from the AWS-provided URL, thus making it impossible to perform any kind of HTTP request from the React app to the json-server.
How should I go about this? Probably running two EC2 instances would work, but that seems awfully inefficient...
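
One possible workaround, since only one port is exposed through the AWS-provided URL, is to route the mock-API calls through the dev server itself. A minimal sketch, assuming http-proxy-middleware is installed and that react-scripts picks up src/setupProxy.js (the /api path is illustrative):

// src/setupProxy.js
const { createProxyMiddleware } = require('http-proxy-middleware');

module.exports = function (app) {
  // Forward /api calls arriving at the dev server (8080) to json-server (8082),
  // so the browser only ever talks to the one URL Cloud9 exposes.
  app.use('/api', createProxyMiddleware({ target: 'http://localhost:8082', changeOrigin: true }));
};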

How to open a port for http traffic on ec2 from node app?

So I have an EC2 instance running a node app on port 3000 -- a very typical setup. However, I now need to run additional apps on this server, which are currently running on their own servers, also on port 3000. So I need to migrate them all to one server and presumably run them on different ports.
So if I want to run node apps on 3000, 3010, 3020, etc., how do I do this the right way?
You need to authorize inbound traffic to your EC2 instance via the AWS Console or the API. Here is a good description of how to do that:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html
Since authorizing is normally a one-off, it's probably better to do it through the AWS Console. However, if one of your requirements is to spin up node apps on different ports in an automated fashion, you'll probably want to look at this:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/EC2.html#authorizeSecurityGroupIngress-property
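
For the automated route, a minimal sketch with the AWS SDK for JavaScript (v2); the security group ID and port range here are hypothetical:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

// Open TCP 3000-3020 so each node app on its own port is reachable.
ec2.authorizeSecurityGroupIngress({
  GroupId: 'sg-0123456789abcdef0', // hypothetical security group ID
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 3000,
    ToPort: 3020,
    IpRanges: [{ CidrIp: '0.0.0.0/0' }] // tighten this in production
  }]
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Ingress rule added:', data);
});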

Watch logs from NodeJS on EC2

I have a single EC2 instance on AWS, running an HTTPS server with NodeJS.
I'm starting my NodeJS server from /etc/rc.local, so it starts automatically on every boot.
I have 2 questions:
Is there a better way to start an https server listening on port 443 without using sudo path/to/node myScript.js? What risks do I have if I run this process as root?
Where do I see my logs? When running the script from the shell, I see the logs of the process, but now that it runs from rc.local, how do I access the output of the server?
Thanks!
Starting the application using sudo definitely is not a good practice. You should not run a publicly accessible service with root credentials. If there is a flaw in your application and someone finds it, there is a danger that they can access more services on the machine.
Your application should listen on a non-privileged port (e.g. 5000) and sit behind nginx or Apache acting as a reverse proxy that forwards traffic internally to your application on port 5000. pm2 suggests a setup like that as well: http://pm2.keymetrics.io/docs/tutorials/pm2-nginx-production-setup. Searching online you will find tutorials on how to configure nginx to serve https and how to redirect all traffic from http to https; your application should not need to be aware of SSL certificates at all. Remember that the pm2 module should be installed locally within your project, and you should take advantage of package.json: there you can define a task that boots your application in production using the local pm2 module. The advantage is that you don't have to install the pm2 module globally, and you won't get tangled up again with permissions and superusers.
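As a hedged sketch of that layout (filename and port illustrative), the app itself stays plain HTTP on an unprivileged port while nginx terminates TLS on 443 and proxies to it:

// app.js -- plain HTTP on an unprivileged port; nginx terminates TLS on 443
// and proxies requests to 127.0.0.1:5000, so no sudo is needed here.
const http = require('http');

const PORT = process.env.PORT || 5000;

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from behind the proxy\n');
}).listen(PORT, '127.0.0.1', () => {
  console.log('App listening on ' + PORT);
});

With pm2 installed locally in the project, a package.json script like "start": "pm2 start app.js" works without a global install, since npm puts node_modules/.bin on the PATH for scripts.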
The log isn't saved anywhere unless you tell it to be in the rc.local script. How do you spawn the process there? Something like this redirects stdout and stderr to a file:
/path/to/node /path/to/myScript.js >> /var/log/my-app.rc.local.log 2>&1 & # append stdout and stderr from rc.local to a log file
Don't you use a logger in your application, though? I would suggest picking one (there are a lot available, like bunyan, winston, etc.) and substituting all of your console.logs with the logger. Then you can define explicitly in your application where the logs will be saved, and you get different log levels and more features in general.
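For instance, a minimal sketch with winston (the log file path is illustrative):

const winston = require('winston');

// One logger instead of scattered console.logs, writing to a known file.
const logger = winston.createLogger({
  level: 'info',
  format: winston.format.combine(winston.format.timestamp(), winston.format.json()),
  transports: [
    new winston.transports.File({ filename: '/var/log/my-app.log' }) // illustrative path
  ]
});

logger.info('server started');
logger.error('something failed');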
Not a direct answer, more of a small return on experience here.
We have a heavily used nodejs app in production on AWS, on a non-Docker setup (for now ;) ).
We have a dedicated user to run the node app; if you start your node process as root, it runs with root access, and that's not a safe thing to do.
To run the app we use pm2 as a process manager; it restarts the node process when it fails (and it will) and scales the number of workers to match the number of cores on your EC2 instance. You also have access to the logs of all the workers using ./node_modules/.bin/pm2 logs, and can send them to whatever you want (from ELK to Slack).
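As an illustration of that setup (app name and entry point are hypothetical), an ecosystem.config.js along these lines lets pm2 run one worker per core and restart on failure:

// ecosystem.config.js -- a minimal sketch
module.exports = {
  apps: [{
    name: 'my-app',        // hypothetical app name
    script: './server.js', // hypothetical entry point
    instances: 'max',      // one worker per CPU core
    exec_mode: 'cluster'   // cluster mode so workers share the port
  }]
};

Started with ./node_modules/.bin/pm2 start ecosystem.config.js; pm2 logs then tails all workers at once.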
My2cents.

Parallel deployment of express nodejs application to multiple application servers

I have a mean.io express/node js web application deployed on a Linode stack.
My 2 application servers run Ubuntu 14.04 and are accessed behind 2 HAProxy load balancers, also running Ubuntu 14.04.
Let us call Application server 1 => APP1 and Application server 2 => APP2
Currently, I deploy manually by:
1. Removing the APP1 entry from haproxy.cfg on both load balancers and restarting them.
2. Updating the code on APP1.
3. Removing the APP2 entry from haproxy.cfg on both load balancers and putting the APP1 entry back.
4. Restarting APP1.
5. Updating the code on APP2.
6. Putting the APP2 entry back in both haproxy.cfg files and restarting.
7. Restarting APP2.
I follow this process so that at any point in time the users of our web application get consistent data, even during deployment, i.e. the two app server instances are never running different copies of the code.
I am moving to automated deployment system and the 2 options I have looked at for deployment are Capistrano and Shipit JS.
They both provide ways to list multiple servers in their configuration, e.g. in Capistrano:
role :app, "APP1", "APP2"
and in Shipit JS:
shipit.config.servers = ['APP1', 'APP2']
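Expanding the Shipit JS side into a fuller, purely illustrative shipitfile.js, assuming the shipit-deploy plugin:

// shipitfile.js -- a minimal sketch; repo URL and paths are hypothetical
module.exports = shipit => {
  require('shipit-deploy')(shipit);

  shipit.initConfig({
    default: {
      repositoryUrl: 'https://github.com/example/app.git', // hypothetical repo
      deployTo: '/var/www/app',
      keepReleases: 3
    },
    production: {
      servers: ['APP1', 'APP2'] // deploy tasks run against both hosts
    }
  });
};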
So, my question is: how do these libraries make sure that both servers are updated in parallel before they are restarted? Is there a way they lock incoming requests to these servers during the update?
Do these deployment systems work only for a simple client/app-server architecture, or can they be used in systems where there is a load balancer?
Any explanation would be invaluable. I have tried my best to explain the situation here. If you need more input, please mention it below in the comments.

How to deploy live Node.js server using Heroku

Sorry for the confusing title; I'm new to Node.js and Heroku, but trying to quickly pick it up.
I currently own a domain, and my first goal is to have it set up so that when I go to the domain, it uses Heroku to wake up and run Node.js, run my web.js, and display "Hello world".
I've already gone through this heroku/node.js tutorial, so I understand how to set things up locally, push to the heroku remote, and run my Node.js server with it (am I understanding that correctly?). However, I couldn't find anything about how you actually put your Node.js files onto your external server (so, in this case, my domain) and connect the Heroku service to those files, allowing my local machine to interact with the node on my server.
If there are any tutorials or pages you'd recommend, I'd appreciate it. Kinda stuck and most likely confused on quite a few things here. Thanks!
Heroku apps have their own git repository. So, you push from your local git directory to heroku's git remote.
Setup:
git remote add heroku <git://yourherokurepourl.git>
and then every time to deploy:
git push heroku
That's all that's needed to get your node.js files onto Heroku's server. Heroku uses foreman as its process launcher. foreman needs a special file in the project root called a Procfile, which holds a simple unix command per line to launch each process:
web: npm install && node app.js
So, when you push your project to Heroku's git, it will look for the Procfile and launch the processes defined there. You can place more commands here. It's better to install foreman on your local/development machine and test using that.
In the Heroku app settings you can "bind" your www.domain.com address to the node app running on Heroku's server. You have to do the same in the settings of your domain provider. DNS will then route requests for www.domain.com to your app's server IP.
On Heroku, configuration lives in the environment. Many process.env.* values are available on Heroku. You can simulate this locally by providing .env files to foreman.
Finally, in your node.js code, make sure you listen on the value provided by process.env.PORT.
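A minimal web.js along those lines (the fallback port is just for local runs):

const http = require('http');

const port = process.env.PORT || 5000; // Heroku injects PORT; fall back locally

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello world\n');
}).listen(port, () => {
  console.log('Listening on ' + port);
});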
Connecting servers:
Use the Request module to call other servers' URLs directly.
Or let servers subscribe and publish to a centralized service bus.
You are describing operating two servers: one on Heroku, one "on your domain". I suspect you haven't made the connection that you can simply point your domain at your Heroku server. Contact your domain name provider with the Heroku URL you are using and they can do this for you.
In effect they will "point" your domain to your Heroku node.js server and from there it will act as you expect.
