Why can't I run two node.js AWS OpsWorks layers?

This might be very simple to fix, but it seems that I cannot deploy two node.js OpsWorks layers on AWS. I need one node.js layer for my web front-end, and a middle tier that consumes messages from a queue. I have the web node.js layer running, but when I try to add a second node.js layer, node.js is not one of the options in the drop-down. Is this intentional? I've been forced to create a second app for my node.js layer to deal with this, but it is an ugly solution, since by default the same Chef scripts run on all the node.js instances and on my load-balancing layer. Any help appreciated!

Creating a second App is the best way to go.
In your recipes, you can use the stack configuration and deployment attribute values to see which layer the current instance belongs to and decide what to do (if anything) when a configure/deploy lifecycle event runs.
On your front-end layer you would deploy the front-end app and ignore the second app, and vice versa on your middle-tier layer.
On your load balancer, you would probably do nothing on deploy.
To identify which app is being deployed during a Deploy lifecycle event, you can use Chef's search function together with the "deploy" attribute that OpsWorks sets on each application:
search(:aws_opsworks_app, "deploy:true").each do |app|
  # Your deploy logic here, e.g. only handle the app that belongs on this instance's layer
end

Related

What is the production deployment / runtime architecture of reSolve backend systems?

Does reSolve generally run as a single Node.js application on the server in a production deployment?
Of course, the event store and read models may be separate applications (e.g. databases), but are the CQRS read side and write side handled in the same Node.js application?
If so, could these be split so they can scale separately, given that the premise of CQRS is that the read side is usually much more active than the write side?
The reSolve Cloud Platform may alleviate these concerns, given the use of Lambdas that can scale naturally. Perhaps this is the recommended production deployment option?
That is, develop and test as a monolith (a single Node.js application) and deploy to the reSolve Cloud Platform in production to allow scaling?
A reSolve app can be scaled like any other Node.js app, using containers or any other scaling mechanism.
So several instances can work with the same event store, and it is possible to configure several instances to work with the same read database, or to give every instance its own read database.
reSolve's config logic is specified in the app's run.js code, so you can extend it to have different configurations for different instance types.
Or you can have the same code on all instances and just route commands and queries to different instance pools; a rough sketch of such routing is shown below.
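As an illustration of that last option, here is a minimal sketch of a plain Node.js reverse proxy that sends write-side traffic to one pool and everything else to another. The host names, ports, and the /api/commands path prefix are assumptions for the example, not reSolve defaults; adjust them to your actual deployment.

const http = require('http');

// Hypothetical upstream pools; point these at your real write-side and read-side instances.
const WRITE_POOL = { host: 'write-side.internal', port: 3000 };
const READ_POOL = { host: 'read-side.internal', port: 3001 };

http.createServer((req, res) => {
  // Assumption: command (write-side) requests arrive under /api/commands;
  // queries and everything else go to the read-side pool.
  const target = req.url.startsWith('/api/commands') ? WRITE_POOL : READ_POOL;

  const upstream = http.request(
    { host: target.host, port: target.port, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(8080);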
Of course, reSolve Cloud frees you from these worries; in that case you use local reSolve as your dev and test environment and deploy to the cloud.
Please note that reSolve Cloud has not been publicly released yet. Also, local reSolve may not have all the required database adapters at the moment; some have yet to be written.

Deploy node.js in production

What are the best practices for deploying a Node.js application in production?
I would like to know how Node.js APIs are deployed to production today; currently my application is in Docker and running locally.
I wonder if I should use Nginx inside the container and serve my app behind it, or just push the Node image that is already running today.
* I need load balancing.
There are a few main types of deployment that are popular today:
Using a platform as a service (PaaS) like Heroku
Using a VPS from a provider like AWS, DigitalOcean, etc.
Using a dedicated server
This list is in order of growing difficulty and control. It's easiest with a PaaS, but you get more control with a dedicated server - though it gets significantly more difficult, especially when you need to scale out and build clusters.
See this answer for more details on how to install Node on a VPS or a dedicated server:
how to run node js on dedicated server?
I can only add from experience on AWS, using a NAT gateway with a dedicated Node server and a MongoDB server behind the gateway. (Obviously this is for a scalable system and project.)
With or without Docker, you need to control the production environment. This means clearly defining which npm libraries you will need for production, how you handle environment variables, and how you cluster across cores.
I would suggest, very strongly, using a tool like PM2 to handle clusters, server shutdowns and restarts, and logs (and workers, if you need them and write code for them).
This list can go on and on, but keep in mind this is only from an AWS perspective. Setting up a gateway correctly on AWS is also not an easy process. Be prepared for some gotchas along the way.
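As a concrete example of the PM2 suggestion above, here is a minimal sketch of an ecosystem.config.js for cluster mode. The app name, entry point, and env values are assumptions; adjust them to your project.

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'api',                  // hypothetical app name
      script: './server.js',        // your entry point
      instances: 'max',             // one worker per CPU core
      exec_mode: 'cluster',         // PM2 load-balances requests across the workers
      max_memory_restart: '300M',   // restart a worker if it grows past this
      env: {
        NODE_ENV: 'production',
        PORT: 3000
      }
    }
  ]
};

You would then start it with pm2 start ecosystem.config.js, and use pm2 startup plus pm2 save so PM2 resurrects the processes after a server reboot.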

How can I deploy a web process and a worker process with Elastic Beanstalk (node.js)?

My Heroku Procfile looks like:
web: coffee server.coffee
scorebot: npm run score
So within the same codebase, I have 2 different types of process that will run. Is there an equivalent to doing this with Elastic Beanstalk?
Generally speaking, Amazon gives you much more control than Heroku. With great power comes great responsibility: the increased power comes with increased configuration steps. Amazon performs optimizations (both technical and billing) based on the tasks you're performing. You configure web and worker environments separately and deploy to them separately. Heroku does this for you, but in some cases you may not want to deploy both at once; Amazon leaves that configuration up to you.
Now, don't get me wrong, you might see this as a feature of Heroku, but in advanced configurations you might have entire teams working on and redeploying workers independently of your web tier. This means the default on Amazon is basically that you set up two completely separate apps that might happen to share source code (but don't have to).
Basically the answer to your question is no, there is nothing that will let you do what you're asking in as simple a manner as with Heroku. That doesn't mean it is impossible; it just means you need to set up your environments yourself instead of Heroku doing it for you.
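For what it's worth, an Elastic Beanstalk worker environment delivers SQS messages to your app as HTTP POSTs on a path you configure, so the scorebot process from the Procfile would typically become a small HTTP service. Below is a minimal sketch assuming Express, a /scorebot HTTP path configured for the worker environment, and a hypothetical scoreMessage function standing in for npm run score.

// worker.js - run in the Elastic Beanstalk worker environment
const express = require('express');
const app = express();

// The worker daemon may not set a JSON content type, so parse every body as JSON.
app.use(express.json({ type: '*/*' }));

// Assumption: the worker environment's "HTTP path" setting is /scorebot.
app.post('/scorebot', async (req, res) => {
  try {
    await scoreMessage(req.body); // hypothetical stand-in for `npm run score`
    res.sendStatus(200);          // 200 acknowledges the message so it is removed from the queue
  } catch (err) {
    console.error(err);
    res.sendStatus(500);          // non-2xx means the message will be retried
  }
});

async function scoreMessage(payload) {
  // real scoring logic goes here
}

app.listen(process.env.PORT || 8080);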
For more info see:
Worker "dyno" in AWS Elastic Beanstalk
http://colintoh.com/blog/configure-worker-for-aws-elastic-beanstalk

Deploy a java web application using AWS codeDeploy

I have a Java web application which has two layers (a business layer and a presentation layer; each layer has its own WAR file). I want to deploy this into AWS using AWS CodeDeploy, and I am using RDS MySQL as the database.
Can anyone tell me how I can deploy this application using CodeDeploy? Do I need two different appspec.yml files for the deployment?
This really depends on how you want to deploy your code. You need exactly one appspec per bundle you want to deploy.
If you want to deploy your businesses layer and your presentation layer separately, you will need two different deployments, two separate (logical) applications in CodeDeploy, two bundles, each with their own appspec. Each appspec would be responsible for stopping, configuring, and starting only one of the layers.
If you want to deploy both layers as part of the same deployment, then you will have one deployment, one (logical) application in CodeDeploy, and one bundle, which would have only one appspec. That appspec would be responsible for managing the lifecycle of both of your layers.
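For reference, here is a minimal sketch of what one such appspec.yml might look like for an EC2/on-premises deployment. The destination path and script names are assumptions for the example; adjust them to how your WARs are actually installed and started.

version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp        # hypothetical install location
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/configure.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root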

How do I handle multiple apps running on a single server?

I am new to Chef. I just finished creating a cookbook that deploys a node.js app, configures Nginx, and then starts the app as 1 or more workers that are "load balanced" by Nginx. It works great. I made sure to keep it pretty generic, and all the app level config is done via attributes.
Now I am trying to think about an instance where I have multiple node.js apps running on the same server. For example, the primary API app, and another app that registered itself as a Gearman worker.
How would I go about doing this? Would I simply create another cookbook that is specific to that app, make sure it includes the generic cookbook's recipe, and then do attribute overrides just for that app's recipe?
Or, would it be better if I moved away from using attributes for the app config, and used data_bags instead?
Any help is appreciated.
I would separate the Nginx and Node.js installation/configuration into separate cookbooks.
If you must have several different applications running on Node.js, I think it's OK to add a recipe for each application inside the node.js cookbook and make sure each of them includes the installation of Node.js itself.
If you must have several instances of one and the same application/service running, then it is better to use one recipe with different attributes or data bags to introduce differences among the instances (see the sketch below).
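For the second case, a data bag item per app is a common way to carry those differences. A minimal sketch, assuming a hypothetical data bag named node_apps with one item per app:

{
  "id": "api",
  "repo": "git@github.com:example/api.git",
  "port": 3000,
  "workers": 2
}

The recipe would then load the item with data_bag_item('node_apps', 'api') and use its values to template the Nginx vhost and the service definition.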
Any more specific questions?
You should use roles to manage multiple cookbooks on a server.
I'm not exactly sure of your scenario, but from your description, I would create three cookbooks: one that installs Nginx, one that installs your app, and one that does Node-specific configuration and deployment. Bundle these into an 'app_server' role and put the role in the run_list.
This makes your app more composable, and it's easier to change out any of the pieces in the future.
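A minimal sketch of such a role as a JSON role file; the cookbook and recipe names here are placeholders for whatever your three cookbooks are actually called:

{
  "name": "app_server",
  "description": "Nginx + Node.js application server",
  "run_list": [
    "recipe[nginx]",
    "recipe[nodejs]",
    "recipe[myapp::deploy]"
  ],
  "override_attributes": {
    "myapp": {
      "port": 3000,
      "workers": 2
    }
  }
}

You can then add role[app_server] to each app server's run_list.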
