I am using pm2 to deploy Node.js apps to production. Currently all the apps run on the same host, but I am going to move a few of them to different hosts. I know pm2 supports deploying to multiple servers by defining the host as an array, such as: "host" : ["212.83.163.1", "212.83.163.2", "212.83.163.3"], but this approach deploys every app to every host in the array. How can I deploy some apps to one host and other apps to another?
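A minimal sketch of one way this could be expressed: separate deploy targets in an ecosystem file, each with its own host list and a post-deploy command restricted to one app via --only (the hosts, repo, paths, and app names are placeholders):

```js
// ecosystem.config.js -- a sketch, not a verified deployment config
module.exports = {
  apps: [
    { name: "web-app", script: "./web.js" },
    { name: "worker-app", script: "./worker.js" },
  ],
  deploy: {
    // "pm2 deploy ecosystem.config.js web" pushes only web-app to these hosts
    web: {
      user: "node",
      host: ["212.83.163.1", "212.83.163.2"],
      ref: "origin/master",
      repo: "git@example.com:me/apps.git",
      path: "/var/www/web-app",
      "post-deploy": "npm install && pm2 startOrRestart ecosystem.config.js --only web-app",
    },
    // "pm2 deploy ecosystem.config.js worker" targets the remaining host
    worker: {
      user: "node",
      host: ["212.83.163.3"],
      ref: "origin/master",
      repo: "git@example.com:me/apps.git",
      path: "/var/www/worker-app",
      "post-deploy": "npm install && pm2 startOrRestart ecosystem.config.js --only worker-app",
    },
  },
};
```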
In addition, it would be good if pm2 supported a master-slave mode. I have looked at pm2's cluster mode, but it doesn't seem to work across a cluster of hosts; it is more like a cluster of Node.js processes on a single host.
Related
I have a Node.js application running as a microservice, and there are two scenarios:
- When a developer is working on his machine, the Node.js application should register with the Eureka service and be able to communicate with other microservices without specifying a URL or port for those services.
- When the application is deployed to a Kubernetes cluster, it should not register with Eureka and should instead use the existing Kubernetes Services within the cluster to communicate with other services, again without specifying a URL or port, only the Kubernetes service name.
I was thinking of using an environment variable that tells the Node.js service how to behave depending on the environment it is started in, as sketched below.
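A minimal sketch of that idea, assuming the eureka-js-client npm package for local registration; KUBERNETES_SERVICE_HOST is injected into every Kubernetes pod, while every other name and port here is a placeholder:

```js
// Sketch only: switch the discovery strategy based on the runtime environment.
const inKubernetes = Boolean(process.env.KUBERNETES_SERVICE_HOST);

// In Kubernetes, a Service name resolves through cluster DNS, so the
// service name alone is enough to reach a peer.
function serviceUrl(name, port = 80) {
  return `http://${name}:${port}`;
}

if (!inKubernetes) {
  // Local development: register with Eureka. The config shape follows the
  // eureka-js-client README, but treat it as illustrative.
  const { Eureka } = require("eureka-js-client");
  const client = new Eureka({
    instance: {
      app: "my-service",
      hostName: "localhost",
      ipAddr: "127.0.0.1",
      port: { $: 3000, "@enabled": true },
      vipAddress: "my-service",
      dataCenterInfo: {
        "@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo",
        name: "MyOwn",
      },
    },
    eureka: { host: "localhost", port: 8761, servicePath: "/eureka/apps/" },
  });
  client.start();
}
```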
We are pretty new to AWS and are looking to deploy multiple services onto one EC2 instance.
- Each micro-service is developed in its own repository.
- Each service will have its own endpoint URL.
- Services may talk to each other.
- Services can be updated/deployed separately.
Do we need a separate Beanstalk environment for each? I hope not.
Thank you in advance
The way we tackled a similar issue at our workplace was to leverage the multi-container Docker platform supported by Elastic Beanstalk in most AWS regions.
In brief: we had a dedicated repository for each of our services in ECR (Elastic Container Registry), to which the different "versioned" images were pushed by a deploy script.
Once that is configured and set up, all you need to do is deploy a Dockerrun.aws.json file, which lists all the apps you want to run as part of the Docker cluster on one EC2 instance (make sure it is big enough to handle multiple applications). This is also the file where you declare links between applications (so they can talk to one another), port configurations, and logging drivers and groups (yes, we used AWS CloudWatch for logging), among many other fields. This JSON is very similar to the docker-compose.yml you use to bring up your stack for local development and testing; a minimal sketch follows.
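Here is a sketch of a two-container Dockerrun.aws.json (version 2); the account ID, region, image names, ports, and log group are placeholders:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "users-service",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/users-service:1.0.0",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 3001, "containerPort": 3000 }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "my-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "users-service"
        }
      }
    },
    {
      "name": "frontend",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/frontend:1.0.0",
      "essential": true,
      "memory": 256,
      "portMappings": [{ "hostPort": 80, "containerPort": 3000 }],
      "links": ["users-service"]
    }
  ]
}
```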
I would suggest checking out the example configuration that Amazon provides for more information. I also found the Docker documentation to be pretty helpful in this regard.
Hope this helps!!
It is not clear whether you have a particular tool in mind. If you are already using a tool to deploy a single micro-service, deploying multiple ones should work the same way.
How does one deploy multiple micro-services in Node on a single AWS EC2 instance?
Each micro-service is developed in its own repository.
Services can be updated/deployed separately
This should be the same as deploying a single micro-service. As long as each service runs on its own path and port, it should be fine.
Each service will have its own endpoint URL
You can use nginx as a reverse proxy to route requests arriving on port 80 to the port of the required micro-service, as sketched below.
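A minimal nginx sketch of that routing (the paths and ports are placeholders):

```nginx
# Sketch only: map path prefixes on port 80 to per-service ports.
server {
    listen 80;

    location /users/ {
        proxy_pass http://127.0.0.1:3001/;   # users micro-service
    }

    location /orders/ {
        proxy_pass http://127.0.0.1:3002/;   # orders micro-service
    }
}
```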
Services may talk to each other
This again should not be an issue. Services can either call each other directly by port number, or go back through nginx using the fully qualified name.
I currently have a simple app consisting of a few micro-services (database, front-end Node app, user service, etc.), each with its own Dockerfile, and a docker-compose.yml file to bring them all up in a local deployment environment. Everything works fine with docker-compose up.
For production, I was looking at Heroku (though I am open to other PaaS providers), which does not support Docker Compose. Not especially nice, but I could live with it for now.
The thing is that with Docker Compose in local deployment, the different services are linked via their hostnames automatically (if the Mongo database service is called "mydatabase", I can use mongodb://mydatabase/whatever from within my other services), as in the sketch below.
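A minimal Compose sketch of that behavior (the service names and images are placeholders):

```yaml
# Sketch only: Compose services resolve one another by service name.
version: "3"
services:
  mydatabase:
    image: mongo
  web:
    build: .
    environment:
      # "mydatabase" resolves to the mongo container on the Compose network
      MONGO_URL: mongodb://mydatabase/whatever
    depends_on:
      - mydatabase
```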
So, the question is: what happens to those links on Heroku? What are the best practices for keeping the services linked consistently between development and production in this case?
Thanks!
Docker Compose creates a Docker virtual network that allows you to connect the containers using the container name as a hostname. Heroku doesn't directly support docker-compose, as Docker Compose is really intended for local development on your own machine, not for production.
For production, Docker has Docker Swarm, which is very similar to Docker Compose but is intended for production environments. You can use the same docker-compose file (called a stack file in Swarm) to deploy on Swarm.
In Docker Swarm you can connect containers using the same service names, just as you would in docker-compose; the commands below sketch this.
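A minimal sketch of deploying an existing Compose file as a Swarm stack (the stack name is a placeholder):

```sh
# Sketch only: turn this host into a single-node swarm, then deploy
# the same Compose file as a stack.
docker swarm init
docker stack deploy -c docker-compose.yml myapp

# Services in the stack reach each other by service name, e.g. "mydatabase".
docker service ls
```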
Heroku supports Docker Swarm via the DockerHero add-on, which you can use to have your Docker containers connected and running on Heroku.
In case anyone else comes across this while searching for solutions: Heroku offers an approach using a file similar to docker-compose.yml, called heroku.yml. You simply put it in the root of your project and structure it to call your Dockerfiles: https://devcenter.heroku.com/articles/build-docker-images-heroku-yml
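A minimal heroku.yml sketch (the process name and start command are placeholders, and the run section can be omitted if the Dockerfile's CMD is sufficient):

```yaml
# Sketch only: build the "web" process from a Dockerfile in the project root.
build:
  docker:
    web: Dockerfile
run:
  web: npm start
```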
How can I deploy an application in a Docker container on a cluster of machines, and configure it with settings like the database username and password and other application-specific values, without baking the settings into the container as a config file and without placing them on the machine (the machine is recyclable)? Environment variables are not an option either, because they are visible in logs and not really suited for passwords and private keys, in my opinion.
The application is a Node.js application; during development I run it with a JSON config file. The production environment will consist of multiple machines in an AWS ECS environment. The machines all run Docker in a cluster, the application itself is a Docker image, and multiple instances of the application run behind a load balancer that divides the load between them.
What you are looking for is Docker Swarm, which is responsible for running and managing containers on a cluster of machines. Docker Swarm has a very nice feature for securing configuration such as passwords, called Docker secrets.
You can create Docker secrets for usernames and passwords, and those secrets will be shared among the containers in the cluster in an encrypted and secure way, as sketched below.
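A minimal sketch of creating a secret and reading it from a Node.js service (the secret, service, and image names are placeholders):

```sh
# Sketch only: create a secret and attach it to a swarm service.
echo "s3cret-password" | docker secret create db_password -
docker service create --name app --secret db_password myimage:latest
```

Inside the container, each secret is mounted as a file under /run/secrets:

```js
// Sketch only: read the secret at startup instead of using an env var.
const fs = require("fs");
const dbPassword = fs.readFileSync("/run/secrets/db_password", "utf8").trim();
```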
I have created an Azure Service Fabric cluster using the "Windows 2016 DataCenter with Containers" OS and enabled the reverse proxy listening on port 80 on the cluster. We intend to deploy our legacy ASP.NET MVC and WCF applications to this cluster.
In our existing deployment model we have services co-located on the same host, communicating with each other, because of chatty communication and low-latency requirements. Is it possible for apps hosted in a Windows container to communicate with other Windows container apps on the same node? Basically, I would like node affinity between two Windows container apps on the same Service Fabric node.
I tried it with a sample application, and it looks like the only option is to use the container's private IP, which is assigned dynamically. Is it possible to pass the --ip-address parameter when instantiating containers on a Service Fabric cluster?
[Update: 04/05/2017]
Service Fabric Cluster Version: 5.5.219.0
Total NodeTypes: 1
Total Nodes: 3
Both containers are deployed on all three nodes. An Azure internet-facing load balancer is used to expose the Service Fabric reverse proxy. All services are accessed internally as well as externally via the reverse proxy.
Gaurav
There is what you can do now in the v5.5 release and what is coming in the next release.
For v5.5 Release
This GitHub repo, https://github.com/Azure-Samples/service-fabric-dotnet-containers, shows how to communicate between Windows containers by mapping a host port to the private port on the container, using a PortBinding policy in the application manifest, as in the snippet below.
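A minimal sketch of such a PortBinding policy in ApplicationManifest.xml (the manifest, package, and endpoint names are placeholders):

```xml
<!-- Sketch only: bind the endpoint's host port to port 80 inside the container. -->
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="WebServicePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <PortBinding ContainerPort="80" EndpointRef="WebServiceEndpoint" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>
```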
You can then communicate either by calling the Naming Service over REST (this is what the sample does) or, as you say, and more easily, through the reverse proxy, which has a URL format. If both containers are deployed on the same machine, the communication goes via the local reverse proxy and you get local calls between the containers. See https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy. As Mikkel points out, you have to tell Service Fabric that you want the containers on the same VM, through placement constraints.
Note: for the GitHub sample above to work on your local machine, you need to update the local dev manifest ClusterManifestTemplate.xml in the C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\FiveNode folder and change the IPAddressOrFQDN attribute from "localhost" to the actual IP of the machine it is running on. This is due to a Windows Server networking bug documented in the Git repo above, i.e.
<Node NodeName="_Node_0" IPAddressOrFQDN="ComputerFullName" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
becomes
<Node NodeName="_Node_0" IPAddressOrFQDN="192.1.3.50" IsSeedNode="true" NodeTypeRef="NodeType0" FaultDomain="fd:/0" UpgradeDomain="0" />
Coming in the v5.6 Release
In the next release we have added a DNS server layered on top of the Naming Service, so that you can use DNS names in place of the fabric:/ names used by the reverse proxy. This means that within the Service Fabric cluster you can call between the containers with http://[domainname]/path instead. The same rules apply for calls between local containers on the same machine as for the reverse proxy.
Using service affinity would be an option: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-resource-manager-advanced-placement-rules-affinity. A sketch of how this might be declared is shown below.
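A minimal sketch of declaring affinity between two services in the application manifest's default services (the service and type names are placeholders, and the exact element layout should be verified against the current manifest schema):

```xml
<!-- Sketch only: keep ServiceB on the same nodes as ServiceA. -->
<DefaultServices>
  <Service Name="ServiceB">
    <StatelessService ServiceTypeName="ServiceBType" InstanceCount="-1">
      <SingletonPartition />
      <ServiceCorrelations>
        <ServiceCorrelation ServiceName="fabric:/MyApp/ServiceA" Scheme="Affinity" />
      </ServiceCorrelations>
    </StatelessService>
  </Service>
</DefaultServices>
```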