I am using the docker-API module to access the service list and the replica information. It works fine in swarm mode on a manager node, but if the app runs on a worker node, it gives me the following error:
"unexpected - This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again". Following is my code:
const { Docker } = require('node-docker-api')
const docker = new Docker({ socketPath: '/var/run/docker.sock' })
// service.list() calls the swarm HTTP API, which only manager nodes serve
const services = await docker.service.list()
Is it possible to get the service list on both node types by giving the worker node some option or permission? Also, is it possible to get it when running the app with docker-compose or Kubernetes?
As the Docker docs state:
Worker nodes don’t participate in the Raft distributed state, make
scheduling decisions, or serve the swarm mode HTTP API.
Possible solutions:
You may consider running only manager nodes in your swarm topology.
Alternatively, you can use the placement constraint --constraint node.role==manager for your Node.js-based Docker service, as shown in the sketch after this list.
This way your service containers would run only on manager nodes, which serve the swarm mode HTTP API.
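For example, a minimal sketch of deploying such a service with the constraint (the image name my-node-app is an assumption, and the Docker socket is mounted so the app can reach the API):
docker service create \
  --name my-node-app \
  --constraint node.role==manager \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  my-node-app
With docker-compose (a version 3 file deployed via docker stack deploy), the equivalent is a deploy.placement.constraints entry of node.role == manager.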
Related
I am in a Kubernetes cluster with two services running. One of the services exposes an endpoint like /customer/servcie-endpoint, and the other service is a Node.js application which is trying to access data from that endpoint. Axios doesn't work with just the path, as it needs a host to work with.
If I open a shell with kubectl exec and run curl against /customer/servcie-endpoint, I receive all the data.
I am not sure how to get this data in a Node.js application. Sorry for the naive ask!
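A minimal sketch of what the in-cluster call could look like, assuming the first service is exposed through a Kubernetes Service named customer-service (an assumed name) in the same namespace, so cluster DNS resolves that name to a host:
const axios = require('axios')
// customer-service is an assumed Service name; inside the cluster it resolves
// via kube-dns/CoreDNS, so it can be used as the host for axios
async function fetchCustomerData () {
  const res = await axios.get('http://customer-service/customer/servcie-endpoint')
  return res.data
}
If the services live in different namespaces, the fully qualified name (customer-service.<namespace>.svc.cluster.local) would be needed instead.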
I have two Docker containers running: one with Node, which is my backend code, and another with a React build served by Node. How do I link the containers so that my React API calls are fulfilled by my Node backend container?
I'm running the containers on ECS and routing with an ALB; my containers are in different target groups but in the same cluster.
Actual Result:
Both containers are running individually, but there is no response to the API calls made from React.
Expected Result:
API requests should be fulfilled by the other Node container.
There is no way to link two different task definitions; you should run both containers in one task definition. You can only link containers within a single task definition. With two different task definitions and two different target groups, you need service discovery or an internal load balancer, so it is better to use one of those in that case.
If you configure both containers in the same task definition, you can refer to and access the other container by its container name along with the port. But I would not suggest Docker linking on AWS; I think such an approach is not good for production.
One of the biggest disadvantages is that if you want to scale one container in the service, all the containers in the task definition are scaled together.
You can link this way: go to the task definition -> Container -> Networking -> add the name of the second container, as shown in the image. A sketch of the equivalent JSON is below.
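A minimal sketch of the relevant containerDefinitions fields, assuming container names node-backend and react-app (both assumed) and the bridge network mode, which is what container links require:
{
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "node-backend",
      "portMappings": [{ "containerPort": 3000 }]
    },
    {
      "name": "react-app",
      "links": ["node-backend"]
    }
  ]
}
The React container can then call the backend at http://node-backend:3000.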
You can explore service discovery and load balancers further here and here.
How can I deploy an application in a Docker container on a cluster of machines and configure it with settings like the database username and password and other application-specific settings, without baking the settings into the container as a config file and without placing the settings on the machine itself, since the machine is recyclable? Environment variables are also not an option, because they are visible in logs and not really suited for passwords and private keys, in my opinion.
The application is a Node.js application; when developing, I run it with a JSON config file. The production environment will consist of multiple machines in an AWS ECS environment. The machines all run Docker in a cluster, the application itself is a Docker image, and multiple instances of the application will run behind a load balancer dividing the load between the instances.
What you are looking for is Docker Swarm, which is responsible for running and managing containers on a cluster of machines. Docker Swarm has a very nice feature for securing configuration such as passwords, called Docker secrets.
You can create Docker secrets for usernames and passwords, and those secrets will be shared among the containers in the cluster in an encrypted and secure way.
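A minimal sketch of how the application would consume one, assuming a secret named db_password (an assumed name) created with docker secret create and attached to the service with --secret db_password; Swarm mounts it as a plain file inside the container:
// Secrets attached to a swarm service appear as read-only files under
// /run/secrets/<secret-name>; db_password is an assumed secret name
const fs = require('fs')
const dbPassword = fs.readFileSync('/run/secrets/db_password', 'utf8').trim()
The value never appears in the image, in environment variables, or in docker inspect output.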
I want to know if there is a secure distributed container system for Docker.
I want to deploy an encrypted container from a master server (private) and execute it on a public worker server (untrusted).
Do you know a solution able to do that?
Thank you in advance for your answer.
Docker Swarm is Docker's built-in orchestration framework. Swarm allows you to run containers on different machines. By default, communication between the machines collaborating in a Docker swarm is encrypted and secure.
Swarm has manager (master) nodes and worker nodes. Manager nodes distribute containers to be executed on worker nodes. Therefore, your use case can be handled by Swarm.
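A minimal sketch of that flow, assuming two machines and an image called my-encrypted-app (an assumed name):
# on the private master/manager machine
docker swarm init
# on the public worker machine, using the join token printed by the manager
docker swarm join --token <worker-token> <manager-ip>:2377
# back on the manager: schedule the workload on worker nodes only
docker service create --name my-encrypted-app --constraint node.role==worker my-encrypted-app
Swarm's control-plane traffic between nodes is mutually TLS-encrypted by default; if the application traffic itself must also be encrypted, the overlay network carrying it can be created with --opt encrypted.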
I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using the Mesos DNS to get ahold of the MySQL container host (for now I don't really care which container I get ahold of). I set the WORDPRESS_DB_HOST environment var to mysql.marathon.mesos and specified the host of MySQL container as suggested here.
I created a new rule for the Agent Load Balancer and a probe for port 3306 in Azure itself; this worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See the screenshot attached.)
Update: If I SSH into the master node, I can dig mysql.marathon.mesos; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or production), you can either use Marathon's persistent volumes feature (simple, but no automatic failover/HA for the data) or, since you are on Azure, a more robust solution like the one I showed here (essentially mounting a file share). A sketch of the first option follows.
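A rough sketch of what a Marathon app definition with a local persistent volume can look like (image, sizes, and paths are assumptions; the persistent volume is mounted into the container via a second hostPath volume entry):
{
  "id": "/mysql",
  "cpus": 1,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "mysql:5.7" },
    "volumes": [
      { "containerPath": "mysqldata", "mode": "RW", "persistent": { "size": 1000 } },
      { "containerPath": "/var/lib/mysql", "hostPath": "mysqldata", "mode": "RW" }
    ]
  },
  "residency": { "taskLostBehavior": "WAIT_FOREVER" }
}
The residency section pins the task to the agent that holds the volume, which is why there is no automatic failover for the data.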
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details. A sketch of the corresponding snippet follows.
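A minimal sketch of the app-definition fragment those UI fields correspond to (image and ports are assumptions); hostPort: 0 lets Marathon pick a random host port:
"container": {
  "type": "DOCKER",
  "docker": {
    "image": "wordpress",
    "network": "BRIDGE",
    "portMappings": [
      { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
    ]
  }
}
The assigned host port is then exposed to the task through the PORT0 environment variable.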