Secure distributed container system for Docker - security

I want to know whether a secure distributed container system exists for Docker.
I want to deploy encrypted containers from a private master server and have them executed on a public, untrusted worker server.
Do you know of a solution that can do that?
Thank you in advance for your answer.

Docker Swarm is Docker's built-in orchestration framework. Swarm allows you to run containers on different machines, and by default the communication between the machines collaborating in a swarm is encrypted and secure.
Swarm has manager nodes and worker nodes. Manager nodes distribute containers to be executed on worker nodes, so your use case can be handled by Swarm.
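As a rough, illustrative sketch (assuming the private server acts as the Swarm manager and the public server joins as a worker; the addresses, token, and image name below are placeholders):

# on the private master server: initialize the swarm (control-plane traffic is TLS-encrypted)
docker swarm init --advertise-addr <MASTER-IP>
# on the public worker server: join using the token printed by the init command
docker swarm join --token <WORKER-TOKEN> <MASTER-IP>:2377
# back on the manager: create an overlay network with encrypted data-plane traffic
docker network create --driver overlay --opt encrypted app-net
# deploy a service onto that network; its tasks get scheduled onto the worker
docker service create --name my-app --network app-net my-image:latest

Note that this encrypts traffic between the nodes; whether that is sufficient depends on how much you trust the worker host itself, since the image contents are still readable there.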

Related

Access to service list on docker swarm worker node

I am using the docker-api module to access the service list and the replica information. It works as expected in swarm mode on a manager node, but if the app runs on a worker node, it gives me the following error:
"unexpected - This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again". Following is my code:
const Docker = require('node-docker-api').Docker
const docker = new Docker({ socketPath: '/var/run/docker.sock' })  // talks to the local Docker daemon
const services = await docker.service.list()  // only manager nodes answer service queries
Is it possible to get the service list on both node types by giving any options or permissions to the worker node? Also, is it possible to get it when running the app with docker-compose or Kubernetes?
As the Docker docs state:
Worker nodes don’t participate in the Raft distributed state, make
scheduling decisions, or serve the swarm mode HTTP API.
Possible solutions:
You may consider running only manager nodes in your swarm topology.
Alternatively, you can use the placement constraint --constraint node.role==manager for your Node.js-based Docker service (sketched below).
This way your service containers will run only on manager nodes, which serve the swarm mode HTTP API.
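A minimal sketch of the second option (the service and image names are placeholders); the Docker socket is bind-mounted so the app inside the container can reach the manager's API:

docker service create \
  --name service-lister \
  --constraint node.role==manager \
  --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock \
  my-node-app:latest

With this constraint the scheduler never places the container on a worker, so docker.service.list() no longer hits the "not a swarm manager" error.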

Multi-host Kafka-ZooKeeper Hyperledger Fabric network

I am trying to set up a multi-org, multi-host network based on the Hyperledger Fabric blockchain. I developed a network structure and am trying to run the Docker containers in swarm mode. I have three Ubuntu instances on AWS.
Here is link of my public repository https://github.com/medipal/MultiOrgNetwork
When I run the Docker images, no services are replicated.
There is an error while deploying the network, which is why the containers are not starting.
How can I build a network like this, or what do I have to correct in my code?
Here you have a great example of an orderer-Kafka network. It is the first-network from fabric-samples, adapted to use Kafka. You need to adjust it to work in swarm mode and apply your changes.
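As a rough sketch of the swarm part only (the stack and compose file names below are placeholders, and the compose file still needs the Fabric-specific adjustments described above):

# on the first AWS instance: create the swarm, then join the other two instances to it
docker swarm init
# deploy the adjusted compose file as a stack across the swarm
docker stack deploy -c docker-compose-kafka.yaml fabric
# check whether the services actually get replicas; '0/1' here usually points to a config error
docker stack services fabric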

Configuration for application in docker container on cluster

How can I deploy an application in a Docker container on a cluster of machines and configure that application with settings like the database username and password and other application-specific settings, without putting the settings in the container as a config file and without placing the settings on the machine, because the machine is recyclable? Environment variables are not an option either, because they are visible in logs and not really suited for passwords and private keys, in my opinion.
The application is a Node.js application; when developing, I run it with a JSON config file. The production environment will consist of multiple machines in an AWS ECS environment. The machines all run Docker in a cluster, the application itself is a Docker image, and multiple instances of the application will run behind a load balancer dividing the load between the instances.
What you are looking for is Docker Swarm, which is responsible for running and managing containers on a cluster of machines. Docker Swarm has a very nice feature for securing configuration such as passwords, called Docker secrets.
You can create Docker secrets for usernames and passwords, and those secrets will be shared with the containers in the cluster in an encrypted and secure way.
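A minimal sketch, assuming a swarm is already initialized (the secret, service, and image names are made up); the secret is delivered to the container as a file under /run/secrets/ rather than as an environment variable:

# store the password in the swarm's encrypted Raft store
echo "s3cr3t-password" | docker secret create db_password -
# grant the service access to the secret
docker service create \
  --name my-node-app \
  --secret db_password \
  my-node-app:latest

Inside the Node.js application you would then read the value from /run/secrets/db_password at startup instead of from a JSON config file or an environment variable.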

How can I link Docker containers in a Mesos cluster (DC/OS) running on Azure?

I am setting up a multi-container application on a Mesos cluster on Azure using Azure Container Service and am currently stuck on linking the containers.
Brief info about my setup:
- Mesos cluster is deployed on Azure using Azure container service
- It's a 3-container application - A, B and C
- B is dependent on A, and C is dependent on A & B
- A is currently deployed
How can I link the above containers?
Thanks,
Suraj
If by linking you mean Docker's --link, then that is a deprecated practice; inter-container communication should be done using Docker networks and port mappings.
For DC/OS you have several different ways to achieve this (also called service discovery). I have written a blog post explaining these different tools by example: http://blog.itaysk.com/2017/04/28/dcos-service-discovery-and-load-balancing-by-example
If you don't want to read through that long post and are looking for a recommendation: try using VIPs.
When creating the application (either from Marathon or the DC/OS UI), look for the 'VIP' setting. Enter an IP there (it can be a made-up IP) and a port. Your service will be discoverable under this IP:Port.
More on VIPs: https://dcos.io/docs/1.9/networking/load-balancing-vips/virtual-ip-addresses/
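For illustration only, the VIP setting in the UI corresponds to a VIP_0 label on the port in the Marathon app definition; the IP, port, and image below are made up, and the exact JSON layout varies between Marathon versions:

"container": {
  "type": "DOCKER",
  "docker": {
    "image": "my-service-a",
    "network": "BRIDGE",
    "portMappings": [
      { "containerPort": 8080, "hostPort": 0, "labels": { "VIP_0": "10.0.0.100:80" } }
    ]
  }
}

Services B and C can then reach A at 10.0.0.100:80 regardless of which agent its container lands on.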

Link containers in Azure Container Service with Mesos & Marathon

I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using Mesos-DNS to get hold of the MySQL container host (for now I don't really care which container I get hold of). I set the WORDPRESS_DB_HOST environment variable to mysql.marathon.mesos and specified the host of the MySQL container as suggested here.
I created a new rule for the agent load balancer and a probe for port 3306 in Azure itself; this worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See the screenshot attached.)
Update: If I ssh into the master node, I can resolve mysql.marathon.mesos with dig; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or prod) you can either use Marathon's persistent volumes feature (simple, but no automatic failover/HA for the data) or, since you are on Azure, a more robust solution like the one I showed here (essentially mounting a file share).
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details.
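For illustration (the image and ports are just examples, and the exact JSON layout varies between Marathon versions), an explicit BRIDGE-mode mapping in the app definition looks roughly like this, with hostPort 0 meaning Marathon picks a random host port:

"container": {
  "type": "DOCKER",
  "docker": {
    "image": "wordpress",
    "network": "BRIDGE",
    "portMappings": [
      { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
    ]
  }
}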
