Configuration for an application in a Docker container on a cluster - Node.js

How can I deploy an application in a Docker container on a cluster of machines, and configure that application with settings like the database username and password and other application-specific settings, without putting the settings in the container as a config file and without placing them on the machine, because the machine is recyclable? Environment variables are not an option either, because they show up in logs and are not really suited for passwords and private keys, in my opinion.
The application is a Node.js application; when developing I run it with a JSON config file. The production environment will consist of multiple machines in an AWS ECS environment. The machines all run Docker in a cluster, the application itself is a Docker image, and multiple instances of the application will run behind a load balancer that divides the load between the instances.

What you are looking for is Docker Swarm, which is responsible for running and managing containers on a cluster of machines. Docker Swarm has a very nice feature for securing sensitive configuration such as passwords, called Docker secrets.
You can create Docker secrets for usernames and passwords, and those secrets will be shared among the containers in the cluster in an encrypted and secure way.
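As a minimal sketch of how this fits together (the image and secret names are just examples), you create the secrets on a Swarm manager node:

```bash
# Run on a Swarm manager; the secret value is read from stdin
printf 'my-db-password' | docker secret create db_password -
printf 'my-db-user'     | docker secret create db_user -
```

reference them in the stack file you deploy with docker stack deploy:

```yaml
# stack.yml (deployed with: docker stack deploy -c stack.yml myapp)
version: "3.7"
services:
  app:
    image: my-node-app:latest   # hypothetical image name
    secrets:
      - db_user
      - db_password
secrets:
  db_user:
    external: true
  db_password:
    external: true
```

and read them in the Node.js application from the in-memory filesystem Swarm mounts under /run/secrets:

```js
// Swarm mounts each secret as a file under /run/secrets/<secret-name>
const fs = require('fs');
const dbUser = fs.readFileSync('/run/secrets/db_user', 'utf8').trim();
const dbPassword = fs.readFileSync('/run/secrets/db_password', 'utf8').trim();
```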

Related

Authenticate private registry (ACR) with Minikube on developer machine

At my company, we're setting up an on-prem k8s cluster in combination with a private image repository hosted on Azure (Azure Container Registry).
For development purposes, I'd like our developers to be able to run the apps they create locally using minikube. ACR offers several ways to authenticate, including:
Individual login
Access Token
Service Principal
When developing locally and using the Docker CLI, individual authentication can be set up by running az acr login -n my-repository.azurecr.io. We manage all SSO authentication through Azure Active Directory, and Docker Desktop comes with the docker-credential-wincred.exe helper to delegate handling of authentication to the Windows Credential Store. This is specified in ~\.docker\config.json. Pretty neat and seamless, love it.
However, when authenticating k8s to work with an ACR, most documentation steers you towards setting up a Service Principal and storing its credentials in a k8s secret. For a production environment this makes perfect sense, but for a developer machine it feels a bit weird to authenticate using headless credentials.
An alternative is to pull images manually and set imagePullPolicy to IfNotPresent in the k8s manifest. The tricky part here is that Minikube runs in a separate VM, with its own Docker daemon. By default, the Docker CLI is not configured to connect to this daemon. Minikube exposes some control over the daemon through the minikube CLI (e.g. minikube image pull), but here we run into authorization issues. Alternatively, we can configure the Docker CLI - which is already configured to use the Windows credential store - to connect to the minikube daemon. This is pretty straightforward using minikube docker-env.
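For reference, the workaround described above looks roughly like this on a Windows developer machine (the image name is illustrative):

```powershell
# Point the local Docker CLI at minikube's Docker daemon
& minikube -p minikube docker-env --shell powershell | Invoke-Expression

# Log in to ACR via the normal SSO flow; the token is stored through the
# credential helper configured in ~\.docker\config.json
az acr login -n my-repository.azurecr.io

# Pull the image into minikube's daemon; with imagePullPolicy: IfNotPresent
# the cluster will use this locally cached copy
docker pull my-repository.azurecr.io/my-app:dev
```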
So, this works... but my question is: isn't there a more sensible way to do this? Can't minikube image somehow be configured to work with the Windows credential store?

Run Docker in production with environment variables that are secret and cannot be seen on the server

I need to pass environment variables to my application running in a container, but I understand that it is bad practice to keep the ".env" file on the server, since the "root" user could read it. What would be the best option for using these variables in my application while leaving no trace on the server, and without using Kubernetes?
There are several solutions, depending on your actual production stack:
(1) Running on a k8s cluster
Kubernetes supports uploading binary data as a Secret. You can mount the Secret into your production pod to decouple your Docker image from the secret.
https://kubernetes.io/docs/concepts/configuration/secret/
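A minimal sketch of this approach (all names are illustrative): create the Secret from a file with kubectl create secret generic my-app-secrets --from-file=db-password=./db-password.txt, then mount it into the pod so the application reads it as a file rather than an environment variable:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:latest             # hypothetical image
      volumeMounts:
        - name: app-secrets
          mountPath: /etc/app-secrets  # app reads /etc/app-secrets/db-password
          readOnly: true
  volumes:
    - name: app-secrets
      secret:
        secretName: my-app-secrets
```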
(2) Docker on a standalone server
This is analogous to solution (1), but without native support from k8s: mount the secret into the container as a volume.
https://docs.docker.com/storage/volumes/
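As a rough sketch of this option (paths and names are illustrative): keep the secret file outside the image and bind-mount it read-only into the container at runtime, so nothing sensitive is baked into the image. Note that the file still exists on the host, so restrict its permissions:

```bash
# Secret file lives on the host with tight permissions
chmod 600 /opt/secrets/app.env

# Mount it read-only into the container instead of baking it into the image
docker run -d \
  --name my-app \
  -v /opt/secrets/app.env:/run/app-secrets/app.env:ro \
  my-app:latest
```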
(3) External Key management service
If you are hosting your application in the cloud, there are many more options to consider. Taking Azure as an example, if you are hosting your application on a virtual machine you could use a service like Azure Key Vault:
https://learn.microsoft.com/en-us/azure/key-vault/general/basic-concepts
The concept is that all your keys are stored in, and obtained from, the service by connecting your server to it. Your application can fetch secrets from Key Vault on the fly, which avoids leaving a secret footprint on your service instance. The connection between the key management service and your virtual machine can be configured in a passwordless way (IAM roles in AWS / managed identities in Azure) so that no secret has to live on your server.
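A minimal Node.js sketch of that pattern, assuming the @azure/identity and @azure/keyvault-secrets packages; the vault URL and secret name are purely illustrative. DefaultAzureCredential picks up the VM's managed identity, so no credential is stored on the server:

```js
const { DefaultAzureCredential } = require("@azure/identity");
const { SecretClient } = require("@azure/keyvault-secrets");

async function loadDbPassword() {
  // Uses the VM's managed identity when running on Azure; nothing secret on disk
  const credential = new DefaultAzureCredential();
  const client = new SecretClient("https://my-vault.vault.azure.net", credential); // hypothetical vault
  const secret = await client.getSecret("db-password");                            // hypothetical secret name
  return secret.value;
}

loadDbPassword().then((password) => {
  // wire the secret into your application configuration here
});
```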

Docker compose, heroku, hostname links and production deployment

I currently have a simple app consisting of a few micro services (database, front-end node app, user services, etc.) each with its own Dockerfile, and a docker-compose.yml file to get them all up on a local deployment environment. So everything works fine doing docker-compose up.
For production, I was looking at Heroku (though I'm open to other PaaS options), which does not support Docker Compose. Not especially nice, but I could live with it for now.
The thing is that with Docker Compose on local deployment, the different services are linked via their hostnames automatically (if the mongo database service is called "mydatabase", I can use mongodb://mydatabase/whatever from within my other services).
So, the question is, what happens with those links on Heroku? What are the best practices to have the different services linked consistently between development and production in this case?
Thanks!
Docker Compose creates a Docker virtual network which allows you to connect the containers using the container name as a hostname. Heroku doesn't directly support docker-compose, as Docker Compose is really intended for local development on your own machine and not for production.
For production, Docker has Docker Swarm, which is very similar to Docker Compose but is intended for production environments. You can use the same compose file (called a stack file in Swarm) to deploy on Swarm.
In Docker Swarm you can connect the containers using the same service names, just like you would in docker-compose.
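For illustration, a hypothetical stack file (deployed with docker stack deploy -c stack.yml myapp) in which the web service reaches MongoDB as mongodb://mydatabase/whatever, exactly as with Compose:

```yaml
version: "3.7"
services:
  mydatabase:
    image: mongo:4                 # the service name doubles as the hostname
  web:
    image: my-node-app:latest      # hypothetical image
    environment:
      - MONGO_URL=mongodb://mydatabase/whatever
    ports:
      - "80:3000"                  # hypothetical published port
```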
Heroku supports Docker Swarm via the DockerHero add-on, which you can use to have your Docker containers connected and running on Heroku.
In case anyone else comes across this in their current searches for solutions, Heroku offers an approach using a file similar to docker-compose.yml, called heroku.yml. You simply put it in the root of your project and structure it to call your Dockerfiles: https://devcenter.heroku.com/articles/build-docker-images-heroku-yml
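For illustration, a minimal heroku.yml might look like this; the Dockerfile paths are hypothetical, and you add one entry per process type you want Heroku to build:

```yaml
build:
  docker:
    web: frontend/Dockerfile            # hypothetical path to the front-end app
    worker: services/users/Dockerfile   # hypothetical additional process type
```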

Secure distributed container system for docker

I want to know whether a secure distributed container system for Docker exists.
I want to deploy encrypted containers from a private master server and execute them on a public, untrusted worker server.
Do you know of a solution that can do that?
Thank you in advance for your answer.
Docker Swarm is Docker's built-in orchestration framework. Swarm allows you to run containers on different machines. By default, communication between the machines collaborating in a swarm is encrypted and secure.
Swarm has manager (master) nodes and worker nodes. Manager nodes distribute containers to be executed on worker nodes, so your use case can be handled by Swarm.
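As a sketch of the setup (addresses and tokens are placeholders); node-to-node control traffic is TLS-encrypted out of the box:

```bash
# On the private master/manager node
docker swarm init --advertise-addr <MANAGER-IP>

# On the public worker node, using the worker join token printed by the command above
docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377

# Back on the manager: deploy a service; Swarm schedules its tasks onto the workers
docker service create --name my-service --replicas 2 my-image:latest
```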

Link containers in Azure Container Service with Mesos & Marathon

I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service, with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using Mesos DNS to get ahold of the MySQL container host (for now I don't really care which container I get ahold of). I set the WORDPRESS_DB_HOST environment var to mysql.marathon.mesos and specified the host of the MySQL container as suggested here.
I created a new rule for the Agent Load Balancer and a probe for port 3306 in Azure itself; this worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See the attached screenshot.)
Update: If I SSH into the master node, I can resolve mysql.marathon.mesos with dig; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or prod) you can either use Marathon's persistent volumes feature (simple but no automatic failover/HA for the data) or, since you are on Azure, a robust solution like I showed here (essentially mounting a file share).
Q2: Ports
The port mapping you see in the Marathon UI screenshot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode; see the docs for details.
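For illustration, an old-style (pre-1.5) Marathon app definition for the MySQL container with an explicit BRIDGE-mode port mapping might look like the sketch below; the image, resources, and password are placeholders, and newer Marathon versions move portMappings to container.portMappings with a separate networks section:

```json
{
  "id": "/mysql",
  "cpus": 1,
  "mem": 1024,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "mysql:5.7",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 3306, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "env": { "MYSQL_ROOT_PASSWORD": "example-password" }
}
```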
