Setting up NGINX TLS with a certificate/key loaded by an initContainer - Azure

I've found several pages describing how to set up NGINX to use HTTPS/TLS.
However, they all suggest creating a TLS secret with the key & cert.
We want to use TLS, but have NGINX load the key/cert via an init container, which in this case is implemented by acs-keyvault-agent.
Any ideas?

If your only goal is to obtain the TLS key/cert from Azure Key Vault, then you're probably better off going with the Key Vault FlexVolume project from Azure. This has the advantage of not using init containers at all and just dealing with volumes and volume mounts.
Since you explicitly want to use Hexadite/acs-keyvault-agent in its default mode (which uses volume mounts, by the way), there is a full example of how to do this in the project's examples folder here: examples/acs-keyvault-deployment.yaml#L40-L47.
Obviously you need to build, push, and configure the container correctly for your environment. Then you will need to configure NGINX to use the CertFileName.pem and KeyFilename.pem from the /secrets/certs_keys/ folder.
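For illustration, the relevant part of the NGINX configuration would look roughly like this (the server name and upstream are placeholders; the file names and path match the mount described above):

server {
    listen 443 ssl;
    server_name example.com;                           # placeholder

    # certificate and key written by the acs-keyvault-agent init container
    ssl_certificate     /secrets/certs_keys/CertFileName.pem;
    ssl_certificate_key /secrets/certs_keys/KeyFilename.pem;

    location / {
        proxy_pass http://backend:8080;                # placeholder upstream
    }
}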
Hope this helps.

Related

Docker Compose - How to handle credentials securely?

I have been trying to understand how to handle credentials (e.g. database passwords) with Docker Compose (on Linux/Ubuntu) in a secure but not overly complicated way. I have not yet been able to find a definitive answer.
I saw multiple approaches:
Using environment variables to pass credentials. However, this would mean that passwords are stored as plain text both on the system and in the container itself. Storing passwords as plain text isn't something I would be comfortable with. I think most people use this approach - how secure is it?
Using Docker secrets. This requires Docker Swarm, though, which would just add unnecessary overhead since I only have one Docker host.
Using a Password Vault to inject credentials into containers. This approach seems to be quite complicated.
Is there no other secure, standardized way to manage credentials for Docker containers which are created with Docker Compose? Docker secrets without the need of Docker Swarm would be perfect if it existed.
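(For what it's worth, newer Compose releases do honor file-backed secrets without Swarm; a minimal sketch, where the image, secret name, and file path are only illustrative:

services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password   # the official image reads the password from this file
    secrets:
      - db_password

secrets:
  db_password:
    file: ./secrets/db_password.txt

The secret ends up mounted at /run/secrets/db_password inside the container, so the password stays out of the environment and out of docker inspect output, although the file itself still lives in plain text on the host.)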
Thank you in advance for any responses.

Best practice for Docker inter-container communication

I have two Docker containers, A and B. On container A, a Django application is running. On container B, a WebDAV source is mounted.
Now I want to check from container A if a folder exists in container B (in the WebDAV mount destination).
What is the best solution to do something like that? Currently I solved it by mounting the Docker socket into container A to execute commands from A inside B. I am aware that mounting the Docker socket into a container is a security risk for the host and the whole application stack.
Other possible solutions would be to use SSH, or to share and mount the directory which should be checked. Of course there are further possible solutions, like doing it with HTTP requests.
Because there are so many ways to solve a problem like that, I want to know if there is a best practice (considering security, effort to implement, and performance) for executing commands from container A in container B.
Thanks in advance
WebDAV provides a file-system-like interface on top of HTTP. I'd just directly use this. This requires almost no setup other than providing the other container's name in configuration (and if you're using plain docker run putting both containers on the same network), and it's the same setup in basically all container environments (including Docker Swarm, Kubernetes, Nomad, AWS ECS, ...) and a non-Docker development environment.
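For example, a single PROPFIND request against the other container is enough to test whether a folder exists (container name, path, and credentials below are placeholders):

curl -s -o /dev/null -w '%{http_code}\n' -X PROPFIND -H 'Depth: 0' \
     -u user:password http://container-b/webdav/some/folder/
# 207 (Multi-Status) means the collection exists; 404 means it does not

The same check is only a few lines with any HTTP client library inside the Django application.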
Of the other options you suggest:
Sharing a filesystem is possible. It leads to potential permission problems which can be tricky to iron out. There are potential security issues if the client container isn't supposed to be able to write the files. It may not work well in clustered environments like Kubernetes.
ssh is very hard to set up securely in a Docker environment. You don't want to hard-code a plain-text password that can be easily recovered from docker history; a best-practice setup would require generating host and user keys outside of Docker and bind-mounting them into both containers (I've never seen a setup like this in an SO question). This also brings the complexity of running multiple processes inside a container.
Mounting the Docker socket is complicated, non-portable across environments, and a massive security risk (you can very easily use the Docker socket to root the entire host). You'd need to rewrite that code for each different container environment you might run in. This should be a last resort; I'd consider it only if creating and destroying containers would need to be a key part of this one container's operation.
Is there a best practice to execute commands from container A in container B?
"Don't." Rearchitect your application to have some other way to communicate between the two containers, often over HTTP or using a message queue like RabbitMQ.
One solution would be to mount a filesystem read-only on one container and read-write on the other container.
See this answer: Docker, mount volumes as readonly
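A minimal sketch of that layout with a named volume (volume and image names are illustrative):

docker volume create shared-data
docker run -d --name writer -v shared-data:/data    writer-image
docker run -d --name reader -v shared-data:/data:ro reader-image

The reader container can check for the folder with a plain filesystem call but cannot modify anything.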

How do I get Rocket.Chat to trust my company's root CA certificate?

In order to create an outgoing WebHook, I need to trust my company's root CA certificate, which is inserted into all traffic going through our network.
I have tried everything as I described in the following issue that I logged:
https://github.com/RocketChat/Rocket.Chat/issues/11546
This mainly involved trying both the NODE_EXTRA_CA_CERTS and CAFILE (the latter being a suggestion from Meteor) environment variables, using a service config override. I can confirm the environment variables are set in the running processes for the Rocket.Chat service, but they have no effect.
(Not that I think it makes a difference, but I am running Rocket.Chat from the snap. I followed advice given here to add the environment variables to the service processes:
https://forum.snapcraft.io/t/declaratively-defining-environment-variables/175/26)
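For reference, the override is a systemd drop-in along these lines (the exact unit name and certificate path are illustrative, and a systemctl daemon-reload plus a service restart are needed afterwards):

# /etc/systemd/system/snap.rocketchat-server.rocketchat-server.service.d/override.conf
[Service]
Environment=NODE_EXTRA_CA_CERTS=/etc/ssl/certs/company-root-ca.pem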

Unable to use docker due to ZScaler and certificate issues

I have VMware Photon OS running in VMware Player. This will be used as the host OS to run Docker containers.
However, since I'm behind a ZScaler, I'm having issues running commands that access external resources. E.g. docker pull python gives me the following output (I added some line breaks to make it more readable):
error pulling image configuration:
Get https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/a0/a0d32d529a0a6728f808050fd2baf9c12e24c852e5b0967ad245c006c3eea2ed/data
?Expires=1493287220
&Signature=gQ60zfNavWYavBzKK12qbqwfOH2ReXMVbWlS39oKNg0xQi-DZM68zPi22xfDl-8W56tQmz5WL5j8L39tjWkLJRNmKHwvwjsxaSNOkPMYQmhppIRD0OuVwfwHr-
1jvnk6mDZM7fCrChLCrF8Ds-2j-dq1XqhiNe5Sn8DYjFTpVWM_
&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q:
x509: certificate signed by unknown authority
I have tried to extract the CA root certificates (in PEM format) for ZScaler from my Windows workstation, and have appended them to /etc/pki/tls/certs/ca-bundle.crt. But even after restarting Docker, this didn't solve the issue.
I've read through numerous posts, most referencing the command update-ca-trust which does not exist on my system (even though the ca-certificates package is installed).
I have no idea how to go forward. AFAIK, there are two options. Either:
Add the ZScaler certificates so SSL connections are trusted.
Allow insecure connections to the Docker hub (but even then it will probably still complain because the certificate isn't trusted).
The latter works by the way, e.g. executing curl with the -k option allows me to access any https resource.
The problem is that ZScaler is acting as a man-in-the-middle, doing SSL inspection in your organization (see https://support.zscaler.com/hc/en-us/articles/205059995-How-does-Zscaler-protect-SSL-traffic-).
Since you've tried putting the certificate in Docker, I guess you're already familiar with the steps described in https://stackoverflow.com/a/36454369/1443505. That answer is almost correct for the ZScaler scenario, but note that because ZScaler intercepts the CA tree, you need to add all of the certificates in the chain.
The certificate chain behind ZScaler contains more than one certificate; export them all, one by one, and follow the instructions in https://stackoverflow.com/a/36454369/1443505 for each of them.
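A rough way to do that from the Photon host (file names below are illustrative; the host is the one from the error output above):

# dump the full chain as presented through the proxy
openssl s_client -showcerts -connect dseasb33srnrn.cloudfront.net:443 </dev/null
# save each ZScaler-issued BEGIN/END CERTIFICATE block to its own .pem file, then:
cat zscaler-root.pem zscaler-intermediate.pem >> /etc/pki/tls/certs/ca-bundle.crt
systemctl restart docker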

Node JS - Tips on SSH to my MongoDB

I'm currently using Compose.io to host my MongoDB - however, it costs $31/month, my DB isn't that big, and I don't really use any specific features.
I've decided to create a droplet on DigitalOcean and then use their one click install for MongoDB.
With Compose.io, I simply use a connection URL like mongodb://USERNAME:PASSWORD@aws-xxxx.com:xxx/myDB along with an SSL certificate.
However, with DigitalOcean, it looks like SSH'ing into the droplet and then connecting is the best approach (rather than creating an open-access bind_url).
So I want to ask:
Is this SSH process particularly intensive/time-consuming - i.e. would it simply SSH once and then remain connected until the Node app (website) is closed?
I'm thinking of using npm install tunnel-ssh. Is this recommended?
Any tips/advice/security notes would be appreciated.
Thanks.
Compose definitely offers a lot of security features that would take quite a bit of configuration to replicate. If this is a production database I would consider $31/month a good value. But speaking directly to your questions:
OpenSSH can be configured to keep the tunnel alive. The settings can be configured in both the client and the server configuration files.
Keep SSH session alive
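On the client side that is a couple of lines in ~/.ssh/config (host name, address, and user are placeholders):

Host mongo-droplet
    HostName 203.0.113.10
    User deploy
    ServerAliveInterval 60
    ServerAliveCountMax 3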
OpenSSH is very efficient and doesn't impose much overhead; resource-wise it's not a concern. An SSH2 implementation in pure JavaScript is not going to perform as well as the OpenSSH binary, so I wouldn't use tunnel-ssh without a convincing reason.
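A plain OpenSSH local forward keeps the tunnel out of the Node process entirely; the app then just connects to localhost (user and host below are placeholders):

ssh -fN -L 27017:127.0.0.1:27017 deploy@mongo-droplet
# in the app: mongodb://localhost:27017/myDB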
If you store your key with your application, then when somebody roots your application server they will also have your key. So make sure the user that you tunnel with has reduced privileges on the server - just what they need to access MongoDB and no more.
You might also consider just running your application and MongoDB on the same droplet and not exposing MongoDB to the network at all. I wouldn't recommend this for production, but it's fine for low-key scenarios. Keep in mind that if someone roots your server or application, they will also have full access to the DB. Make sure you have a backup strategy.
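If you do co-locate them, keeping mongod bound to loopback is a two-line change in /etc/mongod.conf (the YAML config format used by recent MongoDB versions):

net:
  bindIp: 127.0.0.1
  port: 27017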
