Coverity: How to use cov-import-scm to extract scm data - perforce

I need to assign owners to the Coverity defects. On the Coverity platform, SCM users are mapped to Coverity users. On the client side, I need to run cov-import-scm to gather the SCM data, but the command doesn't seem to be getting what it wants. The help for cov-import-scm is not very intuitive, and neither is the usage guide. From what I have gathered, the command looks like
./set-p4env.bat
./cov-import-scm --scm perforce --dir="" --command-arg="%P4CLIENT%/..."
What does the command need? Has anyone had success executing this, or found any other way to gather the SCM user information?
Thanks

cov-import-scm would be run after your cov-build and before your cov-analyze command.
Example script:
cov-build --dir $coverity_intermediate_dir_path make
cov-import-scm --dir $coverity_intermediate_dir_path --scm perforce
cov-analyze --dir $coverity_intermediate_dir_path $analyze_options
cov-commit-defects --dir $coverity_intermediate_dir_path --user $coverity_user --password $coverity_password --host $coverity_host --port $coverity_port --stream "$coverity_stream" --description "$BUILD_TAG"
You would need to supply values for all the variables there, but that's pretty much what I use. Depending on how you're running your commands, you might need to supply some command args to the cov-import-scm command.
For mine, using svn, I use:
cov-import-scm --dir $coverity_intermediate_dir_path --scm svn --command-arg "--username $svn_user --password $svn_pw --non-interactive --trust-server-cert"
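For Perforce, something comparable might look like the line below; the --command-arg value is a guess at the p4 global options (-p/-u/-c) your workspace needs, not a verified recipe:
cov-import-scm --dir $coverity_intermediate_dir_path --scm perforce --command-arg "-p $P4PORT -u $P4USER -c $P4CLIENT"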

Related

Create Build in Jenkins by using command line with ssh

I hope you're doing well,
I'm trying to automate a Jenkins process using a bash script on Linux: I need to create a build, then use that newly created build to trigger another job via the "Build with parameters" option, selecting that specific build.
First I'm creating the build with a command like this (it is working fine; it creates the build successfully):
ssh -l MyUser -p JENK_PORT JENK_SERVER build job-build -s -v
It creates build number 10. Then I need to use this build to create another one for the job job-deploy, something like:
ssh -l MyUser -p JENK_PORT JENK_SERVER build job-deploy -p COPY_PROMOTION_LEVEL=1 -p BUILD_SELECTOR="\<SpecificBuildSelector plugin=\"copyartifact@1.37\"\> \<buildNumber\>10 \</buildNumber\>\</SpecificBuildSelector\>" -s -v
When I run it, I get this error:
ERROR: Too many arguments: plugin=copyartifact@1.37>
If I change the "space" for &nbsp; or &#032;, or add a backslash between SpecificBuildSelector and plugin=copyartifact@1.37, I get this error:
ERROR: Unexpected exception occurred while performing build command.
com.thoughtworks.xstream.io.StreamException: : only whitespace content allowed before start tag and not \ (position: START_DOCUMENT seen ... @1:1)
Do you know how I can do this, i.e. create the build for a specific build from the command line, passing the build parameters with the -p option?
Thanks in advance.
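One untested sketch, based on the error above: the XML appears to be split on whitespace before Jenkins parses it, so the whole -p value needs an extra quoting layer to survive as a single token:
ssh -l MyUser -p JENK_PORT JENK_SERVER build job-deploy -p COPY_PROMOTION_LEVEL=1 -p "BUILD_SELECTOR='<SpecificBuildSelector plugin=\"copyartifact@1.37\"><buildNumber>10</buildNumber></SpecificBuildSelector>'" -s -v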

Docker - unable to run script

What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured ECR/ECS successfully, to the best of my knowledge, but I am having issues running the required commands, for reasons that are beyond my current understanding of Docker (newbie here).
What I need to do is pass the commands below into my application and start it to perform some heavy computing tasks; all the commands listed below must be present.
The Issue(s)
The issue arises when I submit the job to AWS Batch; the service pulls the image from ECR (Amazon Elastic Container Registry) and spins up a compute environment. The problem comes when I try to run the command I pass in; I will go through it below.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH, but I was unsuccessful in getting a response; it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not, however, able to run any of the commands I have included in the command [] section. I have tried running just python and the like, in hopes of getting a more detailed error, but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
set the permissions for logging
run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated to the python3 application
Questions
Why can I not create a directory here called "logging"? Where am I?
Am I running these properly, as defined by AWS Batch or by Docker?
What am I missing or where am I going wrong?
AWS Batch high level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a Docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
Take a look at some of these example job definitions.
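For instance, a sketch of registering such a job definition with the AWS CLI; the image name, resources, and values below are hypothetical placeholders:
aws batch register-job-definition \
  --job-definition-name my-application \
  --type container \
  --container-properties '{
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-application:latest",
    "vcpus": 1,
    "memory": 2048,
    "command": ["python", "my_app.py", "-i", "IN_PATH", "-o", "OUT_PATH"],
    "environment": [
      {"name": "APIKEY", "value": "..."},
      {"name": "BASEURI", "value": "..."}
    ]
  }'
Note that the command is tokenized one word per element, and the environment variables are configured on the job definition rather than via a nested docker run.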

'Git: gpg failed to sign the data' in visual studio code

After a fresh Linux install I'm trying to set up my environment, and I keep getting the Git: gpg failed to sign the data error upon committing changes locally. I'm using Visual Studio Code (the proprietary build, not the open-source version).
.gitconfig:
[user]
name = djweaver-dev
email = djweaver@djweaver.dev
signingkey = 37A0xxxx...
[core]
excludesfile = /home/dweaver/.gitignore_global
[commit]
gpgSign = true
Yikes. Furthermore, I can't find a way to copy the output log, nor can I find where that log is, so here is a pic:
Steps I have taken so far:
generated a new key (RSA 4096) in GnuPG
added signing key to global .gitconfig
set "git.enableCommitSigning": true in Visual Studio Code settings
cloned my repo from github
Typically, when I committed in the past I would get a dialog box requesting GPG authentication upon commit. I do not get this now, just the error dialog.
UPDATE: Okay, now I'm really confused. I restarted VS Code (not the first time I've done this in this process) and voilà, it works. The only thing I can think of is that maybe I biffed the directory somehow? Either way, it works now.
UPDATE: Oddly, I'm back to the same issue almost a month later, after a fresh Arch install. I've tried everything that I've been able to find on this site, and nothing works.
I've tried adding export GPG_TTY=$(tty) to .bash_profile, and also .bashrc
Git log:
Looking for git in: git
Using git 2.26.2 from git
> git rev-parse --show-toplevel
> git rev-parse --git-dir
Open repository: /home/dw/dev/website
> git status -z -u
> git symbolic-ref --short HEAD
> git rev-parse master
> git rev-parse --symbolic-full-name master@{u}
> git rev-list --left-right master...refs/remotes/origin/master
> git for-each-ref --format %(refname) %(objectname) --sort -committerdate
> git remote --verbose
Failed to watch ref '/home/dw/dev/website/.git/refs/remotes/origin/master', is most likely packed.
Error: ENOENT: no such file or directory, watch '/home/dw/dev/website/.git/refs/remotes/origin/master'
at FSWatcher.start (internal/fs/watchers.js:165:26)
at Object.watch (fs.js:1270:11)
at Object.t.watch (/usr/lib/code/extensions/git/dist/main.js:1:604919)
at T.updateTransientWatchers (/usr/lib/code/extensions/git/dist/main.js:1:83965)
at e.fire (/usr/lib/code/out/vs/workbench/services/extensions/node/extensionHostProcess.js:46:87)
at e.updateModelState (/usr/lib/code/extensions/git/dist/main.js:1:103179)
> git config --get commit.template
> git check-ignore -v -z --stdin
> git check-ignore -v -z --stdin
> git commit --quiet --allow-empty-message --file - -S
error: gpg failed to sign the data
fatal: failed to write commit object
> git config --get-all user.name
> git config --get-all user.email
Same config as last time: user.name and user.email both match each key I've been trying, and user.signingkey matches. Not sure where else to go with this one, as I've tried it across newly initialized local repos as well as repos pulled from GitHub, with both the official MS VS Code (AUR) and the OSS version, in the VS Code terminal emulator as well as GNOME Terminal, with the same results. So it has to be either a git thing or a GnuPG thing.
What I have noticed is that after committing without signing, signing works immediately afterward: I get prompted for my key passphrase the first time, then it works on subsequent commits until, a seemingly random number of minutes later, it just doesn't work anymore and the process has to be repeated.
There were a few macOS users posting about a stalled gpg-agent running in the background, and dealing with it fixed things for them; however, I am seeing:
[dw@dwLinux website]$ gpg-agent
gpg-agent[2870]: gpg-agent running and available
What's also interesting is that with echo "test" | gpg --clearsign I get the same results: it works for a short period of time, then I can't sign anymore.
UPDATE
Okay, so it's day two of trying to fix this. To rule out the gpg-agent theory described here, I followed the instructions on how to reload gpg-agent using the gpg-connect-agent reloadagent /bye command demonstrated on the Arch Linux Wiki.
This had no effect.
Since I can reproduce this problem across official VS Code, OSS Code, and VSCodium, as well as bash, I thought maybe this was a permissions-related issue, as so many problems on Linux typically are. I added my user to all kinds of groups, including root, and this also had no effect, so I think I can safely rule out the following:
VS Code
GnuPG
gpg-agent
Linux permissions
So my next focus was the config files themselves, but as stated before, the credentials match the key in .gitconfig, and my .bash_profile has been correctly configured with export GPG_TTY=$(tty).
An interesting note on this from the official GnuPG docs shows a syntax discrepancy between their way and the way you are instructed to append this to .bash_profile in the GitHub docs here.
From GnuPG: "The far most common reason for this is that the environment variable GPG_TTY has not been set correctly. Make sure that it has been set to a real tty device and not just to '/dev/tty'; i.e. 'GPG_TTY=/dev/tty' is plainly wrong; what you want is 'GPG_TTY=`tty`' — note the back ticks. Also make sure that this environment variable gets exported, that is you should follow up the setting with an 'export GPG_TTY'."
As I understand it, $(whatever) in bash executes a command just like backticks do, but for good measure I've appended .bash_profile using both forms, and neither solved the issue.
One last thing
In this post the user talks about gpg-agent authentication not being available when daemonized and GPG access is initiated by another application (such as an IDE like VS Code), which explains how I could temporarily sign commits after committing a random file or doing echo "test" | gpg --clearsign and being authenticated... but alas, like most other 'solutions' to this topic, they reveal that all they had to do in the end was add export GPG_TTY=$(tty) to their .bash_profile, which I have already tried.
Where to go from here?
I still can't explain why it worked on my previous install, and frankly, not a whole lot has changed AFAIK. I do fresh installs often and keep a pretty minimal Arch Linux build with the LTS kernel each time; base-devel plus nodejs/python/git/vscode/firefox/discord is pretty much my entire workflow. I'm all out of ideas.
First, make sure to add
export GPG_TTY=$(tty)
in your .bashrc
Apparently VS Code doesn't ask for the passphrase, and that's why it gives an error.
I don't know the reason.
My personal solution is to do a console commit first, or run the following line:
echo "test" | gpg --clearsign
Edit
In order to avoid typing the passphrase on every commit, you can make GPG remember it for 8 hours (28800 seconds) or until the next reboot:
mkdir -p ~/.gnupg
echo "default-cache-ttl 28800" >> ~/.gnupg/gpg-agent.conf
GitHub Guide
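If gpg-agent was already running, the new cache TTL may not take effect until the agent is reloaded, e.g. with the reload command mentioned earlier:
gpg-connect-agent reloadagent /bye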
Maybe git cannot find gpg? That was my problem working with VS Code and using Remote-Containers to create development containers. Try running this in the terminal within VS Code (in the container):
git config --global --unset gpg.program
git config --global --add gpg.program /usr/bin/gpg
or wherever your gpg is located. You can find out by typing
which gpg
If that works, then you can put it in the Dockerfile for your development container.
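For example, a one-line sketch for such a development-container Dockerfile, assuming gpg is at /usr/bin/gpg inside the image:
RUN git config --global gpg.program /usr/bin/gpg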
I had the same issue a few days ago while using VS Code with WSL. The problem is that VS Code doesn't load the .profile file (and the environment variables in it) correctly (try running echo $GPG_TTY; it won't print the expected value). Fortunately, setting the "-l" option for the shell args in the VS Code preferences worked for me. This ensures that the .profile (or .zprofile) file is loaded.
I added these lines to settings.json:
"terminal.integrated.shellArgs.linux": [
"-l"
]
Make sure to add export GPG_TTY=$(tty) in your .profile file and restart your terminal and VS Code.
Update: Since VS Code is deprecating the shellArgs option, use the following snippet as an alternative.
"terminal.integrated.profiles.linux": {
"bash": {
"path": "bash",
"args": ["-l"],
"icon": "terminal-bash"
},
"zsh": {
"path": "zsh",
"args": ["-l"],
},
"fish": {
"path": "fish",
"args": ["-l"],
},
"tmux": {
"path": "tmux",
"args": ["-l"],
"icon": "terminal-tmux"
},
"pwsh": {
"path": "pwsh",
"args": ["-l"],
"icon": "terminal-powershell"
}
},
"terminal.integrated.defaultProfile.linux": "bash"
The -l option is added to all the terminal profiles above; delete unused profiles and set your default profile as you wish.
I had the same issue, and here is how I resolved it.
Background
macOS
GPG Suite to generate GPG key
pinentry-mac
How I solved the problem
I saw this answer and followed it.
Get keys
gpg2 --list-keys
Result
/Users/xxuser/.gnupg/pubring.kbx
---------------------------------
pub dsa2048 2010-08-19 [SC] [expires: 2024-05-11]
85E38F69046BSDFB07B76D78F0500D026C4
uid [ unknown] GPGTools Team <team@gpgtools.org>
uid [ unknown] [jpeg image of size 6329]
sub rsa4096 2014-04-08 [S] [expires: 2024-05-11]
sub rsa4096 2020-05-11 [E] [expires: 2024-05-11]
pub rsa4096 2020-05-04 [SC] [expires: 2024-05-03]
B97E9964ACAD1928300D37CC8A9E3745558E41AF
uid [ unknown] GPGTools Support <support@gpgtools.org>
sub rsa4096 2020-05-04 [E] [expires: 2024-05-03]
pub rsa4096 2021-07-29 [SC] [expires: 2025-07-29]
926E268C01892E8A2FCCD2A101CEB6267272A9A5
uid [ultimate] xxuser <x@xxgolo.com>
sub rsa4096 2021-07-29 [E] [expires: 2025-07-29]
Since x@xxgolo.com is the email I created the key for, 926E268C01892E8A2FCCD2A101CEB6267272A9A5 is the key ID I need.
Let git use this key:
git config --global user.signingkey 926E268C01892E8A2FCCD2A101CEB6267272A9A5
Now it should work.
git commit -S -m "This is a signed commit"
Note: if you need it to work with GitHub, you need to add your public GPG key to GitHub, following this guide.
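As a sketch, the public key to paste into GitHub can be exported with the key ID from the listing above:
gpg --armor --export 926E268C01892E8A2FCCD2A101CEB6267272A9A5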
Make sure echo "test" | gpg --clearsign runs successfully first before trying the steps below.
It is very helpful to check what git commit is doing under the hood. Run the commit with GIT_TRACE=1, as follows:
GIT_TRACE=1 git commit -S -m "MESSAGE"
This will show which user name, email, and key git uses when committing.
In my case, I found that git was picking up the wrong user and key details for signing the commit. I mainly intended to use the repo's local config rather than the global one, and adding the following to the local git config (located at "REPO_PATH/.git/config") got commit signing to work both in the terminal and in VS Code:
[user]
name = USER NAME
email = USER EMAIL
signingKey = SIGNING KEY
It can also be set with the following:
git config --local user.name "USER NAME"
git config --local user.email "USER EMAIL"
git config --local user.signingkey "SIGNING KEY"
I'm not sure if this is too late, but I did find an immediate solution.
To see what user.name and user.email you have, run:
git config -l
You may notice two entries for user.name. You may have made the same mistake as me: I put my actual name there instead of my GitHub username, and there ended up being two entries for user.name. I changed the global user.name back to my GitHub username, like so:
git config --global user.name "ghusername"
Next, run git commit, and it should work:
git commit -m "<YOUR MESSAGE>"
Let me know if this works for you, I want to know if it's the same problem.

Security token not effective on SonarCloud

On SonarCloud, I created an organization and a user (from GitHub), plus a project. For the user I created a token. Then I ran the command
mvn org.sonarsource.scanner.maven:sonar-maven-plugin:3.5.0.1254:sonar -Dsonar.projectKey=<project key> -Dsonar.organization=<my org> -Dsonar.host.url=https://sonarcloud.io -Dsonar.login=<token>
I get the error message
[ERROR] Failed to execute goal org.sonarsource.scanner.maven:sonar-maven-plugin:3.5.0.1254:sonar (default-cli) on project XXX: You're not authorized to run analysis. Please contact the project administrator.
In the project settings > Administration > Permissions, the user does have "Execute Analysis" permission.
If I add the "Execute Analysis" permission to Anyone, the command above works (it does not need the -Dsonar.login option).
Does anyone have a clue?
Adding the "Execute Analysis" permission to the SonarCloud user who generated the token should be enough.
Can you retry with:
mvn sonar:sonar \
"-Dsonar.projectKey=<project key>" \
"-Dsonar.organization=<my org>" \
"-Dsonar.host.url=https://sonarcloud.io" \
"-Dsonar.login=<token>"
In case it doesn't work, can you provide the output of the command?
It turns out that SonarCloud works as expected. I had forgotten that some people in my organization seem to enjoy making their colleagues' lives miserable. Sneakily removing items such as sonar.login from requests is one of their tricks.

Docker and securing passwords

I've been experimenting with Docker recently, building some services to play around with, and one thing that keeps nagging me is putting passwords in a Dockerfile. I'm a developer, so storing passwords in source feels like a punch in the face. Should this even be a concern? Are there any good conventions on how to handle passwords in Dockerfiles?
It is definitely a concern. Dockerfiles are commonly checked in to repositories and shared with other people. An alternative is to provide any credentials (usernames, passwords, tokens, anything sensitive) as environment variables at runtime. This is possible via the -e argument (for individual variables on the CLI) or the --env-file argument (for multiple variables in a file) to docker run. Read this for using environment variables with docker-compose.
Using --env-file is definitely the safer option, since it protects against the secrets showing up in ps or in logs if one uses set -x.
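A minimal sketch of both forms (image, variable, and file names hypothetical):
# a single variable on the CLI
docker run -e DB_USER=myuser myimage
# or several variables from a file that never gets committed
docker run --env-file ./secrets.env myimage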
However, env vars are not particularly secure either. They are visible via docker inspect, and hence they are available to any user that can run docker commands. (Of course, any user that has access to docker on the host also has root anyway.)
My preferred pattern is to use a wrapper script as the ENTRYPOINT or CMD. The wrapper script can first import secrets from an outside location in to the container at run time, then execute the application, providing the secrets. The exact mechanics of this vary based on your run time environment. In AWS, you can use a combination of IAM roles, the Key Management Service, and S3 to store encrypted secrets in an S3 bucket. Something like HashiCorp Vault or credstash is another option.
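A sketch of that wrapper pattern, assuming the secrets live in an S3 bucket the container's IAM role can read (bucket and variable names are hypothetical):
#!/bin/sh
set -e
# fetch the secret at run time; this could equally be Vault or credstash
export DB_PASSWORD="$(aws s3 cp s3://my-secret-bucket/db_password -)"
# hand off to the real application, which reads DB_PASSWORD from its environment
exec "$@"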
AFAIK there is no optimal pattern for using sensitive data as part of the build process. In fact, I have an SO question on this topic. You can use docker-squash to remove layers from an image, but there's no native functionality in Docker for this purpose.
You may find shykes comments on config in containers useful.
Our team avoids putting credentials in repositories, so that means they're not allowed in Dockerfiles. Our best practice within applications is to use creds from environment variables.
We solve for this using docker-compose.
Within docker-compose.yml, you can specify a file that contains the environment variables for the container:
env_file:
- .env
Make sure to add .env to .gitignore, then set the credentials within the .env file like:
SOME_USERNAME=myUser
SOME_PWD_VAR=myPwd
Store the .env file locally or in a secure location where the rest of the team can grab it.
See: https://docs.docker.com/compose/environment-variables/#/the-env-file
Docker now (version 1.13 or 17.06 and higher) has support for managing secret information. Here's an overview and more detailed documentation.
Similar features exist in Kubernetes and DC/OS.
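A sketch of the swarm-mode workflow (secret, service, and image names hypothetical); the secret appears inside the container as the file /run/secrets/db_password:
echo "s3cret" | docker secret create db_password -
docker service create --name app --secret db_password my-app-image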
You should never add credentials to a container unless you're OK with broadcasting the creds to whomever can download the image. In particular, doing an ADD creds and a later RUN rm creds is not secure, because the creds file remains in the final image in an intermediate filesystem layer. It's easy for anyone with access to the image to extract it.
The typical solution I've seen when you need creds to check out dependencies and such is to use one container to build another. That is, you typically have some build environment in your base container, and you need to invoke that to build your app container. The simple approach is to add your app source and then RUN the build commands, but that is insecure if you need creds in that RUN. Instead, put your source into a local directory, run (as in docker run) the container to perform the build step with the local source directory mounted as a volume and the creds either injected or mounted as another volume. Once the build step is complete, you build your final container by simply ADDing the local source directory, which now contains the built artifacts.
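A sketch of that two-step flow (image names and paths hypothetical):
# build inside a throwaway container; creds are only mounted, never baked into an image
docker run --rm \
  -v "$PWD/src:/src" \
  -v "$HOME/.ssh:/root/.ssh:ro" \
  build-env make -C /src
# the final image ADDs only the built artifacts now sitting in ./src
docker build -t my-app ./src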
I'm hoping Docker adds some features to simplify all this!
Update: it looks like the method going forward will be nested builds. In short, the Dockerfile would describe a first container that is used to build the run-time environment, and then a second nested container build that can assemble all the pieces into the final container. This way the build-time stuff isn't in the second container. Think of a Java app where you need the JDK for building the app but only the JRE for running it. There are a number of proposals being discussed; best to start from https://github.com/docker/docker/issues/7115 and follow some of the links for alternate proposals.
An alternative to using environment variables, which can get messy if you have a lot of them, is to use volumes to make a directory on the host accessible in the container.
If you put all your credentials as files in that folder, then the container can read the files and use them as it pleases.
For example:
$ echo "secret" > /root/configs/password.txt
$ docker run -v /root/configs:/cfg ...
In the Docker container:
# echo Password is `cat /cfg/password.txt`
Password is secret
Many programs can read their credentials from a separate file, so this way you can just point the program to one of the files.
run-time only solution
docker-compose also provides a non-swarm-mode solution (since v1.11: Secrets using bind mounts).
The secrets are mounted as files below /run/secrets/ by docker-compose. This solves the problem at run time (running the container), but not at build time (building the image), because /run/secrets/ is not mounted at build time. Furthermore, this behavior depends on running the container with docker-compose.
Example:
Dockerfile
FROM alpine
CMD cat /run/secrets/password
docker-compose.yml
version: '3.1'
services:
  app:
    build: .
    secrets:
      - password
secrets:
  password:
    file: password.txt
To build, execute:
docker-compose up -d
Further reading:
mikesir87's blog - Using Docker Secrets during Development
My approach seems to work, but is probably naive. Tell me why it is wrong.
ARGs set during docker build are exposed by the history subcommand, so no go there. However, environment variables given in the run command are available to the container but are not part of the image.
So, in the Dockerfile, do the setup that does not involve secret data. Set a CMD of something like /root/finish.sh. In the run command, use environment variables to send secret data into the container. finish.sh uses the variables, essentially, to finish the build tasks.
To make managing the secret data easier, put it into a file that is loaded by docker run with the --env-file switch. Of course, keep the file itself secret; .gitignore and such.
For me, finish.sh runs a Python program. It checks to make sure it hasn't run before, then finishes the setup (e.g., copies the database name into Django's settings.py).
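A sketch of such a finish.sh (paths, placeholder token, and variable names hypothetical):
#!/bin/sh
# finish.sh: complete setup at run time using env vars supplied via --env-file
if [ ! -f /var/run/app.initialized ]; then
  # e.g. copy the database name from the environment into Django's settings
  sed -i "s/__DB_NAME__/$DB_NAME/" /app/settings.py
  touch /var/run/app.initialized
fi
exec python /app/manage.py runserver 0.0.0.0:8000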
There is a new docker command for "secrets" management, but it only works for swarm clusters.
docker service create \
--name my-iis \
--publish target=8000,port=8000 \
--secret src=homepage,target="\inetpub\wwwroot\index.html" \
microsoft/iis:nanoserver
The issue 13490 "Secrets: write-up best practices, do's and don'ts, roadmap" just got a new update in Sept. 2020, from Sebastiaan van Stijn:
Build time secrets are now possible when using buildkit as builder; see the blog post "Build secrets and SSH forwarding in Docker 18.09", Nov. 2018, from Tõnis Tiigi.
The documentation is updated: "Build images with BuildKit"
The RUN --mount option used for secrets will graduate to the default (stable) Dockerfile syntax soon.
That last part is new (Sept. 2020)
New Docker Build secret information
The new --secret flag for docker build allows the user to pass secret information to be used in the Dockerfile for building docker images in a safe way that will not end up stored in the final image.
id is the identifier to pass to docker build --secret. This identifier is associated with the RUN --mount identifier used in the Dockerfile.
Docker does not use the filename of where the secret is kept outside of the Dockerfile, since this may be sensitive information.
dst renames the secret file to a specific path for the RUN command in the Dockerfile to use.
For example, with a secret piece of information stored in a text file:
$ echo 'WARMACHINEROX' > mysecret.txt
And with a Dockerfile that specifies use of a BuildKit frontend docker/dockerfile:1.0-experimental, the secret can be accessed.
For example:
# syntax = docker/dockerfile:1.0-experimental
FROM alpine
# shows secret from default secret location:
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
# shows secret from custom secret location:
RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
This Dockerfile is only to demonstrate that the secret can be accessed; as you can see, the secret is printed in the build output. The final image built will not have the secret file:
$ docker build --no-cache --progress=plain --secret id=mysecret,src=mysecret.txt .
...
#8 [2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
#8 digest: sha256:5d8cbaeb66183993700828632bfbde246cae8feded11aad40e524f54ce7438d6
#8 name: "[2/3] RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret"
#8 started: 2018-08-31 21:03:30.703550864 +0000 UTC
#8 1.081 WARMACHINEROX
#8 completed: 2018-08-31 21:03:32.051053831 +0000 UTC
#8 duration: 1.347502967s
#9 [3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar
#9 digest: sha256:6c7ebda4599ec6acb40358017e51ccb4c5471dc434573b9b7188143757459efa
#9 name: "[3/3] RUN --mount=type=secret,id=mysecret,dst=/foobar cat /foobar"
#9 started: 2018-08-31 21:03:32.052880985 +0000 UTC
#9 1.216 WARMACHINEROX
#9 completed: 2018-08-31 21:03:33.523282118 +0000 UTC
#9 duration: 1.470401133s
...
The 12-Factor App methodology says that any configuration should be stored in environment variables.
Docker Compose can do variable substitution in the configuration, so that can be used to pass passwords from the host to Docker.
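A minimal compose fragment as a sketch (service, image, and variable names hypothetical); Compose substitutes ${DB_PASSWORD} from the host environment or a local .env file:
services:
  app:
    image: my-app
    environment:
      DB_PASSWORD: ${DB_PASSWORD}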
Starting from version 20.10, besides using a secret file, you can also provide secrets directly with env.
buildkit: secrets: allow providing secrets with env moby/moby#41234 docker/cli#2656 moby/buildkit#1534
Support --secret id=foo,env=MY_ENV as an alternative for storing a secret value to a file.
--secret id=GIT_AUTH_TOKEN will load env if it exists and the file does not.
secret-file:
THIS IS SECRET
Dockerfile:
# syntax = docker/dockerfile:1.3
FROM python:3.8-slim-buster
COPY build-script.sh .
RUN --mount=type=secret,id=mysecret ./build-script.sh
build-script.sh:
cat /run/secrets/mysecret
Execution:
$ export MYSECRET=theverysecretpassword
$ export DOCKER_BUILDKIT=1
$ docker build --progress=plain --secret id=mysecret,env=MYSECRET -t abc:1 . --no-cache
......
#9 [stage-0 3/3] RUN --mount=type=secret,id=mysecret ./build-script.sh
#9 sha256:e32137e3eeb0fe2e4b515862f4cd6df4b73019567ae0f49eb5896a10e3f7c94e
#9 0.931 theverysecretpassword
#9 DONE 1.5s
......
With Docker v1.9 you can use the ARG instruction to fetch arguments passed on the command line to the image at build time. Simply use the --build-arg flag. That way you can avoid keeping explicit passwords (or other sensitive information) in the Dockerfile and can pass them on the fly.
source: https://docs.docker.com/engine/reference/commandline/build/ and http://docs.docker.com/engine/reference/builder/#arg
Example:
Dockerfile
FROM busybox
ARG user
RUN echo "user is $user"
build image command
docker build --build-arg user=capuccino -t test_arguments -f path/to/dockerfile .
during the build it prints
$ docker build --build-arg user=capuccino -t test_arguments -f ./test_args.Dockerfile .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM busybox
---> c51f86c28340
Step 2 : ARG user
---> Running in 43a4aa0e421d
---> f0359070fc8f
Removing intermediate container 43a4aa0e421d
Step 3 : RUN echo "user is $user"
---> Running in 4360fb10d46a
**user is capuccino**
---> 1408147c1cb9
Removing intermediate container 4360fb10d46a
Successfully built 1408147c1cb9
Hope it helps! Bye.
Something simple like this will work, I guess, if you're in a bash shell:
read -sp "db_password: " db_password
docker build --build-arg mysql_db_password="$db_password" -t <image_name> .
Simply read the password silently and pass it as a build argument; you need to accept the variable with ARG in the Dockerfile.
While I totally agree there is no simple solution, there continues to be a single point of failure: the Dockerfile, etcd, and so on. Apcera has a plan that looks like sidekick dual authentication: two containers cannot talk unless there is an Apcera configuration rule. In their demo the uid/pwd was in the clear and could not be reused until the admin configured the linkage. For this to work, however, it probably means patching Docker or at least the network plugin (if there is such a thing).
