Accessing Secrets/Private Files Needed for Building in Dockerfile? - security

I'm trying to build an image in Docker that requires a few secret files to do things like pulling from a private git repo. I've seen a lot of people with code like this:
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN git clone git@github.com:some/repo.git /usr/local/some_folder
Although that works, it means I have to store my private id_rsa with my image, which strikes me as a bad idea. What I'd much rather do is keep my secret files in some cloud storage like s3, and just pass in credentials as environment variables to be able to pull everything else down.
I know that I can pass environment variables in at docker run with the -e switch, but if I need some files at build time (like the id_rsa to perform a git clone), what can I do? Ideally I'd be able to pass environment variables to docker build, but that's not possible (I can't understand why).
So, ideas? What's the canonical/correct thing to do here? I can't be the first person with this issue.

I'll start with the easiest part, which I think is a common misconception:
Ideally I'd be able to pass environment variables to docker build, but that's not possible (I can't understand why).
A docker build is meant to be reproducible. Given the same context (the files under the same directory as the Dockerfile) the resulting image is the same. They are also meant to be simple. Both things together explain the absence of environment options or other conditionals.
Now, because the build needs to be reproducible, the execution of each command is cached. If you run the build twice, the git clone will only run the first time.
By your comment, this is not what you intend:
so on any new image build, we always want the newest version of the repo
To trigger a new build you need to either change the context or the Dockerfile.
The canonical way (I'm probably abusing the word, but this is how the automated builds work) is to include the Dockerfile in git.
This allows a simple workflow of git pull ; docker build ... and avoids the problem with storing your git credentials.
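As a hedged sketch of that workflow (the some_image tag is an illustrative assumption; /usr/local/some_folder is taken from the question), the Dockerfile lives in the same repository as the code, so the build just copies the already-cloned source instead of cloning inside the image:
# on the host: update the checkout, then rebuild
git pull
docker build -t some_image .
# in the Dockerfile, instead of the ssh/clone steps above:
COPY . /usr/local/some_folder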

Related

Skip Updating crates.io index when using cargo run

I have a simple Program written in Rust.
When I type cargo run in the terminal, it always shows:
Updating crates.io index...
And this takes around 40 seconds.
But I just want to execute my program, and I don't think cargo needs to update the index every time I run it, since this makes testing very slow...
Is there an option to skip that?
I figured it out:
Since I am running cargo in a Docker container, I need to store the cargo cache persistently because it resets every time the container restarts.
There is The Cargo Book, which contains all the information you'd ever want to know about cargo, including how to disable the index update.
I've tried to use this feature myself, and here's the command that worked:
cargo +nightly run -Z no-index-update
The +nightly prefix was new to me as well; it tells rustup to run the command with the nightly toolchain, which is required for -Z flags.
Parts of this answer have already been brought up by users thefeiter and Captain Fim, but I think a more complete answer could be helpful for Rust/Linux newcomers.
When we use docker run, the index is updated every time the container runs because the cache is not shared between runs. So to skip the index update, as Captain Fim mentioned, you need to set the CARGO_HOME environment variable on the container. This environment variable should contain the path to a persistent folder. One simple solution is to use a Docker volume to share the cache between host and container.
In my case, I created a cargo_home folder in my project (it could be somewhere else) on my host. I mounted the whole project folder into the container and set the CARGO_HOME environment variable to the container path of that cargo_home folder.
The command to build my app looks like this:
docker run --rm --user "$(id -u)":"$(id -g)" -e CARGO_HOME=/usr/src/myapp/cargo_home -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust-compiler cargo build
The first time you run this command, it will take some time, but you should see the cargo_home folder getting filled with files. The next time you run the command, it should use the cargo_home folder as a cache; the build should be almost instant if your app's source code did not change.
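If you would rather not keep the cache inside the project folder, a named Docker volume works the same way. This is only a sketch: the volume name cargo-cache and the mount point /cargo are illustrative assumptions, while the rust-compiler image comes from the command above.
docker volume create cargo-cache
docker run --rm -e CARGO_HOME=/cargo -v cargo-cache:/cargo -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust-compiler cargo build
Note that without the --user flag from the earlier command, files created in target/ will be owned by root.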

How to exclude the VENV in Docker PUSH

I have a basic Python script that I would like to containerize. As part of the script, I have to pip install az.cli, which is almost 500 MB in size. Locally, it works all right. When I docker build it and docker run it, it works just as it's supposed to. The issue is when I want to docker push it (to Docker Hub for now).
It's packaging the entire project with the venv, which is ~550 MB. I'd like to avoid that, if possible. I added the venv directory to the .dockerignore file, but that doesn't seem to help. I know it's pushing the whole container to Docker Hub, so I guess, essentially, is there a way to build/run the Docker application without the entire az.cli baked in?
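For reference, a .dockerignore meant to exclude the virtual environment typically looks something like this (assuming the environment directory is named venv/):
venv/
__pycache__/
*.pyc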
FYI: I am new to Docker, so what I am asking may not make sense.

Add a comment to SVN file

I'm exporting a file from one folder and moving it to production without making any changes, using this command:
svn export --username user --password passwd --non-interactive --force svn://svnserver.com/trunk/patch/115/sql/TestFile.sql
After the move, I would like to add a comment/tag indicating that the file was moved successfully. For this, I tried the commands below, but they didn't work:
> svn commit -m " Test" TestFile.sql
svn: '/home//SVNTEST/1' is not a working copy
> svn commit -m "Test" svn://svnserver.com/trunk/patch/115/sql/TestFile.sql
svn: Must give local path (not URL) as the target of a commit
Is this possible, and if so, how can I do it?
To summarize, you're copying a file from the repository to your local machine, then you want to somehow indicate in the repository that this action happened.
Without knowing more about your setup, I think creating a tag is probably the most straightforward way to do this. Use this command:
svn copy svn://svnserver.com/trunk/ svn://svnserver.com/tags/115 -m "Test"
Use whatever unique key you like for the tag name (here I used '115', since it seemed to be a patch identifier).
Let's discuss why the commands you tried did not work.
Since you're exporting rather than checking out the file, you don't have a working copy. Exporting is basically equivalent to downloading a file from an HTTP or FTP server; there are no strings attached.
Now, the subcommand commit requires a working copy (in order to know where in the repository to put your local changes), which explains why in the first command you tried the error indicated you aren't in a working copy. The second command errored because you (I think) are trying to tell SVN the remote location to upload your local TestFile.sql, which is not a valid use-case for the commit subcommand.
My suggestion creates a tag, but does so entirely on the server which means you don't need a working copy.
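If you ever do need to record something on the file itself rather than creating a tag, you would first need a working copy. Here is a hedged sketch, where the property name deploy-status and the checkout path are illustrative assumptions:
svn checkout svn://svnserver.com/trunk/patch/115/sql /tmp/wc
cd /tmp/wc
svn propset deploy-status "moved to production" TestFile.sql
svn commit -m "Mark TestFile.sql as moved to production"
Properties ride along with the file in the repository, so this records the information without changing the file's contents.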

GIT Pull using Ubuntu 16.04.2 LTS

I want to be able to pull down my source code from GitHub automatically, but currently I do it manually using the process below.
I have to do this by running
$ sudo -i
Then I cd to my directory; once there, I run the following command:
$ git pull origin master
The command then asks me to enter my key passphrase:
Enter passphrase for key '/root/.ssh/id_rsa':
This then pulls the latest code down from GitHub.
A beginning note: does your repo really need to be pulled as root? Why? It's probably not a good idea to be doing that.
GitHub supplies HTTPS pull URLs that anyone can use to pull without needing a key. So we can add another remote, used specifically for this purpose, that pulls over HTTPS.
git remote add autopull https://github.com/<user>/<repo>.git
Now you can change your pull command to:
git pull autopull master
From there, you can put it in .profile, .bash_profile, .bashrc, or maybe even a cron script. You haven't specified how you want to automate it, though, so I can't give any specific examples.
It should be noted that this only works for public repositories. If this is a private repository, you should create another keypair that has no passphrase. If it isn't your default keypair, you can make use of a trick:
ssh-agent bash -c 'ssh-add /path/to/yourkey; git pull autopull master'
Or another answer to the same question:
GIT_SSH_COMMAND='ssh -i /path/to/yourkey' git pull autopull master
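Two small sketches to round this out, where the key path, repository path, and schedule are illustrative assumptions: generating a passphrase-less key for the private-repository case, and a cron entry that runs the pull unattended.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/autopull_key
# crontab entry: pull every 15 minutes using that key
*/15 * * * * cd /home/user/myrepo && GIT_SSH_COMMAND='ssh -i /home/user/.ssh/autopull_key' git pull autopull master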

Cannot install inside docker container

I'm quite new to Docker, and I'm facing a problem I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install, and the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So, I know that I have to install bzip2 inside the jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2" but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers: containers are, or should be, immutable. So, this is what you can try to fix the issue:
1. Treat your base image, i.e. jenkins, as the starting point.
2. Log in to a container running this base image and install bzip2.
3. Commit these changes; this should result in a new image.
4. Now use the image from step 3 to install any other packages you need, like npm.
5. Commit that image as well.
Note: to execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
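Putting steps 1-3 together, a hedged sketch (the jenkins-with-bzip2 image tag is an illustrative assumption) looks like this:
docker exec -u root -it jenkins bash   # the official image runs as the jenkins user, so open a root shell
apt-get update && apt-get install -y bzip2
exit
docker commit jenkins jenkins-with-bzip2   # save the modified container as a new image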
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable: to make a change that persists, commit it and use the newly created image for further changes. I hope this helps.
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. That will work, but it's a hand-built method that is error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, note that the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but from outside the container you can run docker commands as any user; from the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
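For example, a minimal child Dockerfile along those lines (the my-jenkins tag below is an illustrative assumption; jenkins:latest comes from above) might look like this:
FROM jenkins:latest
USER root
# install the missing tool, then clean the apt lists to keep the layer small
RUN apt-get update && apt-get install -y bzip2 && rm -rf /var/lib/apt/lists/*
USER jenkins
Build it with docker build -t my-jenkins . and run that image in place of jenkins.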
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.

Resources