Skip Updating crates.io index when using cargo run - rust

I have a simple program written in Rust.
When I type cargo run in terminal it always shows:
Updating crates.io index...
And this takes around 40 seconds.
But I just want to execute my program, and I don't think cargo needs to update the index every time I run it, since this makes testing very slow...
Is there an option to skip that?

I figured it out:
Since I am running cargo in a Docker container, I need to store the cargo cache persistently because it resets every time the container restarts.

There is The Cargo Book, which contains all the information you'd ever want to know about cargo. See this for disabling the index update.
I've tried to use this feature myself, and here's the command that worked:
cargo +nightly run -Z no-index-update
The +nightly thing is new to me as well, but I found it here.

This answer has been brought up by users thefeiter and Captain Fim, but I think a more complete answer could be helpful for Rust/Linux newcomers.
When we use docker run, the index is updated every time the container is run because the cache is not shared between runs. So to skip the index update, as Captain Fim mentioned, you need to set the CARGO_HOME environment variable on the container. This environment variable should contain the path to a persistent folder. One simple solution is to use Docker volumes to share the cache between the host and the container.
In my case, I created a cargo_home folder in my project (it could be somewhere else) on my host. I passed the whole project folder to the container and set the container's CARGO_HOME environment variable to the container path of that cargo_home folder.
The command to build my app looks like this:
docker run --rm --user "$(id -u)":"$(id -g)" -e CARGO_HOME=/usr/src/myapp/cargo_home -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust-compiler cargo build
The first time you run this command, it will take some time, but you should see the cargo_home folder being filled with files. The next time you run the command, it should use the cargo_home folder as a cache. This should be almost instant if your app's source code did not change.
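If you'd rather not keep the cache inside the project folder, a named Docker volume can serve the same purpose. This is only a sketch; the volume name cargo_cache, the mount path /cargo_home, and the image name rust-compiler are placeholders:
docker volume create cargo_cache
docker run --rm -e CARGO_HOME=/cargo_home -v cargo_cache:/cargo_home -v "$PWD":/usr/src/myapp -w /usr/src/myapp rust-compiler cargo build
Docker manages the volume itself, so the cache survives container restarts without cluttering your source tree.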

Related

Dockerizing Node.js app - what does ENV PATH /app/node_modules/.bin:$PATH do?

I went through one of the very few good tutorials on dockerizing a Vue.js app, and there is one thing I don't understand: why is the following mandatory in the Dockerfile?
# add `/app/node_modules/.bin` to $PATH
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json #not sure though how it relates to PATH...
I found only one explanation here which says:
We expose all Node.js binaries to our PATH environment variable and copy our project's package.json to the app directory. Copying the JSON file rather than the whole working directory allows us to take advantage of Docker's cache layers.
Still, it didn't make me any smarter. Is anyone able to explain it in plain English?
Error prevention
I think this is just a simple method of preventing an error where Docker wasn't able to find the correct executables (or any executables at all). Besides adding another layer to your image, there is, as far as I know, generally no downside to adding that line to your Dockerfile.
How does it work?
Adding node_modules/.bin to the PATH environment variable ensures that the executables created during the npm build or yarn build processes can be found. You could also COPY your locally built node_modules folder into the image, but it's advised to build it inside the Docker container to ensure all binaries are adapted to the underlying OS running in the container. The best practice would be to use multi-stage builds.
Furthermore, adding node_modules/.bin at the beginning of the PATH environment variable ensures that exactly these executables (from the node_modules folder) are used instead of any other executables which might also be installed on the system inside the Docker image.
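As a rough sketch of what such a setup could look like (a generic Node app is assumed here; the image tags, paths, and start command are placeholders, not something from the question):
# Build stage: install dependencies inside the container so the binaries match the container OS
FROM node:18 AS build
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

# Runtime stage: take over the built node_modules and expose its .bin folder on PATH
FROM node:18-slim
WORKDIR /app
COPY --from=build /app /app
ENV PATH /app/node_modules/.bin:$PATH
CMD ["npm", "start"]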
Do I need it?
Short answer: Usually no. It should be optional.
Long answer: It should be enough to set the WORKDIR to the path where the node_modules folder is located for the RUN, CMD, or ENTRYPOINT commands in your Dockerfile to find the correct binaries and therefore execute successfully. But I, for example, had a case where Docker wasn't able to find the files (I had a pretty complex setup with a so-called devcontainer in VSCode). Adding the line ENV PATH /app/node_modules/.bin:$PATH solved my problem.
So, if you want to increase the stability of your Docker setup in order to make sure that everything works as expected, just add the line.
So I think the benefit of this line is to add the node_modules path inside the Docker container to the list of paths in PATH for that container. If you're on a Mac (or Linux, I think) and run:
$ echo $PATH
You should see a list of paths which are used to run global commands from your terminal, e.g. gulp, husky, yarn, and so on.
The ENV PATH instruction adds the node_modules path to that list of paths in your Docker container, so that such commands, if needed, can be run globally inside the container.
.bin (short for 'binaries') is a hidden directory; the period before bin indicates that it is hidden. This directory contains executable files of your app's modules.
PATH is just a collection of directories/folders that contains executable files.
When you try to do something that requires a specific executable file, the shell looks for it in the collection of directories in PATH.
ENV PATH /app/node_modules/.bin:$PATH adds the .bin directory to this collection, so that when you run something that requires a specific module's executable, the shell will also look for it in the .bin folder.
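A quick way to see the effect in a plain shell (webpack is just an assumed example of a module binary):
export PATH=/app/node_modules/.bin:$PATH
which webpack   # now resolves to /app/node_modules/.bin/webpack, if that package is installed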
For each instruction, like FROM, COPY, RUN, CMD, ..., Docker creates an image with the result of that instruction, and these images are called layers. The final image is the result of merging all the layers.
If you use the COPY command to store all the code in one layer, that layer will be larger than one that just stores an environment variable with the path to the code.
That's why the cache layers are a benefit.
For more info about layers, take a look at this very good article.
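To illustrate the caching benefit, a common ordering is to copy package.json and install dependencies before copying the rest of the source, so that the install layer is reused as long as the dependencies don't change. A minimal sketch (image tag and paths are assumptions):
FROM node:18
WORKDIR /usr/src/app
# Changes to application code do not invalidate this layer...
COPY package.json ./
# ...so this expensive step is taken from cache on most rebuilds
RUN npm install
# Only this layer and the ones after it are rebuilt when the source changes
COPY . .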

Docker: set a return value as an environment variable

I'm trying to create a temporary folder and then set the path as an environment variable for use in later Dockerfile instructions:
FROM alpine
RUN export TEMPLATES_DIR=$(mktemp -d)
ENV TEMPLATES_DIR=$TEMPLATES_DIR
RUN echo $TEMPLATES_DIR
Above is what I've tried; any idea how I can achieve this?
Anything you run in a Dockerfile will be persisted forever in the resulting Docker image. As a general statement, you don't need to use environment variables to specify filesystem paths, and there's not much point in creating "temporary" paths. Just pick a path; it doesn't even need to be a "normal" Linux path since the container filesystem is isolated.
RUN mkdir /templates
It's common enough for programs to use environment variables for configuration (this is a key part of the "12-factor" design), so you can set the environment variable to the fixed path too:
ENV TEMPLATES_DIR=/templates
In the sequence you show, every RUN step creates a new container with a new shell, and so any environment variables you set in a RUN command get lost at the end of that step. You can't set a persistent environment variable in quite the way you're describing; Create dynamic environment variables at build time in Docker discusses this further.
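You can see this by echoing the variable in a later step; the second echo prints an empty value because the export from the first RUN did not persist (a sketch of the behaviour, not a fix):
FROM alpine
RUN export TEMPLATES_DIR=$(mktemp -d) && echo "inside this step: $TEMPLATES_DIR"
RUN echo "in the next step: $TEMPLATES_DIR"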
If it's actually a temporary directory, and you're intending to clean it up, there are two more possibilities. One is to do all of the work you need inside a single RUN step that runs multiple commands. The environment variable won't outlive that RUN step, but it will be accessible within it.
RUN export TEMPLATES_DIR=$(mktemp -d) \
&& echo "$TEMPLATES_DIR" \
&& rm -rf "$TEMPLATES_DIR"
A second is to use a multi-stage build to do your "temporary" work in one image, but then copy the "permanent" parts of it out of that image into the final image you're actually shipping.
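A rough sketch of that multi-stage approach, with the stage name and paths being placeholders:
# Stage 1: do the "temporary" work in a throwaway image
FROM alpine AS build
RUN mkdir /templates && echo "generated content" > /templates/example.txt

# Stage 2: copy only the permanent results into the final image
FROM alpine
COPY --from=build /templates /templates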

Why is COPY in docker build not detecting updates

I run a build on a node application and then use the artifacts to build a docker image. The COPY command that moves my source in place isn't detecting changes to the source files after a build; it's just using the cache.
Step 9/12 : COPY server /home/nodejs/app/server
---> Using cache
---> bee2f9334952
Am I doing something wrong with COPY or is there a way to not cache a particular step?
I found this in the Docker documentation:
For the ADD and COPY instructions, the contents of the file(s) in the image are examined and a checksum is calculated for each file. The last-modified and last-accessed times of the file(s) are not considered in these checksums. During the cache lookup, the checksum is compared against the checksum in the existing images. If anything has changed in the file(s), such as the contents and metadata, then the cache is invalidated.
So, as far as I understand, the cache should be invalidated. You can use the --no-cache command-line option to make sure. If you get the correct behavior with --no-cache and an incorrect behavior without it, you would have discovered a bug and should report it.
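For reference, the flag is simply passed to docker build (the tag name is a placeholder):
docker build --no-cache -t myimage .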
This was interesting. I found out that COPY WAS working, it just looked like it wasn't.
I was rebuilding the images and restarting my containers, but the container was still using the old image. I had to remove my containers, and then when I started them up they used the newer image that was created, and I could see my changes.
Here is another thread that deals with this and, in my case, diagnoses it more accurately.
For me, the problem was in my interpretation of the Docker build output. I did not realize that not only the last version of a layer is cached, but also all previous ones.
I was testing cache invalidation by changing a single file back and forth. After the first change, the cache was invalidated OK, but after changing back, the layer was taken from cache, which seemed as if the invalidation logic did not work properly.
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache
It is likely a bug, but hard to replicate. It happens to me in Jenkins builds when I copy a new file to an existing folder that used to be copied in its entirety using a single Dockerfile COPY command. To make cache invalidation work correctly (and avoid rebuilding earlier layers as --no-cache would), it is necessary to run docker build --tag <REPO>/<IMAGE> . on the host (outside of Jenkins).
You could try ADD instead. It will invalidate the cache for the copy. The downside is that it will also invalidate the cache for the other commands after it. If your ADD is among the last steps, it shouldn't impact the build process too much.
Note: The first encountered ADD instruction will invalidate the cache for all following instructions from the Dockerfile if the contents of <src> have changed. This includes invalidating the cache for RUN instructions. See the Dockerfile Best Practices guide for more information. https://docs.docker.com/engine/reference/builder/#add
Had the same issue. After considering @Nick Brady's post (thanks for the suggestion!), here is my current update procedure that seems to be working fine:
svn update --non-interactive --no-auth-cache --username UUU --password PPP
docker build . -f deploy/Dockerfile -t myimage
docker stop mycontainer
docker rm mycontainer
docker run --name=mycontainer -p 80:3100 -d --restart=always \
--env-file=deploy/.env.production myimage
The magic here is to not simply restart the container (docker restart mycontainer), as this would just stop and start again the old container, which was instantiated from a previous version of myimage. Stopping and destroying the old container and running a new one instead results in a fresh container instantiated from the newly built myimage.
From the point of view of Docker this is just like any other command.
Docker sees that this line didn't change, so it caches it.
Similarly, if you have a curl command in your Dockerfile, Docker doesn't fetch the URL just to check if its content changed. It checks whether the command changed, not its result.
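For instance, a line like the following is cached by its text, so later builds will not re-download the file unless the line itself changes (the URL and target path are made up for illustration):
RUN curl -fsSL https://example.com/data.json -o /tmp/data.json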

Cannot install inside docker container

I'm quite new at Docker, but I'm facing a problem I have no idea how to solve.
I have a Jenkins (Docker) image running and everything was fine. A few days ago I created a job so I can run my Node.js tests every time a pull request is made. One of the job's build steps is to run npm install. And the job is constantly failing with this error:
tar (child): bzip2: Cannot exec: No such file or directory
So, I know that I have to install bzip2 inside the jenkins container, but how do I do that? I've already tried to run docker run jenkins bash -c "sudo apt-get bzip2" but I got: bash: sudo: command not found.
With that said, how can I do that?
Thanks in advance.
The answer to this lies in the philosophy of Docker containers. Docker containers are/should be immutable. So, this is what you can try to fix this issue:
1. Treat your base image, i.e. jenkins, as a starting point.
2. Log in to this base image and install bzip2.
3. Commit these changes; this should result in a new image.
4. Now use the image from step 3 to install any other package, like npm.
5. Now commit the above image.
Note: To execute commands in a more controlled way, I always prefer to use something like this:
docker exec -it jenkins bash
In a nutshell, the answer to both of your current issues lies in the fact that images are immutable, so to make any change that will persist you have to commit it and use the newly created image for further changes. I hope this helps.
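As a rough sketch of that commit-based workflow (the container name jenkins and the new image name jenkins-with-bzip2 are assumptions, and the package manager depends on the base image):
docker exec -u root -it jenkins bash          # open a root shell in the running container
apt-get update && apt-get install -y bzip2    # run inside the container, then exit
docker commit jenkins jenkins-with-bzip2      # back on the host: save the change as a new image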
Lots of issues here, but the biggest one is that you need to build your images with the tools you need rather than installing inside of a running container. As techtrainer mentions, images are immutable and don't change (at least from your running container), and containers are disposable (so any changes you make inside them are lost when you restart them unless your data is stored outside the container in a volume).
I do disagree with techtrainer on making your changes in a container and committing them to an image with docker commit. This will work, but it's a hand-built method that is very error prone and not easily reproduced. Instead, you should leverage a Dockerfile and use docker build. You can either modify the jenkins image you're using by directly modifying its Dockerfile, or you can create a child image that is FROM jenkins:latest.
When modifying this image, the Jenkins image is configured to run as the user "jenkins", so you'll need to switch to root to perform your application installs. The "sudo" app is not included in most images, but external to the container, you can run docker commands as any user. From the CLI, that's as easy as docker run -u root .... And inside your Dockerfile, you just need a USER root at the top and then USER jenkins at the end.
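A minimal child-image sketch along those lines (the base tag jenkins:latest is taken from the answer above and may differ for current images):
FROM jenkins:latest
# Switch to root to install system packages
USER root
RUN apt-get update && apt-get install -y bzip2 && rm -rf /var/lib/apt/lists/*
# Drop back to the unprivileged jenkins user expected by the image
USER jenkins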
One last piece of advice is to not run your builds directly on the jenkins container, but rather run agents with your needed build tools that you can upgrade independently from the jenkins container. It's much more flexible, allows you to have multiple environments with only the tools needed for that environment, and if you scale this up, you can use a plugin to spin up agents on demand so you could have hundreds of possible agents to use and only be running a handful of them concurrently.

Accessing Secrets/Private Files Needed for Building in Dockerfile?

I'm trying to build an image in Docker that requires a few secret files to do things like pulling from a private git repo. I've seen a lot of people with code like this:
ADD id_rsa /root/.ssh/id_rsa
RUN chmod 700 /root/.ssh/id_rsa
RUN touch /root/.ssh/known_hosts
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN git clone git@github.com:some/repo.git /usr/local/some_folder
Although that works, it means I have to store my private id_rsa with my image, which strikes me as a bad idea. What I'd much rather do is keep my secret files in some cloud storage like s3, and just pass in credentials as environment variables to be able to pull everything else down.
I know that I can pass environment variables in at docker run with the -e switch, but if I need some files at build time (like the id_rsa to perform a git clone), what can I do? Ideally I'd be able to pass environment variables to docker build, but that's not possible (I can't understand why).
So, ideas? What's the canonical/correct thing to do here? I can't be the first person with this issue.
I'll start with the easiest part, which I think is a common misconception:
Ideally I'd be able to pass environment variables to docker build, but that's not possible (I can't understand why).
A docker build is meant to be reproducible. Given the same context (the files under the same directory as the Dockerfile) the resulting image is the same. They are also meant to be simple. Both things together explain the absence of environment options or other conditionals.
Now, because the build needs to be reproducible, the execution of each command is cached. If you run the build twice, the git pull will only run the first time.
By your comment, this is not what you intend:
so on any new image build, we always want the newest version of the repo
To trigger a new build you need to either change the context or the Dockerfile.
The canonical way (I'm probably abusing the word, but this is how the automated builds work) is to include the Dockerfile in git.
This allows a simple workflow of git pull ; docker build ... and avoids the problem with storing your git credentials.
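In practice that workflow can be as short as the following two commands (the image tag is a placeholder):
git pull
docker build -t myimage .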
