Is there a way to tell Cargo to install and build all my dependencies, but not attempt to build my application?
I thought cargo install would do that, but it actually goes all the way to building my app too. I want to get to a state where cargo build would find all dependencies ready to use, but without touching the /src directory.
What I'm really trying to accomplish:
I'm trying to build a Docker image for a Rust application, where I'd like to do the following steps:
Build time (docker build .):
import a docker image with rust tooling installed
add my Cargo.toml and Cargo.lock files
download and build all dependencies
add my source directory to the image
build my source code
Run time (docker run ...):
run the application
I've tried the following Dockerfile, but the indicated step builds my application as well (which of course fails since the source directory isn't there yet):
FROM jimmycuadra/rust
ADD Cargo.toml /source
ADD Cargo.lock /source
RUN cargo install # <-- failure here
ADD src /source/src
RUN cargo build
ENTRYPOINT cargo run
The reason I want to separate the install-dependencies step from building my application is that if I don't change the dependencies, I want Docker to be able to use a cached image with all dependencies already installed and built. Thus, I can't ADD /src /source/src until after installing the dependencies, as that would invalidate the cached image whenever I change my own code.
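In other words, the shape I'm after is roughly this (cargo build-deps is a made-up command, purely to illustrate the layering I want):
FROM jimmycuadra/rust
ADD Cargo.toml /source
ADD Cargo.lock /source
RUN cargo build-deps   # <-- hypothetical: cached as long as Cargo.toml/Cargo.lock are unchanged
ADD src /source/src
RUN cargo build        # <-- only this layer should be rebuilt when my own code changes
ENTRYPOINT cargo run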
There is no native support for building just the dependencies in Cargo, as far as I know. There is an open issue for it. I wouldn't be surprised if you could submit something to Cargo to accomplish it though, or perhaps create a third-party Cargo addon. I've wanted this functionality for cargo doc as well, when my own code is too broken to compile ;-)
However, the Rust playground that I maintain does accomplish your end goal. There's a base Docker container that installs Rustup and copies in a Cargo.toml with all of the crates available for the playground. The build steps create a blank project (with a dummy src/lib.rs), then call cargo build and cargo build --release to compile the crates:
RUN cd / && \
    cargo new playground
WORKDIR /playground
ADD Cargo.toml /playground/Cargo.toml
RUN cargo build
RUN cargo build --release
RUN rm src/*.rs
All of the downloaded crates are stored in the Docker image's $HOME/.cargo directory, and all of the built crates are stored in the application's target/{debug,release} directories.
Later on, the real source files are copied into the container and cargo build / cargo run can be executed again, using the now-compiled crates.
If you were building an executable project, you'd want to copy in the Cargo.lock as well.
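For an application image, that later step might look something like this (a sketch, not the playground's actual Dockerfile; the paths and binary name are assumptions):
# Add the real sources on top of the pre-built dependencies
COPY Cargo.lock /playground/Cargo.lock
COPY src /playground/src
RUN cargo build --release
CMD ["./target/release/playground"]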
If you add a dummy main or lib file, you can use cargo build to just pull down and compile the dependencies. I'm currently using this solution for my Docker-based project:
COPY Cargo.toml .
RUN mkdir src \
    && echo "// dummy file" > src/lib.rs \
    && cargo build
I'm using volume mounts, so I'm done at this point. The host volumes come in and blow away the dummy file, and Cargo uses the cached dependencies when I go to build the source later. This solution works just as well if you want to add a COPY (or ADD) step later and use the cached dependencies, though.
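For the non-volume variant, the whole Dockerfile might look roughly like this (a sketch; the base image tag is an assumption):
FROM rust:1.37
WORKDIR /app
COPY Cargo.toml Cargo.lock ./
# Dummy target so `cargo build` only compiles the dependencies
RUN mkdir src \
    && echo "// dummy file" > src/lib.rs \
    && cargo build
# Replace the dummy with the real sources and build again
RUN rm -rf src
COPY src ./src
RUN cargo build
CMD ["cargo", "run"]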
Based on a GitHub comment
FROM rust:1.37
WORKDIR /usr/src
# Create blank project
RUN USER=root cargo new PROJ
# We want dependencies cached, so copy those first.
COPY Cargo.toml /usr/src/PROJ/
COPY Cargo.lock /usr/src/PROJ/
WORKDIR /usr/src/PROJ
# This is a dummy build to get the dependencies cached.
RUN cargo build --release
# Now copy in the rest of the sources
COPY MyPROJECT/src /usr/src/PROJ/src/
# This is the actual build.
RUN cargo build --release \
    && mv target/release/appname /bin \
    && rm -rf /usr/src/PROJ
WORKDIR /
EXPOSE 8888
CMD ["/bin/appname"]
The cargo-chef tool is designed to solve this problem. Here's an example from the README on how you can use it in the Dockerfile:
FROM lukemathwalker/cargo-chef as planner
WORKDIR app
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM lukemathwalker/cargo-chef as cacher
WORKDIR app
COPY --from=planner /app/recipe.json recipe.json
RUN cargo chef cook --release --recipe-path recipe.json
FROM rust as builder
WORKDIR app
COPY . .
# Copy over the cached dependencies
COPY --from=cacher /app/target target
COPY --from=cacher $CARGO_HOME $CARGO_HOME
RUN cargo build --release --bin app
FROM rust as runtime
WORKDIR app
COPY --from=builder /app/target/release/app /usr/local/bin
ENTRYPOINT ["/usr/local/bin/app"]
I just wanted to post this here so others will see it going forward. There's an experimental tool for Docker I've just started using called cargo-wharf (https://github.com/denzp/cargo-wharf/tree/master/cargo-wharf-frontend). It's a Docker BuildKit frontend that caches built cargo dependencies for you. If you only change one of your source files, that's the only thing that gets rebuilt when you call docker build. You use it by annotating your Cargo.toml file, then directing Docker to your Cargo.toml instead of a Dockerfile. Go check it out, it's exactly what I wanted. (I am in no way affiliated with the project.)
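I won't copy my whole setup here, but the build ends up being invoked along these lines (the annotations and syntax directive live in Cargo.toml per the project's README; the tag is whatever you pick):
# BuildKit must be enabled; Docker is pointed at Cargo.toml instead of a Dockerfile
DOCKER_BUILDKIT=1 docker build -f Cargo.toml -t myapp .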
It can be done via cargo init, cargo build, and cargo install. For example, for a project called foo, define the following Dockerfile:
FROM rust:slim-bullseye
# Build dependencies only.
RUN cargo init foo
COPY Cargo.toml foo/
RUN cargo build --release --manifest-path foo/Cargo.toml; \
    rm -rf foo
# Install `foo`.
COPY . foo/
RUN echo "// force Cargo cache invalidation" >> foo/src/main.rs; \
    cargo install --path foo
CMD ["foo"]
Here, cargo init creates the placeholder files that Cargo expects, cargo build builds the dependencies specified in Cargo.toml, and cargo install builds and installs the foo binary. For some reason, Docker kept building the default project created by cargo init foo; this is worked around above by forcing an update to main.rs, appending // force Cargo cache invalidation.
To avoid slow builds due to large build contexts and large layers, make sure that unimportant folders such as target are ignored via .dockerignore. For example, define the following .dockerignore:
**/*.lock
LICENSE
README.md
target
I want to add a feature (another geocoder) to https://github.com/opentripplanner/otp-react-redux/. The relevant code is pulled in from the https://github.com/opentripplanner/otp-ui/tree/master/packages/geocoder package.
Coming from the PHP world and Composer, in such cases I normally do:
composer install
rm -r vendor/foo/bar
composer install --prefer-source
cd vendor/foo/bar
git remote set-url origin <myforkURL>
git checkout main
Now I can easily edit that package in-place and make a pull request.
My question is: is a similar workflow possible for node packages using yarn?
I already tried
yarn add "#opentripplanner/geocoder#master"
but no .git folder appeared in otp-react-redux/node_modules/@opentripplanner or otp-react-redux/node_modules/@opentripplanner/geocoder.
Also, it looks like multiple packages are published from the @opentripplanner repo, which might complicate things.
I could try to simply edit the files in node_modules and then copy them to a manually checked-out git repository, but when running yarn start everything is overwritten again.
EDIT: As the packages come from a monorepo, I tried deleting all the @opentripplanner lines from package.json and adding:
yarn add opentripplanner/otp-ui#main
This now causes the build to fail.
I noticed that the base package.json requires different package versions from the monorepo, so it will not work to require the full main branch.
EDIT2: I found a clue here:
https://github.com/opentripplanner/otp-ui#development
but that also caused dependencies to not resolve properly.
EDIT3: yarn link actually looked promising:
cd ..
git clone https://github.com/opentripplanner/otp-ui
cd otp-ui/packages/geocoder
yarn link
Now in the main project code (otp-react-redux)
yarn link "#opentripplanner/geocoder"
This creates a symlink in the node_modules folder to the specific folder in the monorepo I have cloned.
Unfortunately the build does not work:
Module not found: Can't resolve '@opentripplanner/geocoder' in 'otp-react-redux/lib/actions'
I even tried to match the version used in the main project by checking out revision 1.2.1.
yarn link does the job!
cd ..
git clone https://github.com/opentripplanner/otp-ui
cd otp-ui
yarn
cd packages/geocoder
yarn link
Now in the main project code (otp-react-redux)
yarn link "#opentripplanner/geocoder"
This creates a symlink in the node_modules folder to the specific folder in the monorepo I have cloned.
To make the build work, the important part is that we run yarn in the monorepo first!
EDIT: Unfortunately the link process needs to be repeated for each of the @opentripplanner modules that require geocoder:
cd node_modules/@opentripplanner
$ find -name geocoder -type d
./trip-details/node_modules/@opentripplanner/geocoder
./vehicle-rental-overlay/node_modules/@opentripplanner/geocoder
./transitive-overlay/node_modules/@opentripplanner/geocoder
./endpoints-overlay/node_modules/@opentripplanner/geocoder
./zoom-based-markers/node_modules/@opentripplanner/geocoder
./trip-viewer-overlay/node_modules/@opentripplanner/geocoder
./trip-form/node_modules/@opentripplanner/geocoder
./transit-vehicle-overlay/node_modules/@opentripplanner/geocoder
./itinerary-body/node_modules/@opentripplanner/geocoder
./icons/node_modules/@opentripplanner/geocoder
./route-viewer-overlay/node_modules/@opentripplanner/geocoder
./printable-itinerary/node_modules/@opentripplanner/geocoder
./stop-viewer-overlay/node_modules/@opentripplanner/geocoder
./stops-overlay/node_modules/@opentripplanner/geocoder
./location-field/node_modules/@opentripplanner/geocoder
./park-and-ride-overlay/node_modules/@opentripplanner/geocoder
cd trip-details
yarn link "#opentripplanner/geocoder"
repeat for each of them until they are all links:
otp-react-redux$ find node_modules/ -name geocoder -type l
node_modules/@opentripplanner/trip-details/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/vehicle-rental-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/transitive-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/endpoints-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/zoom-based-markers/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/trip-viewer-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/trip-form/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/transit-vehicle-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/itinerary-body/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/icons/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/route-viewer-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/printable-itinerary/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/stop-viewer-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/stops-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/location-field/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/park-and-ride-overlay/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/base-map/node_modules/@opentripplanner/geocoder
node_modules/@opentripplanner/geocoder
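Rather than repeating that by hand, a small shell loop can run the link step in every package that has its own nested copy (a sketch, run from the otp-react-redux project root):
cd node_modules/@opentripplanner
for pkg in */; do
    # only link where a nested copy of geocoder exists
    if [ -d "$pkg/node_modules/@opentripplanner/geocoder" ]; then
        (cd "$pkg" && yarn link "@opentripplanner/geocoder")
    fi
done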
yalc seems to be a good solution for this kind of problem:
cd ~/projects/otp-ui/packages/itinerary-body
yarn tsc
yalc publish
cd ~/projects/otp-react-redux
yalc link @opentripplanner/itinerary-body
Now each time you change something in the package:
cd ~/projects/otp-ui/packages/itinerary-body
yarn tsc && yalc publish
cd ~/projects/otp-react-redux
yalc update
yarn start
I have a NextJS App that I want to build into a docker image and run as a container later. I'm using the Dockerfile from https://nextjs.org/docs/deployment#docker-image.
When I run docker build ., everything works fine until step 10/23:
yarn run v1.22.15
$ next build
info - Checking validity of types...
info - Creating an optimized production build...
Error: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/@next/swc-linux-x64-gnu/next-swc.linux-x64-gnu.node)
I found out that this is caused by SWC and Alpine, but does anyone know how to solve this?
Maybe this can help: https://github.com/vercel/next.js/issues/30713
RUN rm -r node_modules/@next/swc-linux-x64-gnu
Adding that and then running yarn install actually fixes the bug.
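If you're using the Dockerfile from the Next.js docs, that would go before the build step, roughly like this (a sketch; the surrounding lines are assumptions based on that example):
# ...after dependencies are installed, before the build:
# Work around the glibc-only SWC binary that fails to load on Alpine (musl)
RUN rm -r node_modules/@next/swc-linux-x64-gnu \
    && yarn install
RUN yarn build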
For us, some team members had older versions of npm, and that created problems in package-lock.json.
The solution is to delete node_modules and package-lock.json from the project and run npm install.
Note: if you are building a Docker image and your Dockerfile has a COPY package*.json ./ line, then the new package-lock.json has to be pushed to the repository from which the build happens.
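Concretely, something like this from the project root:
rm -rf node_modules package-lock.json
npm install
git add package-lock.json
git commit -m "Regenerate package-lock.json"
git push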
I have an issue with one dependency in my yarn.lock file. The issue is with ldapjs: the latest version has a bug with special characters in the user or password, so I want to freeze it at the latest working version, which is 1.0.2.
When I committed my code to the master branch, the build step for this project started to fail with the message in the title.
Here is my Dockerfile:
FROM repository/node-oracle:10.15.3
LABEL maintainer="Me"
RUN yarn cache clean
# Add Tini
ENV TINI_VERSION v0.18.0
ADD https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini /tini
RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]
WORKDIR /usr/src/auth
COPY . .
RUN yarn install --frozen-lockfile --non-interactive --silent
ENV PATH /usr/src/auth/node_modules/.bin:$PATH
EXPOSE 3000
CMD ["node", "./bin/www"]
Any workaround for how I can make this work?
Also, as extra info: I was able to run the pipeline with this step in a feature branch; the message only started appearing in the develop and master branches.
[UPDATE]
These are the dependencies, updated and frozen, in my yarn.lock file:
activedirectory@^0.7.2:
  version "0.7.2"
  resolved "https://registry.yarnpkg.com/activedirectory/-/activedirectory-0.7.2.tgz#19286d10c6b24a98cc906dc638256191686fa91f"
  integrity sha1-GShtEMaySpjMkG3GOCVhkWhvqR8=
  dependencies:
    async ">= 0.1.22"
    bunyan ">= 1.3.5"
    ldapjs "=1.0.2"
    underscore ">= 1.4.3"

ldapjs@1.0.2:
  version "1.0.2"
  resolved "https://registry.yarnpkg.com/ldapjs/-/ldapjs-1.0.2.tgz#346e040a95a936e90c47edd6ede5df257dd21ee6"
  integrity sha512-XzF2BEGeM/nenYDAJvkDMYovZ07fIGalrYD+suprSqUWPCWpoa+a4vWl5g8o/En85m6NHWBpirDFNClWLAd77w==
  dependencies:
    asn1 "0.2.1"
    assert-plus "0.1.5"
    bunyan "0.22.1"
    nopt "2.1.1"
    pooling "0.4.6"
  optionalDependencies:
    dtrace-provider "0.2.8"
I was stuck on the same error, and the issue was that my yarn.lock file was not up to date. I followed the following link and it fixed my issue.
Apparently, I just had to run yarn install to update my yarn.lock file and push it to the repository.
Just an update: after a few attempts I was finally able to do what I wanted. Removing the ^ from ldapjs and from activedirectory (which depends on the ldapjs library) did the job as expected.
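For reference, one way to get pinned (caret-free) entries into package.json is yarn add with --exact (assuming both packages are direct dependencies of the project):
yarn add --exact activedirectory@0.7.2
yarn add --exact ldapjs@1.0.2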
Sometimes the error occurs if yarn install is run from a folder that contains no yarn.lock file, for example when building inside a Docker image that contains separate frontend and backend folders.
Solution 1
In that case, go to the specific frontend folder that contains the package.json and yarn.lock files and run yarn install from there.
Solution 2
Run yarn add <package>, which will generate a yarn.lock file in the project base folder if the command is run from the base folder. Copy the contents of that file into the existing yarn.lock. This should solve the problem. See the documentation for yarn add.
I am trying to deploy my Go app with Alpine in Docker. It worked on my Mac, but I ran into issues going to production on CentOS 8.
Here is my Dockerfile:
FROM golang:alpine
RUN apk add --no-cache postgresql
RUN apk update && apk add --no-cache gcc && apk add --no-cache libc-dev && apk add --no-cache --update make
# Set the current working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. They will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the WORKDIR inside the container
COPY . .
# Build the Go app
RUN go build .
RUN rm -rf /usr/local/var/postgres/postmaster.pid
# The commands below run things like:
#   psql -c 'DROP DATABASE IF EXISTS prod'
#   psql -c 'CREATE USER prod'
RUN make setup
# Expose port 3000 or 8000 to the outside world
EXPOSE 3000
CMD ["make", "run" ]
Then I got this error:
psql: error: could not connect to server: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
In my make setup I run the migrations and create the user and database.
Can I also create a SUPERUSER in psql on that Alpine image?
Looking at the syntax above, is anything wrong, and how do I correct it? I have been stuck since yesterday.
Delete lines 8 through 20 of your original Dockerfile and add these instead.
If your folder structure is like this:
- directory
|
-> Dockerfile
-> go.mod
-> go.sum
-> go source files
# Copy everything (including go.mod and go.sum) into /app
COPY . /app
# Set the current working Directory inside the container
WORKDIR /app
RUN go mod download
RUN go build .
You cannot run database commands in a Dockerfile.
By analogy, consider the go generate command: you can embed special comments in your Go source code that ask the Go toolchain to run programs for you, typically to generate other source files. Say you put //go:generate psql ... in your source code and run go generate ... && go install. Now you run that compiled binary on a different system. Since you're not pointing at the same database any more, the database setup is lost.
In the same way, a Dockerfile produces a compiled artifact (in this case the Docker image) and it needs to run independently of its host environment. In your example you could docker push the image you built on MacOS to a registry, and docker run it from the CentOS host without rebuilding it (and that's probably better practice for a production system).
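For example (the registry name and tag are placeholders):
# On the Mac (or in CI): build once and push
docker build -t registry.example.com/myapp:1.0 .
docker push registry.example.com/myapp:1.0
# On the CentOS 8 host: run the already-built image
docker pull registry.example.com/myapp:1.0
docker run --rm -p 3000:3000 registry.example.com/myapp:1.0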
For the specific commands you show in the question, you could put them in a database container's /docker-entrypoint-initdb.d directory, or otherwise just run them once pointing at your database. For more general-purpose database setup you might look at running a database migration tool at application startup, either in your program's main() function or in a wrapper entrypoint script.
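For instance, with the official postgres image, any *.sql or *.sh files placed in /docker-entrypoint-initdb.d run once when the database is first initialized (the paths and password here are placeholders):
docker run -d \
    --name prod-db \
    -e POSTGRES_PASSWORD=example \
    -v "$PWD/initdb:/docker-entrypoint-initdb.d" \
    -p 5432:5432 \
    postgres:15-alpine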
I'm using GitLab CI to implement CI for my Node.js app. I'm already using artifacts to share dependencies between jobs; however, I would like to make it faster. Every time a pipeline starts, it installs the dependencies during the first job, and I'm thinking of preventing this by having all dependencies in a Docker image and passing that image to the test & production stages. However, I have been unable to do so. Apparently GitLab doesn't run the code inside my image's WORKDIR.
Following is my Dockerfile:
FROM node:6.13-alpine
WORKDIR /home/app
COPY package.json .
RUN npm install
CMD ["sh"]
And following is my gitlab-ci.yml:
test:
  image: azarboon/dependencies-test
  stage: test
  script:
    - pwd
    - npm run test
Looking at the logs, pwd results in /builds/anderson-martin/lambda-test, which is different from the defined WORKDIR, and the installed dependencies are not found. Do you have any recommendation for how I can Dockerize my dependencies and speed up the build stage?
Probably the easiest way to solve your issue is to symlink the node_modules folder from your base image into the GitLab CI workspace, like this:
test:
  image: azarboon/dependencies-test
  stage: test
  script:
    - ln -s /home/app/node_modules ./node_modules
    - npm run test
The syntax for symlinking is ln -s EXISTING_FILE_OR_DIRECTORY SYMLINK_NAME.
Please note that /home/app/ is the WORKDIR you're using in your base image.
GitLab also provides other functionality to share dependencies: on the one hand you have caching, and on the other, job artifacts.
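For completeness, the built-in cache approach looks roughly like this in .gitlab-ci.yml (a sketch, keyed on the lockfile so node_modules is reused until dependencies change; cache:key:files needs a reasonably recent GitLab):
cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/

test:
  stage: test
  script:
    - npm install
    - npm run test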