I am building a Docker container with a Node.js application, which is built from Meteor. The build uses a shell runner (`meteor build /opt/project/build/core --directory`), as this is all done in GitLab CI.
build:
  stage: build
  tags:
    - deploy
  before_script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - meteor npm install
    - meteor build /opt/project/build/core --directory
  script:
    - cd /opt/project/build/core/bundle
    - docker build -t $CI_REGISTRY_IMAGE:latest .
So the files of the application are now at /opt/project/build/core. Now I want to copy those files into another Docker image (project-e2e:latest).
I tried to do
docker cp /opt/project/build/core/bundle project-e2e:latest/opt/project/build/core
But this gives me the error
Error response from daemon: No such container: project-e2e
But I can see that the container is running:
$ docker ps
CONTAINER ID   IMAGE                COMMAND       CREATED        STATUS        PORTS   NAMES
a238132e37a2   project-e2e:latest   "/bin/bash"   14 hours ago   Up 14 hours           clever_kirch
Maybe the problem is that I'm trying to copy out of the shell runner's Docker image, and the target project-e2e is 'outside'?
If you want to get the generated files out of the container, you can copy them using docker cp:
docker cp clever_kirch:/opt/project/build/core/your_file <your_local_path>
Basically the pattern is:
docker cp <source> <target>
If the source/target is a container, you have to use:
<container_name>:<path_inside_container>
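Note that the name used here is the container name from the NAMES column of docker ps (clever_kirch in your output), not the image name. So copying the bundle from the host into the running container would look something like this (the destination path inside the container is an assumption):

```shell
# Copy the built bundle from the host into the running container.
# 'clever_kirch' is the auto-generated container name shown by `docker ps`;
# the target directory inside the container is assumed here.
docker cp /opt/project/build/core/bundle clever_kirch:/opt/project/build/core
```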
Here is my Dockerfile, a very simple one:
FROM node:19-alpine
COPY package.json /app/
COPY src /app/
WORKDIR /app
RUN npm install
RUN ls
CMD [ "node", "src/index.js" ]   # this line has the error: since COPY src /app/ copies the contents of src into /app, it should be "index.js"
When I execute the docker build command it shows the image ID; however, docker ps does not show any containers, which means the container was not launched successfully and there is a problem with the Dockerfile.
The error is only reported if I try to run the container from the Docker client GUI.
How can I view the errors from the command line when the Dockerfile has a problem that wasn't reported by the docker build command?
docker ps only shows running containers.
If you do docker ps -a you should see the container and that it has the 'exited' status.
You can then do docker logs <container name> to see any error messages.
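Put together, a typical debugging sequence might look like this (substitute the container name that docker ps -a reports):

```shell
# List all containers, including ones that exited immediately
docker ps -a
# Show the output (and any error messages) of the failed container
docker logs <container name>
# The exit code can also hint at what went wrong
docker inspect --format '{{.State.ExitCode}}' <container name>
```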
You first need to be clear on these two points:
1. A Dockerfile is used to build an image.
Use this command to build an image (you can also refer here):
docker build -t node:test --progress=plain .
If it runs successfully, you will get the image ID.
2. You then also need to start a container based on this image:
docker run -itd --name=node-test node:test
docker ps | grep node-test
Also check whether this container is based on your image.
I'm new to Bitbucket Pipelines and am trying to use it the way I use GitLab CI.
Now I'm facing an issue while trying to build a Docker image inside a Docker container using dind.
error during connect: Post "http://docker:2375/v1.24/auth": dial tcp: lookup docker on 100.100.2.136:53: no such host
The error above appears, and from some research I believe it comes from the Docker daemon.
Atlassian has stated they have no intention of supporting privileged mode, so I'm considering other options.
bitbucket-pipelines.yml
definitions:
  services:
    docker:  # can only be used with a self-hosted runner
      image: docker:23.0.0-dind

pipelines:
  default:
    - step:
        name: 'Login'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - echo $ACR_REGISTRY_PASSWORD | docker login -u $ACR_REGISTRY_USERNAME registry-intl.ap-southeast-1.aliyuncs.com --password-stdin
    - step:
        name: 'Build'
        runs-on:
          - 'self.hosted'
        services:
          - docker
        script:
          - docker build -t $ACR_REGISTRY:latest .
          - docker tag $(docker images | awk '{print $1}' | awk 'NR==2') $ACR_REGISTRY:$CI_PIPELINE_ID
          - docker push $ACR_REGISTRY:$CI_PIPELINE_ID
    - step:
Dockerfile
FROM node:14.17.0
RUN mkdir /app
#working DIR
WORKDIR /app
# Copy Package Json File
COPY ["package.json","./"]
# Expose port 80
EXPOSE 80
# Install git
RUN npm install git
# Install Files
RUN npm install
# Copy the remaining sources code
COPY . .
# Run prisma db
RUN npx prisma db pull
# Run prisma client
RUN npm i @prisma/client
# Build
RUN npm run build
CMD [ "npm","run","dev","node","build/server.ts"]
I have a self-managed GitLab instance, and one of my projects has a folder containing 3 sub-directories; each of these sub-directories has a Dockerfile.
All my Dockerfiles have a grep command to get the latest version from the CHANGELOG.md, which is located in the root directory.
I tried something like this to go back up 2 levels, but it doesn't work (grep: ../../CHANGELOG.md: No such file or directory):
Dockerfile:
grep -m 1 '^## v.*$' "../../CHANGELOG.md"
example:
Link:
https://mygitlab/project/images/myproject
repo content:
.
├── build
│   ├── image1
│   ├── image2
│   └── image3
└── CHANGELOG.md
gitlab-ci.yaml
script:
- docker build --network host -t $VAL_IM ./build/image1
- docker push $VAL_IM
The issue is happening when I build the images.
docker build --network host -t $VAL_IM ./build/image1
Here, you have set the build context to ./build/image1 -- builds cannot access directories or files outside of the build context. Also keep in mind that if you use RUN in a docker build, it can only access files that have already been copied into the image (and as stated, you can't copy files from outside the build context!), so this doesn't quite make sense as stated.
If you're committed to this versioning strategy, what you probably want to do is perform your grep command as part of your GitLab job before calling docker build and pass in the version as a build arg.
In your Dockerfile, add an ARG:
FROM # ...
ARG version
# now you can use the version in the build... eg:
LABEL com.example.version="$version"
RUN echo version is "$version"
Then your GitLab job might be like:
script:
- version=$(grep -m 1 '^## v.*$' "./CHANGELOG.md")
- docker build --build-arg version="${version}" --network host -t $VAL_IM ./build/image1
- docker push $VAL_IM
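One detail worth noting: the grep above captures the entire heading line (e.g. ## v1.2.3), so the build arg will contain the ## v prefix too. If only the bare version number is wanted, it can be stripped with sed (a sketch, assuming the heading format shown):

```shell
# Extract the first "## vX.Y.Z" heading from the changelog and strip
# the prefix, leaving just the version number for the build arg.
version=$(grep -m 1 '^## v.*$' "./CHANGELOG.md" | sed 's/^## v//')
echo "$version"
```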
I installed gitlab_runner.exe and Docker Desktop on Windows 10 and am trying to execute the following from .gitlab-ci.yml:
.docker-build:
  image: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:19.03.12
  services:
    - name: ${CI_DEPENDENCY_PROXY_GROUP_IMAGE_PREFIX}/docker:19.03.12-dind
      alias: docker
  before_script:
    - docker info
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$CI_PIPELINE_ID -t $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$TAG -f $DOCKER_FILE $DOCKER_PATH
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$TAG
    - docker push $CI_REGISTRY/$CI_PROJECT_PATH/$IMAGE_NAME:$CI_PIPELINE_ID
As I am running locally, the variable CI_REGISTRY is not set. I tried the following, but nothing worked:
1. gitlab-runner-windows-amd64.exe exec shell --env "CI_REGISTRY=gitco.com:4004" .docker-build
2. set CI_REGISTRY=gitco.com:4004 from Windows command prompt
3. Tried setting the variable within .gitlab-ci.yml
No matter what I try, it does not recognize the CI_REGISTRY value and errors as follows:
Error response from daemon: Get https://$CI_REGISTRY/v2/: dial tcp: lookup $CI_REGISTRY: no such host
I googled but was unable to find a relevant link for this issue. Any help is highly appreciated.
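One observation that may explain all three failed attempts: the error output contains the literal text $CI_REGISTRY, which suggests the shell never expanded the variable at all. A shell runner on Windows typically executes scripts under cmd.exe or PowerShell, neither of which understands POSIX $VAR syntax, so (as an assumption about which shell the runner uses) the login command would need the platform's own variable syntax:

```shell
# POSIX shells (sh/bash on Linux runners) expand $VAR:
docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
# cmd.exe expands %VAR% instead:
#   docker login -u %CI_REGISTRY_USER% -p %CI_REGISTRY_PASSWORD% %CI_REGISTRY%
# PowerShell expands $env:VAR:
#   docker login -u $env:CI_REGISTRY_USER -p $env:CI_REGISTRY_PASSWORD $env:CI_REGISTRY
```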
Summary
docker run doesn't seem to build a container (but it also doesn't throw an error) despite docker build successfully building the container image.
Input and Output
1. Successful docker image creation..
$ docker build -t minitwitter:latest .
...
Successfully built da191988e0db
Successfully tagged minitwitter:latest
$ docker images
REPOSITORY    TAG          IMAGE ID       CREATED         SIZE
minitwitter   latest       da191988e0db   6 seconds ago   173MB
python        3.7-alpine   b11d2a09763f   9 days ago      98.8MB
2. ..and docker run completes without error..
$ docker run --name minitwitter -d -p 8000:5000 --rm minitwitter:latest
e8835f1b4c72c8e1a8736589c74d56ee2d12ec7bcfb4695531759fb1c2cf0e48
3. ..but docker container doesn't seem to exist.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
And navigating to the port where the app should be exposed, http://localhost:8000, returns the connection error ERR_CONNECTION_REFUSED.
Dockerfile, boot.sh
The Dockerfile and boot.sh files are pretty simple, I think:
Dockerfile
FROM python:3.7-alpine
RUN adduser -D minitwitter
WORKDIR /home/minitwitter
COPY requirements.txt requirements.txt
RUN python -m venv env
RUN env/bin/pip install -r requirements.txt
RUN env/bin/pip install gunicorn
COPY app app
COPY migrations migrations
COPY minitwitter.py config.py boot.sh ./
RUN chmod a+x boot.sh
ENV FLASK_APP minitwitter.py
RUN chown -R minitwitter:minitwitter ./
USER minitwitter
EXPOSE 5000
ENTRYPOINT ["./boot.sh"]
boot.sh
# BOOTS A DOCKER CONTAINER
#!/bin/sh
source env/bin/activate
flask db upgrade
exec gunicorn -b :5000 --access-logfile - --error-logfile - minitwitter:app
Place the 'shebang' -- #!/bin/sh -- on the first line of the boot.sh shell script.
How I found this answer: This blog post which refers to this Stackoverflow post.
The problem: the original script has a comment on the first line and the shebang on the second line.
Note: The title of the 'Question' is misleading: a docker container was built. The container, however, was short-lived, and since I used the --rm option in the docker run command, the container was deleted after it terminated within 2 seconds; this is why it didn't appear in the docker ps -a output.
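A quick way to see why the position matters: the kernel only treats a file as a script when its very first two bytes are the #! magic, so a small sketch shows the difference between the broken and fixed layouts:

```shell
# Write two variants of a script: shebang on line 2 (broken, as in the
# original boot.sh) vs. shebang on line 1 (fixed).
printf '# a comment first\n#!/bin/sh\necho hi\n' > wrong.sh
printf '#!/bin/sh\n# a comment second\necho hi\n' > right.sh
# Only right.sh starts with the magic bytes the kernel looks for:
head -c 2 wrong.sh   # -> '# ' (not recognized as an interpreter line)
head -c 2 right.sh   # -> '#!'
```

In Docker's exec-form ENTRYPOINT there is no shell fallback, so a script without a first-line shebang fails outright instead of being handed to /bin/sh.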