I have set up a GitLab runner in Docker on my local machine. It is all working fine: when the pipeline starts, the runner handles the workload.
But I would like to run jobs locally as well. For example, if you download the gitlab-runner executable, you can run a job locally:
$ gitlab-runner exec docker linting
But in my case the runner is started in Docker and I don't have the gitlab-runner executable. So my question is: is there a Docker equivalent for this?
gitlab-runner exec works as its own standalone command and runs the job in the context of where the executable is called -- it will not cause the job to be run on a registered runner. It is solely a local operation, so you must have the binary available.
You can run gitlab-runner exec inside of your runner container or a new container if you want.
For example (assuming Linux platform):
# mount the project directory into the container and make it the working
# directory; mount the host docker socket so the docker executor can run
docker run --rm \
  -v "$(pwd)":/workspace \
  --workdir /workspace \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --privileged \
  --entrypoint="gitlab-runner" \
  gitlab/gitlab-runner:latest exec docker MY_JOB
Or use docker exec to run the command on your existing container:
docker exec -it my-running-container-name gitlab-runner exec docker linting
Though, you will want to make sure the .gitlab-ci.yml file and project files are present in the container when you start it.
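For example, one way to start the runner container with the project already mounted (the container name and mount paths here are illustrative, not required values):
# "my-running-container-name" and /workspace are made-up examples; adjust to your setup
docker run -d --name my-running-container-name \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$(pwd)":/workspace \
  --workdir /workspace \
  gitlab/gitlab-runner:latest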
I have a docker file:
FROM debian:bullseye as MY-IMAGE
# #############################
# DO some customization as per need
CMD ["/bin/bash", "-l"]
I am building the image using docker build .
Case 1:
When I run the image without /bin/bash, the path does not get added inside the container:
$ docker run -it --rm --env PATH="/ADDITIONAL/PATH:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games" MY-IMAGE
my-image$ echo ${PATH}
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
Case 2:
If I run the image with /bin/bash, the path does get added to PATH inside the container:
$ docker run -it --rm --env PATH="/ADDITIONAL/PATH:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games" MY-IMAGE /bin/bash
my-image$ echo ${PATH}
/ADDITIONAL/PATH:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
May I please know how I can get the path added without passing `/bin/bash` when running the image?
I'm new to Docker and I'm having a hard time setting up the container the way I want. I have a Node.js app that can take two parameters when it starts. For example, I can use
node server.js 0 dev
or
node server.js 1 prod
to switch between production mode and dev mode and to determine whether it should turn the cluster on. Now I want to create a docker image that takes arguments to do the same thing, but so far the only thing I can do is adjust the Dockerfile to have the line
CMD [ "node", "server.js", "0", "dev"]
and
docker build -t me/app . to build the image.
Then docker run -p 9000:9000 -d me/app to run the container.
But if I want to switch to prod mode, I need to change the Dockerfile CMD to be
CMD [ "node", "server.js", "1", "prod"]
and I need to kill the old container listening on port 9000 and rebuild the image.
I wish I could have something like
docker run -p 9000:9000 environment=dev cluster=0 -d me/app
to start the container and run the node command with "environment" and "cluster" arguments, so I don't need to change the Dockerfile and rebuild the image any more. How can I accomplish this?
Make sure your Dockerfile declares an environment variable with ENV:
ENV environment default_env_value
ENV cluster default_cluster_value
A variable declared with the ENV <key> <value> form can then be overridden at runtime.
Then you can pass the environment variables with docker run. Note that each variable requires its own -e flag:
docker run -p 9000:9000 -e environment=dev -e cluster=0 -d me/app
Or you can set them through your compose file:
node:
  environment:
    - environment=dev
    - cluster=0
Your Dockerfile CMD can use that environment variable, but, as mentioned in issue 5509, you need to do so in a sh -c form:
CMD ["sh", "-c", "node server.js ${cluster} ${environment}"]
The explanation is that the shell is responsible for expanding environment variables, not Docker. When you use the JSON syntax, you're explicitly requesting that your command bypass the shell and be executed directly.
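A minimal illustration of the difference (FOO is just a placeholder variable name, and only the last CMD in a Dockerfile takes effect; the two lines are shown side by side for comparison only):
# exec form: prints the literal string "$FOO"; no shell, no expansion
CMD ["echo", "$FOO"]
# shell form equivalent: sh expands $FOO before echo runs
CMD ["sh", "-c", "echo $FOO"]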
Same idea with Builder RUN (applies to CMD as well):
Unlike the shell form, the exec form does not invoke a command shell.
This means that normal shell processing does not happen.
For example, RUN [ "echo", "$HOME" ] will not do variable substitution on $HOME. If you want shell processing then either use the shell form or execute a shell directly, for example: RUN [ "sh", "-c", "echo $HOME" ].
When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.
Another option is to use ENTRYPOINT to specify that node is the executable to run and CMD to provide the arguments. The docs cover this in the "Exec form ENTRYPOINT example" section.
Using this approach, your Dockerfile will look something like
FROM ...
ENTRYPOINT [ "node", "server.js" ]
CMD [ "0", "dev" ]
Running it in dev mode would use the same command:
docker run -p 9000:9000 -d me/app
and to run it in prod mode you would pass the parameters after the image name:
docker run -p 9000:9000 -d me/app 1 prod
You may want to omit CMD entirely and always pass in 0 dev or 1 prod as arguments to the run command. That way you don't accidentally start a prod container in dev or a dev container in prod.
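A sketch of that stricter variant, assuming a typical Node.js Dockerfile layout:
FROM node:latest
WORKDIR /app
COPY . .
ENTRYPOINT [ "node", "server.js" ]
# no CMD: docker run ... me/app <cluster> <environment> must always supply both arguments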
Option 1) Use ENV variable
Dockerfile
# we need to specify default values
ENV ENVIRONMENT=production
ENV CLUSTER=1
# shell form (no parameters array) is used here so the shell expands the variables
CMD node server.js ${CLUSTER} ${ENVIRONMENT}
Docker run
$ docker run -d -p 9000:9000 -e ENVIRONMENT=dev -e CLUSTER=0 me/app
Option 2) Pass arguments
Dockerfile
# use ENTRYPOINT in exec form (not shell form) so that arguments passed to
# docker run are appended; do not specify any default arguments
ENTRYPOINT ["node", "server.js"]
Docker run
Pass the arguments after the docker image name:
$ docker run -p 9000:9000 -d me/app 0 dev
The typical way to do this in Docker containers is to pass in environment variables:
docker run -p 9000:9000 -e NODE_ENV=dev -e CLUSTER=0 -d me/app
Going a bit off topic, build arguments exist to allow you to pass in arguments at build time that manifest as environment variables for use in your docker image build process:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
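On the Dockerfile side, a build argument is declared with ARG; here is a small sketch (APP_VERSION is a made-up example name):
# declare a build-time argument with a default value
ARG APP_VERSION=0.0.0
# optionally promote it to a runtime environment variable, since a plain ARG
# is only visible during the build, not in the running container
ENV APP_VERSION=${APP_VERSION}
You would then build with docker build --build-arg APP_VERSION=1.2.3 .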
Late joining the discussion. Here's a nifty trick you can use to set default command line parameters while also supporting overriding the default arguments with custom ones:
Step 1: In your Dockerfile, invoke your program like so:
ENV DEFAULT_ARGS "--some-default-flag=123 --foo --bar"
CMD ["/bin/bash", "-c", "./my-nifty-executable ${ARGS:-${DEFAULT_ARGS}}"]
Step 2: You can now invoke the docker image like so:
# this will invoke it with DEFAULT_ARGS
docker run mydockerimage
# but this will invoke the docker image with custom arguments
docker run --env ARGS="--alternative-args --and-then-some=123" mydockerimage
You can also adjust this technique to do much more complex argument-evaluation however you see fit. Bash supports many kinds of one-line constructs to help you towards that goal.
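For instance, a variation that appends extra flags to the defaults instead of replacing them (EXTRA_ARGS is an assumed variable name):
# DEFAULT_ARGS always apply; EXTRA_ARGS, if set via docker run --env EXTRA_ARGS=..., is appended
CMD ["/bin/bash", "-c", "./my-nifty-executable ${DEFAULT_ARGS} ${EXTRA_ARGS:-}"]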
Hope this technique helps some folks out there save a few hours of head-scratching.
Not sure if this helps, but I have used it this way and it worked like a charm:
CMD ["node", "--inspect=0.0.0.0:9229", "--max-old-space-size=256", "/home/api/index.js"]
I found this at docker-compose not setting environment variables with flask
docker-compose.yml
version: '2'
services:
  app:
    image: python:2.7
    environment:
      - BAR=FOO
    volumes:
      - ./app.py:/app.py
    command: python app.py
app.py
import os
print(os.environ["BAR"])
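Running docker-compose up should then print FOO (the value given to BAR) before the container exits, confirming the variable reached the app.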
I am using Vagrant to build a docker host, and I have a shell script which basically installs all required packages for the host. That script also builds and runs the containers.
Vagrantfile:
config.vm.provision :shell, :inline => "sudo /vagrant/bootstrap.sh"
Inside that script I run containers like
docker run -d ...
This works fine, but I have to ssh into the container and run make deploy to install all the stuff.
Is there any way I can run that make deploy from within my bootstrap.sh?
One way is to make it the entrypoint, but then it will run on every start.
I just want that, when I provision the host, the command runs inside some container and shows me output the way vagrant does for the host.
Use docker exec, see the doc:
http://docs.docker.com/reference/commandline/exec/
For example:
docker exec -it container_id make deploy
or
docker exec -it container_id bash
and then
make deploy
inside your container
How to set node ENV process.env.mysql-host with docker run?
Can I somehow do it like this: docker run --mysql-host:127.0.0.1 -p 80:80 -d myApp?
I am using FROM node:onbuild as the base image.
Node's process.env is an object containing the user environment. Docker's CLI allows you to set the environment variable for the container using the -e or --env options.
You can run
docker run --env mysql_host=127.0.0.1 -p 80:80 -d myApp
to pass mysql_host into the container.
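On the Node side the variable is then available on process.env; a minimal sketch of the relevant part of server.js (the fallback value is just an assumption):
// read the host passed via docker run --env mysql_host=...
const mysqlHost = process.env.mysql_host || '127.0.0.1';
console.log('connecting to MySQL at ' + mysqlHost);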
I don't know much about node, but I think you just need to do:
docker run -e mysql-host=127.0.0.1 -p 80:80 -d myApp
Note that this will look for mysql-host in the same container, not on the host, if that's what you're expecting. I think what you really want to do is:
$ docker run -d --name db mysql
...
$ docker run -d --link db:mysql-host -p 80:80 myApp
Which will run the myApp container linked to the db container and resolvable as "mysql-host" inside the myApp container with no need for environment variables.
You could also set the variable inside your Dockerfile. Note that a key containing a hyphen, like mysql-host, has to be read in Node as process.env['mysql-host'], since process.env.mysql-host is not valid property access:
FROM node:latest
WORKDIR /home/app
ADD . /home/app
ENV PORT 3000
ENV mysql-host 127.0.0.1
EXPOSE 3000