Unable to start node app with shell script - node.js

I created a Node.js Docker image.
Using CMD node myapp.js at the end of my Dockerfile, it starts.
But when I use CMD /root/start.sh, it fails.
This is what my start.sh looks like:
#!/bin/bash
node myapp.js
And here are the important lines of my Dockerfile:
FROM debian:latest
COPY config/start.sh /root/start.sh
RUN chmod +x /root/start.sh
WORKDIR /my/app/directory
RUN apt-get install -y wget && \
    wget https://nodejs.org/dist/latest-v5.x/node-v5.12.0-linux-x64.tar.gz && \
    tar -C /usr/local --strip-components 1 -xzf node-v5.12.0-linux-x64.tar.gz && \
    rm -f node-v5.12.0-linux-x64.tar.gz && \
    ln -s /usr/bin/nodejs /usr/bin/node
# works:
CMD node myapp.js
# doesn't work:
CMD /root/start.sh
Using docker logs I get: standard_init_linux.go:175: exec user process caused "no such file or directory"
But I don't understand, because if I add RUN ls /root in my Dockerfile, I can see the file exists.
I also tried with full paths in my script:
#!/bin/bash
/usr/bin/node /my/app/directory/myapp.js
but nothing changed. So what can be the problem?

Use docker run --entrypoint="/bin/bash" -i your_image.
What you used is the shell form of the Dockerfile CMD instruction. As described in the docs, the shell form runs your command with /bin/sh, not with the /bin/bash your start.sh expects on line 1.
Or try the exec form instead, i.e. CMD ["/root/start.sh"].
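For illustration, a minimal sketch of the difference (using the asker's paths):
# shell form: Docker wraps this as /bin/sh -c "/root/start.sh"
CMD /root/start.sh
# exec form: Docker runs the script directly, so its #!/bin/bash shebang is honored
CMD ["/root/start.sh"]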

The most common error I've seen is creating start.sh on a Windows system and saving the file either with a different character encoding or with Windows line endings. /bin/bash^M is not the same as /bin/bash, but you won't see that carriage return on Windows. You also want to save the file in ASCII encoding, not one of the multi-byte UTF encodings.
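If you suspect Windows line endings, a quick check and fix on a Linux machine might look like this (a sketch; dos2unix may need to be installed first):
cat -A start.sh           # CRLF endings show up as a trailing ^M$ on each line
sed -i 's/\r$//' start.sh # strip the carriage returns in place
# or equivalently: dos2unix start.sh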

Related

How can I solve this error, b'/bin/bash: line 1: nerdctl: command not found\n'?

I have written a Python script that runs nerdctl shell commands using subprocess:
res = subprocess.run(
    f"nerdctl --host '/host/run/containerd/containerd.sock' --namespace k8s.io commit {container} {image}",
    shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, executable='/bin/bash')
I'm running this Python script inside an Ubuntu container. When I sh into the container and run the script by passing arguments
python3 run_script.py b3425e7a0d1e image1
it executes properly, but when I run it using debug mode
kubectl debug node/pool-93oi9uqaq-mfs8b -it --image=registry.digitalocean.com/test-registry-1/nerdctl@sha256:56b2e5690e21a67046787e13bb690b3898a4007978187800dfedd5c56d45c7b2 -- python3 run_script.py b3425e7a0d1e image1
I'm getting the error
b'/bin/bash: line 1: nerdctl: command not found\n'
Can someone help/suggest where it is going wrong?
run_script.py
import subprocess
import sys
container = sys.argv[1]
image = sys.argv[2]
res = subprocess.run(
    f"nerdctl --host '/host/run/containerd/containerd.sock' --namespace k8s.io commit {container} {image}",
    shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, executable='/bin/bash')
print(res)
Dockerfile
FROM ubuntu:latest
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
LABEL version="0.1.0"
RUN apt-get -y update
RUN apt-get install wget curl -y
RUN wget -q "https://github.com/containerd/nerdctl/releases/download/v1.0.0/nerdctl-full-1.0.0-linux-amd64.tar.gz" -O /tmp/nerdctl.tar.gz
RUN mkdir -p ~/.local/bin
RUN tar -C ~/.local/bin/ -xzf /tmp/nerdctl.tar.gz --strip-components 1 bin/nerdctl
RUN echo -e '\nexport PATH="${PATH}:~/.local/bin"' >> ~/.bashrc
RUN source ~/.bashrc
Mechanically: the binary you're installing isn't in $PATH anywhere. You unpack it into (probably) /root/.local/bin in the container filesystem, but never add that directory to $PATH. The final RUN source line has no effect, since each RUN command runs in a new shell (and technically a new container), so any changes it makes are lost immediately. The preceding line tries to change a shell dotfile, but most ways of running things in Docker don't read shell dotfiles at all.
The easiest solution here is to unpack the binary into a directory that's already in $PATH, like /usr/local/bin.
FROM ubuntu:latest
LABEL version="0.1.0"
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive \
    apt-get install --no-install-recommends --assume-yes \
      wget
RUN wget -q "https://github.com/containerd/nerdctl/releases/download/v1.0.0/nerdctl-full-1.0.0-linux-amd64.tar.gz" -O /tmp/nerdctl.tar.gz \
 && tar -C /usr/local -xzf /tmp/nerdctl.tar.gz bin/nerdctl \
 && rm /tmp/nerdctl.tar.gz
WORKDIR /app
...
CMD ["./run_script.py"]
You'll have a second, bigger problem running this, though. A container doesn't normally have access to the host's container runtime to be able to manipulate containers. In standard Docker you can trivially root the host system if you can launch a container; it's possible to mount the Docker socket into a container, but it does require thinking hard about the security implications.
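For reference, mounting the socket in plain Docker looks like this (a sketch; my-image stands in for an image with a Docker CLI installed). Again, this effectively gives the container root on the host:
docker run -v /var/run/docker.sock:/var/run/docker.sock my-image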
Your question has several hints at Kubernetes and I'd expect a responsible cluster administrator to make it hard-to-impossible to bypass the cluster container runtime and potentially compromise nodes this way. If you're using Kubernetes you probably can't access the host container runtime at all, whether it's Docker proper or something else.
Philosophically, it looks like you're trying to script a commit command. Using commit at all is almost never a best practice. Again, there are several practical problems with it in Kubernetes (which replica would you be committing? how would you save the resulting image? how would you reuse it?), but having an image you can't recreate from source can lead to problems later, for example around taking security updates.

docker RUN mkdir does not work when the folder exists in the previous image

The only difference between them is that the "dev" folder exists in the centos image.
Check the comment in this piece of code (seen while executing docker build); I'd appreciate it if anyone can explain why:
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
# below works well!
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script
# fails with: /bin/sh: line 0: cd: dev/script: No such file or directory
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
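You can see this with two consecutive RUN instructions (a sketch; the second one fails and stops the build):
RUN mkdir -p /dev/script && ls -d /dev/script   # succeeds within this one RUN
RUN ls -d /dev/script                           # fails: /dev has been recreated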
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. This is exactly the same thing as running sh -c "cd /some/directory" on your local system; while the cd is successful, the cd only affects the process running the cd command, and has no effect on the parent process or subsequent commands.
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then at runtime, via your CMD or ENTRYPOINT, copy it into the appropriate location in /dev.
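A sketch of that approach (my-program stands in for whatever your container actually runs):
# Dockerfile: stage the file somewhere that persists in the image
COPY test.txt /docker/test.txt
# at container start, copy the staged file into /dev, then start the main process
CMD cp /docker/test.txt /dev/test.txt && exec my-program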

How to run environment initialization shell script from Dockerfile

I am trying to build an API, wrapped in a Docker image, that serves an OpenVINO model. How do I run setupvars.sh from the Dockerfile itself so that my application can access the environment it sets?
I have tried running the script using RUN, e.g. RUN /bin/bash setupvars.sh
or RUN ./setupvars.sh. However, neither works, and I get ModuleNotFoundError: No module named 'openvino'.
RUN $INSTALL_DIR/install_dependencies/install_openvino_dependencies.sh
RUN cd /opt/intel/openvino/deployment_tools/model_optimizer/install_prerequisites && sudo ./install_prerequisites_tf.sh
COPY . /app
WORKDIR /app
RUN apt autoremove -y && \
    rm -rf /openvino /var/lib/apt/lists/*
RUN /bin/bash -c "source $INSTALL_DIR/bin/setupvars.sh"
RUN echo "source $INSTALL_DIR/bin/setupvars.sh" >> /root/.bashrc
CMD ["/bin/bash"]
RUN python3 -m pip install opencv-python
RUN python3 test.py
I want OpenVINO to be accessible to the gunicorn application that will serve the model in the Docker image.
The following commands work for me:
ARG OPENVINO_DIR=/opt/intel/computer_vision_sdk
# Unzip the OpenVINO installer
RUN cd ${APP_DIR} && tar -xvzf l_openvino_toolkit*
# installing OpenVINO dependencies
RUN cd ${APP_DIR}/l_openvino_toolkit* && \
    ./install_cv_sdk_dependencies.sh
# installing OpenVINO itself
RUN cd ${APP_DIR}/l_openvino_toolkit* && \
    sed -i 's/decline/accept/g' silent.cfg && \
    ./install.sh --silent silent.cfg
# Setup the OpenVINO environment
RUN /bin/bash -c "source ${OPENVINO_DIR}/bin/setupvars.sh"
You need to re-run it every time you start the container, because those variables are only for the session.
Option 1:
Run your application something like this:
CMD /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && python test.py"
Option 2 (not tested):
Add the source command to your .bashrc so it will be run every time on startup
# Assuming running as root
RUN echo "/bin/bash -c 'source /opt/intel/openvino/bin/setupvars.sh'" >> ~root/.bashrc
CMD python test.py
For the rest of the Dockerfile, there is a guide here (also not tested, and it doesn't cover the above):
https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_docker_linux.html
As mentioned in the two previous answers, the setupvars.sh script sets the environment variables required by OpenVINO.
But rather than running this every time, you can add the variables to your Dockerfile. While writing your Dockerfile run:
CMD /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && printenv"
This will give you the values that the environment variables are set to. You might also want to run printenv without setting the OpenVINO variables:
CMD /bin/bash -c "printenv"
Comparing the two outputs will let you figure out exactly what the setupvars.sh script is setting.
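One way to capture and compare the two outputs (a sketch; my-image stands in for your built image):
docker run --rm my-image /bin/bash -c "printenv | sort" > plain.txt
docker run --rm my-image /bin/bash -c "source /opt/intel/openvino/bin/setupvars.sh && printenv | sort" > openvino.txt
diff plain.txt openvino.txt   # lines marked with > are set by setupvars.sh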
Once you know the values set by the script, you can set these as part of the Dockerfile using the ENV instruction. I wouldn't copy this because it's likely to be specific to your setup, but in my case, this ended up looking like:
ENV PATH=/opt/intel/openvino/deployment_tools/model_optimizer:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ENV LD_LIBRARY_PATH=/opt/intel/openvino/opencv/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64
ENV INTEL_CVSDK_DIR=/opt/intel/openvino
ENV OpenCV_DIR=/opt/intel/openvino/opencv/cmake
ENV TBB_DIR=/opt/intel/openvino/deployment_tools/inference_engine/external/tbb/cmake
# The next one will be whatever your working directory is
ENV PWD=/workspace
ENV InferenceEngine_DIR=/opt/intel/openvino/deployment_tools/inference_engine/share
ENV ngraph_DIR=/opt/intel/openvino/deployment_tools/ngraph/cmake
ENV SHLVL=1
ENV PYTHONPATH=/opt/intel/openvino/python/python3.6:/opt/intel/openvino/python/python3:/opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit:/opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker:/opt/intel/openvino/deployment_tools/model_optimizer
ENV HDDL_INSTALL_DIR=/opt/intel/openvino/deployment_tools/inference_engine/external/hddl
ENV _=/usr/bin/printenv

Sourcing ("dotting") shell script from Docker [duplicate]

I have a Dockerfile that I am putting together to install a vanilla python environment (into which I will be installing an app, but at a later date).
FROM ubuntu:12.04
# required to build certain python libraries
RUN apt-get install python-dev -y
# install pip - canonical installation instructions from pip-installer.org
# http://www.pip-installer.org/en/latest/installing.html
ADD https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py /tmp/ez_setup.py
ADD https://raw.github.com/pypa/pip/master/contrib/get-pip.py /tmp/get-pip.py
RUN python /tmp/ez_setup.py
RUN python /tmp/get-pip.py
RUN pip install --upgrade pip
# install and configure virtualenv
RUN pip install virtualenv
RUN pip install virtualenvwrapper
ENV WORKON_HOME ~/.virtualenvs
RUN mkdir -p $WORKON_HOME
RUN source /usr/local/bin/virtualenvwrapper.sh
The build runs ok until the last line, where I get the following exception:
[previous steps 1-9 removed for clarity]
...
Successfully installed virtualenvwrapper virtualenv-clone stevedore
Cleaning up...
---> 1fc253a8f860
Step 10 : ENV WORKON_HOME ~/.virtualenvs
---> Running in 8b0145d2c80d
---> 0f91a5d96013
Step 11 : RUN mkdir -p $WORKON_HOME
---> Running in 9d2552712ddf
---> 3a87364c7b45
Step 12 : RUN source /usr/local/bin/virtualenvwrapper.sh
---> Running in c13a187261ec
/bin/sh: 1: source: not found
If I ls into that directory (just to test that the previous steps were committed) I can see that the files exist as expected:
$ docker run 3a87 ls /usr/local/bin
easy_install
easy_install-2.7
pip
pip-2.7
virtualenv
virtualenv-2.7
virtualenv-clone
virtualenvwrapper.sh
virtualenvwrapper_lazy.sh
If I try just running the source command I get the same 'not found' error as above. If I RUN an interactive shell session however, source does work:
$ docker run 3a87 bash
source
bash: line 1: source: filename argument required
source: usage: source filename [arguments]
I can run the script from here, and then happily access workon, mkvirtualenv etc.
I've done some digging, and initially it looked as if the problem might lie in the difference between bash as the Ubuntu login shell and dash as the Ubuntu system shell, with dash not supporting the source command.
However, the answer to this appears to be to use '.' instead of source, but this just causes the Docker runtime to blow up with a Go panic exception.
What is the best way to run a shell script from a Dockerfile RUN instruction to get around this (I am running off the default base image for Ubuntu 12.04 LTS)?
Original Answer
FROM ubuntu:14.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
This should work for every Ubuntu docker base image. I generally add this line for every Dockerfile I write.
Edit by a concerned bystander
If you want to get the effect of "use bash instead of sh throughout this entire Dockerfile", without altering and possibly damaging* the OS inside the container, you can just tell Docker your intention. That is done like so:
SHELL ["/bin/bash", "-c"]
* The possible damage is that many scripts in Linux (on a fresh Ubuntu install, grep -rHInE '/bin/sh' / returns over 2700 results) expect a fully POSIX shell at /bin/sh. The bash shell isn't just POSIX plus extra builtins. There are builtins (and more) that behave entirely differently than those in POSIX. I FULLY support avoiding POSIX (and the fallacy that any script you didn't test on another shell is going to work because you think you avoided bashisms) and just using bashisms. But you do that with a proper shebang in your script. Not by pulling the POSIX shell out from under the entire OS. (Unless you have time to verify all 2700-plus scripts that come with Linux, plus all those in any packages you install.)
More detail in this answer below. https://stackoverflow.com/a/45087082/117471
The default shell for the RUN instruction is ["/bin/sh", "-c"].
RUN "source file" # translates to: RUN /bin/sh -c "source file"
Using SHELL instruction, you can change default shell for subsequent RUN instructions in Dockerfile:
SHELL ["/bin/bash", "-c"]
Now the default shell has changed, and you don't need to define it explicitly in every RUN instruction:
RUN source file # now translates to: RUN /bin/bash -c "source file"
Additional Note: You could also add --login option which would start a login shell. This means ~/.bashrc for example would be read and you don't need to source it explicitly before your command
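For example (a sketch):
SHELL ["/bin/bash", "--login", "-c"]
# each subsequent RUN now starts a login shell, so the startup files
# (e.g. /etc/profile and ~/.profile, which typically sources ~/.bashrc) are read first
RUN echo "$PATH"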
The simplest way is to use the dot operator in place of source, which is the sh equivalent of the bash source command:
Instead of:
RUN source /usr/local/bin/virtualenvwrapper.sh
Use:
RUN . /usr/local/bin/virtualenvwrapper.sh
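You can see the difference on Debian/Ubuntu, where /bin/sh is dash (a quick check):
$ sh -c 'source /dev/null'
sh: 1: source: not found
$ sh -c '. /dev/null' && echo ok
ok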
If you are using Docker 1.12 or newer, just use SHELL!
Short Answer:
general:
SHELL ["/bin/bash", "-c"]
for python virtualenv:
SHELL ["/bin/bash", "-c", "source /usr/local/bin/virtualenvwrapper.sh"]
Long Answer:
from https://docs.docker.com/engine/reference/builder/#shell
SHELL ["executable", "parameters"]
The SHELL instruction allows the default shell used for the shell form
of commands to be overridden. The default shell on Linux is
["/bin/sh", "-c"], and on Windows is ["cmd", "/S", "/C"]. The SHELL
instruction must be written in JSON form in a Dockerfile.
The SHELL instruction is particularly useful on Windows where there
are two commonly used and quite different native shells: cmd and
powershell, as well as alternate shells available including sh.
The SHELL instruction can appear multiple times. Each SHELL
instruction overrides all previous SHELL instructions, and affects all
subsequent instructions. For example:
FROM microsoft/windowsservercore
# Executed as cmd /S /C echo default
RUN echo default
# Executed as cmd /S /C powershell -command Write-Host default
RUN powershell -command Write-Host default
# Executed as powershell -command Write-Host hello
SHELL ["powershell", "-command"]
RUN Write-Host hello
# Executed as cmd /S /C echo hello
SHELL ["cmd", "/S"", "/C"]
RUN echo hello
The following instructions can be affected by the SHELL instruction
when the shell form of them is used in a Dockerfile: RUN, CMD and
ENTRYPOINT.
The following example is a common pattern found on Windows which can
be streamlined by using the SHELL instruction:
...
RUN powershell -command Execute-MyCmdlet -param1 "c:\foo.txt"
...
The command invoked by docker will be:
cmd /S /C powershell -command Execute-MyCmdlet -param1 "c:\foo.txt"
This is inefficient for two reasons. First, there is an un-necessary
cmd.exe command processor (aka shell) being invoked. Second, each RUN
instruction in the shell form requires an extra powershell -command
prefixing the command.
To make this more efficient, one of two mechanisms can be employed.
One is to use the JSON form of the RUN command such as:
...
RUN ["powershell", "-command", "Execute-MyCmdlet", "-param1 \"c:\\foo.txt\""]
...
While the JSON form is unambiguous and does not use the un-necessary
cmd.exe, it does require more verbosity through double-quoting and
escaping. The alternate mechanism is to use the SHELL instruction and
the shell form, making a more natural syntax for Windows users,
especially when combined with the escape parser directive:
# escape=`
FROM microsoft/nanoserver
SHELL ["powershell","-command"]
RUN New-Item -ItemType Directory C:\Example
ADD Execute-MyCmdlet.ps1 c:\example\
RUN c:\example\Execute-MyCmdlet -sample 'hello world'
Resulting in:
PS E:\docker\build\shell> docker build -t shell .
Sending build context to Docker daemon 4.096 kB
Step 1/5 : FROM microsoft/nanoserver
---> 22738ff49c6d
Step 2/5 : SHELL powershell -command
---> Running in 6fcdb6855ae2
---> 6331462d4300
Removing intermediate container 6fcdb6855ae2
Step 3/5 : RUN New-Item -ItemType Directory C:\Example
---> Running in d0eef8386e97
Directory: C:\
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 10/28/2016 11:26 AM Example
---> 3f2fbf1395d9
Removing intermediate container d0eef8386e97
Step 4/5 : ADD Execute-MyCmdlet.ps1 c:\example\
---> a955b2621c31
Removing intermediate container b825593d39fc
Step 5/5 : RUN c:\example\Execute-MyCmdlet 'hello world'
---> Running in be6d8e63fe75
hello world
---> 8e559e9bf424
Removing intermediate container be6d8e63fe75
Successfully built 8e559e9bf424
PS E:\docker\build\shell>
The SHELL instruction could also be used to modify the way in which a
shell operates. For example, using SHELL cmd /S /C /V:ON|OFF on
Windows, delayed environment variable expansion semantics could be
modified.
The SHELL instruction can also be used on Linux should an alternate
shell be required such as zsh, csh, tcsh and others.
The SHELL feature was added in Docker 1.12.
I had the same problem, and in order to execute pip install inside the virtualenv I had to use this command:
RUN pip install virtualenv virtualenvwrapper
RUN mkdir -p /opt/virtualenvs
ENV WORKON_HOME /opt/virtualenvs
RUN /bin/bash -c "source /usr/local/bin/virtualenvwrapper.sh \
&& mkvirtualenv myapp \
&& workon myapp \
&& pip install -r /mycode/myapp/requirements.txt"
I hope it helps.
Building on the answers on this page, I would add that you have to be aware that each RUN statement runs independently of the others with /bin/sh -c, and therefore won't get any environment variables that would normally be sourced in login shells.
The best way I have found so far is to add the script to /etc/bash.bashrc and then invoke each command with a bash login shell.
RUN echo "source /usr/local/bin/virtualenvwrapper.sh" >> /etc/bash.bashrc
RUN /bin/bash --login -c "your command"
You could, for instance, install and set up virtualenvwrapper, create the virtual env, have it activate when you use a bash login shell, and then install your python modules into this env:
RUN pip install virtualenv virtualenvwrapper
RUN mkdir -p /opt/virtualenvs
ENV WORKON_HOME /opt/virtualenvs
RUN echo "source /usr/local/bin/virtualenvwrapper.sh" >> /etc/bash.bashrc
RUN /bin/bash --login -c "mkvirtualenv myapp"
RUN echo "workon mpyapp" >> /etc/bash.bashrc
RUN /bin/bash --login -c "pip install ..."
Reading the manual on bash startup files helps understand what is sourced when.
According to https://docs.docker.com/engine/reference/builder/#run the default [Linux] shell for RUN is /bin/sh -c. You appear to be expecting bashisms, so you should use the "exec form" of RUN to specify your shell.
RUN ["/bin/bash", "-c", "source /usr/local/bin/virtualenvwrapper.sh"]
Otherwise, using the "shell form" of RUN and specifying a different shell results in nested shells.
# don't do this...
RUN /bin/bash -c "source /usr/local/bin/virtualenvwrapper.sh"
# because it is the same as this...
RUN ["/bin/sh", "-c", "/bin/bash" "-c" "source /usr/local/bin/virtualenvwrapper.sh"]
If you have more than 1 command that needs a different shell, you should read https://docs.docker.com/engine/reference/builder/#shell and change your default shell by placing this before your RUN commands:
SHELL ["/bin/bash", "-c"]
Finally, if you have placed anything in the root user's .bashrc file that you need, you can add the -l flag to the SHELL or RUN command to make it a login shell and ensure that it gets sourced.
Note: I have intentionally ignored the fact that it is pointless to source a script as the only command in a RUN.
According to the Docker documentation:
To use a different shell, other than '/bin/sh', use the exec form passing in the desired shell. For example:
RUN ["/bin/bash", "-c", "echo hello"]
See https://docs.docker.com/engine/reference/builder/#run
I also had issues running source in a Dockerfile.
This runs perfectly fine when building a CentOS 6.6 Docker container, but gave issues in Debian containers:
RUN cd ansible && source ./hacking/env-setup
This is how I tackled it. It may not be an elegant way, but this is what worked for me:
RUN echo "source /ansible/hacking/env-setup" >> /tmp/setup
RUN /bin/bash /tmp/setup
RUN rm -f /tmp/setup
If you have SHELL available you should go with this answer -- don't use the accepted one, which forces you to put the rest of the Dockerfile in one command per this comment.
If you are using an old Docker version and don't have access to SHELL, this will work so long as you don't need anything from .bashrc (which is a rare case in Dockerfiles):
ENTRYPOINT ["bash", "--rcfile", "/usr/local/bin/virtualenvwrapper.sh", "-ci"]
Note the -i is needed to make bash read the rcfile at all.
You might want to run bash -v to see what's being sourced.
I would do the following instead of playing with symlinks:
RUN echo "source /usr/local/bin/virtualenvwrapper.sh" >> /etc/bash.bashrc
This is my solution on "Ubuntu 20.04"
RUN apt -y update
RUN apt -y install curl
SHELL ["/bin/bash", "-c"]
RUN curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh | bash
RUN source /root/.bashrc
RUN bash -c ". /root/.nvm/nvm.sh && nvm install v16 && nvm alias default v16 && nvm use default"
This might be happening because source is a built-in to bash rather than a binary somewhere on the filesystem. Is your intention for the script you're sourcing to alter the container afterward?
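You can confirm that inside a container:
$ type source
source is a shell builtin
$ which source
$                 # no output: there is no standalone source binary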
I ended up putting my env stuff in .profile and mutating SHELL, something like this:
SHELL ["/bin/bash", "-c", "-l"]
# Install ruby version specified in .ruby-version
RUN rvm install $(<.ruby-version)
# Install deps
RUN rvm use $(<.ruby-version) && gem install bundler && bundle install
CMD rvm use $(<.ruby-version) && ./myscript.rb
If you're just trying to use pip to install something into the virtualenv, you can modify the PATH environment variable to look in the virtualenv's bin folder first:
ENV PATH="/path/to/venv/bin:${PATH}"
Then any pip install commands that follow in the Dockerfile will find /path/to/venv/bin/pip first and use that, which will install into that virtualenv and not the system python.
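A minimal sketch of the whole pattern (the image tag and package are illustrative):
FROM python:3.9
# create the virtualenv once at build time
RUN python -m venv /opt/venv
# put its bin directory first on PATH so pip/python resolve to the venv
ENV PATH="/opt/venv/bin:${PATH}"
# this now installs into /opt/venv, not the system site-packages
RUN pip install requests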
Here is an example Dockerfile leveraging several clever techniques to allow you to run a full conda environment for every RUN stanza. You can use a similar approach to execute any arbitrary prep in a script file.
Note: there is a lot of nuance when it comes to login/interactive vs nonlogin/noninteractive shells, signals, exec, the way multiple args are handled, quoting, how CMD and ENTRYPOINT interact, and a million other things, so don't be discouraged if when hacking around with these things, stuff goes sideways. I've spent many frustrating hours digging through all manner of literature and I still don't quite get how it all clicks.
## Conda with custom entrypoint from base ubuntu image
## Build with e.g. `docker build -t monoconda .`
## Run with `docker run --rm -it monoconda bash` to drop right into
## the environment `foo` !
FROM ubuntu:18.04
## Install things we need to install more things
RUN apt-get update -qq && \
    apt-get install -qq curl wget git && \
    apt-get install -qq --no-install-recommends \
      libssl-dev \
      software-properties-common \
    && rm -rf /var/lib/apt/lists/*
## Install miniconda
RUN wget -nv https://repo.anaconda.com/miniconda/Miniconda3-4.7.12-Linux-x86_64.sh -O ~/miniconda.sh && \
    /bin/bash ~/miniconda.sh -b -p /opt/conda && \
    rm ~/miniconda.sh && \
    /opt/conda/bin/conda clean -tipsy && \
    ln -s /opt/conda/etc/profile.d/conda.sh /etc/profile.d/conda.sh
## add conda to the path so we can execute it by name
ENV PATH=/opt/conda/bin:$PATH
## Create /entry.sh which will be our new shell entry point. This performs actions to configure the environment
## before starting a new shell (which inherits the env).
## The exec is important! This allows signals to pass
RUN (echo '#!/bin/bash' \
    && echo '__conda_setup="$(/opt/conda/bin/conda shell.bash hook 2> /dev/null)"' \
    && echo 'eval "$__conda_setup"' \
    && echo 'conda activate "${CONDA_TARGET_ENV:-base}"' \
    && echo '>&2 echo "ENTRYPOINT: CONDA_DEFAULT_ENV=${CONDA_DEFAULT_ENV}"' \
    && echo 'exec "$@"' \
    ) >> /entry.sh && chmod +x /entry.sh
## Tell the docker build process to use this for RUN.
## The default shell on Linux is ["/bin/sh", "-c"], and on Windows is ["cmd", "/S", "/C"]
SHELL ["/entry.sh", "/bin/bash", "-c"]
## Now, every following invocation of RUN will start with the entry script
RUN conda update conda -y
## Create a dummy env
RUN conda create --name foo
## I added this variable such that I have the entry script activate a specific env
ENV CONDA_TARGET_ENV=foo
## This will get installed in the env foo since it gets activated at the start of the RUN stanza
RUN conda install pip
## Configure .bashrc to drop into a conda env and immediately activate our TARGET env
RUN conda init && echo 'conda activate "${CONDA_TARGET_ENV:-base}"' >> ~/.bashrc
ENTRYPOINT ["/entry.sh"]
I've been dealing with a similar scenario for an application developed with the Django web framework, and these are the steps that worked perfectly for me:
Content of my Dockerfile:
[mlazo@srvjenkins project_textile]$ cat docker/Dockerfile.debug
FROM malazo/project_textile_ubuntu:latest
ENV PROJECT_DIR=/proyectos/project_textile PROJECT_NAME=project_textile WRAPPER_PATH=/usr/share/virtualenvwrapper/virtualenvwrapper.sh
COPY . ${PROJECT_DIR}/
WORKDIR ${PROJECT_DIR}
RUN echo "source ${WRAPPER_PATH}" > ~/.bashrc
SHELL ["/bin/bash","-c","-l"]
RUN mkvirtualenv -p $(which python3) ${PROJECT_NAME} && \
    workon ${PROJECT_NAME} && \
    pip3 install -r requirements.txt
EXPOSE 8000
ENTRYPOINT ["tests/container_entrypoint.sh"]
CMD ["public/manage.py","runserver","0:8000"]
Content of the ENTRYPOINT file "tests/container_entrypoint.sh":
[mlazo@srvjenkins project_textile]$ cat tests/container_entrypoint.sh
#!/bin/bash
# *-* encoding : UTF-8 *-*
sh tests/deliver_env.sh
source ~/.virtualenvs/project_textile/bin/activate
exec python "$#"
Finally, this is how I deploy the container:
[mlazo@srvjenkins project_textile]$ cat ./tests/container_deployment.sh
#!/bin/bash
CONT_NAME="cont_app_server"
IMG_NAME="malazo/project_textile_app"
[ $(docker ps -a |grep -i ${CONT_NAME} |wc -l) -gt 0 ] && docker rm -f ${CONT_NAME}
docker run --name ${CONT_NAME} -p 8000:8000 -e DEBUG=${DEBUG} -e MYSQL_USER=${MYSQL_USER} -e MYSQL_PASSWORD=${MYSQL_PASSWORD} -e MYSQL_HOST=${MYSQL_HOST} -e MYSQL_DATABASE=${MYSQL_DATABASE} -e MYSQL_PORT=${MYSQL_PORT} -d ${IMG_NAME}
I really hope this will be helpful for somebody else.
Greetings,
I had the same issue. If you also use a Python base image, you can change the shebang line in your shell script to #!/bin/bash.
See, for example, the container_entrypoint.sh from Manuel Lazo's answer.

Docker Bash prompt does not display color output

I use the command docker run --rm -it govim bash -l to run Docker images, but it does not display color output.
If I source ~/.bash_profile or run bash -l again, output will then correctly be output with color.
Bash Prompt Image
My bash_profile and bash_prompt files.
The OP SolomonT reports that docker run with env does work:
docker run --rm -it -e "TERM=xterm-256color" govim bash -l
And Fernando Correia adds in the comments:
To get both color support and make tmux work, I combined both examples:
docker exec -it my-container env TERM=xterm-256color script -q -c "/bin/bash" /dev/null
As chepner commented (earlier answer), .bash_profile is sourced (it is an interactive shell), since bash_prompt is called by .bash_profile.
But docker issue 9299 illustrates that TERM doesn't seem to be set right away, forcing users to open another bash with:
docker exec -ti test env TERM=xterm-256color bash -l
You have similar color issues with issue 8755.
To illustrate/reproduce the problem:
docker exec -ti $CONTAINER_NAME tty
not a tty
The current workaround is :
docker exec -ti `your_container_id` script -q -c "/bin/bash" /dev/null
Both assume you already have a running container, which might not be convenient here.
Based on VonC's answer, I added the following to my Dockerfile (which allows me to run the container without typing the environment variables on the command line every time):
ENV TERM xterm-256color
#... more stuff
CMD ["bash", "-l"]
And sure enough it works with:
docker run -it my-image:tag
For tmux to work with color, in my ~/.tmux.conf I need:
set -g default-terminal "screen-256color"
and for UTF-8 support in tmux, in my ~/.bashrc:
alias tmux='tmux -u'
My Dockerfile:
FROM fedora:26
ENV TERM xterm-256color
RUN dnf upgrade -y && \
    dnf install golang tmux git vim -y && \
    mkdir -p /app/go/{bin,pkg,src} && \
    echo 'export GOPATH=/app/go' >> $HOME/.bashrc && \
    echo 'export PATH=$PATH:$GOPATH/bin' >> $HOME/.bashrc && \
    mkdir -p ~/.vim/autoload ~/.vim/bundle && \
    curl -LSso ~/.vim/autoload/pathogen.vim \
      https://tpo.pe/pathogen.vim && \
    git clone https://github.com/farazdagi/vim-go-ide.git \
      ~/.vim_go_runtime && \
    bash ~/.vim_go_runtime/bin/install && \
    echo "alias govim='vim -u ~/.vimrc.go'" >> ~/.bashrc && \
    echo "alias tmux='tmux -u'" >> ~/.bashrc && \
    echo 'set -g default-terminal "screen-256color"' >> ~/.tmux.conf
CMD ["bash", "-l"]
The Dockerfile builds an image based off Fedora 26, updates it, installs a few packages (Git, Vim, golang and tmux), installs the pathogen plugin for Vim, then installs the vim-go-ide Git repository, and finally tweaks a few configuration files to get color and UTF-8 working. You just need to add persistent storage, probably mounted under /app/go.
If you have an image with all the development tools already installed, just make a Dockerfile with an ENV statement and add the commands that modify the configuration files in a RUN statement, without the installation commands, and use your base image in the FROM statement. I prefer this solution because I'm lazy and (besides the initial setup) it saves typing when you want to run the image.
Using Vim and plugins within tmux
Adding -t works for me:
docker exec -t vendor/bin/phpunit
Adding to VonC's answer, I made this Bash function:
drun() { # start container with the specified entrypoint and colour terminal
    if [[ $# -lt 2 ]]; then
        echo "drun needs 2+ arguments: image entrypoint" >&2
        return
    fi
    docker run -ti -e "TERM=xterm-256color" "$@"
}
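Usage then looks like this (the image and command are just examples):
drun ubuntu:22.04 bash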
I think this is something that you'd have to implement manually. My container has Python, so here's how I print in color using a single line:
Example Dockerfile:
FROM django:python3
RUN python -c "print('\033[90m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[91m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[92m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[93m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[94m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[95m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[96m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[97m HELLO_WORLD \033[0m')"
RUN python -c "print('\033[98m HELLO_WORLD \033[0m')"
Standard terminal: (screenshot of the resulting colored output)
You need to add the following line to your Dockerfile:
RUN echo PS1="'"'\[\e]0;\u@\h: \w\a\]${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '"'" >> /app/.bashrc
Change /app/.bashrc to wherever your .bashrc file is inside the Docker image.
If you want ls command to have colors too add this line:
RUN echo alias ls="'"'ls --color=auto'"'" >> /app/.bashrc
