cannot find .sh script from the docker container - linux

I have a Dockerfile that looks like this:
FROM alpine:3.7
WORKDIR /home/tmp
RUN apk add autoconf py-pip python3 && \
    pip install --upgrade pip && pip install wheel
Originally I wanted to execute a .sh script on startup (via ENTRYPOINT) and then immediately destroy the container. However, since it failed to find the file, I decided to do that manually.
I run the container like this:
docker run -it --rm -v c:/projects/mega-nz-sdk:/home/tmp mega_sdk_python
It connects me to a shell in the container.
In the list of files, I can see the script I want to execute:
/home/tmp # ls
Dockerfile compile.sh sdk-develop
/home/tmp #
However, when I try to run it, the script cannot be found:
/home/tmp # ./compile.sh
/bin/sh: ./compile.sh: not found
/home/tmp #
What is the problem?
The script compile.sh looks like this:
#!/bin/bash
cd sdk-develop
sh autogen.sh
./configure --disable-silent-rules --enable-python --disable-examples &&\
make
cd bindings/python
python setup.py bdist_wheel
Ideally I would like to execute it during instantiation of the container, in order to have an already-configured container on startup (without the need to run the script each time I run the container).

It seems that in order to execute my .sh file I need to run it like this:
sh compile.sh
So I added
CMD ["sh", "compile.sh"]
And it started to work (though it then failed with other errors, like missing make, but that's due to missing packages in Alpine Linux itself, so a separate matter).
I guess it is something to do with Alpine Linux itself, but I am not sure.

Alpine Linux is a very minimal distribution; it includes minimal versions of most Unix tools that conform to the POSIX specification, but no more. In particular, it does not include GNU Bash.
Your script doesn't actually use any Bash-specific features, so it is enough to change the first line of the script to run the default system Bourne shell:
#!/bin/sh
Using the Alpine apk package manager to install bash would work too, but it's not necessary for what you're showing here.
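If you did want Bash anyway, that alternative is a one-liner in the Dockerfile (a sketch; --no-cache simply avoids keeping the apk package index in the image):
RUN apk add --no-cache bash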
Usually you'd run the sorts of "compile" commands you show during the course of building an image, not when the container starts up. I'd expect a much more typical Dockerfile to COPY the application source code in and then RUN the commands you show. That would happen just once, when you docker build the image, and not every time you want to run the packaged application.
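As a rough sketch of that approach (the exact Alpine package set is an assumption; build-base is the usual way to get gcc, make, and the libc headers that the configure/make step needs):
FROM alpine:3.7
# Toolchain and Python packaging dependencies (assumed package list)
RUN apk add --no-cache build-base autoconf automake libtool python3 py-pip && \
    pip install --upgrade pip wheel
WORKDIR /home/tmp
# Bake the source into the image instead of bind-mounting it at run time
COPY . .
# Build once, at image build time
RUN sh compile.sh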

You mounted the directory from your Windows host into the container. I believe this is a permission problem: the file does not have its executable flag set.
Show a detailed listing:
# ls -lh
Copy the folder to an internal directory and add the executable bit:
cp -r /home/tmp /home/tmp2
chmod +x /home/tmp2/*.sh
/home/tmp2/compile.sh
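Alternatively, since running the script through an explicit interpreter does not require the executable bit at all, you can skip the copy entirely (a sketch):
sh /home/tmp/compile.sh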

Related

Cannot find executable after installation

I am trying to install KICS on an AWS EC2 instance (Ubuntu). I am using the one-liner install script:
curl -sfL 'https://raw.githubusercontent.com/Checkmarx/kics/master/install.sh' | bash
However when I run:
kics version
or
which kics
It seems like the shell cannot find the command. It forces me to reboot before being able to see it; however, rebooting is not an option in my use case.
As per the documentation of KICS (https://docs.kics.io/latest/getting-started/#one-liner_install_script):
Run the following command to download and install kics. It will detect your current OS and download the appropriate binary package, defaults installation to ./bin and the queries will be placed alongside the binary in ./bin/assets/queries:
curl -sfL 'https://raw.githubusercontent.com/Checkmarx/kics/master/install.sh' | bash
If you want to place it somewhere else like /usr/local/bin:
sudo curl -sfL 'https://raw.githubusercontent.com/Checkmarx/kics/master/install.sh' | bash -s -- -b /usr/local/bin
So by default, it will install into the /home/<user>/bin folder if you use the first command. That folder may not be in the PATH environment variable, which is why the which command finds nothing.
So you need to install using the second command instead, which places the binary in /usr/local/bin; that directory should already be in PATH, and after that the which command will work as well.
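Alternatively, if rebooting is not an option and you keep the default location, you can add the install directory to PATH for the current shell session (a sketch, assuming the installer placed the binary in $HOME/bin):
export PATH="$PATH:$HOME/bin"
kics version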

Docker not extracting .tar.gz and cannot find the file in the image

I am having trouble extracting a .tar.gz file and accessing its files in a Docker image. I've tried looking around Stack Overflow, but the solutions didn't fix my problem. Below are my folder structure and my Dockerfile. I've made an image called modus.
Folder structure:
- modus
  - Dockerfile
  - ModusToolbox_2.1.0.1266-linux-install.tar.gz
Dockerfile:
FROM ubuntu:latest
USER root
RUN apt-get update -y && apt-get upgrade -y && apt-get install git -y
COPY ./ModusToolbox_2.1.0.1266-linux-install.tar.gz /root/
RUN cd /root/ && tar -C /root/ -zxf ModusToolbox_2.1.0.1266-linux-install.tar.gz
I've been running the commands below, but when I try to check /root/ the extracted files aren't there...
docker build .
docker run -it modus
root@e19d081664e4:/# cd root
root@e19d081664e4:/root# ls
<prints nothing>
There should be a folder called ModusToolBox, but I can't find it anywhere. Any help is appreciated.
P.S
I have tried changing ADD to COPY, but neither works.
You didn't provide a tag option with -t when building, but you're using a tag in docker run -it modus. By doing that you run some other modus image, not the one you have just built. Docker should say something like Successfully built <IMAGE_ID> at the end of the build; run docker run -it <IMAGE_ID> to run the newly built image if you don't want to provide a tag.
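Alternatively, tag the build so the name points at the image you just built (a sketch, run from the modus directory):
docker build -t modus .
docker run -it modus ls /root
The final command should now list the extracted ModusToolbox directory.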

Docker and Plotly

I created a Python script using Plotly Dash to draw graphs, then used plotly-orca to export a static image of the created graph. I want to dockerise this script, but my problem is that when I build and run the image I get a "The orca executable is required in order to export figures as static images" error. My question now is: how do I include the executable as part of my Docker image?
It's a bit complicated due to the nature of plotly-orca, but it can be done, according to this Dockerfile based on this advice. Add this to your Dockerfile:
# Download orca AppImage, extract it, and make it executable under xvfb
RUN apt-get update && apt-get install --yes xvfb wget
RUN wget https://github.com/plotly/orca/releases/download/v1.1.1/orca-1.1.1-x86_64.AppImage -P /home
RUN chmod 777 /home/orca-1.1.1-x86_64.AppImage
# To avoid the need for FUSE, extract the AppImage into a directory (name squashfs-root by default)
RUN cd /home && /home/orca-1.1.1-x86_64.AppImage --appimage-extract
RUN printf '#!/bin/bash \nxvfb-run --auto-servernum --server-args "-screen 0 640x480x24" /home/squashfs-root/app/orca "$@"' > /usr/bin/orca
RUN chmod 777 /usr/bin/orca
RUN chmod -R 777 /home/squashfs-root/
I would just upgrade to Plotly 4.9 or newer and use Kaleido through pip - an official substitute for Orca, which has been a pain to set up with Docker: https://plotly.com/python/static-image-export/
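With that approach the Dockerfile shrinks to a single line (a sketch; once Kaleido is installed, fig.write_image("figure.png") works with no Orca, AppImage, or xvfb setup):
RUN pip install --upgrade "plotly>=4.9" kaleido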

"cp: Command not found" when recreating and extending centos6-i386 Docker base image

I'm currently rebuilding our build server, and creating a set of Docker images for our various projects, as each has rather different toolchain and library requirements. Since Docker currently only runs on 64-bit hosts, the build server will be a x86_64 Fedora 22 machine.
These images must be able to build historical/tagged releases of our projects without modification; we can make changes to the build process for each project if needed, but only for current trunk and future releases.
Now, one of my build environments needs to reproduce an old i686 build server. For executing 32-bit programs I can simply install i686 support libraries (yum install glibc.i686 ncurses-libs.i686), but that doesn't help me build 32-bit programs without modifying Makefiles to pass -m32 to GCC … and, as stated above, I do not wish to alter historical codebases at all.
So, my current idea is to basically fake a i686 version of CentOS in a Docker container by installing all i686 packages, including GCC. That way, although uname -a will report the host's x86_64 architecture, everything else within the container should be pretty consistent. I took the idea (and centos6.tar.gz) from the "centos-i386" base image which, in essence, I'm trying to reproduce for my own local image.
Sadly, it's not going very well.
Here's a minimal-ish Dockerfile:
FROM scratch
# Inspiration from https://hub.docker.com/r/toopher/centos-i386/~/dockerfile/
ADD centos6.tar.gz /
RUN echo "i686" > /etc/yum/vars/arch && \
echo "i386" > /etc/yum/vars/basearch
ENTRYPOINT ["linux32"]
# Base packages
RUN yum update -y && yum -y install epel-release patch sed subversion bzip zip
# AT91SAM9260 ARM compiler
ADD arm-2009q1-203-arm-none-linux-gnueabi-i686-pc-linux-gnu.tar.bz2 /usr/local/
ENV PATH $PATH:/usr/local/arm-2009q1/bin
# AT91SAM9260 & native cxxtest
ADD cxxtest-3.10.1.tar.gz /staging/
WORKDIR /staging/cxxtest/
RUN cp -r cxxtest /usr/local/arm-2009q1/arm-none-linux-gnueabi/include/
RUN cp -r cxxtest /usr/local/include/
RUN cp cxxtestgen.pl /usr/bin/
RUN ln -s /usr/bin/cxxtestgen.pl /usr/bin/cxxtestgen
WORKDIR /
RUN rm -rf /staging/
The build fails on the first "RUN" in the cxxtest installation step:
/bin/sh: cp: command not found
The command '/bin/sh -c cp -r cxxtest /usr/local/arm-2009q1/arm-none-linux-gnueabi/include/' returned a non-zero code: 127
What's wrong?
Because your image is being built FROM scratch, not from the "centos6" base image (as is the case with the published "centos6-i686" image), even though you unpacked CentOS 6 into the filesystem as your first step, the build shell starts up with no meaningful PATH set. Adding the following after your ENTRYPOINT will make all the usual binaries accessible again for the duration of the build process:
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Containers created from your image (had it been built, say, by skipping the cxxtest steps) would never have been affected, as the fresh Bash instances would have had the PATH correctly set through /etc/profile.
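For clarity, here is how the top of the Dockerfile would look with that fix applied (a sketch; the remaining instructions are unchanged):
FROM scratch
ADD centos6.tar.gz /
RUN echo "i686" > /etc/yum/vars/arch && \
echo "i386" > /etc/yum/vars/basearch
ENTRYPOINT ["linux32"]
# Restore the conventional search path; FROM scratch provides none
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin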

how to set supervisor to run a shell script

I'm setting up a Dockerfile to install Node prerequisites and then set up supervisor in order to run the final npm install command. I'm running Docker in CoreOS under VirtualBox.
I have a Dockerfile that sets everything up correctly:
FROM ubuntu
MAINTAINER <<Me>>
# Install docker basics
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
# Install dependencies and nodejs
RUN apt-get update
RUN apt-get install -y python-software-properties python g++ make
RUN add-apt-repository ppa:chris-lea/node.js
RUN apt-get update
RUN apt-get install -y nodejs
# Install git
RUN apt-get install -y git
# Install supervisor
RUN apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
# Add supervisor config file
ADD ./etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
# Bundle app source
ADD . /src
# create supervisord user
RUN /usr/sbin/useradd --create-home --home-dir /usr/local/nonroot --shell /bin/bash nonroot
RUN chown -R nonroot: /src
# set install script to executable
RUN /bin/chmod +x /src/etc/install.sh
#set up .env file
RUN echo "NODE_ENV=development\nPORT=5000\nRIAK_SERVERS={SERVER}" > /src/.env
#expose the correct port
EXPOSE 5000
# start supervisord when container launches
CMD ["/usr/bin/supervisord"]
And then I want to set up supervisord to launch one of a few possible processes, including an installation shell script that I've confirmed to work correctly, install.sh, which is located in the application's /etc directory:
#!/bin/bash
cd /src; npm install
export PATH=$PATH:node_modules/.bin
However, I'm very new to supervisor syntax, and I can't get it to launch the shell script correctly. This is what I have in my supervisord.conf file:
[supervisord]
nodaemon=true
[program:install]
command=install.sh
directory=/src/etc/
user=nonroot
When I build the image, everything runs correctly, but when I launch a container, I get the following:
2014-03-15 07:39:56,854 CRIT Supervisor running as root (no user in config file)
2014-03-15 07:39:56,856 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
2014-03-15 07:39:56,913 INFO RPC interface 'supervisor' initialized
2014-03-15 07:39:56,913 WARN cElementTree not installed, using slower XML parser for XML-RPC
2014-03-15 07:39:56,914 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2014-03-15 07:39:56,915 INFO supervisord started with pid 1
2014-03-15 07:39:57,918 INFO spawnerr: can't find command 'install.sh'
2014-03-15 07:39:58,920 INFO spawnerr: can't find command 'install.sh'
Clearly, I have not set up supervisor correctly to run this shell script -- is there part of the syntax that I'm screwing up?
The best way that I found was to set this:
[program:my-program-name]
command = /path/to/my/command.sh
startsecs = 0
autorestart = false
startretries = 1
I think I got this sorted: I needed the full path in command, and instead of having user=nonroot in the .conf file, I put su nonroot into the install.sh script.
I had a quick look at the source code for supervisor and noticed that if the command does not contain a forward slash /, it will look in the PATH environment variable for that file. This imitates the behaviour of execution via a shell.
The following methods should fix your initial problem:
Specify the full path of the script (like you have done in your own answer)
Prefix the command with ./, i.e. ./install.sh (in theory, but untested)
Prefix the command with the shell executable, i.e. /bin/bash install.sh
I do not understand why user= does not work for you (have you tried it again after fixing execution?), but the problem you encountered in your own answer was probably due to the incorrect usage of su, which does not work like sudo. su creates its own interactive shell and will therefore hang while waiting for standard input. To run commands with su, use the -c flag, e.g. su -c "some-program" nonroot. An explicit shell can also be specified with the -s flag if necessary.
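Putting those pieces together, a working supervisord.conf might look like this (a sketch combining the full path, an explicit shell, and the one-shot settings from the earlier answer):
[supervisord]
nodaemon=true

[program:install]
command=/bin/bash /src/etc/install.sh
directory=/src/etc/
user=nonroot
startsecs=0
autorestart=false
startretries=1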
I had this issue too. For me, the root cause was failing to set the shebang line. Even if the script can run in bash fine, for supervisord to be able to exec() it, it has to begin with e.g. #!/bin/bash.
