Executing CMake from within a Bash Script - Linux

I've built a script to automate a CMake build of OpenCV4. The relevant part of the script is written as:
install.sh
#!/bin/bash
#...
cd /home/pi/opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D BUILD_TESTS=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=OFF ..
This part of the script is first executed from the /home/pi/ directory. If I execute these lines in the CLI they work and CMake configures without error. If I run the same code from a bash script, the cmake command fails with -- Configuring incomplete, errors occurred!.
I believe this is similar to these two SO threads (here and here), in that both describe situations where calling a secondary script from the first script creates a problem (or at least that's what I think they are saying). If that is the case, how can you start a script from /parent/, change to /child/ within the script, and execute a secondary program (CMake) as though it were run from the /child/ directory?
If I've missed my actual problem - highlighting that would be even more helpful.
Update with Full Logs
The output logs from the unsuccessful run via the bash script are CMakeOutput.log and CMakeError.log.
When executed from the CLI, the successful logs are success_CMakeOutput.log and success_CMakeError.log.
Update on StdOut
I looked through the files above and they look the same... Here is the failed screen output (noting the bottom lines) and the successful screen output.

You are running your script as the root user, whose home directory is /root, while the opencv_contrib directory is under /home/pi, which is most probably the home directory of the user pi.
Update the line:
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
with the proper path to opencv_contrib. Either place opencv_contrib in the root user's home directory, if you intend to run the script as root, or provide a full path to opencv_contrib that does not depend on HOME:
-D OPENCV_EXTRA_MODULES_PATH=/home/pi/opencv_contrib/modules \
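You can see the difference directly: ~ expands against the invoking user's HOME, so the same line resolves to two different paths (a minimal illustration):
# as user pi:
echo ~/opencv_contrib/modules    # /home/pi/opencv_contrib/modules
# as root (HOME=/root, as when the script runs as root):
echo ~/opencv_contrib/modules    # /root/opencv_contrib/modules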

Related

Execute two node commands from a bash file as Docker CMD

I am trying to run a Node.js app from a file named start.sh.
I am using this file because I want to execute two processes serially as the CMD in a Dockerfile.
This is the content:
#!/bin/sh
node /get-secrets.mjs -c /secrets-config.json -t /.env.production
npm run start -p 8000
As you can see, I want to do two things:
First, execute the get-secrets.mjs file, a small script that internally uses commander.js to read the flags -c and -t (--config and --target respectively). These flags take string arguments used to locate files.
The second command just starts my Node.js app.
I have no idea how I should write this file, because these commands work on my machine, but in the container it seems my format is wrong.
This is the problem so far:
How should I pass the arguments to the mjs script?
If someone else finds this useful, I solved my issue using this:
FROM node:16.14-alpine
# Dependencies that my script needs
RUN npm i -g @aws-sdk/client-secrets-manager@3.121.0 commander@9.3.0
# Folder for my source code
WORKDIR /app
# SOME OTHER COMMANDS
# ...
# In this folder I have the start.sh and the get-secrets.mjs files
COPY ${SOURCE_PATH}/server-helpers/* ./
# This was required since I am working on windows
RUN sed -i -e 's/\r$//' start.sh
RUN chmod +x start.sh
ENTRYPOINT ["/app/start.sh" ]
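For reference, the start.sh body itself can stay as in the question. One refinement worth considering (my suggestion, not part of the original solution) is to exec the final command so the npm process replaces the shell and receives container stop signals directly:
#!/bin/sh
# run the secrets helper first; its flags are passed as ordinary arguments
node /get-secrets.mjs -c /secrets-config.json -t /.env.production
# exec replaces the shell, so signals reach the node process
exec npm run start -p 8000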

docker RUN mkdir does not work when the folder exists in the previous image

The only difference between them is that the "dev" folder already exists in the centos image.
Check the comment in this piece of code (from executing docker build); I'd appreciate it if anyone can explain why.
FROM centos:latest
LABEL maintainer="xxxx"
RUN dnf clean packages
RUN dnf -y install sudo openssh-server openssh-clients curl vim lsof unzip zip
# below works well!
# RUN mkdir -p oop/script
# RUN cd oop/script
# ADD text.txt /oop/script
# fails with: /bin/sh: line 0: cd: dev/script: No such file or directory
RUN mkdir -p dev/script
RUN cd dev/script
ADD text.txt /dev/script
EXPOSE 22
There are two things going on here.
The root of your problem is that /dev is a special directory, and is re-created for each RUN command. So while RUN mkdir -p dev/script successfully creates a /dev/script directory, that directory is gone once the RUN command is complete.
Additionally, a command like this...
RUN cd /some/directory
...is a complete no-op. It is exactly the same as running sh -c "cd /some/directory" on your local system: the cd succeeds, but it only affects the process running it, and has no effect on the parent process or subsequent commands.
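You can reproduce this outside Docker, since each RUN behaves like a fresh sh -c invocation (a small demonstration):
pwd                  # e.g. /home/you
sh -c 'cd /tmp'      # cd succeeds inside the child shell, which then exits
pwd                  # still /home/you - the parent's cwd never changed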
If you really need to place something into /dev, you can copy it into a different location in your Dockerfile (e.g., COPY test.txt /docker/test.txt), and then at runtime via your CMD or ENTRYPOINT copy it into an appropriate location in /dev.
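A minimal sketch of that pattern (the final application command is a placeholder):
FROM centos:latest
COPY text.txt /docker/text.txt
# /dev is recreated when the container starts, so populate it at runtime:
CMD ["sh", "-c", "mkdir -p /dev/script && cp /docker/text.txt /dev/script/ && exec /path/to/your-app"]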

File Not Found When Running Docker Script But Is Found In Another Docker Script

I am trying to run a script inside a Docker container published by Google.
The command I use mounts some data files into the container under a directory called /input (inside the container).
When I run the script, it says that it cannot find the input file.
However, I do use the -v flag, and I ran a command confirming the input file is there (inside the container).
So in summary - when I run
find /input -name "*.fasta"
It outputs:
/input/ucsc.hg19.chr20.unittest.fasta
As needed, but when I run the script, it says
./dv-quick-start: 19: ./dv-quick-start: --ref=/input/ucsc.hg19.chr20.unittest.fasta: not found
Full Script:
#!/bin/sh
BIN_VERSION="1.0.0"
INPUT_DIR="${PWD}/quickstart-testdata"
DATA_HTTP_DIR="https://storage.googleapis.com/deepvariant/quickstart-testdata"
OUTPUT_DIR="${PWD}/quickstart-output"
sudo docker run \
-v "${INPUT_DIR}":"/input" \
-v "${OUTPUT_DIR}":"/output" \
google/deepvariant:"${BIN_VERSION}" \
find /input -name "*.fasta"
sudo docker run \
-v "${INPUT_DIR}":"/input" \
-v "${OUTPUT_DIR}":"/output" \
google/deepvariant:"${BIN_VERSION}" \
/opt/deepvariant/bin/run_deepvariant \
--model_type=WGS \ **Replace this string with exactly one of the following [WGS,WES,PACBIO,HYBRID_PACBIO_ILLUMINA]**
--ref=/input/ucsc.hg19.chr20.unittest.fasta \
--reads=/input/NA12878_S1.chr20.10_10p1mb.bam \
--regions "chr20:10,000,000-10,010,000" \
--output_vcf=/output/output.vcf.gz \
--output_gvcf=/output/output.g.vcf.gz \
--intermediate_results_dir /output/intermediate_results_dir \ **This flag is optional. Set to keep the intermediate results.**
Full output:
/input/ucsc.hg19.chr20.unittest.fasta
--ref is required.
Pass --helpshort or --helpfull to see help on flags.
./dv-quick-start: 19: ./dv-quick-start: --ref=/input/ucsc.hg19.chr20.unittest.fasta: not found
I feel there is some misunderstanding on my behalf, and I would appreciate any help.
Should more information be needed to answer the question, let me know.
You have some extraneous text in your shell script that's causing a problem. Delete the "replace this string" and "this flag is optional" text and all of the whitespace before them, making the \ the very last character on those lines.
In a shell script you can break commands across multiple lines using a \. But the \ must be the very last character on the line; if it's not, it escapes whatever character comes after it.
# one line: ls -al $HOME
ls -al \
$HOME
# two lines: ls -al " " more text here; $HOME
ls -al \ more text here
$HOME
In your example you've left some explanatory text in
sudo docker run \
...\
--model_type=WGS \ **Replace this string with exactly one of the following [WGS,WES,PACBIO,HYBRID_PACBIO_ILLUMINA]**
# This is seen as a separate command
--ref=/input/ucsc.hg19.chr20.unittest.fasta \
...
Since the "Replace this string..." text means the \ is no longer the last character on the line, the shell breaks the command there. You end up with two commands: a docker run command without the --ref option, and an attempt to run --ref=... as a separate command; these correspond to the two errors you see.
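For reference, the corrected invocation, with every \ as the very last character on its line (flags exactly as in the question):
sudo docker run \
  -v "${INPUT_DIR}":"/input" \
  -v "${OUTPUT_DIR}":"/output" \
  google/deepvariant:"${BIN_VERSION}" \
  /opt/deepvariant/bin/run_deepvariant \
  --model_type=WGS \
  --ref=/input/ucsc.hg19.chr20.unittest.fasta \
  --reads=/input/NA12878_S1.chr20.10_10p1mb.bam \
  --regions "chr20:10,000,000-10,010,000" \
  --output_vcf=/output/output.vcf.gz \
  --output_gvcf=/output/output.g.vcf.gz \
  --intermediate_results_dir /output/intermediate_results_dir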

Why do I get "s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory"?

I'm learning about s6 and I've come to a point where I want to use s6-log. I have the following Dockerfile
FROM alpine:3.10
RUN apk --no-cache --update add s6
WORKDIR /run/service
COPY \
./rootfs/run \
./rootfs/app /run/service/
CMD ["s6-supervise", "."]
with ./rootfs/app being just a simple sh script
#!/bin/sh
while true;
do
sleep 1
printf "Hello %s\n" "$(date)"
done
and run being
#!/bin/execlineb -P
fdmove -c 2 1
s6-log -b n20 s1000000 t /var/log/app/
/run/service/app
Why do I keep getting the following?
s6-log: fatal: unable to open_append /run/service/app/lock: Not a directory
Without the s6-log line it all works fine.
So it seems that I've been doing this incorrectly. Namely, I should have used s6-svscan instead of s6-supervise.
Using s6-svscan I can create a log/ subdirectory in my service's directory so that my app's stdout is redirected to the logger's stdin, as described on s6-svscan's website:
For every new subdirectory dir it finds, the scanner spawns a s6-supervise process on it. If dir/log exists, it spawns a s6-supervise process on both dir and dir/log, and maintains a never-closing pipe from the service's stdout to the logger's stdin.
I've written the log/run script like so:
#!/bin/execlineb -P
s6-log -b n20 s512 T /var/log/app
and with that I've changed the CMD to
CMD ["s6-svscan", "/run/"]
where /run/service/ contains both the run script for my service (without the s6-log call) and the log subdirectory with the run script above.
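The resulting service tree looks roughly like this (a sketch of the layout described above):
/run/service/run        # starts the app; no s6-log call, stdout goes to the pipe
/run/service/log/run    # the execline script above, running s6-log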

Docker can't write to directory mounted using -v unless it has 777 permissions

I am using the docker-solr image with Docker, and I need to mount a directory inside it, which I achieve using the -v flag.
The problem is that the container needs to write to the directory that I have mounted into it, but doesn't appear to have the permissions to do so unless I chmod 777 the entire directory. I don't think setting the permissions so that all users can read and write to it is the solution; it's just a temporary workaround.
Can anyone guide me towards a more canonical solution?
Edit: I've been running docker without sudo because I added myself to the docker group. I just found that the problem is solved if I run docker with sudo, but I am curious whether there are any other solutions.
More recently, after looking through some official Docker repositories, I've realized that the more idiomatic way to solve these permission problems is using something called gosu in tandem with an entrypoint script. For example, take an existing Docker project such as solr, the same one I was having trouble with earlier.
The Dockerfile on GitHub very effectively builds the entire project, but does nothing to account for the permission problems.
So to overcome this, first I added the gosu setup to the Dockerfile (if you implement this, note that version 1.4 is hardcoded; you can check for the latest releases here).
# grab gosu for easy step-down from root
RUN mkdir -p /home/solr \
&& gpg --keyserver pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 \
&& curl -o /usr/local/bin/gosu -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture)" \
&& curl -o /usr/local/bin/gosu.asc -SL "https://github.com/tianon/gosu/releases/download/1.4/gosu-$(dpkg --print-architecture).asc" \
&& gpg --verify /usr/local/bin/gosu.asc \
&& rm /usr/local/bin/gosu.asc \
&& chmod +x /usr/local/bin/gosu
Now we can use gosu, which is basically the same as su or sudo but works much more nicely with Docker. From the description of gosu:
This is a simple tool grown out of the simple fact that su and sudo have very strange and often annoying TTY and signal-forwarding behavior.
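Basic usage mirrors su/sudo (a quick illustration, assuming a solr user exists in the image):
gosu solr whoami    # -> solr
gosu solr id        # prints solr's uid/gid rather than root's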
The other changes I made to the Dockerfile were adding these lines:
COPY solr_entrypoint.sh /sbin/entrypoint.sh
RUN chmod 755 /sbin/entrypoint.sh
ENTRYPOINT ["/sbin/entrypoint.sh"]
just to add my entrypoint file to the Docker container,
and removing the line:
USER $SOLR_USER
so that by default you are the root user (which is why we have gosu: to step down from root).
Now as for my own entrypoint file, I don't think it's written perfectly, but it did the job.
#!/bin/bash
set -e
export PS1="\w:\u docker-solr-> "
# step down from root when just running the default start command
case "$1" in
start)
chown -R solr /opt/solr/server/solr
exec gosu solr /opt/solr/bin/solr -f
;;
*)
exec "$@"
;;
esac
A docker run command takes the form:
docker run <flags> <image-name> <passed in arguments>
Basically the entrypoint says: if we want to run solr as usual, we pass the argument start at the end of the command, like this:
docker run <flags> <image-name> start
and otherwise run the commands you pass as root.
The start option first gives the solr user ownership of the directories and then runs the default command. This solves the ownership problem because unlike the dockerfile setup, which is a one time thing, the entry point runs every single time.
So now if I mount directories using the -v flag, before the entrypoint actually runs solr it will chown the files inside the container for you.
As for what this does to your files outside the container, I've had mixed results because Docker acts a little oddly on OS X. For me it didn't change the files outside the container, but on another OS, where Docker interacts more directly with the filesystem, it might; I guess that's what you have to deal with if you want to mount files into the container instead of just copying them in.
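Concretely, with the entrypoint in place, the mount-then-start flow looks like this (image name and host path are illustrative):
sudo docker run -v /home/me/solr-data:/opt/solr/server/solr my-solr-image start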
