How to detect if the current script is running in a docker build?

Suppose I have a Dockerfile that runs a script:
RUN ./myscript.sh
How could I write myscript.sh so that it can detect whether it was launched by the RUN command during a docker build?
#! /bin/bash
# myscript.sh
if <What should I do here?>
then
    echo "I am in a docker build"
else
    echo "I am not in a docker build"
fi
Ideally, it should not require any changes in the Dockerfile, so that the caller of myscript.sh does not need specialized knowledge about myscript.sh.

Try this:
#!/bin/bash
# myscript.sh
isDocker(){
    local cgroup=/proc/1/cgroup
    test -f "$cgroup" && [[ "$(<"$cgroup")" = *:cpuset:/docker/* ]]
}
isDockerBuildkit(){
    local cgroup=/proc/1/cgroup
    test -f "$cgroup" && [[ "$(<"$cgroup")" = *:cpuset:/docker/buildkit/* ]]
}
isDockerContainer(){
    [ -e /.dockerenv ]
}
if isDockerBuildkit || (isDocker && ! isDockerContainer)
then
    echo "I am in a docker build"
else
    echo "I am not in a docker build"
fi
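A quick way to sanity-check the detection is to build and then run an image; a minimal flow, assuming a Dockerfile that COPYs the script and a hypothetical image tag detect-test:
docker build -t detect-test .                # RUN ./myscript.sh prints "I am in a docker build"
docker run --rm detect-test ./myscript.sh    # prints "I am not in a docker build"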

Just to chime in on this: it seems I cannot comment (nor edit, as the edit is two characters long, not enough for SO) on the accepted answer, but it contains a typo.
The isDockerContainer function should read
isDockerContainer(){
    [ -e /.dockerenv ]
}
which created a silent bug in our case.
Cheers

In your Dockerfile, you can try this to run the script:
ADD myscript.sh .
RUN chmod +x myscript.sh
ENTRYPOINT ["./myscript.sh"]
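Note that ENTRYPOINT runs the script when a container starts (docker run), not during the image build; only an explicit RUN ./myscript.sh line executes it at build time, which is the case the question asks about.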

Related

error checking context: file XXX not found or excluded by .dockerignore

I have an interesting setup with my project where I have a directory structure like this:
script.sh
scripts/
create-docker-container.sh
...
src/
...
The idea is that you can run any script from the root directory by proxying the commands through the script.sh script which does the following:
#!/bin/bash
FOUND_SCRIPT=`ls scripts | grep ^$1$`;
if [ "$FOUND_SCRIPT" != "$1" ]; then
    echo "Could not find script: $1";
    echo "Available...";
    ls scripts;
    exit 1;
fi
TMPFILE=`mktemp tmp.XXXX`
cp ./scripts/$1 "$TMPFILE" && chmod +x "$TMPFILE";
function cleanup {
    rm "$TMPFILE";
}
trap cleanup EXIT
trap cleanup SIGINT
shift;
"./$TMPFILE" "$@";
This normally works fine; however, my create-docker-container.sh script isn't working anymore. I get the error:
error checking context: file ('/home/circleci/project/tmp.MK8H') not found or excluded by .dockerignore
My .dockerignore looks like:
tmp.*
I'm not really sure why this script suddenly started failing with the above error. I assume it's because the docker script itself is running from tmp.XXX, which at some point is deleted by script.sh, and it then fails due to some sort of context-switching issue.
I hope someone with more knowledge about docker can help me.
Thanks :)
I have tried modifying the .dockerignore and removing the cleanup step, but neither has worked.
EDIT:
Here is my Dockerfile:
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "./script", "run", "${serviceName}"]
And here is the script to deploy the container:
function cleanup {
    ./script temp-install-restore
}
trap cleanup EXIT
BUNDLE_FILE=$(node -e "console.log(require(\"./services.json\")[\"$1\"].services[0][\"emit-point\"])")/main.js
./script build $1 >/dev/null;
./script temp-install $(cat "$BUNDLE_FILE" | grep -o 'require("[^"]*")')
./script generate-dockerfile $1 | docker build --no-cache -q -f - .
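One workaround worth trying (a sketch, not verified against this setup): have script.sh create the temp copy outside the project root, so it never enters the docker build context and the cleanup trap cannot race with docker's context scan:
# hypothetical change to script.sh: keep the temp file out of the build context
TMPFILE=$(mktemp /tmp/proxy-script.XXXX)
cp "./scripts/$1" "$TMPFILE" && chmod +x "$TMPFILE"
shift
"$TMPFILE" "$@"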

Bash syntax issue, 'syntax error: unexpected "do" (expecting "fi")'

I have a sh script that I am using on Windows and Mac/Linux machines, and it normally seems to work with no issues.
#!/bin/bash
if [ -z "$jmxname" ]
then
    cd ./tests/Performance/JMX/ || exit
    echo "-- JMX LIST --"
    # set the prompt used by select, replacing "#?"
    PS3="Use number to select a file or 'stop' to cancel: "
    # allow the user to choose a file
    select jmxname in *.jmx
    do
        # leave the loop if the user says 'stop'
        if [[ "$REPLY" == stop ]]; then break; fi
        # complain if no file was selected, and loop to ask again
        if [[ "$jmxname" == "" ]]
        then
            echo "'$REPLY' is not a valid number"
            continue
        fi
        # now we can use the selected file, trying to get it to run the shell script
        rm -rf ../../Performance/results/* && cd ../jmeter/bin/ && java -jar ApacheJMeter.jar -Jjmeter.save.saveservice.output_format=csv -n -t ../../JMX/"$jmxname" -l ../../results/"$jmxname"-result.jtl -e -o ../../results/HTML
        # it'll ask for another unless we leave the loop
        break
    done
else
    cd ./tests/Performance/JMX/ && rm -rf ../../Performance/results/* && cd ../jmeter/bin/ && java -jar ApacheJMeter.jar -Jjmeter.save.saveservice.output_format=csv -n -t ../../JMX/"$jmxname" -l ../../results/"$jmxname"-result.jtl -e -o ../../results/HTML
fi
I am now trying to do some stuff with a Docker container and have used a node:alpine image, as the rest of my project is NodeJS based, but for some reason the script will not run in the Docker container, giving the following:
line 12: syntax error: unexpected "do" (expecting "fi")
How can I fix that? The script seems to be working for every system it's been run on so far, and not thrown up any issues.
The error message indicates that the script is executed by /bin/sh, not by /bin/bash. You can reproduce the message with '/bin/sh -n script.sh'.
Check how the script is invoked. On different systems, /bin/sh is symlinked to bash or to another shell that is less feature-rich.
In particular, the problem is with the select statement, which is included in bash but is not part of the POSIX standard.
Another possibility is that bash in your Docker image is set to be POSIX-compliant by default.
@dash-o was correct, and adding
RUN apk update && apk add bash
to my Dockerfile added bash to the container, and now it works fine :)
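For reference, the relevant Dockerfile portion might end up looking like this (a sketch; the base image tag and script name are assumptions, not taken from the question):
FROM node:alpine
# node:alpine ships BusyBox ash as /bin/sh; install bash so `select` is available
RUN apk update && apk add bash
COPY run-jmx.sh .
# invoke the script explicitly with bash rather than relying on /bin/sh
CMD ["bash", "./run-jmx.sh"]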

script to check if a script is running, and start it if it's stopped

I have a script with this name: Run.sh
I run this script with this command:
./run.sh
I don't stop this script on purpose, but it suddenly stops and needs to be run again.
I need a script to check it and, if my run.sh has stopped, run it again.
These are the run.sh contents:
#!/usr/bin/env bash
install() {
    sudo apt-get update
    sudo apt-get upgrade
}
if [ "$1" = "install" ]; then
    install
else
    if [ ! -f ./tg/tgcli ]; then
        echo "tg not found"
        echo "Run $0 install"
        exit 1
    fi
    #sudo service redis-server restart
    #./tg/tgcli -s ./bot/bot.lua -l 1 -E $@
    ./tg/tgcli -s ./bot/bot.lua $@
fi
And I want to run this script at boot (with screen or tmux) if my server restarts.
I have Ubuntu 16.04.
Thank you, Ljm Dullaart.
Can you help me with this?
You should not need to run the complete bash script again. Changing
./tg/tgcli -s ./bot/bot.lua $@
to
while :; do
    ./tg/tgcli -s ./bot/bot.lua $@
done
will restart bot.lua every time it exits.
You can check whether your run.sh is running, and re-run it if it has stopped, with a single command:
$ if ! pgrep run.sh; then /path/to/run.sh; fi
If the script is running, pgrep returns exit status 0 (success) and prints the PID of run.sh.
If the script is not running, pgrep returns exit status 1, and the script is then called.
You can also use pgrep run.sh >/dev/null to "mute" pgrep in case the script is running.
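As for starting run.sh at boot inside a multiplexer, one option is a cron @reboot entry (a sketch; the session name and path are placeholders, and it assumes tmux is installed):
# crontab -e, then add:
@reboot tmux new-session -d -s runsh '/path/to/run.sh'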

How to make a script run commands as root

I'm new to Ubuntu and bash scripts, but I just made runUpdates.sh and added this to my .profile to run it:
if [ -f "$HOME/bin/runUpdates.sh" ]; then
    . "$HOME/bin/runUpdates.sh"
fi
The problem I'm having is, I want the script to run as if root is running it (because I don't want to type my sudo password).
I found a few places saying that I should be able to do sudo chown root.root <my script> and sudo chmod 4755 <my script>, and when I run it, it should run as root. But it's not...
The script looks good to me. What am I missing?
-rwxr-xr-x 1 root root 851 Mar 23 21:14 runUpdates.sh*
Can you please help me run the commands in this script as root? I don't really want to change the sudoers file; I really just want to run the commands in this script as root (if possible).
#!/bin/sh
echo "user is ${USER}"
#check for updates
update=`cat /var/lib/update-notifier/updates-available | head -c 2 | tail -c 1`;
if [ "$update" = "0" ]; then
    echo -e "No updates found.\n";
else
    read -p "Do you wish to install updates? [yN] " yn
    if [ "$yn" != "y" ] && [ "$yn" != "Y" ]; then
        echo -e 'No\n';
    else
        echo "Please wait...";
        echo `sudo apt-get update`;
        echo `sudo apt-get upgrade`;
        echo `sudo apt-get dist-upgrade`;
        echo -e "Done!\n";
    fi
fi
#check for restart
restartFile=`/usr/lib/update-notifier/update-motd-reboot-required`;
if [ ! -z "$restartFile" ]; then
    echo "$restartFile";
    read -p "Do you wish to REBOOT? [yN] " yn
    if [ "$yn" != "y" ] && [ "$yn" != "Y" ]; then
        echo -e 'No\n';
    else
        echo `sudo shutdown -r now`;
    fi
fi
I added the "user is" line to debug; it always outputs my user, not root, and prompts for the sudo password (since I'm calling the commands with sudo) or tells me "are you root?" (if I remove sudo).
Also, is there a way to output the update commands stdout in real time, not just one block when they finish?
(I also tried with the shebang as #!/bin/bash)
setuid does not work on shell scripts for security reasons. If you want to run a script as root without a password, you can edit /etc/sudoers to allow it to be run with sudo without a password.
To "update in real time", you would run the command directly instead of using echo.
It's not safe to do; you should probably use sudoers, but if you really need/want to, you can do it with something like this:
echo <root password> | sudo -S echo -n 2>/dev/random 1>/dev/random
sudo <command>
This works because sudo doesn't require a password for a brief window after successfully being used.
SUID root scripts were phased out many years ago. If you really want to run scripts as root, you need to wrap them in an executable; you can see an example of how to do this on my blog:
http://scriptsandoneliners.blogspot.com/2015/01/sanitizing-dangerous-yet-useful-commands.html
The example shows how to change executable permissions and place a filter around other executables using a shell script, but the concept of wrapping a shell script works for SUID as well: the resulting executable built from the shell script can be made SUID.
https://help.ubuntu.com/community/Sudoers

Bash script output not going to stdout

I have a build process, kicked off by Make, that executes a lot of child scripts.
A couple of these child scripts require root privileges, so instead of running everything as root, or everything as sudo, I'm trying to only execute the scripts that need to be as root, as root.
I'm accomplishing this like so:
execute_as_user() {
    su "$1" -s /bin/bash -c "$2;exit \$?"
}
Arg $1 is the user to run the script as, arg $2 is the script.
Arg $1 is either root (gotten with $(whoami), since everything is under sudo) or the current user's account (gotten with $(logname)).
The entire build is kicked off as:
sudo make all
Sample from the Makefile:
LOG="runtime.log"
ROTATE_LOG:=$(shell bash ./scripts/utils/rotate_log.sh)
system:
	/bin/bash -c "time ./scripts/system.sh 2>&1 | tee ${LOG}"
My problem is... none of the child scripts are printing output to stdout. I believe it to be some sort of issue with an almost recursive call of su root... but I'm unsure. From my understanding, these scripts should already be outputting to stdout, so perhaps I'm mistaken about where the output is going?
To be clear, I'm seeing no output in either the logfile nor displaying to the terminal (stdout).
Updating for clarity:
Previously, I just ran all the scripts either with sudo or just as the logged in user... which with my makefile above, would print to the terminal (stdout) and logfile. Adding the execute_as_user() function is where the issue cropped up. The scripts execute and build the project... just no display "that it's working" and no logs.
UPDATE
Here are some snippets:
system.sh snippet:
execute_script() {
    echo "Executing as user $3: $2"
    RETURN=$(execute_as_user $3 ${SYSTEM_SCRIPTS}/$2)
    if [ ${RETURN} -ne ${OK} ]
    then
        error $1 $2 ${RETURN}
    fi
}
build_package() {
    local RETURN=0
    case "$1" in
    system)
        declare -a scripts=(\
            "rootfs.sh" \
            "base_files.sh" \
            "busybox.sh" \
            "iana-etc.sh" \
            "kernel.sh" \
            "firmware.sh" \
            "bootscripts.sh" \
            "network.sh" \
            "dropbear.sh" \
            "wireless_tools.sh" \
            "e2fsprogs.sh" \
            "shared_libs.sh"
        )
        for SCRIPT_NAME in "${scripts[@]}"; do
            execute_script $1 ${SCRIPT_NAME} $(logname)
            echo ""
            echo -n "${SCRIPT_NAME}"
            show_status ${OK}
            echo ""
        done
        # finalize base system
        echo ""
        echo "Finalizing base system"
        execute_script $1 "finalize.sh" $(whoami)
        echo ""
        echo -n "finalize.sh"
        show_status ${OK}
        echo ""
        # package into tarball
        echo ""
        echo "Packing base system"
        execute_script $1 "archive.sh" $(whoami)
        echo ""
        echo -n "archive.sh"
        show_status ${OK}
        echo ""
        echo ""
        echo -n "Build System: "
        show_status ${OK}
        ;;
    *)
        echo "$1 is not supported!"
        exit 1
    esac
}
Sample child script executed by system.sh:
cd ${CLFS_SOURCES}/
tar -xvjf ${PKG_NAME}-${PKG_VERSION}.tar.bz2
cd ${CLFS_SOURCES}/${PKG_NAME}-${PKG_VERSION}/
make distclean
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
    pkg_error ${RESPONSE}
    exit ${RESPONSE}
fi
ARCH="${CLFS_ARCH}" make defconfig
RESPONSE=$?
if [ ${RESPONSE} -ne 0 ]
then
    pkg_error ${RESPONSE}
    exit ${RESPONSE}
fi
# fixup some bugs with musl-libc
sed -i 's/\(CONFIG_\)\(.*\)\(INETD\)\(.*\)=y/# \1\2\3\4 is not set/g' .config
sed -i 's/\(CONFIG_IFPLUGD\)=y/# \1 is not set/' .config
etc...
Here's the entire system.sh script:
https://github.com/SnakeDoc/LiLi/blob/master/scripts/system.sh
(I know the project is messy... it's a learn-as-you-go style project)
Previously, I just ran all the scripts either with sudo or just as the logged in user... which with my makefile above, would print to the terminal (stdout) and logfile. Adding the execute_as_user() function is where the issue cropped up. The scripts execute and build the project... just no display "that it's working" and no logs.
Just a guess, but you're probably not calling your function or not calling it properly:
execute_as_user() {
    su "$1" -s /bin/bash -c "$2;exit \$?"
}
execute_as_user "$@"
I also noticed that you're not passing any arguments to the script at all. Is this intended?
./scripts/system.sh ???
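Another thing worth checking, offered as a guess from the snippets above: RETURN=$(execute_as_user ...) in execute_script is a command substitution, so the child's stdout is captured into the variable instead of reaching the terminal or the tee pipeline. A sketch that lets output flow through and captures only the exit status:
execute_script() {
    echo "Executing as user $3: $2"
    # run the child directly; its stdout reaches the terminal/log
    execute_as_user "$3" "${SYSTEM_SCRIPTS}/$2"
    local rc=$?    # exit status of execute_as_user
    if [ "$rc" -ne "${OK}" ]; then
        error "$1" "$2" "$rc"
    fi
}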
