error checking context: file XXX not found or excluded by .dockerignore

I have an interesting setup with my project where I have a directory structure like this:
script.sh
scripts/
  create-docker-container.sh
  ...
src/
  ...
The idea is that you can run any script from the root directory by proxying the commands through the script.sh script which does the following:
#!/bin/bash
FOUND_SCRIPT=`ls scripts | grep ^$1$`;
if [ "$FOUND_SCRIPT" != "$1" ]; then
  echo "Could not find script: $1";
  echo "Available...";
  ls scripts;
  exit 1;
fi
TMPFILE=`mktemp tmp.XXXX`
cp ./scripts/$1 "$TMPFILE" && chmod +x "$TMPFILE";
function cleanup {
  rm "$TMPFILE";
}
trap cleanup EXIT
trap cleanup SIGINT
shift;
"./$TMPFILE" "$@";
This normally works fine; however, my create-docker-container.sh script isn't working anymore. I get the error:
error checking context: file ('/home/circleci/project/tmp.MK8H') not found or excluded by .dockerignore
My .dockerignore looks like:
tmp.*
I'm not really sure why this script suddenly started failing with the above error. I assume it's because the docker script itself is running from tmp.XXX, which at some point is deleted by script.sh, and it then fails because of some sort of context-switching issue.
I hope someone with more knowledge about docker can help me.
Thanks :)
I have tried modifying the .dockerignore and removing the cleanup step, but neither has worked.
EDIT:
Here is my Dockerfile:
WORKDIR /usr/src/app
COPY . /usr/src/app
CMD [ "./script", "run", "${serviceName}"]
And here is the script to deploy the container:
function cleanup {
  ./script temp-install-restore
}
trap cleanup EXIT
BUNDLE_FILE=$(node -e "console.log(require(\"./services.json\")[\"$1\"].services[0][\"emit-point\"])")/main.js
./script build $1 >/dev/null;
./script temp-install $(cat "$BUNDLE_FILE" | grep -o 'require("[^"]*")')
./script generate-dockerfile $1 | docker build --no-cache -q -f - .


How to detect if the current script is running in a docker build?

Suppose I have a Dockerfile which runs a script,
RUN ./myscript.sh
How could I write the myscript.sh so that it could detect if itself is launched by the RUN command during a docker build?
#! /bin/bash
# myscript.sh
if <What should I do here?>
then
echo "I am in a docker build"
else
echo "I am not in a docker build"
fi
Ideally, it should not require any changes in the Dockerfile, so that the caller of myscript.sh does not need specialized knowledge about myscript.sh.
Try this :
#!/bin/bash
# myscript.sh
isDocker(){
  local cgroup=/proc/1/cgroup
  test -f $cgroup && [[ "$(<$cgroup)" = *:cpuset:/docker/* ]]
}

isDockerBuildkit(){
  local cgroup=/proc/1/cgroup
  test -f $cgroup && [[ "$(<$cgroup)" = *:cpuset:/docker/buildkit/* ]]
}

isDockerContainer(){
  [ -e /.dockerenv ]
}
if isDockerBuildkit || (isDocker && ! isDockerContainer)
then
echo "I am in a docker build"
else
echo "I am not in a docker build"
fi
Just to chime in on this: it seems I cannot comment (nor edit, as the edit is two characters long, which is not enough for SO) on the accepted answer, but it contains a typo.
The isDockerContainer function should read:
isDockerContainer(){
  [ -e /.dockerenv ]
}
which created a silent bug in our case.
Cheers
In your Dockerfile, you can try this to run the script:
ADD myscript.sh .
RUN chmod +x myscript.sh
ENTRYPOINT ["./myscript.sh"]

Make execute commands on folder context multiple times

I have a Makefile under the proj root dir.
The proj folder is the main folder, and under it there are folders such as ws-led or tools-ext that contain Dockerfiles.
In addition, the Makefile under the root needs to run all the commands.
This is the folder structure:
proj
  ws-led
    Dockerfile
  tools-ext
    Dockerfile
  Makefile
What I need is to cd to each of the folders under the root (we have many more) and run:
docker build <folder name> .
Example (exactly like running the following commands manually):
cd ws-led
docker build -t ws-led .
cd tools-ext
docker build -t tools-ext .
I tried the following (maybe instead of the repos parameter I could run on all the folders at the same level as the Makefile, e.g. using $(CURDIR)):
all: pre docker-build

.PHONY: pre docker-build

repos := ws-led tools-ext

pre:
	$(patsubst %,docker-build,$(repos))

docker-build: pre
	cd $*; docker build -t $* . >&2 | tee docker-build
While using this I'm getting an error:
invalid argument "." for "-t, --tag" flag: invalid reference format
Any idea what is wrong here, or how I could do it better?
As I have many repos/folders, I want to use make to handle it.
There's more than one way to do it.
You could use a bash for loop:
docker-build:
	for dir in $(repos); do (cd $$dir && docker build -t $$dir . >&2 | tee docker-build); done
Or use a pattern rule (or in this case a static pattern rule):
REPO_BUILDS := $(addsuffix -build, $(repos))

docker-build: $(REPO_BUILDS)

.PHONY: $(REPO_BUILDS)

$(REPO_BUILDS): %-build:
	cd $*; docker build -t $* . >&2 | tee docker-build
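For context, here is a rough sketch (my addition, assuming repos := ws-led tools-ext) of what make docker-build boils down to with either variant, run from the proj root:
(cd ws-led && docker build -t ws-led .)
(cd tools-ext && docker build -t tools-ext .)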

Unable to run shell script in crontab

I am unable to make a script execute successfully from crontab.
When the script is executed manually, it works fine. When added to the crontab it gives errors.
When the script is executed manually as follows it all works fine:
cd /home/admin/git/Repo
./lunchpad2.sh
The script is added to crontab as follows:
sudo crontab -e
30 13 * * * /home/admin/git/Repo/lunchpad2.sh > /home/admin/git/Repo/outcome.err
lunchpad2.sh has 744 permissions set;
The script itself:
#!/bin/bash -p
PATH=$PATH:/home/admin/git/Repo
echo "--> Starting!"
echo "--> Stopping docker"
docker-compose down
echo "--> Switching files"
mv dc_conf_standby.py dc_conf_aboutready.py
mv dc_conf.py dc_conf_standby.py
mv dc_conf_aboutready.py dc_conf.py
echo "--> Building docker"
docker-compose up -d --build
echo "--> Completed!"
The errors that are generated:
/home/admin/git/Repo/lunchpad2.sh: line 7: docker-compose: command not found
mv: cannot stat ‘dc_conf_standby.py’: No such file or directory
mv: cannot stat ‘dc_conf.py’: No such file or directory
mv: cannot stat ‘dc_conf_aboutready.py’: No such file or directory
/home/admin/git/Repo/lunchpad2.sh: line 15: docker-compose: command not found
I see two issues here:
1. You need to either cd in the script or in the cron job. Cron runs the command in your home directory. You can echo "$PWD" to confirm.
2. You need to specify the docker-compose executable path (run which docker-compose to confirm).
#!/bin/bash -p
cd /home/admin/git/Repo
echo "--> Starting!"
echo "--> Stopping docker"
/usr/bin/docker-compose down
echo "--> Switching files"
mv dc_conf_standby.py dc_conf_aboutready.py
mv dc_conf.py dc_conf_standby.py
mv dc_conf_aboutready.py dc_conf.py
echo "--> Building docker"
/usr/bin/docker-compose up -d --build
echo "--> Completed!"

How to update a bash script with the old version of this script?

I have a Linux bash script which has a parameter to update the script itself. My problem is that the script can't update itself while it is running. Does someone have a solution?
Currently I try to update the script as follows:
# Download latest version
wget -q https://github.com/TS3Tools/TS3UpdateScript/archive/master.zip
# Unzip latest version
unzip master.zip TS3UpdateScript-master/* -x TS3UpdateScript-master/configs/ && mv -f TS3UpdateScript-master/* . && rmdir TS3UpdateScript-master/
But I receive the following error by the script:
replace TS3UpdateScript-master/LICENSE_GNU_GPL.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
ateScript-master/configs
caution: excluded filename not matched: TS3UpdateScript-master/configs/
# many arguments
I hope someone can help me. Thanks in advance!
It seems that your error comes from a file name wildcard without quotes. Bash does globbing first, replaces * with lots of filenames, and then runs unzip with these parameters. Try unzip master.zip 'TS3UpdateScript-master/*' -x 'TS3UpdateScript-master/configs/'.
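A minimal sketch of the quoted version in context (my addition; the -o flag, which overwrites existing files without prompting, and the configs/* exclude pattern are assumptions to avoid the interactive replace prompt and the "excluded filename not matched" warning seen above):
wget -q https://github.com/TS3Tools/TS3UpdateScript/archive/master.zip
unzip -o master.zip 'TS3UpdateScript-master/*' -x 'TS3UpdateScript-master/configs/*'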
Then there will be a problem with running the new version of the script instead of the old one that is still running. I think it should be done like this:
#!/bin/bash
version=4
if [ "$UPDATED" != "$0" ]; then
  cp self_update.new.sh self_update.sh
  exec env UPDATED="$0" "$0" "$@"
fi
echo "This script's version is $version"
Thanks for your help and ideas! I've "outsourced" the code to another script, which contains the following code:
#!/usr/bin/env bash
sleep 5s
# Download latest version
wget -q https://github.com/TS3Tools/TS3UpdateScript/archive/master.zip
# Unzip latest version
if [[ $(unzip master.zip TS3UpdateScript-master/* -x TS3UpdateScript-master/configs/*) ]]; then
  if [ $(cp -Rf TS3UpdateScript-master/* . && rm -rf TS3UpdateScript-master/) ]; then
    rm -rf master.zip
    exit 1;
  fi
else
  rm -rf master.zip
  exit 0;
fi

Recursively override a rc file in bash

This is similar to a .htaccess for directories.
I have following:
File: ~/.myapprc
APP_USER=alagu
APP_DOMAIN=goyaka.com
File: ~/testapp/.myapprc
APP_USER=alagu_test
APP_DOMAIN=localhost
What I want:
[alagu@~ ]$ echo $APP_USER
alagu
[alagu@~ ]$ cd ~/testapp
[alagu@~ ]$ echo $APP_USER
alagu_test
How do I get this done?
Looks like you want to source .myapprc whenever you change directory.
There are two avenues I can think of: PROMPT_COMMAND, and the DEBUG trap.
To do this with the first, you'd run the following once:
PROMPT_COMMAND="[ -f .myapprc ] && . .myapprc"
and with the second:
trap "[ -f .myapprc ] && . .myapprc" DEBUG
These will source the file once for every prompt, so if sourcing that file is expensive you could extend it to check if $PWD has changed.
You could also override cd, but this may break some shell scripts:
alias cd=cd_
function cd_
{
  \cd "$@"
  local ret=$?
  [ -f .myapprc ] && . .myapprc
  return $ret
}
But doing any of these really isn't a good idea: they're all huge security holes, since you'll end up running whatever commands are in .myapprc in whatever your current working directory is.
Late edit for Joachim: with the PROMPT_COMMAND/trap solutions, you can avoid excessive execution of .myapprc with the following:
PROMPT_COMMAND='if [ -f .myapprc -a "$PWD" != "$PWDLAST" ]; then PWDLAST="$PWD"; source .myapprc; fi'
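The same guard can be applied to the DEBUG-trap variant (my addition, a sketch along the lines of the one-liner above):
trap 'if [ -f .myapprc ] && [ "$PWD" != "$PWDLAST" ]; then PWDLAST="$PWD"; source .myapprc; fi' DEBUG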
You can create a function in your .bashrc that overrides the cd command:
cd() {
  # "$@" to preserve quoting/whitespace
  builtin cd "$@"
  [ -f ".myapprc" ] && source .myapprc
}
You can customize your environment based on your working directory with direnv. It's at http://direnv.net.
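For reference (my addition, based on direnv's standard usage): the per-directory settings go into a .envrc file that you approve once with direnv allow, after adding the hook to your shell:
# ~/testapp/.envrc (hypothetical, mirroring the question's values)
export APP_USER=alagu_test
export APP_DOMAIN=localhost
Add eval "$(direnv hook bash)" to ~/.bashrc; direnv then loads these variables when you cd into ~/testapp and unloads them when you leave.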
