I created a pipeline in Azure DevOps with Ubuntu 18.04. My requirement was to run a Docker image from a Bash script, so I wrote the script below, but on execution I received the errors "docker command does not exist" and "docker: invalid reference format".
test.sh
#!/bin/bash
#
echo "=== docker Images==="
docker images
echo "==== Starut running a jmeter/image ===="
docker run "justb4/jmeter:latest"
echo "==== Finish ===="
Error
Starting: Bash Script scripts/test.sh
==============================================================================
Task : Bash
Description : Run a Bash script on macOS, Linux, or Windows
Version : 3.163.2
Author : Microsoft Corporation
Help : https://learn.microsoft.com/azure/devops/pipelines/tasks/utility/bash
==============================================================================
Generating script.
Formatted command: bash '/home/vsts/work/1/s/scripts/test.sh' 'justb4/jmeter:latest' mainTest.jmx qa-url 30 1 60 15 2 60 1000
========================== Starting Command Output ===========================
/bin/bash --noprofile --norc /home/vsts/work/_temp/5c641c31-4e55-4ab8-be9e-4cf850432bab.sh
=== docker Images===
docker: 'images
' is not a docker command.
See 'docker --help'
==== Start running a jmeter/image ====
docker: invalid reference format.
See 'docker run --help'.
==== Finish ====
Finishing: Bash Script scripts/test.sh
To describe in detail, here are the tasks created in the Azure pipeline:
1. Install Docker
Output of Task1
2. Just for debugging purposes, I added the bash task below **with inline commands** to check whether the docker commands work, and they worked perfectly with no issues. But in Task 3, when I tried to execute the script with the same commands, it failed.
Output of Task2
3. Task 3 executes the test.sh script containing the docker commands
Output of Task3
The problem was Windows/DOS-style line endings, but the Azure pipeline did not surface the actual error. It turned out each line was terminated with a carriage return followed by a line feed. If a script file is saved with Windows line endings, Bash sees the file as
#!/bin/bash^M
^M
cd "src"^M
with a stray carriage return (^M) at the end of each line.
Running dos2unix on the script solved the problem.
http://dos2unix.sourceforge.net/
Alternatively, rewrite the script in your Unix environment using vi and test again. Unix uses different line endings, so Bash cannot correctly read a file created on Windows and treats the ^M as an illegal character.
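The diagnosis and fix can be sketched as a short shell session (using a throwaway demo.sh rather than the original test.sh):

```shell
# Create a demo script with Windows (CRLF) line endings.
printf '#!/bin/bash\r\necho hello\r\n' > demo.sh

# Each line ends in a carriage return that Bash treats as part of the command.
grep -c "$(printf '\r')" demo.sh    # both lines contain a CR

# Strip the carriage returns in place (what `dos2unix demo.sh` does):
sed -i 's/\r$//' demo.sh
bash demo.sh                        # now runs cleanly and prints "hello"
```

The `file demo.sh` command also reports "CRLF line terminators" on affected files, which is a quick way to spot the problem.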
I am using a data centre to compute a simulation for a project, but in between sessions the node I get assigned may change and will not have conda or FEniCS activated. It is also necessary for my program that I change the cache dir due to the permissions on the server. I therefore would like a shell script to execute all of my start up commands to save me having to retype 6+ commands each time I disconnect. However, whilst the commands work as expected when I type them in the command line, they do not work in the script.
My shell script:
#!/bin/bash
echo "================================="
srun -p cpu_inter -t 120 --exclusive -N 1 -n 32 --pty bash
echo "srun"
source /opt/conda/bin/activate
echo "conda"
source activate fenicsproject
echo "fenics"
export DIJISTSO_CACHE_DIR=$HOME
echo "home dir"
mysrun
echo "use 'scancel \<jodId\>' if needed"
echo "================================="
I run the script:
username@chome:~$ ./shellscriptname.sh
Output:
=================================
username@nodename:~$
(so only the first line was executed, srun was successfully started)
I rerun but with the first command commented out:
username@nodename:~$ ./shellscriptname.sh
=================================
srun
conda
fenics
home dir
./shellscriptname.sh: line 12: mysrun: command not found
use 'scancel <jobId>' if needed
=================================
username@nodename:~$
So not only is the mysrun command not working, but none of the others did either, as there is no (fenicsproject) prefix on the prompt. See the expected result below, from when each command is executed individually:
username@nodename:~$ source /opt/conda/bin/activate
(base) username@nodename:~$ source activate fenicsproject
(fenicsproject) username@nodename:~$ export DIJISTSO_CACHE_DIR=$HOME
(fenicsproject) username@nodename:~$ mysrun
22841 bash RUNNING 4:37 2:00:00 1 nodename cpu_name
(fenicsproject) username@nodename:~$
I have also tried replacing the first line of the script with #!/bin/sh, but in that case I get a 'source: not found' error.
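Part of this behaviour can be reproduced without the cluster: `srun --pty bash` opens an interactive child shell (the remaining script lines only run after it exits), and a script executed as `./shellscriptname.sh` is itself a child process, so its `source` and `export` calls cannot change the invoking shell. A minimal sketch of the process-boundary effect (`setup_demo.sh` and `DEMO_VAR` are made-up names, standing in for the conda activation):

```shell
# A stand-in for the activation commands; 'source activate fenicsproject'
# behaves the same way with respect to process boundaries.
cat > setup_demo.sh <<'EOF'
export DEMO_VAR=fenicsproject
EOF

bash setup_demo.sh                  # runs in a child process
echo "after child : '${DEMO_VAR}'"  # empty: the child's exports are lost

. ./setup_demo.sh                   # 'source' runs it in the current shell
echo "after source: '${DEMO_VAR}'"  # fenicsproject
```

This is why such setup scripts are usually run with `source shellscriptname.sh` (or `. shellscriptname.sh`) rather than `./shellscriptname.sh`.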
I have a Docker image running a Java application whose main class is dynamic, stored in a file called start-class. Traditionally, I started the application like this.
java <some_options_ignored> `cat start-class`
Now I want to run these applications in docker containers. This is my Dockerfile.
FROM openjdk:8
##### Ignored
CMD ["java", "`cat /app/classes/start-class`"]
I built the image and run the containers. The command actually executed was this.
$ docker ps --no-trunc | grep test
# show executed commands
"java '`cat /app/classes/start-class`"
Single quotes were automatically wrapped around the backticks. How can I fix this?
You're trying to run a shell command (expanding a sub-command) without a shell (the json/exec syntax of CMD). You need to switch to the shell syntax (or explicitly run a shell with the exec syntax). That would look like:
CMD exec java `cat /app/classes/start-class`
Without the json formatting, docker will run
sh -c "exec java `cat /app/classes/start-class`"
The exec in this case will replace the shell in pid 1 with the java process to improve signal handling.
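For completeness, here is a sketch of the Dockerfile with both variants; the exec (JSON) form that invokes the shell explicitly is shown commented out as an equivalent alternative (same start-class path as in the question):

```dockerfile
FROM openjdk:8
# Shell form: docker wraps the command in `sh -c`, so the backticks expand.
CMD exec java `cat /app/classes/start-class`
# Equivalent exec (JSON) form, running the shell explicitly:
# CMD ["/bin/sh", "-c", "exec java `cat /app/classes/start-class`"]
```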
We've recently moved to Gitlab and have started using pipelines. We've set up a build server (an Ubuntu 16.04 instance) and installed a runner that uses a Shell executor but I'm unsure of how it actually executes the scripts defined in the .gitlab-ci.yml file. Consider the following snippet of code:
script:
- sh authenticate.sh $DEPLOY_KEY
- cd MAIN && sh deploy.sh && cd ..
- sh deploy_service.sh MATCHMAKING
- sh deauthenticate.sh
I was under the impression that it would just pipe these commands to Bash, and hence I was expecting the default Bash behaviour. What happens, however, is that deploy.sh fails because of an ssh error; Bash then continues to execute deploy_service.sh (which is expected behaviour), however this fails with a "can't open deploy_service.sh" error and the job terminates without Bash executing the last statement.
From what I understand, Bash will only abort on error if you do a set -e first and hence I was expecting all the statements to be executed. I've tried adding the set -e as the first statement but this makes no difference whatsoever - it doesn't terminate on the first ssh error.
I've added the exact output from Gitlab below:
Without set -e
$ cd MAIN && sh deploy.sh && cd ..
deploy.sh: 72: deploy.sh: Bad substitution
Building JS bundles locally...
> better-npm-run build
running better-npm-run in x
Executing script: build
to be executed: node ./bin/build
-> building js bundle...
-> minifying js bundle...
Uploading JS bundles to server temp folder...
COMMENCING RESTART. 5,4,3,2,1...
ssh: Could not resolve hostname $: Name or service not known
$ sh deploy_service.sh MATCHMAKING
sh: 0: Can't open deploy_service.sh
ERROR: Job failed: exit status 1
With set -e
$ set -e
$ cd MAIN && sh deploy.sh && cd ..
deploy.sh: 72: deploy.sh: Bad substitution
Building JS bundles locally...
> better-npm-run build
running better-npm-run in x
Executing script: build
to be executed: node ./bin/build
-> building js bundle...
-> minifying js bundle...
Uploading JS bundles to server temp folder...
COMMENCING RESTART. 5,4,3,2,1...
ssh: Could not resolve hostname $: Name or service not known
$ sh deploy_service.sh MATCHMAKING
sh: 0: Can't open deploy_service.sh
ERROR: Job failed: exit status 1
Why is it, without set -e, terminating on error (also, why is it terminating on the second error only and not the ssh error)? Any insights would be greatly appreciated.
The GitLab script block is actually an array of shell scripts.
https://docs.gitlab.com/ee/ci/yaml/#script
A failure in any element of the array fails the whole array.
To work around this, put your script block in a script.sh file,
like
script:
- ./script.sh
I don't think your sh deploy.sh is generating a non-zero exit code.
You are using set -e to tell the current process to exit if a command exits with a non-zero return code, but you are creating a sub-process to run the shell script.
Here's a simple example script that I've called deploy.sh:
#!/bin/bash
echo "First."
echox "Error"
echo "Second"
If I run the script, you can see how the error is not handled:
$ sh deploy.sh
First.
deploy.sh: line 5: echox: command not found
Second
If I run set -e first, you will see it has no effect.
$ set -e
$ sh deploy.sh
First.
deploy.sh: line 5: echox: command not found
Second
Now, I add -e to the /bin/bash shebang:
#!/bin/bash -e
echo "First."
echox "Error"
echo "Second"
When I run the script with sh, the -e still has no effect.
$ sh ./deploy.sh
First.
./deploy.sh: line 3: echox: command not found
Second
When this script is run directly using bash, the -e takes effect.
$ ./deploy.sh
First.
./deploy.sh: line 3: echox: command not found
To fix your issue I believe you need to:
Add -e to the script shebang line (#!/bin/bash -e)
Call the script directly from bash using ./deploy.sh, not through sh.
Bear in mind that if deploy.sh fails, the cd .. will not run (&& runs the next command only if the preceding one succeeded), which would leave you in the wrong directory for deploy_service.sh. You would be better off with cd MAIN; sh deploy.sh; cd .., but I suggest replacing the call to deploy.sh with a simpler alternative:
script:
- sh authenticate.sh $DEPLOY_KEY
- (cd MAIN && sh deploy.sh)
- sh deploy_service.sh MATCHMAKING
- sh deauthenticate.sh
This is not wildly different, but will result in the cd MAIN && sh deploy.sh to be run in a sub-process (that's what the brackets do), which means that the current directory of the overall script is not affected. Think of it like "spawn a sub-process, and in the sub-process change directory and run that script", and when the sub-process finishes you end up where you started.
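The subshell behaviour is easy to verify locally:

```shell
cd /tmp
(cd / && echo "inside subshell: $(pwd)")    # the cd only affects the subshell
echo "after subshell: $(pwd)"               # still /tmp
```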
As other users have commented, you're actually running your scripts in sh, not bash, so all round this might be better:
script:
- ./authenticate.sh $DEPLOY_KEY
- (cd MAIN && ./deploy.sh)
- ./deploy_service.sh MATCHMAKING
- ./deauthenticate.sh
I have a script I need to run during a Docker build which requires a tty (which Docker does not provide during a build). Under the hood the script uses the read command. With a tty, I can do things like (echo yes; echo no) | myscript.sh.
Without one I get strange errors I don't completely understand. So is there any way to use this script during the build (given that it's not mine to modify)?
EDIT: Here's a more definite example of the error:
FROM ubuntu:14.04
RUN echo yes | read
which fails with:
Step 0 : FROM ubuntu:14.04
---> 826544226fdc
Step 1 : RUN echo yes | read
---> Running in 4d49fd03b38b
/bin/sh: 1: read: arg count
The command '/bin/sh -c echo yes | read' returned a non-zero code: 2
RUN <command> in Dockerfile reference:
shell form, the command is run in a shell, which by default is /bin/sh -c on Linux or cmd /S /C on Windows
Let's see what exactly /bin/sh is in ubuntu:14.04:
$ docker run -it --rm ubuntu:14.04 bash
root@7bdcaf403396:/# ls -n /bin/sh
lrwxrwxrwx 1 0 0 4 Feb 19 2014 /bin/sh -> dash
/bin/sh is a symbolic link to dash; see the read builtin in dash:
$ man dash
...
read [-p prompt] [-r] variable [...]
The prompt is printed if the -p option is specified and the standard input is a terminal. Then a line
is read from the standard input. The trailing newline is deleted from the line and the line is split as
described in the section on word splitting above, and the pieces are assigned to the variables in order.
At least one variable must be specified. If there are more pieces than variables, the remaining pieces
(along with the characters in IFS that separated them) are assigned to the last variable. If there are
more variables than pieces, the remaining variables are assigned the null string. The read builtin will
indicate success unless EOF is encountered on input, in which case failure is returned.
By default, unless the -r option is specified, the backslash ``\'' acts as an escape character, causing
the following character to be treated literally. If a backslash is followed by a newline, the backslash
and the newline will be deleted.
...
So read in dash requires:
At least one variable must be specified.
Now let's look at read in bash:
$ man bash
...
read [-ers] [-a aname] [-d delim] [-i text] [-n nchars] [-N nchars] [-p prompt] [-t timeout] [-u fd] [name...]
If no names are supplied, the line read is assigned to the variable REPLY. The return code is zero,
unless end-of-file is encountered, read times out (in which case the return code is greater than
128), or an invalid file descriptor is supplied as the argument to -u.
...
So I guess your myscript.sh starts with #!/bin/bash or something else, not /bin/sh.
Alternatively, you can change your Dockerfile as below:
FROM ubuntu:14.04
RUN echo yes | read ENV_NAME
Links:
https://docs.docker.com/engine/reference/builder/
http://linux.die.net/man/1/dash
http://linux.die.net/man/1/bash
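The difference between the two read builtins is easy to reproduce outside Docker (plain bash shown here; on Debian/Ubuntu, running the first line with `sh -c` instead reproduces the "read: arg count" error):

```shell
# bash's read falls back to the REPLY variable when no name is given:
echo yes | bash -c 'read && echo "REPLY is: $REPLY"'

# With an explicit variable name, read works in both bash and dash:
echo yes | bash -c 'read ANSWER && echo "ANSWER is: $ANSWER"'
```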
Short answer: you can't do it directly, because neither docker build nor buildx provides a tty (/dev/tty, /dev/console). There is a hacky workaround to achieve what you need, but I strongly discourage it since it breaks the CI model; that is why Docker does not implement it.
Hacky solution
FROM ubuntu:14.04
RUN echo yes | read  # command that requires a tty
As mentioned in the Docker reference documentation, RUN consists of two stages: first the execution of the command, and second the commit of the result to the image as a new layer. So you can perform these stages manually, providing a tty for the execution stage and then committing the result.
Code:
cd
cat > tty_wrapper.sh << 'EOF'
echo yes | read ## Your command which needs tty
rm /home/tty_wrapper.sh
EOF
docker run --interactive --tty --detach --privileged --name name1 ubuntu:14.04
docker cp tty_wrapper.sh name1:/home/
docker exec name1 bash -c "cd /home && chmod +x tty_wrapper.sh && ./tty_wrapper.sh "
docker commit name1 your:tag
Your new image is ready.
Here is a description of the code.
First we create a bash script that wraps the tty-requiring command and removes itself after its first execution. Then we run a container with the --tty option (you can drop --privileged if you don't need it). Next we copy the wrapper script into the container and perform the execution and commit stages ourselves.
You don't need a tty to feed data to your script. Doing something like (echo yes; echo no) | myscript.sh, as you suggested, will do. Also, make sure you copy the file into the image before trying to execute it, with something like COPY myscript.sh myscript.sh.
Most likely you don't need a tty. As the comment on the question shows, even the example provided is a situation where the read command was not properly called. A tty would turn the build into an interactive terminal process, which doesn't translate well to automated builds that may be run from tools without terminals.
If you need a tty, then there's the C library call openpty that you would use when forking a process that includes a pseudo-tty. You may be able to solve your problem with a tool like expect, but it's been so long that I don't remember whether it creates a pty or not. Alternatively, if your application can't be built automatically, you can manually perform the steps in a running container and then docker commit the resulting container to make an image.
I'd recommend against any of those and to work out the procedure to build your application and install it in a non-interactive fashion. Depending on the application, it may be easier to modify the installer itself.
I am running the command
script install-log.txt
the terminal successfully returns
Script started, file is install-log.txt
I then type commands and receive their output on the screen:
lsblk
fdisk -l
ls
echo ok
when I check the install-log.txt
nano install-log.txt
it is empty.
I thought every command was supposed to be saved there until the session finished?
I am using Arch-Linux installation CD, and wanted to save this log to record my installation setup cmds.
You need to terminate the script session by running the exit command. That won't exit your terminal as such. Then you can view your log file.
Here is the duplicate with more detailed info -> Bash script: Using "script" command from a bash script for logging a session
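A non-interactive way to confirm the behaviour (assuming the util-linux version of script, which offers a -c flag to record a single command):

```shell
# 'script' flushes and closes the log when the recorded command finishes,
# the same way 'exit' ends an interactive session.
script -q -c 'echo hello from script' session-log.txt
grep 'hello from script' session-log.txt
```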