Azure DevOps Pipeline error when running docker-compose up

I have an image that runs the Postman Newman tests collection with an HTML reporter. There is also a pipeline created in Azure DevOps.
Everything used to work fine, but recently the pipeline stopped running docker-compose up, although no changes have been made. Locally, everything continues to work.
Here is my Dockerfile:
FROM postman/newman:alpine
RUN npm install -g newman-reporter-htmlextra
RUN apk add --update gettext
RUN apk add --update jq
WORKDIR /etc/newman
COPY path/run.sh .
RUN chmod +x run.sh
ENTRYPOINT [ "sh", "path/run.sh" ]
Pipeline crashes with the following message:
ENOENT: no such file or directory, open 'path/run.sh'
Still, the strangest thing for me is that everything used to work; no changes were made to these files, yet now I get an error.
Maybe something was updated in Azure itself, but I couldn't find any information about it.
My *.sh file ends with LF line endings.

The problem was with the Azure Pipelines YAML file.
I also have a docker-compose.override.yml, and adding the additionalDockerComposeFiles parameter with the path to that file solved the problem.
I think the fact that everything worked before is related to changes in Azure DevOps, although I did not find any detailed information about this.
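A rough sketch of what such a task might look like, assuming the pipeline uses the built-in DockerCompose task (file names are placeholders based on the question, and registry/service-connection inputs are omitted):

- task: DockerCompose@0
  displayName: docker-compose up
  inputs:
    dockerComposeFile: docker-compose.yml
    # Unlike plain docker-compose on a local machine, the task does not pick up
    # docker-compose.override.yml automatically; it has to be listed here.
    additionalDockerComposeFiles: docker-compose.override.yml
    action: Run a Docker Compose command
    dockerComposeCommand: up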

Related

CMakeFiles missing after code coverage execution

I am trying to generate a code coverage report using LCOV on Ubuntu. I followed
https://github.com/QianYizhou/gtest-cmake-gcov-example
and it works.
I run cmake --build ../application/build --target install in my shell script.
After the script runs, I can see that the CMakeFiles are generated in the build folder.
cd build && make test
cd build && make coverage_TEST_NAME   # to check the coverage
I executed the above in my build folder and generated the report.
My problem is that I use this in a GitLab pipeline. There is no build folder that I can see, so I don't know how to run make coverage_TEST_NAME in my .yml file.
How do I generate the code coverage output in the GitLab pipeline?
Problem resolved. Just like in my Ubuntu virtual machine, the application folder was present in my Docker image.
I just added a cd ../application/build step to my .yml script, and it navigated there.
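For reference, the coverage job in .gitlab-ci.yml could look roughly like this (a sketch: the job name, stage, and the coverage_TEST_NAME target are placeholders, and it assumes the application folder sits one level above the checkout inside the image, as described above):

coverage:
  stage: test
  script:
    # Build and install first, exactly as in the shell script above.
    - cmake --build ../application/build --target install
    # The build tree lives outside the checkout, so change into it
    # before invoking the make targets.
    - cd ../application/build
    - make test
    - make coverage_TEST_NAME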

How does `autoreconf` create the m4/ folder?

I hit a problem and ran into a very strange situation:
I run a Docker image locally, run autoreconf -i there, and get a correct, working ./configure script.
Then I run autoreconf -i in the same Docker image, but under GitLab CI, and I get a broken ./configure script: some of the M4 macros were not expanded into their shell code, so Bash cannot execute them and treats them as syntax errors.
The difference is in the m4/ folder between the two runs: the successful run's m4/ folder contains files like:
aria2_arg.m4
ax_check_compile_flag.m4
ax_cxx_compile_stdcxx_11.m4
codeset.m4
fallocate.m4
fcntl-o.m4
gettext.m4
... # and so on
but in the failed (GitLab CI) run the m4/ folder contains:
gettext.m4
fcntl-o.m4
# ... and so on
and aria2_arg.m4, ax_check_compile_flag.m4, ax_cxx_compile_stdcxx_11.m4, fallocate.m4 and others are missing. I don't know how this is possible if the Docker image is the same in both cases, but... how does autoreconf create the m4/ folder? If its contents come from the Docker image itself (I don't know whether that is true; it is only my guess), then why are the contents different in the two cases?
No magic at all. The missing m4 files already exist in the original aria2 GitHub repository (in its m4/ folder), and autoreconf -i adds further .m4 files to that folder. But there is a .gitignore with a .m4 rule: when I added the sources to another git repo (to build it on GitLab), the m4/ folder was ignored. So:
Aria2 GitHub repo (with m4/)  ->  local folder (m4/ exists)      ->  run docker locally  ->  OK
works fine, but:
Aria2 GitHub repo             ->  another git repo (now no m4/)  ->  run GitLab CI       ->  Failure (.m4 missing)
So it seems the cause of the problem was the missing m4/ content in the second git repo (at least I have now got a first successful build).
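A sketch of one way to fix that in the second repository (the paths follow the description above; git check-ignore just confirms which ignore rule is hiding the files):

# Show which .gitignore rule is hiding the macro files
git check-ignore -v m4/*.m4
# Force-add the m4/ macros despite the ignore rule and commit them
git add -f m4/
git commit -m "Add m4/ macros required by autoreconf -i"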

docker image directory does not exist during build

I'm building a simple image from a Dockerfile (note: pm3 is the name of the folder this Dockerfile lives in):
FROM continuumio/miniconda3
MAINTAINER Jordan Miller
ENV PORT=5000
COPY . /opt/repos/
WORKDIR /opt/repos/pm3/
RUN ls -la
RUN python /opt/repos/pm3/lib/acquire_requirements.py
EXPOSE $PORT
ENTRYPOINT ["python","/opt/repos/pm3/src/web/api.py"]
I use docker build -f Dockerfile -t jm/pm3 . to build it. I thought this was working great last week, but I made some changes and it broke, so I ran docker system prune to clean everything out. That didn't fix it, so I think there is something wrong with the code.
At any rate, here's the error I get:
Step 7/9 : RUN python /opt/repos/pm3/lib/acquire_requirements.py
---> Running in f842a282a6a0
Invalid requirement: '/opt/repos/pm3/lib/acquire_requirements.py'
File '/opt/repos/pm3/lib/acquire_requirements.py' does not exist.
But it really is there: on my Windows machine there is a lib folder inside the pm3 folder, and an acquire_requirements.py inside that lib folder. Should I not include the entire path to it on the Linux box, or something?
I added the RUN ls -la line after it gave me the error, because I wanted to see whether the folder had been copied over correctly. But its output didn't show that anything had been copied; it showed an empty directory. So I don't really understand what's going on. If the working directory really is /opt/repos/pm3, shouldn't I see src when I run ls?
I'm hoping there's something obvious about Linux or Docker that I'm missing here. Any ideas?
By building an image without the RUN python /opt/repos/pm3/lib/acquire_requirements.py line, I discovered that COPY . /opt/repos/ does not create a pm3 folder under /opt/repos/ named after the directory the Dockerfile lives in.
So I had to add that path segment to the command manually:
COPY . /opt/repos/ -> COPY . /opt/repos/pm3/
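Put together, the corrected Dockerfile would look roughly like this (a sketch based on the fix above):

FROM continuumio/miniconda3
MAINTAINER Jordan Miller
ENV PORT=5000
# COPY . /opt/repos/ only copies the *contents* of the build context,
# so the pm3 folder has to be named explicitly in the destination.
COPY . /opt/repos/pm3/
WORKDIR /opt/repos/pm3/
RUN python /opt/repos/pm3/lib/acquire_requirements.py
EXPOSE $PORT
ENTRYPOINT ["python","/opt/repos/pm3/src/web/api.py"]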

Pulling a git repo from a startup script on Google Cloud Compute Engine

To show my team how the app I am building is progressing, I created a small dev server on Google Cloud Compute Engine. This server is usually switched off to save cost and is only switched on when we are working together. I develop and push to a git repo while the server is off. When I start the server, the latest changes should be pulled, the node packages installed, and the node server started. To do this I have created the following startup script:
#! /bin/bash
cd /to/my/server/folder
git pull
sudo npm install --no-progress
nohup node src/ &
I have created an SSH key and added it as a read-only deploy key for this particular repo in my GitLab account. The script has been tested on the server and works totally fine. Now the fun part.
When the script is run as a startup script (https://cloud.google.com/compute/docs/startupscript) it doesn't work. The error:
permission denied (public key)
fatal: could not read from repo make sure it exists.
I tried these fixes:
Getting permission denied (public key) on GitLab: there the problem is that they cannot pull git repos at all, whereas in my case it works fine from the command line and from a shell script; it just doesn't work from the startup script.
I also tried a whole bunch of other stuff, spanning the whole spectrum from 'could be it' to 'a wild guess'. Clearly there is something I am missing here. Could anyone help me out?
Finally found the answer here: https://superuser.com/a/868699/852795. Apparently something goes wrong with the SSH keys that are used in a Google startup script. The solution is to explicitly tell git which key to use, like this: GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa" git pull.
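So the startup script from the question becomes something like this (a sketch; it assumes the deploy key lives at ~/.ssh/id_rsa for the account the startup script runs as):

#! /bin/bash
cd /to/my/server/folder
# Startup scripts don't get the usual SSH setup, so point git at the deploy key explicitly.
GIT_SSH_COMMAND="ssh -i ~/.ssh/id_rsa" git pull
sudo npm install --no-progress
nohup node src/ &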

'docker build' gives error that 'docker run' doesn't. How are they different?

My project is setup like this:
./
  Dockerfile
  package.json
  build/
    (compiled files from the frontend and backend directories get put here)
  backend/
    app.js
  frontend/
    (frontend files...)
  scripts/
    startServer.sh
    build.sh
startServer.sh:
docker build ../ --tag myImage
# The build script compiles all my assets and places
# them in the top level 'build' directory which i am
# trying to link to my docker image so I can recompile
# on each file change and have the changes show in the docker image.
./build.sh
docker run --volume /path/to/build/dir:/src/app myImage
Dockerfile:
FROM node:4.4.7
RUN ls src/app
The RUN command in the Dockerfile gives me this error when the build command from the startServer script is called:
ls: cannot access src/app: No such file or directory
If I change RUN to CMD, there is no error. Also, even though the build gives that error, it finishes, and the docker run command gives no error.
Is the 'docker build' command actually trying to add the 'build' folder to the image from which containers are launched? Or is it just compiling some commands for the images to use when they are made?
If it is the latter, how do you make one Dockerfile that is used for both building and running, and that works in both cases?
I feel like I might be missing a crucial concept with Docker, but I've gone through the tutorials and docs and couldn't solve this.
There is no src/app folder in the node image, so this is an expected error. The node image expects you to add your own /usr/src/app, either with a COPY step in your build, or with a volume mapping after the build is finished.
RUN executes a step during the build and adds a layer to the resulting image, so an ls makes little sense there, since it doesn't modify the image with any new content.
CMD only records a default command, to be run when none is passed at the end of docker run; so if you do docker run node /bin/bash, the ls src/app CMD will never run. It also executes after all the steps of your build and after any volume mounts you pass to your container, and it is those mounts that create this folder.
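To illustrate the difference (a contrived sketch, not the fix itself):

FROM node:4.4.7
# RUN executes now, at build time; its result is baked into an image layer.
# At this point no volume is mounted, so /src/app does not exist yet.
RUN echo "created during docker build" > /tmp/build-marker
# CMD only records a default command; it executes at run time,
# after any --volume mounts from docker run are in place.
CMD ["ls", "/src/app"]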
When you run the Docker image, you mount the data volume at /src/app.
But in the Dockerfile, you tried to access src/app as a relative path.
Because the default working directory is not the root, you cannot reach the /src directory that way.
So edit your Dockerfile to:
FROM node:4.4.7
RUN ls /src/app
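Alternatively, if the ls is meant to succeed at build time as well, the compiled files have to be copied into the image, since volumes only exist at run time. A sketch, assuming the build context is the project root (docker build ../ as in startServer.sh):

FROM node:4.4.7
# Bake the compiled assets into the image so they exist at build time...
COPY build/ /src/app/
RUN ls /src/app
# ...the docker run --volume mount can still overlay fresh builds at run time.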
