Running cron within a docker debian:jessie container - cron

After installing cron with RUN apt-get update && apt-get install -y cron, I am unable to run it. If I try to run cron I get an error saying cron is not in PATH. How do I go about using cron within my container?
Note: The specific container is the official Nginx container provided by Docker.
Edit: I am running the command through Compose.

Figured it out. I was building with the Docker CLI (docker build .) but running with docker-compose. Compose runs its own build, with its own image name attached, and so it was using an outdated image. Running docker-compose build solved it.
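As a sketch, assuming the usual layout with a docker-compose.yml next to the Dockerfile, the fix looks like this:

```shell
# Rebuild the image(s) that Compose manages, so it stops
# reusing the stale image from an earlier `docker build .`
docker-compose build

# Recreate and start the containers from the fresh image
docker-compose up -d

# Or combine the two steps:
docker-compose up -d --build
```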

Related

Meteor build is hanging with Docker build but working inside the container

I am trying to create a Dockerfile to build a Windows container with a Meteor app bundle. But my docker build hangs at the "meteor build" step, and sometimes it only completes after hours.
ONBUILD RUN meteor build --server-only --allow-superuser --directory "c:\tmp\bundle-dir" --architecture os.windows.x86_64
But if I comment out this step, the docker build completes successfully and creates a Windows Server 2019 based Docker image. And I can run "meteor build" inside the container after starting one from this image:
docker run -it <image_name> cmd
I don't know what is going on here. My changes are available on GitHub at https://github.com/singh-ajeet/meterd-windows.
I am using a Google Cloud VM with 8 cores and 30 GB of RAM.

Issue docker commands on Jenkins slave

I have a Jenkins master running on Windows Server 2016. I need to be able to run Linux containers to run some automated e2e tests. For reasons I won't get into, I cannot enable Hyper-V on this machine. This is preventing me from installing LCOW and Docker on my Jenkins master.
What I've done instead is set up an Ubuntu 18.04 VM in VirtualBox and installed Docker there. I've configured the VM as a Jenkins slave, using SSH to log in as the jenkins user. I've set up and configured everything for this user to be able to run docker commands without using sudo. If I manually SSH into the server as the jenkins user, I can run docker commands without an issue. Everything works the way you would expect.
I've then set up a test build to check that everything was working correctly. The problem is that when I try to run docker commands using the Execute Shell build step, I get a docker: not found error. From what I can tell, the build is running as the correct user. I added who -u to the build step so I could check which user the build was running as.
Here is the output from my build:
[TEST - e2e - TEST] $ /bin/sh -xe /tmp/jenkins16952572249375249520.sh
+ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
+ docker run hello-world
/tmp/jenkins16952572249375249520.sh: 3: /tmp/jenkins16952572249375249520.sh: docker: not found
As I mentioned, the jenkins user has been added to the docker group and Docker has been added to $PATH (/snap/bin/):
jenkins@jenkins-docker-slave:~$ which docker
/snap/bin/docker
jenkins@jenkins-docker-slave:~$ $PATH
-bash: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory
jenkins@jenkins-docker-slave:~$ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
jenkins@jenkins-docker-slave:~$ cat /etc/group | grep docker
docker:x:1001:qctesting,jenkins
As you can see by this snippet I can successfully run docker commands by logging into the server as the jenkins user:
jenkins@jenkins-docker-slave:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
I have also configured the path to docker in the slave's node properties, as I thought it would fix my issue. As you can see, I have both git and docker listed. Git commands are working just fine; it is only the docker commands that are giving me problems. I have tried both /snap/bin and /snap/bin/docker with no luck.
I am trying to build a Jenkins job that will clone a git repo, spin up the containers I need using docker-compose and some build parameters I pass in at build time, and run my e2e tests against any environment (qa, staging, production, etc.). I just can't get the Jenkins slave to run the docker commands. What am I missing? How can I get the slave to recognize that docker is already installed on the system and that the user has the correct permissions to execute those commands?
NOTE: I am NOT trying to run docker in docker. Practically all questions/documentation I've found on running docker commands on a jenkins slave describe how to solve this issue by running the slave in a docker container and installing the docker client in the slave container. That is not what I'm trying to accomplish. I am trying to ssh from a jenkins master into a jenkins slave that already has docker installed and run docker commands on that server as the jenkins user.
I finally figured this out thanks to the answer to this question. After reading that answer I realized I had installed the wrong version of Docker on Ubuntu (the snap package). I removed the previous installation and installed the correct Docker package using sudo curl -sSL https://get.docker.com/ | sh. I then restarted my Jenkins slave and everything started working.
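A rough sketch of that fix, assuming the broken installation was the Ubuntu snap (which the /snap/bin/docker path above suggests):

```shell
# Remove the snap-packaged Docker that the non-interactive
# Jenkins shell could not find on its PATH
sudo snap remove docker

# Install Docker via the official convenience script
curl -sSL https://get.docker.com/ | sudo sh

# Allow the jenkins user to run docker without sudo
sudo usermod -aG docker jenkins

# Confirm the binary now lives on a standard PATH entry
which docker
```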

Building a custom Node-RED image

I would like to make my own Node-RED Docker image so that when I start it, the flows are loaded and Node-RED is ready to go.
The flow I want to load is in a 'flows.json' file, and when I import it manually via the interface it works fine.
The Node-RED documentation for docker suggests the following line for starting Node-RED with a custom flow
$ docker run -it -p 1880:1880 -e FLOWS=my_flows.json nodered/node-red-docker
However when I try to do this the flow ends up empty.
I suspect this has to do something with the fact that the flow I'm trying to load is using the 'node-red-node-mongodb' plug-in, which is not installed by default.
How can I build a Node-RED image where the 'node-red-node-mongodb' is already installed?
If any more information is required, please ask.
UPDATE
I made the following Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
Then I build it with:
docker build -t testenvironment/nodered .
And started it with:
docker run -d -p 1880:1880 -e FLOWS=flows.json --name node-red testenvironment/nodered
But when I go to the Node-RED interface there is no flow. Also I don't see the MongoDB node in the sidebar.
The documentation on the Node-RED site includes instructions on how to customise a Docker image and add extra nodes. You can either do it by logging into the running container using docker exec and installing the node by hand with npm:
# Open a shell in the container
docker exec -it mynodered /bin/bash
# Once inside the container, npm install the nodes in /data
cd /data
npm install node-red-node-mongodb
exit
# Restart the container to load the new nodes
docker stop mynodered
docker start mynodered
Alternatively, you can extend the image by creating your own Dockerfile:
FROM nodered/node-red-docker
RUN npm install node-red-node-mongodb
And then build it with
docker build -t mynodered:<tag> .
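To also bake the flow file into the image, a minimal sketch (assuming flows.json sits next to the Dockerfile and Node-RED's default /data user directory) could be:

```dockerfile
FROM nodered/node-red-docker
# Install the extra node so the MongoDB nodes appear in the palette
RUN npm install node-red-node-mongodb
# Copy the flow file into the user directory and point Node-RED at it
COPY flows.json /data/flows.json
ENV FLOWS=flows.json
```

Build and run it the same way as above; since the flow is inside the image, the -e FLOWS=... flag is no longer needed at run time.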

How to stop the JIRA service in Docker in order to update JIRA?

I need to update JIRA to v7.3.3 in a Docker container.
When I try to start atlassian-jira-core-7.3.3-x64.bin (JIRA wasn't shut down) I see the error:
JIRA failed to shutdown. Please kill the process manually before proceeding.
Continue [c, Enter], Exit [e]
If I stop JIRA then the container stops too.
How can I update JIRA to the latest version?
I prefer to work with docker-compose (you configure your Docker images declaratively using .yml files). When you execute docker-compose down followed by docker-compose up -d, the containers are rebuilt and this kind of problem is avoided.
If you prefer to work without docker-compose, then delete the container manually by using
docker stop <container-name>
and then
docker rm <container-name>
and, if you also want to remove the underlying image,
docker rmi <image-id>
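For the docker-compose route, a minimal docker-compose.yml sketch might look like the following; the image tag, port, and volume path are illustrative assumptions, not details from the question:

```yaml
version: "3"
services:
  jira:
    # Hypothetical tag: set it to the JIRA version you are upgrading to
    image: atlassian/jira-core:7.3.3
    ports:
      - "8080:8080"
    volumes:
      # Keep JIRA's home outside the container so data survives
      # `docker-compose down` and the upgrade
      - jira-home:/var/atlassian/application-data/jira
volumes:
  jira-home:
```

With this in place, an upgrade becomes: change the image tag, then run docker-compose down followed by docker-compose up -d.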

Create docker container with Node.js/NPM preinstalled but no package.json

I am looking for a Docker image that is just some *nix flavor with NPM and Node.js installed.
This image
https://hub.docker.com/_/node/
requires that a package.json file is present, and the Docker build uses COPY to copy the package.json file over; it also looks for a Node.js script to start when the container is run.
...I just need a container to run a shell script using this technique:
docker exec mycontainer /path/to/test.sh
Which I discovered via:
Running a script inside a docker container using shell script
I don't need a package.json file or a Node.js start script, all I want is
a container image
Node.js and NPM installed
Does anyone know if there is a Docker image for Node.js/NPM that does not require a package.json file? Perhaps I should just use a plain old container image and add the code to install Node myself?
Alright, I tried to make this a simple question, unfortunately nobody could provide a simple answer...until now!
Instead of using this base image:
FROM node:5-onbuild
We use this instead:
FROM node:5
I read about onbuild and could not figure out what it's about, but it adds more than I needed for my use case.
The below code is in our Dockerfile:
# 1. start with this image as a base
FROM node:5
# 2. copy the script from real-life into the container (magic)
COPY script.sh /usr/src/app/
# 3. define container entry point which will run our script
ENTRYPOINT ["/bin/bash", "/usr/src/app/script.sh"]
You build the Docker image like so:
docker build -t foo .
Then you run the image like so, which will run the ENTRYPOINT:
docker run -it --rm foo
The container's stdout should stream to the terminal where you ran docker run, which is good (am I asking too much?).
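The script.sh referenced above is never shown; as a hypothetical placeholder (the filename and output here are assumptions), it could be as simple as:

```shell
#!/bin/bash
# Hypothetical script.sh: do whatever shell work the container exists for.
# Here it just proves the ENTRYPOINT ran and streamed to stdout.
echo "hello from script.sh running inside the container"
```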
