Azure self-hosted agent (Linux) does not run with "--once" parameter

I'd like to run the self-hosted Linux agent container only once per pipeline;
that is, when the pipeline is done I'd like the container to stop.
I saw that there is a parameter called "--once"
(please see this link, near the bottom of the page):
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
But when I start the container with --once after docker run, like this:
docker run --once --rm -it -e AZP_WORK=/home/working_dir -v /home/working_dir:/azp -e AZP_URL=https://dev.azure.com/xxxx -e AZP_TOKEN=nhxxxxxu76mlua -e AZP_AGENT_NAME=ios_dockeragent xxx.xxx.com:2000/azure_self_hosted_agent/agent:latest
I'm getting:
unknown flag: --once
See 'docker run --help'.
Also, if I put it in the Dockerfile as:
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh --once"]
I'm getting an error when trying to run the container:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./start.sh --once\": stat ./start.sh --once: no such file or directory": unknown
Where do I need to put this "--once" flag for a dockerized agent?

It's for the agent's run script, not for docker run. From the docs:
For agents configured to run interactively, you can choose to have the
agent accept only one job. To run in this configuration:
./run.sh --once
Agents in this mode will accept only one job and then spin down
gracefully (useful for running in Docker on a service like Azure
Container Instances).
So you need to add it in the bash script with which you configure the Docker image:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh", "--once"]

As far as I know, there's no way to pass it in from the outside; you have to go into the container and edit the start.sh file to add the --once argument to the appropriate line.
exec ./externals/node/bin/node ./bin/AgentService.js interactive --once & wait $!
cleanup
Side note: depending on your requirements, you might also take the opportunity to remove the undocumented web-server from start.sh.
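If you'd rather not edit the file by hand inside the container, here is a minimal sketch of automating that edit at image build time. It assumes your copy of start.sh contains exactly the AgentService.js line quoted above (without --once); verify that before relying on it:
# Hypothetical build-time patch: append --once to the agent invocation in start.sh
RUN sed -i 's|AgentService.js interactive|AgentService.js interactive --once|' ./start.sh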

Related

Configure kubectl to reach a cluster from Docker

I'm facing an interesting challenge: I'm trying to run kubectl in a Docker image with a proper configuration, to reach my cluster.
I've been able to create the image, kubecod:
FROM ubuntu:xenial
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
ENTRYPOINT ["kubectl"]
CMD ["version"]
When I run the image, the container is functioning correctly, giving me the expected answer.
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
However, my aim is to create an image with kubectl connecting to my cluster. Reading the docs, I need to add a configuration file at ~/.kube/config.
I've created another Dockerfile to build a second image, kubedock, with the proper config file and the creation of the requisite .kube directory:
FROM ubuntu:xenial
#setup a working directory
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
#create the directory
RUN mkdir .kube
#Copy the config file to the .kube folder
COPY ./config .kube
ENTRYPOINT ["kubectl"]
CMD ["cluster-info dump"]
However, when I run the new image in a container, I have the following message
me@os:~/_projects/kubedock$ docker run --name kubecont kubedock
Error: unknown command "cluster-info dump" for "kubectl"
Run 'kubectl --help' for usage.
Not sure what I'm missing.
Any hints are welcome.
Cheers.
It's not clear to me where your K8s cluster is running.
If you run your cluster in GKE you will need to run something like:
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE
This will create the ~/.config/gcloud tree of files in the user's home directory.
On AWS EKS you will need to set up ~/.aws/credentials and other IAM settings.
I suggest you post the details of where your K8s cluster is running and we can take it from there.
PS: maybe if you mount/copy the host home directory of a working user into the container, it will work.
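For instance, a minimal sketch of that idea, assuming the host user has a working ~/.kube/config and the kubedock image built above (the container runs as root, so its home directory is /root):
docker run --rm -v "$HOME/.kube/config:/root/.kube/config:ro" kubedock cluster-info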
The answer is to use
CMD ["cluster-info","dump"]
When there is a space in the kubectl command line, split it into separate, comma-separated array elements.
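For illustration, the corrected tail of the Dockerfile above:
ENTRYPOINT ["kubectl"]
# Exec (JSON-array) form: each whitespace-separated token is its own element,
# so kubectl receives "cluster-info" and "dump" as two separate arguments.
CMD ["cluster-info", "dump"]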

How to run feed-consumers and consumers multiple times for Kafka in Docker?

So I have this Dockerfile, and I want to run the feed-consumers and consumers multiple times, which I tried to do below. We have a Node.js application for the feed-consumers and the consumer, and we pass a user_level to it.
I just want to ask: is this the right approach?
FROM ubuntu:18.04
# Set Apt to noninteractive mode
ENV DEBIAN_FRONTEND noninteractive
# Install Helper Commands
ADD scripts/bin/* /usr/local/bin/
RUN chmod +x /usr/local/bin/*
RUN apt-install-and-clean curl \
build-essential \
git >> /dev/null 2>&1
RUN install-node-12.16.1
RUN mkdir -p /usr/src/app
COPY . /usr/src/app
WORKDIR /usr/src/app
#RUN yarn init-cache
#RUN yarn init-temp
#RUN yarn init-user
RUN yarn install
RUN yarn build
RUN node ./feedsconsumer/consumer.js user_level=0
RUN for i in {1..10}; do node ./feedsconsumer/consumer.js user_level=1; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=2; done
RUN for i in {1..20}; do node ./feedsconsumer/consumer.js user_level=3; done
RUN for i in {1..30}; do node ./feedsconsumer/consumer.js user_level=4; done
RUN for i in {1..40}; do node ./feedsconsumer/consumer.js user_level=5; done
RUN for i in {1..10}; do node ./consumer/consumer.js; done
ENTRYPOINT ["tail", "-f", "/dev/null"]
Or is there any other way to do this?
Thanks
A container runs exactly one process. Your container's is:
ENTRYPOINT ["tail", "-f", "/dev/null"]
This translates to "do absolutely nothing, in a way that's hard to override". I typically recommend using CMD over ENTRYPOINT, and the main container command shouldn't ever be an artificial "do nothing but keep the container running" command.
Before that, you're trying to RUN the process(es) that are the main container process. The RUN only happens during the image build phase, the running process(es) aren't persisted in the image, the build will block until these processes complete, and they can't connect to other containers or data stores. These are the lines you want to be the CMD.
A container only runs one process, but you can run multiple containers off the same image. It's somewhat easier to add parameters by setting environment variables than by adjusting the command line (which you'd have to replace wholesale), so in your code look for process.env.USER_LEVEL. Also make sure the process stays in the foreground and doesn't use a package to daemonize itself.
Then the final part of the Dockerfile just needs to set a default CMD that launches one copy of your application:
...
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build
CMD node ./feedsconsumer/consumer.js
Now you can start a single container running this process:
docker build -t my/consumer .
docker run -d --name consumer my/consumer
And you can start multiple containers to run the whole set of them:
for user_level in `seq 5`; do
  for i in `seq 10`; do
    docker run -d \
      --name "feed-consumer-$user_level-$i" \
      -e "USER_LEVEL=$user_level" \
      my/consumer
  done
done
for i in `seq 10`; do
  docker run -d --name "consumer-$i" \
    my/consumer \
    node ./consumer/consumer.js
done
Notice this last invocation overrides the CMD to run the alternate script; this becomes a more contorted invocation if it needs to override ENTRYPOINT instead. (docker run --entrypoint node my/consumer ./consumer/consumer.js)
If you're looking forward to cluster environments like Kubernetes, it's often straightforward to run multiple identical copies of a container, which is what you're trying to do here. A Kubernetes Deployment object has a replicas: count, and you can kubectl scale deployment feed-consumer-5 --replicas=40 to change what's in the question, or potentially configure a HorizontalPodAutoscaler to set it dynamically based on the topic length (this last is involved, but possible and rewarding).
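If you go that way, here is a rough sketch of the Kubernetes equivalent of the loop above, one Deployment per consumer type (names and counts are illustrative, and my/consumer would have to be pushed to a registry the cluster can pull from):
kubectl create deployment feed-consumer-5 --image=my/consumer
kubectl set env deployment/feed-consumer-5 USER_LEVEL=5
kubectl scale deployment feed-consumer-5 --replicas=40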

Unable to ssh localhost within a running Docker container

I'm building a Docker image for an application which requires ssh-ing into localhost (i.e. ssh user@localhost).
I'm working on an Ubuntu desktop machine and started with a basic ubuntu:16.04 container.
Following is the content of my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
openjdk-8-jdk \
ssh && \
groupadd -r custom_group && useradd -r -g custom_group -m user1
USER user1
RUN ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N "" && \
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Then I build this container using the command:
docker build -t test-container .
And run it using:
docker run -it test-container
The container opens with the following prompt and the keys are generated correctly to enable ssh into localhost:
user1@0531c0f71e0a:/$
user1@0531c0f71e0a:/$ cd ~/.ssh/
user1@0531c0f71e0a:~/.ssh$ ls
authorized_keys  id_rsa  id_rsa.pub
Then I ssh into localhost and am greeted by this error:
user1@0531c0f71e0a:~$ ssh user1@localhost
ssh: connect to host localhost port 22: Cannot assign requested address
Is there anything I'm doing wrong, or any additional network settings that need to be configured? I just want to ssh into localhost within the running container.
First you need to install the ssh server in the image building script:
RUN apt-get update && apt-get install -y openssh-server
Then you need to start the ssh server when the container starts, probably in the last lines of the Dockerfile (you must keep one foreground process running to keep the container alive; note that starting the service in a RUN instruction would only run it during the image build, not in the final container). On Ubuntu the privilege-separation directory must also exist:
RUN mkdir -p /var/run/sshd
USER root
CMD ["/usr/sbin/sshd", "-D"]
Then, on the host:
# start a container from the image
docker run -d --name my-ssh-container-name-01 \
-v /opt/local/dir:/opt/container/dir my-image-01
As @user2915097 stated in the OP comments, this was due to the ssh instance in the container attempting to connect to the host using IPv6.
Forcing the connection over IPv4 using -4 solved the issue:
$ docker run -it ubuntu ssh -4 user@hostname
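If you'd rather bake the IPv4 preference into the image instead of passing -4 on every invocation, a sketch (assuming OpenSSH's standard client configuration path):
# Hypothetical: make the ssh client prefer IPv4 for all users in the image
RUN echo "AddressFamily inet" >> /etc/ssh/ssh_config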
For Docker Compose I was able to add the following to my .yml file:
network_mode: "host"
I believe the equivalent in Docker is:
--net=host
Documentation:
https://docs.docker.com/compose/compose-file/compose-file-v3/#network_mode
https://docs.docker.com/network/#network-drivers
host: For standalone containers, remove network isolation between the
container and the Docker host, and use the host’s networking directly.
See use the host network.
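For example, a sketch (my-image is a placeholder; with host networking, localhost inside the container is the host's loopback interface, so it is the host's sshd that answers):
docker run --rm -it --net=host my-image ssh user@localhost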
I also faced this error today; here's how to fix it.
If (and only if) you are facing this error inside a running container that isn't in production, do this:
docker exec -it -u 0 [your container id here] /bin/bash
Then, once you've entered the container as root, run this:
service ssh start
Then you can run your ssh-based commands.
Of course it is best practice to do this in your Dockerfile before all of the above, but there's no need to sweat it if you are not done with your image build process just yet.

Daemonized buildbot start

I'm trying to compose the simplest possible Docker buildbot master image that runs buildbot start via the ENTRYPOINT/CMD Dockerfile instructions.
I've tried to use a lot of combinations of dumb-init, gosu and exec, but with no success.
The situation is as follows:
When I try to run the daemonized buildbot with the command docker run -d -v $local/vol/bldbot/master:/var/lib/buildbot buildbot-master-test, the container starts successfully but then terminates abruptly. The log looks as follows:
[timestamp] [-] Log opened.
[timestamp] [-] twistd 16.0.0 (/usr/bin/python 2.7.12) starting up.
[timestamp] [-] reactor class: twisted.internet.epollreactor.EPollReactor.
[timestamp] [-] Starting BuildMaster -- buildbot.version: 0.9.2
[timestamp] [-] Loading configuration from '/var/lib/buildbot/master.cfg'
[timestamp] [-] Setting up database with URL 'sqlite:/state.sqlite'
[timestamp] [-] setting database journal mode to 'wal'
[timestamp] [-] doing housekeeping for master 1 c8aa8b0d5ca3:/var/lib/buildbot
[timestamp] [-] adding 1 new changesources, removing 0
[timestamp] [-] adding 1 new builders, removing 0
[timestamp] [-] adding 2 new schedulers, removing 0
[timestamp] [-] No web server configured on this master
[timestamp] [-] adding 1 new workers, removing 0
[timestamp] [-] PBServerFactory starting on 9989
[timestamp] [-] Starting factory
[timestamp] [-] BuildMaster is running
When I run the container in interactive mode with the command docker run --rm -it -v $local/vol/bldbot/master:/var/lib/buildbot buildbot-master-test /bin/sh and then run buildbot start, everything works like a charm.
I've already studied the content of the official buildbot master Docker image, i.e. buildbot/buildbot-master. I see that the authors decided to use the command exec twistd -ny $B/buildbot.tac in start_buildbot.sh, not their own buildbot start.
So the question is: how do I compose the ENTRYPOINT/CMD instructions in the Dockerfile so that it simply runs buildbot start?
ADDENDUM 1
Dockerfile content
FROM alpine:3.4
ENV BASE_DIR=/var/lib/buildbot SRC_DIR=/usr/src/buildbot
COPY start $SRC_DIR/
RUN \
echo @testing http://nl.alpinelinux.org/alpine/edge/testing >> /etc/apk/repositories && \
echo @community http://nl.alpinelinux.org/alpine/edge/community >> /etc/apk/repositories && \
apk add --no-cache \
python \
py-pip \
py-twisted \
py-cffi \
py-cryptography@community \
py-service_identity@community \
py-sqlalchemy@community \
gosu@testing \
dumb-init@community \
py-jinja2 \
tar \
curl && \
# install pip dependencies
pip install --upgrade pip setuptools && \
pip install "buildbot" && \
rm -r /root/.cache
WORKDIR $BASE_DIR
RUN \
adduser -D -s /bin/sh bldbotmaster && \
chown bldbotmaster:bldbotmaster .
VOLUME $BASE_DIR
CMD ["dumb-init", "/usr/src/buildbot/start","buildbot","master"]
ADDENDUM 2
start script content
#!/bin/sh
set -e
BASE_DIR=/var/lib/buildbot
if [[ "$1" = 'buildbot' && "$2" = 'master' ]]; then
if [ -z "$(ls -A "$BASE_DIR/master.cfg" 2> /dev/null)" ]; then
gosu bldbotmaster buildbot create-master -r $BASE_DIR
gosu bldbotmaster cp $BASE_DIR/master.cfg.sample $BASE_DIR/master.cfg
fi
exec gosu bldbotmaster buildbot start $BASE_DIR
fi
exec "$#"
Buildbot's bootstrap is based on Twisted's ".tac" files, which are expected to be started using twistd -y buildbot.tac.
The buildbot start script is actually just a convenience wrapper around twistd: it runs twistd and then watches the logs to confirm that buildbot started successfully. There is no value added beyond this log watching, so it is not strictly mandatory to start buildbot with buildbot start.
You can just start it with twistd -y buildbot.tac.
As you pointed out, the official Docker image starts buildbot with twistd -ny buildbot.tac.
If you look at the help of twistd, -y means the Twisted daemon will run a .tac file, and -n means it won't daemonize.
This is because Docker does process watching by itself and does not want its entrypoint to daemonize.
The buildbot start command also has a --nodaemon option, which really just execs twistd -ny.
So for your Dockerfile you can use either twistd -ny or buildbot start --nodaemon; they will work the same, as sketched below.
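For example, either of these CMD lines keeps the master in the foreground (a sketch; the buildbot.tac location and the base directory are the ones used elsewhere in this question):
CMD ["twistd", "-ny", "buildbot.tac"]
# or, equivalently:
CMD ["buildbot", "start", "--nodaemon", "/var/lib/buildbot"]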
Another Docker-specific difference is the buildbot.tac itself: it configures the twistd logs to go to stdout instead of twisted.log.
This is because Docker's design expects logs on stdout, so that you can configure any fancy cloud log forwarder independently of the application's technology.
I've studied the Docker reference and the Buildbot manual again and found a hint.
There is a remark alongside an nginx example:
Do not pass a service x start command to a detached container. For example, this command attempts to start the nginx service.
$ docker run -d -p 80:80 my_image service nginx start
This succeeds in starting the nginx service inside the container. However, it fails the detached container paradigm in that, the root process (service nginx start) returns and the detached container stops as designed. As a result, the nginx service is started but could not be used. Instead, to start a process such as the nginx web server do the following:
$ docker run -d -p 80:80 my_image nginx -g 'daemon off;'
On the other hand, there is the --nodaemon option:
The --nodaemon option instructs Buildbot to skip daemonizing. The process will start in the foreground. It will only return to the command line when it is stopped.
Both of the above trails lead to the line
exec gosu bldbotmaster buildbot start --nodaemon $BASE_DIR
in the start script, which solves at least the abrupt-termination phenomenon.

Docker cannot run on build when running container with a different user

I don't know the specifics of why the node application does not run. Basically, I added a Dockerfile to a Node.js app; here it is:
FROM node:0.10-onbuild
RUN mv /usr/src/app /ghost && useradd ghost --home /ghost && \
cd /ghost
ENV NODE_ENV production
VOLUME ["/ghost/content"]
WORKDIR /ghost
EXPOSE 2368
CMD ["bash", "start.bash"]
Where start.bash looks like this:
#!/bin/bash
GHOST="/ghost"
chown -R ghost:ghost /ghost
su ghost << EOF
cd "$GHOST"
NODE_ENV=${NODE_ENV:-production} npm start
EOF
I usually run docker like so:
docker run --name ghost -d -p 80:2368 user/ghost
With that I cannot see what is going on, so I decided to run it like this:
docker run --name ghost -it -p 80:2368 user/ghost
And I got this output:
> ghost#0.5.2 start /ghost
> node index
It seems to be starting, but when I check the status of the container with docker ps -a, it has stopped.
Here is the repo for it, but the start.bash and Dockerfile there are different, because I haven't committed the latest versions, since both are not working:
JoeyHipolito/Ghost
I managed to make it work. There is no error in the start.bash file nor in the Dockerfile; it's just that I had failed to rebuild the image.
With that said, you can check out the final Dockerfile and start.bash file in my repository:
Ghost-blog__Docker (https://github.com/joeyhipolito/ghost)
At the time I write this answer, you can see it in the feature-branch, feature/dockerize.
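For reference, the usual rebuild-and-recreate cycle that resolves this kind of stale-image problem (a sketch; the tag and container name match the docker run commands earlier in the question):
docker build -t user/ghost .
docker rm -f ghost
docker run --name ghost -d -p 80:2368 user/ghost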
