configure kubectl to reach cluster on docker - linux

I'm facing an interesting challenge: I'm trying to run kubectl in a Docker image with a proper configuration, so it can reach my cluster.
I've been able to create the image, kubecod:
FROM ubuntu:xenial
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
ENTRYPOINT ["kubectl"]
CMD ["version"]
When I run the image, the container functions correctly, giving me the expected answer.
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.2", GitCommit:"f5743093fd1c663cb0cbc89748f730662345d44d", GitTreeState:"clean", BuildDate:"2020-09-16T13:41:02Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
However, my aim is to create an image in which kubectl connects to my cluster. Reading the docs, I need to add a configuration file at ~/.kube/config.
I've created another Dockerfile to build another image, kubedock, with the proper config file and the creation of the requisite directory, .kube
FROM ubuntu:xenial
#setup a working directory
WORKDIR /project
RUN apt-get update && apt-get install -y \
curl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
#create the directory
RUN mkdir .kube
#Copy the config file to the .kube folder
COPY ./config .kube
ENTRYPOINT ["kubectl"]
CMD ["cluster-info dump"]
However, when I run the new image in a container, I get the following message:
me@os:~/_projects/kubedock$ docker run --name kubecont kubedock
Error: unknown command "cluster-info dump" for "kubectl"
Run 'kubectl --help' for usage.
Not sure what I'm missing.
Any hints are welcome.
Cheers.

It's not clear to me where your K8s cluster is running.
If you run your cluster in GKE you will need to run something like:
gcloud container clusters get-credentials $CLUSTER_NAME --zone $CLUSTER_ZONE
This will create the ~/.config/gcloud tree of files in the user's home directory.
On AWS EKS you will need to set up ~/.aws/credentials and other IAM settings.
I suggest you post the details of where your K8s cluster is running and we can take it from there.
PS: maybe if you mount/copy the home directory of a working user from the host into the container, it will work.
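For example, a minimal sketch of that mount approach, assuming the kubedock image from the question and that the container runs as root (so kubectl looks for /root/.kube/config):
# mount the host's kubeconfig read-only into root's home inside the container
docker run --rm -v "$HOME/.kube/config:/root/.kube/config:ro" kubedock cluster-info dump
Note that for GKE or EKS the kubeconfig may reference external credential helpers (gcloud, aws), which would then also need to exist inside the container.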

The answer is to use
CMD ["cluster-info","dump"]
In other words, when there is a space in the kubectl command line, split it into comma-separated arguments.
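In the exec (JSON array) form used by ENTRYPOINT and CMD, each whitespace-separated token must be its own array element; no shell splits the string for you. A sketch of the corrected tail of the Dockerfile:
# kubectl is the fixed executable; CMD supplies the default, overridable arguments
ENTRYPOINT ["kubectl"]
CMD ["cluster-info", "dump"]
With this, docker run --name kubecont kubedock runs kubectl cluster-info dump, and any arguments appended to docker run (e.g. get pods) replace the CMD while keeping the kubectl entrypoint.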

Related

How to push docker image from local to azure environment?

I have a docker image and I can run it as a container. But I have to bind two local folders to the docker container, and I do it like this:
docker run -d -p 80:80 --name cntr-apache2
And then I go to localhost:80, and the app is running.
But now I want to deploy it to the azure environment.
So I logged in to the Azure portal: azure login.
And in Docker I logged in: docker login webadres.azurecr.io.
And then I did a push: docker push webaddres.azurecr.io/webaddress:latest.
And I got this response:
latest: digest: sha256:7eefd5631cbe8907afb4cb941f579a7e7786ab36f52ba7d3eeadb4524fc5dddd size: 4909
And I made a web app for docker.
But now if I go to the url: https://docker-webaddress.azurewebsites.net
the index is empty. And that is logical, because I didn't bind the two directories to the docker container in Azure.
So my question is:
How do I bind the two directories with Azure Docker?
Thank you
And this is my dockerfile:
FROM php:7.3-apache
# Copy virtual host into container
COPY 000-default.conf /etc/apache2/sites-available/000-default.conf
# Enable rewrite mode
RUN a2enmod rewrite
# Install necessary packages
RUN apt-get update && \
apt-get install \
libzip-dev \
wget \
git \
unzip \
-y --no-install-recommends
# Install PHP Extensions
RUN docker-php-ext-install zip pdo_mysql
COPY ./install-composer.sh ./
COPY ./php.ini /usr/local/etc/php/
EXPOSE 80
# Cleanup packages and install composer
RUN apt-get purge -y g++ \
&& apt-get autoremove -y \
&& rm -r /var/lib/apt/lists/* \
&& rm -rf /tmp/* \
&& sh ./install-composer.sh \
&& rm ./install-composer.sh
WORKDIR /var/www
RUN chown -R www-data:www-data /var/www
CMD ["apache2-foreground"]
One option is to use App Service persistent storage.
You need to enable the functionality first:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE
Then create a mapping in a Docker Compose file even if you have a single container to deploy. The {WEBAPP_STORAGE_HOME} environment variable will point to /home in the App Service VM. You can use the Advanced Tools (Kudu) to create the folders and copy the needed files.
wordpress:
  image: <image name:tag>
  volumes:
    - ${WEBAPP_STORAGE_HOME}/folder1:/etc/apache2/sites-enabled/
    - ${WEBAPP_STORAGE_HOME}/folder2:/var/www/html/
Documentation
Another option is to mount Azure Storage.
Note that mounts using Blobs are read-only, while File Shares are read-write.
Start by creating a storage account and a File Share.
Create the mount. I find that it's easier using the Portal. See how in the doc link below.
az storage share-rm create \
--resource-group $resourceGroupName \
--storage-account $storageAccountName \
--name $shareName \
--quota 1024 \
--enabled-protocols SMB \
--output none
Documentation
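If you prefer to script the mount instead of using the Portal, a sketch with the az CLI (the --custom-id and --mount-path values below are placeholder assumptions):
az webapp config storage-account add \
  --resource-group $resourceGroupName \
  --name $webAppName \
  --custom-id mount01 \
  --storage-type AzureFiles \
  --account-name $storageAccountName \
  --share-name $shareName \
  --access-key $storageAccountKey \
  --mount-path /var/www/html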

azure self hosted agent linux does not run with "--once" parameter

I'd like to run the self-hosted Linux container only once per pipeline;
that means when the pipeline is done I'd like the container to stop.
I saw that there is a parameter called "--once";
please see this link, at the bottom:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
But when I start the container like this, with the --once after the run:
docker run --once --rm -it -e AZP_WORK=/home/working_dir -v /home/working_dir:/azp -e AZP_URL=https://dev.azure.com/xxxx -e AZP_TOKEN=nhxxxxxu76mlua -e AZP_AGENT_NAME=ios_dockeragent xxx.xxx.com:2000/azure_self_hosted_agent/agent:latest
I'm getting:
unknown flag: --once
See 'docker run --help'.
Also, if I put it in the Dockerfile as
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh --once"]
I'm getting an error when trying to run the container:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./start.sh --once\": stat ./start.sh --once: no such file or directory": unknown
Where do I need to set this "--once" option for the dockerized agent?
It's for the agent's run command, not docker run. From the docs:
For agents configured to run interactively, you can choose to have the
agent accept only one job. To run in this configuration:
./run.sh --once
Agents in this mode will accept only one job and then spin down
gracefully (useful for running in Docker on a service like Azure
Container Instances).
So you need to add it in the bash script with which you configure the Docker image:
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
CMD ["./start.sh", "--once"]
As far as I know, there's no way to pass it in from the outside; you have to go into the container and edit the start.sh file to add the --once argument to the appropriate line.
exec ./externals/node/bin/node ./bin/AgentService.js interactive --once & wait $!
cleanup
Side note: depending on your requirements, you might also take the opportunity to remove the undocumented web-server from start.sh.

Azure Container generates error '503' reason 'site unavailable'

The issue
I have built a docker image on my local machine running windows 10
running docker-compose build, docker-compose up -d and docker-compose logs -f generates the expected results (no errors)
the app runs correctly by running winpty docker container run -i -t -p 8000:8000 --rm altf1be.plotly.docker-compose:2019-12-17
I upload the docker image on a private Azure Container Registry
I deploy the web app based on the docker image Azure Portal > Container registry > Repositories > altf1be.plotly.docker-compose > v2019-12-17 > context-menu > deploy to web app
I run the web app and I get The service is unavailable
What is wrong with my method?
Thank you in advance for the time you will invest on this issue
docker-compose.yml
version: '3.7'
services:
  twikey-plot_ly_service:
    # container_name: altf1be.plotly.docker-container-name
    build: .
    image: altf1be.plotly.docker-compose:2019-12-17
    command: gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
    ports:
      - 8000:8000
    env_file: .env.staging
.env.staging
apiUrl=https://api.beta.alt-f1.be
authorizationUrl=/api/auth/authorization/code
serverUrl=https://dunningcashflow-api.alt-f1.be
transactionFeedUrl=/creditor/tx
api_token=ANICETOKEN
Dockerfile
# read the Dockerfile reference documentation
# https://docs.docker.com/engine/reference/builder
# build the docker
# docker build -t altf1be.plotly.docker-compose:2019-12-17 .
# https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-custom-docker-image#use-a-docker-image-from-any-private-registry-optional
# Use the docker images used by Microsoft on Azure
FROM mcr.microsoft.com/oryx/python:3.7-20190712.5
LABEL Name=altf1.be/plotly Version=1.19.0
LABEL maintainer="abo+plotly_docker_staging@alt-f1.be"
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
# copy the code from the local drive to the docker
ADD . /code/
# non interactive front-end
ARG DEBIAN_FRONTEND=noninteractive
# update the software repository
ENV SSH_PASSWD 'root:!astrongpassword!'
RUN apt-get update && apt-get install -y \
apt-utils \
# enable SSH
&& apt-get install -y --no-install-recommends openssh-server \
&& echo "$SSH_PASSWD" | chpasswd
RUN chmod u+x /code/init_container.sh
# update the python packages and libraries
RUN pip3 install --upgrade pip
RUN pip3 install --upgrade setuptools
RUN pip3 install --upgrade wheel
RUN pip3 install -r requirements.txt
# copy sshd_config file. See https://man.openbsd.org/sshd_config
COPY sshd_config /etc/ssh/
EXPOSE 8000 2222
ENV PORT 8000
ENV SSH_PORT 2222
# install dependencies
ENV ACCEPT_EULA=Y
ENV APPENGINE_INSTANCE_CLASS=F2
ENV apiUrl=https://api.beta.alt-f1.be
ENV serverUrl=https://dunningcashflow-api.alt-f1.be
ENV DOCKER_REGISTRY altf1be.azurecr.io
ENTRYPOINT ["/code/init_container.sh"]
/code/init_container.sh
gunicorn --config=app/conf/gunicorn.conf.docker.staging.py app.webapp:server
app/conf/gunicorn.conf.docker.staging.py
# -*- coding: utf-8 -*-
workers = 1
# print("workers: {}".format(workers))
bind = '0.0.0.0'
timeout = 600
log_level = "debug"
reload = True
print(
f"workers={workers} bind={bind} timeout={timeout} --log-level={log_level} --reload={reload}"
)
(screenshots: container settings; application settings; web app running, showing 'the service is unavailable'; Kudu, showing 'the service is unavailable'; Kudu HTTP pings on port 8000 while the app was not running)
ERROR - Container altf1be-plotly-docker_0_ee297002 for site altf1be-plotly-docker has exited, failing site start
ERROR - Container altf1be-plotly-docker_0_ee297002 didn't respond to HTTP pings on port: 8000, failing site start. See container logs for debugging.
What I can see is that you use the wrong environment variable: it should be WEBSITES_PORT; you are missing the S after WEBSITE. You can add it and try to deploy the image again.
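For example, a sketch using the same az appsettings command as in the answer above, with 8000 matching the port the Dockerfile exposes:
az webapp config appsettings set --resource-group <group-name> --name <app-name> --settings WEBSITES_PORT=8000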
And I think Azure Web App will not set the environment variables for you as the env_file option in your docker-compose file does. So I suggest you create the image through docker build and test it locally, setting the environment variables through -e. When the image runs well, you can push it to ACR and deploy it from ACR to the Web App, again with the environment variables set in the Web App.
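A sketch of that local test, reusing the variable names and values from your .env.staging:
docker build -t altf1be.plotly.docker-compose:2019-12-17 .
docker run -it -p 8000:8000 \
  -e apiUrl=https://api.beta.alt-f1.be \
  -e serverUrl=https://dunningcashflow-api.alt-f1.be \
  altf1be.plotly.docker-compose:2019-12-17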
Or you can still use the docker-compose file to deploy your image to the Web App: instead of env_file and build, use environment and image once the image is in ACR, as sketched below.
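A minimal sketch of that compose file, assuming the image was pushed to the altf1be.azurecr.io registry referenced in your Dockerfile:
version: '3.7'
services:
  twikey-plot_ly_service:
    image: altf1be.azurecr.io/altf1be.plotly.docker-compose:2019-12-17
    ports:
      - 8000:8000
    environment:
      - apiUrl=https://api.beta.alt-f1.be
      - serverUrl=https://dunningcashflow-api.alt-f1.be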

Unable to ssh localhost within a running Docker container

I'm building a Docker image for an application which requires ssh'ing into localhost (i.e. ssh user@localhost).
I'm working on a Ubuntu desktop machine and started with a basic ubuntu:16.04 container.
Following is the content of my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y \
openjdk-8-jdk \
ssh && \
groupadd -r custom_group && useradd -r -g custom_group -m user1
USER user1
RUN ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N "" && \
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Then I build this container using the command:
docker build -t test-container .
And run it using:
docker run -it test-container
The container opens with the following prompt and the keys are generated correctly to enable ssh into localhost:
user1@0531c0f71e0a:/$
user1@0531c0f71e0a:/$ cd ~/.ssh/
user1@0531c0f71e0a:~/.ssh$ ls
authorized_keys id_rsa id_rsa.pub
Then I ssh into localhost and am greeted by the error:
user1@0531c0f71e0a:~$ ssh user1@localhost
ssh: connect to host localhost port 22: Cannot assign requested address
Is there anything I'm doing wrong or any additional network settings that needs to be configured? I just want to ssh into localhost within the running container.
First you need to install the ssh server in the image building script:
RUN apt-get update && apt-get install -y openssh-server
Then you need to start the ssh server:
RUN /etc/init.d/ssh start
or probably even in the last lines of the Dockerfile (you must have one binary instantiated to keep the container running...):
USER root
CMD [ "sh", "/etc/init.d/ssh", "start"]
On the host then:
# init a container from the image
docker run -d --name my-ssh-container-name-01 \
  -v /opt/local/dir:/opt/container/dir my-image-01
As @user2915097 stated in the OP comments, this was because ssh in the container was attempting to connect to the host using IPv6.
Forcing connection over IPv4 using -4 solved the issue.
$ docker run -it ubuntu ssh -4 user@hostname
For Docker Compose I was able to add the following to my .yml file:
network_mode: "host"
I believe the equivalent in Docker is:
--net=host
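For instance, a sketch with the test-container image from the question (host networking removes the container's network isolation, so localhost then refers to the host):
docker run --net=host -it test-container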
Documentation:
https://docs.docker.com/compose/compose-file/compose-file-v3/#network_mode
https://docs.docker.com/network/#network-drivers
host: For standalone containers, remove network isolation between the
container and the Docker host, and use the host’s networking directly.
See use the host network.
I also faced this error today; here's how to fix it.
If (and only if) you are facing this error inside a running container that isn't in production, do this:
docker exec -it -u 0 [your container id here] /bin/bash
Then, once you've entered the container as root, run this:
service ssh start
Then you can run your ssh-based commands.
Of course it is best practice to do this in your Dockerfile before all of these steps, but there's no need to sweat it if you are not done with your image build process just yet.

Docker container not showing volume mounted - Access issue

root@centdev01$ grep -e CMD -e RUN Dockerfile
RUN apt-get update
RUN apt-get -y install ruby ruby-dev build-essential redis-tools
RUN gem install --no-rdoc --no-ri sinatra json redis
RUN mkdir -p /opt/webapp
RUN chmod 777 /opt/webapp
CMD ["/opt/webapp/bin/webapp"]
root@centdev01$ docker build -t "alok87/sinatra" .
root@centdev01$ docker run -d -p 80 --name ubunsin10 -v $PWD/webapp:/opt/webapp alok87/sinatra
25ekgjalgjal25rkg
root@centdev01$ docker logs ubunsin10
/opt/webapp/bin/webapp: Permission Denied - /opt/webapp/bin/webapp (Errno::EACCES)
The issue is that the volume is mounted in the container, but the container has no access to the mounted content. I can cd to /opt/webapp/bin but I cannot ls /opt/webapp/bin.
Please suggest how it can be fixed. The host mount has all files with 777 permissions.
Docker processes run with the svirt_lxc_net_t SELinux type by default. By default these processes are not allowed to access your content in /var, /root and /home.
You have to specify a suitable type label for your host folder to allow the container processes to access the content. You can do this by giving the $PWD/webapp folder the type label svirt_sandbox_file_t.
chcon -Rt svirt_sandbox_file_t $PWD/webapp
After this, you can access the folder from within the container. Read more about it in Dan Walsh's article - Bringing new security features to Docker
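As an aside, a sketch of the alternative newer Docker versions offer: the z volume suffix asks Docker to apply a shared SELinux label to the mounted content itself, avoiding the manual chcon:
# the :z suffix relabels $PWD/webapp so container processes may read it
docker run -d -p 80 --name ubunsin10 -v $PWD/webapp:/opt/webapp:z alok87/sinatra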
