Actually I'm trying to deploy Kubernetes via Rancher on a single server.
I created a new Cluster and added a new node.
But after a while, an error occurred:
This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready.
[controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [172.26.116.42]: Get https://localhost:6443/healthz: dial tcp [::1]:6443: connect: connection refused, log: standard_init_linux.go:190: exec user process caused "permission denied"
And when I check my Docker containers, one of them is always restarting: rancher/hyperkube:v1.11.3-rancher1.
I run docker logs my_container_id
and it shows standard_init_linux.go:190: exec user process caused "permission denied"
On the cloud vm, the config is:
OS: Ubuntu 18.04.1 LTS
Docker Version: 18.06.1-ce
Rancher: Rancher v2
Do you have any idea about this error?
Thanks a lot ;)
What is your CPU architecture?
Please run:
uname --all
or
docker info | grep -i "Architecture"
to check this.
Rancher is only supported on x86.
Finally, I called the VM sub-contractor, and it turned out they had created the VM with a noexec /var partition.
After a remount, it worked.
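For anyone hitting the same thing, a minimal sketch of that check and remount, assuming /var is the partition mounted noexec:
# Check whether /var is mounted with the noexec flag
mount | grep ' /var '
# Remount it with exec enabled for the current boot
sudo mount -o remount,exec /var
# To make it permanent, drop the noexec option from the /var line in /etc/fstab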
Related
While trying to run a Podman container on a Linux server (RHEL 8), I am facing the issue below.
WARN[0000] error mounting subscriptions, skipping entry in /usr/share/containers/mounts.conf: getting host subscription data: failed to read subscriptions from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets/redhat.repo: permission denied
Execution command: podman run -d --name redis_server -p 6377:6377 redis
I have followed these steps to run the container.
Could you please suggest a solution to this issue?
Giving a reference, as this solved my issue. Quoting the answer:
I solved my specific problem. The original user account I was using had an empty mounts.conf file (copy the one in /usr/share/containers).
Use touch ~/.config/containers/mounts.conf
1874621 – Rootless Podman Unable to Use Host Subscriptions
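A minimal sketch of that workaround, reusing the container name and ports from the question above:
# Create an empty per-user mounts.conf so rootless Podman stops trying to
# mount the host subscription entries from /usr/share/containers/mounts.conf
mkdir -p ~/.config/containers
touch ~/.config/containers/mounts.conf
# Re-run the container from the question
podman run -d --name redis_server -p 6377:6377 redis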
I'm trying to run a container job on my self-hosted agent, pulling a container image from my ACR.
When you use a Microsoft-hosted agent, this code works:
pool:
  vmImage: 'ubuntu-18.04'
container:
  image: myprivate.azurecr.io/windowsservercore:1803
  endpoint: my_acr_connection
So what I want is to use the following code:
pool: Default
container:
  image: myprivate.azurecr.io/windowsservercore:1803
  endpoint: my_acr_connection
But I get this error in the "Initialize job" step when I run the pipeline:
##[error]File not found: 'docker'
I only have one agent in the Default agent pool. My agent was created following this documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#linux
I guess this is something related to the agent capabilities, but I wanted to know whether what I'm trying is actually feasible and, if so, whether you could give me some advice on how to resolve my issue.
Thank you in advance.
EDIT:
I was able to resolve this error: I just needed to install Docker inside the container, via the Dockerfile.
Now I receive another error, in the pipeline's "Initialize containers" step:
Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/__a/externals/node/bin/node": stat /__a/externals/node/bin/node: no such file or directory: unknown
So I had the very same problem of:
"Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/__a/externals/node/bin/node": stat /__a/externals/node/bin/node: no such file or directory: unknown"
In my case, I was using a self-hosted agent running on WSL1 Ubuntu on Windows 10, with Docker4Windows in WSL2. The problem is that WSL1 with Docker has an issue with volumes mounted as directories: they have to be defined with the Windows path (/c/users....) instead of the local Linux path (/home/user/linuxuser/...).
The problem was resolved after moving my agent to WSL2, where the mounting problem does not happen.
Regardless of the WSL issue, the underlying problem is that the volume /__a/externals cannot be mounted in the container on the agent node. One thing to try is checking the failed step's logs in the Azure DevOps pipeline and looking at the docker commands that failed. You will see something like this:
/usr/bin/docker create --name .... -v "/home/user/agent_directory/externals":"/__a/externals":ro ... CONTAINER_IMAGE
docker start b39b96fb ....
Error response from daemon: OCI runtime create failed ...
So you can try launching your image manually, mounting the same volumes, to see whether they are mounted correctly and to pin down the Docker problem (it can be a permission issue or some other Linux-related Docker problem).
docker run --rm -ti -v "/home/user/agent_directory/externals":"/__a/externals":ro mcr.microsoft.com/dotnet/core/sdk:3.1 bash
(In my case the image was dotnet/core but the problem may occur with any image).
The issue here was that the Default pool might not have Docker installed; once Docker was installed on the Default pool's agent, the error was gone.
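For reference, a hedged sketch of installing Docker on an Ubuntu-based self-hosted agent host (package names vary by distribution):
# Install Docker from the distribution repositories
sudo apt-get update && sudo apt-get install -y docker.io
# Let the account the agent runs under talk to the daemon (re-login required)
sudo usermod -aG docker $USER
# The "File not found: 'docker'" check should now pass
docker --version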
I am stuck with the OCI runtime error too:
"Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/__a/externals/node/bin/node": stat /__a/externals/node/bin/node: no such file or directory: unknown"
When I type the following in Azure Cloud Shell
PS Azure:\> systemctl start docker.service
I get the following error message:
Error getting authority: Error initializing authority: Could not connect: No such file or directory (g-io-error-quark, 1) Failed to connect to bus: No such file or directory
How can I resolve it?
Thank you in advance
Azure Cloud Shell offers a browser-accessible, pre-configured shell experience for managing Azure resources without the overhead of installing, versioning, and maintaining a machine yourself. You could read the supported features and tools here.
Moreover, you cannot run the Docker daemon in Azure Cloud Shell: Cloud Shell itself uses a container to host your shell environment, and as a result running the daemon is disallowed. You could use docker-machine to manage Docker containers from a remote Docker host.
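A rough sketch of that remote-host approach; my-docker-vm is a hypothetical machine you would have already created with docker-machine:
# Print the environment variables that point the docker CLI at the remote host
docker-machine env my-docker-vm
# Apply them in the current shell
eval $(docker-machine env my-docker-vm)
# This now runs against the remote daemon, not Cloud Shell
docker ps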
I tried installing docker on a server of mine using this tutorial.
I want to run docker images remotely and use the portainer web-interface to administrate everything.
However, when I get to the point where I need to test my installation and I enter the command $ sudo docker run hello-world, I only get the following error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:344: starting container process caused "process_linux.go:424: container init caused \"join session keyring: create session key: disk quota exceeded\"": unknown. ERRO[0000] error waiting for container: context canceled
I tried the following methods:
"Install Docker CE / Install using the convenience script"
"Install Docker CE / Install using the repository"
This also happens when I try to run other images (e.g. Portainer).
I hope this is enough information.
I am new to docker, so I don't know how I should debug it efficiently.
Try increasing the maxkeys kernel parameter:
echo 50000 > /proc/sys/kernel/keys/maxkeys
see: https://discuss.linuxcontainers.org/t/error-with-docker-inside-lxc-container/922/2
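A slightly fuller sketch, assuming you can write to /proc from where Docker runs (inside an unprivileged container you may not be able to):
# Check the current limit; inside an LXC/LXD guest it can be low
cat /proc/sys/kernel/keys/maxkeys
# Raise it for the running system
echo 50000 | sudo tee /proc/sys/kernel/keys/maxkeys
# Hypothetical: persist the setting across reboots
echo 'kernel.keys.maxkeys = 50000' | sudo tee /etc/sysctl.d/99-maxkeys.conf
sudo sysctl --system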
So, as it turns out, I connected to the wrong vServer.
The one I was connected to is using LXD (as you might have seen in my previous comment), which doesn't support Docker (at least not the way this guide advises).
When I ran the same setup on a vServer using a bare-metal (type 1) hypervisor, it worked without a problem.
I think this has to do with automatic storage allocation under LXD, but this is just a guess.
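If anyone does need Docker inside an LXD guest, one commonly cited prerequisite is enabling nesting; docker-host below is a hypothetical container name:
# Allow nested containers for the LXD guest, then restart it
lxc config set docker-host security.nesting true
lxc restart docker-host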
I'm new to Docker, so please be kind, but I am testing it out on a Windows 10 image on Azure (I know I could run it directly, but I wanted to try it in a VM first).
I have a fresh Windows 10 image that I have installed Docker for Windows 2.0.0 on.
Note: I did not tick the option to use Windows containers instead of linux containers.
Once it installed (and rebooted) I was prompted to install Hyper-V and Containers features (causing restarts).
Once it was all installed, I opened an administrative PowerShell window to download Jenkins:
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
This gave me the error:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: driver failed programming external connectivity on endpoint goofy_lederberg (deaba2deeea0486c92ba8a1a32740295f03859b1b5829d39e39eff0b24613ebf): Error starting userland proxy: Bind for 0.0.0.0:50000: unexpected error Permission denied.
I thought this was strange, as 50000 wasn't a port that I expected to be in use; changing it to a different port (50001) produced the same error.
Running:
netstat -a -n -o
Showed that the port was not in use.
If I remove -p 50000:50000 from the command it can bind and start Jenkins but I assume it needs this port mapping to work correctly.
Previous posts have suggested stopping the World Wide Web Publishing service but that isn't installed.
There are no other running Docker containers.
I assume the port is in use or something is stopping the port mapping.
Assuming a user has permission to create a port binding from their terminal, are there any other techniques besides netstat to determine whether something is bound to a port, either something internal to Docker's own checking process or something at the host OS level?
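For what it's worth, besides netstat you can ask Docker itself what it has published; my_container below is a hypothetical container name:
# List the host ports each running container publishes
docker ps --format "{{.Names}} -> {{.Ports}}"
# Show the port bindings for a single container
docker port my_container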
Rather embarrassingly, this worked this morning with no changes, other than that the VM was shut down over the weekend.
Maybe all it needed was a reboot?