How to cache Docker images with a GitHub Codespaces devcontainer prebuild?

I'm using GitHub Codespaces, and I have prebuilt my dev container with the kubernetes-helm-minikube feature, which gives the container a preinstalled minikube.
My question is: even with prebuilds, my dev container users still have to download the gcr.io/k8s-minikube/kicbase image when they run minikube start in a newly created codespace. Is there a way to cache the resources minikube start needs during the prebuild process, or at least cache the gcr.io/k8s-minikube/kicbase image in the prebuild?
Could anyone point out the right way to do that? I'd greatly appreciate any suggestions.
What I've tried:
I've read the docs at https://docs.github.com/en/codespaces/prebuilding-your-codespaces/configuring-prebuilds and https://containers.dev/implementors/json_reference/#_lifecycle-scripts.
I tried running minikube start on container create:
"onCreateCommand": [
"nohup bash -c 'minikube start &' > minikube.log 2>&1"
]
and the prebuild workflow failed with this error:
$ nohup bash -c 'minikube start &' > minikube.log 2>&1
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "nohup bash -c 'minikube start &' > minikube.log 2>&1": executable file not found in $PATH: unknown
I tried running docker pull gcr.io/k8s-minikube/kicbase:v0.0.36 on container create, and a similar error occurred:
starting container process caused: exec: "docker pull gcr.io/k8s-minikube/kicbase:v0.0.36": stat docker pull gcr.io/k8s-minikube/kicbase:v0.0.36: no such file or directory: unknown
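Both failures have the same cause, independent of prebuilds: when a lifecycle command is given as a JSON array, the devcontainer tooling passes it to the OS for execution without a shell, so the whole string is looked up as one executable name (hence "executable file not found in $PATH"). A plain string value is run through a shell instead. A minimal sketch of the pull attempt in string form (this only fixes the exec error; whether the pulled image actually survives into the prebuild snapshot is a separate question):
"onCreateCommand": "docker pull gcr.io/k8s-minikube/kicbase:v0.0.36"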
I tried running minikube start with postCreateCommand, and the result made it clear that postCreateCommand is only executed after the dev container has been assigned to a user, not during the prebuild (as the prebuild docs state).

Related

How to mount a disk partition in Docker

I have the following SD card partitions from sudo blkid:
/dev/sdb1: PARTLABEL="uboot" PARTUUID="5e6c4af7-015f-46df-9426-d27fb38f1d87"
...
...
...
/dev/sdb8: UUID="5f38be2e-3d5d-4c42-8d66-8aa6edc3eede" BLOCK_SIZE="1024" TYPE="ext2" PARTLABEL="userdata" PARTUUID="dceeb110-7c3e-4973-b6ba-c60f8734c988"
/dev/sdb9: UUID="51e83a43-830f-48de-bcea-309a784ea35c" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="rootfs" PARTUUID="c58164a5-704a-4017-aeea-739a0941472f"
I am trying to mount /dev/sdb9 into a Docker container so that I can reformat it and do other things with it, but I am not able to attach it as a volume in the container.
This is what I've done:
docker volume create --driver=local --opt type=ext4 --opt device=/dev/disk/by-uuid/51e83a43-830f-48de-bcea-309a784ea35c my-vol
docker run <image id> -v my-vol:/my-vol -it bash
However, it came up with this error:
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
Any ideas how I can mount /dev/sdb9 into a Docker container?
You need to change the order of your docker run command so that the options come before the image. Everything after the image name is treated as the command and its arguments, so options such as -v must be given before the image. From the docker run docs (https://docs.docker.com/engine/reference/commandline/container_run/):
docker container run [OPTIONS] IMAGE [COMMAND] [ARG...]
$ docker run -it ubuntu -v $(pwd):/local
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-v": executable file not found in $PATH: unknown.
$ docker run -it -v $(pwd):/local ubuntu
root@8fa69b8861d8:/#
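Applied to the command in the question above (keeping the <image id> placeholder), the options-before-image order would be:
docker run -it -v my-vol:/my-vol <image id> bash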

OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: <PATH>:

I've created an Astronomer Airflow directory, home\acoppers\astronomer. I ran docker db init and docker astro start to get my containers running. I want to authenticate my scheduler container to gcloud, so I tried this command:
docker container exec -it 6903e8589b00 /home/acoppers/google-cloud-sdk/bin/gcloud auth application-default login --no-launch-browser
(I used that path since I installed google-cloud-sdk in my home directory.) However, I am getting the following error when I run this command:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "/home/acoppers/google-cloud-sdk/bin/gcloud": stat /home/acoppers/google-cloud-sdk/bin/gcloud: no such file or directory: unknown
Can someone tell me what I am doing wrong? Thank you.
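For what it's worth, docker exec resolves the path inside the container's filesystem, not on the host, so an SDK installed in the host home directory is invisible to the container. A quick check (using the container ID from the question, and assuming ls exists in the image):
docker container exec -it 6903e8589b00 ls /home/acoppers/google-cloud-sdk/bin
If that also fails with "no such file or directory", gcloud has to be installed inside the image, or the SDK directory mounted into the container.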
Someone might find this useful. I was unable to exec into the Docker container as above. I got:
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: setup user: no such file or directory: unknown
It turned out that, in my case, a NodeJS child process had caused /dev/null to disappear. As soon as I restored it:
mknod /dev/null c 1 3
chmod 666 /dev/null
I was able to log in again (tested with two shells, one inside the container and one outside).
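For reference, mknod /dev/null c 1 3 recreates /dev/null as a character device with major number 1 and minor number 3 (the kernel's null device), and chmod 666 makes it world-readable and writable again. A quick sanity check:
ls -l /dev/null
The output should start with crw-rw-rw-.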

ERRO[0000] error waiting for container: context canceled

I just installed Docker on my Linux machine, and the first command I ran, docker run alpine -d, gives this error:
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "-d": executable file not found in $PATH: unknown.
ERRO[0000] error waiting for container: context canceled
I'm new to Docker, please help!
Move the -d flag so it comes right after run, before the image name:
docker run -d alpine
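Note that a detached alpine container will still exit almost immediately, because the image's default command (/bin/sh) quits when it has no terminal to read from. To keep it alive long enough to poke at, you could give it something to run, for example:
docker run -d alpine sleep 300
docker ps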

Docker flag "--gpus" does not work without sudo

I'm an Ubuntu user, using the Docker image tensorflow/tensorflow:nightly-gpu. If I try to run this command:
$ docker run -it --rm --gpus all tensorflow/tensorflow:nightly-gpu bash
I get a permission denied error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: Running hook #0:: error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: mount error: open failed: /sys/fs/cgroup/devices/user.slice/devices.allow: permission denied: unknown.
Of course, I can run this command with sudo, but I want to use the GPU without sudo.
Is there a good solution? Any leads, please?
Since your problem seems to occur only when running with --gpus, add/update these two sections of /etc/nvidia-container-runtime/config.toml:
[nvidia-container-cli]
no-cgroups = true
[nvidia-container-runtime]
debug = "/tmp/nvidia-container-runtime.log"
Source: https://github.com/containers/podman/issues/3659#issuecomment-543912380
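After updating the file, the original command should work without sudo. A quick way to confirm the GPU is visible (assuming the runtime injects nvidia-smi into the container, as it normally does):
docker run --rm --gpus all tensorflow/tensorflow:nightly-gpu nvidia-smi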
If you can't use docker without sudo at all:
If you are running in a Linux environment, you need to add your user to the docker group so you won't need sudo every time. Below are the steps:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
$ newgrp docker
Source: https://docs.docker.com/engine/install/linux-postinstall/
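As a quick check from the same post-install guide, the group change has taken effect once this runs without sudo:
docker run hello-world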

Error response: Container is not running

I am trying to create and join a channel by entering the cli container with the command docker exec -it cli bash, but I am getting the following error response:
Error response from daemon: Container dasdjha343343xxxxx is not running.
First stop all your running containers and remove them, then bring the containers back up; lastly, when you bash into a specific container on Windows 10, use winpty:
$ docker stop $(docker ps -a -q)
$ docker ps -qa | xargs docker rm
$ cd fabric-samples/first-network
$ docker-compose -f docker-compose-cli.yaml up -d
$ winpty docker exec -it cli bash
In your working folder, update to the latest version with:
$ git clone https://github.com/hyperledger/fabric-samples.git
A newer version should resolve this issue.
