How to update minikube to the latest version? - linux

When I run minikube status, it shows the status but also a notice with a GitHub link telling me to update minikube. Can you tell me how to do this in a simple way?
$ minikube status
⚠️ There is a newer version of minikube available (v1.3.1). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v1.3.1
To disable this notification, run the following:
minikube config set WantUpdateNotification false
host: Stopped
kubelet:
apiserver:
kubectl:

The script below removes everything (pods, services, secrets, etc.) found in Minikube, deletes the old Minikube binary, installs the latest one, and then enables the ingress, dashboard, and metrics-server addons.
#! /bin/sh
# Minikube update script file
minikube delete && \
sudo rm -rf /usr/local/bin/minikube && \
sudo curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && \
sudo chmod +x minikube && \
sudo cp minikube /usr/local/bin/ && \
sudo rm minikube && \
minikube start && \
# Enabling addons: ingress, dashboard
minikube addons enable ingress && \
minikube addons enable dashboard && \
minikube addons enable metrics-server && \
# Showing enabled addons
echo '\n\n\033[4;33m Enabled Addons \033[0m' && \
minikube addons list | grep STATUS && minikube addons list | grep enabled && \
# Showing current status of Minikube
echo '\n\n\033[4;33m Current status of Minikube \033[0m' && minikube status
(To use the dashboard addon, run minikube dashboard in the terminal.)

While updating on my Ubuntu 18.04 machine, I did the following:
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
minikube version # to check the version
minikube start # start minikube
minikube addons list # then check addons
On Linux, minikube saves its state in the ~/.minikube directory in your home folder, so there is no need to delete the previous cluster or re-enable addons; once the new binary reads the state from ~/.minikube, it automatically picks up and enables the previously enabled addons.
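For example, a rough sanity check (assuming the default locations) to see the preserved state and configuration that the new binary will pick up:
ls ~/.minikube                        # cluster state, certificates and machine profiles live here
cat ~/.minikube/config/config.json    # persisted settings, e.g. WantUpdateNotification
minikube addons list                  # addons are re-read from this state after the upgrade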

$ sudo minikube delete
$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.3.1/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube
$ sudo minikube start --vm-driver=none

For those running minikube on Windows, follow these steps (you will get the latest version of minikube):
1: minikube stop
2: choco upgrade minikube
3: visit https://github.com/kubernetes/minikube/releases to see the latest supported Kubernetes version.
4: minikube start --kubernetes-version=1.xx.x
5: choco upgrade kubernetes-cli
6: kubectl version : to verify the update

I had the same issue. I found that running minikube delete doesn't actually delete the binary at /usr/local/bin/minikube. Either delete it manually or copy the latest minikube into /usr/local/bin over it.
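A minimal sketch of that manual replacement (assuming the binary lives in /usr/local/bin, as in the script above):
sudo rm -f /usr/local/bin/minikube    # remove the stale binary
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube version                      # confirm the new version is picked up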

Related

How to install Minikube on Azure Linux VM - ubuntu 18.04

I want to install Minikube on an Azure Linux VM (Ubuntu 18.04), but I couldn't find an appropriate article, so I would like to know the steps to install Minikube on an Azure Linux VM and get it working.
Prerequisites before creating your VM on Azure:
You need a machine with nested virtualization support. The CPU families with the _v3 suffix provide this, e.g. Standard D2s v3, Standard D4s v3.
Standard D2s v3 is a good choice to start.
I am using: Linux (ubuntu 18.04)
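Before installing VirtualBox, you can quickly confirm that the VM actually exposes the virtualization extensions (a simple check; a result greater than 0 means nested virtualization is available):
$ grep -Ec '(vmx|svm)' /proc/cpuinfo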
Log in to the VM using PuTTY:
Installing Docker
$ curl -fsSL https://get.docker.com | sh
Installing VirtualBox
$ sudo apt install virtualbox virtualbox-ext-pack
Installing Minikube
Updating the system:
$ sudo apt update -y
$ sudo apt upgrade -y
To install the latest minikube stable release on x86-64 Linux using binary download:
$ sudo apt install -y curl wget apt-transport-https
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
Installing Kubectl
$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
$ sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
$ kubectl version --client
Start Minikube
$ minikube start
Check Status
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
Get Nodes
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane 64s v1.25.0
Addons
Only a few addons are enabled by default during installation, but you can turn on more:
$ minikube addons list
To activate an addon, run:
$ minikube addons enable <addon-name>
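For example, using addons that appear elsewhere in this thread (pick whichever you need):
$ minikube addons enable dashboard
$ minikube addons enable metrics-server
$ minikube addons list | grep enabled   # confirm they are now active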
Running the First Deployment
$ kubectl create deployment my-nginx --image=nginx
$ kubectl get deployments.apps my-nginx
$ kubectl get pods

GLIBC_2.27 not Found in Docker Container

I am running a Docker container on my Linux machine. The Dockerfile is as follows:
# 1. basic image
FROM tensorflow/tensorflow:1.12.0-devel-py3
ENV DEBIAN_FRONTEND=noninteractive LANG='en_US.UTF-8' LANGUAGE='en_US:en' LC_CTYPE="UTF-8"
# 2. apps
RUN apt update && apt install -y --no-install-recommends \
software-properties-common && \
add-apt-repository -y ppa:ubuntu-desktop/ubuntu-make && \
apt update \
&& apt install -y --no-install-recommends \
build-essential \
vim \
ubuntu-make \
&& umake ide pycharm /root/.local/share/umake/ide/pycharm
Everything goes well, but when I enter the Docker container using the following command:
sudo docker run --ipc=host --gpus all --net=host -it -d --rm -h docker \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    -v /usr/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu \
    -v /usr/lib/i386-linux-gnu:/usr/lib/i386-linux-gnu \
    --privileged
Then when I try a command such as apt update, I receive the following message:
apt: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.27' not found (required by /usr/lib/x86_64-linux-gnu/libapt-pkg.so.5.0)
However, this does not happen when the same command is invoked in the Dockerfile. For example, if I add
RUN apt update && apt install -y firefox at the end of the Dockerfile, no errors appear.
I cannot understand why the GLIBC_2.27 link problem only shows up inside the running container.
I got the answer thanks to the help of @KamilCuk.
The reason I got this error is that my host machine runs Ubuntu 18.04 while my guest machine (the container) runs Ubuntu 16.04.
That by itself is not a problem, but when I enter the container I share two folders between the host and the guest:
/usr/lib/x86_64-linux-gnu
/usr/lib/i386-linux-gnu
As a result, the guest system tries to use the host machine's libraries, which is wrong. I forget why I decided to share the host machine's libraries with the guest in the first place. Anyway, if I disable that, everything works fine.
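In other words, the fix is simply to drop the two /usr/lib bind mounts from the run command. A sketch of the corrected command (the image name is a placeholder, since the original command omitted it):
sudo docker run --ipc=host --gpus all --net=host -it -d --rm -h docker \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix:rw \
    --privileged \
    <image-name>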

Docker run doesn't work as part of a terraform startup script

I'm using terraform to provision a bunch of machines at once. Each one should run the same docker container. The startup script looks like this:
sudo apt-get remove docker docker-engine docker.io containerd runc -y
sudo apt-get update -y
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common -y
curl https://get.docker.com | sh && sudo systemctl --now enable docker
sudo docker build -t dockertest /path/to/dockerfile
sudo docker run --gpus all -it -v /path/to/mount:/usr/src/app dockertest script.py -b 03
Basically it installs Docker, builds the image, and then runs the container.
Only the last line doesn't work. If I SSH into the machine and run it, it works fine, but not as part of the startup script.
How can I get it to work as part of the startup script? It's a hassle to SSH into each of a swarm of machines.
If anyone else encounters this problem: the solution is simply to take -it out of the docker run command.
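So the last line of the startup script becomes something like this (a sketch; you could also add -d if you don't want the startup script to block while the container runs):
# same command with -it removed; there is no TTY when the startup script runs
sudo docker run --gpus all -v /path/to/mount:/usr/src/app dockertest script.py -b 03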

Docker command not found while docker.service is Active (running)

I've installed docker on CentOS 7, but when I run docker, I get bash: docker: command not found...
Other apps that require docker gave this error: "docker": executable file not found in $PATH
which docker returns: no docker in (/usr/.....
whereis docker returns: docker: /etc/docker /usr/libexec/docker /usr/share/man/man1/docker.1.gz
This is how I installed it:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
sudo yum update -y && sudo yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
sudo systemctl enable docker
I would recommend reading the official documentation at docs.docker.com.
Did you successfully meet the OS requirements?
To install Docker Engine you need a maintained version of CentOS 7,
archived versions are not supported or tested.
The centos-extras repository must be activated. This repository is
activated by default, but if you deactivated it, you have to activate
it again.
The Overlay2 storage driver is recommended.
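Regarding the centos-extras repository mentioned above: if it was turned off, a hedged example of re-enabling it on CentOS 7 (the repo id is assumed to be extras) is:
sudo yum-config-manager --enable extras
yum repolist enabled | grep extras   # verify it is active again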
Have you deleted the older versions?
Older versions of Docker were called docker or docker-engine. If
these are installed, uninstall them, along with associated
dependencies.
$ sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
I have quoted some passages from the official Docker documentation page; I would recommend reading the whole page.
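Beyond the documentation excerpts, a quick diagnostic sketch for this particular symptom (docker.service running but no docker binary on PATH); the package name and paths below are taken from the install commands in the question:
rpm -q docker-ce-cli                        # is the CLI package installed at all?
rpm -ql docker-ce-cli | grep 'bin/docker'   # where did it put the binary?
echo "$PATH"                                # is that directory on your PATH?
sudo yum reinstall -y docker-ce-cli         # reinstall if the binary went missing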

docker - cannot find aws credentials in container although they exist

Running the following docker command works on macOS, but on Linux (running Ubuntu) the container cannot find the AWS CLI credentials. It returns the following message: Unable to locate credentials
Completed 1 part(s) with ... file(s) remaining
The command runs an image, mounts a data volume, copies a file from an S3 bucket, and then starts a bash shell in the docker container:
sudo docker run -it --rm -v ~/.aws:/root/.aws username/docker-image sh -c 'aws s3 cp s3://bucketname/filename.tar.gz /home/emailer && cd /home/emailer && tar zxvf filename.tar.gz && /bin/bash'
What am I missing here?
This is my Dockerfile:
FROM ubuntu:latest
#install node and npm
RUN apt-get update && \
apt-get -y install curl && \
curl -sL https://deb.nodesource.com/setup | sudo bash - && \
apt-get -y install python build-essential nodejs
#install and set-up aws-cli
RUN sudo apt-get -y install \
git \
nano \
unzip && \
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && \
unzip awscli-bundle.zip
RUN sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /home/emailer && cp -a /tmp/node_modules /home/emailer/
Mounting $HOME/.aws/ into the container should work. Make sure to mount it as read-only.
It is also worth mentioning, if you have several profiles in your ~/.aws/config -- you must also provide the AWS_PROFILE=somethingsomething environment variable. E.g. via docker run -e AWS_PROFILE=xxx ... otherwise you'll get the same error message (unable to locate credentials).
Update: Added example of the mount command
docker run -v ~/.aws:/root/.aws …
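Putting both hints together, a sketch of a complete command (the image name is a placeholder) that mounts the credentials read-only and selects a non-default profile:
docker run --rm \
    -v ~/.aws:/root/.aws:ro \
    -e AWS_PROFILE=xxx \
    <image-name> aws s3 ls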
You can use environment variables instead of mounting your ~/.aws/credentials and config files into the container for the AWS CLI:
docker run \
-e AWS_ACCESS_KEY_ID=AXXXXXXXXXXXXE \
-e AWS_SECRET_ACCESS_KEY=wXXXXXXXXXXXXY \
-e AWS_DEFAULT_REGION=us-west-2 \
<image-name>
Ref: AWS CLI Doc
What do you see if you run
ls -l ~/.aws/config
within your docker container?
The only solution that worked for me in this case is:
volumes:
  - ${USERPROFILE}/.aws:/root/.aws:ro
There are a few things that could be wrong. One, as mentioned previously, you should check whether your ~/.aws/config file is set up correctly. If not, you can follow this link to set it up. Once you have done that, you can map the ~/.aws folder using the -v flag on docker run.
If your ~/.aws folder is mapped correctly, make sure to check the permissions on the files under ~/.aws so that the process trying to read them is allowed to access them. If it runs as your own user, running chmod 444 ~/.aws/* should do the trick; this grants read permission on the files to everyone. Of course, if you need write permissions you can add whatever other mode bits you want. Just make sure the read bit is set for the corresponding user and/or group.
The issue I had was that I was running Docker as root. When running as root it was unable to locate my credentials at ~/.aws/credentials, even though they were valid.
Directions for running Docker without root on Ubuntu are here: https://askubuntu.com/a/477554/85384
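If you do need to keep invoking Docker with sudo, one workaround sketch (the home directory below is a placeholder; adjust it for your user) is to mount the credentials with an explicit absolute path instead of relying on ~:
sudo docker run -it --rm \
    -v /home/youruser/.aws:/root/.aws:ro \
    username/docker-image aws s3 ls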
You just have to pass the profile name as AWS_PROFILE; if you do not pass anything it uses the default profile, but if you want you can copy the default profile and add your desired credentials.
In your credentials file:
[profile_dev]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
output = json
region = eu-west-1
In your docker-compose.yml:
version: "3.8"
services:
  cenas:
    container_name: cenas_app
    build: .
    ports:
      - "8080:8080"
    environment:
      - AWS_PROFILE=profile_dev
    volumes:
      - ~/.aws:/app/home/.aws:ro
