Unable to provision OpenEBS volume on RancherOS - openebs

I am using Rancher v2 as the k8s management platform and running RancherOS nodes on VMware vSphere. I manually installed open-iscsi and mounted a 50GB volume on each of the worker nodes for use by OpenEBS (I will still have to figure out how to automate that on node creation). I also created a cStor storage class, and that all looks good. However, I have not been able to get a container to provision a PV using a PVC.
Warning FailedMount Unable to mount volumes for pod "web-test-54d9845456-bc8fc_infra-test(10f856c1-6882-11e9-87a2-0050568eb63d)": timeout expired waiting for volumes to attach or mount for pod "infra-test"/"web-test-54d9845456-bc8fc". list of unmounted volumes=[cstor-vol-01]. list of unattached volumes=[web-test-kube-pvc vol1 man-volmnt-01 cstor-vol-01 default-token-lxffz]
Warning FailedMount MountVolume.WaitForAttach failed for volume "pvc-b59c9b5d-6857-11e9-87a2-0050568eb63d" : failed to get any path for iscsi disk, last err seen: iscsi: failed to sendtargets to portal 10.43.48.95:3260 output: iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not add new discovery record. , err exit status
I followed the steps below to enable iSCSI on RancherOS, from the RancherOS prerequisites section of the OpenEBS documentation:
sudo ros s up open-iscsi
sudo ros config set rancher.services.user-volumes.volumes [/home:/home,/opt:/opt,/var/lib/kubelet:/var/lib/kubelet,/etc/kubernetes:/etc/kubernetes,/var/openebs]
sudo system-docker rm all-volumes
sudo reboot
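After the reboot, it can help to confirm from the node itself that the iSCSI initiator actually works before retrying the PVC. A quick check (assuming the cStor target portal 10.43.48.95:3260 from the event above is still the right one):
sudo ros service list | grep open-iscsi    # is the open-iscsi console service enabled?
sudo system-docker ps | grep iscsi         # is it actually running as a system container?
ls -ld /run/lock                           # the directory iscsiadm complains about
sudo iscsiadm -m discovery -t sendtargets -p 10.43.48.95:3260   # can the node discover the target?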

From the RancherOS GitHub repository I found that we need to create a lock directory, and make sure this directory is created on every boot, in the following way:
$ mkdir /run/lock
# update cloud-config
#cloud-config
runcmd:
- [mkdir, /run/lock]
Reference: issue #2435 in the rancher/os repository on GitHub.
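One way to make that runcmd persist across reboots on RancherOS is to merge it into the node's cloud-config. A sketch, where lock-dir.yml is just an example file name holding the #cloud-config snippet above:
# write the #cloud-config snippet above to lock-dir.yml, then merge it
sudo ros config merge -i lock-dir.yml
# confirm it is now part of the active config and will run on the next boot
sudo ros config export | grep -A 2 runcmd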

Related

How to resolve the file processing issue during docker volume mount in linux?

I am trying to containerize my application. The application basically processes files and places them in a different folder after renaming them. The source folder is "/opt/fileprocessing/input" and the target is "/opt/fileprocessing/output".
Scenario 1. - without volume mount
When I start my docker container and place a file in the source folder using the docker cp command, the application processes it and places it successfully in the target folder.
Scenario 2 - with volume mounts from the host
docker run -d -v /opt/input:/opt/fileprocessing/input -v /opt/output:/opt/fileprocessing/output --name new_container processor
When I place the file in the /opt/input folder of the host, the application throws an error that it can't place the file in the destination. If I go inside the container and look at the input folder, I see the file there, which confirms that the mount has happened successfully. It fails when renaming the file and placing it in the destination (this is an application-level error, so not much help there).
I tried the following to make it work.
Made sure the host and container users are the same and have the same uid and gid.
File has 775 permission set.
The container folder has 777 permission
Same file has been placed that was used for scenario 1.
Same file name and format as well.
Container OS:
NAME="CentOS Linux"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
Host OS:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.6 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
Scenario 3 - mounted the folders in a different way, as below:
docker run -d -v /opt/fileprocessing:/opt/fileprocessing -v /opt/fileprocessing:/opt/fileprocessing --name new_container processor
where the fileprocessing folder on both the container and the host has two subdirectories named input and output.
This way of mounting seems to work for me without any issues.
Please let me know why scenario 2 failed and how to fix it.
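A likely explanation, assuming the application moves files with a plain rename() call: in scenario 2, /opt/fileprocessing/input and /opt/fileprocessing/output are two separate bind mounts, and rename(2) fails with EXDEV ("Invalid cross-device link") across mount points, even when both sides live on the same host filesystem. In scenario 3 the common parent is a single mount, so the rename stays within one mount point. A quick check from inside the scenario 2 container (sample.txt is just a placeholder file name):
findmnt /opt/fileprocessing/input    # shows up as its own mount entry in scenario 2
findmnt /opt/fileprocessing/output   # ...and so does the output directory
# a bare rename() across the two mounts fails with EXDEV; a copy-and-delete
# move (what `mv` falls back to, or shutil.move in Python) still works:
mv /opt/fileprocessing/input/sample.txt /opt/fileprocessing/output/
If the application has to keep using rename(), mounting the common parent directory as in scenario 3 is the simplest fix; otherwise switch the move logic to a copy-and-delete.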

Automatically changing the docker container file permissions in a directory in Linux

We have a docker container running in Linux VMs. This container is writing the logs inside a directory in the container.
Container log directory - /opt/log/
This directory is volume-mounted to the host machine so that all the log files are also available on the host.
Host directory - /var/log/
Here we see container is creating the log files with 600 (-rw-------+) permission. There is no group read permission assigned to these files.
The same permissions are reflected in the host directory. We need group read permission (640, -rw-r-----+) to be added automatically for all the files created in this directory so that other logging agents can read them.
I have also tried setting an ACL to add this permission on the host, but the permissions are not getting set for the files created inside this directory:
setfacl -Rdm g::r-- /var/log/
Is there a way we can add group read permission automatically for all the files getting created in this host directory?
From the following article,
https://dille.name/blog/2018/07/16/handling-file-permissions-when-writing-to-volumes-from-docker-containers/
there is a parameter to set the user id and the group id when starting the container, for example:
docker run -it --rm --volume $(pwd):/source --workdir /source --user $(id -u):$(id -g) ubuntu
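Since --user only changes the owning uid/gid and not the mode bits of new files, another option is to start the container with a umask of 027 so the log files are created as 640 instead of 600. A sketch, assuming the image's entrypoint can be overridden and the logging process honours the shell umask (your-log-image and /opt/app/run.sh are placeholders):
# override the entrypoint to set the umask before launching the application
docker run -d \
  -v /var/log:/opt/log \
  --entrypoint /bin/sh \
  your-log-image \
  -c 'umask 027 && exec /opt/app/run.sh'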

Kubernetes unable to mount NFS FS on Google Container Engine

I am following the basic nfs server tutorial here, however when I try to create the test busybox replication controller I get an error indicating that the mount has failed.
Can someone point out what I am doing wrong?
MountVolume.SetUp failed for volume
"kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs"
(spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID:
"4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit
status 32 Mounting arguments: 10.63.243.192:/exports
/var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs
nfs [] Output: mount: wrong fs type, bad option, bad superblock on
10.63.243.192:/exports, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a
/sbin/mount.<type> helper program) In some cases useful info is found
in syslog - try dmesg | tail or so
I have tried using an Ubuntu VM as well, just to see if I can mitigate a possible missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
Which container image are you using? On the 18th of October Google announced a new container image which doesn't support NFS yet. Since Kubernetes 1.4 this image (called gci) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations
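A quick way to see which node image the cluster is actually running, assuming kubectl access to the cluster (the OS image is reported in each node's nodeInfo):
# print node name and OS image for every node in the cluster
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.osImage}{"\n"}{end}'
If the nodes report the gci image, the migration document linked above describes how to switch node image types.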

Freenas cannot mount NFS

Today I installed FreeNAS 9.2.1.8 and now I am trying to set up an NFS share.
First I created a Volume with the volume manager. Then I created a dataset.
Now I want to set up an NFS share for this dataset.
So I go to Sharing, add a UNIX (NFS) share, and as the mount point I select the path of my created dataset.
As the Mapall User and Mapall Group I select nouser and nogroup, since I changed the permissions of the dataset to them.
As a final step I went to Services and switched on NFS.
When I now try to mount the nfs on Ubuntu 13.10 with
sudo mount -t nfs 192.168.1.5:/mnt/Storage/NFS /home/tm/freenas/
it says: mount.nfs: Connection timed out
On the FreeNAS side I get the message: rpcb_unset failed.
Does someone know what the problem here is?
OK, I solved the problem. Apparently I had to add my client to the host name database of my FreeNAS server. The setting can be found at Network Settings -> Global Configuration.
And then I add it like:
192.168.1.4 clientmachinename
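Independent of the host name database fix, it can help to confirm from the client that the export is visible and the RPC services answer before digging into the server configuration (the addresses are the ones from the question):
showmount -e 192.168.1.5   # list the exports the FreeNAS box offers to this client
rpcinfo -p 192.168.1.5     # check that rpcbind/portmapper and the NFS services respond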

Mount Netapp NFS share permanently on RHEL 6.4

I am trying to mount a volume on a RHEL 6.4 virtual machine permanently.
My fstab entry is as:
172.17.4.228:/bp_nfs_test1 /mnt1 nfs rsize=8192,wsize=8192,intr
And I mounted the volume as:
mount 172.17.4.228:/bp_nfs_test1 /mnt1
When I run df -h I can see the volume and am able to access it properly.
But when I reboot the VM, the mount is gone and I am not able to access it anymore, even though the entry in /etc/fstab is present.
I have to manually mount the volume again (mount -a); only then am I able to see my volume in df -h and access it.
Any help is appreciated
The mount process on boot happens very early, before your network is online, which prevents the NFS share from being mounted. You'll need to enable netfs, which manages network file systems and runs after the network is up. Your desired process is:
Standard mounts processed.
NFS share is skipped during initial mounts (by adding _netdev to options).
After network is online, netfs will process network file systems like nfs and bring them online.
To prevent the boot-time mount from attempting to mount your NFS share before the network services are available, add _netdev to your options:
172.17.4.228:/bp_nfs_test1 /mnt1 nfs rsize=8192,wsize=8192,intr,_netdev
Enable netfs:
chkconfig netfs on
Alternatively, you could also configure the share through the /etc/auto.master configuration and have it mount when the share is accessed.
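A quick way to verify the change before the next reboot, using the standard RHEL 6 tooling:
mount -a -t nfs         # re-read /etc/fstab and mount any NFS entries not yet mounted
chkconfig --list netfs  # confirm netfs is enabled for the default runlevels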
