I am following the basic NFS server tutorial here; however, when I try to create the test busybox replication controller, I get an error indicating that the mount has failed.
Can someone point out what I am doing wrong?
MountVolume.SetUp failed for volume
"kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs"
(spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID:
"4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit
status 32 Mounting arguments: 10.63.243.192:/exports
/var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs
nfs [] Output: mount: wrong fs type, bad option, bad superblock on
10.63.243.192:/exports, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a
/sbin/mount. helper program) In some cases useful info is found
in syslog - try dmesg | tail or so
I have tried using an Ubuntu VM as well, to see if I could mitigate a possibly missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
Which container image are you using? On the 18th of October Google announced a new container image, which doesn't support NFS yet. Since Kubernetes 1.4 this image (called gci) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations
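For reference, the NFS volume in a test busybox replication controller like the tutorial's looks roughly like the sketch below (the server address and export path are taken from the error message above; all names are illustrative). If the node image lacks /sbin/mount.nfs, any spec of this shape will fail with exit status 32:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 1
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - name: nfs
          mountPath: /mnt
      volumes:
      - name: nfs
        nfs:
          server: 10.63.243.192   # NFS server address from the error above
          path: /exports
```

The mount itself is performed by the kubelet on the node, which is why installing nfs-common inside the pod's image does not help; the helper has to exist on the node image.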
Scenario: I have two Docker containers: A (Ubuntu) and B (Debian). My host is an Ubuntu server.
Container A sniffs the traffic on the host and writes pcap files to a mounted volume (bind). Container B accesses the same volume (mounted, bind) to extract objects from the pcap files.
When I run the tshark command tshark -r pcapfile.pcap --export-objects "dicom, targetfolder" inside container B the output is "Segmentation fault (core dumped)".
My best guess so far is that I have a permission problem, although both containers access the volume as root, and changing the file permissions didn't help either.
Am I on the wrong path? Is this error related to a permission problem? What can I do to make both containers share the same mounted volume on the host?
EDIT:
The bug has been fixed. Refer to Wireshark bug 16748.
Am I on the wrong path?
Yes.
Is this error related to a permission problem?
No.
It's related to a bug in Wireshark; "tshark ... gives 'Segmentation fault (core dumped)'" means "there is a bug in tshark".
Please report this as a bug on the Wireshark Bugzilla.
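To the last part of the question: sharing one host directory between two containers via bind mounts is straightforward and is not the cause of the crash. A minimal Compose sketch (service names, images, and the host path are illustrative, not from the original setup) would be:

```yaml
version: "2"
services:
  sniffer:            # container A: captures traffic, writes pcap files
    image: ubuntu
    network_mode: host
    volumes:
      - /data/pcaps:/captures
  extractor:          # container B: reads the same pcap files
    image: debian
    volumes:
      - /data/pcaps:/captures
```

Both services bind-mount the same host directory, so files written by one are immediately visible to the other.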
I have a tcpdump application in a CentOS container. I was trying to run tcpdump as a non-root user. Following this forum post: https://askubuntu.com/questions/530920/tcpdump-permissions-problem (and some other documentation that reinforced it), I tried to use setcap cap_net_admin+eip /path/to/tcpdump in the container.
After running this, I tried to run tcpdump as a different user (one with permission to run tcpdump) and got "Operation Not Permitted". I then tried to run it as root, which had previously been working, and also got "Operation Not Permitted". After running getcap, I verified that the capabilities were set as they should be. I thought it might be my specific use case, so I tried running the setcap command against several other executables. Every single executable returned "Operation Not Permitted" until I ran setcap -r /filepath.
Any ideas on how I can address this issue, or even work around it without using root to run tcpdump?
The NET_ADMIN capability is not included in containers by default because it could allow a container process to modify and escape the network isolation settings applied to the container. Therefore, explicitly setting this capability on a binary with setcap is going to fail, since root and every other user in the container is blocked from that capability. To run a container with it, you would need to add the capability when starting your container, e.g.
docker run --cap-add NET_ADMIN ...
However, I believe all you need is NET_RAW (setcap cap_net_raw), which is included in the default capabilities. From man capabilities:
CAP_NET_RAW
* Use RAW and PACKET sockets;
* bind to any address for transparent proxying.
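A sketch of the NET_RAW route at image-build time (assuming the CentOS tcpdump package path /usr/sbin/tcpdump; the setcap utility ships in the libcap package, and file capabilities are preserved in image layers):

```dockerfile
FROM centos:7
# install tcpdump plus the setcap utility (libcap)
RUN yum install -y tcpdump libcap && \
    setcap cap_net_raw+ep /usr/sbin/tcpdump
# drop to an unprivileged user; cap_net_raw+ep on the binary is enough
# under the default container capability set, unlike cap_net_admin
USER 1000
ENTRYPOINT ["/usr/sbin/tcpdump"]
```

The container can then be run without --cap-add, since NET_RAW is already in Docker's default capability set.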
I am using Rancher v2 as the k8s management platform and running RancherOS nodes on VMware vSphere. I manually installed open-iscsi and mounted a 50GB volume on each worker node for use by OpenEBS (I will have to figure out how to automate that on node creation). I also created a cStor storage class, and that all looks good. However, I have not been able to get a container to provision a PV using a PVC.
Warning FailedMount Unable to mount volumes for pod "web-test-54d9845456-bc8fc_infra-test(10f856c1-6882-11e9-87a2-0050568eb63d)": timeout expired waiting for volumes to attach or mount for pod "infra-test"/"web-test-54d9845456-bc8fc". list of unmounted volumes=[cstor-vol-01]. list of unattached volumes=[web-test-kube-pvc vol1 man-volmnt-01 cstor-vol-01 default-token-lxffz]
Warning FailedMount MountVolume.WaitForAttach failed for volume "pvc-b59c9b5d-6857-11e9-87a2-0050568eb63d" : failed to get any path for iscsi disk, last err seen: iscsi: failed to sendtargets to portal 10.43.48.95:3260 output: iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not add new discovery record. , err exit status
I have followed the steps below to enable iSCSI on RancherOS, from the RancherOS prerequisites section of the OpenEBS documentation.
sudo ros s up open-iscsi
sudo ros config set rancher.services.user-volumes.volumes [/home:/home,/opt:/opt,/var/lib/kubelet:/var/lib/kubelet,/etc/kubernetes:/etc/kubernetes,/var/openebs]
sudo system-docker rm all-volumes
sudo reboot
From the GitHub repository of RancherOS, I found that we need to create a lock directory, and make sure the directory is recreated on every boot, as follows:
$ mkdir /run/lock

Then, to make it persistent, update the cloud-config:

#cloud-config
runcmd:
- [ mkdir, /run/lock ]
Reference: issue 2435 in the rancher/os GitHub repository.
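As a sketch of applying that persistently (the ros config merge invocation is from memory; double-check it against the RancherOS docs for your version):

```yaml
#cloud-config
runcmd:
- [ mkdir, -p, /run/lock ]
```

Save this as, say, lock.yml and apply it with sudo ros config merge -i lock.yml, then reboot. Since /run is a tmpfs, /run/lock disappears on every boot, which is why a one-off mkdir is not enough and iscsiadm keeps failing with "Could not open /run/lock".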
Docker version 18.06.1-ce, build e68fc7a
CentOS Linux release 7.5.1804 (Core)
My Dockerfile is
FROM node:8
When I execute docker build -t my-image . I get the following error:
Sending build context to Docker daemon 44.03kB
Step 1/1 : FROM node:8
8: Pulling from library/node
f189db1b88b3: Extracting [==================================================>] 54.25MB/54.25MB
3d06cf2f1b5e: Download complete
687ebdda822c: Download complete
99119ca3f34e: Download complete
e771d6006054: Download complete
b0cc28d0be2c: Download complete
7225c154ac40: Download complete
7659da3c5093: Download complete
failed to register layer: ApplyLayer exit status 1 stdout: stderr: archive/tar: invalid tar header
Any clue? Any suggestion as to what I can do to fix it?
I get the same error when running docker run -it ubuntu
The error message indicates that the image you are attempting to download has been corrupted. There are a few places I can think of where that would happen:
On the remote registry server
In transit
In memory
On disk
By the application
Given the popularity of the image, I would rule out the registry server having issues. Potentially you have an unstable server with memory or disk issues that were triggered when downloading a large image. On Linux, you'd likely see kernel errors from this in dmesg.
The version of docker is recent enough that any past issues on this have long since been fixed. There's only a single issue on the tar file processing related to very large layers (over 8GB) which doesn't apply to the image you are pulling. The tar processing is embedded directly into docker, so changing or upgrading your tar binary won't affect docker.
Potentially you could have an issue with the storage driver and the backend storage device. Changing from devicemapper to overlay2 if you haven't already would be a good first step if docker hasn't already defaulted to this (you can see your current storage driver in docker info and change it with an entry in /etc/docker/daemon.json).
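A minimal sketch of that change (note: switching storage drivers effectively hides previously pulled images, so re-pull or back up anything you need first):

```json
{
  "storage-driver": "overlay2"
}
```

Place this in /etc/docker/daemon.json and restart the daemon, e.g. with systemctl restart docker.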
My first guess on that list is the "in transit" part. Since the request will be over https, this won't be from a bad packet. But a proxy on the network that intercepts all web traffic could be the culprit. If you have a proxy, make sure docker is configured to login and use your proxy. For more details on that, see https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
Try unpacking your archive with tar tvf yourarchive.
If there are no errors, try updating Docker (if possible).
If the error persists, try rebuilding your archive.
A similar issue is described there.
When you have the same error on tar extraction, the fetched image might indeed be corrupt.
Comments on issue 15561 hint that building locally still works.
I have an AWS micro instance running Ubuntu 12.04 LTS. Last night when I SSHed in, I ran apt-get update and it gave me an error (I don't recall which), so I thought I would give my instance a reboot. This morning, it says that my instance has failed an Instance Status Check and I am unable to SSH into it. The bottom of my system log is below. Is there any way to save this instance and, if not, any way to save the data?
Thank you!
Loading, please wait...
[35914369.823672] udevd[81]: starting version 175
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done.
Begin: Running /scripts/local-premount ... done.
[35914370.187877] EXT4-fs (xvda1): mounted filesystem with ordered data mode. Opts: (null)
Begin: Running /scripts/local-bottom ... done.
done.
Begin: Running /scripts/init-bottom ... done.
[35914373.347844] init: mountall main process (183) terminated with status 1
General error mounting filesystems.
A maintenance shell will now be started.
CONTROL-D will terminate this shell and reboot the system.
Give root password for maintenance
(or type Control-D to continue):
It depends on how badly broken the filesystem is.
You can start a new instance in AWS and then attach the EBS volume to your new instance. That may help you recover the data.
Don't terminate the instance, otherwise you could lose the filesystem completely.
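A sketch of that recovery with the AWS CLI (the instance and volume IDs are placeholders, and the device naming varies by virtualization type; adjust to what the console shows for your root volume):

```shell
# stop the broken instance so its root volume can be detached
aws ec2 stop-instances --instance-ids i-BROKEN
aws ec2 detach-volume --volume-id vol-XXXXXXXX

# attach the volume to a healthy rescue instance as a secondary disk
aws ec2 attach-volume --volume-id vol-XXXXXXXX \
    --instance-id i-RESCUE --device /dev/sdf

# then, on the rescue instance: mount it read-only and copy the data off
sudo mkdir -p /mnt/rescue
sudo mount -o ro /dev/xvdf1 /mnt/rescue
```

Mounting read-only first is a cheap safeguard while you assess how badly the filesystem is damaged.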
Always use the "Create AMI" option on a running instance before doing an apt-get update/upgrade or yum update/upgrade. This way, if your system fails to come up after a reboot (after the update), you can spin up a 'before' (i.e. bootable) instance using the AMI you just created.
In this case, Ubuntu probably tried to install a new kernel and/or RAM file system (ramfs), and since this is an AWS virtual machine with kernel and ramfs dependencies, the standard Ubuntu build probably did not meet those dependencies, and your virtual machine is now toast.
As mentioned, if you need to recover data from the unbootable system, mount its EBS volume on a working system. It may complain that it is in use. If so, and you want to keep the EBS volume, you must check the option that preserves the volume before you terminate the instance. The default on termination of an instance is to destroy its EBS volume because the assumption is that you booted from an EBS-backed AMI that you previously (or regularly) created.