I have run the following code from a databricks notebook:
dbutils.fs.mount('<ip_address>', '/mnt/efs')
And I get the following error:
IllegalArgumentException: Unsupported scheme: null. Allowed schemes are: gs,s3a,s3n,wasbs,adl,abfss.
I have pinged my EFS IP address from my Databricks notebook and I get no errors on that side.
Moreover, I have tried following this blog post, but when running this command:
%sh
mkdir -m 777 /mnt/efs
mount -t nfs4 -o nosuid,nodev <ip_address>:/ /mnt/efs
I get this error:
mount: /mnt/efs: cannot mount <ip_address>:/ read-only.
For your information, I am using Databricks on AWS.
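As the error message itself suggests, dbutils.fs.mount only accepts cloud-storage URIs with one of the listed schemes (gs, s3a, s3n, wasbs, adl, abfss), so an NFS target such as EFS has to be mounted at the OS level instead. A minimal sketch, assuming a hypothetical mount-target IP, Ubuntu-based cluster nodes, and outbound TCP 2049 allowed by the security group:

```shell
# Sketch: OS-level NFS mount of EFS from a %sh cell or cluster init script.
# EFS_IP is a placeholder; replace it with your EFS mount target's IP.
EFS_IP="10.0.0.10"
MOUNT_POINT="/mnt/efs"
# Options along the lines AWS recommends for EFS over NFSv4.1
NFS_OPTS="rw,nosuid,nodev,nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600"

# The userspace helper /sbin/mount.nfs4 comes from nfs-common on Debian/Ubuntu
sudo apt-get install -y nfs-common 2>/dev/null \
  || echo "could not install nfs-common automatically; install it manually"
sudo mkdir -p "$MOUNT_POINT" 2>/dev/null
sudo mount -t nfs4 -o "$NFS_OPTS" "$EFS_IP:/" "$MOUNT_POINT" 2>/dev/null \
  || echo "mount failed: check the security group allows TCP 2049 from the cluster"
```

The misleading "cannot mount ... read-only" message often appears when the NFS client utilities are missing and mount falls back to a read-only retry, so installing nfs-common is the usual first step.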
I'm trying to follow these steps to get a docker container running NextCloud on my Raspberry Pi. The steps seem very straightforward, except I can't seem to get this working. The biggest difference is that I want to use an external drive as the data location. Here's what's happening:
I run sudo docker run -d -p 4442:4443 -p 442:443 -p 79:80 -v /mnt/nextclouddata:/data --name nextcloud ownyourbits/nextcloudpi-armhf
but when I go to https://pi_ip_address:442/activate (or any of the other ports), I get "problem loading page". I've also tried using https://raspberrypi.local:442/activate as well as appending both the IP and the name to the end of the command (where the DOMAIN is listed in the instructions).
I've seen some posts talking about how this is a problem with how docker accesses mounted drives, but I can't seem to get it working. When I type sudo docker logs -f nextcloud I get the following errors:
/run-parts.sh: line 47: /etc/services-enabled.d/010lamp: Permission denied
/run-parts.sh: line 47: /etc/services-enabled.d/020nextcloud: Permission denied
Init done
Does anyone have any steps to help get this working? I can't seem to find a consistent/working answer.
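One common cause of "Permission denied" from the container's startup scripts is the external drive itself rather than docker: a drive formatted FAT/NTFS ignores Unix ownership, and a noexec mount breaks scripts. A sketch of what to check, assuming the host path from the run command above:

```shell
# Sketch: inspect how the external drive is mounted before blaming docker.
DATA_DIR="/mnt/nextclouddata"   # host path from the docker run command

# Show the filesystem type and mount options; vfat/ntfs or a "noexec"
# option here would explain the permission errors.
findmnt -no FSTYPE,OPTIONS "$DATA_DIR" 2>/dev/null \
  || echo "$DATA_DIR is not a mount point: is the drive actually mounted?"

# If the options include noexec, remount with exec enabled:
sudo mount -o remount,exec "$DATA_DIR" 2>/dev/null \
  || echo "remount failed (or was not needed)"
```

If the drive turns out to be FAT/NTFS, reformatting it as ext4 (or another Unix filesystem) is usually the cleanest fix for container data directories.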
Thanks!
I created an Azure File Share on my Storage Account (v2). Under the Connect label I copied the commands to mount the File Share with SMB 3.0.
I didn't achieve my goal. Error received: Mount error(115): Operation now in progress
The Azure troubleshooting page was no help: https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems#mount-error115-operation-now-in-progress-when-you-mount-azure-files-by-using-smb-30
I am on a freshly updated Debian 10 (updated yesterday). I also tried from a docker image ubuntu:18.04, but the result didn't change, so I guess the problem is more than just my own mistakes.
The error is returned by the latest instruction:
$> mount -t cifs //MY_ACCOUNT.file.core.windows.net/MY_FILE_SHARE /mnt/customfolder -o vers=3.0,credentials=/etc/smbcredentials/MY_CREDENTIALS,dir_mode=0777,file_mode=0777,serverino
My attempts:
I tried changing the SMB version from 3.0 to 3.11 ---> NOTHING
I tried using username and password instead of a credentials file ---> NOTHING
Using smbclient -I IP -p 445 -e -m SMB3 -U MY_USERNAME \\\\MY_ACCOUNT.file.core.windows.net\\MY_FILE_SHARE ----> NOTHING
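Mount error(115) generally means the TCP connection to the file share never completes, and the most frequent culprit is outbound port 445 being blocked by the ISP or a firewall. A quick connectivity check before retrying the mount (a sketch; nc flags per common netcat builds, hostname is the placeholder from above):

```shell
# Sketch: verify TCP 445 is reachable before debugging mount options.
STORAGE_HOST="MY_ACCOUNT.file.core.windows.net"   # placeholder account name

if nc -zw3 "$STORAGE_HOST" 445 2>/dev/null; then
  PORT_445="reachable"
else
  PORT_445="blocked"
fi
echo "TCP 445 to $STORAGE_HOST: $PORT_445"
```

If the port is blocked, no choice of vers= or credential option will help; Azure's suggested workarounds (VPN/ExpressRoute, or a network that allows 445) apply instead.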
Thanks for help.
I am using Rancher v2 as the k8s management platform, running RancherOS nodes on VMware vSphere. I manually installed open-iscsi and mounted a 50 GB volume on the worker nodes for use by OpenEBS (I will have to figure out how to automate that on node creation). I also created a cStor storage class, and that all looks good. However, I have not been able to get a container to provision a PV using a PVC.
Warning FailedMount Unable to mount volumes for pod "web-test-54d9845456-bc8fc_infra-test(10f856c1-6882-11e9-87a2-0050568eb63d)": timeout expired waiting for volumes to attach or mount for pod "infra-test"/"web-test-54d9845456-bc8fc". list of unmounted volumes=[cstor-vol-01]. list of unattached volumes=[web-test-kube-pvc vol1 man-volmnt-01 cstor-vol-01 default-token-lxffz]
Warning FailedMount MountVolume.WaitForAttach failed for volume "pvc-b59c9b5d-6857-11e9-87a2-0050568eb63d" : failed to get any path for iscsi disk, last err seen: iscsi: failed to sendtargets to portal 10.43.48.95:3260 output: iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not open /run/lock/iscsi: No such file or directory iscsiadm: Could not add new discovery record. , err exit status
I followed the steps below to enable iSCSI on RancherOS, from the Prerequisites section for RancherOS in the OpenEBS documentation.
sudo ros s up open-iscsi
sudo ros config set rancher.services.user-volumes.volumes [/home:/home,/opt:/opt,/var/lib/kubelet:/var/lib/kubelet,/etc/kubernetes:/etc/kubernetes,/var/openebs]
sudo system-docker rm all-volumes
sudo reboot
From the RancherOS GitHub repository, I found that we need to create a lock directory, and recreate it on every boot, as follows:
$ mkdir /run/lock
# update cloud-config
#cloud-config
runcmd:
- [mkdir, /run/lock]
Reference: issue #2435 in the rancher/os GitHub repository.
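With the lock directory in place, iSCSI discovery against the portal from the FailedMount event can be retried directly on the node, which confirms whether the iscsiadm side is fixed before going back through Kubernetes. A sketch (portal address taken from the error above):

```shell
# Sketch: re-run the discovery that kubelet's mount was failing on.
LOCK_DIR="/run/lock"
PORTAL="10.43.48.95:3260"   # portal from the FailedMount event

# iscsiadm needs /run/lock to exist, per the rancher/os issue.
[ -d "$LOCK_DIR" ] || sudo mkdir -p "$LOCK_DIR" 2>/dev/null
sudo iscsiadm -m discovery -t sendtargets -p "$PORTAL" 2>/dev/null \
  || echo "discovery still failing: re-check the open-iscsi service on this node"
```

If discovery succeeds here but the PVC still times out, the remaining problem is on the Kubernetes/OpenEBS side rather than the node's iSCSI setup.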
My server configuration is as follows
A ceph cluster server(10.1.1.138)
B ceph cluster server(10.1.1.54)
C ceph client (10.1.1.238)
I could mount using the following ceph-fuse command
sudo ceph-fuse -k /etc/ceph/ceph.client.admin.keyring -m 10.1.1.138:6789 /mnt/mycephfs/
But I don't know how to mount with /etc/fstab
The following fstab entry fails:
sudo vim /etc/fstab
10.1.1.138:/ /mnt/mycephfs fuse.ceph name=admin,secretfile=/home/ec2-user/admin.secret,noatime 0 2
sudo mount -a
-> Syntax error occurred.
Mounting with the kernel driver instead of ceph-fuse works:
sudo vim /etc/fstab
10.1.1.138:/ /mnt/mycephfs ceph name=admin,secretfile=/home/ec2-user/admin.secret,noatime 0 2
sudo mount -a
-> success
I can't find how to specify the monitor IP even in the official tutorial:
http://docs.ceph.com/docs/kraken/cephfs/fstab/
I don't know why there is no way to specify the IP of each cluster server in the official tutorial. If it can be mounted without specifying an IP, I would like to know the principle behind it.
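On the "how does it work without an IP" point: ceph-fuse reads the monitor addresses from the mon host entries in /etc/ceph/ceph.conf, so the fstab entry only needs to name the client id (and optionally the conf file); the device field is not a host:path at all. A sketch matching the kraken-era docs, assuming the default conf path:

```
# /etc/fstab (ceph-fuse; monitor IPs come from /etc/ceph/ceph.conf)
id=admin,conf=/etc/ceph/ceph.conf  /mnt/mycephfs  fuse.ceph  defaults,_netdev,noatime  0  0
```

The kernel-driver entry succeeds with an explicit 10.1.1.138:/ device because mount.ceph takes the monitor address directly, which is why the two fstab formats differ.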
Am I misunderstanding something? Any hint would be appreciated.
Thank you for reading my question.
I am following the basic nfs server tutorial here; however, when I try to create the test busybox replication controller, I get an error indicating that the mount has failed.
Can someone point out what am I doing wrong ?
MountVolume.SetUp failed for volume
"kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs"
(spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID:
"4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit
status 32 Mounting arguments: 10.63.243.192:/exports
/var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs
nfs [] Output: mount: wrong fs type, bad option, bad superblock on
10.63.243.192:/exports, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a
/sbin/mount. helper program) In some cases useful info is found
in syslog - try dmesg | tail or so
I have tried using an ubuntu VM as well, just to see if I could mitigate a possibly missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
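Exit status 32 with "wrong fs type, bad option, bad superblock" from an NFS mount usually means the userspace mount helper is missing, and it must be present on the node that runs the pod, not on the machine where kubectl runs. A quick check, as a sketch to run on the node itself:

```shell
# Sketch: the mount is performed by kubelet on the node, so check there.
if command -v mount.nfs >/dev/null 2>&1 || [ -x /sbin/mount.nfs ]; then
  NFS_HELPER="present"
else
  NFS_HELPER="missing"
fi
echo "mount.nfs is $NFS_HELPER"
# If missing on a Debian/Ubuntu node: sudo apt-get install nfs-common
```

Installing nfs-common inside a VM or workstation does not help if the cluster nodes use an image that lacks the helper, which is what the answer below points to.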
Which container image are you using? On the 18th of October Google announced a new container image, which doesn't support NFS yet. Since Kubernetes 1.4 this image (called gci) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations