I'm using GlusterFS 3.3.2: two servers, with a brick on each. The volume is "ARCHIVE80".
I can mount the volume on Server2; if I touch a new file, it appears inside the brick on Server1.
However, if I try to mount the volume on Server1, I get an error:
Mount failed. Please check the log file for more details.
The log gives:
[2013-11-11 03:33:59.796431] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-0: changing port to 24011 (from 0)
[2013-11-11 03:33:59.796810] I [rpc-clnt.c:1654:rpc_clnt_reconfig] 0-ARCHIVE80-client-1: changing port to 24009 (from 0)
[2013-11-11 03:34:03.794182] I [client-handshake.c:1614:select_server_supported_programs] 0-ARCHIVE80-client-0: Using Program GlusterFS 3.3.2, Num (1298437), Version (330)
[2013-11-11 03:34:03.794387] W [client-handshake.c:1320:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to set the volume (Permission denied)
[2013-11-11 03:34:03.794407] W [client-handshake.c:1346:client_setvolume_cbk] 0-ARCHIVE80-client-0: failed to get 'process-uuid' from reply dict
[2013-11-11 03:34:03.794418] E [client-handshake.c:1352:client_setvolume_cbk] 0-ARCHIVE80-client-0: SETVOLUME on remote-host failed: Authentication failed
[2013-11-11 03:34:03.794426] I [client-handshake.c:1437:client_setvolume_cbk] 0-ARCHIVE80-client-0: sending AUTH_FAILED event
[2013-11-11 03:34:03.794443] E [fuse-bridge.c:4256:notify] 0-fuse: Server authenication failed. Shutting down.
How come I can mount the volume on one server but not on the other?
It may be a permission problem. Try whether it can be resolved by setting auth.allow:
[root@test1 ~]# gluster volume set ARCHIVE80 auth.allow 'SERVER1IPADDRESS'
It works on my side.
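If you need to mount from both servers, note that auth.allow accepts a comma-separated list of addresses (and the * wildcard), so a variant of the same command (placeholder IPs) would be:
[root@test1 ~]# gluster volume set ARCHIVE80 auth.allow 'SERVER1IPADDRESS,SERVER2IPADDRESS'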
I've created a simple nbd-server instance sharing a single 1 GB file, which I created with:
dd if=/dev/zero of=nbd.file bs=1048576 count=1024
The nbd.conf file looks like this:
[generic]
[export1]
exportname = /Users/michael/Downloads/nbd-3.21/nbd.file
I start the server on my Mac as follows:
nbd-server -d -C /Users/michael/Downloads/nbd-3.21/nbd.conf
But when I try to connect from the Linux client, I get an error:
$ nbd-client -p -b 4096 nbd-server.local -N export1 /dev/nbd0
Negotiation: ..size = 1024MB
Error: Failed to setup device, check dmesg
Exiting.
There is nothing in dmesg and I can't find any documentation on exactly what went wrong. The server output looks like this, showing no obvious errors:
** Message: 20:05:55.820: virtstyle ipliteral
** Message: 20:05:55.820: connect from 192.168.1.105, assigned file is /Users/michael/Downloads/nbd-3.21/nbd.file
** Message: 20:05:55.820: No authorization file, granting access.
** Message: 20:05:55.820: Size of exported file/device is 1073741824
** Message: 20:05:55.821: Starting to serve
Error: Connection dropped: Connection reset by peer
Exiting.
All of these error messages lead me to believe the issue is on the client: it doesn't like something, so it terminates the connection. If I daemonize the server it happily lets the client try to reconnect.
I thought perhaps I should have more lines in my config file, but I don't see any obvious optional config items that would help. I also wondered whether there was some minimum file size, so I bumped the file up from 16 MB to 1 GB.
What does the error "Failed to setup device" mean? How can I troubleshoot what is going wrong or fix it?
Try running the client as root: sudo nbd-client ...
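For example, reusing the exact command from the question (configuring /dev/nbd0 requires root):
sudo nbd-client -p -b 4096 nbd-server.local -N export1 /dev/nbd0
If it still fails as root, make sure the nbd kernel module is loaded (sudo modprobe nbd), since /dev/nbd0 won't exist without it.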
My Kafka deployment uses GlusterFS as its storage. When I apply the Kafka YAML, the pod stays stuck in ContainerCreating, so I checked the pod with kubectl describe and got the following error:
Warning FailedMount 24m kubelet, 10.0.0.156 MountVolume.SetUp failed for volume "pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b" : mount failed: mount failed: exit status 1
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b --scope -- mount -t glusterfs -o auto_unmount,backup-volfile-servers=10.0.0.154:10.0.0.155:10.0.0.156,log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b/kafka-0-glusterfs.log,log-level=ERROR 10.0.0.155:vol_5fcfa0f585ce3677e573cf97f40191d3 /var/lib/kubelet/pods/a32117ca-3ce6-4fc4-b75a-15b63b859b71/volumes/kubernetes.io~glusterfs/pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b
Output: Running scope as unit run-10840.scope.
[2020-03-14 13:56:14.771098] E [glusterfsd.c:825:gf_remember_backup_volfile_server] 0-glusterfs: failed to set volfile server: File exists
Mount failed. Please check the log file for more details.
, the following error information was pulled from the glusterfs log to help diagnose this issue:
[2020-03-14 13:56:14.782472] E [glusterfsd-mgmt.c:1958:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2020-03-14 13:56:14.782519] E [glusterfsd-mgmt.c:2151:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:vol_5fcfa0f585ce3677e573cf97f40191d3)
The same FailedMount warning then repeats for each retry (scopes run-11012, run-11236, and run-11732), this time mounting from volfile server 10.0.0.154 instead of 10.0.0.155, with the same pair of mgmt_getspec_cbk errors each time.
How can I solve the problem?
Ensure the volume name in your YAML file, under path: <the_volume_name>, matches a volume that actually exists on the Gluster servers; you can cross-check it as shown below.
To show all gluster volumes use:
sudo gluster volume status all
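To cross-check, compare the path recorded in the PersistentVolume (PV name taken from the events above; the jsonpath assumes the in-tree GlusterFS plugin) with the volumes Gluster actually knows about:
kubectl get pv pvc-4cebf743-e9a3-4bc0-b96a-e3bca2d7c65b -o jsonpath='{.spec.glusterfs.path}'
sudo gluster volume list
The first command should print vol_5fcfa0f585ce3677e573cf97f40191d3, and that name should appear in the second command's output.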
Restart the volume (in this case my volume is just called gfs):
gluster volume stop gfs
gluster volume start gfs
Now delete your pod and create it again.
Alternatively, try Kadalu.io or Ceph storage.
I am trying to transfer a file from an Arista switch to an external machine over SCP, using pyeapi's run_command.
Reference:
http://www.nycnetworkers.com/upgrade/copying-from-a-sftp-server/
Code:
import pyeapi
node = pyeapi.connect(transport='https', host='x.x.x.x', username='yyy', password='zzz', enable_pwd='xxx', return_node=True)
node.run_command(['copy flash:/file.log.gz scp://user:password@hostname/location/'], encoding='json')
But it throws the following error,
pyeapi.eapilib.CommandError: Error [1000]: CLI command 2 of 2 'copy flash:/file.log.gz scp:user:password@hostname/location/' failed: could not run command
[Error copying flash:/file.log.gz to scp:user:password@hostname/location/ (Warning: Permanently added 'hostname' (ECDSA) to the list of known hosts.
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). lost connection)]
I am fairly new to Linux (and brand new to Chef) and I have run into an issue when setting up my Chef server. I am trying to create an admin user with the command
sudo chef-server-ctl user-create admin Admin Ladmin admin@example.com examplepass -f admin.pem
but I keep getting this error:
ERROR: Connection refused connecting...
ERROR: Connection refused connecting to https://127.0.0.1/users/, retry 5/5
ERROR: Network Error: Connection refused - Connection refused connecting to https://..., giving up
Check your knife configuration and network settings
I also noticed that when I ran chef-server-ctl I got this output:
[2016-12-21T13:24:59-05:00] ERROR: Running exception handlers
Running handlers complete
[2016-12-21T13:24:59-05:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 01 seconds
[2016-12-21T13:24:59-05:00] FATAL: Stacktrace dumped to /var/opt/opscode/local-mode-cache/chef-stacktrace.out
[2016-12-21T13:24:59-05:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-12-21T13:24:59-05:00] FATAL: Chef::Exceptions::CannotDetermineNodeName: Unable to determine node name: configure node_name or configure the system's hostname and fqdn
I read that this error is due to a prerequisite mistake but I'm uncertain as to what it means or how to fix it. So any input would be greatly appreciated.
Your server does not have a valid FQDN (i.e., a fully qualified host name). You'll have to fix this before installing the Chef server.
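As a sketch (hostnames here are placeholders; adjust to your environment), on a systemd-based distro you can set a proper FQDN and then re-run the Chef server configuration:
sudo hostnamectl set-hostname chef.example.com
echo "127.0.1.1 chef.example.com chef" | sudo tee -a /etc/hosts
sudo chef-server-ctl reconfigure
Verify with hostname -f, which should print the FQDN, before you retry user-create.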
I am installing the GlusterFS filesystem on an Ubuntu server and a Linux 6 server, trying to connect the two as peers in a volume.
I installed it successfully on both servers, but due to some replication issues I stopped the GlusterFS service on the Linux 6 server, and since then I have been unable to start the service on that server.
Digging through the logs (tail -f on the GlusterFS cli.log), I found the following errors:
[2015-06-22 16:36:13.342468] W [socket.c:642:__socket_rwv] 0-glusterfs: readv on /var/run/glusterd.socket failed (Invalid argument)
[2015-06-22 16:36:13.342496] I [socket.c:2409:socket_event_handler] 0-transport: disconnecting now
[2015-06-22 16:36:15.490601] W [cli-rl.c:106:cli_rl_process_line] 0-glusterfs: failed to process line
[2015-06-22 16:36:16.344012] W [socket.c:642:__socket_rwv] 0-glusterfs: readv on /var/run/glusterd.socket failed (Invalid argument)
[2015-06-22 16:36:16.344067] I [socket.c:2409:socket_event_handler] 0-transport: disconnecting now
[2015-06-22 16:36:19.346585] W [socket.c:642:__socket_rwv] 0-glusterfs: readv on /var/run/glusterd.socket failed (Invalid argument)
[2015-06-22 16:36:19.346641] I [socket.c:2409:socket_event_handler] 0-transport: disconnecting now
[2015-06-22 16:36:20.545822] W [cli-rl.c:106:cli_rl_process_line] 0-glusterfs: failed to process line
[2015-06-22 16:36:22.346947] W [socket.c:642:__socket_rwv] 0-glusterfs: readv on /var/run/glusterd.socket failed (Invalid argument)
[2015-06-22 16:36:22.347001] I [socket.c:2409:socket_event_handler] 0-transport: disconnecting now
No glusterfs process is running on that server.
Thanks in advance.
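Since the cli.log shows readv on /var/run/glusterd.socket failing, the CLI cannot reach the management daemon, which matches the observation that no glusterfs process is running. A first check (a sketch, assuming the standard glusterd service name on an EL6-style init system) would be:
sudo service glusterd status
sudo glusterd --debug
Running glusterd in the foreground with --debug usually prints the reason it refuses to start, for example a corrupt file under /var/lib/glusterd.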