fuse: "Transport endpoint is not connected"

I have a question: when I mount my FUSE filesystem, it shows "Transport endpoint is not connected", and I don't know how to solve this problem.
Here is my code.
[root@localhost u_fs]# ./nl_fs /tmp/fuse

Related

glusterfs error when delete directory - Transport endpoint is not connected

I have mounted GlusterFS on my CentOS 8 server, but the strange thing is that I can create a directory, yet when I try to delete it I get the error "Transport endpoint is not connected".
Here is my mount point
$ mount | grep gluster
10.10.217.21:gluster_vol2/voyager on /mnt/glusterfs type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
I created a directory and tried to remove it:
$ mkdir foo
$ rmdir foo
rmdir: failed to remove 'foo': Transport endpoint is not connected
But the strange thing is that I can create a file and successfully delete it. I have verified basic things like the firewall, and all looks good. (I have no control over the Gluster storage, so what else can I do from the client side to debug?)
It looks like the problem is in the network.
The way to solve this is to remount the mount point:
sudo umount /mnt/glusterfs
sudo mount /mnt/glusterfs

tshark with --export-dicom gives “Segmentation fault (core dumped)”

Scenario: I have two docker container: A(ubuntu) and B(debian). My host is a ubuntu server.
Container A sniff the traffic on the host and write pcap file on a mounted volume (bind). Container B access the same volume (mounted, bind) to extract object from pcap files.
When I run the tshark command tshark -r pcapfile.pcap --export-objects "dicom, targetfolder" inside container B the output is "Segmentation fault (core dumped)".
My best guess so far is that I have a permission problem, although both containers access the volume as root, and changing the file permissions also didn't help.
Am I on the wrong path? Is this error related to a permission problem? What can I do to make both containers share the same mounted volume on the host?
EDIT:
The bug has been fixed. Refer to Wireshark bug 16748.
Am I on the wrong path?
Yes.
Is this error related to a permission problem?
No.
It's related to a bug in Wireshark; "tshark ... gives 'Segmentation fault (core dumped)'" means "there is a bug in tshark".
Please report this as a bug on the Wireshark Bugzilla.

Raspbian Wheezy Owncloud and NFS together

I am trying to set up a file/DLNA server on a Raspberry Pi (Raspbian Wheezy) so the files can be shared by all the devices I use, Android and Linux at a minimum.
I have a USB drive with some decent storage where I keep all my files. So far, I had NFS and DLNA serving the USB drive contents.
Recently, I installed ownCloud. It requires the ownCloud data directory to be owned by www-data. I have mounted the USB drive (from fstab) with the options rw,user,uid=33,gid=33,mask=007. ownCloud works fine (though it is very slow to render the contents).
My NFS exports file is as follows:
/owncloud_data/mystuff *(rw,all_squash,anonuid=33,anongid=33,no_subtree_check)
showmount -e localhost displays the following:
Export list for localhost:
/owncloud_data/mystuff (everyone)
However, when I issue
sudo mount localhost:/owncloud_data/mystuff /my_nfs
I get the following error:
mount.nfs: access denied by server while mounting localhost:/owncloud_data/mystuff
I don't understand why. My guess is that this is because /owncloud_data/mystuff is owned by www-data. But the NFS server runs as root; should it not be able to read the data? Or am I missing something here? I don't get any useful logs in /var/log/messages; I tried including the --debug all option in the NFS config.
I haven't started with DLNA yet (I have installed minidlna, which was working with NFS before I installed ownCloud).
OR, is there a better solution for what I am trying to do?
Please let me know if you need more information in this regard.
Thanks
I won't tick this as an answer; it is a workaround.
The problem is that if I export /owncloud_data/mystuff, the NFS mount does not work. If I export all of /owncloud_data, it works fine (with the export options I mentioned in the original post). I then just mount /owncloud_data/mystuff on the client side (though technically I could mount /owncloud_data there).
I will be happy if anybody can explain this behaviour and make the export of /owncloud_data/mystuff work.
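One possible explanation, offered as an assumption rather than a verified diagnosis: the mask=007/uid=33 mount options suggest the USB drive is vfat, and vfat has no persistent filesystem UUID, so rpc.mountd can have trouble deriving a stable file handle for a subdirectory export; an explicit fsid= on the export line is the usual workaround for that. A hedged sketch (the fsid value is arbitrary):

```
# /etc/exports -- sketch, not verified on Wheezy.
# Exporting the mount root works (as observed above); an explicit
# fsid= may also let the subdirectory export work, since vfat has
# no UUID for mountd to build a file handle from.
/owncloud_data/mystuff *(rw,fsid=1,all_squash,anonuid=33,anongid=33,no_subtree_check)
```

After editing /etc/exports, re-export with exportfs -ra and retry the mount.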

Device node getting created but device driver not getting linked

I have written a simple device driver. On loading the module, my device file is created. But when my application tries to open the device file, I get an error: -1 (Operation not permitted). I tried to look at the device characteristics by executing the command:
$ udevadm info -a -p /sys/class/char/<devname>
I get the output:
KERNEL=="<devname>"
SUBSYSTEM=="char"
DRIVER==" "
So apparently my device node is not getting linked to the device driver.
Can anybody please help me out with this?
Thanks
Have you checked the permissions on the device node udev created?
Udev manages the permissions of those device nodes, and unless you're running as root it's quite likely you're not allowed to read/write from/to the device node.
Edit
If you're running as root the permissions on the device node won't be a factor. Please show us the content of /proc/devices, the output of ls -la /dev/my-device-node and your code.

Connecting to ALSA

When I try to connect to the ALSA sound system as another user on one of our machines, I get the following message: "ALSA lib pcm_dmix.c:975:(snd_pcm_dmix_open) unable to create IPC semaphore". The machine is logged in as another user in our system. It doesn't matter whether I use aplay or my application; I get the same message. If I run as root, the application connects to ALSA and plays the sound. If I su to the user who is logged into the console, I get the same failure.
Does anyone have any ideas? I have tried to use setcap on my program, but this failed with "Operation not supported". This may be because my application is on an NFS-mounted partition.
Try setting ipc_key_add_uid in your .asoundrc file. See the ALSA documentation on PCM plugins for more information.
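A hedged sketch of what such an .asoundrc could look like; the ipc_key value and the hw:0,0 slave device are assumptions, not taken from the question:

```
# ~/.asoundrc -- sketch; key and device values are assumptions
pcm.!default {
    type plug
    slave.pcm "dmixed"
}

pcm.dmixed {
    type dmix
    ipc_key 1024           # any fixed key
    ipc_key_add_uid true   # mix the uid into the key so each user
                           # gets a private dmix IPC semaphore
    slave.pcm "hw:0,0"
}
```

With ipc_key_add_uid enabled, two users no longer contend for the same dmix semaphore, which is what causes the "unable to create IPC semaphore" failure when the console user already owns it.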
