How to completely delete a GlusterFS volume [closed]

I have some GlusterFS (version 3.7.11) volumes created and started. After some testing, I stopped and deleted the volumes, but their data still remains on the GlusterFS servers.
For example, I have 3 servers, with bricks stored under /gfs:
[vagrant@gfs-server-2 ~]$ sudo gluster volume create test-vol gfs-server-1:/gfs/test-vol gfs-server-2:/gfs/test-vol gfs-server-3:/gfs/test-vol force
volume create: test-vol: success: please start the volume to access data
[vagrant@gfs-server-2 ~]$ sudo gluster volume start test-vol
volume start: test-vol: success
[vagrant@gfs-server-2 ~]$ mkdir /tmp/test
[vagrant@gfs-server-2 ~]$ sudo mount -t glusterfs gfs-server-1:/test-vol /tmp/test
[vagrant@gfs-server-2 ~]$ sudo touch /tmp/test/`date +%s`.txt
[vagrant@gfs-server-2 ~]$ sudo touch /tmp/test/`date +%s`.txt
[vagrant@gfs-server-2 ~]$ sudo touch /tmp/test/`date +%s`.txt
[vagrant@gfs-server-2 ~]$ sudo touch /tmp/test/`date +%s`.txt
[vagrant@gfs-server-2 ~]$ sudo ls /tmp/test/
1469617442.txt 1469617446.txt 1469617447.txt 1469617449.txt
[vagrant@gfs-server-2 ~]$ ls /gfs/test-vol/
1469617449.txt
[vagrant@gfs-server-2 ~]$ sudo umount /tmp/test
After deleting the volume, I can still see the files remaining on the GlusterFS servers:
[vagrant@gfs-server-2 ~]$ sudo gluster volume stop test-vol
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: test-vol: success
[vagrant@gfs-server-2 ~]$ sudo gluster volume delete test-vol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: test-vol: success
[vagrant@gfs-server-2 ~]$ ls /gfs/test-vol/
1469617449.txt

gluster volume delete does not delete the data from the back-end; you need to delete it manually from the bricks (rm -rf /gfs/test-vol/).

To delete the data from the back-end you need to remove it manually, e.g. in your case:
rm -rf /gfs/test-vol/
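If you plan to reuse the same brick path for a new volume later, note that GlusterFS also leaves a hidden .glusterfs metadata directory and extended attributes on the brick root, and a later volume create may refuse the brick while they remain. A hedged cleanup sketch using the paths from the question (attribute names can vary between GlusterFS versions):
# run on every server that hosted a brick (gfs-server-1..3)
sudo rm -rf /gfs/test-vol/*                                 # user data
sudo rm -rf /gfs/test-vol/.glusterfs                        # gluster's internal metadata
sudo setfattr -x trusted.glusterfs.volume-id /gfs/test-vol  # clear the brick's volume-id marker
sudo setfattr -x trusted.gfid /gfs/test-vol                 # and its gfid marker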

You can mount the volume somewhere, for example at /mnt/data, then run rm -rf /mnt/data/* to clear the data, and then remove the volume from gluster.

Unmount (umount) the directory on every client, then stop and delete the volume, and finally delete the contents of each brick recursively to permanently remove the persistent data.
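A compact sketch of that sequence, reusing the names from the question (gluster volume stop accepts an optional force argument; gluster volume delete does not):
sudo umount /tmp/test                    # on every client that mounted the volume
sudo gluster volume stop test-vol force  # force-stop even if something still uses it
sudo gluster volume delete test-vol      # removes only the volume definition
sudo rm -rf /gfs/test-vol                # on every server: remove the brick contents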

Related

Shared folder permission anomaly with Linux [closed]

So, I have this shared folder symbolically linked as 'shared'. The folder is associated with two users (bill and karen), both of whom are members of the 'bill-karen' group. The problem is, I can't create a new file as either user unless I start a new shell with su - [USER].
This seems odd to me, as the folder is owned by root:bill-karen and its permissions are 2775 (drwxrwsr-x). Is there any reason behind this? I'm using Ubuntu 20.04 LTS, by the way.
How I configured the shared folder:
Note that user bill already exists.
sudo adduser karen
sudo addgroup bill-karen
sudo usermod -aG bill-karen bill
sudo usermod -aG bill-karen karen
sudo mkdir /usr/local/share/shared_folder
sudo chown :bill-karen /usr/local/share/shared_folder
sudo chmod 2775 /usr/local/share/shared_folder
ln -s /usr/local/share/shared_folder /shared
Then I tried creating a new file as bill or karen:
cd /shared
> foo.txt
It says bash: foo.txt: Permission denied
I have created the same setup as you did and could not reproduce your issue.
I ran the following as root in an empty ubuntu 20.04 docker container:
useradd bill
useradd karen
groupadd bill-karen
usermod -aG bill-karen bill
usermod -aG bill-karen karen
mkdir /shared
chown root:bill-karen /shared
chmod 2775 /shared
ln -s /shared /link_to_shared
su - bill
Then as bill I was able to run this:
cd /link_to_shared
touch created_by_bill
exit
When logging in as karen:
su - karen
I could do the following as well:
cd /link_to_shared
rm created_by_bill
touch created_by_karen
exit
I suspect something in your configuration doesn't match the description in your post.
Perhaps you modified the group membership recently and your currently open sessions have not taken it into account?
Try to run the id command to make sure that your users are in the expected groups:
$ id
uid=1000(bill) gid=1000(bill) groups=1000(bill),1002(bill-karen)
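If id in an existing session does not show bill-karen, remember that group changes only apply to new logins; as a stopgap you can pick the group up in the current session with newgrp (a standard tool, shown here as a hedged aside):
newgrp bill-karen   # starts a subshell whose primary group is bill-karen
id                  # bill-karen should now appear in the groups list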

Permission denied when I copy letsencrypt folder using scp [closed]

I want to copy letsencrypt folder from my remote ec2 machine to my local folder.
So I run this command:
scp -i key.pem -r ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com:/etc/letsencrypt my-letsencrypt
Some files are copied, but others fail with a Permission denied error:
scp: /etc/letsencrypt/archive: Permission denied
scp: /etc/letsencrypt/keys: Permission denied
I want to avoid changing file permissions on the EC2 machine.
What can I do to copy this folder to my local filesystem?
You are logging in with the account ubuntu on the server, but that account doesn't have permission to read (and therefore copy) all the files. Most likely some of the files are owned by root and are not readable by others.
You can check the permission yourself with ls -l /etc/letsencrypt.
To copy the files anyway, here are two options:
1. Make a readable copy
On the remote server (logged in via SSH), make a copy of the folder and change the ownership of the copy:
sudo cp -r /etc/letsencrypt ~/letsencrypt-copy
sudo chown -R ubuntu:ubuntu ~/letsencrypt-copy
Now you can copy the files from there:
scp -i key.pem -r ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com:letsencrypt-copy my-letsencrypt
2. Copy as root
If you have SSH access to the root account, just copy the original directory using that account:
scp -r root@ec2-3-188-92-58.us-east-2.compute.amazonaws.com:/etc/letsencrypt my-letsencrypt
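If you would rather not make a server-side copy or touch permissions at all, a third option is to stream a root-made tarball over SSH; a hedged sketch, assuming sudo on the instance does not prompt for a password (the usual default for the ubuntu user on EC2):
# the remote tar runs as root so it can read everything; the stream unpacks locally
mkdir -p my-letsencrypt
ssh -i key.pem ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com \
    "sudo tar czf - /etc/letsencrypt" | tar xzf - -C my-letsencrypt
The files land under my-letsencrypt/etc/letsencrypt because tar records the full path.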
Here you need read permission on the files for your login account. One way is to relax the permissions temporarily:
First SSH to your remote server (ubuntu@ec2-3-188-92-58.us-east-2.compute.amazonaws.com)
sudo su - (make sure you are the root user)
chmod -R 0744 /etc/letsencrypt
Now try the scp download again.
After the download, put the permissions back to 0700:
chmod -R 0700 /etc/letsencrypt
Check the file permissions for archive and keys. They should be 400; change them to 600, then try copying again:
chmod -R 600 ./archive ./keys

ssh key gen Permission Denied [closed]

I am trying to create an SSH key for the deployer user:
[deployer@server /]$ ssh-keygen -t rsa -b 4096 -C "email@yahoo.com"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/deployer/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
open /home/deployer/.ssh/id_rsa failed: Permission denied.
Saving the key failed: /home/deployer/.ssh/id_rsa.
I have tried all of these:
[root@server /]# chmod -R 644 /home/deployer
[root@server /]# chmod -R 755 home/deployer
[root@server /]# chmod -R 755 /home/deployer
[root@server /]# chmod -R 755 home/deployer
Looks like deployer is not the owner of its own home directory. Try giving it ownership:
[root@server /]# chown -R deployer: /home/deployer/
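After the chown it is worth double-checking the permissions ssh expects; a short follow-up sketch (the email is the placeholder from the question):
ls -ld /home/deployer /home/deployer/.ssh   # confirm deployer owns both
chmod 700 /home/deployer/.ssh               # ssh wants .ssh private to its owner
su - deployer -c 'ssh-keygen -t rsa -b 4096 -C "email@yahoo.com"'   # retry as deployer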
It looks like you don't have the privileges to save the files needed to complete the operation. Try running the same command with sudo:
sudo ssh-keygen -t rsa -b 4096 -C "email@yahoo.com"
When prompted for a password, enter your password. If this doesn't work, you can try the su command, which switches to the root user directly.
Try the following:
1) cd /home/deployer
2) ssh-keygen -t rsa
3) chmod 700 .ssh

Docker: Why is /etc/resolv.conf unreadable? Breaks DNS [closed]

I'm using Docker 1.6 on a CentOS 7 host, using CentOS 7 containers.
In most of my containers, DNS doesn't work, because /etc/resolv.conf cannot be read, even by root:
[root@7ba55011e7ab etc]# ls -l /etc/resolv.conf
ls: cannot access /etc/resolv.conf: Permission denied
This happens in most of my containers, even containers that are created directly from the standard Docker centos:latest image. (This problem also occurred when I was using the standard Docker debian image.) The only container in which resolv.conf is readable is the very first one I created from the stock centos image.
Needless to say, I've bounced Docker multiple times, as well as rebooted the host machine. I've also tried using --dns hostname in the OPTIONS in /etc/sysconfig/docker. But of course that doesn't help because it's not the contents of resolv.conf that are a problem, but rather the fact that I can't read it (even as root).
I understand that /etc/resolv.conf is "bind mounted" from the host's /etc/resolv.conf. The host's copy of this file looks fine, and the permissions look reasonable:
[root@co7mars2 etc]# ls -l /etc/resolv.conf
-rw-r--r--. 1 root root 106 Apr 30 18:08 /etc/resolv.conf
I am not able to umount /etc/resolv.conf from within the container:
umount -f -r /etc/resolv.conf
umount: /etc/resolv.conf: must be superuser to umount
Is there a fix for this?
I see some related issues on the Docker github site, such as https://github.com/docker/docker/issues/2267, but they address enhancements for complex use cases, rather than my situation, where I'm just dead in the water.
By the way, I've tried this on two separate and unrelated CentOS 7 hosts, with the same results.
Thanks.
To add to Daniel t's comment, issue 11396 mentions that you can give the container write access (meaning at least read access too) in any one of the following ways:
Disable SELinux for the entire host: setenforce 0
See issue 7952:
# Example of proper behavior on fresh btrfs system when SELinux is in Permissive mode
[~]$ getenforce
Enforcing
[~]$ sudo setenforce 0
[~]$ getenforce
Permissive
[~]$ sudo docker run fedora echo "hello world"
hello world
[~]$ sudo setenforce 1
[~]$ sudo docker run fedora echo "hello world"
echo: error while loading shared libraries: libc.so.6: cannot open shared object file: Permission denied
Set the directory SELinux policy to allow any container access:
chcon -Rt svirt_sandbox_file_t /var/db
Make the container --privileged.
This disables not only SELinux constraints but also the default cgroups restrictions:
docker run --privileged -v /var/db:/data1 -i -t fedora
Disable SELinux policy constraints for this container only:
docker run --security-opt label:disable -v /var/db:/data1 -i -t fedora
Run the container processes as SELinux process type unconfined_t:
docker run --security-opt label:type:unconfined_t -v /var/db:/data1 -i -t fedora
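Before loosening anything, it can help to confirm SELinux really is the blocker; a hedged diagnostic sketch on the CentOS 7 host (ausearch comes from the audit package):
getenforce                        # Enforcing means SELinux is actively enforcing policy
ls -Z /etc/resolv.conf            # -Z shows the SELinux label on the host's copy
sudo ausearch -m avc -ts recent   # recent AVC denials, if auditd is running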

chown command returning Operation not permitted [closed]

I am working on a Raspberry Pi and am having a tough time setting permissions on an external hard drive that I mounted using the following tutorial:
http://www.howtogeek.com/139433/how-to-turn-a-raspberry-pi-into-a-low-power-network-storage-device/
I have now created folders on that external hard drive, and when I run ls -l I get the following:
drwxr-xr-x 2 root root 512 Aug 28 23:24 test
That is located in: /media/USBHDD1/shares
Now I'm trying to give it read, write, and execute permissions for everyone, or even change the owner and group to pi:pi.
However, chmod 777 is not working: it doesn't return an error, it just has no effect.
And when I use
sudo chown -R pi:pi test/
I get the error
chown: changing ownership of `test/': Operation not permitted
This is a linux question but I think someone with background and knowledge of using a raspberry pi can help me out here.
Extra info as requested:
When I run pi@raspberrypi /media $ grep USBHDD1 /etc/mtab
it returns:
/dev/sda1 /media/USBHDD1 vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro 0 0
The reason is that ownership and permissions are defined at mount time for a vfat filesystem.
From the mount(8) manual page, under the mount options for fat:
uid=value and gid=value
Set the owner and group of all files. (Default: the uid and gid
of the current process.)
umask=value
Set the umask (the bitmask of the permissions that are not
present). The default is the umask of the current process. The
value is given in octal.
There are at least three things you can do:
(1) Give pi:pi access to the entire /media/USBHDD1 mount:
mount -o remount,gid=<pi's gid>,uid=<pi's uid> /media/USBHDD1
To determine pi's uid:
cat /etc/passwd |grep pi
To determine pi's gid:
cat /etc/group |grep pi
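As a shorthand, id can print those numbers directly, so option (1) becomes a one-liner (assuming the user really is named pi):
# id -u / id -g expand to pi's numeric uid and gid
sudo mount -o remount,uid=$(id -u pi),gid=$(id -g pi) /media/USBHDD1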
(2) Give everyone access to /media/USBHDD1 by changing the umask and dmask (not recommended):
mount -o remount,umask=000,dmask=000 /media/USBHDD1
(3) Change the partition to a different file system. Only do this if you're not accessing the external hard drive from a Windows computer.
You won't be able to convert the file system from VFAT to a Unix-compatible FS in place, so you'll have to back up the contents of the drive, reformat as EXT3+ or reiserfs, then copy the contents back. You can find tutorials for doing this on the web.
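To make option (1) survive a reboot, the same ownership options can go into /etc/fstab; a hedged sketch (uid/gid 1000 is only the usual default for the first user, so confirm with id -u pi first):
# /etc/fstab: mount the vfat partition with pi's ownership at boot
/dev/sda1  /media/USBHDD1  vfat  defaults,uid=1000,gid=1000,umask=022  0  0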
