Can I mount an S3 bucket that is mounted on a Linux EC2 instance to a Windows EC2 instance with Samba (SMB)?

I have a situation where I have to mount an AWS S3 bucket on a Linux EC2 instance using FUSE (S3FS).
Once the bucket is mounted, it needs to be shared with a Windows EC2 instance via Samba (SMB).
I am able to mount the S3 bucket on the Linux EC2 instance using FUSE (S3FS),
but while trying to change the SELinux context for the Samba share I get the error below:
[centos@sharedfs-23-117 /]$ sudo chcon -t samba_share_t /s3fs-ed/
chcon: failed to change context of ‘/s3fs-ed/’ to ‘system_u:object_r:samba_share_t:s0’: Operation not supported
Note: My S3 bucket is mounted on /s3fs-ed.
Observation: df -h lists all the mounts, including the S3 mount:
[centos@sharedfs-23-117 /]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        472M     0  472M   0% /dev
tmpfs           493M  240K  493M   1% /dev/shm
tmpfs           493M   13M  480M   3% /run
tmpfs           493M     0  493M   0% /sys/fs/cgroup
/dev/xvda1      8.0G  1.1G  7.0G  13% /
/dev/xvdb1      8.0G  574M  7.5G   8% /store
tmpfs            99M     0   99M   0% /run/user/1000
s3fs            256T     0  256T   0% /s3fs-ed
But when I try to list the disks by UUID, I do not see the S3 mount listed:
[centos@sharedfs-23-117 /]$ ls -lha /dev/disk/by-uuid
total 0
drwxr-xr-x. 2 root root  80 Jul 16 09:07 .
drwxr-xr-x. 5 root root 100 Jul 16 09:07 ..
lrwxrwxrwx. 1 root root  11 Jul 16 09:07 388a99ed-9486-4a46-aeb6-06eaf6c47675 -> ../../xvda1
lrwxrwxrwx. 1 root root  11 Jul 16 09:07 fce0d14e-24c6-4f69-b04b-b80041cd636f -> ../../xvdb1
I need help sharing the S3 mount with the Windows EC2 instance via Samba.
Note: Due to some limitations I cannot mount S3 on Windows with rclone (https://rclone.org/).
I have also tried disabling SELinux, but I still get the same error:
[centos@sharedfs-23-117 /]$ sestatus
SELinux status:                 disabled

I mount a bucket with s3fs with the following command:
s3fs bucketname /mnt/s3/ -o passwd_file=/etc/.passwd-s3fs -o umask=0000 -o allow_other -o kernel_cache
And I share the folder /mnt/s3 with Samba with no problems.
If you still have trouble, share your s3fs mount command and the smb.conf section you use to share the mount point.
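For reference, a minimal smb.conf share section for such a mount point might look like this (the share name and the guest settings here are assumptions for illustration, not taken from the answer above):

[s3]
   path = /mnt/s3
   browseable = yes
   read only = no
   guest ok = yes

With allow_other and umask=0000 on the s3fs side, the Samba process should be able to read and write the FUSE mount regardless of which user it maps clients to.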

Related

Paramiko exec_command not working with mkfs?

I have an issue executing the following bash commands with Paramiko:
def format_disk(self, device, size, dformat, mount, name):
    stdin_, stdout_, stderr_ = self.client.exec_command(
        f"pvcreate {device};"
        f"vgcreate {name}-vg {device};"
        f"lvcreate -L {size} --name {name}-lv {name}-vg;"
        f"mkfs.{dformat} /dev/{name}-vg/{name}-lv;"
        f"mkdir {mount};"
        f"echo '/dev/{name}-vg/{name}-lv {mount} {dformat} defaults 0 0' >> /etc/fstab")
    print(f"mkfs.{dformat} /dev/{name}-vg/{name}-lv;")
The print statement outputs: mkfs.ext4 /dev/first_try-vg/first_try-lv;
If I copy and paste this exact command on the server, there are no errors and it formats the disk as expected.
Troubleshooting steps
Server before running the Python script:
[root@localhost ~]# ls /first_try
ls: cannot access /first_try: No such file or directory
[root@localhost ~]# vgs
[root@localhost ~]# lvs
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Feb 25 07:32:51 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38b7e96a-71e5-4089-a348-bd23828f9dc8 / xfs defaults 0 0
UUID=72fd2a6a-85db-4596-9fc2-6604d0d865a3 /boot xfs defaults 0 0
Server after running the Python script:
[root@localhost ~]# ls /first_try/
[root@localhost ~]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  first_try-vg   1   1   0 wz--n- <20.00g <15.00g
[root@localhost ~]# lvs
  LV           VG           Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  first_try-lv first_try-vg -wi-a----- 5.00g
[root@localhost ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Feb 25 07:32:51 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=38b7e96a-71e5-4089-a348-bd23828f9dc8 / xfs defaults 0 0
UUID=72fd2a6a-85db-4596-9fc2-6604d0d865a3 /boot xfs defaults 0 0
/dev/first_try-vg/first_try-lv /first_try ext4 defaults 0 0
[root@localhost ~]# mount -a
mount: wrong fs type, bad option, bad superblock on /dev/mapper/first_try--vg-first_try--lv,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
The error from mount -a indicates that the disk is not formatted.
If I format the disk manually and run mount -a it works.
Example:
[root@localhost ~]# mkfs.ext4 /dev/first_try-vg/first_try-lv
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310720 blocks
65536 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
[root@localhost ~]# mount -a
[root@localhost ~]# df -h
Filesystem                               Size  Used Avail Use% Mounted on
/dev/sda3                                 18G  4.7G   14G  27% /
devtmpfs                                 471M     0  471M   0% /dev
tmpfs                                    487M     0  487M   0% /dev/shm
tmpfs                                    487M  8.4M  478M   2% /run
tmpfs                                    487M     0  487M   0% /sys/fs/cgroup
/dev/sda1                                297M  147M  151M  50% /boot
tmpfs                                     98M   12K   98M   1% /run/user/42
tmpfs                                     98M     0   98M   0% /run/user/0
/dev/mapper/first_try--vg-first_try--lv  4.8G   20M  4.6G   1% /first_try
Paramiko could not handle the output from mkfs. I changed the command to use the -q (quiet) flag and was able to get the script to run successfully.
New command: mkfs -q -t {dformat} /dev/{name}-vg/{name}-lv
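A sketch of the adjusted method (the exit-status read at the end is an extra safeguard added here, not part of the original fix; it makes the call block until the remote command chain finishes):

def format_disk(self, device, size, dformat, mount, name):
    # -q keeps mkfs quiet so Paramiko doesn't have to buffer its progress output
    stdin_, stdout_, stderr_ = self.client.exec_command(
        f"pvcreate {device};"
        f"vgcreate {name}-vg {device};"
        f"lvcreate -L {size} --name {name}-lv {name}-vg;"
        f"mkfs -q -t {dformat} /dev/{name}-vg/{name}-lv;"
        f"mkdir {mount};"
        f"echo '/dev/{name}-vg/{name}-lv {mount} {dformat} defaults 0 0' >> /etc/fstab")
    # Wait for the remote chain to finish and surface its exit code
    return stdout_.channel.recv_exit_status()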

/dev/vda1 is full but cannot find why

I have a server running CentOS 7. This is the result of df -h:
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 7.4G     0  7.4G   0% /dev
tmpfs                                1.5G  139M  1.4G  10% /run
/dev/vda1                             46G   44G     0 100% /
tmpfs                                7.4G     0  7.4G   0% /dev/shm
tmpfs                                7.4G     0  7.4G   0% /sys/fs/cgroup
/dev/vda15                            99M  3.6M   95M   4% /boot/efi
/dev/mapper/LVMVolGroup-DATA_VOLUME  138G   17G  114G  13% /mnt/data
tmpfs                                1.5G     0  1.5G   0% /run/user/0
Even though there should be about 2GB of free space on /, it shows the filesystem at 100% usage, and I can't install new packages because it tells me there is no space left on the device.
Besides, if I type sudo du -sh /* | sort -rh | head -15,
the result is:
17G /mnt
1.1G /usr
292M /var
208M /root
139M /run
49M /boot
48M /tmp
32M /etc
28K /home
16K /lost+found
12K /anaconda-post.log
4.0K /srv
4.0K /opt
4.0K /media
0 /sys
So it seems that there are no big files filling up the disk, and the sum of the directory sizes does not come anywhere near 44GB.
Additional info: the only service running on the server is Jenkins, but its home is under /mnt/data/jenkins.
How can I solve the problem?
Found the solution.
The problem was related to some deleted files kept open by Jenkins.
Restarting the service solved the problem.
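To confirm this situation before restarting, you can list files that have been deleted but are still held open by a process (a diagnostic sketch; lsof +L1 shows open files whose link count has dropped to zero):

sudo lsof +L1
# or filter on the "(deleted)" marker that lsof appends:
sudo lsof | grep '(deleted)'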
The problem can also be related to system cache/temp storage. Linux creates cache and temporary files from time to time, especially when a long operation such as a DB import or a cron job runs, or when the server has been up for a long time.
Restarting the service or the server
deletes those cache/temp files, which solved the problem.
Even on Windows we see that kind of performance issue when RAM is low, and restarting the system is the primary fix for it.
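If you suspect temporary files rather than open deleted files, a quick size check of the usual temp and cache locations can confirm or rule that out (a sketch; the paths worth checking vary by distribution):

sudo du -sh /tmp /var/tmp /var/cache 2>/dev/null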

/media directory not working anymore

I can't automount USB sticks on my Linux machine because I have several problems with the /media directory.
Here is my ls -al result on / (I kept only the media and mnt directories for you):
total 116
drwxr-xr-x 25 root       root       4096 Jun 13 09:39 .
drwxr-xr-x 25 root       root       4096 Jun 13 09:39 ..
drwx------  8 acarbonaro acarbonaro 8192 Jan  1  1970 media
drwxr-xr-x  2 root       root       4096 Apr 11  2014 mnt
This already seems strange, as on other systems it is usually owned by root.
When I try sudo chown root:root media, it says permission denied.
When I try sudo chmod 755 media, it doesn't report anything, but when I ls -l afterwards nothing has changed.
The other problem: I don't know why, but the media directory is empty; I can't find the user directory that used to be in it.
When I plug in a USB flash drive, it cannot auto mount. I have to mount it manually in another directory, which is not impossible but clearly not handy.
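For instance, the manual workaround looks roughly like this (assuming the stick shows up as /dev/sdb2, as in the df -T output in the edit below):

sudo mkdir -p /mnt/usb          # temporary mount point outside /media
sudo mount /dev/sdb2 /mnt/usb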
Thank you for your help.
EDIT:
Here is my df -T result:
Filesystem     Type     1K-blocks     Used Available Use% Mounted on
udev           devtmpfs   4015584        8   4015576   1% /dev
tmpfs          tmpfs       805680     1212    804468   1% /run
/dev/sda1      ext4     115214888  9815468  99523708   9% /
none           tmpfs            4        0         4   0% /sys/fs/cgroup
none           tmpfs         5120        0      5120   0% /run/lock
none           tmpfs      4028392   522580   3505812  13% /run/shm
none           tmpfs       102400      600    101800   1% /run/user
/dev/sda2      ext4     130654772 18532260 105462572  15% /home
/dev/sdb2      vfat      14938864   218480  14720384   2% /media
EDIT:
I don't know the answer to my problem, but rebooting reset the /media directory to what it was before, and it works again.
I assume the problem was that you yanked the USB stick out of the port without unmounting it. UNIX is not very keen on parts of its filesystem disappearing. Next time, umount it first, then remove the stick.
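For example (assuming the stick is /dev/sdb2 mounted on /media, as in the df -T output above):

sync                  # flush any pending writes to the stick
sudo umount /media    # detach the filesystem before pulling the stick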

ec2 - Do temporary files fill up root ebs volume if there is no ephemeral instance store?

Related to: How do I add instance storage to an existing Windows EC2 instance?
My root 60G EBS volume fills up very quickly, and I can't find the culprit in the file system. My actual files only take up about 10G. I've found that if I "Stop" and then "Start" the instance, it frees up the remaining 50G.
Note: I started the instance as a free micro and later upgraded to m3.medium. Apparently micro instances don't have ephemeral storage, and you can only add an "instance store" when launching an instance. So I'm thinking I don't have access to ephemeral storage and that the instance is instead eating up my root EBS volume space with temporary files. Is that possible?
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       59G   47G   13G  80% /
devtmpfs        1.9G   12K  1.9G   1% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
# du -sh /* | sort -n
0 /proc
0 /sys
4.0K /local
4.0K /media
4.0K /mnt
4.0K /selinux
4.0K /srv
7.3M /etc
8.0K /tmp
8.1M /bin
8.6G /var
9.7M /sbin
12K /dev
16K /lost+found
17M /root
21M /lib64
24K /run
26M /home
49M /boot
59M /opt
122M /lib
858M /usr
The problem had nothing to do with ephemeral storage. It was that httpd wasn't restarting after logrotate, so it kept the rotated (deleted) log files open and their space was never released until the instance was stopped and started.
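A common guard for this is a postrotate hook in the logrotate config that reloads httpd so it releases the rotated files. A sketch of what /etc/logrotate.d/httpd typically contains (paths and the reload command vary by distribution):

/var/log/httpd/*log {
    missingok
    notifempty
    sharedscripts
    delaycompress
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
    endscript
}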

My mounted EBS volume is not showing up

I'm trying to mount a 384G volume from an old instance on a newly configured instance (8G). The attached 384G volume shows up in lsblk, but it doesn't come up at all in df -h. What am I doing wrong?
[ec2-user@ip-10-111-111-111 ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvdf   202:80   0 384G  0 disk
xvda1  202:1    0   8G  0 disk /
[ec2-user@ip-10-111-111-111 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  1.5G  6.4G  19% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
Note: On EC2 instance dashboard it displays
Root device: /dev/sda1
Block devices: /dev/sda1 /dev/sdf
df -k will only show mounted volumes.
You will need to mount your volume first, like this: mount /dev/xvdf /mnt. Then you will be able to access its content from /mnt and see it when typing df -k.
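Since the volume comes from an old instance, it should already contain a filesystem; you can verify that before mounting (a sketch; the mkfs step is only for a brand-new empty volume and would destroy existing data):

sudo file -s /dev/xvdf          # shows the filesystem type, or just "data" if empty
# sudo mkfs -t xfs /dev/xvdf    # only for an empty volume: this erases everything
sudo mount /dev/xvdf /mnt       # mount it, then it appears in df -h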
For those landing here after not finding their xvdf devices on AWS EC2 C5 or M5 instances: they are renamed to /dev/nvme..., as per the docs:
For C5 and M5 instances, EBS volumes are exposed as NVMe block
devices. The device names that you specify are renamed using NVMe
device names (/dev/nvme[0-26]n1). For more information, see Amazon EBS
and NVMe.
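On those instances the same steps apply with the NVMe names (the exact index depends on attach order, so check lsblk first):

lsblk                           # the attached EBS volume may appear as e.g. nvme1n1
sudo mount /dev/nvme1n1 /mnt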
If this is Windows, follow this:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/recognize-expanded-volume-windows.html#recognize-expanded-volume-windows-disk-management
