AWS ECS volumes do not share any files - Linux

I have an EBS volume mounted on an AWS ECS cluster instance. The EBS volume is mounted under /data:
$ cat /etc/fstab
...
UUID=xxx /data ext4 defaults,nofail 0 2
$ ls -la /data
total 28
drwxr-xr-x 4 1000 1000 4096 May 14 06:11 .
dr-xr-xr-x 26 root root 4096 May 15 21:18 ..
drwxr-xr-x 4 root root 4096 May 14 06:11 .ethereum
drwx------ 2 1000 1000 16384 May 14 05:29 lost+found
Edit: Output of /proc/mounts
[ec2-user@xxx ~]$ cat /proc/mounts
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
/dev/xvda1 / ext4 rw,noatime,data=ordered 0 0
devtmpfs /dev devtmpfs rw,relatime,size=4078988k,nr_inodes=1019747,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/perf_event cgroup rw,relatime,perf_event 0 0
/dev/xvdf /data ext4 rw,relatime,data=ordered 0 0
Now, I would like to mount /data/.ethereum as a Docker volume to /geth/.ethereum in my ECS task definition:
{
  ...
  "containerDefinitions": [
    {
      ...
      "volumesFrom": [],
      "mountPoints": [
        {
          "containerPath": "/geth/.ethereum",
          "sourceVolume": "ethereum_datadir",
          "readOnly": null
        }
      ],
      ...
    }
  ],
  ...
  "volumes": [
    {
      "host": {
        "sourcePath": "/data/.ethereum"
      },
      "name": "ethereum_datadir"
    }
  ],
  ...
}
It appears that the volume is correctly mounted after running the task:
$ docker inspect -f '{{ json .Mounts }}' f5c36d9ea0d6 | python -m json.tool
[
    {
        "Destination": "/geth/.ethereum",
        "Mode": "",
        "Propagation": "rprivate",
        "RW": true,
        "Source": "/data/.ethereum"
    }
]
However, if I create a file inside the container under the mount point, it does not show up on the host machine.
[ec2-user@xxx .ethereum]$ docker exec -it f5c36d9ea0d6 bash
root@f5c36d9ea0d6:/geth# cat "Hello World!" > /geth/.ethereum/hello_world.txt
cat: Hello World!: No such file or directory
root@f5c36d9ea0d6:/geth# echo "Hello World!" > /geth/.ethereum/hello_world.txt
root@f5c36d9ea0d6:/geth# cat /geth/.ethereum/hello_world.txt
Hello World!
root@f5c36d9ea0d6:/geth# exit
exit
[ec2-user@xxx ~]$ cat /data/.ethereum/hello_world.txt
cat: /data/.ethereum/hello_world.txt: No such file or directory
Somehow the file systems are not being shared. Any ideas?

Found the issue.
It seems that with Docker, any mount point (e.g. for EBS volumes) on the host instance has to be in place before the Docker daemon starts; otherwise Docker will write the files into the instance's root file system without you even noticing.
I stopped Docker, unmounted the EBS volume, cleaned everything up, mounted the EBS volume again and started Docker afterwards. Now Docker seems to recognize the mount point and writes everything into my EBS volume as it should.
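For reference, a minimal sketch of that order of operations on an Amazon Linux ECS instance (the service name, the mount -a call, and the cleanup step are assumptions based on the description above, not commands from the original post):
# stop Docker so nothing writes under /data while it is unmounted
sudo service docker stop
# unmount the EBS volume; whatever Docker wrote is now visible on the root fs
sudo umount /data
# remove the stray directory Docker created on the root file system
sudo rm -rf /data/.ethereum
# remount the volume from /etc/fstab, then start Docker after the mount exists
sudo mount -a
sudo service docker start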

Related

bash/sh script with permissions 755 cannot be run

First, the permissions, so you can see that the script is 755:
ls -l
-rw-rw-rw- 1 usuario users 4485 dic 2 11:35 x11vnc.log
-rwxr-xr-x 1 usuario users 117 nov 7 14:06 x11vnc.sh
Second, the script file:
cat x11vnc.sh
#!/bin/bash
x11vnc -nap -wait 30 -noxdamage -passwd somepass -display :0 -forever -o ~/x11vnc.log -bg -rfbport 5900
Third, I should clarify the disk layout:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3,6T 0 disk
├─md126 9:126 0 3,6T 0 raid1
│ ├─md126p1 259:3 0 3,6T 0 part /home/usuario
│ └─md126p2 259:4 0 8G 0 part [SWAP]
└─md127 9:127 0 0B 0 md
sdb 8:16 0 3,6T 0 disk
├─md126 9:126 0 3,6T 0 raid1
│ ├─md126p1 259:3 0 3,6T 0 part /home/usuario
│ └─md126p2 259:4 0 8G 0 part [SWAP]
└─md127 9:127 0 0B 0 md
nvme0n1 259:0 0 232,9G 0 disk
├─nvme0n1p1 259:1 0 232,6G 0 part /
└─nvme0n1p2 259:2 0 256M 0 part /boot
I am the user usuario.
I can edit and modify the x11vnc.sh file as I wish, but I can't run it, and I need to run it so I can include it in the Plasma session autostart.
[usuario@MyPC ~]$ ~/x11vnc.sh
-bash: /home/usuario/x11vnc.sh: permission denied
Why can't I run it?
If I run it in the following way, it works:
[usuario@MyPC ~]$ sh ./x11vnc.sh
PORT=5900
Thank you all, especially @CharlesDuffy.
I changed the fstab line from
UUID=16b711b6-789f-4c27-9d6c-d0f744407f00 /home/usuario ext4 auto,exec,rw,user,relatime 0 2
to
UUID=16b711b6-789f-4c27-9d6c-d0f744407f00 /home/usuario ext4 auto,rw,user,exec,relatime 0 2
The position of exec is important, since user also implies noexec. By putting exec after user, you ensure that exec is set; the most important options should be listed last.
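A quick way to verify which options actually took effect (my own suggested check, not part of the original answer):
# re-apply the current fstab options without rebooting
sudo mount -o remount /home/usuario
# the effective mount options should no longer include noexec
findmnt -no OPTIONS /home/usuario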

Xmobar DiskU does not get the root partition information in Slackware 15

I am trying to show the root partition usage in Xmobar (with XMonad), but it is not working, and there is no verbose output or error message.
I don't know whether the problem is the way Slackware mounts the root partition or the way xmobar works.
Explaining the context:
The disk has three partitions: swap, / and /home
/dev/sda1 is /
/dev/sda2 is swap
/dev/sda3 is /home
In Slackware, the system creates a virtual device for /dev/sda1, calling it /dev/root, with the mount point /
In the Xmobar config (.xmobarrc), none of the options below work:
- Run DiskU [ ( "/", "<size>" ) ] [] 20
- Run DiskU [ ( "root", "<size>" ) ] [] 20
- Run DiskU [ ( "/dev/root", "<size>" ) ] [] 20
- Run DiskU [ ( "sda1", "<size>" ) ] [] 20
- Run DiskU [ ( "/dev/sda1", "<size>" ) ] [] 20
while calling
- Run DiskU [ ( "/", "<size>" ), ("/home", "<size>") ] [] 20
where "/home" is the /dev/sda3 partition, works fine for getting the information about /home.
Reading the Xmobar sources, I see that the list of available partitions is read from /etc/mtab. In my case, /etc/mtab has the list of partitions below:
/dev/root / ext4 rw,relatime 0 0
...
/dev/sda3 /home ext4 rw,relatime 0 0
but I can't get the DiskU function to work.
Any idea to solve this problem is welcome.
Thanks in advance!
With a mount table containing
/dev/sda1 / ext4 rw,relatime 0 0
# ...
/dev/sda3 /home ext4 rw,relatime 0 0
an xmobar configuration along these lines should actually work:
Config
{ -- ...
, template = "... %disku% ..."
-- ...
, commands =
[ -- ...
, Run DiskU
[ ( "/", "Root: <usedp> (<used>/<size>)")
, ( "/home", "Home: <usedp> (<used>/<size>)")
]
[] 20
-- ...
]
-- ...
}
The solution:
Slackware automatically records the mount of /dev/sda1 as /dev/root in /etc/mtab, but /dev/root is a virtual device and does not exist in the /dev folder.
Xmobar tries to access the device /dev/root and cannot find it.
The simplest solution is to create a symbolic link to /dev/root at system startup.
In this case, in Slackware, edit the file /etc/rc.d/rc.local and add:
ln -s /dev/sda1 /dev/root
Problem solved!
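For robustness, a slightly more defensive variant of that rc.local addition (just a sketch; the device name /dev/sda1 is taken from the question):
# create /dev/root only if it does not already exist
if [ ! -e /dev/root ]; then
    ln -s /dev/sda1 /dev/root
fi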

Replace the noexec option with exec in the /etc/fstab file through a shell script

Only the noexec option on the /tmp line should change; the noexec option on the /var/tmp line should not change.
Contents of /etc/fstab:
UUID=f229a689-a31e-4f1a-a823-9a69ee6ec558 / xfs defaults 0 0
UUID=eeb1df48-c9b0-408f-a693-38e2f7f80895 /boot xfs defaults 1 2
UUID=b41e6ef9-c638-4084-8a7e-26ecd2964893 swap swap defaults 0 0
UUID=79aa80a1-fa97-4fe1-a92d-eadf79721204 /var xfs defaults 1 2
UUID=644be3d0-433c-4ed5-bf12-7f61d5b99860 /tmp xfs defaults,nodev,nosuid,noexec 1 2
UUID=decda446-34ac-45b6-826c-ae3f090ed717 /var/log xfs defaults 1 2
UUID=a74170bc-0309-4b3b-862e-722fb7a6882d /var/tmp xfs defaults,nodev,nosuid,noexec 1 2
Using awk:
$ cat 1.awk
$2 == "/tmp" {
    n = split($4, a, ",")
    str = ""
    for (i = 1; i <= n; i++) {
        if (a[i] != "noexec") {
            if (length(str))
                str = str ","
            str = str a[i]
        }
    }
    $4 = str
    print
}
$2 != "/tmp" { print }
$ awk -f 1.awk fstab
UUID=f229a689-a31e-4f1a-a823-9a69ee6ec558 / xfs defaults 0 0
UUID=eeb1df48-c9b0-408f-a693-38e2f7f80895 /boot xfs defaults 1 2
UUID=b41e6ef9-c638-4084-8a7e-26ecd2964893 swap swap defaults 0 0
UUID=79aa80a1-fa97-4fe1-a92d-eadf79721204 /var xfs defaults 1 2
UUID=644be3d0-433c-4ed5-bf12-7f61d5b99860 /tmp xfs defaults,nodev,nosuid 1 2
UUID=decda446-34ac-45b6-826c-ae3f090ed717 /var/log xfs defaults 1 2
UUID=a74170bc-0309-4b3b-862e-722fb7a6882d /var/tmp xfs defaults,nodev,nosuid,noexec 1 2
Note: alignment of fields in the modified line can easily be improved using printf; I cannot tell whether you were using tabs or spaces between the various fields.
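If it has to run as a standalone shell script, a minimal wrapper could look like the sketch below (the backup file name and the write-to-temp-then-move strategy are my own suggestion, not part of the original answer):
#!/bin/bash
set -e
cp /etc/fstab /etc/fstab.bak               # keep a backup before editing
awk -f 1.awk /etc/fstab > /etc/fstab.new   # drop noexec from the /tmp line only
mv /etc/fstab.new /etc/fstab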

linux - df command says disk is 100% used, but actually it's not

I have an EC2 instance running the Jenkins service, and almost every day it stops working, saying the disk is full.
So I log into the instance to check, and it really does say 100% used:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 987M 60K 987M 1% /dev
tmpfs 997M 0 997M 0% /dev/shm
/dev/xvda1 32G 32G 0 100% /
So I use ncdu to check what is occupying the space, and it says only 8.6 GiB is used:
ncdu 1.10 ~ Use the arrow keys to navigate, press ? for help
--- / -------------------------------------------------------
3.7GiB [##########] /var
1.7GiB [#### ] /home
1.6GiB [#### ] /usr
1.0GiB [## ] swapfile
323.6MiB [ ] /opt
133.8MiB [ ] /lib
47.3MiB [ ] /boot
43.8MiB [ ] /root
23.1MiB [ ] /public
19.8MiB [ ] /lib64
12.3MiB [ ] /sbin
10.7MiB [ ] /etc
7.0MiB [ ] /bin
3.7MiB [ ] /tmp
60.0KiB [ ] /dev
e 16.0KiB [ ] /lost+found
16.0KiB [ ] /.gnupg
12.0KiB [ ] /run
e 4.0KiB [ ] /srv
e 4.0KiB [ ] /selinux
e 4.0KiB [ ] /mnt
e 4.0KiB [ ] /media
e 4.0KiB [ ] /local
e 4.0KiB [ ] /cgroup
. 0.0 B [ ] /proc
0.0 B [ ] /sys
0.0 B [ ] .autorelabel
0.0 B [ ] .autofsck
Total disk usage: 8.6GiB Apparent size: 8.6GiB Items: 379695
Then I reboot the instance and the usage comes back down to only 28%:
$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 987M 60K 987M 1% /dev
tmpfs 997M 0 997M 0% /dev/shm
/dev/xvda1 32G 8.7G 23G 28% /
What could be causing this problem?
Additional data:
PSTREE
init─┬─acpid
├─agetty
├─amazon-ssm-agen───6*[{amazon-ssm-agen}]
├─atd
├─auditd───{auditd}
├─crond
├─dbus-daemon
├─2*[dhclient]
├─java─┬─sh───sudo───gulp --prod───9*[{gulp --prod}]
│ └─67*[{java}]
├─lvmetad
├─lvmpolld
├─6*[mingetty]
├─rngd
├─rpc.statd
├─rpcbind
├─rsyslogd───3*[{rsyslogd}]
├─2*[sendmail]
├─sshd───sshd───sshd───bash───pstree
├─supervisord───2*[php]
└─udevd───2*[udevd]
CRONTAB
* * * * * cd /home/ec2-user/laravel-prod && php artisan schedule:run >> /dev/null 2>&1
0 1 * * * rm /var/log/jenkins/jenkins.log
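Given the nightly rm /var/log/jenkins/jenkins.log in the crontab, one thing worth checking (my own suggestion, not something stated in the thread) is whether a running process still holds a deleted file open; df counts that space, while ncdu and du cannot see it:
# list files that have been deleted but are still held open by a process
sudo lsof +L1
# replacing the rm in the cron job with a truncate avoids leaving a
# deleted-but-open log behind in the first place
truncate -s 0 /var/log/jenkins/jenkins.log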

Unable to mount volume created by terraform

I am using the following Terraform template:
resource "aws_instance" "ec2" {
ami = "${var.ami_id}"
instance_type = "${var.flavor}"
key_name = "${var.key_name}"
availability_zone = "${var.availability_zone}"
security_groups= ["${var.security_group}"]
tags = {Name = "${var.instance_name}"}
}
resource "aws_volume_attachment" "ebs_volume" {
device_name = "/dev/sdg"
volume_id = "vol-006d716dad719545c"
instance_id = "${aws_instance.ec2.id}"
}
to launch an instance in AWS and attach a volume to that instance.
When I execute this, I see that the instance is created and the volume is attached to the instance as well.
ubuntu@ip-172-31-10-43:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 91M 1 loop /snap/core/6350
loop1 7:1 0 18M 1 loop /snap/amazon-ssm-agent/930
loop2 7:2 0 88.4M 1 loop /snap/core/6964
loop3 7:3 0 18M 1 loop /snap/amazon-ssm-agent/1335
xvda 202:0 0 50G 0 disk
└─xvda1 202:1 0 50G 0 part /
xvdg 202:96 0 20G 0 disk
But when I try to mount the volume, I get this weird error:
ubuntu@ip-172-31-10-43:~$ sudo mkdir -p /goutham
ubuntu@ip-172-31-10-43:~$ sudo mount /dev/xvdg /goutha,
mount: /goutha,: mount point does not exist.
ubuntu@ip-172-31-10-43:~$ sudo mount /dev/xvdg /goutham
mount: /goutham: wrong fs type, bad option, bad superblock on /dev/xvdg, missing codepage or helper program, or other error.
Can anyone please help me figure out what mistake I am making in this exercise?
Thanks in advance.
You can create a file system on an attached disk using user data in your Terraform script.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
https://www.terraform.io/docs/providers/aws/r/instance.html#user_data
Create a shell script, templates/mkfs.sh:
#!/bin/bash
while ! ls /dev/xvdg > /dev/null
do
    sleep 5
done
if [ `file -s /dev/xvdg | cut -d ' ' -f 2` = 'data' ]
then
    mkfs.xfs /dev/xvdg
fi
Terraform script:
data "template_file" "mkfs" {
template = "${file("${path.module}/templates/mkfs.sh")}"
}
resource "aws_instance" "ec2" {
...
user_data = "${data.template_file.mkfs}"
...
}
It will run when the EC2 instance is created, wait until the disk device appears, and then create the file system.
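The script above only creates the file system; if the volume should also be mounted from user data, a small extension could be appended to templates/mkfs.sh (the mount point /goutham is taken from the question, so this is only a sketch):
# mount the freshly formatted volume
mkdir -p /goutham
mount /dev/xvdg /goutham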
I figured it out. I think I missed creating the file system on the volume, since the volume I am trying to attach is an empty volume,
so this helped me out:
$ sudo mkfs -t xfs /dev/xvdg
and
sudo mkdir -p /goutham
sudo mount /dev/xvdg /goutham
Thanks
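A possible follow-up (my own addition, not part of the original answer): to keep the mount across reboots, the volume can also be added to /etc/fstab, e.g. with the nofail option. The UUID below is a placeholder; the real one comes from sudo blkid /dev/xvdg.
UUID=<uuid-of-xvdg> /goutham xfs defaults,nofail 0 2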
