Data deleted from /mnt directory after stopping and starting an EC2 instance (Linux)

[root@localhost opt]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.7G 1.4G 7.8G 15% /
none 1.9G 0 1.9G 0% /dev/shm
/dev/xvdb 394G 199M 374G 1% /mnt
I placed my data in /mnt. I stopped the instance yesterday.
After starting the instance today, I didn't find any data in /mnt.
I have other data in /opt.
How can I recover the data from /mnt?
If /mnt is a temporary mount point, then how can I use all of that space?

On EC2, the /mnt directory is mounted on ephemeral (instance store) storage.
After an instance stop/start, all data on it is lost.
Please refer to this post.

It is a common misconception that a reboot/restart will wipe the ephemeral storage - this isn't true.
You can try it yourself and see.
What will wipe it is a stop/start - that actually deprovisions your VM and then moves it to another host machine, which will have wiped ephemeral drive(s), and then starts it up with your root EBS volume (at least) attached. Stop/start and reboot are often conflated - but they are very different things here.

/mnt should really just be used for ephemeral data storage that is not critical if the instance needs to be restarted. This is actually well suited for things such as a local on-disk cache, temporary data storage, etc., as this ephemeral storage will oftentimes perform better from an I/O standpoint than, say, an EBS volume. Just understand that you should only place non-critical data there.
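If you need the extra space for data that has to survive a stop/start, a common alternative is to attach and mount a separate EBS volume rather than relying on /mnt. A minimal sketch, assuming the volume shows up as /dev/xvdf and you want it at /data (both names are placeholders):
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data
sudo mount /dev/xvdf /data
echo "/dev/xvdf /data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab
You can then keep fast, disposable working data in /mnt and periodically copy anything worth keeping onto the EBS volume, e.g. with rsync -a /mnt/work/ /data/work/.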

Related

Options for storing many small images for fast batch access on Google Cloud?

We have a few datasets of small images, where each image is about 100KB, and there are about 50K images per dataset (around 5GB per dataset). We typically use these datasets to batch-load each image incrementally into the memory of a Google VM instance in order to perform machine learning studies. This is done several times a day.
Currently, a few of us each have our own Google Persistent Disk attached to the VM with the datasets replicated on each. This is not ideal since they are pricey; however, data access is very fast, which allows us to iterate on our studies fairly rapidly. We don't share one disk because of the inconvenience of having to manage read/write settings with Google disks when sharing.
Is there an alternative Google Cloud option to handle this use case? Google Buckets are too slow since it is reading many small files.
If your main interest is rapid I/O, your best bet is using an SSD, for obvious reasons. What I don't understand is why you don't want to share one disk. You can have one SSD attached to one of your instances as R/W for loading and modifying your datasets and mount it read-only on the instances that need to fetch the data.
I'm not sure how much faster this solution will be compared to using a bucket, though. I guess you are aware that gsutil has an option for multithreaded transfers, which significantly increases the data transfer speed, especially when transferring a lot of small files? The flag is -m:
-m Causes supported operations (acl ch, acl set, cp, mv, rm, rsync,
and setmeta) to run in parallel. This can significantly improve
performance if you are performing operations on a large number of
files over a reasonably fast network connection.
gsutil performs the specified operation using a combination of
multi-threading and multi-processing, using a number of threads
and processors determined by the parallel_thread_count and
parallel_process_count values set in the boto configuration
file. You might want to experiment with these values, as the
best values can vary based on a number of factors, including
network speed, number of CPUs, and available memory.
Using the -m option may make your performance worse if you
are using a slower network, such as the typical network speeds
offered by non-business home network plans. It can also make
your performance worse for cases that perform all operations
locally (e.g., gsutil rsync, where both source and destination
URLs are on the local disk), because it can "thrash" your local
disk.
If a download or upload operation using parallel transfer fails
before the entire transfer is complete (e.g. failing after 300 of
1000 files have been transferred), you will need to restart the
entire transfer.
Also, although most commands will normally fail upon encountering
an error when the -m flag is disabled, all commands will
continue to try all operations when -m is enabled with multiple
threads or processes, and the number of failed operations (if any)
will be reported at the end of the command's execution.
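For reference, a couple of example invocations (bucket and path names are placeholders, not taken from the question):
gsutil -m cp -r gs://my-dataset-bucket/images ./images
gsutil -m rsync -r gs://my-dataset-bucket/images ./images
The rsync form only transfers objects that changed since the last run, which helps when the same dataset is reloaded several times a day.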
If you want to go with one instance having the R/W SSD and multiple read-only clients, see below:
One option is to set up an NFS share on your SSD; one instance will act as the NFS server with R/W rights and the rest will have only read permissions. I will be using Ubuntu 16.04, but the process is similar in all distros:
1 - Install the required packages on both server and clients:
Server: sudo apt install nfs-kernel-server
Client: sudo apt install nfs-common
2 - Mount the SSD disk on the server (after formatting it with the filesystem you want to use):
Server:
jordim@instance-5:~$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb      8:16   0  50G  0 disk  <--- My extra SSD disk
sda      8:0    0  10G  0 disk
└─sda1   8:1    0  10G  0 part /
jordim@instance-5:~$ sudo fdisk /dev/sdb
(I will create a single primary ext4 partition)
jordim@instance-5:~$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb      8:16   0  50G  0 disk
└─sdb1   8:17   0  50G  0 part  <- Newly created partition
sda      8:0    0  10G  0 disk
└─sda1   8:1    0  10G  0 part /
jordim@instance-5:~$ sudo mkfs.ext4 /dev/sdb1
(...)
jordim@instance-5:~$ sudo mkdir /mount
jordim@instance-5:~$ sudo mount /dev/sdb1 /mount/
Make a dir for your NFS share folder:
jordim@instance-5:/mount$ sudo mkdir share
Now configure the exports on your server. Add the folder to share and the private IPs of the clients. Also you can tweak permissions here, use "ro" for "read only" or "rw" for read-write permissions.
jordim@instance-5:/mount$ sudo vim /etc/exports
(inside the exports file, note the IP is the private IP of the client instance):
/mount/share 10.142.0.5(ro,sync,no_subtree_check)
Now start the nfs service on the server:
root@instance-5:/mount# systemctl start nfs-server
Now to create the mountpoint on the client:
jordim@instance-4:~$ sudo mkdir -p /nfs/share
And mount the folder:
jordim@instance-4:~$ sudo mount 10.142.0.6:/mount/share /nfs/share
Now let's test it:
Server:
jordim@instance-5:/mount/share$ touch test
Client:
jordim@instance-4:/nfs/share$ ls
test
Also, see the mounts:
jordim@instance-4:/nfs/share$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 9.9M 360M 3% /run
/dev/sda1 9.7G 1.5G 8.2G 16% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 370M 0 370M 0% /run/user/1001
10.142.0.6:/mount/share 50G 52M 47G 1% /nfs/share
There you go: now you have a single instance with a R/W disk and as many read-only clients as you want.
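To make this setup survive reboots (a small sketch; the IPs and paths are the ones used in the example above), re-export on the server after any change to /etc/exports and enable the NFS service, then add the mount to /etc/fstab on the client:
Server:
sudo exportfs -ra
sudo systemctl enable nfs-kernel-server
Client (line to append to /etc/fstab):
10.142.0.6:/mount/share /nfs/share nfs ro,defaults 0 0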

How to mount an Azure data disk in a virtual machine

How can I mount an Azure data disk from a Linux virtual machine?
I think it might be something like this
az vm disk attach-existing [virtualmachinename] [datadiskname]
I found the solution. It's confusing because the documentation for creating an Azure disk is hard to separate from the documentation for creating a mount point. This is the relevant documentation:
https://learn.microsoft.com/en-us/azure/virtual-machines/linux/add-disk#connect-to-the-linux-vm-to-mount-the-new-disk
For an alternative walkthrough, see this blog: https://chrismckee.co.uk/creating-mounting-new-drives-in-ubuntu-azure/. I couldn't identify the disk I'd like to mount with the official Azure docs and this post helped.
You can attach a disk of any size to an Azure virtual machine:
https://mocktool.com/2020/11/24/attach-managed-disk-to-azure-linux-virtual-machine
Find the disk
Once connected to your VM, you need to find the disk. In this example, we are using lsblk to list the disks.
lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
The output is similar to the following example:
sda 0:0:0:0 30G
├─sda1 29.9G /
├─sda14 4M
└─sda15 106M /boot/efi
sdb 1:0:1:0 14G
└─sdb1 14G /mnt
sdc 3:0:0:0 50G
Here, sdc is the disk that we want, because it is 50G. If you aren't sure which disk it is based on size alone, you can go to the VM page in the portal, select Disks, and check the LUN number for the disk under Data disks.
Mount the disk
Now, create a directory to mount the file system using mkdir. The following example creates a directory at /datadrive:
sudo mkdir /datadrive
Then use mount to mount the filesystem. The following example mounts the /dev/sdc1 partition to the /datadrive mount point:
sudo mount /dev/sdc1 /datadrive
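If the data disk is brand new (no partition table or filesystem yet), you have to partition and format it before the mount above will work, and an /etc/fstab entry keeps it mounted across reboots. A minimal sketch, assuming the disk is /dev/sdc and you want ext4 (replace the UUID placeholder with the value blkid prints):
sudo parted /dev/sdc --script mklabel gpt mkpart primary ext4 0% 100%
sudo partprobe /dev/sdc
sudo mkfs.ext4 /dev/sdc1
sudo blkid /dev/sdc1
Then append a line like the following to /etc/fstab:
UUID=<uuid-from-blkid>   /datadrive   ext4   defaults,nofail   1   2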

Make a backup of the whole server that can be restored later

I have a server with the following disk structure:
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 219G 192G 17G 93% /
tmpfs 16G 0 16G 0% /lib/init/rw
udev 16G 124K 16G 1% /dev
tmpfs 16G 0 16G 0% /dev/shm
/dev/sda2 508M 38M 446M 8% /boot
/dev/sdb1 2.7T 130G 2.3T 5% /media/3TB1
I am interested in making a backup of the whole server on my local machine. When the time comes, I want to be able to restore a new server from that local backup. What procedure do you recommend?
I tried rsync, but the indexing took extremely long, so I aborted it. Then I used scp, and it is currently working. However, there are lots of symbolic links that weren't transferred to the local machine, and I worry I won't be able to restore it later on.
Since your sda isn't very large and a lot of it is used anyway, I'd create a complete backup of the block device. Your sdb, however, is very large and only a small part of it is used. Of that I'd create a file system backup.
Boot your server with a Ubuntu live CD and become root (sudo su -).
Attach your backup medium (I assume it's mounted as /mnt/backup/ in the following).
Create a block device backup of sda: cat /dev/sda > /mnt/backup/sda
Mount your sdb (I assume it's mounted as /media/3TB1/ in the following).
Create a file system backup of sdb: rsync -av /media/3TB1/ /mnt/backup/sdb/
For restoring the backup later:
Boot your server with a Ubuntu live CD and become root (sudo su -).
Attach your backup medium (I assume it's mounted as /mnt/backup/ in the following).
Restore the block device backup of sda: cat /mnt/backup/sda > /dev/sda
Mount your sdb (I assume it's mounted as /media/3TB1/ in the following).
Restore the file system backup of sdb: rsync -av /mnt/backup/sdb/ /media/3TB1/
There are more fancy ways of doing it for sure. But this routine worked for me lots of times.
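Two optional notes on the same routine (a sketch, not a requirement): the block image can be compressed on the fly so it takes far less space on the backup medium, and rsync -a (already used above) preserves symbolic links, permissions and timestamps, so the symlink problem seen with scp does not apply; add -H if hard links also matter:
dd if=/dev/sda bs=1M conv=noerror,sync | gzip -c > /mnt/backup/sda.img.gz
rsync -aH /media/3TB1/ /mnt/backup/sdb/
And to restore:
gunzip -c /mnt/backup/sda.img.gz | dd of=/dev/sda bs=1M
rsync -aH /mnt/backup/sdb/ /media/3TB1/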
A backup of that size will take a long time to copy over the internet in any case - rsync, cp, dd, etc.; the time taken to copy depends on your internet speed.
In my opinion, rsync is the way to go, but if you're not willing to wait that long for the download to complete (I wouldn't either), I highly suggest backing your disk up on another remote server, unless you don't plan on restoring it later, since uploading would be a pain too (especially on ADSL).
You have a few options:
Ask your data center for disk redundancy.
A cheap and strongly discouraged solution is to back up your most important data on a file-sharing web service, e.g. Dropbox (as far as I remember they had a shell API for many tasks, including uploading files, which can be used for automatic backups).
Wait for the download to finish.
Go with @Alfe's solution, which is pretty neat in my opinion.

Amazon EC2: Unable to unmount and remove EBS drive file system

I have created an EBS volume, attached it to the instance, and created a file system using mkfs.ext3.
Now I want to unmount and delete the drive. I've tried many things, but nothing seems to work. Although I am able to detach the drive from the instance and delete it using the EC2 console,
when I check the partitions using df -hk it still shows the drive.
[ec2-user@XXXXXXXXXXXXXX ~]$ df -hk
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 1075740 7097356 14% /
tmpfs 304368 0 304368 0% /dev/shm
/dev/xvdf 30963708 176196 29214648 1% /media/newdrive
Moreover, when I try to use any other command like "fdisk -l", or try to browse the drive's folders, the PuTTY session hangs.
I am new to the EC2 cloud and also to Linux.
How about this?
You need to run it as:
sudo umount /dev/xvdf
umount -dRf /media/newdrive
umount needs the mount point, not the device, i.e. /media/newdrive rather than /dev/xvdf.
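If umount complains that the target is busy, check which processes are still using the mount point before detaching the volume in the EC2 console; detaching a volume that is still mounted is typically what makes commands like fdisk -l hang afterwards. A sketch:
sudo fuser -vm /media/newdrive
sudo lsof +D /media/newdrive
sudo umount /media/newdrive
Only detach and delete the volume in the console after the unmount has succeeded.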

Understanding Linux partitions with Amazon EC2

I am relatively new to Linux. In one of our projects, we use an Amazon EC2 instance for processing some files. We upload the files to S3 after processing. The EC2 instance is booted using an existing AMI.
Recently I got a "no space left on device" error, so processing of the files was halted. I cleaned up some older files and the processing continued.
Now when I look at the available space using df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 5.7G 3.7G 61% /
none 3.7G 0 3.7G 0% /dev/shm
/dev/xvdb 414G 199M 393G 1% /mnt
/dev/xvdc 414G 199M 393G 1% /data
I can see my files are affecting only /dev/xvda1.
I have the following queries:
What is the use of the other partitions when I can see my files only affecting /dev/xvda1?
It looks like we are only using 10 GB of space effectively and the rest is being wasted. How can I use the other space? Can I move some disk space to /dev/xvda1, or directly store files in the other areas?
As you can see from the output of df -h, there are two large partitions mounted on /mnt and /data respectively. I suggest that you use those partitions by processing the files in one of those directories. If you cannot move where the processing happens for some reason, you can remount the partitions in the appropriate place.
If for example your files are processed in the directory /var/mydir and you cannot change that, do the following (as root):
umount /mnt
mount /dev/xvdb /var/mydir
You can use the other partition as well of course if you prefer that.
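If you would rather leave /mnt and /data mounted where they are, another option is to move just the processing directory onto one of the big partitions and bind-mount it back (a sketch; /var/mydir is the example directory from above):
sudo mkdir -p /data/mydir
sudo rsync -a /var/mydir/ /data/mydir/
sudo mount --bind /data/mydir /var/mydir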
