How do I change the filesystem of my 64GB USB from FAT32 to anything which allows me to put a 35GB file from my x86_64 Linux machine onto the USB?

'uname -a' on my machine gives:
Linux ct-lt-966 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
Currently the filesystem of my USB is MS-DOS 'FAT32', which has a 4 GiB maximum size for individual files. I want to change this filesystem to something else which does not have such a limit. (I am trying to put a 35GB file onto a 64GB USB, and I believe most modern filesystems have per-file limits far larger than that.)
I have not found it clear what choices of USB filesystem I have. I tried to change the filesystem to 'NTFS', but I could not install or locate 'mkfs.ntfs' or even 'ntfsprogs'. (I also tried installing with 'pacman' and 'yum', but apparently 'pacman' is meant for Arch-based systems, and I could not get access to 'yum-config-manager' in order to enable any repos.)
So to conclude, with my minimal prowess I am just looking for any way to change the filesystem of my 64GB USB to anything which will accept a 35GB file from my machine.
Thanks
Edit 1: Just planning to use the USB on this Linux machine, not Windows.

If there's nothing on the stick you want, or it's safe to delete it then basically:
delete the current FAT32 partition from the stick
add a new partition, utilising the full size of the device
create an ext4 filesystem on the new partition
PLEASE BE CAREFUL WITH THIS PROCESS: selecting the wrong device can obliterate a disk you need, such as your $HOME or your root OS.
All the following is from memory and untested: I don't have a USB stick available right now to test fully.
Start by tailing the syslog in a console so you can see where the stick gets mounted when you plug it in (hopefully it automounts, which it should if you're running a desktop-based Linux; possibly not if it's a server).
sudo tail -f /var/log/syslog
(it might be /var/log/messages depending on distro)
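On a systemd-based distro (this Debian 9 system included), following the journal or the kernel ring buffer should work just as well, if you prefer:
sudo journalctl -f    # follow the systemd journal
sudo dmesg -w         # follow new kernel messages as the stick is detected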
then plug in the stick. syslog should show it being allocated a device and a mount point. A file manager window may open, depending on your config, if you are in a GUI. For example, you might see it come up as /dev/sdc1 and get mounted at /media/<yourusername>/USBKEY or similar.
Confirm by running lsblk and note the device for the key, e.g.:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 167.7G 0 disk
├─sda1 8:1 0 69.9G 0 part /
└─sda2 8:2 0 97.9G 0 part /home
sdb 8:16 0 149.1G 0 disk
└─sdb1 8:17 0 149.1G 0 part /mnt/snapshots
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part /storage
sdd 8:48 0 465.8G 0 disk
└─sdd1 8:49 0 465.8G 0 part /mnt/backup
sr0 11:0 1 1024M 0 rom
Unmount the stick (if it mounted) but leave it plugged in. Assuming again your device is at /dev/sdc1...
umount /dev/sdc1
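If you want to double-check that nothing from the stick is still mounted before repartitioning (assuming the same /dev/sdc device as above):
lsblk /dev/sdc
The MOUNTPOINT column for sdc1 should now be empty.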
Now run cfdisk in a terminal if you have it (friendlier) or fdisk if not, passing it the device related to your USB stick, without the partition number.
man cfdisk
sudo cfdisk /dev/sdc
This should show the current FAT32 partition. Delete it, then create a new partition of type 'Linux', following the defaults for start and end blocks which will be suggested in such a way as to fill the available space.
When done, select the option to Write the changes. Again, DOUBLE AND TRIPLE CHECK you have the right device or you will most likely blow away your main disk.
Once the changes are written, you can create the ext4 file system:
sudo mkfs.ext4 /dev/sdc1
And after it completes, you should be able to re-plug your stick and find that it remounts, this time with a file system that can take your large files.
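If it doesn't automount, a manual mount along these lines should work; the /mnt/usb path is just an example, and the chown is only needed because a fresh ext4 filesystem is owned by root:
sudo mkdir -p /mnt/usb
sudo mount /dev/sdc1 /mnt/usb
sudo chown $USER: /mnt/usb    # let your user write to the new filesystem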
This isn't the only way to achieve this, but it's probably the least fiddly. At the risk of repeating myself, don't make a mistake with the device identifiers. If you're unsure, ask.
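For completeness, the partitioning step can also be done non-interactively with parted instead of cfdisk; this is untested here and the same device warnings apply:
sudo parted --script /dev/sdc mklabel msdos mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdc1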

Related

Changing EC2 Instance Type modified EBS root device UUID and made disk read only. How to resolve?

I had a fully working Amazon Linux 2 instance, running on t2.small instance type. I wanted to try changing the instance to a t2.medium type to test. As I have done in the past, I simply shut down the instance, changed the type, and then restarted the instance.
After the restart, Apache was down and my sites were unreachable. I was able to log in to the instance, and when trying to start Apache I discovered that the root drive was now read-only, which prevented it from starting. Through some troubleshooting I was able to get the drive remounted and things running as normal, but every time I restart the instance it goes back to read-only and I have to perform the same fix to get it back to normal. I believe it's an issue with my /etc/fstab root device UUID not matching the current root device UUID. I never changed any of the attached EBS volumes, so I'm not sure how the change occurred.
Some relevant info:
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
To discover the UUID mismatch/fix, I performed the following:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 50G 0 disk
└─xvda1 202:1 0 50G 0 part /
xvdb 202:16 0 50G 0 disk
xvdf 202:80 0 50G 0 disk
└─xvdf1 202:81 0 50G 0 part
$ sudo blkid
/dev/xvda1: LABEL="/" UUID="2a7884f1-a23b-49a0-8693-ae82c155e5af" TYPE="xfs" PARTLABEL="Linux" PARTUUID="4d1e3134-c9e4-456d-a253-374c91394e99"
/dev/xvdf1: LABEL="/" UUID="a8346192-0f62-444c-9cd0-655ed0d49a8b" TYPE="ext4" PARTLABEL="Linux" PARTUUID="2688b30d-29ef-424f-9196-05ec7e4a0d80"
I had read that a possible fix would be to perform the following:
$ sudo mount -o remount,rw /
mount: /: can't find UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af.
Obviously, that didn't work. So I looked at my /etc/fstab:
#
UUID=-1a7884f1-a23b-49a0-8693-ae82c155e5af / xfs defaults,noatime 1 1
/swapfile swap swap defaults 0 0
Seeing this mismatch, I tried:
sudo mount -o remount,nouuid /
This worked: it made the root filesystem writable again and I was able to get services back up and running.
So this is how I've come to believe that the problem has to do with the UUID mismatch in fstab.
My Questions:
Should I change the entry in /etc/fstab to match the current UUID (2a7884f1-a23b-49a0-8693-ae82c155e5af)?
Any idea why this happened and how I can prevent it from happening in the future?
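(For reference, if the fstab route is what's needed, the corrected entry based on the blkid output above would presumably read as follows; verify the UUID with sudo blkid and keep a backup of /etc/fstab before editing.)
UUID=2a7884f1-a23b-49a0-8693-ae82c155e5af / xfs defaults,noatime 1 1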

Determine WWID of LUN from mapped drive on Linux

I am trying to establish if there is an easier method to determine the WWID of an iSCSI LUN connected with a Linux Filesystem or mountpoint.
A frequent problem we have is where a user requests a disk expansion on a RHEL system with multiple iSCSI LUNs connected. A user will provide us with the path their LUN is mounted on, and from this we need to establish which LUN they are referring to so that we can make the appropriate increase on the storage side.
Currently we run df -h to get the Filesystem name, pvdisplay to get the VG Name, and then multipath -v4 -ll | grep "^mpath" to get the WWID. This feels messy, long-winded and prone to inconsistent interpretation.
Is there a more concise command we can run to determine the WWID of the device?
Here's one approach. The output format leaves something to be desired - it's more suited to eyeballs than programs.
lsblk understands the mapping of a mounted filesystem down through the LVM and multipath layers to the underlying block devices. In the output below, /dev/sdc is my iSCSI-attached LUN, attached via one path to the target. It contains the volume group vg1 and a logical volume lv1. /mnt/tmp is where I have the filesystem on the LV mounted.
$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 128M 0 disk
└─360a010a0b43e87ab1962194c4008dc35 253:4 0 128M 0 mpath
└─vg1-lv1 253:3 0 124M 0 lvm /mnt/tmp
At the second level is the SCSI WWN (360a010...), courtesy of multipathd.
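As a possible shortcut, recent lsblk versions can print a WWN column directly, so starting from the mountpoint something like the following may do the whole lookup in two commands (untested against your particular multipath setup):
findmnt -n -o SOURCE /mnt/tmp                                # resolve the mountpoint to its device-mapper node
sudo lsblk -s -o NAME,TYPE,SIZE,WWN /dev/mapper/vg1-lv1      # walk the stack downwards (-s) and show the WWN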

Options for storing many small images for fast batch access on Google Cloud?

We have a few datasets of small images, where each image is about 100KB and there are about 50K images per dataset (around 5GB per dataset). We typically use these datasets to batch-load each image incrementally into the memory of a Google VM instance in order to perform machine learning studies. This is done several times a day.
Currently, a few of us each have our own Google Persistent Disk attached to the VM, with the datasets replicated on each. This is not ideal since they are pricey; however, data access is very fast, which allows us to iterate on our studies fairly rapidly. We don't share one disk because of the inconvenience of having to manage read/write settings with Google disks when sharing.
Is there an alternative Google Cloud option to handle this use case? Google Cloud Storage buckets are too slow for us, since this workload reads many small files.
If your main interest is rapid I/O, your best bet is using an SSD, for obvious reasons. What I don't understand is why you don't want to share one disk. You can have one SSD attached to one of your instances as R/W for loading and modifying your datasets, and mount it read-only on the instances that need to fetch the data.
I'm not sure how much faster this solution will be compared to using a bucket, though. I guess you are aware that gsutil has an option for multithreaded transfers, which can significantly increase data transfer speed, especially when transferring a lot of small files? The flag is -m:
-m Causes supported operations (acl ch, acl set, cp, mv, rm, rsync,
and setmeta) to run in parallel. This can significantly improve
performance if you are performing operations on a large number of
files over a reasonably fast network connection.
gsutil performs the specified operation using a combination of
multi-threading and multi-processing, using a number of threads
and processors determined by the parallel_thread_count and
parallel_process_count values set in the boto configuration
file. You might want to experiment with these values, as the
best values can vary based on a number of factors, including
network speed, number of CPUs, and available memory.
Using the -m option may make your performance worse if you
are using a slower network, such as the typical network speeds
offered by non-business home network plans. It can also make
your performance worse for cases that perform all operations
locally (e.g., gsutil rsync, where both source and destination
URLs are on the local disk), because it can "thrash" your local
disk.
If a download or upload operation using parallel transfer fails
before the entire transfer is complete (e.g. failing after 300 of
1000 files have been transferred), you will need to restart the
entire transfer.
Also, although most commands will normally fail upon encountering
an error when the -m flag is disabled, all commands will
continue to try all operations when -m is enabled with multiple
threads or processes, and the number of failed operations (if any)
will be reported at the end of the command's execution.
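As a concrete usage example, copying a whole dataset prefix in parallel would look something like this (the bucket and paths are placeholders):
gsutil -m cp -r gs://your-bucket/datasets/images ./images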
If you want to go with one instance holding the R/W SSD and multiple read-only clients, see below:
One option is to set up an NFS share on your SSD: one instance will act as the NFS server with R/W rights and the rest will have read-only access. I will be using Ubuntu 16.04 but the process is similar in all distros:
1 - Install the required packages on both server and clients:
Server: sudo apt install nfs-kernel-server
Client: sudo apt install nfs-common
2 - Mount the SSD disk on the server (after formatting it with the filesystem you want to use):
Server:
jordim@instance-5:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 50G 0 disk <--- My extra SSD disk
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /
jordim@instance-5:~$ sudo fdisk /dev/sdb
(I will create a single primary ext4 partition)
jordim@instance-5:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 50G 0 disk
└─sdb1 8:17 0 50G 0 part <- Newly created partition
sda 8:0 0 10G 0 disk
└─sda1 8:1 0 10G 0 part /
jordim@instance-5:~$ sudo mkfs.ext4 /dev/sdb1
(...)
jordim@instance-5:~$ sudo mkdir /mount
jordim@instance-5:~$ sudo mount /dev/sdb1 /mount/
Make a dir for your NFS share folder:
jordim@instance-5:/mount$ sudo mkdir share
Now configure the exports on your server. Add the folder to share and the private IPs of the clients. You can also tweak permissions here: use "ro" for read-only or "rw" for read-write access.
jordim@instance-5:/mount$ sudo vim /etc/exports
(inside the exports file, note the IP is the private IP of the client instance):
/mount/share 10.142.0.5(ro,sync,no_subtree_check)
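If you have several client instances, the same export line can simply list more of them (the extra IP here is made up for illustration):
/mount/share 10.142.0.5(ro,sync,no_subtree_check) 10.142.0.7(ro,sync,no_subtree_check)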
Now start the nfs service on the server:
root@instance-5:/mount# systemctl start nfs-server
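To confirm the share is actually being exported (and to reload /etc/exports after any later edits), the exportfs tool should help:
sudo exportfs -v     # list currently exported directories and their options
sudo exportfs -ra    # re-export everything after editing /etc/exports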
Now to create the mountpoint on the client:
jordim@instance-4:~$ sudo mkdir -p /nfs/share
And mount the folder:
jordim@instance-4:~$ sudo mount 10.142.0.6:/mount/share /nfs/share
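If you want the client mount to survive reboots, an /etc/fstab entry on the client along these lines should do it (same IP and paths as in this example):
10.142.0.6:/mount/share /nfs/share nfs ro 0 0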
Now let's test it:
Server:
jordim@instance-5:/mount/share$ touch test
Client:
jordim@instance-4:/nfs/share$ ls
test
Also, see the mounts:
jordim@instance-4:/nfs/share$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 9.9M 360M 3% /run
/dev/sda1 9.7G 1.5G 8.2G 16% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 370M 0 370M 0% /run/user/1001
10.142.0.6:/mount/share 50G 52M 47G 1% /nfs/share
There you go: now you have just one instance with an R/W disk and as many clients as you want with read-only access.

Filesystem for a partition goes missing EC2 reboot

I created a d2.xlarge EC2 instance on AWS which returns the following output:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
`-xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 1.8T 0 disk
xvdc 202:32 0 1.8T 0 disk
xvdd 202:48 0 1.8T 0 disk
The default /etc/fstab looks like this
LABEL=cloudimg-rootfs / ext4 defaults,discard 0 0
/dev/xvdb /mnt auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Now, I make an EXT4 filesystem for xvdc
$ sudo mkfs -t ext4 /dev/xvdc
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 488375808 4k blocks and 122101760 inodes
Filesystem UUID: 2391499d-c66a-442f-b9ff-a994be3111f8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information:
done
blkid returns a UID for the filesystem
$ sudo blkid /dev/xvdc
/dev/xvdc: UUID="2391499d-c66a-442f-b9ff-a994be3111f8" TYPE="ext4"
Then, I mount it on /mnt5
$ sudo mkdir -p /mnt5
$ sudo mount /dev/xvdc /mnt5
It gets successfully mounted. Up to this point, things work fine.
Now, I reboot the machine (first stop it and then start it) and then SSH into the machine.
I do
$ sudo blkid /dev/xvdc
It returns nothing. Where did the filesystem I created before the reboot go? I had assumed that a filesystem, once created, would persist across a reboot cycle.
Am I missing something to mount a partition on an AWS EC2 instance?
I followed this http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html and it does not seem to work as described above
You need to read up on EC2 Ephemeral Instance Store volumes. When you stop an instance with this type of volume the data on the volume is lost. You can reboot by performing a reboot/restart operation, but if you do a stop followed later by a start the data is lost. A stop followed by a start is not considered a "reboot" on EC2. When you stop an instance it is completely shut down and when you start it back later it is basically recreated on different backing hardware.
In other words what you describe isn't an issue, it is expected behavior. You need to be very aware of how these volumes work before depending on them.
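If you want to confirm which of the attached devices are instance store rather than EBS, the instance metadata service lists the block device mapping; for example, from within the instance:
curl http://169.254.169.254/latest/meta-data/block-device-mapping/
Entries named ephemeral0, ephemeral1, and so on are instance store volumes whose contents will not survive a stop/start.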

How To Mount A Hard Disk Of File-System Type "devtmpfs"

I'm trying to recover some data from a hard drive extracted from a broken laptop, and I'm having problems mounting the disk on my current system (Linux Mint). The hard disk I'm recovering from ran Debian. Simply put, I'm confused as to how I can mount the hard drive to access the files; it's not as simple as any other mount I've done. The following details the struggles and information I've encountered.
I get the following outputs when trying to mount the hard drive with different file-system tags. I should add that the file-system type isn't automatically detected when using auto, and "sdb" is definitely the correct device for the disk (taken from dmesg).
$ mount /dev/sdb /mnt/usb -t ntfs
NTFS signature is missing.
Failed to mount '/dev/sdb': Invalid argument
The device '/dev/sdb' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
The following returns the same message when all other common file-system tags are used:
$ sudo mount /dev/sdb usb -t ext2
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
The results from these commands led me to believe that there was an issue with the hard disk and its partitions; however, fdisk showed that its partitions do seem to be valid and correct:
$ sudo fdisk /dev/sdb -l
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002da94
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 475920383 237959168 83 Linux
/dev/sdb2 475922430 488396799 6237185 5 Extended
/dev/sdb5 475922432 488396799 6237184 82 Linux swap / Solaris
I then decided to try to verify the file-system type of the hard drive, which seems to be "devtmpfs", based on the output of the following df command:
$ df /dev/sdb -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 1014764 4 1014760 1% /dev
And so finally, I mount the hard drive using -t devtmpfs, which succeeds, but I'm left with a confusing file system, very unlike what I would expect from what was a standard Debian setup.
It contains file folders such as "block","bus","char","disk","dri","mapper"... and files like "sda1","sdb","sdb1","tty","vcs".
I'm totally stumped as to how I should progress, and I'm pretty convinced the hard disk isn't broken and that I'm just mounting it incorrectly. How can I successfully mount the disk so I can access my files? Any help would be greatly appreciated.
Ok, you are trying to mount the entire disk instead of individual partitions, which is why you are getting the error. In short the command you need is:
mount /dev/sdb1 /mnt/usb
The file /dev/sdb references the entire disk as a block device. This includes the partition table at the start, which is why mount can't find a filesystem there. The file /dev/sdb1 references the first partition, which is where your filesystem will be. From the looks of your fdisk output, this is not an NTFS partition, since NTFS is a Windows filesystem and the partition is marked as Linux (most likely you will have ext4, unless you specifically set up something different).
To add a quick explanation of devtmpfs: this is a special filesystem which contains these device files, which are managed by udev. You can google both for more information, but by now I'm sure you know it's not what you are looking for.
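If you want to confirm what is actually on that partition before mounting, either of these should tell you (run against the partition, not the whole disk):
sudo blkid /dev/sdb1     # report the detected filesystem type and UUID
sudo file -s /dev/sdb1   # inspect the raw partition contents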
