'cat /proc/swaps' returns nothing [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
Please do not waste any more of your time on this question... I ended up deleting the whole VM and creating another. The time it took me to do this was less than the time it would take to fix the issue. I have a couple of SSDs in RAID mode.
Thank you to all those who tried to troubleshoot the issue!
I am having a problem with Ubuntu not showing active swap spaces when I run the command cat /proc/swaps. Here is a list of commands I ran. I even added a new swap space (file: /swapfile1) just to make sure that there is at least one swap space, but I still get nothing.
hebbo@ubuntu-12-lts:~$ sudo fdisk -l
[sudo] password for hebbo:
Disk /dev/sda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders, total 52428800 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e3a7a
Device Boot Start End Blocks Id System
/dev/sda1 * 46569472 52426751 2928640 82 Linux swap / Solaris
/dev/sda2 2046 46567423 23282689 5 Extended
/dev/sda5 2048 46567423 23282688 83 Linux
Partition table entries are not in disk order
hebbo@ubuntu-12-lts:~$ sudo su
root@ubuntu-12-lts:/home/hebbo# cat /proc/swaps
Filename Type Size Used Priority
root@ubuntu-12-lts:/home/hebbo# dd if=/dev/zero of=/swapfile1 bs=1024 count=524288
524288+0 records in
524288+0 records out
536870912 bytes (537 MB) copied, 1.18755 s, 452 MB/s
root@ubuntu-12-lts:/home/hebbo# mkswap /swapfile1
Setting up swapspace version 1, size = 524284 KiB
no label, UUID=cb846612-5f27-428f-9f83-bbe24b410a78
root@ubuntu-12-lts:/home/hebbo# chown root:root /swapfile1
root@ubuntu-12-lts:/home/hebbo# chmod 0600 /swapfile1
root@ubuntu-12-lts:/home/hebbo# swapon /swapfile1
root@ubuntu-12-lts:/home/hebbo# cat /proc/swaps
Filename Type Size Used Priority
root@ubuntu-12-lts:/home/hebbo#
Any idea how to fix this?
This is Ubuntu 12.04 LTS running kernel 3.9.0 in a VMware VM.
Thanks in advance!

To activate /swapfile1 after a Linux system reboot, add an entry to the /etc/fstab file. Open the file with a text editor such as vi:
# vi /etc/fstab
Add the following line:
/swapfile1 swap swap defaults 0 0
Save and close the file. The next time Linux comes up after a reboot, it enables the new swap file for you automatically.
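If you want to confirm the entry without waiting for a reboot, a quick check (a minimal sketch, not part of the original answer) is to activate everything listed in /etc/fstab and then look at the swap summary:
sudo swapon -a       # activate all swap areas listed in /etc/fstab
swapon -s            # summary; /swapfile1 should now be listed
cat /proc/swaps      # same information, straight from the kernel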
Have a look here for more info.

I just tried it and it works on my box.
Linux fileserver 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:19:35 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
ortang@fileserver:~$ cat /proc/swaps
Filename Type Size Used Priority
/dev/dm-2 partition 4194300 0 -1
ortang@fileserver:~$ sudo su
root@fileserver:/home/ortang# dd if=/dev/zero of=/swapfile bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB) copied, 0.695721 s, 772 MB/s
root@fileserver:/home/ortang# chmod 600 /swapfile
root@fileserver:/home/ortang# mkswap /swapfile
Setting up swapspace version 1, size = 524284 KiB
no label, UUID=63cdcf3d-ba03-42ce-b598-15b6aa3ca67d
root@fileserver:/home/ortang# swapon /swapfile
root@fileserver:/home/ortang# cat /proc/swaps
Filename Type Size Used Priority
/dev/dm-2 partition 4194300 0 -1
/swapfile file 524284 0 -2
One thing I can imagine as to why it works on my box is that I already have a working swap partition, and it seems you don't.
It could also be caused by the kernel you use; 3.9.0 is not the regular 12.04.3 LTS kernel. Have you built the kernel yourself?
What's the output of
grep CONFIG_SWAP /boot/config-`uname -r`
or
zcat /proc/config.gz | grep CONFIG_SWAP
Is swap enabled in your kernel?
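For reference, on a kernel built with swap support the check above should print something like the following (a typical example, not output captured from the asker's machine):
CONFIG_SWAP=y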

I ended up deleting the whole VM and creating another. The time it took me to do this was less than the time it would take to fix the issue. I have a couple of SSDs in RAID mode. And I already had all the downloads on the same host machine. All in all, ~7 minutes.
Thanks to all those who helped troubleshoot the issue.

Related

mkfs.vfat: unable to open {partition}: No such file or directory (command succeeds, but throws this error and blocks rest of script)

Update: I got this working but am still not 100% sure why. I've appended the fully and consistently working script to the end for reference.
I'm trying to script a series of disk partition commands using sgdisk and mkfs.vfat. I'm working from a Live USB (NixOS 21pre), have a blank 1TB M.2 SSD, and am creating a 1GB EFI boot partition, and a 999GB ZFS partition.
Everything works up until I try to create a FAT32 filesystem on the EFI partition, using mkfs.vfat, where I get the error in the title.
However, the odd thing is, the mkfs.vfat command succeeds, but throws that error anyway and blocks the rest of the script. Any idea why it's doing this and how to fix it?
Starting with an unformatted 1TB M.2 SSD:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Error: /dev/sda: unrecognised disk label
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
Script:
$ ls
total 4
drwxr-xr-x 2 nixos users 60 May 18 20:25 .
drwx------ 17 nixos users 360 May 18 15:24 ..
-rwxr-xr-x 1 nixos users 2225 May 18 19:59 partition.sh
$ cat partition.sh
#!/usr/bin/env bash
#make gpt partition table and boot & rpool partitions for ZFS on 1TB M.2 SSD
#error handling on
set -e
#wipe the disk with -Z, then create two partitions, a 1GB (954MiB) EFI boot partition, and a ZFS root partition consisting of the rest of the drive, then print the results
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
sgdisk -Z $DISK
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efi $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zroot $DISK
sgdisk -p /dev/sda
#make a FAT32 filesystem on the EFI partition, then mount it
#mkfs.vfat -F 32 ${DISK}-part1 (troubleshooting with hardcoded version below)
mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkdir -p /mnt/boot
mount ${DISK}-part1 /mnt/boot
Result (everything fine until mkfs.vfat, which throws error and blocks the rest of the script):
$ sudo sh partition.sh
GPT data structures destroyed! You may now partition the disk using fdisk or
other utilities.
Creating new GPT entries in memory.
Setting name!
partNum is 0
The operation has completed successfully.
Setting name!
partNum is 1
The operation has completed successfully.
Disk /dev/sda: 1953525168 sectors, 931.5 GiB
Model: WDC WDS100T2B0B-
Sector size (logical/physical): 512/512 bytes
Disk identifier (GUID): 77ED6A41-E722-4FFB-92EC-975A37DBCB97
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1955839 954.0 MiB EF00 efi
2 1955840 1953525134 930.6 GiB BF01 zroot
mkfs.fat 4.1 (2017-01-24)
mkfs.vfat: unable to open /dev/disk/by-id/wwn-0x5001b448b94488f8-part1: No such file or directory
Verifying the partitioning and FAT32 creation commands worked:
$ sudo parted /dev/disk/by-id/wwn-0x5001b448b94488f8 print
Model: ATA WDC WDS100T2B0B- (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 1001MB 1000MB fat32 efi boot, esp
2 1001MB 1000GB 999GB zroot
FWIW, the same command works on the command line with no error:
$ sudo mkfs.vfat -F 32 /dev/disk/by-id/wwn-0x5001b448b94488f8-part1
mkfs.fat 4.1 (2017-01-24)
Success. But why is there no error on the command line, yet an error in the script?
Update: fully and consistently working script:
#!/usr/bin/env bash
#make UEFI (GPT) partition table and two partitions (FAT32 boot and ZFS rpool) on 1TB M.2 SSD
#error handling on
set -e
#vars
DISK=/dev/disk/by-id/wwn-0x5001b448b94488f8
POOL='rpool'
#0. if /mnt/boot is mounted, umount it; if any NixOS filesystems are mounted, unmount them
if mount -l | grep -q '/mnt/boot'; then
umount -f /mnt/boot
fi
if mount -l | grep -q '/mnt/nix'; then
umount -fR /mnt
fi
#1. if a zfs pool exists, delete it
if zpool list | grep -q $POOL; then
zfs unmount -a
zpool export $POOL
zpool destroy -f $POOL
fi
#2. wipe the disk
sgdisk -Z $DISK
wipefs -a $DISK
#3. create two partitions, a 1GB (954MiB) EFI boot partition, and a ZFS root partition consisting of the rest of the drive, then print the results
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efiboot $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zfsroot $DISK
sgdisk -p /dev/sda
#4. notify the OS of partition updates, and print partition info
partprobe
parted ${DISK} print
#5. make a FAT32 filesystem on the EFI boot partition
mkfs.vfat -F 32 ${DISK}-part1
#6. notify the OS of partition updates, and print new partition info
partprobe
parted ${DISK} print
#mount the partitions in the nixos-zfs-pool-dataset-create.sh script. Make sure to first mount the ZFS root dataset on /mnt before mounting any subdirectories of /mnt.
It may take time for the kernel to be notified about partition changes. Try calling partprobe before mkfs to request that the kernel re-read the partition tables.
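A minimal sketch of that fix applied to the script above (udevadm settle is my own hedged addition, on the assumption that the /dev/disk/by-id symlink may also need a moment to appear; the answer itself only mentions partprobe):
sgdisk -n 1:0:+954M -t 1:EF00 -c 1:efi $DISK
sgdisk -n 2:0:0 -t 2:BF01 -c 2:zroot $DISK
partprobe $DISK    # ask the kernel to re-read the partition table
udevadm settle     # wait for udev to (re)create the -part1 symlink (assumption)
mkfs.vfat -F 32 ${DISK}-part1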

Simulate mounted volume errors to cause read only

A few days ago we encountered an unexpected error where one of the mounted drives on our Red Hat Linux machine became read-only. The issue was caused by a network outage in the datacenter.
Now I need to see if I can reproduce the same behavior, where the drive gets re-mounted as read-only while the application is running.
I tried to remount it as read-only, but that didn't work because there are open files (logs being written).
Is there a way to temporarily cause the read-only condition if I have root access to the machine (but no access to the hypervisor)?
That volume is mounted via /etc/fstab. Here is the record:
UUID=abfe2bbb-a8b6-4ae0-b8da-727cc788838f / ext4 defaults 1 1
UUID=8c828be6-bf54-4fe6-b68a-eec863d80133 /opt/sunapp ext4 rw 0 2
Here is the output of a few commands that show details about our mounted drives. I can add more details as needed.
Output of fdisk -l
Disk /dev/vda: 268.4 GB, 268435456000 bytes, 524288000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0008ba5f
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 524287966 262142959+ 83 Linux
Disk /dev/vdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Output of lsblk command:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 80G 0 disk
└─vda1 253:1 0 80G 0 part /
vdb 253:16 0 250G 0 disk /opt/sunup
Output of blkid command:
/dev/vda1: UUID="abfe2bbb-a8b6-4ae0-b8da-727cc788838f" TYPE="ext4"
/dev/sr0: UUID="2017-11-13-13-33-07-00" LABEL="config-2" TYPE="iso9660"
/dev/vdb: UUID="8c828be6-bf54-4fe6-b68a-eec863d80133" TYPE="ext4"
Output of parted -l command:
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0
has been opened read-only.
Error: /dev/sr0: unrecognised disk label
Model: QEMU QEMU DVD-ROM (scsi)
Disk /dev/sr0: 461kB
Sector size (logical/physical): 2048B/2048B
Partition Table: unknown
Disk Flags:
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 268GB 268GB primary ext4 boot
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 42.9GB 42.9GB ext4
Yes, you can do it. But the method proposed here may cause data loss, so use it only for testing.
Supposing you have /dev/vdb mounted as /opt/sunapp, do this:
First, unmount it. You may need to shut down any applications that are using it.
Configure a loop device backed by /dev/vdb:
losetup /dev/loop0 /dev/vdb
Then, mount /dev/loop0 instead of /dev/vdb:
mount /dev/loop0 /opt/sunapp -o rw,errors=remount-ro
Now, you can run your application. When it is time to make /opt/sunapp read-only, use this command:
blockdev --setro /dev/vdb
After that, attempts to write to /dev/loop0 will result in I/O errors. As soon as the file system driver detects this, it will remount the file system as read-only.
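To confirm that the remount actually happened, a quick check along these lines should work (my own hedged addition, not part of the original answer):
touch /opt/sunapp/ro-test        # should now fail with an I/O or read-only file system error
grep /opt/sunapp /proc/mounts    # the mount options should show "ro" after the remount
dmesg | tail                     # the kernel usually logs both the I/O error and the remount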
To restore everything back, you will need to unmount /opt/sunapp, detach the loop device, and make /dev/vdb writable again:
umount /opt/sunapp
losetup -d /dev/loop0
blockdev --setrw /dev/vdb
When I had some issues like corrupted disks, I used ntfsfix.
Please see if these commands solve the problem.
sudo ntfsfix /dev/vda
sudo ntfsfix /dev/vdb

pvcreate failing to create PV. Device not found /dev/sdxy (or ignored by filtering)

I have an oVirt installation with CentOS Linux release 7.3.1611.
I want to add a new drive (sdb) to the oVirt volume group to work with VMs.
Here is the result of fdisk on the drive:
[root@host1 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them. Be careful before using the write command.
Command (m for help): p
Disk /dev/sdb: 300.1 GB, 300069052416 bytes, 586072368 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7a33815f
Device Boot Start End Blocks Id System
/dev/sdb1 2048 586072367 293035160 8e Linux LVM
The partitions show up in /proc/partitions:
[root@host1 ~]# cat /proc/partitions
major minor #blocks name
8 0 293036184 sda
8 1 1024 sda1
8 2 1048576 sda2
8 3 53481472 sda3
8 4 1 sda4
8 5 23072768 sda5
8 6 215429120 sda6
8 16 293036184 sdb
8 17 293035160 sdb1
When I execute the command to create the PV with pvcreate /dev/sdb1, the result is:
[root@host1 ~]# pvcreate /dev/sdb1
Device /dev/sdb1 not found (or ignored by filtering).
I have reviewed the file /etc/lvm/lvm.conf for filters, but I do not have any filter that would make LVM discard the drive. I rebooted the computer after trying to create the PV with pvcreate. I researched the error on Google but found no answer.
Thanks. Any help would be appreciated. Manuel
Try editing lvm.conf: uncomment global_filter and set it like this:
global_filter = [ "a|/dev/sdb|" ]
After that, check whether the device has been claimed by multipath:
[root@ovirtnode2 ~]# lsblk /dev/sdb
NAME                                MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sdb                                   8:16   0  200G  0 disk
└─3678da6e715b018f01f1abdb887594aae 253:2    0  200G  0 mpath
If it has, edit the multipath configuration:
vi /etc/multipath.conf
and append the following to multipath.conf:
blacklist {
    wwid 3678da6e715b018f01f1abdb887594aae
}
Then restart multipathd:
service multipathd restart
It worked for me; I had the same problem when trying this on oVirt:
[root@ovirtnode2 ~]# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
[root@ovirtnode2 ~]#
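As a hedged follow-up (my addition, not from the answer): after changing the filter or the multipath blacklist, you can ask LVM to rescan and confirm it now sees the device before retrying pvcreate:
pvscan --cache       # refresh LVM's device cache (on lvmetad-based setups such as CentOS 7)
lvmdiskscan          # /dev/sdb1 should now be listed as a usable device
pvcreate /dev/sdb1   # retry creating the physical volume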

Low disk performance in Ubuntu VM on Azure

I was evaluating Azure, and it appears the Ubuntu VM I created has unexpectedly low disk performance. I noticed this because a database import took much longer compared to another Rackspace VM I'm using. I'm not sure whether there are any important configurations that I missed, or whether I'm just looking at disk performance the wrong way. Here are my tests and the results:
Standard A1 VM (1 core, 1.75GB memory, Ubuntu 12.04 LTS)
sudo hdparm -tT /dev/sdb
Timing cached reads: 6892 MB in 2.00 seconds = 3451.31 MB/sec
Timing buffered disk reads: 40 MB in 3.37 seconds = 11.88 MB/sec
sudo hdparm -t --direct /dev/sdb
Timing O_DIRECT disk reads: 46 MB in 3.74 seconds = 12.29 MB/sec
sudo dd if=/dev/zero of=/mnt/test bs=8k count=200000; sudo rm -f /mnt/test
1638400000 bytes (1.6 GB) copied, 246.32 s, 6.7 MB/s
As comparison, my other VM on Rackspace (4 vCPU, 1GB memory, Ubuntu 12.04 LTS) has the following results:
sudo hdparm -tT /dev/xvda
Timing cached reads: 5960 MB in 1.99 seconds = 2990.32 MB/sec
Timing buffered disk reads: 200 MB in 3.05 seconds = 65.66 MB/sec
sudo hdparm -t --direct /dev/xvda
Timing O_DIRECT disk reads: 162 MB in 3.12 seconds = 52.00 MB/sec
sudo dd if=/dev/zero of=test bs=8k count=200000; sudo rm -f test
1638400000 bytes (1.6 GB) copied, 13.7139 s, 119 MB/s
Although the Azure VM has better cached read performance, its disk reads (both buffered and direct) are quite slow, and disk writes (or copies) are far worse. As Linux VMs on Azure do not have a swap file configured by default, I manually created a 5GB swap file (on /dev/sdb), but it does not seem to help.
Then I did one more round of testing on Azure using a Standard D3 VM (4 cores, 14GB memory, Ubuntu 12.04 LTS). When executing the commands above on /dev/sdb, the performance was amazing, I guess because of the local SSD. However, when I attached an additional disk to that D3 VM and ran the same commands on the newly created /dev/sdc partition (ext4), the results were just as bad as on the A1 instance.
Not sure if this is the best way to test disk performance in Linux, but it is pretty noticeable that the Azure VM is much slower when restoring a database backup. The Microsoft Azure support page suggests that we could ask questions here with the "azure" tag, so... any comments are welcome.
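As a methodological aside (my own hedged addition, not part of the original post): dd from /dev/zero without a sync flag largely measures the page cache rather than the disk, so a more representative write test forces the data out, for example:
sudo dd if=/dev/zero of=/mnt/ddtest bs=1M count=1024 oflag=direct      # bypass the page cache entirely
sudo dd if=/dev/zero of=/mnt/ddtest bs=1M count=1024 conv=fdatasync    # use the cache but include the final flush in the timing
sudo rm -f /mnt/ddtest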
I removed the disk that I attached to the Standard D3 VM earlier, then followed the same process and attached a new one. Somehow the newly attached disk has much better performance, as shown below.
Standard D3 VM (4 cores, 14GB memory, Ubuntu 12.04 LTS)
sudo hdparm -tT /dev/sdc
Timing cached reads: 13054 MB in 1.99 seconds = 6546.15 MB/sec
Timing buffered disk reads: 68 MB in 3.01 seconds = 22.57 MB/sec
sudo hdparm -t --direct /dev/sdc
Timing O_DIRECT disk reads: 98 MB in 3.03 seconds = 32.35 MB/sec
sudo dd if=/dev/zero of=/mnt/test bs=8k count=200000; sudo rm -f /mnt/test
1638400000 bytes (1.6 GB) copied, 1.5689 s, 1.0 GB/s
Not exactly sure why, but my problem no longer exists, so I am closing this question.
D-series VMs are SSD based. Your /mnt or /mnt/resource on an A-series VM would also be SSD based and local to the server, but it is not persistent. Cloud vendors have guidance on setting up striping or RAID 0 (or 10) to increase IOPS. For Azure, I suggest taking a look at this guide, which is written for MySQL but covers everything from the disks up.
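For illustration, a minimal RAID 0 sketch with mdadm, assuming two attached data disks /dev/sdc and /dev/sdd (hypothetical device names, not taken from the post):
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd   # stripe the two data disks
sudo mkfs.ext4 /dev/md0                                                     # put a filesystem on the array
sudo mkdir -p /data && sudo mount /dev/md0 /data                            # mount it; add an /etc/fstab entry to persist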

Can't open /dev/sda2 exclusively. Mounted filesystem?

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sda: 3999.7 GB, 3999688294400 bytes
255 heads, 63 sectors/track, 486267 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 267350 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
/dev/sda2 1 2090 16785120 82 Linux swap / Solaris
/dev/sda3 1 218918 1758456029+ 8e Linux LVM
Partition table entries are not in disk order
Above is my "fdisk -l" output. My current problem is that when I try to run "pvcreate /dev/sda2", it gives me "Can't open /dev/sda2 exclusively. Mounted filesystem?". I have been searching Google for a while now trying to find a way to fix this. There are definitely things I tried from Google, but none of them ended up working.
You're trying to initialize a partition for use by LVM that's currently used by swap.
You should rather run
pvcreate /dev/sda3
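If you really do want to use /dev/sda2 for LVM instead, a hedged sketch (my addition, not from this answer) is to take it out of service as swap first, since an active swap device cannot be opened exclusively:
grep sda2 /proc/swaps    # check whether /dev/sda2 is an active swap area
swapoff /dev/sda2        # disable it (and remove or update its /etc/fstab entry)
pvcreate /dev/sda2       # the device can now be opened exclusively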
I updated to a newer kernel and the problem was resolved in RHEL 6. I had upgraded from 2.6.32-131.x to 2.6.32-431.x.
Check that the disks/partitions you are using are not mounted to any directory on your system.
If they are, umount them and try again.
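For example, a minimal check along these lines (my own hedged addition):
lsblk -o NAME,MOUNTPOINT /dev/sda   # show where each partition is mounted, if anywhere
umount /dev/sda2                    # unmount the partition you want to hand over to LVM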
