How to create swap partition/file on a Yocto distribution - linux

I'm trying to create a swap partition/file on my board where a core-image-minimal has been installed.
The fdisk -l command doesn't show any partitions, so I'm not able to figure out which block device I need to use to create a new partition.
Secondly, launching swapon on a swap file correctly initialized with mkswap raises an "Invalid argument" error saying that the file contains holes, even though I created it using dd.
At this point I'm not sure if I can do something like this since the free output looks like:
              total        used        free      shared  buff/cache   available
Mem:         503304       32108      101108         216      370088      465180
Swap:             0           0           0

To add any partition to your image, you need to modify the wks file that is used for your build.
To get the current wks file, run:
bitbake -e | grep ^WKS_FILE=
Then, look for that file in your layers sources.
In that file you can add a swap partition (example: 1 GB swap):
part swap --ondisk mmcblk0 --size 1024M --label swap --fstype=swap --overhead-factor 1
For a real example, you can see the raspberry-pi machine swap support commit here.
You can use a custom wks file and set it in your custom machine conf file:
WKS_FILE ?= "custom-image.wks"
For detailed info, check the Yocto reference manual section on wks files.
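On the swapon "file contains holes" error: the kernel requires a swap file whose blocks are all physically allocated, so the file must be created by actually writing data. A file created with fallocate, or with dd using seek= (which leaves a sparse hole), triggers exactly that error. A minimal sketch, assuming a 512 MB swap file at /swapfile (path and size are just examples):
dd if=/dev/zero of=/swapfile bs=1M count=512   # write real zeroes, so the file has no holes
chmod 600 /swapfile                            # swap files must not be world-readable
mkswap /swapfile                               # write the swap signature
swapon /swapfile                               # free should now show swap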

Related

Squashed then re-squashed gives a different size?

I extracted a firmware.bin using fmk (firmware mod kit), which gave me 3 files: header.img, rootfs.img and footer.img.
Whenever I cat and repack all the files together into firmware2.bin, it works and upgrades the router.
But when I unsquash rootfs.img using unsquashfs rootfs.img (which extracts into squashfs-root/)
and then squash it again using mksquashfs squashfs-root/ squash_new.img -comp lzma -b 131072 (which is, by the way, the same compression method and block size as the original rootfs.img),
the result is smaller than rootfs.img and the router reports that the upgrade failed.
Here are the sizes of the two files:
squash_new.img (9,945,088 bytes)
rootfs.img (9,945,232 bytes)
Is there a problem with unsquashfs or mksquashfs?
Because when I compared the images in a hex editor, I noticed some entries differ even though I have not changed anything.
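One way to narrow this down, as a sketch, is to compare the two images byte by byte and to dump their superblocks (file names taken from the question):
cmp -l rootfs.img squash_new.img | head   # list the first differing byte offsets
unsquashfs -s rootfs.img                  # print the original superblock info
unsquashfs -s squash_new.img              # compare block size, compression, flags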

How do I change the filesystem of my 64GB USB, from FAT32 to anything which allows me to put a 35GB file from my x86_64 Linux machine onto the USB?

'uname -a' on my machine gives:
Linux ct-lt-966 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
Currently the filesystem of my USB is MS-DOS 'FAT32', which has a 4 GiB maximum size for individual files. I want to change this filesystem to something else which does not have that limit. (I am trying to put a 35GB file onto a 64GB USB, but I believe most USB filesystems do not limit the size of individual files.)
It is not clear to me what choices of USB filesystem I have. I tried to change the filesystem to 'NTFS', but I could not install or locate 'mkfs.ntfs' or even 'ntfsprogs'. (I also tried installing with 'pacman' and 'yum', but apparently 'pacman' requires an Arch-based system, and I could not get access to 'yum-config-manager' in order to enable any repos.)
So to conclude, with my minimal prowess I am just looking for any way to change the filesystem of my 64GB USB to anything which will accept a 35GB file from my machine.
Thanks
Edit 1: Just planning to use the USB on this Linux machine, not Windows.
If there's nothing on the stick you want, or it's safe to delete it then basically:
delete the current FAT32 partition from the stick
add a new partition, utilising the full size of the device
create an ext4 filesystem on the new partition
PLEASE BE CAREFUL WITH THIS PROCESS: selecting the wrong device can obliterate a disk you need, such as your $HOME or your root OS.
All the following is from memory and untested: I don't have a USB stick available right now to test fully.
Start by plugging in the stick while tailing the syslog in a console to see where it gets mounted (hopefully it automounts, which it should if you're running a desktop-based Linux; possibly not if it's a server).
sudo tail -f /var/log/syslog
(it might be /var/log/messages depending on distro)
Then plug in the stick. syslog should show it being allocated a device and a mount point. A file manager window may open depending on your config if you are in a GUI. For example, you might see it show up as /dev/sdc1 and be mounted at /media/<yourusername>/USBKEY or something.
Confirm by running lsblk and note the device for the key, e.g.:
$ lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0 167.7G  0 disk
├─sda1     8:1    0  69.9G  0 part /
└─sda2     8:2    0  97.9G  0 part /home
sdb        8:16   0 149.1G  0 disk
└─sdb1     8:17   0 149.1G  0 part /mnt/snapshots
sdc        8:32   0 931.5G  0 disk
└─sdc1     8:33   0 931.5G  0 part /storage
sdd        8:48   0 465.8G  0 disk
└─sdd1     8:49   0 465.8G  0 part /mnt/backup
sr0       11:0    1  1024M  0 rom
Unmount the stick (if it mounted) but leave it plugged in. Assuming again your device is at /dev/sdc1...
umount /dev/sdc1
Now run cfdisk in a terminal if you have it (friendlier) or fdisk if not, passing it the device related to your USB stick, without the partition number.
man cfdisk
sudo cfdisk /dev/sdc
This should show the current FAT32 partition. Delete it, then create a new partition of type 'Linux', accepting the defaults for start and end blocks, which are suggested so as to fill the available space.
When done, select the option to write the changes. Again, DOUBLE AND TRIPLE CHECK that you have the right device, or you will probably blow away your main disk.
Once the changes are written, you can create the ext4 filesystem:
sudo mkfs.ext4 /dev/sdc1
And after it completes, you should be able to re-plug your stick and find that it remounts, this time with a file system that can take your large files.
This isn't the only way to achieve this, but it's probably the least fiddly. For the sake of repetition, don't make a mistake with the device identifiers. If you're unsure, ask.
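If you prefer a non-interactive route, the same steps can be scripted with parted; a sketch under the same assumption that the stick is /dev/sdc (this is destructive, so verify with lsblk first):
sudo umount /dev/sdc1                                  # unmount if it auto-mounted
sudo parted -s /dev/sdc mklabel msdos                  # new empty partition table
sudo parted -s /dev/sdc mkpart primary ext4 1MiB 100%  # one partition, full size
sudo mkfs.ext4 -L usbdata /dev/sdc1                    # filesystem (label is an example)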

shm_unlink from the shell?

I have a program I'm working on that creates a shared memory object with shm_open. It tries to release the object with shm_unlink, but sometimes a programming error will cause it to crash before it can make that call. In that case, I need to unlink the shared memory object "by hand," and I'd like to be able to do it in a normal shell, without writing any C code, i.e. with normal Linux utilities.
Can it be done? This question seems to say that using unlink(1) on /dev/shm/path_passed_to_shm_open works, but the manpages aren't clear.
Unlinking the file in /dev/shm will delete the shared memory object if no other process has the object mapped.
Unlike SYSV shared memory, which is implemented in the kernel, POSIX shared memory objects are simply "files in disguise".
When you call shm_open and mmap, you can see the following in the process's memory map (using pmap -X):
 Address Perm   Offset Device   Inode Mapping
b7737000 r--s 00000000  00:0d 2267945 test_object
The device major and minor number correspond to the tmpfs mounted at /dev/shm (some systems mount this at /run/shm and symlink /dev/shm to it).
A listing of the folder will show the same inode number:
$ ls -li /dev/shm/
2267945 -rw------- 1 mikel mikel 1 Apr 14 13:36 test_object
Like any other inode, the space is freed only when all references are removed. If we close the only program referencing the object, we see that the memory is still in use:
$ cat /proc/meminfo | grep Shmem
Shmem: 700 kB
Once we remove the last reference (the file in /dev/shm), the space is freed:
$ rm /dev/shm/test_object
$ cat /proc/meminfo | grep Shmem
Shmem: 696 kB
If you're curious, you can look at the corresponding files in the glibc sources. shm_directory.c and shm_directory.h generate the filename as /dev/shm/your_shm_name. The implementations of shm_open and shm_unlink simply open and unlink this file, so it should be easy to see that rm /dev/shm/your_shm_name performs the same operation.
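Putting that together, a cautious shell sequence is to first confirm that nothing still has the object mapped before unlinking it (the object name is taken from the examples above):
fuser -v /dev/shm/your_shm_name    # or: lsof /dev/shm/your_shm_name
rm /dev/shm/your_shm_name          # same effect as shm_unlink("your_shm_name")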

mount: you must specify the filesystem type

I was trying to execute qemu while following the qemu/linaro tutorial,
https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Virtual_ARM_Linux_environment
I was executing the command,
sudo mount -o loop,offset=106496 -t auto vexpress.img /mnt/tmp
mount: you must specify the filesystem type
So I ran fdisk on the image file and got the following:
       Device Boot   Start     End  Blocks Id System
vexpress.img1 *         63  106494   53216  e W95 FAT16 (LBA)
vexpress.img2       106496 6291455 3092480 83 Linux
The filesystem is Linux according to the fdisk command, but I get an error:
sudo mount -o loop,offset=106496 -t Linux vexpress.img /mnt/tmp
mount: unknown filesystem type 'Linux'
Kindly help.
You correctly decided to mount the particular partition by specifying its offset, but the offset parameter is in bytes, while fdisk shows the offset in sectors (the sector size is shown above the partition list, usually 512 bytes). For a sector size of 512 the command would be:
sudo mount -o loop,offset=$((106496*512)) -t auto vexpress.img /mnt/tmp
If the automatic filesystem type detection still does not work, there is another problem. Linux is not really a filesystem type: in the partition table it is a collective type used for multiple possible filesystems. For mount you must specify the particular filesystem; in Linux you can list the supported ones with cat /proc/filesystems.
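An alternative that avoids computing byte offsets by hand is to let losetup scan the partition table and create one loop device per partition; a sketch, assuming a util-linux recent enough to support --partscan:
LOOP=$(sudo losetup -Pf --show vexpress.img)     # e.g. /dev/loop0, with loop0p1, loop0p2
sudo mount "${LOOP}p2" /mnt/tmp                  # the second (Linux) partition
sudo umount /mnt/tmp && sudo losetup -d "$LOOP"  # clean up when done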

How to release hugepages from the crashed application

I have an application that uses hugepages, and the application suddenly crashed due to a bug.
After the crash, since the application did not release the hugepages properly, the free hugepage count in the sys filesystem never increased:
$ sudo cat /sys/kernel/mm/hugepages/hugepages-2048kB/free_hugepages
0
$ sudo cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
1024
Is there a way to release the hugepages by force?
Sometimes you need to check every directory where hugetlbfs has been mounted.
So:
find the mounted directories with the command mount | grep huge,
check every such directory, not just /dev/hugepages,
delete all the 2M-sized files (2M being the hugepage size).
Use ipcs -m to list the shared memory segments.
Use ipcrm to remove the left over shared memory segments.
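For example (the shmid shown is hypothetical):
ipcs -m            # list segments; note shmid, owner and nattch
ipcrm -m 262145    # remove the orphaned segment by its shmid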
Edit on 06/24/2019:
Ok, so, the above answer, while correct as far as it goes, was a bit brief. In particular, if you have a host with multiple DB instances and only one has crashed, how can you determine which (if any) memory segments should be cleaned up?
Well, this too can be done. For each running instance, connect with / as sysdba, then do oradebug setmypid (any pid will do, as all Oracle PIDs connect to the SGA). Then do oradebug ipc. That will (hopefully) write the IPC information to a trace file. So, go to the udump (or diag_dest) directory and look for your trace file. It will contain all the IPC information for the instance, including the ShmId. Look through the file for the ShmId(s) that this instance is using, then look at the output of ipcs -m.
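In a SQL*Plus session on each running instance, that sequence looks roughly like this (a sketch; the tracefile_name step is just a convenience for locating the trace file):
sqlplus / as sysdba
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name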
When you have done that for all the running instances, any memory segment output by ipcs -m that shows non-zero memory allocation, and that you cannot account for in the oradebug ipc information from any running instance, must be the left over memory segments from the crashed instance. Use ipcrm to remove it/them.
Doing this on a host with multiple running instances can be a bit fraught. Please proceed with caution. You don't want to remove the SGA of a running instance!
Hope that helps....
HugeTLB can either be used for shared memory (Mark J. Bobak's answer deals with that), or the app mmaps files created in a hugetlb filesystem. If the app crashes without removing those files, they survive and keep the corresponding memory "allocated".
Check the hugetlb filesystem and see if there are any leftover files from the app; removing them releases the memory.
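A quick check, as a sketch (the mount point is an example):
mount -t hugetlbfs       # list all hugetlbfs mount points
ls -lh /dev/hugepages    # inspect each one for leftover files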
If you follow the instructions below, you can get rid of the allocated hugepages:
1) Check the hugepages that were free at restart
dpdk@dpdkvm:~$ ls /mnt/huge/
empty
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ cat /proc/meminfo
...
HugePages_Total: 256
HugePages_Free: 256
...
2) Start a DPDK application with wrong parameters, producing an error
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ sudo ./build/kni -c 0x03 -n 2 -- -P -p 0x03 --config="(0,0,1),(1,0,1)"
...
EAL: Error - exiting with code: 1
Cause: No supported Ethernet device found
3) When I check the hugepages, none are free
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ cat /proc/meminfo
...
HugePages_Total: 256
HugePages_Free: 0
...
4) Now, when I check the mounted hugepage directory, I can see the files which were not given back to the OS by the DPDK application.
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ ls /mnt/huge/
...
rtemap_0 rtemap_137 rtemap_176 rtemap_214 rtemap_253 rtemap_62
...
5) Finally, if you remove the files starting with rtemap, you give the hugepages back:
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ sudo rm /mnt/huge/*
[sudo] password for dpdk:
dpdk@dpdkvm:~/dpdk-1.8.0/examples/kni$ cat /proc/meminfo
...
HugePages_Total: 256
HugePages_Free: 256
...
Your hugetlb pages may be in use by shared memory or by mmapped files.
Try removing the shared memory segments or unmounting the hugetlbfs mount, as sketched below.
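A sketch, assuming the hugetlbfs is mounted at /mnt/huge as in the walkthrough above:
ipcs -m                  # check for hugepage-backed shared memory segments (see above)
sudo umount /mnt/huge    # unmounting discards leftover hugetlbfs files (if nothing holds them open)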
