Swap file returns to original size after reboot - linux

I am trying to increase the swap file size on my Raspberry Pi 3.
I'm following this guide here on DigitalOcean.
After successfully increasing the file and setting it up with the mkswap and swapon commands, everything works fine. I even tried filling my RAM with random data to see whether it would use the new swap space, and it works perfectly.
However, after I reboot my Raspberry Pi, the swap file returns to the previous (default) size of 100 MB.
Is there any way to make this change permanent?
I'm running Raspbian Jessie on a Raspberry Pi 3.

I figured it out.
Modifying /etc/dphys-swapfile solved the problem.
I just changed CONF_SWAPSIZE=100 to CONF_SWAPSIZE=2000.
dphys-swapfile is responsible for setting up, mounting/unmounting, and deleting swap files.
In its configuration file you can also specify the location of the swap file, as well as a few other parameters.
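The change above can be sketched as a short shell session (the 2000 MB value is from this answer; applying it through the dphys-swapfile helper commands is an assumption on my part — simply editing the file and rebooting works too):

```shell
# Raise the swap size in the dphys-swapfile config (value is in MB)
sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=2000/' /etc/dphys-swapfile

# Tear down the old swap file, recreate it at the new size, and re-enable it
sudo dphys-swapfile swapoff
sudo dphys-swapfile setup
sudo dphys-swapfile swapon

# Verify the new size
free -h
```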

Suppose you have got to the point where swapon -s returns
$ sudo swapon -s
Filename    Type    Size     Used  Priority
/swapfile   file    4194300  0     -1
Now, to make this change permanent, you need to add an entry for your new swap file to /etc/fstab.
Add the following line:
/swapfile none swap sw 0 0
The meaning of the fstab fields is as follows:
1. source  2. mount point  3. filesystem type  4. mount options  5. dump frequency  6. fsck order
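Putting it together (the swapoff/swapon round trip is a hedged suggestion of mine for testing the fstab entry without rebooting):

```shell
# Append the swap entry to /etc/fstab
# fields: source, mount point, fstype, options, dump frequency, fsck order
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

# Re-test without rebooting: disable all swap, then re-enable
# everything listed in fstab
sudo swapoff -a
sudo swapon -a
swapon -s    # /swapfile should be listed again
```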

Related

Rust compilation on AWS fails while succeeding on other machines [duplicate]

I am using openSUSE, specifically the VMware variant from Mono's website.
I get this error. Does anyone know how I might fix it?
make[4]: Entering directory `/home/rupert/Desktop/llvm/tools/clang/tools/driver'
llvm[4]: Linking Debug+Asserts executable clang
collect2: ld terminated with signal 9 [Killed]
make[4]: *** [/home/rupert/Desktop/llvm/Debug+Asserts/bin/clang] Error 1
The full text can be found here
Your virtual machine does not have enough memory to perform the linking phase. Linking is typically the most memory-intensive part of a build, since it's where all the object code comes together and is operated on as a whole.
If you can allocate more RAM to the VM, then do that. Alternatively, you could increase the amount of swap space. I am not that familiar with VMs, but I imagine the virtual hard drive you set up will have a swap partition. If you can make that bigger, or allocate a second swap partition, that would help.
Increasing the RAM, if only for the duration of your build, is the easiest thing to do, though.
I also hit the same issue and solved it with the following steps (it is purely a memory issue):
Check the current swap space by running the free command (it should be around 10 GB).
Check the swap partition:
sudo fdisk -l
/dev/hda8 none swap sw 0 0
Remake the swap space and enable it:
sudo swapoff -a
sudo /sbin/mkswap /dev/hda8
sudo swapon -a
If your swap partition is not big enough, you can create a swap file and use that instead.
Create the swap file:
sudo fallocate -l 10G /mnt/10GB.swap
sudo chmod 600 /mnt/10GB.swap
OR
sudo dd if=/dev/zero of=/mnt/10GB.swap bs=1024 count=10485760
sudo chmod 600 /mnt/10GB.swap
Format the swap file:
sudo mkswap /mnt/10GB.swap
Enable the swap file:
sudo swapon /mnt/10GB.swap
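The whole procedure above can be sketched as one script (the fallocate-to-dd fallback is my assumption — fallocate is fast but not supported on every filesystem):

```shell
#!/bin/sh
# Create and enable a 10 GiB swap file (path from the answer above)
SWAPFILE=/mnt/10GB.swap

# Allocate the file; fall back to dd where fallocate is unsupported
sudo fallocate -l 10G "$SWAPFILE" 2>/dev/null || \
    sudo dd if=/dev/zero of="$SWAPFILE" bs=1024 count=10485760

sudo chmod 600 "$SWAPFILE"   # swap files must not be world-readable
sudo mkswap "$SWAPFILE"      # write the swap signature
sudo swapon "$SWAPFILE"      # enable it immediately
```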
I tried make -j1 and it worked! But it takes a long time to build.
I had the same problem building on a VirtualBox system. FWIW, I was building on a laptop running XP with 2 GB of RAM. I had to bump the virtual RAM up to 1462 MB to get a successful build. Also note that the recommended disk size of 8 GB is not sufficient to build and install both LLVM and Clang under Ubuntu; I'd recommend at least 16 GB.
I would suggest using the -l (--max-load) option instead of limiting -j in this case. See this possibly helpful answer.

Backup for a linux system via osx

I have an ODROID (Raspberry Pi-like) machine with an Arch Linux system installed. Now I want to move the system from one microSD card (A) to another (B). When I tried this, the system became corrupted; information about file attributes was lost:
Copy files from A to the OS X host: cp -R /Volume/microsd_a/* ~/Desktop/backup
Copy files from the OS X host to B: cp -R ~/Desktop/backup/* /Volume/microsd_b
Is it actually possible to copy a Linux system using an OS X host while preserving its integrity?
Update:
dd. I tried this, but there is a problem. My SD cards have different sizes (64 GB and 16 GB), but the system installed on the 64 GB card takes up no more than 8 GB. When I launched the copy, the output image file exceeded 16 GB, so I killed the process. Besides, the MBR contains the partition table, which should differ between the cards (one 64 GB partition vs. one 16 GB partition). And note, I do not need to copy the bootloader from the MBR; I can flash the bootloader by other means.
cp. What I wanted to hear as the answer is the list of flags I need for this operation. Reading man cp didn't help. cp -a does not copy all files because of a Cannot allocate memory error. I tried cp -aX; no attributes were restored after copying the data to the second SD card.
tar. I tried multiple times with different flags; the last attempt was tar -cvpf; tar --same-owner -xpf. But file attributes were still corrupted.
Again:
- Are you sure it is possible to preserve file attributes when copying ext4 -> APFS -> ext4?
- If it is possible, how does it work, and which command with which flags should I use?
cp -R changes permissions and time stamps and misses hidden files; you can't use that command to create a disk image.
What you need is a disk copy/clone. The command to use is dd.
Check out this webpage:
https://pbxbook.com/other/dd_clone.html
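Because the question mentions cards of different sizes (64 GB source, 16 GB target), a raw clone of the whole card will not fit. One common workaround, sketched below, is to shrink the source partition first (e.g. with gparted on a Linux machine) and then copy only the leading bytes of the card. The /dev/disk2 and /dev/disk3 device names are hypothetical — check diskutil list first:

```shell
# On the OS X host: unmount (not eject) the source card
diskutil unmountDisk /dev/disk2

# Copy only the first 8 GiB, which covers the shrunken partition.
# /dev/rdisk2 is the raw device node; it is much faster than /dev/disk2.
sudo dd if=/dev/rdisk2 of=backup.img bs=1m count=8192

# Write the image to the smaller card
diskutil unmountDisk /dev/disk3
sudo dd if=backup.img of=/dev/rdisk3 bs=1m
```

Because the image is a byte-for-byte copy, the ext4 filesystem and its file attributes never pass through APFS at all.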

Script to create swap partition fails when running automatically

I am creating a cluster of machines in AWS (Amazon Linux 2) using terraform by utilizing the user_data argument under the aws_instance resource. Part of the script is the creation of a swap partition. The commands in my script work perfectly if I execute them manually in the instance.
I have tried the following in my script. It creates the partition successfully, but it does not finish setting up the swap, as confirmed by more /proc/swaps. It successfully executes the lines of code below everything I have shown (omitted from my post), so it must be failing at partprobe, mkswap /dev/nvme1n1p2, or swapon /dev/nvme1n1p2. It does run the echo "/dev/nvme1n1p2 swap swap defaults 0 0" >> /etc/fstab. I'm not sure how to tell where it stops executing.
# Create SWAP partition
fdisk /dev/nvme1n1 <<EOF
n
p
2
+48G
w
EOF
partprobe
mkswap /dev/nvme1n1p2
swapon /dev/nvme1n1p2
echo "/dev/nvme1n1p2 swap swap defaults 0 0" >> /etc/fstab
The intended result is a working swap partition, as confirmed by running more /proc/swaps. The partition is created, but the swap never comes up.
UPDATE: Here is the contents of the log file:
mkswap: cannot open /dev/nvme1n1p2: No such file or directory
swapon: cannot open /dev/nvme1n1p2: No such file or directory
However that device is listed when running lsblk and the command works if I run it manually.
Solution from the comments by @thatotherguy:
I'm guessing it's a race condition with partprobe. If you run the commands manually, several seconds pass between each one, so udev has plenty of time to create the device node asynchronously. When you run them in a script, it doesn't. You could try adding sleep 5 after partprobe.
This solved the OP's error, and I can also report success using this fix.
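With the fix applied, the relevant user-data fragment might look like this (the fdisk here-doc is taken from the question; udevadm settle is a hedged alternative to a fixed sleep — it waits only as long as udev actually needs):

```shell
# Create the SWAP partition non-interactively (here-doc from the question)
fdisk /dev/nvme1n1 <<EOF
n
p
2
+48G
w
EOF

# Re-read the partition table, then wait for udev to create
# /dev/nvme1n1p2 before touching it -- this was the race
partprobe
udevadm settle || sleep 5

mkswap /dev/nvme1n1p2
swapon /dev/nvme1n1p2
echo "/dev/nvme1n1p2 swap swap defaults 0 0" >> /etc/fstab
```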

mdadm: array disappearing on reboot, despite correct mdadm.conf

I'm using Ubuntu 13.10 and trying to create a RAID 5 array across 3 identical disks connected to SATA ports on the motherboard. I've followed every guide and used both the built-in Disks GUI app and mdadm at the command line, and despite everything I cannot get the array to persist after a reboot.
I create the array with the following command:
root@zapp:~# mdadm --create /dev/md/array --chunk=512 --level=5 \
--raid-devices=3 /dev/sda /dev/sdb /dev/sdd
Then I watch /proc/mdstat for a while as it syncs, until I get this:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid5 sda1[0] sdd1[3] sdb1[1]
1953262592 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
To update the mdadm config file, I run the following:
root@zapp:~# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
This adds the essential line to my config file:
ARRAY /dev/md/array metadata=1.2 UUID=0ad3753e:f0177930:8362f527:285d76e7 name=zapp:array
Everything seems correct, but when I reboot, the array is gone!
The key to fixing this was to partition the drives first, and create the array from the partitions instead of the raw devices.
Basically, the create command just needed to change to:
root@zapp:~# mdadm --create /dev/md/array --chunk=512 --level=5 \
--raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1
The rest of the steps were correct, and created everything properly once this change was made. Any further info as to why this was necessary would be helpful. It was certainly not obvious in any of the documentation I found.
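The corrected sequence, sketched end to end. The parted invocation and the update-initramfs step are assumptions beyond what the answer states; on Ubuntu, rebuilding the initramfs after updating mdadm.conf is a commonly needed extra step so the array is assembled early in boot:

```shell
# Partition each disk first: one whole-disk partition per drive
for d in /dev/sda /dev/sdb /dev/sdd; do
    sudo parted -s "$d" mklabel gpt mkpart primary 0% 100%
done

# Create the array from the partitions, not the raw devices
sudo mdadm --create /dev/md/array --chunk=512 --level=5 \
    --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdd1

# Persist the configuration and rebuild the initramfs so the
# array is assembled during boot
/usr/share/mdadm/mkconf | sudo tee /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```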

Inode of directory on mounted share changes despite no change in modification time

I am running Ubuntu 10.04 and am mounting a drive using cifs. The command I'm using is:
sudo mount -t cifs -o workgroup="workgroup",username="username",noserverino,ro //"drive" "mount_dir"
(with the quoted placeholders substituted for actual values)
When I then run the command ls -i I get: 394070
Running it a second time I get: 12103522782806018
Is there any reason to expect the inode value to change?
Running ls -i --full-time shows no change in modification time.
noserverino tells your mount not to use server-generated inode numbers and instead to make up client-generated temporary inode numbers. Try serverino instead: if your server and the exported filesystem support inode numbers, they should be persistent.
I found that adding the nounix option before noserverino kept the inodes small and persistent. I'm not really sure why. The server is AIX and I'm mounting from Ubuntu. Thank you for your response.
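For reference, the two variants side by side (server path, workgroup, and mount point are hypothetical placeholders, not from the question):

```shell
# Variant 1: trust the server's inode numbers (persistent if the
# server and the exported filesystem support them)
sudo mount -t cifs -o workgroup=WORKGROUP,username=USER,serverino,ro \
    //server/share /mnt/share

# Variant 2: what worked against the AIX server in this thread
sudo mount -t cifs -o workgroup=WORKGROUP,username=USER,nounix,noserverino,ro \
    //server/share /mnt/share
```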
