FreeBSD-11 EZJail install fails with tar unable to chdir - freebsd

On a FreeBSD-11.1 host I removed an existing installation of ezjail using the following commands:
pkg remove ezjail
rm -rf /usr/local/etc/ezjail.conf
rm -rf /usr/local/etc/ezjail
chflags -R noschg /usr/jails
rm -rf /usr/jails
zfs destroy -r zroot/ezjail
I also checked for /etc/fstab.* and found none.
I then reinstalled ezjail using pkg and recreated the zfs ezjail partition:
zfs create -p zroot/ezjail
I also modified /usr/local/etc/ezjail.conf to use zfs:
ezjail_use_zfs="YES"
ezjail_use_zfs_for_jails="YES"
ezjail_jailzfs="zroot/ezjail"
However, when I run ezjail-admin install I get this error:
ezjail-admin install
base.txz 100% of 99 MB 621 kBps 02m45s
tar: could not chdir to '/usr/jails/fulljail'
ll /usr/jails
total 0
ll /usr/local/etc/ezjail
total 0
zfs list | grep jail
zroot/ezjail 176K 883G 88K /zroot/ezjail
zroot/ezjail/fulljail 88K 883G 88K /zroot/ezjail/fulljail
What has happened and how do I fix it?

It seems the mountpoint for your jail dataset is not set to /usr/jails; try something like:
# zfs create -o mountpoint=/usr/jails zpool/jails
Check this quick setup guide as a reference.
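If the dataset already exists (as the zfs list output in the question shows), an alternative is to move its mountpoint rather than recreate it; a sketch, assuming the dataset is zroot/ezjail as configured in ezjail.conf:
zfs set mountpoint=/usr/jails zroot/ezjail
zfs get mountpoint zroot/ezjail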

This issue arose because the initial install of ezjail on this system did not use ZFS. Consequently, the directories /usr/jails/fulljail and /usr/jails/newjail were created on the zroot/usr dataset. When I switched ezjail over to ZFS I did not realise that this had happened. Somehow the existence of these two directories on zroot/usr conflicted with the same directories in zroot/ezjail under its mount point /usr/jails.
This condition was only discovered after I had destroyed zroot/ezjail in preparation for a clean install of ezjail. My solution was to also remove these directories and the entire /usr/jails directory tree from zroot/usr before reinstalling ezjail with ZFS enabled in /usr/local/etc/ezjail.conf.
To lessen the pain of all this I made an archive of the jail first and recreated it from the archive following the ezjail reinstallation.
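The archive/restore step looks roughly like this (the jail name is a placeholder; check ezjail-admin(8) for the exact syntax and archive location on your ezjail version):
ezjail-admin archive myjail
# ... remove /usr/jails, reinstall ezjail with ZFS enabled ...
ezjail-admin restore <archive-file>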

Related

How to deploy files to /boot partition with Yocto

I'm trying to deploy some binary files to /boot in a Yocto image for RPi CM3 but it deploys them to the wrong location.
do_install() {
install -d ${D}/boot/overlays
install -m 0664 ${WORKDIR}/*.dtb ${D}/boot/overlays/
install -m 0664 ${WORKDIR}/*.dtbo ${D}/boot/overlays/
}
The files are deployed to /boot in the / partition of the final image, but not to the /boot partition. So they are not available at boot time.
I have already googled and studied the kernel recipes (and classes) of the Poky distribution, but I didn't find the mechanism it uses to ensure that the files are deployed to the boot image (and not to the /boot dir in the root image).
Any help is appreciated :)
Update #1
In my local.conf I did:
IMAGE_BOOT_FILES_append = " \
overlays/3dlab-nano-player.dtbo \
overlays/adau1977-adc.dtbo \
...
"
And in my rpi3-overlays.bb
do_deploy() {
install -d ${DEPLOYDIR}/${PN}
install -m 0664 ${WORKDIR}/*.dtb ${DEPLOYDIR}/${PN}
install -m 0664 ${WORKDIR}/*.dtbo ${DEPLOYDIR}/${PN}
touch ${DEPLOYDIR}/${PN}/${PN}-${PV}.stamp
}
Using this the image builds, but the files still don't get deployed to the /boot partition.
Using RPI_KERNEL_DEVICETREE_OVERLAYS I get a build error because the kernel recipe tries to build the dtbo files like dts files.
RPI images are created with sdimage-raspberrypi.wks WIC wks file. It contains:
part /boot --source bootimg-partition ...
so it uses the bootimg-partition.py wic plugin to generate the /boot partition. It copies every file defined by the IMAGE_BOOT_FILES variable.
It seems you want to add some devicetree overlays, so you need to modify the machine configuration, and more specifically the RPI_KERNEL_DEVICETREE_OVERLAYS variable. The IMAGE_BOOT_FILES variable is set in rpi-base.inc.
If you don't have any custom machine or custom distro defined, you can add it in local.conf:
RPI_KERNEL_DEVICETREE_OVERLAYS_append = " <deploy-path>/<dto-path>"
You can see here how to add files to the deploy directory.
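For completeness, a rough sketch of how a recipe usually wires files into the deploy directory; the inherit deploy line and the addtask line are the pieces that are easy to miss (names mirror the rpi3-overlays.bb from the question, so treat this as a sketch, not a drop-in recipe):
inherit deploy
do_deploy() {
    install -d ${DEPLOYDIR}/${PN}
    install -m 0644 ${WORKDIR}/*.dtbo ${DEPLOYDIR}/${PN}/
}
addtask deploy after do_install before do_build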
After too many hours of investigation it turned out that deploying files to partitions other than / is not easily possible. I now went the way of a post-processing script that mounts the final image, deploys the additional files and unmounts it.
# Ensure the first loopback device is free to use
sudo -n losetup -d /dev/loop0 || true
# Create a loopback device for the given image
sudo -n losetup -Pf ../deploy/images/bapi/ba.rootfs.rpi-sdimg
# Mount the loopback device
mkdir -p tmp
sudo -n mount /dev/loop0p1 tmp
# Deploy files
sudo -n cp -n ../../meta-ba-rpi-cm3/recipes-core/rpi3-overlays/files/* tmp/overlays/
sudo -n cp ../../conf/config.txt tmp/config.txt
sudo -n cp ../../conf/cmdline.txt tmp/cmdline.txt
# Unmount the image and free the loopback device
sudo -n umount tmp
sudo -n losetup -d /dev/loop0
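A small variation that avoids hard-coding /dev/loop0: losetup --show prints the loop device it allocated, so you can capture and reuse it (same image path as above, sketch only):
# Attach the image and capture the allocated loop device
LOOPDEV=$(sudo -n losetup --show -Pf ../deploy/images/bapi/ba.rootfs.rpi-sdimg)
mkdir -p tmp
sudo -n mount "${LOOPDEV}p1" tmp
# ... deploy files as above ...
sudo -n umount tmp
sudo -n losetup -d "$LOOPDEV"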

How to clean up an AWS EC2 server?

I recently ran a report on my EC2 server and was told that it ran out of space. I deleted the csv that was partially generated from my report (it was going to be a pretty sizable one) and ran df -h and was surprised to get this output:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 7.0G 718M 91% /
devtmpfs 15G 100K 15G 1% /dev
tmpfs 15G 0 15G 0% /dev/shm
I was surprised not only by how little was available and how much space was used (I am on the /dev/xvda1 filesystem), but also by seeing two other filesystems.
To investigate what was taking so much space, I ran du -h in ~ and saw the list of all directories on the server. Their reported sizes in aggregate should not be even close to 7 GB... which is why I ask "what is taking up all that space??"
The biggest directory by far was the ~ directory containing 165 MB; all others were 30 MB and below. My mental math added it up to WAY less than 7 GB. (If I understand du -h correctly, all directories within ~ ought to be included within that 165 MB... so I am very confused how 7 GB could be full.)
Anyone know what's going on here, or how I can clean up the space? Also, just out of curiosity, is there a way to utilize the devtmpfs/tmpfs filesystems from the same box? I am running Amazon Linux, with Python and Ruby installed.
According to this answer, it seems as though it might be because of log files getting too large. Try running the command OP mentioned in their answer in order to find all large files: sudo find / -type f -size +10M -exec ls -lh {} \;
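Also note that du -h run in ~ only counts your home directory, so space used elsewhere (logs under /var/log, caches, etc.) never shows up there. A quick top-level sweep with GNU du, staying on the root filesystem, might look like:
sudo du -xh --max-depth=1 / | sort -h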
For me, the best option was to delete the overlay2 docker folder and to completely refresh docker to a clean state. It clears up more than 3GB in my case.
Important note: it will stop and remove your instances, so you need to rebuild them.
In order to do that, first stop the docker engine
sudo systemctl stop docker
Prune and then delete the entire docker directory (not just the overlay2 folder):
docker system prune
sudo rm -rf /var/lib/docker
Restart docker:
sudo systemctl start docker
The engine will restart without any images, containers, volumes, user created networks, or swarm state.
Additionally, you can remove snap with:
sudo apt autoremove --purge snapd
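If you want to see how much of the disk Docker is actually holding before wiping /var/lib/docker, docker system df gives a quick breakdown by images, containers, volumes, and build cache (just a pre-check, not part of the cleanup itself):
docker system df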

ENOSPC: no space left on device - Node.js

I just built an application with Express.js for an institution where they upload video tutorials. At first the videos were being uploaded to the same server, but later I switched to Amazon; I mean only the videos are being uploaded to Amazon. Now I get this error whenever I try to upload: ENOSPC: no space left on device. I have cleared the tmp folder to no avail. I should say that I have searched extensively about this issue but none of the solutions seem to work for me.
Just need to clean up the Docker system in order to tackle it. Worked for me.
$ docker system prune
Link to official docs
In my case, I got the error 'npm WARN tar ENOSPC: no space left on device' while running Node.js in Docker; I just used the command below to reclaim space.
sudo docker system prune -af
I had the same problem, take a look at the selected answer in the Stackoverflow here:
Node.JS Error: ENOSPC
Here is the command that I used (my OS: LinuxMint 18.3 Sylvia which is a Ubuntu/Debian based Linux system).
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
I have come across a similar situation where the disk is free but the system is not able to create new files. I am using forever for running my node app. forever needs to open a file to keep track of the node processes it's running.
If you've got free storage space on your system but keep getting error messages such as "No space left on device", you're likely out of entries in your inode table.
Use df -i, which shows IUse% like this:
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 992637 537 992100 1% /dev
tmpfs 998601 1023 997578 1% /run
If IUse% reaches 100%, your inode table is exhausted.
Identify dummy or unnecessary files on the system and delete them.
I got this error when my script was trying to create a new file. It may look like you've got lots of space on the disk, but if you've got millions of tiny files on the disk then you could have used up all the available inodes. Run df -hi to see how many inodes are free.
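If df -hi confirms the inodes are exhausted, a rough way to see which directories hold the most files (assuming GNU find) is:
sudo find / -xdev -type f -printf '%h\n' | sort | uniq -c | sort -n | tail -20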
I had the same problem; you can clear the trash if you haven't already. It worked for me:
(I found the command on a forum, so read about it before you decide to use it. I'm a beginner and just copied it; I don't know the full scope of what it does exactly.)
$ rm -rf ~/.local/share/Trash/*
The command is from this forum:
https://askubuntu.com/questions/468721/how-can-i-empty-the-trash-using-terminal
Well, in my own case, what actually happened was that while the files were being uploaded to Amazon Web Services, I wasn't deleting them from the temp folder. Every developer knows that when uploading files to a server they are initially stored in a temp folder before being copied to whichever folder you want (I know this for Node.js and PHP). So try deleting your temp folder and see, and ensure your upload method clears the temp folder immediately after every upload.
You can set a new limit temporary with:
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl -p
If you like to make your limit permanent, use:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Adding to the discussion, the above command works even when the program is not run from Docker.
Repeating that command:
sudo sysctl fs.inotify.max_user_watches=524288
docker system prune
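To check whether the new inotify limit actually took effect, you can read it back (works the same whether or not the app runs in Docker):
sysctl fs.inotify.max_user_watches
cat /proc/sys/fs/inotify/max_user_watches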
The previous answers fixed my problem for a short period of time.
I had to find the big files that weren't being used and were filling my disk.
On the host computer I ran: df
I got this; my problem was /dev/nvme0n1p3:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 32790508 0 32790508 0% /dev
tmpfs 6563764 239412 6324352 4% /run
/dev/nvme0n1p3 978611404 928877724 0 100% /
tmpfs 32818816 196812 32622004 1% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 32818816 0 32818816 0% /sys/fs/cgroup
/dev/nvme0n1p1 610304 28728 581576 5% /boot/efi
tmpfs 6563764 44 6563720 1% /run/user/1000
I installed ncdu and ran it against the root directory. You may need to manually delete a small file to make space for ncdu; if that's not possible, you can use df to find the files manually:
sudo apt-get install ncdu
sudo ncdu /
That helped me to identify the big files; in my case they were in the /tmp folder. I then used this command to delete the ones that hadn't been accessed in the last 10 days:
sudo find /tmp -type f -atime +10 -delete
tldr;
Restart Docker Desktop
The only thing that fixed this for me was quitting and restarting Docker Desktop.
I tried docker system prune, removed as many volumes as I could safely do, removed all containers and many images and nothing worked until I quit and restarted Docker Desktop.
Before restarting Docker Desktop the system prune removed 2GB but after restarting it removed 12GB.
So, if you tried to run system prune and it didn't work, try restarting Docker and running the system prune again.
That's what I did and it worked. I can't say I understand why it worked.
This worked for me:
sudo docker system prune -af
Open Docker Desktop
Go to Troubleshoot
Click Reset to factory defaults
The issue was actually a result of the temp folder not being cleared after upload, so all the videos that had been uploaded hitherto were still in the temp folder and the space had been exhausted. The temp folder has been cleared now and everything works fine.
I struggled with this for some time; the following command worked.
docker system prune
But then I checked the volume and it was full. I inspected it and realised that node_modules had become the real trouble.
So I deleted node_modules, ran npm install again, and it worked like a charm.
Note: this worked for me for a Node.js and React.js project.
In my case, Linux ext4 file system, large_dir feature should be enabled.
# check if it's enabled
sudo tune2fs -l /dev/sdc | grep large_dir
# enable it
sudo tune2fs -O large_dir /dev/sda
On Ubuntu, an ext4 filesystem will have a 64M limit on the number of files in a single directory by default, unless large_dir is enabled.
I checked free space first using this command (to show human-readable output):
free -h
Then I reclaimed more free space (Total reclaimed space: 2.77GB, up from 0.94GB) using this command:
sudo docker system prune -af
This worked for me.

Unable to start CouchDB

I just installed CouchDB using brew on Mac Mountain Lion. Everything went well until I hit the following issue when starting the server. I don't know Erlang and could not analyze the dump file.
couchdb
Apache CouchDB 1.2.1 (LogLevel=info) is starting.
{"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/usr/local/etc/couchdb/default.ini","/usr/local/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,shutdown}},[{couch_server_sup,start_server,1,[{file,"couch_server_sup.erl"},{line,98}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,274}]}]}}}}}},[{couch,start,0,[{file,"couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Any help much appreciated.
I have left the configuration files as they are.
Often this is due to incorrect permissions on various configuration files & directories. It can be caused by running as a sudo / root user for example.
You can try fixing this using the following, but you may need to either create/add yourself to a couchdb group, or use a different user:group combination.
sudo chown -R couchdb:couchdb /etc/couchdb /var/lib/couchdb /var/run/couchdb /var/log/couchdb
sudo chmod -R 770 /etc/couchdb /var/lib/couchdb /var/run/couchdb /var/log/couchdb
sudo find /etc/couchdb /var/lib/couchdb /var/run/couchdb /var/log/couchdb -type f | sudo xargs chmod 660
See the chmod section in http://wiki.apache.org/couchdb/Installing_on_OSX for more detail.
I've had this problem when attempting to load a configuration file that doesn't exist. I was starting CouchDB with the -a option to supply additional configuration, and if that file doesn't exist I get an error similar to the one reported:
$ couchdb -a /does/not/exist.ini
{"init terminating in do_boot",{{badmatch,{error,{bad_return,{{couch_app,start,[normal,["/usr/local/etc/couchdb/default.ini","/usr/local/etc/couchdb/local.ini"]]},{'EXIT',{{badmatch,{error,{error,enoent}}},[{couch_server_sup,start_server,1,[{file,"couch_server_sup.erl"},{line,56}]},{application_master,start_it_old,4,[{file,"application_master.erl"},{line,269}]}]}}}}}},[{couch,start,0,[{file,"couch.erl"},{line,18}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
sudo apt-get install libicu-dev
Provide proper permissions.

How to Free Inode Usage?

I have a disk drive where the inode usage is 100% (using df -i command).
However after deleting files substantially, the usage remains 100%.
What's the correct way to do it then?
How is it possible that a disk drive with less disk space usage can have
higher inode usage than a disk drive with higher disk space usage?
If I zip a lot of files, would that reduce the used inode count?
If you are very unlucky you have used about 100% of all inodes and can't even create the script.
You can check this with df -ih.
Then this bash command may help you:
sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
And yes, this will take time, but you can locate the directory with the most files.
It's quite easy for a disk to have a large number of inodes used even if the disk is not very full.
An inode is allocated to a file so, if you have gazillions of files, all 1 byte each, you'll run out of inodes long before you run out of disk.
It's also possible that deleting files will not reduce the inode count if the files have multiple hard links. As I said, inodes belong to the file, not the directory entry. If a file has two directory entries linked to it, deleting one will not free the inode.
Additionally, you can delete a directory entry but, if a running process still has the file open, the inode won't be freed.
My initial advice would be to delete all the files you can, then reboot the box to ensure no processes are left holding the files open.
If you do that and you still have a problem, let us know.
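Before rebooting, you can also check which processes are still holding deleted files open (and therefore pinning their inodes); lsof can list files whose on-disk link count has dropped to zero:
sudo lsof +L1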
By the way, if you're looking for the directories that contain lots of files, this script may help:
#!/bin/bash
# count_em - count files in all subdirectories under current directory.
echo 'echo $(ls -a "$1" | wc -l) $1' >/tmp/count_em_$$
chmod 700 /tmp/count_em_$$
find . -mount -type d -print0 | xargs -0 -n1 /tmp/count_em_$$ | sort -n
rm -f /tmp/count_em_$$
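If you'd rather avoid the temporary helper script, the same count can be done inline with find -exec (same idea, just a one-liner):
find . -mount -type d -exec sh -c 'echo "$(ls -a "$1" | wc -l) $1"' _ {} \; | sort -n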
My situation was that I was out of inodes and I had already deleted about everything I could.
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 942069 11 100% /
I am on Ubuntu 12.04 LTS and could not remove the old Linux kernels, which took up about 400,000 inodes, because apt was broken due to a missing package. And I couldn't install the new package because I was out of inodes, so I was stuck.
I ended up deleting a few old Linux kernels by hand to free up about 10,000 inodes:
$ sudo rm -rf /usr/src/linux-headers-3.2.0-2*
This was enough to then let me install the missing package and fix my apt
$ sudo apt-get install linux-headers-3.2.0-76-generic-pae
and then remove the rest of the old linux kernels with apt
$ sudo apt-get autoremove
things are much better now
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 942080 507361 434719 54% /
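If you want to see which old kernel and header packages are installed before removing anything by hand, something like this helps on dpkg-based systems (keep the one matching uname -r):
dpkg -l 'linux-image-*' 'linux-headers-*' | grep '^ii'
uname -r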
My solution:
Try to find out if this is an inode problem with:
df -ih
Try to find root folders with large inode counts:
for i in /*; do echo $i; find $i |wc -l; done
Try to find specific folders:
for i in /src/*; do echo $i; find $i |wc -l; done
If these are Linux headers, try to remove the oldest with:
sudo apt-get autoremove linux-headers-3.13.0-24
Personally I moved them to a mounted folder (because for me the last command failed) and installed the latest with:
sudo apt-get autoremove -f
This solved my problem.
I had the same problem; I fixed it by removing the PHP session directory:
rm -rf /var/lib/php/sessions/
It may be under /var/lib/php5 if you are using an older PHP version.
Recreate it with the following permissions:
mkdir /var/lib/php/sessions/ && chmod 1733 /var/lib/php/sessions/
The default permission for this directory on Debian is drwx-wx-wt (1733).
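Rather than removing the whole directory, you can also clear only the stale session files; a gentler sketch, assuming the default session.gc_maxlifetime of 1440 seconds (24 minutes) and the usual sess_* file names:
sudo find /var/lib/php/sessions -type f -name 'sess_*' -mmin +24 -delete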
We experienced this on a HostGator account (who place inode limits on all their hosting) following a spam attack. It left vast numbers of queue records in /root/.cpanel/comet. If this happens and you find you have no free inodes, you can run this cpanel utility through shell:
/usr/local/cpanel/bin/purge_dead_comet_files
You can use rsync to delete large numbers of files:
rsync -a --delete blanktest/ test/
Create a blanktest folder with 0 files in it and the command will sync your test folder (the one with the large number of files) against it; I have deleted nearly 5M files using this method.
Thanks to http://www.slashroot.in/which-is-the-fastest-method-to-delete-files-in-linux
Late answer:
In my case, it was my session files under
/var/lib/php/sessions
that were using Inodes.
I was even unable to open my crontab or make a new directory, let alone trigger the deletion operation.
Since I use PHP, we have this guide; I copied the code from example 1 and set up a cronjob to execute that part of the code.
<?php
// Note: This script should be executed by the same user as the web server process.
// Need active session to initialize session data storage access.
session_start();
// Executes GC immediately
session_gc();
// Clean up session ID created by session_gc()
session_destroy();
?>
If you're wondering how did I manage to open my crontab, then well, I deleted some sessions manually through CLI.
Hope this helps!
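The cronjob itself was just a regular crontab line pointing at that script; the path below is a placeholder for wherever you save it:
0 * * * * php /path/to/session_gc.php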
First, get the inode usage:
df -i
The next step is to find those files. For that, we can use a small script that will list the directories and the number of files on them.
for i in /*; do echo $i; find $i |wc -l; done
From the output, you can see the directory which uses the largest number of files, then repeat this script for that directory as below. Repeat until you find the suspected directory.
for i in /home/*; do echo $i; find $i |wc -l; done
When you find the suspected directory with a large number of unwanted files, just delete the unwanted files in that directory to free up some inode space with the following command:
rm -rf /home/bad_user/directory_with_lots_of_empty_files
You have successfully solved the problem. Check the inode usage again with the df -i command; you will see the difference.
df -i
eaccelerator could be causing the problem since it compiles PHP into blocks...I've had this problem with an Amazon AWS server on a site with heavy load. Free up Inodes by deleting the eaccelerator cache in /var/cache/eaccelerator if you continue to have issues.
rm -rf /var/cache/eaccelerator/*
(or whatever your cache dir)
We faced a similar issue recently. If a process still refers to a deleted file, the inode is not released, so you need to check lsof /; killing or restarting the process will release the inodes.
Correct me if I am wrong here.
As mentioned before, a filesystem may run out of inodes if there are a lot of small files. I have described some ways to find the directories that contain the most files here.
In one of the above answers it was suggested that sessions were the cause of running out of inodes, and in our case that is exactly what it was. To add to that answer, I would suggest checking the php.ini file and ensuring session.gc_probability = 1, session.gc_divisor = 1000 and session.gc_maxlifetime = 1440. In our case session.gc_probability was equal to 0, which caused this issue.
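To check what values your PHP is actually running with (CLI and web server configs can differ, so verify the web SAPI via phpinfo() too):
php -i | grep -E 'session.gc_(probability|divisor|maxlifetime)'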
this article saved my day:
https://bewilderedoctothorpe.net/2018/12/21/out-of-inodes/
find . -maxdepth 1 -type d | grep -v '^\.$' | xargs -n 1 -i{} find {} -xdev -type f | cut -d "/" -f 2 | uniq -c | sort -n
On a Raspberry Pi I had a problem with the /var/cache/fontconfig dir containing a large number of files. Removing it took more than an hour. And of course rm -rf *.cache* raised an "Argument list too long" error. I used the one below:
find . -name '*.cache*' | xargs rm -f
You can see this info with:
for i in /var/run/*;do echo -n "$i "; find $i| wc -l;done | column -t
For those who use Docker and end up here,
when df -i says 100% inode use, just run docker rmi $(docker images -q).
It will leave your created containers (running or exited) but will remove all images that aren't referenced anymore, freeing a whole bunch of inodes; I went from 100% back to 18%!
It might also be worth mentioning that I use CI/CD a lot, with a Docker runner set up on this machine.
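On current Docker versions the equivalent cleanup is the image prune subcommand, which likewise removes only images not used by any container:
docker image prune -a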
It could be the /tmp folder (where all the temporary files are stored, e.g. from yarn and npm script execution, especially if you are starting a lot of node scripts). Normally you just have to reboot your device or server, and it will delete all the temporary files that you don't need. For me, I went from 100% of use to 23% of use!
Many answers to this one so far, and all of the above seem concrete. I think you'll be safe using stat as you go along, but depending on the OS, you may get some inode errors creeping up on you. So implementing your own stat call using 64-bit to avoid any overflow issues seems fairly compatible.
Run the sudo apt-get autoremove command; in some cases it works. If unused old header packages exist, they will be cleaned up.
If you use Docker, remove all images. They use a lot of space...
Stop all containers
docker stop $(docker ps -a -q)
Delete all containers
docker rm $(docker ps -a -q)
Delete all images
docker rmi $(docker images -q)
Works for me.
