How to increase /usr on FreeBSD 8.1? - freebsd

My guest VM is running FreeBSD 8.1 and today /usr is full. How can I increase /usr using another virtual disk?
Here is the output of df:
Filesystem 1K-blocks Used Avail Capacity Mounted on
/dev/da0s1a 1012974 175918 756020 19% /
devfs 1 1 0 100% /dev
/dev/da0s1e 2026030 127744 1736204 7% /tmp
/dev/da0s1f 25384812 23038656 315372 99% /usr
/dev/da0s1d 20308398 10831484 7852244 58% /var
Thanks in advance.

First, get information about what is using space on the /usr partition:
du -s /usr/* | sort -n
For example, suppose you have another partition, /usr2, and a large subdirectory such as /usr/local (it is very big on my system).
Then you can, in a rather ugly way, either:
move the /usr/local subdirectory to /usr2 and symlink it back: tar -C /usr -cf - local | tar -C /usr2 -xf - && mv /usr/local /usr/local2 && ln -s /usr2/local /usr && rm -rf /usr/local2
or copy all data from /usr to /usr2 (tar -C /usr -cf - . | tar -C /usr2 -xf -) and change the mount point in /etc/fstab.
If you do not have that much free space, you can apply the first method to a sub-subdirectory instead.
In your situation you could also:
create a subdirectory in /var: mkdir /var/usr/
and use the method above with /var/usr instead of /usr2.
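For the question as asked, bringing the new virtual disk online as /usr2 could look roughly like this (a sketch only: it assumes the second disk appears as da1, so check camcontrol devlist and adjust the device and partition names to your setup):
newfs /dev/da1                                   # create a UFS filesystem on the new disk (bsdlabel it first if you prefer)
mkdir /usr2
mount /dev/da1 /usr2
echo '/dev/da1 /usr2 ufs rw 2 2' >> /etc/fstab   # so it is mounted again at boot
Either of the two methods above then applies with the new /usr2.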

Related

Selective decompression, omitting the existing absolute folder structure, of tar to a directory

Say the example.tar.gz archive contains the following:
volumes/wordpress/a
volumes/wordpress/.b
volumes/wordpress/c/d
volumes/service2/a
volumes/service2/.b
volumes/service3/c/d
volumes/service3/a
volumes/service3/.b
volumes/service3/c/d
I want to extract the contents of volumes/wordpress from the archive into the /var/www/html directory on the host (which already exists and cannot be removed), so that I end up with:
/var/www/html/a
/var/www/html/.b
/var/www/html/c/d
I have no way of reformatting the tar file.
What I have tried:
gunzip -c example.tar.gz | tar -C /var/www/html -xf - volumes/wordpress
but this creates /var/www/html/volumes/wordpress/...
gunzip -c example.tar.gz | bash -c 'tar -C /tmp -xf - volumes/wordpress && mv /tmp/volumes/wordpress/* /var/www/html'
but this skips the .b file (the * glob does not match dotfiles by default)
gunzip -c example.tar.gz | bash -c 'tar -C /tmp -xf - volumes/wordpress && rsync -a --remove-source-files /tmp/volumes/wordpress/ /var/www/html/'
but rsync is not available inside the Docker container
Note: because I don't think it is relevant to this question, I left out the Docker-related specifics, but in case it is useful, the commands I am running are of this format:
gunzip ... | docker-compose run --rm wordpress tar ...
This allows the wordpress container to be defined in docker-compose.yml (hence why I say I cannot remove /var/www/html/, as it is mounted as a volume).
Edit: It turned out that I found a satisfactory (although hacky) solution to my problem, and it WAS Docker-specific (so I edited the question to include the Docker tag). I will supply the hacky solution as an answer to the question. I am still interested in a non-Docker answer if possible.
I was able to remount the volume already defined in docker-compose.yml, my-volume, in a docker-compose run of my-service, but this time mapped to a second directory inside the container, the one where tar writes its output.
e.g.
gunzip -c example.tar.gz | docker-compose run --rm -v my-volume:/volumes/wordpress my-service tar -C / -xf - volumes/wordpress
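A non-Docker alternative: if the tar inside the container is GNU tar (an assumption; a minimal busybox tar may lack the option), its --strip-components flag drops leading path components during extraction, which avoids the temporary directory entirely:
gunzip -c example.tar.gz | tar -C /var/www/html --strip-components=2 -xf - volumes/wordpress
Here --strip-components=2 removes the volumes/wordpress/ prefix, so the files (dotfiles included) land directly in /var/www/html.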

How to get the sizes of multiple specified directories in Unix?

I want to get the size of each specified directory after running some commands.
For example, I want to clean all the cache in /var using:
sudo yum clean all
Then, get the size of the following directories:
/var
/boot
/opt
I can write for example the following:
sudo du -sh /var
sudo du -sh /boot
sudo du -sh /opt
But I want to merge them into a single du command. Is that possible? Then I want to combine it with the yum clean all command.
Is there a good practice for doing this? I'm new to Unix.
You can pass multiple directories to du, so you can do:
sudo yum clean all && sudo du -sh /var /boot /opt
Or, to run everything with a single sudo invocation:
sudo sh -c "yum clean all && du -sh /var /boot /opt"
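If you also want a grand total line at the end, GNU du's -c flag adds one (a small extension of the answer above, not something the question asked for):
sudo sh -c "yum clean all && du -shc /var /boot /opt"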

Is there a quick method on Linux to switch among the n most recently accessed folders (n > 1)?

We can use cd - to jump back to the most recently visited folder, but what if I want to quickly switch to the one before that?
Sure there's a way!
You want the shell builtin pushd.
[~]$ pwd
/home/dan
[~]$ pushd /tmp
/tmp ~
[tmp]$ pushd /usr/bin
/usr/bin /tmp ~
[bin]$ pushd +2
~ /usr/bin /tmp
[~]$
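To see the numbered stack before choosing a target, use the dirs builtin (assuming bash; the index in the first column is what pushd +N rotates to the top):
[~]$ dirs -v
 0  ~
 1  /usr/bin
 2  /tmp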
ls -t | head -n1
The command above lists the most recently modified entry in the current directory; note that this reflects modification time, not the shell's directory history.

Linux command to copy contents of folder to root

I am using Ubuntu and I want to copy a folder's contents into another folder. I used the command below:
cp -R /home/user/public_html/domain.com /home/user/public_html/
But I get a "source and destination are the same" error.
I would use tar (because it copies the data in blocks). Something like:
cd /home/user/public_html/domain.com
tar cf - * | (cd .. ; tar xvf -)
or (using your cp approach):
cd /home/user/public_html/domain.com
cp -R * ../
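Note that both variants rely on the * glob, which does not match hidden files by default, so any dotfiles in domain.com would be left behind. One way around this (a small sketch using cp's trailing /. idiom, which copies a directory's entire contents, dotfiles included) is:
cp -R /home/user/public_html/domain.com/. /home/user/public_html/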

Linux Backup Bash

I am trying to create a bash script that backs up the whole of /dev/sda1 to /mnt/Backup:
/dev/sda1 457G 3.5G 431G 1% /
/dev/sdb1 2.8T 3.0G 2.8T 1% /mnt/Backup
The script that I have is:
START=$(date +%D)
FOLDER_NAME=`echo $START | tr -s '/' | tr '/' '_'`
SOURCE_PATH='/media /bin /boot /cdrom /dev /etc /home /lib /opt /proc /root /run /sbin /selinux /srv /sys /tmp /usr /var'
SOURCE_PATH='/'
FOLDER_PATH='/mnt/Backup'
BACKUP_PATH=$FOLDER_PATH/Bkp_$FOLDER_NAME
mkdir -p '$BACKUP_PATH'
cp -r $SOURCE_PATH $BACKUP_PATH
As you can see above, in the source path I tried naming all the folders I wanted to back up, but when I run with that path I get an error: this is not a directory.
Then I tried the source path "/" shown below, and the copy starts but gets stuck on:
cp: reading `/proc/sysrq-trigger': Input/output error
cp: failed to extend `/mnt/Backup/Bkp_09_14_13/proc/sysrq-trigger': Input/output error
The question is: how can I change my script to successfully back up sda1 to sdb1?
Thanks in advance for your help.
If /dev/sda1 is mounted as your root filesystem, a recursive copy of it will also descend into the other filesystems mounted under its directories (such as /proc, and /mnt/Backup itself). You can mount it again on another directory, e.g. /mnt/system, and then do the recursive copy from there. I also suggest using cp -a rather than just -r, so permissions, ownership, and symlinks are preserved.
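A minimal sketch of that approach (my own illustration, assuming a Linux system where /dev/sda1 is mounted at /; the bind mount exposes just the root filesystem, without /proc, /sys, or anything else mounted on top of it):
mkdir -p /mnt/system
mount --bind / /mnt/system            # bind mounts are not recursive, so nested mounts are excluded
cp -a /mnt/system/. /mnt/Backup/Bkp_09_14_13
umount /mnt/system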
I'm not sure about your actual program, but here are some little things:
START=$(date +%D)
FOLDER_NAME=`echo $START | tr -s '/' | tr '/' '_'`
Both $() and backticks do exactly the same thing. Why use both in one script? Prefer $() over backticks, because it can be nested. Also, what is the point of tr -s '/'? For a start, the two tr calls can be condensed into one:
echo "$START" | tr -s '/' '_'
But since date +%D never returns repeated slashes, you can use pure bash parameter expansion:
FOLDER_NAME=${START//\//_}
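For example (the sample date matches the Bkp_09_14_13 path visible in the question's error messages):
$ START=$(date +%D)
$ echo "$START"
09/14/13
$ echo "${START//\//_}"
09_14_13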
How could this even work: mkdir -p '$BACKUP_PATH'? Variables in single quotes are not expanded, so this creates a directory literally named $BACKUP_PATH. Change it to mkdir -p "$BACKUP_PATH".
$SOURCE_PATH should stay unquoted in cp -r $SOURCE_PATH $BACKUP_PATH, because you want it to be split into separate parameters. That is right. But you have to quote $BACKUP_PATH, because you want it to remain a single parameter.
So, here is your script with small improvements:
START=$(date +%D)                 # e.g. 09/14/13
FOLDER_NAME=${START//\//_}        # becomes 09_14_13
SOURCE_PATH='/media /bin /boot /cdrom /dev /etc /home /lib /opt /proc /root /run /sbin /selinux /srv /sys /tmp /usr /var'
SOURCE_PATH='/'                   # note: this overrides the list on the previous line
FOLDER_PATH='/mnt/Backup'
BACKUP_PATH=$FOLDER_PATH/Bkp_$FOLDER_NAME
mkdir -p "$BACKUP_PATH"           # double quotes, so the variable is expanded
cp -r $SOURCE_PATH "$BACKUP_PATH" # SOURCE_PATH deliberately unquoted: word splitting is wanted here
