Running out of space on Linux, copying 12GB of files to a 15GB file system - linux

I have two virtual Linux servers, one for development and one in production, a typical setup one would expect.
On the development server I have files that I need to copy to the production server; they amount to 12GB according to the "du -h" command. The production server has 15GB free, according to the "df -h" command. However, when I tried to copy the files across, the production server ran out of space!
Whilst I know that both commands round the answer up or down, there should still be over 2GB free at the end (12.4GB into 14.5GB). Equally, there could be nearly 4GB spare (11.5GB into 15.4GB). (For some reason I get slightly different answers depending on the user, but still enough to fit the files - supposedly.)
Both servers are running 64 bit Ubuntu 16.04 LTS and have the file system set as EXT4.
I'm using SCP to copy the files across, since I don't have enough space to hold both a zipped file and its unzipped contents.
So what am I missing?
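To take the rounding out of the picture, both tools can report exact byte counts (the paths below are placeholders):
du -sb /path/to/files        # apparent size of the files, in bytes
du -s -B1 /path/to/files     # space actually allocated on disk, in bytes
df -B1 /                     # free space on the target filesystem, in bytes
df -i /                      # inode usage, which can also run out before the bytes do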

Try the same copy with the rsync command in a terminal. Here is an example:
$ rsync -a /some/path/to/src/ /other/path/to/dest/
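Since the copy in the question goes between two servers, a hedged variant over SSH might look like this (hostname and paths are placeholders); -a preserves permissions and timestamps, and --partial lets an interrupted transfer pick up where it left off instead of starting again:
$ rsync -a --partial --progress /path/to/files/ user@production:/path/to/dest/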

Related

Temp directory on root drive runs out of space when PUTing a large file into Apache

I'm PUTting (via curl on a client) a 10GB file into an Apache server on Ubuntu Linux (v17.04). The root drive is pretty much full, and the ultimate destination for the PUT is a Subversion repository that lives on a huge drive that is not the root drive. The only other technology involved is mod_dav_svn.
How can I move Apache's temp folder onto that huge drive too?
In modern Apache2 installs there's an /etc/apache2/envvars file that can happily have a line added to set the TMPDIR environment variable.
Restart Apache and it'll pick that up and run with it.
I've tested it over and over, and it works well (low free space on the boot drive no longer jeopardizes the PUT of the large file).
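As a rough sketch of that change (the directory is an example, not a required path, and www-data is the default Apache user on Ubuntu):
# added to /etc/apache2/envvars
export TMPDIR=/bigdrive/apache-tmp
Then create the directory and restart Apache:
sudo mkdir -p /bigdrive/apache-tmp
sudo chown www-data:www-data /bigdrive/apache-tmp
sudo systemctl restart apache2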

How to extract/decompress this multi-part zip file in Linux?

I have a set of zip files titled like so: file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc. as Info-ZIP expects them to be named, and then unzip the first file. Info-ZIP will iterate through the split files as expected.
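As a minimal sketch of that renaming, assuming file1.zip and file2.zip are the split parts and file3.zip is the final segment (the exact mapping depends on how the set was created):
mv file1.zip file.z01
mv file2.zip file.z02
mv file3.zip file.zip
unzip file.zip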
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use File Roller Archive Manager, which is just listed as Archive Manager in Ubuntu 18.
Mount remote home folder on local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount. Replace remote with whatever folder name you want
mkdir remote
Mount the remote file system locally, replacing linuxusername with the user account you want to log in as and xxx.xxx.xxx.xxx with its IP address or hostname.
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder, you can see the contents of the whole Linux filesystem and navigate it in a file manager just like your local filesystem. You are limited by user privileges, of course, so you can only write to the home folder of the remote user account.
Using Archive Manager I opened the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted it inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X Window based archive applications might work too.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager uses 7z to extract, but I don't know how, as 7z is not supposed to support spanned sets.
Also, if your SSH server is dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160GB archive, and the source filesystem was FAT32, so I was not able to combine the spanned set into one zip file, as FAT32 has a 4GB file size limit.

Git, Windows, Linux & NTFS: "index file open failed"

I created a git repo in Windows 7 on an NTFS partition, and when opening it in Linux (Ubuntu 12 x64, dual-boot setup) I get the index file open failed error. How can I figure out what's wrong? The partition is mounted read-write and I've never had any other problems. Does git store data in a different format on Windows vs. Linux, so that I need to do either a clone or some conversion? I'd really like to be able to work on the same repo in both OSs without cloning around...
Clarification: I also get cat: index: Input/output error
when running the command cat index in the .git dir, so it is an NTFS-related problem... but I've never had it before until using git in a cross-system way, and I've run other apps from NTFS partitions and copied files around...
The .git/index file is a binary file which describes the current workdir. Perhaps a git fsck is able to fix it up (move the one you have out of the way to make sure it isn't lost while you fool around, or make any experiments on a copy of the repository). You might also try to clone the repository locally; the clone might get a good copy of the file, which you could then copy over the broken one.
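A rough sketch of those steps, with the repository path as a placeholder (git reset here rebuilds the index from HEAD without touching the files in the working tree):
cd /path/to/repo
mv .git/index .git/index.broken   # keep the damaged index out of the way
git fsck --full                   # check the object database for corruption
git reset                         # recreate the index from HEAD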
Possibly permission problems? Back up what is relevant, defragment the drive, and run hardware checks (it might be a broken or failing disk!).
Either your Linux NTFS driver is broken, or you have filesystem corruption, or both. Reboot to Windows and run the disk checking utility, then see how things stand when it finishes.
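For example, the check from Windows would be (drive letter is an example):
chkdsk D: /f
From Linux, ntfs-3g also ships ntfsfix, which can repair some common inconsistencies and schedule a Windows check on the next boot (device name is a placeholder):
sudo ntfsfix /dev/sda5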

EC2 runs out of storage, which logs etc. can be deleted?

I am having a problem with an EC2 instance which uses nearly all of its 8GB of storage. I regularly delete log files from the server and files which get created by cronjobs in the user's folder (can I turn this off?), but each time there is less free space left even after deleting all of those files. So the EC2 instance is creating files somewhere else, but I don't know where.
Does anybody know where I can look for unused files automatically created by the Amazon Linux AMI or apache?
Thanks!
Look in /var/log; for Apache, the logs will likely be in /var/log/httpd. I also suggest that you look into logrotate.
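To see where the space is actually going before deleting anything, something like this helps (a generic sketch, not specific to the Amazon Linux AMI):
sudo du -xh / --max-depth=2 2>/dev/null | sort -h | tail -n 20   # largest directories on the root filesystem
sudo du -sh /var/log/*                                           # size of each log file and directory
A minimal logrotate rule for the Apache logs (the path assumes the stock httpd layout) might look like:
/var/log/httpd/*log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}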
If your instance runs a Bitnami stack (as the path below assumes), you can clear its Apache log files using this command
sudo rm -rf /opt/bitnami/apache2/logs/*

Linux to Windows copying network script

I need to improve my method, or even change it completely, for copying files on a private network from multiple Windows machines to a central Linux machine. How this works is that I run the script below as a cron job every 5 minutes to copy data from, say, 10 Windows machines, each with a shared folder, to the central Linux machine, where it gets collected each day. So in theory, at the end of the day the Linux machine should have all the data that has changed on the Windows machines.
#!/bin/sh
# File listing one "ip username" pair per line
HOSTLIST='/home/user/Documents/user.ip'

while read IPADDY USERNAME; do
    # Create the mount point and local destination if they don't exist yet
    mkdir -p "/mnt/$USERNAME"
    mkdir -p "/home/user/Documents/$USERNAME"
    # Mount the Windows share
    smbmount "//$IPADDY/$USERNAME" "/mnt/$USERNAME" -o username=usera,password=password,rw,uid=user
    # Copy only PDFs, text files and the "issues" folder
    rsync -zrv --progress --include='*.pdf' --include='*.txt' --include='issues' --exclude='*' "/mnt/$USERNAME/" "/home/user/Documents/$USERNAME/"
done < "$HOSTLIST"
The script works, but it doesn't seem to be the best method, as a lot of the time data is not copied across at all or not all of it is copied correctly.
Do you think this is the best approach, or can someone point me towards a better solution?
How about a git repository? Wouldn't that be easier? You could also easily track the changes.
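If the intermittent misses come from the SMB share silently failing to mount, one hedged tweak to the script above is to verify the mount before running rsync. This fragment (mountpoint is part of util-linux) would go inside the loop, right after the smbmount line:
# Skip this machine if the share did not actually get mounted
if ! mountpoint -q "/mnt/$USERNAME"; then
    echo "$(date): //$IPADDY/$USERNAME failed to mount, skipping" >&2
    continue
fi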
