No space left on device - Ubuntu issues - linux

I am getting the error "OSError: [Errno 28] No space left on device" when writing files to a directory. I am downloading images programmatically from different sources and creating directories day-wise. It works fine on Windows, though.
While checking inodes I got this
I tried different solutions, like deleting junk files and the tmp folder, but still no success.
What could be the issue?

Inodes don't directly correlate with disk usage. Better use df -h to see whether your drive is actually full.
But as you get the error: Yep, it's full. Up to the brim.
You probably have some data somewhere that uses up all that precious storage. Check your home directory with du -hs * | sort -h. This can take a moment, but it will show you the size of all files and directories in the current working directory (and sort them too).
Other directories worth checking are /opt, /var and /tmp. Don't randomly delete stuff in /var though, if you don't know what you are doing.
I've listed /tmp here too, because you haven't listed it as a mount of type tmpfs. You should probably fix that.
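For reference, a rough sketch of the checks described above in one place; / is just an assumed mount point, so adjust it to whichever filesystem is actually full:
# How full is the filesystem, in both blocks and inodes?
df -h /
df -i /
# Largest entries directly under /, staying on this one filesystem; root is needed for system directories
sudo du -xhs /* 2>/dev/null | sort -h | tail -n 20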

Related

du linux command size greater than df

I am using a Digi embedded Linux module which has 8MB flash and 16MB RAM.
My partition table is as below:
So, I have 4.4MB for rootfs and 2MB for UserFS.
When I run 'df -ah', I get the following output.
However, when I run 'du -sh' on root, I see 4M in /lib and 3M in /usr, both under root, even though the root filesystem is only 4.4M.
I have checked for symbolic links and can confirm that the files are physically present in /lib and /usr.
I deleted some of the library files (netsnmp) under /lib, which amounted to close to 2M, but the available space on /dev/root only increased by ~390K (from 408K to 792K).
This suggests that the /lib/libnetsnmp* files were stored somewhere else. I am not sure where those files were saved. Any ideas?
Also, please note that the rootfs image size is 4M, and this is shown correctly by df -ah for the /dev/root filesystem.
JFFS2 has transparent compression built in if I recall correctly. Executables compress pretty well.
If a file is still in use, you can't really delete it: the space isn't freed until the last process closes it.
You can use lsof | grep deleted to find such files.
Probably it is due to hard links in the root filesystem. Each hard link shows up as a normal file, but all hard links to a file point to the same inode, so physically there is only one copy of the data on disk. You can find a good definition of soft links and hard links in this link.
EDIT: You can search for hard-links using this command (taken from this answer):
find . -samefile /path/to/file
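For a quick overview of how much apparent duplication comes from hard links, a small sketch assuming GNU find; / is a placeholder for the filesystem in question:
# List regular files with more than one hard link: link count, inode number, path
find / -xdev -type f -links +1 -printf '%n %i %p\n' | sort -n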

How to list recently deleted files from a directory?

I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the debugfs utility.
debugfs is an interactive file system debugger for ext2/ext3/ext4 file systems (part of e2fsprogs), which lets you examine and change the state of a file system.
First, run debugfs /dev/hda13 in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the name of your disk by running df / in the terminal).
Once in debug mode, you can use the command lsdel to list inodes corresponding with deleted files.
When files are removed in Linux, they are only unlinked; their inodes (the on-disk structures recording where the file's data actually lives) are not immediately removed.
To get the paths of these deleted files you can use debugfs -R "ncheck 320236" /dev/hda13, replacing the inode number (and device) with your own.
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover from here if necessary).
Great post about this here.
So a few things:
1. You may have zero success if your partition is ext2; it works best with ext4.
2. df / (to find the device backing the root filesystem)
3. Fill in the device name with the result from #2, in my case: sudo debugfs /dev/mapper/q4os--desktop--vg-root
4. lsdel
5. q (to exit out of debugfs)
6. sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null (replace the inode number with one from step #4)
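Pulled together, a rough non-interactive sketch of the same steps; the device /dev/sda2 and inode 528754 are placeholders taken from the steps above, and everything needs root:
# Find the device backing the directory's filesystem
df /some/directory
# List inodes of deleted files on that device
sudo debugfs -R 'lsdel' /dev/sda2
# Map an inode from the lsdel output back to a pathname
sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null
# Optionally dump the deleted inode's contents to a local file
sudo debugfs -R 'dump <528754> ./recovered_file' /dev/sda2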
Thanks for your comments & answers guys. debugfs seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple & light solution I was looking for; if I'm understanding correctly, it needs root access to the underlying block device and an ext-family filesystem, which I can't rely on having. Unfortunately, that won't really work for my use-case; I must be able to provide a solution for existing, "basic" systems and directories.
As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
1. A simple find command piped into wc to count the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
2. We can then run the same command again later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
3. Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$((DEL_SCAN_ORIG_AMOUNT - DEL_SCAN_NEW_AMOUNT))
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
4. We can then print a simple message if the number of files went down.
if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi
5. Return to step 2.
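For readability, the steps above could also be folded into one polling loop; just a sketch, with the 60-second interval and /some/directory as arbitrary placeholders:
#!/bin/sh
# Count the files once, then re-count periodically and report any drop in the count.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
while true; do
    sleep 60
    DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
    DEL_SCAN_DEL_AMOUNT=$((DEL_SCAN_ORIG_AMOUNT - DEL_SCAN_NEW_AMOUNT))
    if [ "$DEL_SCAN_DEL_AMOUNT" -gt 0 ]; then
        echo "$DEL_SCAN_DEL_AMOUNT deleted files"
    fi
    DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
done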
Unfortunately, this solution won't report anything if the same number of files is created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of the amount, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely as it would meet the initial requirements!
I'd also like to know if anyone has comments on either of the two approaches.
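On the file-list variant: a rough sketch of how it might work with shell variables and comm, assuming bash (for process substitution) and filenames without embedded newlines; untested against the original setup:
# Snapshot the sorted file list into a variable (no temporary file required)
DEL_SCAN_ORIG_LIST=$(find /some/directory -type f | sort)
# ... some time later, take a new snapshot ...
DEL_SCAN_NEW_LIST=$(find /some/directory -type f | sort)
# Paths present in the old snapshot but missing from the new one were deleted
comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") <(printf '%s\n' "$DEL_SCAN_NEW_LIST")
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST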
Try:
lsof -nP | grep -i deleted
history >> history.txt
Look for all rm statements.

zip command skip errors

zip -r file.zip folder/
This is the typical command I use to zip a directory. However, it is running on an active site, so images are constantly deleted and updated. That leads to the command failing because a file that existed when the process started is gone by the time zip actually tries to compress it (at least from what I can see).
I have no option to stop the files from being edited in this case, so my only hope is to just skip them; the number of images being edited is insignificant compared to the sheer size of the directory. 2-3 files changing out of 100,000 is nothing, but the error stops the compression altogether.
I have tried to find a way around this, but have had no luck. I could just be looking in the wrong direction, but I can't believe this is impossible.
Here is an example error:
zip I/O error: No such file or directory
zip error: Input file read failure (was zipping uploads/2010/03/file.jpg)
Is there some way to use the zip command or something similar to zip a folder, but if it runs into an error when it hits a file, it just skips it?
tar is always a good option for compression on Linux. Beware that zip may also have file size limit issues.
tar vcfz file.tar.gz folder
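If you go the tar route, GNU tar also has switches that make it more tolerant of files vanishing or changing mid-archive; treat this as a sketch and check your tar version's man page:
# Keep archiving when individual files disappear or become unreadable,
# and silence the corresponding warnings
tar --ignore-failed-read --warning=no-file-removed --warning=no-file-changed -czvf file.tar.gz folder/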

Add files to VFAT image without mounting

I want to create a new VFAT image and add a few files to it.
# Create file of 1MB size:
dd if=/dev/zero of=my-image.fat count=1 bs=1M
# Format file as VFAT:
mkfs.vfat ./my-image.fat
Now I want to add the files ./abc, ./def and ./ghi to the image.
How do I do that without mount -o loop or fusermount?
I only want to write to a new, empty, pristine VFAT image.
I don't need deleting, appending, or any "complicated" operations.
I tried 7z -a because 7zip can read VFAT images, but it does not know how to write to them.
I want to do the exact same thing as part of an image build for an embedded system. It's really annoying that the entire build, which takes ~3hrs, could be completely unattended except for the final steps which required a password in order to mount a VFAT image. Fortunately, I found a set of tools which solve the problem.
You want mcopy provided by GNU mtools.
Mtools is a collection of utilities to access MS-DOS disks from GNU and Unix without mounting them.
It also supports disk images such as VFAT image files.
As an example, the following command will copy the file hello.txt from your current directory into the subdirectory subdir of the VFAT file system in ~/images/fat_file.img:
mcopy -i ~/images/fat_file.img hello.txt ::subdir/hello.txt
There are more useful inclusions in mtools, such as mdir and mtype which are great for inspecting your image file without having to mount it.
mdir -i ~/images/fat_file.img ::
mdir -i ~/images/fat_file.img ::subdir
mtype -i ~/images/fat_file.img ::subdir/hello.txt
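Applied to the files from the original question, a small sketch; mtools is assumed to be installed, and note that mcopy does not create directories, so a subdirectory would first need mmd -i ./my-image.fat ::subdir:
# Create and format the image as in the question
dd if=/dev/zero of=my-image.fat count=1 bs=1M
mkfs.vfat ./my-image.fat
# Copy the three files into the root of the FAT image, then list it to verify
mcopy -i ./my-image.fat ./abc ./def ./ghi ::
mdir -i ./my-image.fat ::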
What you want is basically impossible. You can't just "stuff" some file data onto the end of a disk image and have those files magically "appear" within the image. Feel free to stuff in the data, but there's more to a filesystem than just the data. You have to EXACTLY replicate the metadata operations that the file system handles for you, e.g. updating the FAT tables.
In other words, you'd have to build the ENTIRE FAT filesystem handling code in your own code. Which is utterly ludicrous. Just mount the image, use normal file operations on that mounted file system, then dismount it again. Boom, done.

tar a folder into multiple files over SSH

Here is the thing:
I have a server with 85 GB of total disk space, and right now I have a folder of about 50 GB containing over 60,000 files.
Now I want to download these files to my localhost, and in order to do that I need to tar the folder, but I can't tar the whole folder because of the disk space limitation.
So I'm looking for a way to archive the folder into two 25 GB tar files, like part1.tar and part2.tar, but when the first part is done it should pause and ask for something (the next part's name, permission, anything) so I can transfer the first part to another server and then continue archiving to part2. Alternatively, a way to tar half of the folder, like the first 30,000 files, and then tar the rest.
Any idea? Thanks in advance
One of the earliest applications of rsync was to implement mirroring or backup for multiple Unix clients to a central Unix server using rsync/ssh and standard Unix accounts.
I use rsync to move compressed (and uncompressed) files between servers.
I think the command should be something like this
rsync -av host::src /dest
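For pulling over plain SSH (rather than from an rsync daemon), the invocation would look more like this sketch; user, host, and both paths are placeholders:
# Pull the remote folder to the local machine over SSH, resuming partial transfers
rsync -avz --partial user@host:/path/to/folder/ /local/dest/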
The rsync solution was good enough, but I found the solution to the main question:
tar -c -M --tape-length=30000000 --file=filename.tar foldername
After reaching ~29GB you will need to change the tape (in my case, transferring the first part away and removing it) and hit Enter to continue. Additionally, it is possible to give the next part a name:
Prepare volume #2 for `filename.tar' and hit return:
n filename2.tar
Because it is going to take time, I suggest using a screen session over SSH:
http://thelinuxnoob.com/linux/screen-in-ssh/
