Checking Current File System with Perl - Linux

I need my perl script to check the file system type of the computer it's running on. What is the easiest way to do this? (on Linux)

There is a Linux command, df -T, that reports the filesystem type of each mounted filesystem.
You can invoke it from your script and parse the output:
my $filesystem_info = `df -T`;  # capture the df -T output for parsing
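If you only need the type for one particular path, here is a minimal shell sketch of the parsing step (the path "." is a placeholder, and it assumes df does not wrap long device names onto a second line); the same pipeline can be run from Perl with backticks exactly as above:
# Second output line, second column of df -T is the filesystem type.
df -T . | awk 'NR == 2 { print $2 }'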

The only reliable way to do what you want is to (a) decide which mount you are talking about and (b) find its entry in /proc/mounts.
On Linux, /proc/mounts lists all mounted file systems. The format of each line is "device mount-point fs-type mount-options". It is human-readable; cat /proc/mounts and you should get the idea.
(Note that /etc/fstab only lists the file systems that get auto-mounted at boot time. That can be different from what is mounted at the time the script runs for all sorts of reasons, most notably automounters. /proc/mounts is what you want.)
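For example, a minimal shell sketch (the mount point "/" is a placeholder; from Perl you could just as well open and split /proc/mounts directly instead of shelling out):
# Print the fs-type (third field) for a given mount point, here "/".
# This may print more than one line if several filesystems are stacked on that mount point.
awk '$2 == "/" { print $3 }' /proc/mounts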

You can try parsing the /etc/fstab file to find it out.
Beware: there might be multiple filesystems in this file; you have to pick the one you want.

Related

How to list recently deleted files from a directory?

I'm not even sure if this is easily possible, but I would like to list the files that were recently deleted from a directory, recursively if possible.
I'm looking for a solution that does not require the creation of a temporary file containing a snapshot of the original directory structure against which to compare, because write access might not always be available. Edit: If it's possible to achieve the same result by storing the snapshot in a shell variable instead of a file, that would solve my problem.
Something like:
find /some/directory -type f -mmin -10 -deletedFilesOnly
Edit: OS: I'm using Ubuntu 14.04 LTS, but the command(s) would most likely be running in a variety of Linux boxes or Docker containers, most or all of which should be using ext4, and to which I would most likely not have access to make modifications.
You can use the debugfs utility.
debugfs is the ext2/ext3/ext4 file system debugger from e2fsprogs; it lets you examine a partition's inodes directly, including those of deleted files.
First, run debugfs /dev/hda13 in your terminal (replacing /dev/hda13 with your own disk/partition).
(NOTE: You can find the name of your disk by running df / in the terminal).
Once in debug mode, you can use the command lsdel to list inodes corresponding to deleted files.
When files are removed in Linux they are only unlinked, but their inodes (the on-disk records of where the file's data actually lives) are not removed.
To get the paths of these deleted files you can use debugfs -R "ncheck 320236" /dev/hda13, replacing the inode number and device with your own.
Inode Pathname
320236 /path/to/file
From here you can also inspect the contents of deleted files with cat. (NOTE: You can also recover from here if necessary).
So a few things:
1. You may have zero success if your partition is ext2; it works best with ext4.
2. df /
3. Fill in the device with the result from step 2; in my case:
sudo debugfs /dev/mapper/q4os--desktop--vg-root
4. lsdel
5. q (to exit out of debugfs)
6. sudo debugfs -R 'ncheck 528754' /dev/sda2 2>/dev/null (replace the number with one from step 4)
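If you want the whole thing as a single non-interactive pass, here is a hedged sketch (the device /dev/sda2 is a placeholder, and the awk filter relies on my assumption about lsdel's header and summary lines; run as root):
DEV=/dev/sda2   # placeholder: use the device reported by df / in step 2
# lsdel prints a header line and a trailing "N deleted inodes found." summary;
# keep only the inode numbers (first column) of the real entries, then resolve
# each inode to its old pathname with ncheck.
sudo debugfs -R "lsdel" "$DEV" 2>/dev/null |
  awk 'NR > 1 && NF > 4 { print $1 }' |
  while read -r inode; do
    sudo debugfs -R "ncheck $inode" "$DEV" 2>/dev/null
  done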
Thanks for your comments & answers guys. debugfs seems like an interesting solution to the initial requirements, but it is a bit overkill for the simple & light solution I was looking for; if I'm understanding correctly, it needs root access to the underlying block device, which I can't count on. Unfortunately, that won't really work for my use case; I must be able to provide a solution for existing, "basic" systems and directories.
As this seems virtually impossible to accomplish, I've been able to negotiate and relax the requirements down to listing the number of files that were recently deleted from a directory, recursively if possible.
This is the solution I ended up implementing:
1. A simple find command piped into wc to count the original number of files in the target directory (recursively). The result can then easily be stored in a shell or script variable, without requiring write access to the file system.
DEL_SCAN_ORIG_AMOUNT=$(find /some/directory -type f | wc -l)
2. We can then run the same command again later to get the updated number of files.
DEL_SCAN_NEW_AMOUNT=$(find /some/directory -type f | wc -l)
3. Then we can store the difference between the two in another variable and update the original amount.
DEL_SCAN_DEL_AMOUNT=$(($DEL_SCAN_ORIG_AMOUNT - $DEL_SCAN_NEW_AMOUNT));
DEL_SCAN_ORIG_AMOUNT=$DEL_SCAN_NEW_AMOUNT
4. We can then print a simple message if the number of files went down.
if [ $DEL_SCAN_DEL_AMOUNT -gt 0 ]; then echo "$DEL_SCAN_DEL_AMOUNT deleted files"; fi;
5. Return to step 2.
Unfortunately, this solution won't report anything if the same number of files has been created and deleted during an interval, but that's not a huge issue for my use case.
To circumvent this, I'd have to store the actual list of files instead of the count, but I haven't been able to make that work using shell variables. If anyone could figure that out, it would help me immensely as it would meet the initial requirements!
I'd also like to know if anyone has comments on either of the two approaches.
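For what it's worth, here is a hedged sketch of that list-in-a-variable idea (not taken from the answers above; it assumes bash, for process substitution, and filenames without embedded newlines):
# Take the initial snapshot of the file list in a variable; no temporary file needed.
DEL_SCAN_ORIG_LIST=$(find /some/directory -type f | sort)

# Later, take a new snapshot and report anything that disappeared from the old one.
DEL_SCAN_NEW_LIST=$(find /some/directory -type f | sort)
DEL_SCAN_DELETED=$(comm -23 <(printf '%s\n' "$DEL_SCAN_ORIG_LIST") \
                            <(printf '%s\n' "$DEL_SCAN_NEW_LIST"))
if [ -n "$DEL_SCAN_DELETED" ]; then
    echo "Deleted files:"
    echo "$DEL_SCAN_DELETED"
fi
DEL_SCAN_ORIG_LIST=$DEL_SCAN_NEW_LIST
This still misses files that were deleted and recreated with the same name between two scans, but it does detect deletions even when the total count stays the same.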
Try:
lsof -nP | grep -i deleted
(This lists files that have been deleted but are still held open by some process; it won't show files whose last open handle is already closed.)
history >> history.txt
Look for all rm statements.

How to take a backup of a Linux system with a shell script?

I am facing a problem taking a shell-script backup of my system (Linux). Please help me understand how to take a shell-script backup of my system.
Thanks in advance
I prefer rsync for backing up my CentOS boxes. It is very good at both file transfer and file synchronization and offers tons of options for compression and such. This is a modified one-line example from my BASH backup script that backs up all of my media onto a temporarily mounted external hard drive:
rsync -avP --stats /media/* /mnt/ntfs/Media 1>/log/stats.info 2>>log/bkup.err
You can substitute /media/* with the directories you want to back up; the simplest would be /*, which will back up everything.
You can also use the --exclude directive to exclude directories.
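For example, a hedged variant of the line above that skips any directory named "Movies" (the name is just an illustration):
# Same transfer as before, but any directory called "Movies" is left out.
rsync -avP --stats --exclude='Movies/' /media/* /mnt/ntfs/Media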
The other method is a simple tar archive of all the important system files, something like:
tar cvfj /root/sysBkup.bz2 --exclude=/root/sysBkup.bz2 /etc /var /root /sys
Then you can move that backup to a remote share with the next line of the bash script (see the sketch below). But I would recommend getting familiar with rsync; it is a very handy backup utility.
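A minimal sketch of that "next line" (not the original script's actual command; the user, host, and remote path are placeholders):
# Copy the archive created above to a remote share over SSH, then drop the
# local copy only if the transfer succeeded.
scp /root/sysBkup.bz2 backupuser@backuphost:/srv/backups/ \
    && rm /root/sysBkup.bz2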

How to find the path from where the current binary is running?

After searching around I still haven't found what I want.
I am working on an embedded board running Linux, and many users access it via telnet. Suppose each user copies some binary somewhere and executes it like ./binary. I can see the process running with a simple ps command, but I don't know from where it is running.
I found somewhere that I could use the which command, but as per my understanding (if I am not wrong), which only finds the path of a binary on $PATH, whether it is currently executing or not.
And what if multiple users copied the same binary to different paths?
I also looked at another solution using readlink, but only a limited set of BusyBox applets is supported on my target board, so readlink is not there.
Another solution would be something like
file /proc/<process id>/exe
but the file command is not present either, because the custom Linux on my board contains only limited functionality and binaries.
So is there any other solution?
Try ls -l /proc/<process id>/exe. The ls utility from GNU coreutils shows the link target with the -l option, but I don't have exact information about ls from BusyBox.
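As a hedged sketch (the binary name mybinary is a placeholder; pidof and ls are common BusyBox applets, but check which applets your build actually includes):
# For every running instance of the binary, show where its executable lives.
for pid in $(pidof mybinary); do
    ls -l /proc/"$pid"/exe
done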

Is there a way to wait until root filesystem is mounted?

I have statically linked code (not a module) in the kernel that should launch a kernel thread after the root file system is mounted. The problem is I don't know how to do this without modifying the prepare_namespace() kernel function. I thought it was possible to do via initcalls, but they're executed before the kernel takes care of the rootfs.
Does anyone know the best way to do this?
UPDATE [1]: @BenVoigit suggested the following solution in the comments:
Seems like you should open /proc/mounts and poll_wait on it. See the source for mounts_poll.
UPDATE [2]: I looked at the RSBAC patches; RSBAC modifies the prepare_namespace() function to perform some actions after the filesystem is mounted. It seems to be the easiest way.
Well, current Linux systems don't boot straight into the real root filesystem: modern bootloaders like GRUB first load a small filesystem into RAM (the initrd/initramfs), which runs before the real root is mounted.
To understand what is happening under the hood, you can unpack the initrd image located under /boot. For example, in Ubuntu:
mkdir test
cd test
zcat /boot/initrd.img-2.6.35-24-generic > image.cpio   # decompress the initramfs into a plain cpio archive
cpio -i < image.cpio                                   # unpack the archive into the current directory
vim init                                               # inspect the init script that runs before the real root is mounted
In the end, it's just a bunch of shell scripts - the simplicity is almost poetic.

Cygwin slow file open

My application uses fopen to open a lot of files. While on Linux opening and reading thousands of files doesn't even take a second, in Cygwin it takes more than 5 seconds.
I think it is because of the path conversion functions in the Cygwin DLLs. The open function is a bit faster. If I use -mno-cygwin it becomes very fast, but I can't use it.
Is there an easy way to make the Cygwin DLLs just open files, without any Linux-Windows conversion?
It depends on how the system was mounted in the Cygwin environment.
$ mount
C:/cygwin/bin on /usr/bin type ntfs (binary,auto)
C:/cygwin/lib on /usr/lib type ntfs (binary,auto)
C:/cygwin on / type ntfs (binary,auto)
C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
D: on /cygdrive/d type iso9660 (binary,posix=0,user,noumount,auto)
The mount option "binary" makes it so that CRLF <-> LF conversions are not performed on files read from the volume. This is the default.
Some things you can do to speed up a Cygwin prompt are the following:
Add the following lines to your ~/.bashrc:
# eliminate long Windows pathnames from the PATH
export PATH='/bin:/usr/bin:/usr/local/bin'
# check the hash before searching the PATH directories
shopt -s checkhash
# do not search the path when .-sourcing a file
shopt -u sourcepath
Disconnect your network drives.
Disable your antivirus, or otherwise exclude Cygwin's folders from its scans.
Thorough antivirus programs scan files for malware as they're opened by programs, and this means it'll be working overtime if your script is opening thousands of files.
Use the option --cache-file="$HOME/.config.cache" when running autotools configure scripts.
This will create a file that holds prerecorded configure discoveries, most of which are usable between software builds. (This is also a good idea when using Linux).
Since the shell seems to be the bottleneck of the Cygwin system, a huge configure script that relies on starting a large number of processes will take forever; the cache cuts down on the number of checks (and therefore processes) it needs to run.
Set up Cygwin's sshd and stop using Windows Command Prompt in favor of PuTTY.
PuTTY responds better to changing text on the screen, as it was built for the more mature command-line interface of *NIX.
