Cronjob not ending when deleting files - linux

When my cronjob runs it doesn't appear to end. It's a simple job to delete some files every 2 days. The job won't end: it deletes the files, but the space is not released without a reboot. Now the machine has no free storage unless it is rebooted. Has anyone else experienced this, and if so, how did you fix/resolve it? Here is my cronjob:
0 23 */2 * * /usr/bin/find "/var/log/" -name "messages*" -delete

That means that the files are still in use by some other process. On Unix, when you delete a file, the inode and its space are not reclaimed as long as some process still has an open file descriptor on it.
You can easily find which processes are holding the files open using lsof /var/log

Deleting a file unlinks it from the file system's directory structure, but if the file is still open (in use by a running process) it remains accessible to that process and continues to occupy space on disk. Such processes may need to be restarted before the file's space is freed on the file system.
You can obtain the PIDs of the processes that are using the files: lsof | grep deleted .
You can check which files are open by a certain process using its PID: ls -l /proc/PID/fd
You can also check the locked files on your machine: lslocks (or cat /proc/locks for more detail).
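A minimal sketch tying these together (the +L1 flag is standard lsof for listing open files whose link count is below 1, i.e. deleted; the truncate-based cron line is an untested alternative to the original -delete job, not something from the question):
# list open-but-deleted files under /var/log (+L1 selects link count < 1)
lsof +L1 /var/log
# possible cron line: truncate instead of delete, so the blocks are freed
# immediately even while some daemon keeps the log file open
0 23 */2 * * /usr/bin/find /var/log/ -name "messages*" -exec /usr/bin/truncate -s 0 {} +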

Related

Wait Until Previous Command Completes

I have written a bash script on my Mac mini to execute any time a file has completed downloading. After the download is complete, the Mac mounts my NAS, renames the file, copies the file from the Mac to the NAS, deletes the file from the Mac, and then unmounts the NAS.
My issue is, sometimes the NAS takes a few seconds to mount. When that happens, I receive an error that the file could not be copied because the directory doesn't exist.
When the NAS mounts instantly (if the file size is small), the file copies, then the file is deleted and the NAS unmounts.
When the file size is large, the copying process stops when the file is deleted.
What I'm looking for is: how do I make the script "wait" until the NAS is mounted, and then how do I make the script again wait until the file copy is complete?
Thank you for any input. Below is my code.
#connect to NAS
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
#I WANT A WAIT COMMAND HERE SO THAT SCRIPT DOES NOT CONTINUE UNTIL NAS IS MOUNTED
#move to folder where files are downloaded
cd /Users/account/Downloads
#rename files and copy to server
for f in *; do
    newName="${f/)*/)}"
    mv "$f"/*.zip "$newName".zip
    cp "$newName".zip /Volumes/folder_for_files/"$newName".zip
    #I NEED SCRIPT TO WAIT HERE UNTIL FILE HAS COMPLETED ITS TRANSFER
    rm -r "$f"
done
#unmount drive
diskutil unmount /Volumes/folder_for_files
I no longer have a Mac to try this, but it seems open 'smb://...' is the only command here that does not wait for completion; it does the actual work in the background instead.
The best way to fix this would be to use something other than open to mount the NAS drive. According to this answer the following should work, but due to the lack of a Mac and NAS I cannot test it.
# replace `open ...` with this
osascript <<< 'mount volume "smb://username:password@ip_address_of_NAS"'
If that does not work, use this workaround which manually waits until the NAS is mounted.
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
while [ ! -d /Volumes/folder_for_files/ ]; do sleep 0.1; done
# rest of the script
You can use a loop that sleeps for five seconds, for example, and then runs smbstatus, checking whether the output contains any string that identifies your smb://username:password@ip_address_of_NAS/folder_for_files connection.
When this is found, start copying your files. You could also have a counter variable to stop after a certain number of sleep-and-check rounds, in case the connection never succeeds.
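A rough sketch of that polling loop (grepping smbstatus output for the share name folder_for_files and the 12-try limit are assumptions; adjust both to your setup):
#!/bin/bash
# poll smbstatus until the share shows up, giving up after 12 tries (~1 minute)
tries=0
until smbstatus 2>/dev/null | grep -q 'folder_for_files'; do
    tries=$((tries + 1))
    [ "$tries" -ge 12 ] && { echo "NAS never mounted" >&2; exit 1; }
    sleep 5
done
# connection found: start copying files here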

How to move a file to cron.d in Linux?

My my_cron file works when it's created directly in /etc/cron.d/:
sudo nano /etc/cron.d/my_cron
# Add content:
* * * * * username /path/to/python /path/to/file 2>/path/to/log
But it doesn't work when I copy/move it to the directory:
sudo cp ./my_cron /etc/cron.d/my_cron
ls -l /etc/cron.d outputs the same permissions both times: -rw-r--r--. The files are owned by root.
The only reason I can imagine at the moment is that I have to refresh/activate something after copying, which happens automatically on creation.
Tested on Ubuntu and Raspbian.
Any idea? Thanks!
Older cron daemons used to examine /etc/cron.d for updated content only when they saw that the last-modified timestamp of that directory, or of the /etc/crontab file, had changed since the last time cron scanned it. Recent cron daemons also examine the timestamps of the individual files in /etc/cron.d but maybe you're dealing with an old one here.
If you have an old cron and you copied a brand new file into /etc/cron.d, the directory's timestamp should change and cron should notice the new file.
However, if your cp was merely overwriting an existing file then that would not change the directory timestamp and cron would not pick up the new file content.
Editing a file in-place in /etc/cron.d would not necessarily update the directory timestamp, but some editors (certainly vi, unless you've configured it otherwise) will create temporary working files and perhaps a backup file in the directory where the file being edited lives. The creation and deletion of those other files will cause the directory timestamp to be updated, and that will cause cron to put the edited file into effect. This could explain why editing behaves differently for you than cp'ing does.
To force a timestamp to be updated you could do something like sudo touch /etc/crontab or create and immediately remove a scratch file (or a directory) in /etc/cron.d after you've cp'ed or rm'ed a file in there. Obviously touch is easier. If you want to go the create+delete route then mktemp would be a good tool to use for that, in order to avoid clobbering someone else's legitimate file.
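For example, a quick sketch of both approaches (mktemp -p is a GNU coreutils flag; the file names come from the question):
sudo cp ./my_cron /etc/cron.d/my_cron
sudo touch /etc/crontab                    # simplest: bump the crontab timestamp
# or create and immediately remove a scratch file to bump the directory timestamp
scratch=$(sudo mktemp -p /etc/cron.d) && sudo rm -f "$scratch"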
If you were really paranoid, you'd wait at least a second between making file changes and then doing whatever you choose to do to force a timestamp update. That should avoid the situation where a cron rescan, your file updates, and your touch or scratch create+delete could all happen within the granularity of the timestamp.
If you want to see what your cron is actually doing, you can sudo strace -p <pid-of-cron>. Mostly it sleeps for a minute at a time, but you'll see it stat some files and directories (including /etc/crontab and /etc/cron.d) each time it wakes up. And of course if it decides that it needs to run a job, you'll see that activity too.
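Something like this should attach strace to the daemon (assuming the process name is cron; on Red Hat systems it is typically crond):
sudo strace -p "$(pgrep -xo cron)"   # watch it stat /etc/crontab and /etc/cron.d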

Using incrontab mv file results in 0 byte file

I'm watching a folder using incrontab with the command in the incrontab -e editor:
/media/pi/VDRIVE IN_CLOSE_WRITE sudo mv $@/$# /media/pi/VDRIVE/ready/$#
The watched folder is receiving a file over the network from another machine. The file shows up OK and appears to trigger the incrontab job, presumably once the copy process has closed the file, but the mv command results in a 0-byte file with the correct name in the destination folder.
All run as root.
It seems that there is a bug in the SMB client on OS X which results in two events when writing to a shared folder on the network. This makes incrontab pretty unworkable with OS X computers (OS X 10.7 and up).
So when OS X writes a file to the Linux Samba share, there are two events, and the first one triggers the mv action before the file has actually finished writing. It's a bug in OS X's SMB implementation.
In the end I used inotify to write events to a log file (of which there are always two), then scanned the file for two instances of the event before performing the action.
Another strategy was to use lsof in a cron routine, which simply skips any files that are still open for writing.
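A minimal sketch of that lsof-based approach, run from cron (the paths are from the question; the script name and schedule are up to you):
#!/bin/bash
# move files that have finished writing; skip anything a process still holds open
for f in /media/pi/VDRIVE/*; do
    [ -f "$f" ] || continue
    # lsof exits with status 0 when some process has the file open
    if ! lsof -- "$f" >/dev/null 2>&1; then
        mv -- "$f" /media/pi/VDRIVE/ready/
    fi
done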

linux /tmp folder + how to know if files will deleted after reboot or after some time

I have a Red Hat Linux machine
and I'm not sure about the concept behind the /tmp directory.
How can I know whether the files under /tmp will be deleted after a reboot, or maybe after some time?
Which file/configuration on my Linux machine is responsible for that?
And is it possible to change the rules there?
Remark: my crontab is empty - there is no deletion job there.
This is specified in the Filesystem Hierarchy Standard and the Linux Standard Base.
/tmp/ is often tmpfs mounted, and on systems where it is not the case, the boot init scripts should (and usually do) clean it.
So files under /tmp/ do not survive a reboot. Put them elsewhere (perhaps /var/tmp/) if you want them to survive a reboot.
In the FHS §2.3:
The /tmp directory must be made available for programs that require temporary files.
Programs must not assume that any files or directories in /tmp are preserved between invocations of the program.
Rationale:
IEEE standard P1003.2 (POSIX, part 2) makes requirements that are similar to the above section.
Although data stored in /tmp may be deleted in a site-specific manner, it is recommended that files and directories located in /tmp be deleted whenever the system is booted.
So unless your systems are very badly misconfigured, you should presume that /tmp/ is cleaned at least at reboot time. BTW, some sysadmins set up a crontab entry to clean out old files (e.g. a weekly job deleting files older than 2 weeks). See also tmpfiles.d(5), TMPDIR, mkstemp(3), crontab(5), POSIX tmpfile & tmpnam.
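Such a crontab entry might look like this (purely illustrative; -atime +14 matches files not accessed for two weeks):
# weekly: delete regular files under /tmp not accessed in 14 days
0 4 * * 0 /usr/bin/find /tmp -xdev -type f -atime +14 -delete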
Just check the output of
mount
If you find that /tmp is of type tmpfs, then its contents will be gone after a reboot: tmpfs is an in-memory filesystem.
But never count on /tmp to persist.
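On a machine where /tmp is tmpfs-mounted, the relevant line of mount output looks something like this (the exact options will vary):
$ mount | grep ' /tmp '
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)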
The default setting that tells your system to clear /tmp at reboot is held in the /etc/default/rcS file (on Debian-based systems such as Ubuntu).
The value to look at is TMPTIME. The current value of TMPTIME=0 says to delete files at reboot regardless of the age of the file. Changing this value to a different (positive) number will change the number of days a file can survive in /tmp.
Code:
TMPTIME=7
This setting would allow files to stay in /tmp until they are a week old, and then delete them on the next reboot.
A negative number (TMPTIME=-1) tells the system to never delete anything in /tmp.
systemctl cat systemd-tmpfiles-clean.timer
# /lib/systemd/system/systemd-tmpfiles-clean.timer
# SPDX-License-Identifier: LGPL-2.1-or-later
#
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Daily Cleanup of Temporary Directories
Documentation=man:tmpfiles.d(5) man:systemd-tmpfiles(8)
ConditionPathExists=!/etc/initrd-release
[Timer]
OnBootSec=15min
OnUnitActiveSec=1d
The [Timer] section specifies when to trigger the corresponding service (systemd-tmpfiles-clean.service). In this case, the option OnBootSec specifies a monotonic timer that triggers the service 15 minutes after the system boots, while the option OnUnitActiveSec triggers the service 24 hours after the service was last activated (that is, the timer triggers the service once a day).
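The cleanup ages themselves come from tmpfiles.d configuration, which you can override; a hypothetical drop-in (see tmpfiles.d(5); the 30d age is just an example) would be:
# /etc/tmpfiles.d/tmp.conf -- overrides the shipped rule for /tmp
q /tmp 1777 root root 30d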

what happens if i rename a parent directory while file is being written by program

I have a question.
I'm running a program on a Linux machine. This program is writing output to the file 'output.txt' within the subfolder 'SUB' of the parent folder 'PARENT':
PARENT
|________ SUB
          |________ output.txt
I accidentally renamed PARENT while the output was being written. Namely, I ran the following command:
mv PARENT PARENT_NEW
So far my program hasn't crashed or anything. Does anyone know the repercussions of what I just did?
On Linux, as inherited from Unix, once a file on the local disk is open, the process has a handle to it. You may rename the parent directory, you may even delete the file. These operations don't trouble the process writing to the file as long as it does not close and reopen it.
The file is kept open by the program via a file descriptor, a small non-negative integer that the kernel uses to refer to the open file. Your action should have no effect.
On UNIX, the file will simply be present in the new location. Here is a simple experiment:
$ mkdir /tmp/test
$ cat > /tmp/test/abc.txt
hello
world
and again!
So while cat is still waiting for input, open a new terminal and rename the folder:
$ mv /tmp/test/ /tmp/test2
Now back in the earlier terminal (press Ctrl+D to complete the input to cat):
$ ls /tmp/test/
ls: cannot access /tmp/test/: No such file or directory
$ ls /tmp/test2/
abc.txt
$ cat /tmp/test2/abc.txt
hello
world
and again!
So basically, unless the file or directory is deleted completely, it will be present in the new location after the write is complete.
However, if process B deletes a file f while some other process A is still writing to it, the file f remains available to process A because A holds a reference to the inode. For the rest of the processes, including B, it will not be accessible by name. Any other process can still access file f only if it can obtain a reference to the inode via the file descriptors listed in /proc/<PID-of-A>/fd.
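For instance, a sketch of rescuing such a deleted-but-open file (the PID 1234 and fd number 3 are hypothetical; read the real ones off the ls -l output):
$ ls -l /proc/1234/fd              # deleted targets show as '-> /path/to/f (deleted)'
$ cp /proc/1234/fd/3 /tmp/recovered   # copy the still-open contents back out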
