Using incrontab mv file results in 0 byte file - linux

I'm watching a folder using incrontab with the command in the incrontab -e editor:
/media/pi/VDRIVE IN_CLOSE_WRITE sudo mv $@/$# /media/pi/VDRIVE/ready/$#
The watched folder is receiving a file over the network from another machine. The file shows up OK and appears to trigger the incrontab job, presumably once the copy process has closed the file, but the mv command results in a 0-byte file with the correct name in the destination folder.
All run as root.

It seems there is a bug in Samba on OS X that produces two events when writing to a shared folder on the network. This makes incrontab pretty unworkable when working with OS X computers (OS X 10.7 and up).
So when OS X writes a file to the Linux Samba share, there are two events, and the first one triggers the mv action before the file has actually finished writing. It's a bug in OS X's Samba implementation.
In the end I used inotify to write events to a log file (of which there are always two per file), then scanned the log for two instances of the event before performing the action.
Another strategy was to use lsof in a cron routine and simply skip any files still open for writing.
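A minimal sketch of the log-scanning approach. The log path, function name, and the producer command in the comment are illustrative, not from the original setup; the assumption is that an inotifywait process appends one file name per line to the log.

```shell
#!/bin/bash
# Assumed producer, running separately:
#   inotifywait -m -e close_write --format '%f' /media/pi/VDRIVE >> "$EVENT_LOG"
# Each OS X/Samba write logs the same file name twice; only act on the second.
EVENT_LOG=${EVENT_LOG:-/tmp/vdrive-events.log}

# Succeed once the given file name appears at least twice in the event log.
second_event_seen() {
    [ "$(grep -Fxc "$1" "$EVENT_LOG")" -ge 2 ]
}
```

The move would then be gated on it, e.g. `second_event_seen "somefile.bin" && mv /media/pi/VDRIVE/somefile.bin /media/pi/VDRIVE/ready/`.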

Related

Wait Until Previous Command Completes

I have written a bash script on my MacMini to execute anytime a file has completed downloading. After the file download is complete, the mac mounts my NAS, renames the file, and then copies the file from the mac to the NAS, deletes the file from the mac and then unmounts the NAS.
My issue is that sometimes the NAS takes a few seconds to mount. When that happens, I receive an error that the file could not be copied because the directory doesn't exist.
When the NAS mounts instantly, (if the file size is small), the file copies and then the file deletes and the NAS unmounts.
When the file size is large, the copying process stops when the file is deleted.
What I’m looking for is, how do I make the script “wait” until the NAS is mounted, and then how do I make the script again wait until the file copying is complete?
Thank you for any input. Below is my code.
#connect to NAS
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
#I WANT A WAIT COMMAND HERE SO THAT SCRIPT DOES NOT CONTINUE UNTIL NAS IS MOUNTED
#move to folder where files are downloaded
cd /Users/account/Downloads
#rename files and copy to server
for f in *; do
newName="${f/)*/)}";
mv "$f"/*.zip "$newName".zip;
cp "$newName".zip /Volumes/folder_for_files/"$newName".zip;
#I NEED SCRIPT TO WAIT HERE UNTIL FILE HAS COMPLETED ITS TRANSFER
rm -r "$f";
done
#unmount drive
diskutil unmount /Volumes/folder_for_files
I no longer have a Mac to try this, but it seems open 'smb://...' is the only command here that does not wait for completion; it does the actual work in the background instead.
The best way to fix this would be to use something other than open to mount the NAS drive. According to this answer the following should work, but lacking a Mac and a NAS I cannot test it.
# replace `open ...` with this
osascript <<< 'mount volume "smb://username:password@ip_address_of_NAS"'
If that does not work, use this workaround which manually waits until the NAS is mounted.
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
while [ ! -d /Volumes/folder_for_files/ ]; do sleep 0.1; done
# rest of the script
You can use a loop that sleeps for, say, five seconds, then runs smbstatus and checks whether its output contains a string identifying your smb://username:password@ip_address_of_NAS/folder_for_files connection.
When that string is found, start copying your files. You could also keep a counter so the loop gives up after a certain number of sleep-and-check attempts if the connection never succeeds.
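The polling idea can be sketched as a small helper. The function name and parameters are my own; for simplicity it checks for the mount-point directory rather than grepping smbstatus output, which amounts to the same readiness test.

```shell
#!/bin/bash
# wait_for_mount DIR TRIES INTERVAL
# Poll until DIR exists; give up after TRIES attempts, sleeping
# INTERVAL seconds between checks. Returns 0 on success, 1 on timeout.
wait_for_mount() {
    dir=$1
    tries=$2
    interval=$3
    i=0
    while [ ! -d "$dir" ]; do
        i=$((i + 1))
        [ "$i" -ge "$tries" ] && return 1   # gave up
        sleep "$interval"
    done
    return 0
}
```

Usage in the script above would be, for example, `wait_for_mount /Volumes/folder_for_files 20 0.5 || exit 1` right after the open command.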

Download file from linux server once it is created

I recently started to work with a Linux server; I am very new to this. My CUDA/C++ program solves a 2D differential equation and writes output every, say, 1000 time steps, which happens roughly every minute. Is it possible to automatically download the files to my PC once they are generated on the Linux server, or to save them directly to my PC? This would significantly accelerate my work, since right now I have to wait for my program to finish all the calculations and then download everything manually. I also typically use 6 GPUs at the same time; they produce output in different specified folders on the Linux server (say, folders 0, 1, 2, 3, 4, 5).
You can use inotify
On Debian or Ubuntu, install the package:
apt-get install inotify-tools
Create two scripts: the first watches for new files in the directory, the second copies each file to your computer.
inotifywait_script.sh
#!/bin/bash
# Path to watch:
DIR="./files"
while NEW_FILE=$(inotifywait -r -e create --format '%w%f' "$DIR")
do
    # Script executed when a new file is created:
    ./script_cp.sh "$NEW_FILE"
done
inotifywait options used:
-e : listen for the specified event(s) only (here, just the create event)
-r : watch all subdirectories of the directories passed as arguments
--format : %w => path, %f => file name
script_cp.sh
#!/bin/bash
echo "Copy file $1"
scp "$1" user@hostname:/path_to_save
You can use scp, rsync, or another tool to copy the files.

mv command moves file but reports error: cannot stat no such file or directory

I am hoping that a more experienced set of eyes will find something obvious that I am missing or will be able to help me work around the errors that mv and rsync are producing. Up for the challenge?
Basic idea:
I have a bash script in which I am automating the move of files from one directory to another.
The problem:
When I run the script, periodically I get the following error from the mv command:
mv: cannot stat `/shares/directory with spaces/test file.txt': No such file or directory. The exit code from the mv command is 1. Even more odd, the file move actually succeeds sometimes.
In addition, I have a branch of logic in the script that will alternately use rsync to move/copy specific files (from the same local file system source and destination as the mv command mentioned above). I get a similar error related to the stat() system call:
rsync: link_stat "/shares/directory with spaces/test file.txt" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
This error does not always manifest itself when the script is run. Sometimes it completes the file move without complaint, while other times it will return the error consistently when the script is run successive times.
There is one additional ingredient you should be aware of (and I am growing to suspect it as a key ingredient in my grief): the directory /shares/ is monitored and mirrored by an installation of Dropbox. At this point, I am unable to determine if dropboxd is somehow locking the file, or the like, such that it cannot be stat-ed. To be clear, the files are eventually freed from this state without further intervention and are mv-able.
The code:
mv -v --no-clobber "${SOURCEPATH}${item}" "${DESTINATIONPATH}${item}"
More info:
The following might, or might not, be relevant:
mount indicates the filesystem is ext4
Presumably, ownership and permissions shouldn't be an issue as the script is being run by root. Especially if the file system is not fuse-based.
The base "directory" in the path (e.g. /shares/) is a symlink to another directory on the same file system.
The flavor of Linux is Debian.
Troubleshooting:
In trying to eliminate any issues with the variable expansion or their contents, I tried hardwiring the bash script like such:
mv -v --no-clobber "/shares/directory with spaces/test file.txt" "/new destination/directory with spaces/test file.txt" after verifying via ls -al that "test file.txt" existed. For reference the permissions were: -rw-r--r--
Unfortunately, this too results in the same error.
Other possible issues I could think of and what I have done to try to rule them out:
>> possible issue: slow HDD (or drive is in low power mode) or external USB drive
>> findings: The drives are all local SATA disks set to not park heads. In addition, even when forcing a consistent read from the file system, the same error happens
>> possible issue: non-Linux, NFS or fuse-based file system
>> findings: nope, source and destination are on the same local file system and mount indicates the file system is ext4
>> possible issue: white space or other unprintable chars in the file path
>> findings: verified that the source and destination paths were properly wrapped in quotes
>> possible issue: continuation issues after escaped newline (space after \ in wrapped command)
>> findings: made sure the command was all on one line, still the same error
>> possible issue: globbing (use of * in specifying the files to move)
>> findings: nope, each file is specified directly by path and name
>> possible issue: path confusion from the use of local path
>> findings: nope, file paths are fully qualified starting from /
>> possible issue: the files are not actually in the path specified
>> findings: nope, verified the file existed right prior to executing the script via ls -al
>> possible issue: somehow the --no-clobber of mv was causing issues
>> findings: nope, tried it without, same error
>> possible issue: only files created via Dropbox sync to the file system are problematic
>> findings: nope, created a local file directly via touch new-local-file.txt and it too produced the same stat() error
My analysis:
The fact that mv and rsync produce similar stat() errors leads me to believe:
there is some systemic underlying boundary case (e.g. file permissions/ownership or file busy) that is not accounted for in the bash script; or
the same bug is plaguing me in both the mv and the rsync scenarios.
Desired outcomes:
1. The root cause of the intermittent errors can be identified.
2. The root cause can be resolved or worked around.
3. The bash script can be improved to gracefully handle when the error occurs.
So, with a lot more troubleshooting, I found an errant rsync statement some 200 lines earlier in the script that was conditionally executed (hence the seemingly inconsistent behavior). That rsync --archive ... statement was being passed /shares/ as its source directory, so it affected the /shares/directory with spaces/ subdirectory. That subdirectory was the ${SOURCEPATH} of the troubling mv command mentioned in my post above.
Ultimately, it was a missing --dry-run flag on the rsync --archive ... statement that was causing the trampling of the files the script later expected to pass to mv.
Thanks for all who took the time to read my post. Though I am bummed to have spent my and your time on what turned out to be a bug in my script, it is reassuring to know that:
- computers are not irrational
- I am not insane
- there is not some nefarious, deep-rooted bug in the Linux file system
For those that stumble upon this post in the future because you are experiencing an error of cannot stat, please read my troubleshooting notes above. Much research went into that list. One of those might be your issue. If not, keep debugging, there is an explanation. Good luck!
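On the third desired outcome above (handling the error gracefully), one option is a retry wrapper around the mv call from the post. This is a sketch under my own assumptions; the function name and retry counts are illustrative, not from the original script.

```shell
#!/bin/bash
# mv_retry SRC DST
# Retry a failing mv a few times before giving up, in case the file is
# briefly busy (e.g. held by a sync daemon such as dropboxd).
mv_retry() {
    src=$1
    dst=$2
    tries=0
    until mv -v --no-clobber "$src" "$dst"; do
        tries=$((tries + 1))
        [ "$tries" -ge 5 ] && return 1   # give up after 5 attempts
        sleep 1
    done
}
```

The script can then branch on the wrapper's exit status, e.g. `mv_retry "${SOURCEPATH}${item}" "${DESTINATIONPATH}${item}" || log_failure "$item"` (where log_failure is whatever error handler the script provides).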

Create a file in a filesystem

I've been requested to run a code on a file in a file system I had to create.
(I created the fs using mkfs and then mounted it on another directory: /home/may/new_place; the original fs appears on my desktop as an 8.6 GB filesystem.)
My question is, can you even create a file in a filesystem? I can't even transfer a file into it, so can't execute my code.
I'm really new to this.. thank you all
(P.S. I'm using the Xubuntu Linux OS)
Are you able to touch any files (with the touch command)? Also try opening a sample file, writing echo $HOSTNAME into it, saving it, running chmod u+x on it, and then executing it, to see whether you can run files there.
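The suggested checks, written out as a runnable sketch. MNT defaults to a throwaway directory here so the snippet runs anywhere; point it at the real mount, /home/may/new_place, on your machine (note that the root of a freshly made filesystem is owned by root, so a chown may be needed first).

```shell
#!/bin/bash
# Smoke-test that a mounted filesystem accepts new files.
MNT=${MNT:-$(mktemp -d)}   # replace with /home/may/new_place

touch "$MNT/hello.txt"                        # can we create a file?
printf 'echo "$HOSTNAME"\n' > "$MNT/test.sh"  # can we write a script?
chmod u+x "$MNT/test.sh"                      # can we mark it executable?
"$MNT/test.sh"                                # can we run it?
```

If touch fails with "Permission denied", the problem is ownership/permissions on the mount point rather than the filesystem itself.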

sftp file age: reading files transferred from sftp that aren't complete yet

I have a linux server that receives data files via sftp. These files contain data that is immediately imported into an application for use. The directory which the files are sent to is constantly read by another process looking for the new files to process.
The problem I am having is that the files are getting read before they are completely transferred. Is there a way to hide the files until the transfer has finished?
One thought I had is by leveraging the .filepart concept that many sftp clients use to rename files before they are complete. I don't have control of the clients though, so is there a way to do this on the server side?
Or is there another way to do this by permissions or such?
We solved a similar problem by creating a staging directory on the same filesystem as the directory the clients read from, and using inotifywait.
You sftp to the staging directory and have inotifywait watch that staging directory.
Once inotifywait sees the close_write event for a received file, you simply mv the file to the directory the client reads from.
#!/bin/bash
inotifywait -m -e close_write --format '%f' /path/to/tmp | while read -r newfile
do
    mv /path/to/tmp/"$newfile" ~/real
done

Resources