I recently started working with a Linux server, and I am very new to this. My CUDA/C++ program solves a 2D differential equation and writes output every, say, 1000 time steps, which happens roughly every minute. Is it possible to automatically download the files to my PC once they are generated on the Linux server, or to save them directly to my PC? This would significantly speed up my work, since currently I have to wait for the program to finish all the calculations and then download the output manually. I also typically use 6 GPUs at the same time; they produce output in different specified folders on the Linux server (say, folders 0, 1, 2, 3, 4, 5).
You can use inotify.
On Debian or Ubuntu, install the package:
apt-get install inotify-tools
Create two scripts: the first watches the directory for new files, and the second copies each new file to your computer.
inotifywait_script.sh
#!/bin/bash
# Path to check :
DIR="./files"
while NEW_FILE=$(inotifywait -r -e create --format %w%f $DIR)
do
# Script executed when a new file is created:
./script_cp.sh "$NEW_FILE"
done
inotifywait options used:
-e : listen for the specified event(s) only (here, just the create event)
-r : watch all subdirectories of any directories passed as arguments
--format : %w => path, %f => filename
script_cp.sh
#!/bin/bash
echo "Copy file $1"
scp "$1" user@hostname:/path_to_save
You can use scp, rsync, or another mechanism to copy the files.
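For the multi-folder setup in the question, polling with rsync may be simpler than inotify, since rsync only transfers what is new. This is a sketch, not a drop-in script: "user@server" and both paths are assumptions you would replace with your own, and it assumes passwordless SSH keys are set up.

```shell
#!/bin/bash
# Sketch: mirror the six GPU output folders (0..5) from the server.
# "user@server" and both paths are placeholders -- adjust to your setup.
SRC="user@server:/home/user/simulation/output"
DEST="$HOME/results"

# Print the rsync invocations instead of running them; remove "echo"
# once the paths are correct, then wrap sync_once in a loop with
# "sleep 60" to poll once a minute.
sync_once() {
    local i
    for i in 0 1 2 3 4 5; do
        echo rsync -av --ignore-existing "$SRC/$i/" "$DEST/$i/"
    done
}

sync_once
```

--ignore-existing skips files that were already downloaded; drop it if your output files are appended to rather than written once.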
Related
I have a tool that converts COBOL files to C++.
When I run the command, it generates a .cpp file from the .cbl file but deletes it at run time.
It does not keep the file or save it in the designated folder; I can see that the .cpp file is created, but it is destroyed within seconds.
I'm using CentOS 7.
Can somebody tell me how to keep that .cpp file, or copy it at runtime to a different location?
You can use inotifywait to monitor the directory, where the CPP files are created. Whenever a CREATE event occurs, you can link the file into a backup directory.
#! /bin/bash
test -d backup || mkdir backup
inotifywait --monitor --event CREATE --format %f . |
while read -r file; do
ln "$file" backup/"$file"
done
I have written a bash script on my MacMini to execute anytime a file has completed downloading. After the file download is complete, the mac mounts my NAS, renames the file, and then copies the file from the mac to the NAS, deletes the file from the mac and then unmounts the NAS.
My issue is that sometimes the NAS takes a few seconds to mount. When that happens, I receive an error that the file could not be copied because the directory doesn't exist.
When the NAS mounts instantly (and the file size is small), the file copies, then the file is deleted and the NAS unmounts.
When the file size is large, the file is deleted while the copy is still in progress, and the copy stops.
What I'm looking for is: how do I make the script wait until the NAS is mounted, and then how do I make it wait again until the file copy is complete?
Thank you for any input. Below is my code.
#connect to NAS
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
#I WANT A WAIT COMMAND HERE SO THAT SCRIPT DOES NOT CONTINUE UNTIL NAS IS MOUNTED
#move to folder where files are downloaded
cd /Users/account/Downloads
#rename files and copy to server
for f in *; do
newName="${f/)*/)}";
mv "$f"/*.zip "$newName".zip;
cp "$newName".zip /Volumes/folder_for_files/"$newName".zip;
#I NEED SCRIPT TO WAIT HERE UNTIL FILE HAS COMPLETED ITS TRANSFER
rm -r "$f";
done
#unmount drive
diskutil unmount /Volumes/folder_for_files
I no longer have a Mac to try this on, but it seems open 'smb://...' is the only command here that does not wait for completion; it does the actual work in the background instead.
The best way to fix this would be to use something other than open to mount the NAS drive. According to this answer, the following should work, but without a Mac and a NAS I cannot test it.
# replace `open ...` with this
osascript <<< 'mount volume "smb://username:password@ip_address_of_NAS"'
If that does not work, use this workaround which manually waits until the NAS is mounted.
open 'smb://username:password@ip_address_of_NAS/folder_for_files'
while [ ! -d /Volumes/folder_for_files/ ]; do sleep 0.1; done
# rest of the script
You can use a loop that sleeps for, say, five seconds, then runs smbstatus and checks whether its output contains a string identifying your smb://username:password@ip_address_of_NAS/folder_for_files connection.
When it is found, start copying your files. You could also keep a counter so the loop gives up after a certain number of sleeps, and then check whether the connection was successful.
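A sketch of that bounded wait, testing for the mount point directory rather than parsing smbstatus output (the directory check is simpler and answers the same question); wait_for_mount is a made-up name, and the 5-second interval and 24-try limit are arbitrary defaults:

```shell
#!/bin/bash
# wait_for_mount polls until the mount point directory exists, giving up
# after max_tries attempts (defaults: 24 tries x 5 s = 2 minutes).
wait_for_mount() {
    local mountpoint="$1" max_tries="${2:-24}" tries=0
    until [ -d "$mountpoint" ]; do
        tries=$((tries + 1))
        if [ "$tries" -ge "$max_tries" ]; then
            echo "mount of $mountpoint timed out" >&2
            return 1
        fi
        sleep 5
    done
}

# Usage in the script from the question:
#   open 'smb://username:password@ip_address_of_NAS/folder_for_files'
#   wait_for_mount /Volumes/folder_for_files || exit 1
```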
I'm watching a folder using incrontab with the command in the incrontab -e editor:
/media/pi/VDRIVE IN_CLOSE_WRITE sudo mv $@/$# /media/pi/VDRIVE/ready/$#
The watched folder is receiving a file over the network from another machine. The file shows up OK and appears to trigger the incrontab job, presumably once the copy process has closed the file, but the mv command results in a 0-byte file with the correct name in the destination folder.
All run as root.
It seems that there is a bug in Samba on OS X which results in two events when writing to a shared folder on the network. This makes incrontab pretty much unworkable with OS X clients (OS 10.7 and up).
So when OS X writes a file to the Linux Samba share, there are two events, and the first one triggers the mv action before the file has actually finished writing. It's a bug in OS X's Samba implementation.
In the end I used inotify to write events to a log file (of which there are always two), then scanned the log for two instances of the event before performing the action.
Another strategy was to use lsof in a cron routine and simply ignore any files still open for writing.
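The lsof strategy could be sketched like this; move_closed_files is a hypothetical name, and cron would simply invoke it on the real directories every minute. lsof exits 0 while some process still holds the file open, which is exactly the case we want to skip.

```shell
#!/bin/bash
# Move files out of the watched directory only once no process still has
# them open for writing.
move_closed_files() {
    local watch_dir="$1" ready_dir="$2" f
    for f in "$watch_dir"/*; do
        [ -f "$f" ] || continue
        if lsof -- "$f" >/dev/null 2>&1; then
            continue            # still being written -- skip for now
        fi
        mv -- "$f" "$ready_dir/"
    done
}

# A cron job would run e.g.:
#   move_closed_files /media/pi/VDRIVE /media/pi/VDRIVE/ready
```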
I have an IP camera that automatically FTPs images every few seconds to a directory on my Ubuntu web server. I'd like to make a simple webcam page that references a static image and just refreshes every few seconds. The problem is that my IP camera's firmware automatically names every file with a date_time.jpg-style filename and does not have an option to overwrite the same file name over and over.
I'd like a script running on my Linux machine to automatically copy each new file that has been FTP'd into the directory into a different directory, renaming it in the process, and then delete the original.
Regards,
Glen
I made a quick script; you will need to uncomment the rm -f line to make it actually delete things :)
It currently prints the command it would have run, so you can test with higher confidence.
You also need to set the WORK_DIR and DEST_DIR variables near the top of the script.
#!/bin/bash
#########################
# configure vars
YYYYMMDD=$(date +%Y%m%d)
WORK_DIR=/Users/neil/linuxfn
DEST_DIR=/Users/neil/linuxfn/dest_dir
##########################
LATEST=$(ls -tr "$WORK_DIR/$YYYYMMDD"* 2>/dev/null | tail -1)
echo "rm -f $DEST_DIR/image.jpg ; mv $LATEST $DEST_DIR/image.jpg"
#rm -f $DEST_DIR/image.jpg ; mv $LATEST $DEST_DIR/image.jpg
This gives me the following output when I run it on my laptop:
mba1:linuxfn neil$ bash renamer.sh
rm -f /Users/neil/linuxfn/dest_dir/image.jpg ; mv /Users/neil/linuxfn/20150411-2229 /Users/neil/linuxfn/dest_dir/image.jpg
Inotify (http://en.wikipedia.org/wiki/Inotify) can be set up to do what you ask, but it would probably be better to use a simple web script (PHP, Python, Perl, etc.) to serve the latest file from the directory instead.
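If you do go the inotify route, a sketch could look like the following; handle_upload, WORK_DIR, and DEST_DIR are made-up names, and it assumes the camera closes each file exactly once when the upload finishes (the close_write event).

```shell
#!/bin/bash
# Sketch: keep DEST_DIR/image.jpg pointing at the newest camera upload.

# Replace the static image with a finished upload; the original
# date_time.jpg file is gone afterwards, as requested.
handle_upload() {
    local work_dir="$1" dest_dir="$2" file="$3"
    mv -- "$work_dir/$file" "$dest_dir/image.jpg"
}

# Start the watcher only when invoked with "watch", so the function can
# also be sourced and tested on its own.
if [ "${1:-}" = watch ]; then
    WORK_DIR=/var/ftp/camera      # where the camera uploads land
    DEST_DIR=/var/www/html        # where the webcam page reads image.jpg
    inotifywait -m -e close_write --format %f "$WORK_DIR" |
    while read -r file; do
        handle_upload "$WORK_DIR" "$DEST_DIR" "$file"
    done
fi
```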
I need to write a shell script to run as a cron task, or preferably on creation of a file in a certain folder.
I have an incoming and an outgoing folder (they will be used to log mail). There will be files created with codes as follows...
bmo-001-012-dfd-11 for outgoing and 012-dfd-003-11 for incoming. I need to filter the project/client code (012-dfd) and then place it in a folder in the specific project folder.
Project folders are located in /projects and follow the format 012-dfd. I need to create symbolic links inside the incoming or outgoing folders of the projects, that leads to the correct file in the general incoming and outgoing folders.
/incoming/012-dfd-003-11.pdf -> /projects/012-dfd/incoming/012-dfd-003-11.pdf
/outgoing/bmo-001-012-dfd-11.pdf -> /projects/012-dfd/outgoing/bmo-001-012-dfd-11.pdf
So my questions
How would I make my script run when a file is added to either incoming or outgoing folder
Additionally, is there any associated disadvantages with running upon file modification compared with running as cron task every 5 mins
How would I get the filename of recent (since script last run) files
How would I extract the code from the filename
How would I use the code to create a symlink in the desired folder
EDIT: What I ended up doing...
while inotifywait outgoing; do find -L . -type l -delete; ls outgoing | php -R '
if(
preg_match("/^\w{3}-\d{3}-(\d{3}-\w{3})-\d{2}(.+)$/", $argn, $m)
&& $m[1] && (file_exists("projects/$m[1]/outgoing/$argn") != TRUE)
){
`ln -s $(pwd)/outgoing/$argn projects/$m[1]/outgoing/$argn;`;
}
'; done;
This works quite well - cleaning up deleted symlinks also (with find -L . -type l -delete) but I would prefer to do it without the overhead of calling php. I just don't know bash well enough yet.
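One way to drop the PHP call is to reproduce the preg_match with bash's own [[ =~ ]] operator; link_outgoing below is a hypothetical name, and the character classes mirror \w and \d from the PHP pattern. Like the original, it assumes the project directories already exist.

```shell
#!/bin/bash
# File one outgoing name into projects/<code>/outgoing, or return 1 if
# the name does not match the naming scheme.
link_outgoing() {
    local name="$1" code
    [[ $name =~ ^[[:alnum:]_]{3}-[0-9]{3}-([0-9]{3}-[[:alnum:]_]{3})-[0-9]{2}.+$ ]] || return 1
    code="${BASH_REMATCH[1]}"
    if [ ! -e "projects/$code/outgoing/$name" ]; then
        ln -s "$PWD/outgoing/$name" "projects/$code/outgoing/$name"
    fi
}

# The watch loop from above then becomes:
#   while inotifywait outgoing; do
#       find -L . -type l -delete
#       for f in outgoing/*; do link_outgoing "$(basename "$f")"; done
#   done
```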
Some near-answers for your task breakdown:
On linux, use inotify, possibly through one of its command-line tools, or script language bindings.
See above
Assuming the project name can be extracted positionally from your examples (meaning not only does the project name follow a strict 7-character format, but so does what precedes it in the outgoing file name):
basename /incoming/012-dfd-003-11.pdf | cut -c 1-7
012-dfd
basename /outgoing/bmo-001-012-dfd-11.pdf | cut -c 9-15
012-dfd
mkdir -p /projects/$i/incoming/ creates directory /projects/012-dfd/incoming/ if i = 012-dfd,
ln -s /incoming/foo /projects/$i/incoming/foo creates a symbolic link at the latter path, pointing to the preexisting file /incoming/foo.
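Putting those pieces together, filing one incoming file might look like the sketch below; file_incoming is a made-up helper, and a root argument stands in for the absolute /projects prefix used in the question.

```shell
#!/bin/bash
# Link one incoming file into <root>/projects/<code>/incoming, where the
# code is the first 7 characters of the file name (positional extraction,
# as assumed above).
file_incoming() {
    local root="$1" path="$2" name code
    name=$(basename "$path")
    code=$(printf '%s' "$name" | cut -c 1-7)    # e.g. 012-dfd
    mkdir -p "$root/projects/$code/incoming"
    ln -s "$path" "$root/projects/$code/incoming/$name"
}
```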
How would I make my script run when a file is added to either incoming or outgoing folder
Additionally, is there any associated disadvantages with running upon file modification compared with running as cron task
every 5 mins
If a 5 minutes delay isn't an issue, I would go for the cron job (it's easier and -IMHO- more flexible)
How would I get the filename of recent (since script last run) files
If your script runs every 5 minutes, then all the files created between now and (now - 5 minutes) are new, so using the ls or find command you can list those files.
How would I extract the code from the filename
You can use the sed command
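For example, sed patterns covering both naming schemes might look like this; the exact character classes are an assumption based on the sample names in the question.

```shell
# Outgoing names: strip the leading prefix and the trailing sequence/year.
echo "bmo-001-012-dfd-11.pdf" | sed -E 's/^[a-z]{3}-[0-9]{3}-([0-9]{3}-[a-z]{3})-.*/\1/'
# Incoming names: the project code is simply the leading 7 characters.
echo "012-dfd-003-11.pdf" | sed -E 's/^([0-9]{3}-[a-z]{3})-.*/\1/'
```

Both commands print 012-dfd.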
How would I use the code to create a symlink in the desired folder
Once you have the desired file names, you can use the ln -s command to create the symbolic links.