Copy only new files to remote host after write is finished - cron

I have a Java application which is writing images to a directory.
I want to copy these images with an external utility (not from the Java application directly).
I can use scp, rsync, or something else (I'm not sure which to choose) to copy at, say, a 5-second interval (most likely via cron).
But the main problem is knowing when a write has finished.
For example, if a write is still in progress when the scheduled copy fires, the copied image will be corrupted.
So which utility should I go for, and how can I make sure the copied image is intact?
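
One common workaround (a sketch of my own, not from the question; the directory, file pattern, and remote target are all hypothetical) is to copy only files whose modification time is at least a minute old, on the assumption that the writer has finished with them by then, and to remove each file locally after transfer so the next run only picks up new ones:

#!/bin/sh
# Copy images that have not been touched for at least a minute,
# then delete the local originals so they are not copied twice.
cd /var/images || exit 1
find . -type f -name '*.jpg' -mmin +1 -print0 |
  rsync -av --from0 --files-from=- --remove-source-files . user@remote:/srv/images/

If deleting the originals is unacceptable, the more robust fix is to have the writer create each image under a temporary name and rename it into place when finished; a rename on the same filesystem is atomic, so the copier never sees a half-written file.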

Related

How to convert an EXE file for Windows CE without Active Sync

I'm able to compile and use ActiveSync 4.5 to copy and then run application files on a Windows CE based device. Whenever I copy a file to the device, ActiveSync says it will convert the file.
This process of manually uploading the files with ActiveSync is slow, so I'm now sending the files directly from my application through a simple TCP file receiver. The file is transferred fine (CRC checked).
The problem is that the file does not run, failing with a generic message that the file cannot be opened. My first and only guess is that this is related to the "conversion" ActiveSync performs.
I could not find any documentation on what this process is or how it works. Is there any place I can find more information about this conversion? Or is there a tool that can pre-convert the file so I can transfer it myself?
My first idea for pre-converting was to copy the file to the device and then copy it back, but when copying files out of Windows CE, ActiveSync converts them back to PC format.

Run Buffer(.exe) without writing files in Nodejs

Suppose I have a console app, written in C, that prints "Hello world!"
The program is called hello.exe.
I upload hello.exe to a static server.
Now I can download the file by typing the following address into Chrome:
http://localhost:8080/hello.exe
Or I can fetch it as a Blob object with the http module in Node.js.
Is there a way to run this Blob object right away, without writing a file, and get the string Hello world! back?
Do I need to create the file, run it, and erase it right away?
What I want is for the file to run without remaining on my PC.
I'm not aware of any way to run an .exe file without first putting it on disk. You would essentially need to write your own exe loader that works from memory instead of disk, and that would be no small effort.
Keep in mind that a client that automatically runs any executable it gets from a URL like http://somedomain.com/hello.exe, without any user intervention, would be a very dangerous client: rogue web servers could send it arbitrary executables that do all sorts of harm (viruses, ransomware, etc.).
Do I need to create and run the file and erase it right away?
Yes, erase it after the program is done running.
What I want is for the file to run without remaining on my PC.
You will just have to clean it up at some point after it has run. If you have a programmatic client, it should be no big deal to put the file in an application-level temporary directory that your app regularly cleans up. If this is from a browser, then the user controls where the file goes on disk and when it gets deleted - you can't manage that yourself from within a webpage.
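
For illustration, the "download to a temporary path, run, delete" pattern the answer describes looks like this as a shell sketch (the asker would do the equivalent from Node; the URL comes from the question, everything else is hypothetical, and the downloaded program is assumed to be runnable on the local system):

#!/bin/sh
# Fetch the binary to a throwaway path, execute it, then delete it.
tmp=$(mktemp) || exit 1
curl -fsS -o "$tmp" http://localhost:8080/hello.exe &&
  chmod +x "$tmp" &&
  "$tmp"
rm -f "$tmp"   # clean up whether or not the run succeeded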
Or I can get a Blob object using the http method in Nodejs.
You can download a binary. Not sure exactly what you're asking here.

How to monitor a directory for file changes without using inotifywait?

I develop inside a VM, and my host machine is where my IDE runs. I have discovered that inotifywait does not work with shared folders: I am sharing a local folder with my Linux guest using VirtualBox.
Basically, I have a simple bash script which needs to watch a directory and wait for any file changes. inotifywait would be the best option, but I cannot get it to work with my shared folder.
Is there another option for my problem?
Depending on the sizes of the files and the nature of the changes you could:
create a checksum (MD5, CRC, SHA-256) of the files and watch for changes, or
check the sizes of the files and watch for changes (see the polling sketch below).
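
A minimal polling sketch of the checksum idea (my own illustration, not from the answer; the watched path and interval are hypothetical):

#!/bin/sh
# Poll the shared folder every 5 seconds and report whenever the
# combined checksum of the tree changes.
DIR=/mnt/shared
last=
while true; do
  # Hash every file's content, then hash the sorted list, so the
  # result changes when any file is added, removed, or edited.
  cur=$(find "$DIR" -type f -exec md5sum {} + | sort | md5sum)
  if [ "$cur" != "$last" ]; then
    echo "change detected at $(date)"
    last=$cur
  fi
  sleep 5
done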

How to copy files between a host machine and a Windows Server client using Ansible, faster?

My aim is to copy from machine A (Ubuntu) to a remote server B (Windows Server 2012) using Ansible's copy mechanism. I can ping the Windows server and can even copy a small folder from Ubuntu to the server, but when the folder gets big it takes very long to copy and sometimes fails. I am using the following:
- name: copy file
  win_copy:
    src: '/service/test.zip'
    dest: 'D:/test/test.zip'
The test.zip archive is around 300 MB, so win_copy is not solving my purpose. Could you suggest a better option for this case?
I've had this problem and solved it by writing a PowerShell script that downloads the file directly to a known location on the target. I would deploy the script to the target using win_copy or win_template (if you need to do substitutions) and then call it using win_command.
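
That answer assumes the file is reachable over HTTP from the Windows target. If no web server is already available, one quick way to expose it from the Ubuntu control machine (my addition, not part of the answer; the port is arbitrary) is Python's built-in server:

$ cd /service && python3 -m http.server 8080

The PowerShell script on the target can then download http://<control-machine>:8080/test.zip.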

How to keep a history of files changed on an rsync server

How can I instruct an rsync server to keep a copy of the old versions of files that were updated?
Background info:
I have a simple RSYNC server running on Linux which I am using as a backup of a large file system (many TB). Let's call it the backup server.
On the source server, we run daily:
$ rsync -avzc /local/folder user@backup_server::remote_folder
In theory, no files should change on the source server; we should only receive new files. Nonetheless, it is possible that some updates are legitimate (very, very seldom). If rsync detects a change, it overwrites the old version of the file on the backup server with the new one. Now, here is the problem: if the change was a mistake, I lose the data and have no way to recover it.
Ideally, I'd like that rsync server keeps a backup of the replaced files. Is there a way to configure that?
My backups are local to the same machine (but different drive on a mount point of /backup/)
I use --backup-dir=/backup/backups-`date +%F`/ but then it starts nesting the backups rather than producing a flat set of backups-yyyy-mm-dd/ directories in the /backup/ folder.
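
A sketch of the usual remedy (my assumption, not stated in the thread; paths are illustrative): keep the replica and the dated backup directories as siblings, so the backup area is never inside the tree rsync scans and nothing gets nested:

$ rsync -avzc --backup --backup-dir=/backup/history/$(date +%F)/ \
      /local/folder/ /backup/current/

Because /backup/history/ lives outside /backup/current/, the next run never re-syncs or re-backs-up the previous days' copies.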
If someone has a similar issue, there is an easy solution:
Run a simple cron job that changes the access rights on the destination computer.
