I'm not an advanced Linux user, but I'm looking for a simple bash script that runs from cron (or in some other way) every 5-10 minutes and looks for new files; once a new directory and its files have finished uploading, the script should move that new directory with its files to another location.
I found inotify which can be a great solution for this, but the issue is how to go with it.
I've been using inotifywait to recognize that some filesystem change occurred in a certain path.
Take a look at:
http://linux.die.net/man/1/inotifywait
You can specify which changes you are interested in (delete, create, modify, etc.) and whether the script should output them or simply exit after a change.
I've been using this tool in a script that starts inotifywait, performs some action when it exits, and then restarts inotifywait again.
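That restart loop might look something like this (a rough sketch; the directory names are placeholders, and it assumes the inotify-tools package is installed):

```shell
#!/bin/bash
# Wait for a change in a watched directory, act on it, then wait again.
# Directory arguments are placeholders for your own paths.
watch_and_move() {
    local watch_dir="$1" target_dir="$2"
    # -qq: suppress output; the exit status still signals that an event occurred
    while inotifywait -qq -e create -e moved_to "$watch_dir"; do
        mv "$watch_dir"/* "$target_dir"/ 2>/dev/null
    done
}

# Example invocation (paths are made up):
# watch_and_move /home/user/UPLOADS /home/user/processed
```

The loop blocks inside inotifywait until something happens, so it uses no CPU while idle, unlike a polling cron job.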
Hope this helps.
Martin
Related
I am SFTP'ing files to a directory on my ubuntu server. Ideally I would like these files in my apache public html folder as they are pictures that a user is uploading.
I've found that I can't simply SFTP the files directly to my public html folder, so I am researching other methods. My picture server is Ubuntu, so I thought there may be some native command or setting that I could use to automatically move pictures that show up in my SFTP directory to my public html directory.
Hopefully I am making sense, and I'm not sure where else I should be asking this question.
Three possibilities:
Why can you not simply upload the files directly into your public html folder? I assume that has something to do with access restrictions on writing to that directory, so you could try to change that directory's write permissions for the user you are uploading as.
The access restrictions are changed with the command chmod; the ownership of files and directories is changed with chown. It's best to read the documentation for these commands ("man chmod" and "man chown").
You can run a script periodically that takes all uploaded files and moves them to the specified target dir. For this you need to write a short shell script in bash, for example:
#!/bin/bash
mv /home/user/UPLOADS/*.jpg /var/www/images/
(This script simply takes all files with the extension .jpg from the directory /home/user/UPLOADS and puts them, without further checks, into the directory /var/www/images.)
Place this script somewhere (e.g. /home/user/bin/) and make it executable: chmod a+x /home/user/bin/SCRIPTNAME
This script can be run periodically via cron, call crontab -e and write a new line
like so:
*/5 * * * * /home/user/bin/SCRIPTNAME
that executes the script every 5 minutes.
The drawback is that it is only called every 5 minutes, so there might be a gap of up to 5 minutes between upload and move. Additionally, if the script runs WHILE new images are being uploaded, something strange might happen...
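One way to reduce the risk of grabbing half-uploaded files (a sketch, not a complete solution): only move files whose modification time is at least a couple of minutes old, so anything still being written is skipped. Paths are the same placeholders as above.

```shell
#!/bin/bash
# Move only *.jpg files that have not been modified for at least 2 minutes.
move_settled() {
    local src="$1" dst="$2"
    [ -d "$src" ] || return 0
    # -mmin +2: only files last modified more than 2 minutes ago
    find "$src" -maxdepth 1 -name '*.jpg' -mmin +2 -exec mv -t "$dst" {} +
}

move_settled /home/user/UPLOADS /var/www/images
```

This still isn't bulletproof (a stalled upload older than 2 minutes would be moved), but it covers the common case.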
The third possibility is to execute a script as soon as the upload is finished, by watching the upload directory with the inotify feature of the kernel. If you want to do this, it's best to search for inotify examples; it is a little bit more complicated. Here is another SO answer on that:
Monitor Directory for Changes
I have a few files in a directory on my Linux system, named logs-YYMMDD:
/logs-140617
/logs-140616
/logs-140615
Can somebody tell me how to write a bash script that runs every day and packs the file from the previous day (so today the script would create an archive of logs-140616 and delete the unpacked file)?
Thanks for answers
Use logrotate for that purpose; that's what it's for.
For periodically executing stuff, a cron job is the way to do that.
logrotate is already executed by cron, on a daily basis. Therefore, when using logrotate, there would be no need to fiddle with the daily-execution aspect of your problem.
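For reference, a minimal logrotate configuration might look like this (a sketch only; the file name, path, and retention values are placeholders, and note that logrotate's own rotated-file naming differs from the logs-YYMMDD scheme above):

```
# /etc/logrotate.d/mylogs  (hypothetical drop-in file)
/path/to/logs/logs-* {
    daily
    rotate 30
    compress
    missingok
    notifempty
}
```

With `compress` set, logrotate gzips rotated files, and `rotate 30` keeps a month of history before deleting the oldest.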
I have a process that is continually writing new files to a directory. When the current file reaches a certain size, it creates a new one with the timestamp. Like rolling log files, for example.
When the process closes the current file (A) and creates a new one, I would like to move A to a new directory for processing. I'm not sure of the best way to do this...
I wrote a bash script that runs every few minutes, lists all the files in the dir sorted by time, and moves all but the most recent. This works, but I can't help but feel like there is a better way, something more event-driven. I was looking at using inotifywait and capturing the CLOSE_WRITE,CLOSE event for the file ...
Any suggestions?
Thanks!
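For reference, the "move all but the most recent" step described above can be sketched as follows (directory arguments are placeholders; note that parsing ls output breaks on filenames containing newlines, which is acceptable for timestamped log names):

```shell
#!/bin/bash
# Move every file in $1 to $2 except the most recently modified one.
move_all_but_newest() {
    local src="$1" dst="$2"
    # ls -1t sorts newest first; tail -n +2 drops the newest entry
    ls -1t "$src" | tail -n +2 | while IFS= read -r f; do
        mv -- "$src/$f" "$dst/"
    done
}

# move_all_but_newest /var/log/myapp /var/log/myapp/processed
```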
Found a better way. Using incron:
http://inotify.aiken.cz/?section=incron&page=why&lang=en
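An incrontab entry for this might look like the following (a sketch; both paths are placeholders). In incron's table syntax, $@ expands to the watched directory and $# to the name of the file that triggered the event:

```
/var/data/current IN_CLOSE_WRITE mv $@/$# /var/data/processed/
```

IN_CLOSE_WRITE fires when a file opened for writing is closed, which matches the "process closes file A, then move A" behavior described above.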
Recently I was asked the following question in an interview.
Suppose I try to create a new file named myfile.txt in the /home/pavan directory.
It should automatically create myfileCopy.txt in the same directory.
If I create A.txt, it should automatically create ACopy.txt; if B.txt, then BCopy.txt, in the same directory.
How can this be done using a script? As far as I know, this script should run from crontab.
Please don't use inotify-tools.
Can you explain why you want to do this?
Tools like VIM can create a backup copy of a file you're working on automatically. Other tools like Dropbox (which works on Linux, Windows, and Mac) can version files, so it backs up all the copies of the file for the last 30 days.
You could do something by creating aliases to the tools you use for creating these file. You edit a file with the tools you tend to use, and the alias could create a copy before invoking a tool.
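A sketch of that alias idea (everything here is hypothetical; the function names and the "Copy" suffix just follow the question's naming scheme):

```shell
#!/bin/bash
# Copy FILE.ext to FILECopy.ext, if FILE.ext exists.
make_copy() {
    local f="$1"
    if [ -f "$f" ]; then
        # ${f%.*} strips the extension, ${f##*.} keeps it
        cp -- "$f" "${f%.*}Copy.${f##*.}"
    fi
}

# A wrapper that could back an alias: copy first, then open the editor.
# edit() { make_copy "$1"; "${EDITOR:-vi}" "$1"; }
```

The obvious limitation is that it only works when files are created through the wrapped tool, not when some other process writes to the directory.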
Otherwise, your choice is to use crontab to occasionally make backups.
Addendum
Let me explain: suppose I have the directory /home/pavan. Now I create the file myfile.txt in that directory; immediately, myfileCopy.txt should be automatically generated in the same folder.
paven
There's no easy user tool that could do that. In fact, the way you stated it, it's not clear exactly what you want to do and why. Backups are done for two reasons:
To save an older version of the file in case I need to undo recent changes. In your scenario, I'm simply saving a new unchanged file.
To save a file in case of disaster. I want that file to be located elsewhere: On a different computer, maybe in a different physical location, or at least not on the same disk drive as my current file. In your case, you're making the backup in the same directory.
Tools like VIM can be set to automatically back up a file you're editing. This satisfies reason #1 stated above: to get back an older revision of the file. Emacs can create an infinite series of backups.
Tools like Dropbox create a backup of your file in a different location across the aether. This satisfies reason #2, which keeps the file in case of a disaster. Dropbox also versions the files you save, which also covers reason #1.
Version control tools can also do both, if I remember to commit my changes. They store all changes in my file (reason #1) and can store this on a server in a remote location (reason #2).
I was thinking of crontab, but what would I back up? Back up any file that has been modified (reason #1)? That doesn't make too much sense if I'm storing it in the same directory; all I would have are duplicate copies of files. It would make sense to back up the previous version, but how would a simple crontab job know which that is? Do you want to keep the older version of a file, or only the original copy?
The only real way to do this is at the system level, with tools that layer over the disk IO calls. For example, at one location, we used Netapps to create a $HOME/.snapshot directory that contained the way your directory looked every minute for an hour, every hour for a day, and every day for a month. If someone deleted a file or messed it up, there was a good chance that a version of the file existed somewhere in the $HOME/.snapshot directory.
On my Mac, I use a combination of Time Machine (which backs up the entire drive every hour and gives me snapshots of my drive stretching back over a year and a half) and Dropbox, which keeps my files stored on the main Dropbox server somewhere. I've been saved many times by that combination.
I now understand that this was an interview question. I'm not sure what the position was. Did the questioner want you to come up with a system-wide way of implementing this, like a network tech position, or was this one of those brain leaks that someone comes up with at the spur of the moment when interviewing someone, but were too drunk the night before to go over what they should really ask the applicant?
Did they want a whole discussion of what backups are for, and why backing up a file immediately upon creation in the same directory is a non-optimal solution, or were they attempting to solve an issue that came up, but aren't technical enough to understand the real issue?
How could I track changes of specific directory in UNIX? For example, I launch some utility which create some files during its execution. I want to know what exact files were created during one particular launch. Is there any simple way to get such information? Problem is that:
I cannot clear the directory contents after script execution.
The files are created with names that contain a hash as a component, and there is no way to obtain this hash from the script for a subsequent search.
Several scripts could be executing simultaneously, and I do not want to see files created by another process in the same folder.
Please note that I do not want to know merely whether the directory has been changed, as stated here; I need the filenames, which ideally could be grepped to match a specific pattern.
You need to subscribe to file system change notifications.
You should use something like FAM, gamin, or inotify to detect when a file has been created, closed, etc.
You could use strace -f myscript to trace all system calls made by the script, and use grep to filter for the system calls that create new files.
You could use the Linux Auditing System. Here is a howto link:
http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html
You can use the script command to track the commands launched.
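When tracing or auditing isn't available, a before/after snapshot diff can also recover the new filenames with plain POSIX tools (a sketch only, and subject to the caveat above about other processes writing to the same directory at the same time):

```shell
#!/bin/bash
# Print the names of files that appear in DIR while a command runs.
list_new_files() {
    local dir="$1"; shift
    local before after
    before=$(mktemp); after=$(mktemp)
    ls -1 "$dir" | sort > "$before"
    "$@"                                  # run the command being observed
    ls -1 "$dir" | sort > "$after"
    comm -13 "$before" "$after"           # names only in the later snapshot
    rm -f "$before" "$after"
}

# Hypothetical usage: list_new_files /tmp/output ./myscript | grep 'somepattern'
```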