Linux - Restoring a file

I've written a very basic shell script that moves a specified file into the dustbin directory. The script is as follows:
#!/bin/bash
#move items to dustbin directory
mv "$#" ~/dustbin/
echo "File moved to dustbin"
This works fine for me: any file I specify gets moved to the dustbin directory. However, what I would now like to do is create a new script that will move a file in the dustbin directory back to its original directory. I know I could easily write a script that would move it back to a location specified by the user, but I would prefer one that moves it to its original directory.
Is this possible?
I'm using Mac OS X 10.6.4 and Terminal

You will have to store where the original file came from, then: maybe in a separate file, a database, or in the file's attributes (metadata).
Create a logfile with 2 columns:
The complete filename in the dustbin
The complete original path and filename
You will need this logfile anyway: what will you do when a user deletes two files in different directories but with the same name, say /home/user/.wgetrc and /home/user/old/.wgetrc?
What will you do when a user deletes a file, makes a new one with the same name, and then deletes that too? You'll need versions or timestamps or something.
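A minimal sketch of such a pair of scripts, assuming the log lives at ~/dustbin/.log and a timestamp suffix keeps dustbin names unique (both choices are illustrative, not required):
#!/bin/bash
# del.sh -- move a file to the dustbin and record where it came from
orig="$(cd "$(dirname "$1")" && pwd)/$(basename "$1")"   # absolute path (portable to OS X)
name="$(basename "$1").$(date +%s)"                      # timestamp keeps the dustbin name unique
mv "$orig" ~/dustbin/"$name"
printf '%s\t%s\n' "$name" "$orig" >> ~/dustbin/.log

#!/bin/bash
# restore.sh -- look the dustbin name up in the log and move the file back
orig=$(awk -F'\t' -v n="$1" '$1 == n { print $2 }' ~/dustbin/.log | tail -n 1)
mv ~/dustbin/"$1" "$orig"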

You need to store the original location somewhere, either in a database or in an extended attribute of the file. A database is definitely the easiest way to do it, though an extended attribute would be more robust. Looking in ~/.Trash/ I see some, but not all files have extended attributes, so I'm not sure how Apple does it.
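Since you're on OS X, the xattr tool could serve as a quick experiment here; the attribute name local.originalpath is made up for this sketch:
# before moving, stamp the original location onto the file
xattr -w local.originalpath "$(cd "$(dirname "$1")" && pwd)/$(basename "$1")" "$1"
mv "$1" ~/dustbin/

# on restore, read the attribute back and move the file there
dest=$(xattr -p local.originalpath ~/dustbin/"$1")
mv ~/dustbin/"$1" "$dest"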

You need to somehow encode the source directory in the file. I think the easiest would be to change the filename in the dustbin directory, so that /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music.
When you move it back, your script needs to process the filename and reconstruct the path from the part beginning at the |.
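A sketch of that encoding in Bash (note it is fragile by design: a filename containing | or a directory containing _ would break the decode):
# delete: encode the directory into the stored name after a | separator
# (assumes $1 is a path relative to the current directory)
dir=$(dirname "$(pwd)/$1"); dir=${dir#/}            # e.g. home/user/music
mv "$1" ~/dustbin/"$(basename "$1")|${dir//\//_}"   # e.g. song.mp3|home_user_music

# restore: split the stored name at the | and rebuild the path
# ($file is the name as stored in the dustbin)
name=${file%%|*}                                    # part before the |
dir=${file##*|}                                     # part after the |
mv ~/dustbin/"$file" "/${dir//_//}/$name"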

Another approach would be to let the filesystem be your database.
A file moved from /some/directory/somewhere/filename would be moved to ~/dustbin/some/directory/somewhere/filename, and you'd do find ~/dustbin -name "$file" to locate it based on its basename (from user input). Then you'd just trim "~/dustbin" from the output of find and you'd have the destination ready to use. If more than one file is returned by find, you can list the proposed files for user selection. You could use ~/dustbin/$deletiondate if you wanted to make it possible to roll back to earlier versions.
You could do a cron job that would periodically remove old files and the directories (if empty).
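A sketch of that layout (readlink -f is the GNU spelling; on OS X you'd build the absolute path another way, as in the first sketch above):
# delete: recreate the source directory tree under the dustbin
src=$(readlink -f "$1")
mkdir -p ~/dustbin/"$(dirname "$src")"
mv "$src" ~/dustbin/"$src"

# restore: find the file by basename, trim the dustbin prefix, move it back
match=$(find ~/dustbin -name "$file")    # may return several lines; let the user pick one
dest=${match#"$HOME/dustbin"}
mv "$match" "$dest"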

Related

Unix create multiple files with same name in a directory

I am looking for some kind of logic in Linux whereby I can place files with the same name in a directory or file system.
For example: I create a file abc.txt, and the next time any process creates abc.txt it should automatically check and create the file as abc.txt.1, then next time abc.txt.2, and so on...
Is there a way to achieve this?
Any logic or third party tools are also welcomed.
You ask,
"I create a file abc.txt, and the next time any process creates abc.txt it should automatically check and create the file as abc.txt.1"
(emphasis added). To obtain such an effect automatically, for every process, without explicit provision by processes, it would have to be implemented as a feature of the filesystem containing the files. Such filesystems are called versioning filesystems, though typically the details are slightly different from what you describe. Most importantly, however, although such filesystems exist for Linux, none of them are mainstream. To the best of my knowledge, none of the major Linux distributions even offers one as a distribution-supported option.
Although it's a bit dated, see also Linux file versioning?
You might be able to approximate that for many programs via a customized version of the C standard library, but that's not foolproof, and you should not expect it to have universal effect.
It would be an altogether different matter for an individual process to be coded for such behavior. It would need to check for existing files and choose an appropriate name when opening each new file. In doing so, some care needs to be taken to avoid related race conditions, but it can be done. Details would depend on the language in which you are writing.
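In Bash, for instance, one way to sidestep the race is to let the kernel do the existence check atomically via noclobber (set -C), rather than testing for the file and then creating it as two separate steps; a sketch:
#!/bin/bash
# next_name.sh -- create abc.txt, or abc.txt.1, abc.txt.2, ... without a race
base=$1
name=$base
n=0
# with noclobber in effect, > fails atomically if the file already exists
until ( set -C; : > "$name" ) 2>/dev/null; do
    n=$((n + 1))
    name=$base.$n
done
echo "$name"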
You can use Bash brace expansion to achieve this. For example, if I wanted to make 10 files, all with the same name but each having a unique number, I would do the following:
# touch my_file{01..10}.txt
This would create 10 files, numbered from 01 all the way to 10. This method is also handy for looping over files in a sequence, or if you're also creating directories.
Now, if I am reading your question right, you're asking that when you move or create a file in a directory, a script should automatically create a new file for you? If that is the case, then just use a test: if the file is already there, move it aside and mark it. Personally, I use timestamps to do so.
Logic:
# The [ -f ] test checks whether the file is already present
if [ -f "$MY_FILE_NAME" ]; then
    # If it is, move the old file aside, suffixing it with the PID ($$)
    # so the parked name is unique. Note the braces: ${MY_FILE_NAME}_$$,
    # since $MY_FILE_NAME_$$ would expand the (empty) variable MY_FILE_NAME_
    mv "$MY_FILE_NAME" "${MY_FILE_NAME}_$$"
    mv "$MY_NEW_FILE" .
else
    # Nothing in the way - just move or make the file here
    mv "$MY_NEW_FILE" .
fi
As you can see, the logic is very simple. Hope this helps.
Cheers
I don't know about your particular use case, but you may want to take a look at logrotate:
https://wiki.archlinux.org/index.php/Logrotate
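For reference, a minimal stanza that produces exactly the abc.txt.1, abc.txt.2, ... numbering (the path and counts are examples):
/var/log/abc.txt {
    rotate 5        # keep abc.txt.1 through abc.txt.5
    daily           # rotate once a day; size-based triggers also exist
    missingok       # don't complain if the file is absent
    notifempty      # skip rotation when the file is empty
}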

Restore shell script linux

Hey, I have this question for coursework and I was wondering if someone could give me some help. As it's coursework I don't want someone to just write the code for me, but perhaps you could give me a short example, or even tell me what kind of things I should use so I can read up on them.
I have a delete script which stores the location of the file that is deleted via
readlink -f "$1" >> /root/TAM/store
The files are stored in the directory /root/TAM/dustbin when deleted
and the question I am stuck on is
restore - This script should move the named file back to its original directory without requiring any further user input.
If a file of that name already exists at the restore location, the script prompts the user to select an appropriate alternative action.
When you delete a file, you don't really delete it, but move it to your dustbin directory, keeping the full path from the root (so if you remove /home/foo/blabla, you store it in dustbin/home/foo/blabla).
The restore command/script should then verify, before restoring the file from the dustbin, whether there is a file with the same name in the original path.
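Without doing the coursework for you, the shape of that check in shell might look roughly like this (a sketch following the question's /root/TAM layout; the grep assumes a unique match and no regex metacharacters in the name):
# look the filename up in the stored full paths
orig=$(grep "/$1\$" /root/TAM/store | tail -n 1)
if [ -e "$orig" ]; then
    # prompt for an alternative action here (overwrite, rename, skip, ...)
    echo "A file already exists at $orig"
else
    mv "/root/TAM/dustbin/$1" "$orig"
fi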

inotify --fromfile directive

I have a Fedora 15 system with the Xfce window manager.
I installed the inotify utilities to play with.
I want to monitor what happens to my files while I work.
Here is the command I currently use to run inotifywait:
inotifywait --fromfile ~/list.inotify
That command simply reads a list of folders and files to watch and to ignore.
Here is my list (list.inotify):
/home/alex
#/home/alex/Torrents/
#/home/alex/.pulse-cookie
So it should watch my home folder and ignore the Torrents folder and the .pulse-cookie file.
It ignores the Torrents folder just fine, but it won't ignore the .pulse-cookie file.
Any solution for this? (Please don't post a solution using pattern-based ignores; I want to work with a file list of absolute paths.)
$ man inotifywait
#<file>
When watching a directory tree recursively, exclude the specified file from being watched. The file must be specified with a relative or absolute path according to whether a relative or absolute path is given for watched directories. If a specific path is explicitly both included and excluded, it will always be watched.
Note: If you need to watch a directory or file whose name starts with #, give the absolute path.
--fromfile <file>
Read filenames to watch or exclude from a file, one filename per line. If filenames begin with # they are excluded as described above. If <file> is `-', filenames are read from standard input. Use this option if you need to watch too many files to pass in as command line arguments.
If you don't specify a -e argument, inotifywait will call inotify_add_watch with IN_ALL_EVENTS, which causes events to occur for files inside watched directories - note that inotify(7) says:
When monitoring a directory, the events marked with an asterisk (*) above can occur for files in the directory, in which case the name field in the returned inotify_event structure identifies the name of the file within the directory.
If you have a look at the inotifywait code in question, you'll see that it only watches (and checks the exclude list against) directories. It would perhaps be a bit more user friendly if you were warned when specifying an exclusion that is not a directory, or one that is never used, but that's the way it currently is.
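Given that limitation, the closest workaround I can think of keeps your absolute-path list but applies the file-level exclusions to inotifywait's output rather than its watch list (admittedly not the in-tool solution you were hoping for):
# watch the tree, print "path event" per line, and drop the excluded file
# (the trailing space keeps e.g. .pulse-cookie2 from matching too)
inotifywait -m -r --format '%w%f %e' /home/alex \
    | grep -v -F '/home/alex/.pulse-cookie '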

Overwrite file when copying IF the content is not the same

I have a lot of files in one place (A) and a lot of other files in another place (B).
I'm copying A to B; a lot of the files are the same, but the content could be different!
Usually I use mc (Midnight Commander) to do it, and select "Overwrite if different size".
But there is a situation where the sizes are the same but the content is different. In this case mc keeps the file in B and does not overwrite it.
In the mc overwrite dialog there is a word "Update", but I don't know what it does; the help has no such information. Maybe this is the solution?
So I'm searching for a solution which can help me copy all files from A to B, overwriting files in B if they exist AND their content differs from A.
If a file in B exists (with the same name) and the content is different, it has to be overwritten by the file from A every time.
Do you know any solution?
I'd use rsync as this will not rely on the file date but actually check whether the content of the file has changed. For example:
#> rsync -cr <directory to copy FROM> <directory to copy TO>
Rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts).
-c, --checksum skip based on checksum, not mod-time & size
-r, --recursive recurse into directories
See man rsync for more options and details.
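When trying this on a large tree, adding -n (--dry-run) together with -v first will show what would be copied without touching anything:
#> rsync -crnv <directory to copy FROM> <directory to copy TO>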
Have you tried the command line:
cp -ru A/* B/
This should recursively copy all changed files (those with a more recent timestamp) from directory A to directory B.
You can also use -a instead of -r in the command line, depending on what you want to do. See the cp man page.
You might want to keep some sort of 'index' file that holds the SHA-1 hash of the files, which you create when you write them. You can then calculate the 'source' hash and compare it against the 'destination' hash from the index file. This will only work if this process is the only way files are written to the destination.
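A sketch of that index using sha1sum (".index" and the variable names are just for illustration):
# when writing a file into B, record its hash in the index
sha1sum "B/$name" >> B/.index

# before copying, compare the source hash against the recorded one
src=$(sha1sum "A/$name" | cut -d' ' -f1)
dst=$(grep "B/$name\$" B/.index | tail -n 1 | cut -d' ' -f1)
if [ "$src" != "$dst" ]; then
    cp "A/$name" "B/$name"
fi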
http://linux.math.tifr.res.in/manuals/man/mc.html
The replace dialog is shown when you attempt to copy or move a file on top of an existing file. The dialog shows the dates and sizes of both files. Press the Yes button to overwrite the file, the No button to skip the file, the alL button to overwrite all the files, the nonE button to never overwrite, and the Update button to overwrite if the source file is newer than the target file. You can abort the whole operation by pressing the Abort button.

How can you tell what files are currently open by any user?

I am trying to write a script or a piece of code to archive files, but I do not want to archive anything that is currently open. I need to find a way to determine which files in a directory are open. I want to use either Perl or a shell script, but can try to use other languages if needed. It will be in a Linux environment and I do not have the option to use lsof. I have also had inconsistent results with fuser. Thanks for any help.
I am trying to take log files in a directory and move them to another directory. If the files are open however, I do not want to do anything with them.
You are approaching the problem incorrectly. You wish to keep files from being modified underneath you while you are reading, and cannot do that without operating system support. The best that you can hope for in a multi-user system is to keep your archive metadata consistent.
For example, if you are creating the archive directory, make sure that the number of bytes stored in the archive matches the directory. You can checksum the file contents before and after reading the filesystem and compare that with what you wrote to the archive and perhaps flag it as "inconsistent".
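A sketch of that "checksum before and after" idea, assuming GNU sha1sum and an $archive directory of your choosing:
# hash the file, copy it, then hash it again; flag files that changed mid-read
before=$(sha1sum "$f" | cut -d' ' -f1)
cp "$f" "$archive/"
after=$(sha1sum "$f" | cut -d' ' -f1)
if [ "$before" != "$after" ]; then
    echo "$f" >> "$archive/inconsistent.list"   # flag it as "inconsistent"
fi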
What are you trying to accomplish?
Added in response to comment:
Look at logrotate to steal ideas about how to handle this consistently, or just have it do the work for you. If you are concerned that renaming files out from under processes that are currently writing to them will break things, take a look at man 2 rename:
rename() renames a file, moving it between directories if required. Any other hard links to the file (as created using link(2)) are unaffected. Open file descriptors for oldpath are also unaffected.
If newpath already exists it will be atomically replaced (subject to a few conditions; see ERRORS below), so that there is no point at which another process attempting to access newpath will find it missing.
Try ls -l /proc/*/fd/* as root.
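Building on that, a sketch that tests one particular file without lsof (run as root so every process's fd directory is readable):
#!/bin/bash
# is_open.sh -- exit 0 if some process currently has $1 open
target=$(readlink -f "$1")
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd")" = "$target" ]; then
        exit 0
    fi
done
exit 1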
msw has answered the question correctly, but if you want to find the list of open files, the lsof command will give it to you.
