Restore shell script - linux

Hey, I have this question for coursework and I was wondering if someone could give me some help. As it's coursework I don't want someone to just write the code for me, but you could give me a short example or even tell me what kind of things I should use, and I can read up on them.
I have a delete script which stores the location of the file that is deleted via
readlink -f "$1" >> /root/TAM/store
The files are stored in the directory /root/TAM/dustbin when deleted
and the question I am stuck on is
restore - This script should move the named file back to its original directory without requiring any further user input.
If a file of that name already exists at the restore location, the script prompts the user to select an appropriate alternative action.

When you delete a file, you don't really delete it, but move it to your dustbin directory, keeping the full path from the root (so if you remove /home/foo/blabla, you store it in dustbin/home/foo/blabla).
The restore command/script should then check, before restoring a file from the dustbin, whether a file with the same name already exists at the original path.
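Since this is coursework, here is only a sketch of the lookup half, not a full solution. It assumes the store file written by `readlink -f "$1" >> /root/TAM/store` holds one absolute path per line; the function name and its arguments are placeholders for illustration:

```shell
#!/bin/sh
# Sketch: look up the most recently recorded original path for a file name.
# $1 = path to the store file, $2 = bare file name (e.g. blabla).
find_original() {
    store=$1
    name=$2
    # Match lines ending in /<name>. Note '.' in the name acts as a regex
    # wildcard here, which is acceptable for a sketch but not for real use.
    grep "/$name\$" "$store" | tail -n 1
}
```

A restore script would then test `[ -e "$orig" ]` on the result and prompt the user before overwriting.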


How to move folders with command line prompt in Windows 10?

Let's say we have the following directory structure: parent\exampledir\exampledir\<very complicated folder structure like node modules>. By running only one command I want to end up with the structure: parent\exampledir\<very complicated folder structure like node modules>. How can I achieve this in the command prompt?
I tried move exampledir\exampledir ., but I get a prompt asking whether I want to overwrite exampledir. After I answer Yes, I get a message saying that access is denied. If I rename the outer exampledir, everything is fine and the inner exampledir with all its files and folders is moved correctly, but then there is one extra step where I need to delete the outer exampledir.
You can do this on one command line, though technically it is multiple commands; in cmd you chain them with & (a semicolon is not a command separator there). The idea is to first rename the source directory, then pull the inner directory back to the original name:
ren "parent\exampledir" "exampledir.bak" & move "parent\exampledir.bak\exampledir" "parent\exampledir"
(A version using a variable runs into cmd's parse-time %var% expansion on a single line, so the paths are spelled out here.) This effectively moves the directory to a .bak and then pulls the subdirectory of the same name back to the original name. If you later wrap this in a batch-file FOR loop, don't forget that FOR variables there need %% instead of %.
Another option could be to add * after the directory name so you MOVE exampledir\exampledir* exampledir\
It may also be possible, using Git Bash, to launch its shell and use the Linux mv command, which may work.
Finally, if you want to make sure you back up your batch files, create a free GitHub account and either store them in a repo or create a Gist for one-off things.

linux delete user account without userdel

I'd like to delete a user from a tarball that contains the files for a Linux OS (it's a tarball of the root [/] filesystem). Is there a way to do this completely and properly such that it would mimic the steps taken by the userdel command? I suppose I have two choices:
1. Work within the OS on an actual target, use userdel and then re-tar the files. Not a problem, but I was curious about acting directly on the tarball, hence...
2. Mimic the steps taken by userdel: un-tar and delete all entries related to the user. According to the man page of userdel, I would delete entries in /etc/group, /etc/login.defs, /etc/passwd, and /etc/shadow. Then, re-tar.
Approach (2) is attractive because I could programmatically add or delete users directly on the tarball. I'll try (2), but wondering if there would be any unintended consequences or leftover bookkeeping that I should do? Or is there another way to do this?
/etc/login.defs is only consulted when a new user is created, so that file does not need to be modified. However, a mail spool will have been created for the user in the location listed in login.defs.
Deleting the user from /etc/shadow and /etc/passwd will work. /etc/group is not a requirement, but it can't hurt. Those three files will take care of it. You may also delete the mail spool if desired.
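As a sketch of approach (2), the edits to those three files can be scripted against an unpacked copy of the tarball. This assumes GNU sed (for -i and \b) and the conventional passwd/shadow/group layouts; the function name and arguments are made up for illustration:

```shell
#!/bin/sh
# Sketch: remove user $2 from passwd, shadow, and group under unpacked root $1.
deluser_from_tree() {
    root=$1
    user=$2
    # Drop the user's own entries.
    sed -i "/^$user:/d" "$root/etc/passwd" "$root/etc/shadow"
    # Drop the user's private group, then remove the user from any member
    # lists: middle/last member, first member, or sole member.
    sed -i "/^$user:/d; s/,$user\b//; s/:$user,/:/; s/:$user\$/:/" \
        "$root/etc/group"
}
```

Unpack with something like `tar -xpf rootfs.tar -C root/`, run the function, then re-tar with `tar -cpf rootfs.tar -C root/ .` (check your tar's manual for the exact flags you need to preserve ownership).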

shell script to create backup file when creating new file in particular directory

Recently I was asked the following question in an interview.
Suppose I try to create a new file named myfile.txt in the /home/pavan directory.
It should automatically create myfileCopy.txt in the same directory.
A.txt then it automatically creates ACopy.txt,
B.txt then BCopy.txt in the same directory.
How can this be done using a script? I assume such a script would run from crontab.
Please don't use inotify-tools.
Can you explain why you want to do this?
Tools like VIM can create a backup copy of a file you're working on automatically. Other tools like Dropbox (which works on Linux, Windows, and Mac) can version files, so it backs up all the copies of the file for the last 30 days.
You could do something by creating aliases for the tools you use to create these files. You edit a file with the tools you tend to use, and the alias could create a copy before invoking the tool.
Otherwise, your choice is to use crontab to occasionally make backups.
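For the crontab route, a minimal sketch might look like this. The watched directory, the .txt suffix, and the Copy naming convention come from the question; everything else is an assumption:

```shell
#!/bin/sh
# Sketch: for every .txt in a directory with no *Copy.txt yet, create one.
# Meant to be run from cron, e.g. every minute: * * * * * /path/to/backup.sh
backup_new_files() {
    watch=$1
    for f in "$watch"/*.txt; do
        [ -e "$f" ] || continue                 # glob matched nothing
        case $f in *Copy.txt) continue ;; esac  # skip the copies themselves
        copy="${f%.txt}Copy.txt"
        [ -e "$copy" ] || cp -p "$f" "$copy"    # create the copy only once
    done
}

backup_new_files "${1:-/home/pavan}"
```

Note that cron's one-minute granularity means the copy is not created "immediately", which is part of why this answer pushes back on the requirement.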
Addendum
Let me explain: suppose I have the directory /home/pavan. Now I create the file myfile.txt in that directory; immediately, a myfileCopy.txt file should be generated automatically in the same folder.
paven
There's no easy user tool that could do that. In fact, the way you stated it, it's not clear exactly what you want to do and why. Backups are done for two reasons:
To save an older version of the file in case I need to undo recent changes. In your scenario, I'm simply saving a new unchanged file.
To save a file in case of disaster. I want that file to be located elsewhere: On a different computer, maybe in a different physical location, or at least not on the same disk drive as my current file. In your case, you're making the backup in the same directory.
Tools like VIM can be set to automatically back up a file you're editing. This satisfies reason #1 stated above: to get back an older revision of the file. Emacs can create an infinite series of backups.
Tools like Dropbox create a backup of your file in a different location across the aether. This satisfies reason #2, keeping the file in case of a disaster. Dropbox also versions the files you save, which covers reason #1 as well.
Version control tools can also do both, if I remember to commit my changes. They store all changes in my file (reason #1) and can store this on a server in a remote location (reason #2).
I was thinking of crontab, but what would I back up? Any file that had been modified (reason #1)? That doesn't make much sense if I'm storing the backup in the same directory; all I would have are duplicate copies of files. It would make sense to back up the previous version, but how would a simple crontab job know what that is? Do you want to keep the older version of a file, or only the original copy?
The only real way to do this is at the system level with tools that layer over the disk IO calls. For example, at one location, we used Netapps to create a $HOME/.snapshot directory that contained the way your directory looked every minute for an hour, every hour for a day, and every day for a month. If someone deleted a file or messed it up, there was a good chance that the version of the file exists somewhere in the $HOME/.snapshot directory.
On my Mac, I use a combination of Time Machine (which backs up the entire drive every hour and gives me snapshots of my drive stretching back over a year and a half) and Dropbox, which keeps my files stored on the main Dropbox server somewhere. I've been saved many times by that combination.
I now understand that this was an interview question. I'm not sure what the position was. Did the questioner want you to come up with a system-wide way of implementing this, like a network tech position, or was this one of those brain leaks that someone comes up with at the spur of the moment when they interview someone, but were too drunk the night before to go over what they should really ask the applicant?
Did they want a whole discussion on what backups are for, and why backing up a file immediately upon creation in the same directory is a non-optimal solution, or were they attempting to solve an issue that came up, but aren't technical enough to understand the real issue?

How can you tell what files are currently open by any user?

I am trying to write a script or a piece of code to archive files, but I do not want to archive anything that is currently open. I need a way to determine which files in a directory are open. I want to use either Perl or a shell script, but can try other languages if needed. It will be in a Linux environment and I do not have the option to use lsof. I have also had inconsistent results with fuser. Thanks for any help.
I am trying to take log files in a directory and move them to another directory. If the files are open however, I do not want to do anything with them.
You are approaching the problem incorrectly. You wish to keep files from being modified underneath you while you are reading them, and you cannot do that without operating system support. The best that you can hope for in a multi-user system is to keep your archive metadata consistent.
For example, if you are creating the archive directory, make sure that the number of bytes stored in the archive matches the source directory. You can checksum the file contents before and after reading, compare that with what you wrote to the archive, and perhaps flag the entry as "inconsistent".
What are you trying to accomplish?
Added in response to comment:
Look at logrotate to steal ideas about how to handle this consistently, or just have it do the work for you. If you are concerned that renaming files will break processes that are currently writing to them, take a look at man 2 rename:
rename() renames a file, moving it between directories if required. Any other hard links to the file (as created using link(2)) are unaffected. Open file descriptors for oldpath are also unaffected.
If newpath already exists it will be atomically replaced (subject to a few conditions; see ERRORS below), so that there is no point at which another process attempting to access newpath will find it missing.
Try ls -l /proc/*/fd/* as root.
msw has answered the question correctly, but if you do want the list of open files, the lsof command will give it to you.
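Building on the /proc idea above, here is a sketch of a check that works without lsof. It scans the /proc/*/fd symlinks, so it only sees processes you have permission to inspect (run it as root for a complete answer); the function name is made up, and the result is inherently racy since a file can be opened the instant after you check:

```shell
#!/bin/sh
# Sketch: succeed (exit 0) if some process on this Linux box has $1 open.
is_open() {
    target=$(readlink -f "$1")
    for fd in /proc/[0-9]*/fd/*; do
        # readlink may fail if the process exits mid-scan; ignore that.
        [ "$(readlink -f "$fd" 2>/dev/null)" = "$target" ] && return 0
    done
    return 1
}
```

A Perl version would walk the same /proc tree with readlink(); the principle is identical.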

Linux - Restoring a file

I've written a very basic shell script that moves a specified file into the dustbin directory. The script is as follows:
#!/bin/bash
#move items to dustbin directory
mv "$@" ~/dustbin/
echo "File moved to dustbin"
This works fine for me, any file I specify gets moved to the dustbin directory. However, what I would like to do is create a new script that will move the file in the dustbin directory back to its original directory. I know I could easily write a script that would move it back to a location specified by the user, but I would prefer to have one that would move it to its original directory.
Is this possible?
I'm using Mac OS X 10.6.4 and Terminal
You will have to store where the original file came from, then. Maybe in a separate file, a database, or in the file's attributes (metadata).
Create a logfile with 2 columns:
The complete filename in the dustbin
The complete original path and filename
You will need this logfile anyway - what will you do when a user deleted 2 files in different directories, but with the same name? /home/user/.wgetrc and /home/user/old/.wgetrc ?
What will you do when a user deletes a file, makes a new one with the same name, and then deletes that too? You'll need versions or timestamps or something.
You need to store the original location somewhere, either in a database or in an extended attribute of the file. A database is definitely the easiest way to do it, though an extended attribute would be more robust. Looking in ~/.Trash/ I see some, but not all files have extended attributes, so I'm not sure how Apple does it.
You need to somehow encode the source directory in the file. I think the easiest would be to change the filename in the dustbin directory. So that /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music
And when you move it back, your script needs to parse the file name and reconstruct the path from the part after the |.
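A sketch of that encoding, using the answer's | separator and underscores for slashes (note this is lossy if real directory names contain underscores, and | must be quoted in the shell); both function names are made up:

```shell
#!/bin/sh
# Sketch: encode the original directory into the dustbin file name, and back.
encode_name() {
    # /home/user/music/song.mp3 -> song.mp3|home_user_music
    printf '%s|%s\n' "$(basename "$1")" \
        "$(dirname "$1" | sed 's|^/||' | tr / _)"
}
decode_dir() {
    # song.mp3|home_user_music -> /home/user/music
    printf '/%s\n' "${1##*|}" | tr _ /
}
```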
Another approach would be to let the filesystem be your database.
A file moved from /some/directory/somewhere/filename would be moved to ~/dustbin/some/directory/somewhere/filename, and you'd do find ~/dustbin -name "$file" to find it based on its basename (from user input). Then you'd just trim "~/dustbin" from the output of find and you'd have the destination ready to use. If more than one file is returned by find, you can list the proposed files for user selection. You could use ~/dustbin/$deletiondate if you wanted to make it possible to roll back to earlier versions.
You could do a cron job that would periodically remove old files and the directories (if empty).
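The filesystem-as-database idea can be sketched like this. The dustbin location is taken as an argument rather than hard-coded, and the overwrite prompt is the simplest possible "alternative action"; all names here are illustrative:

```shell
#!/bin/sh
# Sketch: restore a file whose full original path is preserved under $dustbin.
restore() {
    dustbin=$1
    name=$2
    # First match wins; a real script would list all matches for the user.
    match=$(find "$dustbin" -type f -name "$name" | head -n 1)
    [ -n "$match" ] || { echo "not found: $name" >&2; return 1; }
    dest=${match#"$dustbin"}    # strip the dustbin prefix -> original path
    if [ -e "$dest" ]; then
        printf 'File exists at %s. Overwrite? [y/N] ' "$dest"
        read -r answer
        [ "$answer" = y ] || return 1
    fi
    mkdir -p "$(dirname "$dest")"
    mv "$match" "$dest"
}
```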