mv command giving "are the same file" error - Linux

So I am building a script to take a file from a "trash" directory and move it to the home directory. I am getting the error mv: /home/user/Trash/ and /home/user/Trash are the same file. The problem is that I am moving the file to /home/user, and I can't figure out why it is giving me this error.
Script:
trash="/home/user/Trash"
homedirectory="/home/user/"
for files in "$trash"/*
do
echo "$(basename $files) deleted on $(date -r $files)"
done
echo "Enter the filename to undelete from the above list:"
read $undeletefile
mv $trash/$undeletefile $homedirectory
Output:
myfile2 deleted on Thu Jan 23 18:47:50 CST 2014
trashfile deleted on Fri Feb 28 23:07:33 CST 2014
Enter the filename to undelete from the above list:
trashfile
mv: `/home/user/Trash/' and `/home/user/Trash' are the same file

I think your problem is in the read command: you are not supposed to add $ to the variable name. Since undeletefile is empty at that point, read $undeletefile expands to a bare read, which stores the input in $REPLY and leaves undeletefile unset; the mv then runs as mv /home/user/Trash/ /home/user/, which is why mv complains that the two paths are the same file.
Try:
read undeletefile
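For reference, a minimal corrected version of the whole script, with the expansions quoted as well so filenames with spaces survive (a sketch, not tested against your exact setup):
trash="/home/user/Trash"
homedirectory="/home/user/"
for file in "$trash"/*
do
echo "$(basename "$file") deleted on $(date -r "$file")"
done
echo "Enter the filename to undelete from the above list:"
read undeletefile   # no $ here: read takes the variable NAME
mv "$trash/$undeletefile" "$homedirectory"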

Related

Deal with filenames with spaces in shell

I've read the answer there, but I still got it wrong.
In my folder I only want to deal with *.gz files, and Windows 10.tar.gz has a space in its filename.
Assume the folder contains:
Windows 10.tar.gz Windows7.tar.gz otherfile
Here is my shell script. I have tried everything to quote with "", but still can't get what I want.
crypt_import_xml.sh
crypt_import_xml.sh
#!/bin/sh
rule_dir=/root/demo/rule
function crypt_import_xml()
{
rule=$1
# list the files with absolute paths
for file in `ls ${rule}/*.gz`; do
echo "${file}"
#tar -xf *.gz
#mv a b.xml to ab.xml
done
}
crypt_import_xml ${rule_dir}
Here is what I got:
root@localhost.localdomain:[/root/demo]./crypt_import_xml.sh
/root/demo/rule/Windows
10.tar.gz
/root/demo/rule/Windows7.tar.gz
After tar xf extracts the *.gz files, the xml filenames still contain spaces. It is a nightmare for me to deal with filenames that contain spaces.
You shouldn't use ls in a for loop.
$ ls directory
file.txt 'file with more spaces.txt' 'file with spaces.txt'
Using ls:
$ for file in `ls ./directory`; do echo "$file"; done
file.txt
file
with
more
spaces.txt
file
with
spaces.txt
Using file globbing:
$ for file in ./directory/*; do echo "$file"; done
./directory/file.txt
./directory/file with more spaces.txt
./directory/file with spaces.txt
So:
for file in "$rule"/*.gz; do
echo "$file"
#tar -xf *.gz
#mv a b.xml to ab.xml
done
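For the commented-out rename step (mv a b.xml to ab.xml), bash's parameter expansion can strip the spaces from each extracted name. A hedged sketch, assuming the extracted xml files land next to the archives and that bash (not plain sh) is running:
for xml in "$rule"/*.xml; do
mv "$xml" "${xml// /}"   # "a b.xml" becomes "ab.xml"
done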
You do not need to call ls in the for loop; the file globbing takes place in your shell, without running an additional command:
XXX-macbookpro:testDir XXX$ ls -ltra
total 0
drwx------+ 123 XXX XXX 3936 Feb 22 17:15 ..
-rw-r--r-- 1 XXX XXX 0 Feb 22 17:15 abc 123
drwxr-xr-x 3 XXX XXX 96 Feb 22 17:15 .
XXX-macbookpro:testDir XXX$ rule=.
XXX-macbookpro:testDir XXX$ for f in "${rule}"/*; do echo "$f"; done
./abc 123
In your case you can change the "${rule}"/* into:
"${rule}"/*.gz;

Testing a file in a tar archive

I've been manipulating a tar file and I would like to test whether a file exists in it before extracting it.
Let's say I have a tar file called Archive.Tar, and after entering
tar -tvf Archive.Tar
I get:
-rwxrwxrwx guy/root 1502 2013-10-02 20:43 Directory/File
-rwxrwxrwx guy/root 494 2013-10-02 20:43 Dir/SubDir/Text
drwxrwxrwx guy/root 0 2013-10-02 20:43 Directory
I want to extract Text into my Working directory, but I want to be sure that it's actually a file by doing this:
if [ -f Dir/Sub/Text ]
then
echo "OK"
else
echo "KO"
fi
The result of this test is always KO and I really don't understand why. Any suggestions?
Tested with both the BSD and GNU versions of tar: in the output of tar tf, entries that are directories end with /. So to test whether Dir/Sub/Text is a file or a directory in the archive, you can simply grep with full-line matching:
if tar tf Archive.Tar | grep -x Dir/Sub/Text >/dev/null
then
echo "OK"
else
echo "KO"
fi
If the archive contains Dir/SubDir/Text/, then Dir/SubDir/Text is a directory, and the grep will not match, so KO will be printed.
If the archive contains Dir/SubDir/Text without a trailing /,
then Dir/SubDir/Text is a file and the grep will match,
so OK will be printed.
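One refinement: grep -x still treats the pattern as a regular expression, so a dot in the path would match any character. Adding -F makes it a fixed-string comparison, and -q can replace the redirect to /dev/null:
if tar tf Archive.Tar | grep -qFx Dir/Sub/Text
then
echo "OK"
else
echo "KO"
fi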
Note that the [ ] test operators look at the filesystem, not inside the archive, so the following only applies after extraction:
if [ ! -d Dir/Sub/Text ]
then
echo "OK"
else
echo "KO"
fi
This will print KO only if a directory named Text exists, and OK if it is a file or does not exist (or, to be precise, also OK if it is a symlink).
This might be a solution:
tar -tvf Archive.Tar | grep Dir/Sub/Text
This will let you know if it finds the file, although without full-line matching it also matches any entry that merely contains that string.

Bash script to connect to a remote server, and pull the last time a file was modified

I am looking to create a bash script to query a remote server for the last time every instance of a file was modified. Each home directory has a version of this file.
For example, both owner and owner1 have a copy of foo.txt in their home directories on a remote box accessible via ssh.
/home/owner/
-rw-r--r-- 1 owner owner 3368 Jul 29 2014 foo.txt
/home/owner1/
-rw-r--r-- 1 owner1 owner1 3368 Jul 28 2014 foo.txt
I would like to output this information to a file that would look like:
User: owner Last Modified: Jul 29 2014
User: owner1 Last Modified: Jul 28 2014
You really ought to at least show that you attempted to write the script yourself. Anyway, it's only a one-liner, so why quibble:
ssh remote-box 'ls -l /home/*/foo.txt'
It's not precisely the format you suggested, but it has all the information you asked for.
echo owner: `ssh owner@remote-box "date -r foo.txt"` > output.txt
echo owner1: `ssh owner1@remote-box "date -r foo.txt"` >> output.txt
The following function will print the data you're looking for:
remote_mod() {
ssh $1 ls -l $2 | awk '{ print "User: "$3" Last Modified: "$6" "$7" "$8 }'
}
This prints something like:
$ remote_mod yourmachine '~/.bashrc'
User: root Last Modified: Jun 2 15:01
You can then do this in a loop if you want to run the command against multiple remote files:
for d in owner owner1
do
remote_mod yourmachine /home/$d/foo.txt
done
The stat command will give you even more information, but it's in a more verbose format.
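For instance, a hedged sketch of the stat variant (GNU stat; %U is the owning user and %y the modification time; remote-box stands in for your host):
ssh remote-box 'stat -c "User: %U Last Modified: %y" /home/*/foo.txt' > output.txt
%y prints a full timestamp such as 2014-07-29 10:12:01.000000000 -0500, so you may want to trim it to taste.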
Here you go; maybe not exactly the output you want, but I'm sure you will be able to modify the script to suit your needs. Make sure the user you ssh with has read access to the home directories.
ssh HOSTNAME "find /home/ -maxdepth 2 -name foo.txt | xargs -l -I{} bash -c '{
DIR=\$(dirname {});
LAST=\$(stat -c %y {});
echo "Dir:\${DIR} Last Modified :\${LAST}"
}'"
If the owner of the file is the "user" you want printed, you can simplify with:
ssh HOSTNAME "find /home/ -maxdepth 2 -name foo.txt | xargs -l -I{} bash -c '{
stat -c \"User: %U Last Modified : %y\" {};
}'"

Script to look at files in a directory

I am writing a script that shows all the files in a directory named "Trash". The script then prompts the user for which file he wants to "undelete" and sends it back to its original directory. Currently I am having a problem with the for statement, but I am also not sure how to have the user input which file to restore and how to move it back to its original directory. Here is what I have thus far:
PATH=/home/user/Trash
for files in $PATH
do
echo "$files deleted on $(date -r $files)"
done
echo "Enter the filename to undelete from the above list:"
Actual Output:
./undelete.sh: line 6: date: command not found
/home/user/Trash deleted on
Enter the filename to undelete from the above list:
Expected Output:
file1 deleted on Thu Jan 23 18:47:50 CST 2014
file2 deleted on Thu Jan 23 18:49:00 CST 2014
Enter the filename to undelete from the above list:
So I am having two problems currently. First, instead of reading out the files in the directory, it is giving $files the value of PATH; second, the echo command in the do statement is not processing correctly. I have changed it around all kinds of different ways but can't get it to work properly.
You're making many mistakes in your script, but the biggest of all is assigning to the reserved variable PATH. That clobbers the standard command search path, which is exactly why you see errors like date: command not found.
In general, avoid all-caps variable names in your scripts.
To give you a start, you can use a script like this:
trash=/home/user/Trash
restore=$HOME/restored/
mkdir -p "$restore" 2>/dev/null
for file in "$trash"/*
do
read -p "Do you want to keep $file (y/n): " yn
[[ "$yn" == [yY] ]] && mv "$file" "$restore"
done
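Since your question actually asks the user to pick a file from the list, bash's select builtin is a natural fit: it prints a numbered menu and reads the reply for you. A minimal sketch along those lines:
trash="/home/user/Trash"
select file in "$trash"/*; do
[ -n "$file" ] && mv "$file" "$HOME" && break
done
select re-prompts on invalid input; an empty $file means the reply did not match a menu entry.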

Need to run a command in current directory only if the file is not set with executable

Here is the problem:
Use a bash for loop which loops over files that have the string "osl-guest" and ".tar.gz" in your current directory (using the ‘ls’ command, see sample output below), and runs the command ‘tar -zxf’ on each file individually ONLY IF the file does not have the executable bit set. For example, to run the ‘tar -zxf’ command on the file ‘file1’, the command would be: tar -zxf file1
Sample output of "ls -l":
-rw-r--r-- 1 lance lance 42866 Nov 1 2011 vmlinuz-2.6.35-gentoo-r9-osl-guest_i686.tar.gz
-rwxr-xr-x 1 lance lance 42866 Nov 1 2011 vmlinuz-3.4.5-gentoo-r3-osl-guest_i686.tar.gz
-rw-r--r-- 1 lance lance 42866 Nov 1 2011 vmlinuz-3.5.3-gentoo-r2-osl-guest_i686.tar.gz
You can perform the loop in the following way, without the need to call ls:
# For each file matching the pattern
for f in *osl-guest*.tar.gz; do
# If the file is not executable
if [[ ! -x "$f" ]]; then
tar -zxf "$f"
fi
done
The *osl-guest*.tar.gz pattern simply uses shell expansion to get the list of files you want, rather than making a call to ls.
The if statement checks whether the file is executable: -x is the test for an executable, and the ! negates the result, so the if block is only entered when the file is not executable.
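An equivalent one-liner, if you prefer find (hedged: -perm -u+x is GNU find syntax for "owner execute bit set", and ! negates it, matching the sample listing above):
find . -maxdepth 1 -name '*osl-guest*.tar.gz' ! -perm -u+x -exec tar -zxf {} \;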
