Issue with mv command in shell script - Linux

I'm trying to run an mv command from a shell script, but it gives me:
mv: cannot stat `/opt/logs/merchantportal/logger.log.20160501.*': No such file or directory
mv: cannot stat `/opt/logs/merchantapi/logger.log.20160501.*': No such file or directory
This is my shell script:
#!/bin/bash
now="$(date +'%Y%m%d')"
merchantPortalLogsPath="/opt/logs/merchantportal"
merchantApiLogsPath="/opt/logs/merchantapi"
currentDate="$(date +%Y%m%d)"
olderDate="$(date "+%Y%m%d" -d "1 days ago")"
merchantPortalLogsPathBackup=$merchantPortalLogsPath"."$olderDate
merchantApiLogsPathBackup=$merchantApiLogsPath"."$olderDate
mkdir $merchantPortalLogsPathBackup
mkdir $merchantApiLogsPathBackup
echo $merchantPortalLogsPath"/logger.log."$olderDate".*" $merchantPortalLogsPathBackup"/"
echo $merchantApiLogsPath"/logger.log."$olderDate".*" $merchantApiLogsPathBackup"/"
mv $merchantPortalLogsPath"/logger.log."$olderDate".*" $merchantPortalLogsPathBackup"/"
mv $merchantApiLogsPath"/logger.log."$olderDate".*" $merchantApiLogsPathBackup"/"
But the backup directories are created successfully.

".*"
Putting the * inside double quotes prevents the shell from treating it as a wildcard; it is passed to mv as a literal * character instead. Change your script so that it does not double quote the *. For example:
mv ${merchantPortalLogsPath}/logger.log.${olderDate}.* ${merchantPortalLogsPathBackup}/
mv ${merchantApiLogsPath}/logger.log.${olderDate}.* ${merchantApiLogsPathBackup}/
Note: technically, you should also double quote the variable expansions, to handle paths containing spaces and other special characters. I have not shown that above, to focus on the problem at hand.
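For completeness, a version with the variable expansions quoted but the glob itself left unquoted might look like this (a sketch reusing the variable names from the script above):
mv "${merchantPortalLogsPath}"/logger.log."${olderDate}".* "${merchantPortalLogsPathBackup}"/
mv "${merchantApiLogsPath}"/logger.log."${olderDate}".* "${merchantApiLogsPathBackup}"/
Only the * sits outside the quotes, so the shell still expands it, while spaces in the variables' values no longer split the arguments.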

The log directories exist, but the log files within those directories do not. Since mv cannot move a file that does not exist, it complains, rather vaguely:
No such file or directory
Note: IMHO vague error messages are documentation/interface bugs. If the error had said only:
No such file
and had not left the user wondering whether a directory was missing, it would have been less puzzling, since that message clearly implies that the directory in which the file was supposed to exist does exist.
But consider this egregious GNU mv example, where a directory /tmp/a/ does not exist:
mv /tmp/a/b/c/d /tmp/foo
Output to STDERR:
mv: cannot stat '/tmp/a/b/c/d': No such file or directory
Now, the directory /tmp/a/ doesn't exist; neither do the directories /tmp/a/b/ and /tmp/a/b/c/, nor the file /tmp/a/b/c/d. The user is given no indication of which of those is the problem, and it's even possible (unusual, but possible) that /tmp/ doesn't exist. Whereas a few extra lines of code could produce an error message that is far more useful, like:
mv: cannot stat '/tmp/a/b/c/d': `/tmp/` exists, but not directory `/tmp/a/`
...which collectively would probably save years of user-time.
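Until mv produces friendlier output, a small helper can report the first missing component itself. A minimal sketch (the check_path function is hypothetical, not part of coreutils):
# Walk a path component by component and report the first one that is missing
check_path() {
  local p="" part
  IFS=/ read -ra parts <<< "$1"
  for part in "${parts[@]}"; do
    [ -z "$part" ] && continue      # skip the empty field before the leading /
    p="$p/$part"
    if [ ! -e "$p" ]; then
      echo "missing: $p" >&2
      return 1
    fi
  done
}
check_path /tmp/a/b/c/d             # in the example above would print: missing: /tmp/a
The namei utility from util-linux does something similar, walking a path and showing each component, which makes it easy to spot where the chain breaks.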

Related

Is mv * a destructive command on a directory with 2 or more files? What other linux commands have similar behavior?

When I run mv * with no destination directory, on a directory with, say, 10 files, I get an error as follows:
root@tryit-apparent:~/test2# ls
file1.txt file10.txt file2.txt file3.txt file4.txt file5.txt file6.txt file7.txt file8.txt file9.txt
root@tryit-apparent:~/test2# mv *
mv: target 'file9.txt' is not a directory
When I run it on a directory with two files, it overwrites one file with the other, leaving just one file.
root@tryit-apparent:~/test# ls
tempfile tempfile2
root@tryit-apparent:~/test# mv *
root@tryit-apparent:~/test# ls
tempfile2
I read the man pages but couldn't understand this behaviour. I would like to know what's causing this behaviour and what's going on under the hood.
What other linux commands have such pitfalls and have destructive actions that are executed silently if the user is not aware of such behavior?
In Unix, unlike some other OSes, wildcards like * are expanded by the shell, before being passed to the command being run. So when you run mv * with tempfile and tempfile2 as the only files in the current directory, what the shell actually executes is mv tempfile tempfile2, which as normal will rename the first file over the second one, erasing the previous contents of tempfile2. The shell doesn't know or care that this command treats its last argument specially, and mv has no way of knowing that its two arguments came from a wildcard expansion. Hence the behavior you're seeing.
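A quick way to see what the shell is really doing is to prefix the command with echo, so the expanded argument list is printed instead of executed:
$ ls
tempfile  tempfile2
$ echo mv *
mv tempfile tempfile2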
You can have similar issues even with more than two files. For instance, if you have files named tempfile1 through tempfile9 and a subdirectory named zyzzx, then mv * will move all your temp files into the zyzzx subdirectory.
Mostly, you just have to be aware that this is how wildcards work, and use caution with commands that treat one of their arguments specially (e.g. as a destination). cp is another one to watch out for, for the same reason. For interactive usage, you may want to get used to using the -i option to mv and cp, which asks for confirmation before overwriting files; or use an alias to make this the default.
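For example, making -i the default for interactive shells takes only a couple of lines in ~/.bashrc (a sketch; adjust to taste):
# prompt before overwriting an existing file
alias mv='mv -i'
alias cp='cp -i'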
mv is intended to move or rename a file or a directory, so you need a source and a destination.
If the path of the file is unchanged, then it becomes a rename operation.
If the path changes and the name remains the same, it's a move.
You can do both at once by changing both the path and the name.
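A few illustrative invocations (hypothetical file names):
mv report.txt report.old          # same directory, new name: a rename
mv report.txt /tmp/report.txt     # new directory, same name: a move
mv report.txt /tmp/report.old     # both the path and the name change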
Man pages can be challenging to wrap your head around.
Googling can help: https://www.howtoforge.com/linux-mv-command/
Off the top of my head, you could do a cp operation followed by a rm to achieve similar results, but that's two steps, rather than one.
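Something along these lines, for example:
cp report.txt /tmp/ && rm report.txt   # roughly what mv does, in two steps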

shell script mv is throwing unhelpful error "No such file or directory" even though I see it

I need to use a shell script to move all files in a directory into another directory. I manually did this without a problem and now scripting it is giving me an error on the mv command.
Inside the directory I want to move files out of are 2 directories, php and php.tmp. The error I get is cd: /path/to/working/directory/php: No such file or directory. I'm confused because it is there to begin with and listed when I ls the working directory.
The relevant part of the script is:
ls $PWD #ensure the files are there
mv $PWD/* /company/home/directory
ls /company/home/directory #ensure the files are moved
When I use ls $PWD I see the directories I want to move but the error afterward says it doesn't exist. Then when I ssh to the machine this is running on I see the files were moved correctly.
If it matters the directory I am moving files from is owned by a different user but the shell is executing as root.
I don't understand why I would get this error, so any help would be great.
Add a / after the destination path to make clear that you want to move the files into the directory, not rename anything. You should try this:
mv $PWD/* /home/user/directory/
Are your variables properly quoted? You could try:
ls "$PWD" #ensure the files are there
mv "$PWD"/* "/company/home/directory"
ls "/company/home/directory" #ensure the files are moved
If any of your file or directory names contain characters such as spaces or tabs, your mv command may not be seeing the argument list you think it is seeing.
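As a quick illustration of what unquoted expansion does to a path containing a space (hypothetical directory name):
dir="/company/home/my files"
mv $dir/* /tmp/      # word-splits into: mv /company/home/my files/* /tmp/
mv "$dir"/* /tmp/    # the path stays intact; only the glob is expanded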

Facing issues in making a bash script work

I'm new to Bash scripting. My script's intended role is to access a provided path and then apply some software (RTG - Real Time Genomics) commands to the data in that path. However, when I try to execute the script from the CLI, it gives me the following error:
ERROR:There were invalid input file paths
The path I have provided in the script is accurate. That is, in the original directory, where the program 'RTG' resides, I have made folders accordingly, like /data/reads/NA19240, and placed both *_1.fastq and *_2.fastq files inside NA19240.
Here is the script:
#!/bin/bash
for left_fastq in /data/reads/NA19240/*_1.fastq; do
right_fastq=${left_fastq/_1.fastq/_2.fastq}
lane_id=$(basename ${left_fastq/_1.fastq})
rtg format -f fastq -q sanger -o ${lane_id} -l ${left_fastq} -r ${right_fastq} --sam-rg "@RG\tID:${lane_id}\tSM:NA19240\tPL:ILLUMINA"
done
I have tried many workarounds but am still not able to get past this error. I will be really grateful if you guys can help me fix this problem. Thanks.
After adding set -aux to the bash script for debugging purposes, I'm getting the following output:
adnan@adnan-VirtualBox[Linux] ./format.sh
+ for left_fastq in '/data/reads/NA19240/*_1.fastq'
+ right_fastq='/data/reads/NA19240/*_2.fastq'
++ basename '/data/reads/NA19240/*'
+ lane_id='*'
+ ./rtg format -f fastq -q sanger -o '*' -l '/data/reads/NA19240/*_1.fastq' -r '/data/reads/NA19240/*_2.fastq' --sam-rg '@RG\tID:*\tSM:NA19240\tPL:ILLUMINA'
Error: File not found: "/data/reads/NA19240/*_1.fastq"
Error: File not found: "/data/reads/NA19240/*_2.fastq"
Error: There were 2 invalid input file paths
You need to set the nullglob option in the script, like so:
shopt -s nullglob
By default, non-matching globs are expanded to themselves. The output you got by setting set -aux indicates that the file glob /data/reads/NA19240/*_1.fastq is getting interpreted literally. The only way this would happen is if there were no files found, and nullglob was disabled.
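A minimal sketch of the difference, assuming the pattern matches nothing:
shopt -u nullglob
for f in /data/reads/NA19240/*_1.fastq; do echo "$f"; done
# prints the literal pattern: /data/reads/NA19240/*_1.fastq
shopt -s nullglob
for f in /data/reads/NA19240/*_1.fastq; do echo "$f"; done
# prints nothing; the loop body never runs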
In the original directory, where the program 'RTG' resides, I have
made folders accordingly like /data/reads/NA19240 and placed both
*_1.fastq and *_2.fastq files inside NA19240.
So you say your data folders are in the original directory (whatever that may be), but in the script you wrongly specify them as being in the root directory (by the leading /).
Since you start the script in the original directory, just drop the leading / and use a relative path:
for left_fastq in data/reads/NA19240/*_1.fastq

tar files using the -C option and wildcard

I'm passing a tar command to a shell executor in an application, but it seems that my tar syntax is incorrect. (This is Windows (the bsdtar command), but it works the same as on Linux as far as I know; I can also test on Linux if need be.)
I'm trying to tar-gzip all files ending in .ext without storing the full path in my tar file.
tar -cvzf test.tar.gz -C C:/mydir/toTar/ *.ext
I get an error:
tar: *.ext: Cannot stat: No such file or directory
I can give the whole path, but then my tar will contain C->mydir->toTar->. I just want the files in the result, not mydir and toTar.
So far the only thing that is close to what I want is . instead of *.ext, but that tars other things too, which I obviously don't want.
The problem is that * is a wildcard character that is expanded by the shell, but you are bypassing the shell and calling tar directly. The tar command is looking for one file which is named literally *.ext and it does not exist.
Your options are:
Expand the list of files in your own code and pass that list to tar.
Call the shell from your code, with something like /bin/sh -c 'tar ...' (see the sketch below).
With option 2 there may be security implications -- if the shell sees something it thinks is a command, it will run it. Option 1 is therefore safer, but it's up to you which makes more sense.
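Option 2 might look roughly like this (a sketch, assuming a POSIX shell is available to the executor; cd first so the glob expands relative to the target directory, rather than relying on tar's -C):
/bin/sh -c 'cd C:/mydir/toTar && tar -cvzf ../test.tar.gz *.ext'
The archive is written one level up so it cannot end up inside its own input directory.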
I am befuddled by how you're using DOS-style paths in an apparently Linux-like context, but this is how I'd do it. Hopefully the concept is clear even if the details may be incorrect.
cd C:/mydir/toTar/
mkdir ~/tmpwork
find . -name '*.ext' > ~/tmpwork/extfiles
tar czvfT ~/tmpwork/test.tar.gz ~/tmpwork/extfiles
rm ~/tmpwork/extfiles
There is no way around the shell expansion without using pipes, etc.

shell script to increment file names when a directory contents changes (centos)

I have a folder containing 100 pictures from a webcam. When the webcam sends a new picture, I want this one to replace number 0 and have all the other jpg's move up one number. I've set up a script where inotify monitors a directory. When a new file is put into this directory the script renumbers all the files in the picture directory, renames the new uploaded picture and puts it in the folder with the rest.
This script 'sort of' works. 'Sort of', because sometimes it does what it's supposed to do and sometimes it complains about missing files:
mv: cannot stat `webcam1.jpg': No such file or directory
Sometimes it complains about only one file, sometimes 4 or 5. Of course I made sure all 100 files were there, properly named before the script was run. After the script is run, the files it complains about are indeed missing.
This is the script. In the version I tested, the full paths to the directories are used, of course.
#!/bin/bash
dir1= /foo # directory to be watched
while inotifywait -qqre modify "$dir1"; do
cd /f002 #directory where the images are
for i in {99..1}
do
j=$(($i+1))
f1a=".jpg"
f1="webcam$i$f1a"
f2="test"
f2="webcam$j$f1a"
mv $f1 $f2
done
rm webcam100.jpg
mv dir1/*.jpg /f002/webcam0.jpg
done
I also need to implement some error checking, but for now I don't understand why it is missing files that are there.
You are executing the following mv commands:
mv webcam99.jpg webcam100.jpg
...
mv webcam1.jpg webcam2.jpg
The step mv webcam0.jpg webcam1.jpg is missing. After the first change to "$dir1" you have the following files in /f002:
webcam99.jpg
...
webcam2.jpg
webcam0.jpg
After each subsequent change to "$dir1" you will have the following:
webcam99.jpg
...
webcam3.jpg
webcam0.jpg
In other words -- you are forgetting to move webcam0.jpg to webcam1.jpg. I would modify your script like this:
rm webcam99.jpg
for i in {98..0}
do
j=$(($i+1))
f1a=".jpg"
f1="webcam$i$f1a"
f2="test"
f2="webcam$j$f1a"
mv $f1 $f2
done
mv "$dir1"/*.jpg /f002/webcam0.jpg
