How can I suppress error messages for a shell command?
For example, if there are only jpg files in a directory, running ls *.zip gives an error message:
$ ls *.zip
ls: cannot access '*.zip': No such file or directory
Is there an option to suppress such error messages? I want to use this command in a Bash script, but I want to hide all errors.
Most Unix commands, including ls, will write regular output to standard output and error messages to standard error, so you can use Bash redirection to throw away the error messages while leaving the regular output in place:
ls *.zip 2>/dev/null
will redirect any error messages on stderr to /dev/null (i.e. you won't see them).
Note the return value (given by $?) will still reflect that an error occurred.
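For example (the exact status code is implementation-specific; GNU ls reports 2 for serious trouble such as an inaccessible command-line argument):
$ ls *.zip 2>/dev/null
$ echo $?
2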
To suppress error messages and also return the exit status zero, append || true. For example:
$ ls *.zip && echo hello
ls: cannot access *.zip: No such file or directory
$ ls *.zip 2>/dev/null && echo hello
$ ls *.zip 2>/dev/null || true && echo hello
hello
$ touch x.zip
$ ls *.zip 2>/dev/null || true && echo hello
x.zip
hello
I attempted ls -R [existing file] and got an immediate error:
ls: cannot access 'existing file': No such file or directory
So, I used the following instead:
ls -R 2>/dev/null | grep -i '[existing file]'
For example:
ls -R 2>/dev/null | grep -i text
Or, in your case:
ls -R 2>/dev/null | grep -i '\.zip'
This is my solution on a Raspberry Pi 3 running Buster:
ls -R 2>/dev/null | grep -i '[existing file]'
2>/dev/null is very useful in Bash scripts to avoid useless warnings or errors.
Do not forget the slash character: 2>dev/null does not discard anything; it redirects to a local file named dev/null.
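A minimal demonstration of that pitfall (assuming no dev/ directory exists under the current directory):
$ ls *.zip 2>dev/null
bash: dev/null: No such file or directory
If a dev/ directory did exist, the shell would silently create a regular file named dev/null and write the error messages into it.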
The following commands work in my terminal but not in my shell script. I later found out that my terminal was /bin/tcsh. Can somebody tell me what changes I need to make for /bin/sh? Here are the commands I need to change:
cp source_dir/*/dir1/*.xml destination_dir/
Error in sh: cp: cannot stat `source_dir/*/dir1/*.xml': No such file or directory
sed -i "s+${initial_name}+${final_name}+" $file_name
This one does not complain but does not work either.
I am adding an example for testing. The code is meant to rename the xml files and also edit the matching name inside each xml file. For example:
The file name crr.ya.na.aa.xml should be changed to aa.xml
The same name inside crr.ya.na.aa.xml should also be changed from crr.ya.na.aa to aa
Here is the code:
#!/bin/sh
# Create dir structure for testing
rm -rf audience
mkdir audience
mkdir audience/dir1 audience/dir2 audience/dir3
mkdir audience/dir1/ipxact audience/dir2/ipxact audience/dir3/ipxact
touch audience/dir1/ipxact/crr.ya.na.aa.xml
echo "<spirit:name>crr.ya.na.aa</spirit:name>" > audience/dir1/ipxact/crr.ya.na.aa.xml
touch audience/dir2/ipxact/crr.ya.na.bb.xml
echo "<spirit:name>crr.ya.na.bb</spirit:name>" > audience/dir2/ipxact/crr.ya.na.bb.xml
touch audience/dir3/ipxact/crr.ya.na.cc.xml
echo "<spirit:name>crr.ya.na.cc</spirit:name>" > audience/dir3/ipxact/crr.ya.na.cc.xml
# Create a dir for ipxact_drop files if it does not exist
mkdir -p ipxact_drop
rm -rf ipxact_drop/*
cp audience/*/ipxact/*.xml ipxact_drop/
ls ipxact_drop/ > ipxact_drop_files.log
awk '{ split($0,a,"."); print a[length(a)-1] "." a[length(a)] }' ipxact_drop_files.log > file_names.log
awk '{ split($0,a,"."); print "mv ipxact_drop/" $0 " ipxact_drop/" a[length(a)-1] "." a[length(a)] }' ipxact_drop_files.log > command.log
chmod +x command.log
./command.log
while read line
do
echo ipxact_drop/$line
initial_name=`grep -m 1 crr ipxact_drop/$line | sed -e 's/<spirit:name>//' | sed -e 's/<\/spirit:name>//' `
final_name="${line%.*}"
echo $initial_name
echo $final_name
sed -i "s+${initial_name}+${final_name}+" ipxact_drop/$line
done < file_names.log
echo " ***** SCRIPT RUN FINISHED *****"
Only the sed command at the end is not working.
I was reading some other posts and understood that xml files can have problems with scripts. Here is what has worked for me up to now:
To remove the cp error: replace #!/bin/sh -f with #!/bin/sh
To remove the sed error for the test input: replace sed -i ...... with sed -i.back ....
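For what it's worth, here is a minimal sketch of the same rename-and-edit step done with plain POSIX parameter expansion, so the awk-generated command file is not needed (a hypothetical simplification; it assumes every copied file ends in .xml and that the old name appears verbatim inside the file):
#!/bin/sh
for f in ipxact_drop/*.xml; do
    base=${f##*/}          # e.g. crr.ya.na.aa.xml
    stem=${base%.xml}      # crr.ya.na.aa
    short=${stem##*.}      # aa
    # Edit the name inside the file first, then rename the file itself.
    # sed -i.back keeps a .back copy, matching the fix noted above.
    sed -i.back "s+${stem}+${short}+" "$f"
    mv "$f" "ipxact_drop/${short}.xml"
done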
I am trying to execute the following command:
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
ls: cannot access /var/log/alert_ARCDB.log: No such file or directory
tail: cannot follow `-' by name
Notice the error returned. When I log in over ssh separately and then execute
tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )
I see the below:
# ls -t /var/log/alert_ARCDB.log | head -n1
/var/log/alert_ARCDB.log
Why is that happening and how do I fix it? I am trying to do this in one line as I don't want to create a script file.
Thanks a lot
Shell parameter expansion happens before command execution.
Here's a simple example. If I type...
ls "$HOME"
...the shell replaces $HOME with the path to my home directory first, then runs something like ls /home/larsks. The ls command has no idea that the command line originally had $HOME.
If we look at your command...
$ ssh root@10.10.10.50 "tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )"
...we see that you're in exactly the same situation. The $(ls -t ...) expression is expanded before ssh is executed. In other words, that command is running on your local system.
You can inhibit the shell expansion on your local system by using single quotes. For example, running:
echo '$HOME'
will produce:
$HOME
So you can run:
ssh root@10.10.10.50 'tail -F -n 1 $(ls -t /var/log/alert_ARCDB.log | head -n1 )'
But there's another problem here. If /var/log/alert_ARCDB.log is a file, your command makes little sense: running ls -t on a single file just prints that same filename back, so the $(...) gains you nothing.
If alert_ARCDB.log is a directory, you have a different problem. The result of ls /some/directory is a list of filenames without any directory prefix. If I run something like:
ls -t /tmp
I will get output like
file1
file2
If I do this:
tail $(ls -t /tmp | head -1)
I end up with a command that looks like:
tail file1
And that will fail, because there is no file1 in my current directory.
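One way around that particular failure (assuming the target is a directory) is to re-attach the directory prefix yourself:
tail "/tmp/$(ls -t /tmp | head -1)"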
One approach would be to pipe the commands you want to perform to ssh. One simple way to achieve that is to first create a function that will echo the commands you want executed:
remote_commands()
{
echo 'cd /var/log/alert_ARCDB.log'
echo 'tail -F -n 1 "$(ls -t | head -n1 )"'
}
The cd will allow you to use the relative path listed by ls. The single quotes make sure that everything will be sent as-is to the remote shell, with no local expansion occurring.
Then you can do
ssh root@10.10.10.50 bash < <(remote_commands)
This assumes alert_ARCDB.log is a directory (or else I am not sure why you would want to add head -n1 after that).
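If you would rather keep it literally on one line, the same idea can be collapsed into a single-quoted remote command (a sketch under the same assumption that alert_ARCDB.log is a directory):
ssh root@10.10.10.50 'cd /var/log/alert_ARCDB.log && tail -F -n 1 "$(ls -t | head -n1)"'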
I'm trying to learn shell commands. I know ls >output.txt saves the output to output.txt. However, what exactly does ls -z >output.txt do? In my book, it says it does not save the output to output.txt. If this is true, where does it save / print it? Also, is -z what causes it to not save it?
Lastly, what does ls -z 2>output.txt do? I know 2 refers to stderr (so the standard error). Does this mean it saves the error (if any) of ls in output.txt? If yes, where does the stdout get printed / saved? And what does the -z mean in this case?
Thanks in advance!
There is no option -z for ls on Linux. So let's see what happens:
$ LANG=C ls -z > /tmp/x
ls: invalid option -- 'z'
Try 'ls --help' for more information.
The error message goes to STDERR which is connected to the terminal.
The standard output (which is empty) is redirected to /tmp/x so we get an empty file.
$ LANG=C ls -z 2> /tmp/x
In this second scenario, STDOUT is connected to the terminal; however, there is no output. The error message which got sent to STDERR lands in /tmp/x:
$ cat /tmp/x
ls: invalid option -- 'z'
Try 'ls --help' for more information.
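For completeness, a small sketch of capturing both streams in the same file by duplicating STDERR onto STDOUT (the 2>&1 must come after the > redirection):
$ LANG=C ls -z > /tmp/x 2>&1
$ cat /tmp/x
ls: invalid option -- 'z'
Try 'ls --help' for more information.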
I am trying to ls the directories and print them out, but nothing is being displayed. I am able to SSH and execute the first pwd; however, anything within the for loop has no output. I know for sure there are directories called event-test- because I've checked manually. I've manually entered the directory (/data/kafka/tmp/kafka-logs/) and run this piece of code, and the correct output appeared, so I'm not sure why it fails over SSH.
Manually entered, with correct output:
for i in `ls | grep "event-test"`; do echo $i; done;
script:
for h in ${hosts[*]}; do
ssh -i trinity-prod-keypair.pem bc2-user@$h << EOF
sudo bash
cd /data/kafka/tmp/kafka-logs/
pwd
for i in `ls | grep "event-test-"`; do
pwd
echo $i;
done;
exit;
exit;
EOF
done
It is because
`ls | grep "event-test-"`
is executed on your local host, not on the remote host: the here-document delimiter is unquoted, so your local shell expands the backticks before anything is sent over ssh. Besides, parsing ls is error-prone and not even needed here. You can do:
for h in "${hosts[@]}"; do
ssh -t -t -i trinity-prod-keypair.pem bc2-user@$h <<'EOF'
sudo bash
cd /data/kafka/tmp/kafka-logs/
pwd
for i in *event-test-*; do
pwd
echo "$i"
done
exit
exit
EOF
done
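The quoted delimiter (<<'EOF' instead of << EOF) is what stops your local shell from expanding anything inside the here-document. A quick local illustration (the output of the first form is whatever your own $HOME happens to be):
cat <<EOF
$HOME
EOF
prints your home directory path, while
cat <<'EOF'
$HOME
EOF
prints the literal string $HOME.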
If you do parse ls, ls -1 prints one entry per line (though ls already does that when its output goes to a pipe). Additionally, when trying to find files whose names contain "event-test-" I would recommend the find command. Since I am not completely sure what you are attempting to do other than list the locations of these "event-test" files, I'd recommend something more similar to the following:
for h in "${hosts[@]}"; do ssh -i trinity-prod-keypair.pem bc2-user@$h -t -t "find /data/kafka/tmp/kafka-logs/ -type f -name '*event-test-*'" ; done
This will give you a pretty output of the full path to the file and the file name.
I hope this helps.
I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the bash documentation and I can't seem to find a way to prevent this duplication, or even an explanation of why it happens. Any thoughts on how to get around this?
As a separate but related question: could I launch Auto.sh inside each directory, so that the output of Auto.sh is dumped into that directory, without having to place Auto.sh in each folder? That would probably be much more efficient than what I'm currently doing, and it would also probably fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump in the top folder without executing a modified Auto.sh in the top directory of each application.
Ok, so to fix the multiple find:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening was that -o prints every individual match, and the pattern was matching instances of the letter w in Auto.sh, which occurred 5 times in the file. With -l, each matching file is listed only once.
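A quick local illustration of the difference (the counts assume a file containing five w characters, as described above):
$ grep -o 'w' Auto.sh | wc -l
5
$ grep -l 'w' Auto.sh | wc -l
1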
However, the overall fix that doesn't require having to place Auto.sh in every directory, is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd "$MAIN_DIR"
ls -d */ > DirectoryList.txt
while read LINE; do
    cd "$LINE"
    mkdir -p ProjectOutputs
    bash /home/mainuser/Auto.sh
    cd "$MAIN_DIR"
done <DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read LINE; do
echo 'Path '$LINE > 'ProjectOutputs/ClassOut'$index'.txt'
javap -c $LINE >> 'ProjectOutputs/ClassOut'$index'.txt'
index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!