Permission Denied within Bash Script - linux

First of all, thanks again for getting me this far. This question is different from my first.
I keep receiving a permission denied error, even though the script has rwx permissions for everyone. It happens on the cat /etc/*-release part.
I am sure there is something that I am missing.
current_distro=$(cat /etc/*-release | grep "^ID=" | grep -E -o "[a-z]\w+")
close_distro=$(cat /etc/*-release | grep "^ID_LIKE=" | grep -E -o "[a-z]\w+")
echo "$current_distro"
echo "$close_distro"
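One hedged way to narrow this down: a single unreadable file among the /etc/*-release matches is enough to trigger the error, so a quick diagnostic sketch is to check each file the glob expands to:
# Report any match of the glob that cat would not be able to read
for f in /etc/*-release; do
    [ -r "$f" ] || echo "not readable: $f"
done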

Linux curl : no url found (or) curl: malformed url

I am setting up Docker on my Linux VM and have to run this command as part of the steps. Even though it mentions a url, and I tried changing one -o to -O, I am still getting those errors. What should I do?
This is the command I'm running:
sudo curl -L $(curl -L https://api.github.com/repos/docker/compose/releases/latest | grep "browser_download_url" | grep "$(uname -s)-$(uname -m)\"" | sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p') -o /usr/local/bin/docker-compose
The grep that filters for the system you are running is case-sensitive: uname -s prints Linux with an uppercase L, while the asset name in the download URL is lowercase, and that mismatch is the likely cause of your errors. Try making the match case-insensitive with -i:
sudo curl -L $(curl -L https://api.github.com/repos/docker/compose/releases/latest | grep "browser_download_url" | grep -i "$(uname -s)-$(uname -m)\"" | sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p') -o /usr/local/bin/docker-compose
Hope this helps!
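If the error persists, a quick sanity check is to run the inner pipeline on its own and confirm it prints exactly one https URL before handing it to the outer curl (a sketch; -s just silences the progress meter):
# Should print a single docker-compose release asset URL
curl -sL https://api.github.com/repos/docker/compose/releases/latest \
  | grep "browser_download_url" \
  | grep -i "$(uname -s)-$(uname -m)\"" \
  | sed -nr 's/\s+"browser_download_url":\s+"(https.*)"/\1/p'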

grep and tee to identify errors during installation

In order to identify whether my installation log contains errors I should notice, I am running grep on the file and writing the matches out with tee, because I need elevated permissions to write the output file.
sudo grep -inw ${LOGFOLDER}/$1.log -e "failed" | sudo tee -a ${LOGFOLDER}/$1.errors.log
sudo grep -inw ${LOGFOLDER}/$1.log -e "error" | sudo tee -a ${LOGFOLDER}/$1.errors.log
The thing is that the file is created even if grep didn't find anything.
Is there any way I can create the file only if grep found a match?
Thanks
You may replace tee with awk; it won't create the file if there is nothing to write to it:
... | sudo awk "{print; print \$0 >> \"errors.log\";}"
But this feature of awk is rarely used. I'd rather remove the error file afterwards if it turned out empty:
test -s errors.log || rm -f errors.log
And, by the way, you may grep for multiple words simultaneously:
grep -E 'failed|error' ...
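Putting both suggestions together, a sketch that only touches the error log when grep actually matched something (reusing ${LOGFOLDER} and $1 from the question):
# Capture matches first; write (and thus create) the log only if non-empty
matches=$(sudo grep -inwE "${LOGFOLDER}/$1.log" -e 'failed|error')
if [ -n "$matches" ]; then
    printf '%s\n' "$matches" | sudo tee -a "${LOGFOLDER}/$1.errors.log"
fi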

Bash grep command finding the same file 5 times

I'm building a little bash script to run another bash script that's found in multiple directories. Here's the code:
cd /home/mainuser/CaseStudies/
grep -R -o --include="Auto.sh" [\w] | wc -l
When I execute just that part, it finds the same file 5 times in each folder. So instead of getting 49 results, I get 245. I've written a recursive bash script before and I used it as a template for this problem:
grep -R -o --include=*.class [\w] | wc -l
This code has always worked perfectly, without any duplication. I've tried running the first command with and without the quotes, and I've tried -r as well. I've read through the bash documentation and I can't figure out how to prevent this duplication, or even why it's happening. Any thoughts on how to get around this?
As a separate but related question: could I launch Auto.sh inside each directory, so that the output of Auto.sh is dumped into that directory, without having to place Auto.sh in each folder? That would probably be much more efficient than what I'm currently doing, and it would probably also fix my current duplication problem.
This is the code for Auto.sh:
#!/bin/bash
index=1
cd /home/mainuser/CaseStudies/
grep -R -o --include=*.class [\w] | wc -l
grep -R -o --include=*.class [\w] |awk '{print $3}' > out.txt
while read LINE; do
echo 'Path '$LINE > 'Outputs/ClassOut'$index'.txt'
javap -c $LINE >> 'Outputs/ClassOut'$index'.txt'
index=$((index+1))
done <out.txt
Preferably, I would like to make it dump only the javap outputs for the application it's currently looking at. Since those .class files could be in any number of sub-directories, I'm not sure how to make them all dump into the top folder without executing a modified Auto.sh in the top directory of each application.
OK, so to fix the multiple matches:
grep -R -o --include="Auto.sh" [\w] | wc -l
Should be:
grep -R -l --include=Auto.sh '\w' | wc -l
The reason this was happening is that, with -o, grep was printing every match of [\w] inside Auto.sh, in other words every literal w and backslash, and there happened to be 5 of them in the file. With -l, grep lists each matching file only once.
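To make the underlying point explicit: in a POSIX bracket expression \w is not a word-character class, so [\w] is just the two-character set containing backslash and w. A quick demonstration:
# Prints only "w" and "\", not every word character
printf 'w x \\ y\n' | grep -o '[\w]'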
However, the overall fix that doesn't require placing Auto.sh in every directory is something like this:
MAIN_DIR=/home/mainuser/CaseStudies/
cd "$MAIN_DIR"
ls -d */ > DirectoryList.txt
while read -r LINE; do
cd "$LINE"
mkdir -p ProjectOutputs
bash /home/mainuser/Auto.sh
cd "$MAIN_DIR"
done <DirectoryList.txt
That calls this Auto.sh code:
index=1
grep -R -o --include=*.class '\w' | wc -l
grep -R -o --include=*.class '\w' | awk '{print $3}' > ProjectOutputs.txt
while read -r LINE; do
echo "Path $LINE" > "ProjectOutputs/ClassOut$index.txt"
javap -c "$LINE" >> "ProjectOutputs/ClassOut$index.txt"
index=$((index+1))
done <ProjectOutputs.txt
Thanks again for everyone's help!
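If you'd rather avoid the DirectoryList.txt temp file entirely, a hedged alternative sketch is to let find drive the per-directory loop (this assumes the project directories sit one level under CaseStudies):
# For each top-level project directory: cd in, make the output dir, run Auto.sh
find /home/mainuser/CaseStudies -mindepth 1 -maxdepth 1 -type d \
    -exec sh -c 'cd "$1" && mkdir -p ProjectOutputs && bash /home/mainuser/Auto.sh' _ {} \;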

Command won't run in script

I am trying to run a command in a shell script but it is not working.
Outside of the script, in the shell, I can run the following command on the needed host, and the file is created with the correct information inside.
sudo cat /etc/shadow | cut -d: -f1,8 | sed /:$/d > /tmp/expirelist.txt
When the command is run in my script, I first ssh to the host and then run the command, but I get the following error.
[batch#testserver01 bin]$ checkP.sh
Testserver02
/usr/local/bin/checkP.sh: line 7: /tmp/expirelist.txt: Permission denied
Here is part of the script. I have tried using ssh -o
#!/bin/bash
for SERVER in `cat /admin/lists/testlist`
do
echo $SERVER
ssh $SERVER sudo cat /etc/shadow | cut -d: -f1,8 | sed /:$/d > /tmp/expirelist.txt
...
What is causing the Permission denied error?
Don't use hardcoded temporary filenames -- when you do, it means that if one user (say, your development account) already ran this script and left a file named /tmp/expirelist.txt behind, no other user can run the same script.
tempfile=$(mktemp -t expirelist.XXXXXX)
ssh "$SERVER" sudo cat /etc/shadow | cut -d: -f1,8 | sed '/:$/d' >"$tempfile"
By using mktemp, you guarantee that each invocation will use a new, distinct, and previously-nonexisting temporary file, preventing any chance of conflict.
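If cleanup matters, a common companion idiom (a sketch, assuming a bash script) is to register a trap so the temporary file is removed however the script exits:
tempfile=$(mktemp -t expirelist.XXXXXX)
trap 'rm -f "$tempfile"' EXIT    # clean up the temp file on any exit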
By the way -- if you want the file to be created on the remote system rather than the local system, you'd want to do this instead:
ssh "$SERVER" <<'EOF'
tempfile=$(mktemp -t expirelist.XXXXXX)
sudo cat /etc/shadow | cut -d: -f1,8 | sed '/:$/d' >"$tempfile"
EOF
I'm not sure about this, but you could be running into an issue with having the 'sudo' within your script. You could try removing the 'sudo' from the script, and running it like this:
$ sudo checkP.sh

How many open files for each process running for a specific user in Linux

I am running Apache and JBoss on Linux, and sometimes my server halts unexpectedly, saying that the problem was Too Many Open Files.
I know that we can set higher limits for nproc and nofile in /etc/security/limits.conf to fix the open-files problem, but I am trying to get better visibility, such as using watch to monitor the counts in real time.
With this command line I can see how many open files per PID:
lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n
Output (column 1 is the number of open files, column 2 is the PID; the "1 PID" row is lsof's header line being counted):
1 PID
1335 13880
1389 13897
1392 13882
If I could just add the watch command it would be enough, but the code below isn't working:
watch lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n
You should put the command inside quotes, like this:
watch 'lsof -u apache | awk '\''{print $2}'\'' | sort | uniq -c | sort -n'
or you can put the command into a shell script like test.sh and then use watch.
chmod +x test.sh
watch ./test.sh
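For reference, test.sh would just wrap the same pipeline:
#!/bin/bash
# test.sh: the lsof pipeline from above, wrapped so watch can run it
lsof -u apache | awk '{print $2}' | sort | uniq -c | sort -n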
This command will tell you how many files Apache has opened:
ps -A x | grep '[a]pache' | awk '{print $1}' | xargs -I '{}' ls /proc/{}/fd | wc -l
The [a]pache pattern keeps grep from matching its own process entry. You may have to run it as root in order to access the process fd directories. This sounds like you've got a web application which isn't closing its file descriptors; I would focus my efforts on that area.
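As for the limits.conf route the question mentions, the entries would look something like this (the numbers are illustrative placeholders, not a recommendation):
# /etc/security/limits.conf: raise open-file limits for the apache user
apache soft nofile 8192
apache hard nofile 16384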
