How to check if a Linux user has .sh files? - linux

I have to write a script in bash that will check whether the logged-in users have any .sh files.
Checking who is logged in is simple, just:
w | awk '{print $1}'
But I have no idea how to check whether they have any .sh files.

You need to read the output of the who command and use that in your find command.
Since the same user can be logged in multiple times, it's a good idea to remove duplicates before looping.
#!/bin/bash
# List logged-in users, drop duplicates, then search each user's home for *.sh files.
who | awk '{print $1}' | sort -u | while read -r username; do
    find /home/"$username" -name "*.sh"
done
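If a user's home directory is not directly under /home, the find will miss it. A variant that looks each home directory up in the passwd database is a bit more robust (a sketch, assuming getent is available):
#!/bin/bash
# Look up each logged-in user's actual home directory instead of assuming /home/<user>.
who | awk '{print $1}' | sort -u | while read -r username; do
    home=$(getent passwd "$username" | cut -d: -f6)
    [ -d "$home" ] && find "$home" -name "*.sh"
done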

Related

Nested for loops in Shellscript

I need help with a shell script.
I have a for loop in my script which creates file names containing a variable, like file.$variable.
For example, I have a list of servers in a servers.txt file. From there I read each server name, connect to that server and get some data from it. The file names will be file.$server.
Using a for loop, I create one file per server:
for server in `cat servers.txt`; do
    ssh $server ls | awk '{print $2}' | tee -a files.$server.txt
done
This one works fine.
Now, from those generated files, I need to run one more for loop that reads each file and feeds its content as input to another command, e.g.:
ex:
for file in `cat files.$servers.txt`; do
cat $file | awk '{print $2}' | tee -a column.$file.txt
But it is not working for me in the second loop. Please help.
In a nutshell, it's a nested loop. Excuse my English.
Thanks in advance.
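A minimal sketch of the nested structure, with two fixes: the second loop refers to $servers where it should use $server, and each loop needs its own done. The sketch reads the lists line by line with while read rather than for ... in `cat ...` (safer if entries ever contain spaces), and uses ssh -n so ssh does not consume the rest of servers.txt:
#!/bin/bash
# Outer loop: one iteration per server listed in servers.txt.
while read -r server; do
    # Collect this server's listing (-n keeps ssh from eating servers.txt).
    ssh -n "$server" ls | awk '{print $2}' | tee -a "files.$server.txt"
    # Inner loop: process each name collected for this server.
    while read -r file; do
        awk '{print $2}' "$file" | tee -a "column.$file.txt"
    done < "files.$server.txt"
done < servers.txt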

managing user accounts by group name, username and last login linux

I created a script called monitornsuaccounts.sh that should append its output to useraccountstatus.log, which is in the directory /var/local/nsu/logs/.
The output of this script should state every username and the following information about each: username, last login, user home directory and associated groups. Preferably the information should be arranged in columns.
The command I use for the usernames is sudo cat /etc/passwd | grep ‘/home’. last is used to find the last login of each user, and groups to find each user's groups. When I run the script, the output file only shows the data for my current user rather than for all users. Any recommendations anyone has would be greatly appreciated.
#!/bin/bash
usernames=sudo cat /etc/passwd | grep ‘/home’
echo “$usernames” > /home/daniel/names.txt
mlast=$(cat names.txt | xargs -n1 last)
mgroup=$(cat names.txt | xargs -n1 groups)
cat names.txt > /var/local/nsu/logs/useraccountstatus.log
echo “$mlast” >>/var/local/nsu/logs/useraccountstatus.log
echo “$mgroup” >>/var/local/nsu/logs/useraccountstatus.log
There are a lot of issues in your script.
Your definition of users: are you sure that this is what you want? For example, root does not have a directory under /home.
Watch your quotes. cat /etc/passwd | grep ‘/home’ (with typographic quotes) returns nothing, while cat /etc/passwd | grep 'home' (with straight quotes) returns a list of stanzas from /etc/passwd.
You'll probably want just a list of usernames, not a list of stanzas. Something along the lines of
cat /etc/passwd | grep 'home' | sed 's/:.*//'
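Equivalently, awk can do the filtering and the field extraction in one step, matching on the home-directory field rather than anywhere in the line:
awk -F: '$6 ~ /^\/home\// {print $1}' /etc/passwd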
Why sudo in sudo cat /etc/passwd? The file is world-readable.
Look at the assignment
usernames=sudo cat /etc/passwd | grep ‘/home’
This does not make sense: it runs cat /etc/passwd with the environment variable usernames set to the string sudo, so usernames is never actually assigned. You might try
usernames=`sudo cat /etc/passwd | grep '/home'| sed 's/:.*//'`
And that is just the first line of the script.
Anyway, if your script does not work as intended, you will need to do some debugging. The first question, especially if you are inexperienced, is "do the commands that I write give the result that I expect?" So in your case, you should have tried cat /etc/passwd | grep ‘/home’ and you would have seen that it does not give the expected results. Even with the correct quotes, you'll get a list of stanzas, which is also not what you expected. Have you looked at /home/daniel/names.txt, and was the content of the file what you wanted? I guess not: it was empty.
Just a quick hint, to get you started in the right direction (although there are still some issues, and people might object to the backticks):
#!/bin/bash
usernames=`sudo cat /etc/passwd | grep '/home'| sed 's/:.*//'`
mlast=`echo $usernames | xargs -n1 last`
mgroup=`echo $usernames| xargs -n1 groups`
echo $usernames > /var/local/nsu/logs/useraccountstatus.log
echo "$mlast" >>/var/local/nsu/logs/useraccountstatus.log
echo "$mgroup" >>/var/local/nsu/logs/useraccountstatus.log
You will want to polish this and make the output more useful.
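As one possible polish (a sketch, not a drop-in: it keeps the question's /home convention and log path, uses $(...) instead of backticks, and assumes last and groups behave as on a typical GNU/Linux system), printing one aligned row per user:
#!/bin/bash
# One row per user: username, last login, home directory, groups.
logfile=/var/local/nsu/logs/useraccountstatus.log
printf '%-16s %-44s %-28s %s\n' USER 'LAST LOGIN' HOME GROUPS > "$logfile"
for user in $(awk -F: '$6 ~ /^\/home\// {print $1}' /etc/passwd); do
    home=$(awk -F: -v u="$user" '$1 == u {print $6}' /etc/passwd)
    lastlogin=$(last -n 1 "$user" | head -n 1)
    grouplist=$(groups "$user" | sed 's/^.*: *//')
    printf '%-16s %-44s %-28s %s\n' "$user" "$lastlogin" "$home" "$grouplist" >> "$logfile"
done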

Linux CP using AWK output

I have been trying to learn more about Linux and have spent this morning focusing on the awk command. The command I have been trying to get to work is below.
ls -lRt lpftp.* | awk '{print $7, $9}' | mkdir -p $(awk '{print $1}') | ls -lRt lpftp.* | cp $(awk '{print $9, $7}')
Essentially I am trying to move each file in a directory into a subdirectory based on that file's last-modified day. The command first prints only the files I want, then uses mkdir to create a folder based on the day of the month each file was last modified. What I want to do after that is move each file into its associated directory; however, as the command is now, it moves every file into the 01 folder and prints out the following text
cp: 0653-436 12 is a directory.
Specify -r or -R to copy.
once for every directory.
Does anyone know how I can fix this issue, or if there is a better way to go about it?
ls -lRt lpftp.* | awk '{print $7, $9}' | while read day file ; do mkdir -p "$day"; cp "$file" "$day"; done
The commands between do and done are executed once for each line of output, with the first field awk prints in the day variable and the second in file. I used quotes here somewhat unnecessarily, as there will not be spaces in the variables given the way they are set.
The safest way to do something like this -- and the fastest to execute -- is to use awk on the data to output a shell script: in awk, print the mkdir and cp commands you expect to execute. Pipe the results into head(1) until you're satisfied; maybe look at the whole thing in less(1). Then execute as follows:
ls -lRt lpftp.* | awk -f script.awk | sh -ex
That will echo the commands to standard error and stop on the first error. If you're absolutely sure it's right, drop the x option.
The advantages of this approach over a loop or a bunch of subprocesses in awk (with the system function) are:
you can see what's going to happen, and what's happening
speed of execution
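For this question, such a script could be as small as the following (a hypothetical script.awk, assuming the same field layout as above: day of month in $7, file name in $9):
# script.awk: emit one mkdir and one cp command per listed file.
# The guard skips lines where $9 is empty (blank lines, the "total ..." line,
# and the directory headers that -R prints).
$9 != "" {
    printf "mkdir -p %s\n", $7
    printf "cp %s %s\n", $9, $7
}
Preview the generated commands with ls -lRt lpftp.* | awk -f script.awk | head before piping them into sh -ex.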

Regarding the use of xargs in find command

I have a scenario where I need to select all files named aliencoders.numeric-digits,
like aliencoders.1206,
and the find command should search all subdirectories too. If there is no such file it should not do anything.
I wrote:
find /home/jassi/ -name "aliencoders.[0-9]+" | xargs ls -lrt | awk print '$9'
But it says "no such file or directory" if there is no file starting with aliencoders.xx...
How can I bypass this error? I have to run this for several such directories, and it should give output only for those directories in which such a file pattern exists; otherwise there should be no warning and no xargs etc.
Currently, if no such file is there, ls ends up listing the current directory instead of /home/jassi.
If you don't want xargs to run when its input is empty, you can use -r or --no-run-if-empty, which are GNU extensions, as pointed out in the man page. So if you have that support, you can try
find /home/jassi/ -name "aliencoders.[0-9]+" | xargs -r ls -lrt | awk '{print $9}'
Alternatively, you can use find's -exec option to achieve this, something along these lines:
find /home/jassi/ -name "aliencoders.[0-9]+" -exec ls -lrt {} + | awk '{print $9}'
Hope this helps!
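One caveat that applies to both commands: -name matches shell glob patterns, not regular expressions, so "aliencoders.[0-9]+" only matches names that end in a single digit followed by a literal +. With GNU find, a regex test expresses the intended pattern:
find /home/jassi/ -regextype posix-extended -regex '.*/aliencoders\.[0-9]+' -exec ls -lrt {} +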
bash:
Try this command (under the bash shell, since most people use it and no shell was specified):
find /home/jassi/ -name "aliencoders.[0-9]+" 2>&1 | xargs ls -lrt | awk '{print $9}'
With 2>&1 you redirect the error messages from stderr to stdout. Once you have this single output stream, you can process it with your pipes etc.
Without this redirection, the error messages on stderr would continue to go to the console, cluttering the output, while only stdout was processed by the pipes.
All about redirection will give you more details about, and control over, redirection.
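If the goal is simply to suppress the error messages rather than process them, discarding stderr is another common approach; combined with xargs -r from the answer above, an empty result then produces no output at all:
find /home/jassi/ -name "aliencoders.[0-9]+" 2>/dev/null | xargs -r ls -lrt | awk '{print $9}'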
UPDATE:
tcsh:
Based on the use of tcsh (sometimes I think I'm the only one using it), it is possible to redirect stderr to stdout with the following construct:
command |& ...
so
find /home/jassi/ -name "aliencoders.[0-9]+" |& xargs ls -lrt | awk '{print $9}'
should help. Before, your error messages were bypassing your pipes and going directly to the console, and only stdout was being processed by your pipes. With this, all of your output goes through the pipes and you can filter out what you want.
Also, note the awk command at the end of the pipeline:
awk '{print $9}'
differs from what you posted originally (awk print '$9' is a syntax error: the program must be quoted as a whole).

500 internal error when uploading new files to server

I just switched to a new server host (VPS) and transferred all my files over. I noticed that nothing was working; everything was throwing a 500 internal error.
I then ran this via command line and it worked fine
for i in `cat /etc/trueuserdomains | awk '{print $2}'`; do chown $i.$i /home/$i/public_html -R; chown $i.nobody /home/$i/public_html; done
I'm not really sure what it does, but I think it changes the owner of the scripts. Anyway, I've noticed over the past week that any time I upload a new script that wasn't already on the server, it gives me the same 500 error and I have to run that command again. Is there some way I can prevent this from happening?
for i in `cat /etc/trueuserdomains | awk '{print $2}'`;
do
chown $i.$i /home/$i/public_html -R;
chown $i.nobody /home/$i/public_html;
done
A description, breaking down the above code:
cat /etc/trueuserdomains | awk '{print $2}'
This prints a list of users, taken from the second column of each line of the file /etc/trueuserdomains (there is likely only one line in this file, whose second word contains the user that the files should be owned by).
If you want to see exactly what that list is, run the following from the command line:
cat /etc/trueuserdomains | awk '{print $2}'
Then the for i part executes the two chown commands, replacing $i with each word gathered by the cat /etc/trueuserdomains | awk '{print $2}' command.
The first chown command recursively changes the owner and group of every file and directory under public_html to the user found by that command.
The second chown command then sets the group on public_html itself to nobody, a group that likely has no user account assigned to it on the host machine.
So that sorts out the ownership of your web server files but, as you say, does not quite address the root cause of your problem.
To fix the underlying problem, let us know the following:
How do you upload files to the server? What is the name of the tool? When you upload the files, can you give a sample of the owner and group that they have prior to running the above command?
