I just switched to a new server host (VPS) and transferred all my files over. I noticed that nothing was working; everything was throwing a 500 Internal Server Error.
I then ran this via the command line and it worked fine:
for i in `cat /etc/trueuserdomains | awk '{print $2}'`; do chown $i.$i /home/$i/public_html -R; chown $i.nobody /home/$i/public_html; done
I'm not really sure what it does, but I think it changes the owner of the script. Anyway, I've noticed over the past week that any time I upload a new script that wasn't already on the server, it gives me the same 500 error and I have to run that command again. Is there some way I can prevent this from happening?
for i in `cat /etc/trueuserdomains | awk '{print $2}'`;
do
chown $i.$i /home/$i/public_html -R;
chown $i.nobody /home/$i/public_html;
done
Breaking down the above code:
cat /etc/trueuserdomains | awk '{print $2}'
This prints a list of users, one for each word found in the second column of the file /etc/trueuserdomains (there is likely only one line in this file, and its second word is the user that the files should be owned by).
If you want to see exactly what that list is, run the following from the command line:
cat /etc/trueuserdomains | awk '{print $2}'
Then the for i loop executes the two chown commands, replacing $i with each word gathered from the cat /etc/trueuserdomains | awk '{print $2}' command.
The first chown command recursively changes the owner and group of every file and directory under public_html to that user.
The second chown command then sets the group on public_html itself to nobody, a group that likely has no user account assigned to it on the host machine.
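As an aside, here is a minimal sketch of a slightly more defensive version of that loop (same logic, same /etc/trueuserdomains format assumed); ':' is the portable owner:group separator, and the quoting guards against stray whitespace:
#!/bin/bash
# Sketch only: the same chown loop as above, with ':' as the owner:group
# separator and quoted expansions in case a value contains odd characters.
awk '{print $2}' /etc/trueuserdomains | while read -r user; do
    chown -R "$user:$user" "/home/$user/public_html"
    chown "$user:nobody" "/home/$user/public_html"
done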
So running those chown commands sorts out the ownership of your web server files but, like you say, it does not explain the root cause of your problem.
To fix the underlying problem, let us know the following:
How do you upload files to the server? What is the name of the tool? When you upload the files can you give a sample of the owner and group permissions that they have prior to running the above command?
I have to write a script in bash that will check whether logged-in users have any .sh files.
Checking who is logged in is simple, just using:
w | awk '{print $1}'
But I have no idea how to check whether they have any .sh files.
You need to read the output of the who command and use that in your find command.
Since the same user can be logged in multiple times, it's a good idea to remove duplicates before looping.
#!/bin/bash
# List logged-in users, remove duplicates, then search each home directory.
who | awk '{print $1}' | sort -u | while read -r username; do
    find /home/"$username" -name "*.sh"
done
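If the home directories are not all under /home, a variant that looks up each user's actual home directory from the passwd database (a sketch, assuming getent is available on your system) could be:
#!/bin/bash
# Sketch: resolve each user's real home directory instead of assuming /home.
who | awk '{print $1}' | sort -u | while read -r username; do
    home=$(getent passwd "$username" | cut -d: -f6)
    [ -d "$home" ] && find "$home" -name "*.sh"
done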
I need help with a shell script.
I have a for loop in my script which creates file names containing a variable, like file.$variable.
For example, I have a list of servers in a servers.txt file. From there I read each server name, connect to it, and get some data from each server. The filenames will be file.$server.
From that I am using a for loop and creating the files for each server.
for server in `cat servers.txt`; do
ssh $server ls | awk '{print $2}' | tee -a files.$server.txt
done
This one is working fine.
Now, from those generated files I need to run one more for loop to read the content of each file and give it as input to another command.
Example:
for file in `cat files.$servers.txt`; do
cat $file | awk '{print $2}' | tee -a column.$file.txt
done
But it is not working for me in the second loop. Please help.
In a nutshell it's a nested loop. Excuse me for my English.
Thanks in advance.
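A minimal sketch of how that second loop could be written (assumption: the generated files match the glob files.*.txt; $servers in the posted code is never defined):
# Sketch: iterate over the generated files directly instead of expanding
# the undefined $servers variable.
for file in files.*.txt; do
    awk '{print $2}' "$file" | tee -a "column.$file"
done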
I created a script called monitornsuaccounts.sh that should append its output to useraccountstatus.log, which is in the directory /var/local/nsu/logs/.
The output of this script should state every username and the following information about each one: username, last login, user home directory, and associated groups. Preferably there should be a column for each piece of information.
The command I use for the usernames is sudo cat /etc/passwd | grep ‘/home’. last is used to find the last login of each user, and groups to find the groups of each user. When I run the script, the output file only shows the data for my current user rather than for all users. Any recommendations would be greatly appreciated.
#!/bin/bash
usernames=sudo cat /etc/passwd | grep ‘/home’
echo “$usernames” > /home/daniel/names.txt
mlast=$(cat names.txt | xargs -n1 last)
mgroup=$(cat names.txt | xargs -n1 groups)
cat names.txt > /var/local/nsu/logs/useraccountstatus.log
echo “$mlast” >>/var/local/nsu/logs/useraccountstatus.log
echo “$mgroup” >>/var/local/nsu/logs/useraccountstatus.log
There are a lot of issues in your script.
Your definition of users. Are you sure that this is what you want? For example: root does not have a directory under /home.
Watch your quotes. cat /etc/passwd | grep ‘/home’ returns nothing (the curly quotes are passed to grep as literal characters in the pattern), while cat /etc/passwd | grep '/home' returns a list of stanzas from /etc/passwd.
You'll probably want just a list of usernames, not a list of stanzas. Something along the lines of
cat /etc/passwd | grep '/home' | sed 's/:.*//'
Why the sudo in sudo cat /etc/passwd? The file is world-readable.
Look at your assignment:
usernames=sudo cat /etc/passwd | grep ‘/home’
This does not make sense. You might try
usernames=`sudo cat /etc/passwd | grep '/home'| sed 's/:.*//'`
And that is just the first line of the script.
Anyway, if your script does not work as intended, you will need to do some debugging. First question, especially if you are inexperienced, is "do the commands that I write give the result that I expect?" So in your case, you should have tried cat /etc/passwd | grep ‘/home’ and you would have seen that it does not give you the expected results. Even with the correct quotes, you'll get a list of stanzas, which is also not what you expected. Have you looked at /home/daniel/names.txt and was the content of the file what you wanted? I guess not: it was empty.
Just a quick hint, to get you started in the right direction (although there are still some issues, and people might object to the backticks):
#!/bin/bash
usernames=`sudo cat /etc/passwd | grep '/home'| sed 's/:.*//'`
mlast=`echo $usernames | xargs -n1 last`
mgroup=`echo $usernames| xargs -n1 groups`
echo $usernames > /var/local/nsu/logs/useraccountstatus.log
echo "$mlast" >>/var/local/nsu/logs/useraccountstatus.log
echo "$mgroup" >>/var/local/nsu/logs/useraccountstatus.log
You will want to polish this and make the output more useful.
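For comparison, one possible polished variant (a sketch only, assuming the same log path): $(...) instead of backticks, and quoted expansions so each username stays on its own line:
#!/bin/bash
# Sketch: same idea with $(...) command substitution and careful quoting.
usernames=$(grep '/home' /etc/passwd | sed 's/:.*//')
{
    echo "$usernames"
    echo "$usernames" | xargs -n1 last
    echo "$usernames" | xargs -n1 groups
} > /var/local/nsu/logs/useraccountstatus.log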
I have been trying to learn more about Linux and have spent this morning focusing on the awk command. The command I have been trying to get to work is below.
ls -lRt lpftp.* | awk '{print $7, $9}' | mkdir -p $(awk '{print $1}') | ls -lRt lpftp.* | cp $(awk '{print $9, $7}')
Essentially I am trying to move each file in a directory into a subdirectory based on that file's last-modified day. The command first prints only the files I want, then uses mkdir to create a folder named after the day of the month each file was last modified. What I want to do after that is move each file into its associated directory; however, as the command is now, it moves every file into the 01 folder and prints the following text
cp: 0653-436 12 is a directory.
Specify -r or -R to copy.
once for every directory.
Does anyone know how I can fix this issue, or whether there is a better way to go about it?
ls -lRt lpftp.* | awk '{print $7, $9}' | while read day file ; do mkdir -p "$day"; cp "$file" "$day"; done
The commands between do and done will be executed for each line of output, with the first thing awk prints in the day variable and the second in file (per line). I used quotes here somewhat unnecessarily, as there will not be spaces in the variables given the method by which they are set.
The safest way to do something like this -- and the fastest to execute -- is to use awk on the data to output a shell script. In awk, print the mkdir and cp commands you expect to execute. Pipe the results into head(1) until you're satisfied. Maybe look at the whole thing in less(1). Then execute as follows:
ls -lRg lpftp.* | awk -f script.awk | sh -ex
That will echo the commands to standard error, and stop on the first error. If you're absolutely sure it's right, drop the x option.
The advantages of this approach over a loop, or over a bunch of subprocesses in awk (with the system function), are:
you can see what's going to happen, and what's happening
speed of execution
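For illustration, a minimal script.awk along those lines (a sketch, assuming the day of month is in column 7 and the filename in column 9, as in the ls -l output used above):
# script.awk -- emit one mkdir and one cp command per input line.
# Assumes $7 is the day of month and $9 the filename.
{
    printf "mkdir -p %s\n", $7
    printf "cp %s %s\n", $9, $7
}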
I'm looking to monitor some aspects of a farm of servers that are necessary for the application that runs on them.
Basically, I'm looking to have a file on each machine which, when accessed via HTTP (on a VLAN) with curl, will spit out the information I'm looking for, which I can then log into a database with a daemon that sits in a loop and checks the health of all the servers one by one.
The info I'm looking to get is:
<load>server load</load>
<free>md0 free space in MB</free>
<total>md0 total space in MB</total>
<processes># of nginx processes</processes>
<time>timestamp</time>
What's the best way of doing that?
EDIT: We are using Cacti and OpenNMS; however, what I'm looking for here is data that is necessary for the application that runs on these servers. I don't want to complicate things by having it rely on any third-party software to fetch basic data that can be gotten with a few Linux commands.
Make a cron entry that:
executes a shell script every few minutes (or whatever frequency you want)
saves the output in a directory that's published by the web server; a sample crontab entry follows below
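A sample crontab entry (the paths here are placeholders; adjust the script location and web root to your setup):
# Run every 5 minutes; both paths below are hypothetical.
*/5 * * * * /usr/local/bin/serverhealth.sh > /var/www/html/health.xml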
Assuming your text is literally what you want, this will get you 90% of the way there:
#!/usr/bin/env bash
# Gather each metric, then emit them in the XML-style format requested above.
# df is run against /; point it at the md0 mount point if that differs.
LOAD=$(uptime | cut -d: -f5 | cut -d, -f1)       # 1-minute load average
FREE=$(df -m / | tail -1 | awk '{ print $4 }')   # free space in MB
TOTAL=$(df -m / | tail -1 | awk '{ print $2 }')  # total space in MB
PROCESSES=$(ps aux | grep '[n]ginx' | wc -l)     # [n] excludes the grep itself
TIME=$(date)
cat <<-EOF
<load>$LOAD</load>
<free>$FREE</free>
<total>$TOTAL</total>
<processes>$PROCESSES</processes>
<time>$TIME</time>
EOF
Sample output:
<load> 0.05</load>
<free>9988</free>
<total>13845</total>
<processes>6</processes>
<time>Wed Apr 18 22:14:35 CDT 2012</time>
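On the collecting side, the daemon can then fetch each server's file with curl (the host and path here are placeholders matching the cron example above):
# Placeholder URL; substitute each server's VLAN address.
curl -s http://server01.internal/health.xml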