How to search the logs on many servers and sort the information? - linux

The idea is very simple:
I would like to pass a word as an argument to a script; the script would then search the logs on all of my servers and, whenever it found something relevant, write that information to a file. That file would be rsynced to a central server, which would sort the combined information from all servers and present to me where and when the word appeared.
I think this is possible because my servers are synchronized with NTP, which guarantees their clocks agree, so timestamps from different servers can be compared.
But I wonder whether this is a good idea, and how to do this search and sort these logs?
The problem for me is:
1) How do I access my servers to run this search on each one of them?
2) How do I perform this search?
3) How do I sort the combined information in the final log (containing the information from all servers)?

You could add your ssh keys to each server and then, from your main server, add this to your .bashrc:
export web_servers=(server1 server2 server3 server4)
function grepallservers() {
    for s in "${web_servers[@]}"; do echo "$s"; ssh "$s" grep "$@"; done
}
function all-serv-grep() {
    grepallservers "$1" /var/log/error.log
}
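To cover your third point (sorting the merged output), here is a minimal sketch building on the functions above. It assumes syslog-style lines that begin with an "MMM DD HH:MM:SS" timestamp; all-serv-grep-sorted is a hypothetical helper, not something the functions above already provide:
function all-serv-grep-sorted() {
    # prefix each matching line with the server it came from, then sort the
    # merged stream by month (field 2), day (field 3) and time (field 4)
    for s in "${web_servers[@]}"; do
        ssh "$s" grep "$1" /var/log/error.log | sed "s/^/$s /"
    done | sort -k2,2M -k3,3n -k4,4
}
Since NTP keeps the clocks in agreement, sorting on the timestamp fields is enough to interleave events from all servers chronologically.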

Related

Linux bash script to get own internet IP address

I know I have gotten quite rusty when it comes to bash coding, especially the more elaborate trickery needed for the awk or sed parts.
I do have a script that logs the IP address currently in use for the interwebs.
It gets that by using either wget -qO - URL or lynx -dump URL.
The easiest one was a site that returned only the IP address in plain text and nothing else. Unfortunately, that site no longer exists.
The code was as simple as can be:
IP=$(wget -qO - http://cfaj.freeshell.org/ipaddr.cgi)
But alas, using that code now returns nothing because the site is gone, as lynx can tell us:
$ lynx -dump http://cfaj.freeshell.org/ipaddr.cgi
Looking up cfaj.freeshell.org
Unable to locate remote host cfaj.freeshell.org.
Alert!: Unable to connect to remote host.
lynx: Can't access startfile http://cfaj.freeshell.org/ipaddr.cgi
Some other sites I used for the same purpose no longer work either.
And the one I want to use now is a German-language one; not that I care one way or the other, it could be in Greek or Mandarin for all I care. I only want the IP address itself extracted, but like I said, my coding skills have gotten rusty.
Here is the relevant area of what lynx -dump returns
[33]powered by
Ihre IP-Adresse lautet:
178.24.x.x
Ihre IPv6-Adresse lautet:
Ihre System-Informationen:
when running it as follows:
lynx -dump https://www.wieistmeineip.de/
Now, I need either awk or sed to find the 178.24.x.x part. (I know it can be done with Python or Perl as well, but neither is part of a standard install on my Linux, while awk and sed are.)
Since the script is there to extract the IP address, one needs to do the following, either via sed or awk:
Search for "Ihre IP-Adresse lautet:"
Skip to the next line.
Strip the whitespace at the beginning.
Return only what is left of that line (without the LF at the end).
In the example above (which shows only the relevant part of the lynx dump; the whole dump is much larger, but everything above and below is irrelevant), "178.24.x.x" is what should be returned.
Any help greatly appreciated to get my log-ip script back into working order.
Currently I have collected some other working URLs that report back one's own internet IP. Any of these can also be used, but the area around the reported IP will differ from the example above. These are:
https://meineipinfo.de/
http://www.wie-ist-meine-ip.net/
https://www.dein-ip-check.de/
https://whatismyipaddress.com/
https://www.whatismyip.org/
https://www.whatismyip.net/
https://mxtoolbox.com/whatismyip/
https://www.whatismyip.org/my-ip-address
https://meineipadresse.de/
Even DuckDuckGo returns the IP address when asked, e.g.: https://duckduckgo.com/?q=ip+address&ia=answer
At least I know of no way of getting one's own internet IP address without retrieving an outside URL that reports that very IP address back to me.
You can do:
wget -O - v4.ident.me 2>/dev/null && echo
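If you would rather keep the lynx -dump approach from the question, a short awk sketch of the four steps might look like this (untested against the live page, so treat the marker text as an assumption):
lynx -dump https://www.wieistmeineip.de/ | awk '
    # on the marker line: read the next line, strip leading blanks, print it
    /Ihre IP-Adresse lautet:/ { getline; sub(/^[[:space:]]+/, ""); print; exit }
'
Captured as IP=$(...), the trailing newline is stripped by the command substitution automatically.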
So, if you have a VM at some cloud provider, you can solve this easily. I wrote a small Go app that echoes back an HTTP request. For instance:
$ curl 167.99.63.182:8888
Method ->
GET
Protocol ->
HTTP/1.1
Headers ->
User-Agent: [curl/7.54.0]
Accept: [*/*]
Content length (in Bytes) ->
0
Remote address ->
179.XXXXX
Payload
####################
####################
Where the remote address is the address the app received, hence, your IP.
And in case you are wondering: yes, 167.99.63.182 is the IP of the server, and you can curl it right now and check. I am disclosing the IP because I have been bombarded by brute-force attacks for as long as I can remember anyway, and the machine does not hold anything worth the break-in.
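To pull just the address out of that output in a script, something along these lines should do (a sketch based only on the output format shown above):
curl -s 167.99.63.182:8888 | grep -A1 'Remote address' | tail -n1 | tr -d '[:blank:]'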
Not exactly without relying on external services, but you could use dig to reach out to the resolver at opendns.com:
dig +short myip.opendns.com @resolver1.opendns.com
I think this is easier to integrate into a script.
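In the original script, that would simply be the same command wrapped in a command substitution:
IP=$(dig +short myip.opendns.com @resolver1.opendns.com)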

Trying to use SCP to copy multiple files from remote to local using script

So I'll start with the fact that I'm relatively new to Linux scripting, so if I am going about this the wrong way, let me know.
I am creating a script that is meant to copy logs from many different hosts onto the local machine depending on user input.
One of the functions I am writing requires the use of scp. Each time you use the scp command against a particular remote host, you have to enter your password. So, to save the user time, I want to copy every file the user wants from a particular host in a single call.
I know I can do this using scp user@Remoteipaddress:'directory/file1 directory/file2' local/machine/directory
I have it running a bunch of loops (what I feel is too many, so if there is a better way, let me know).
The portion with the scp command is my main issue. The code looks fine if I quote it and echo it. I can even copy and paste the echoed result and it will work, but if I let the script run it I receive bash: -c: line 0: unexpected EOF while looking for matching `''
edit: $app is a static number created in another portion of the program
I added a couple of things that seemed to be missing. I'm trying to piece this together from multiple areas of the program without making it messier than it already is.
#assigns different remote host paths to the array variable
until [ $scriptCounter == $app ]
do
    scpScript[$scriptCounter]="user@${ipAddress[$ipCounter]}:'"
    ((++ipCounter))
    ((++scriptCounter))
done
#$app value gets set by another function - typically 3 if that matters
scpCount=0
DayCounter=0
ipScriptCounter=0
until [ $scpCount == $app ]
do
    ((++scpCount))
    mkdir ~/MyDocuments/Logs/$3/app$scpCount
    echo "Creating ~/MyDocuments/Logs/${3}/app${scpCount}"
    #there is one log for each day, $totalDiffDays is the total number of days
    #$DayCounter is set and gets incremented each time through the loop
    #until it matches the total days
    until [ $DayCounter == $totalDiffDays ]
    do
        scpPath[$DayCounter]="/var/log/docker/theLog*${datePath[$DayCounter]}*"
        noSpaceSCP[$DayCounter]=${scpPath[$DayCounter]//[[:blank:]]/}
        ((++DayCounter))
    done
    fullSCPscript[$scpCount]="${scpScript[$ipScriptCounter]}${noSpaceSCP[*]}'"
    #this portion I have an issue with.
    scp ${fullSCPscript[$scpCount]} ~/MyDocuments/Logs/$3/app$scpCount
    #this ups the array counter for my ipaddress array
    ((++ipScriptCounter))
    #how I'm zeroing out $DayCounter so it will run through again for other
    #nodes but with a different IP address
    until [ $DayCounter == "0" ]
    do
        ((--DayCounter))
    done
done
Example output I get when I echo the line with the scp command:
scp user@10.10.200.100:'/var/log/docker/theLog*2018-07-26* /var/log/docker/theLog*2018-07-27*' /home/mobaxterm/MyDocuments/Logs/care3/app1
I'm sorry this looks messy, but overall I'm trying to build the directory that it's grabbing the logs from, and if there are multiple days, just add them onto the scp command. I'm doing it this way, as opposed to running a whole separate command per file, to save the user from entering their password 5 times if they need 5 files. Instead they would only have to enter it once.
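From what I can tell, the embedded single quotes are passed literally to the remote shell instead of being re-parsed locally, which would explain the EOF error. A possible fix (just a sketch, reusing the variables from above) is to drop the embedded quotes and pass the whole remote spec as one quoted argument, so the remote shell splits the file list itself:
remote_files="${noSpaceSCP[*]}"    # space-separated list of remote globs
scp "user@${ipAddress[$ipScriptCounter]}:$remote_files" ~/MyDocuments/Logs/"$3"/app"$scpCount"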

Close uzbl-browser on a certain URL

I'm using uzbl-browser on a kiosk computer. I'd like to send "close" (or kill) to my uzbl-browser instance when a user opens a certain URL. What is the best way?
My aim is not that.
I have a survey and I want to show it before logout. If the user closes it, then log out. Otherwise, wait until the last page of the survey (identified by a "certain URL"), then close uzbl and log out.
My solution is this.
Add this to the config file:
@on_event LOAD_FINISH spawn @scripts_dir/survey_end_check.sh
and in my survey_end_check.sh
#!/bin/sh
if [ "$UZBL_URI" = "http://yoururl" ]
then
    sleep 5
    echo "exit" | socat - unix-connect:"$UZBL_SOCKET"
fi
A variant, in order to find a certain string in the final page. After grep, $? is 0 if grep succeeded:
#!/bin/sh
end=`echo "@<document.getElementsByClassName('success')[0].innerText>@" | socat - unix-connect:"$UZBL_SOCKET" | grep -q 'Success!'; echo $?`
if [ "$end" -eq 0 ]
then
    sleep 5
    echo "exit" | socat - unix-connect:"$UZBL_SOCKET"
fi
If I were a user on that computer and any window, browser or not, closed itself without warning, I'd consider it an application crash and try again.
Forcing that behavior on your users may not be the most informative choice.
What you want to look into is a transparent proxy that can filter content. This is how most companies restrict their employees from visiting certain pages.
Squid is one example of a proxy commonly used for this, usually set up together with SquidGuard. This guide will get you started.
Alternatively, you could use a DNS solution that redirects all filtered hostnames to a given page. DansGuardian is a possibility here.
A search on Stack Overflow will also give you answers, as several users have already asked similar questions.

Create an aggregate Nagios check based on values from other checks

I have multiple checks for ten different web servers. One of these checks monitors the number of established connections (using netstat and findstr to filter for ESTABLISHED). It works as expected on servers WEB1 through WEB10. I can graph the established TCP connection count (using pnp4nagios) because the output is an integer. If it's above a certain threshold the check goes into warning status; above another, it becomes critical.
The individual checks are working the way I want.
However, what I'm looking to do is add all of these connection counts up into one graph: some sort of aggregate, or SUM, of all the others.
Is there a way to take the values/outputs from other checks and add them into one?
Server TCP Connections
WEB1 223
WEB2 124
WEB3 412
WEB4 555
WEB5 412
WEB6 60
WEB7 0
WEB8 144
WEB9 234
WEB10 111
TOTAL 2275
I want to graph only the total.
Nagios itself does not use performance data in any way; it just takes it and passes it to whatever you specify in your config. So there's no good way to do this within Nagios. (You could pipe the performance output of Nagios to some tee command that passes it both to pnp4nagios and to a different script that sums everything up, but that would be horrible to maintain.)
If I had your problem, I'd do the following:
At the end of your current plugin, do something like
echo $nconnections > /some/dir/connections.$NAGIOS_HOSTNAME
where nconnections is the number of connections the plugin found. This example is shell; adapt it if your plugin is written in a different language. The important thing is that it should be easy to write the number to a special file from within the plugin.
Then, create a new plugin which has code similar to:
#!/bin/bash
WARN=1000
CRIT=2000
sumconn=$(cat /some/dir/connections.* | awk '{sum += $1} END {print sum}')
if [ $sumconn -ge $CRIT ]; then
    echo "Connection sum CRITICAL: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 2
elif [ $sumconn -ge $WARN ]; then
    echo "Connection sum WARNING: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 1
else
    echo "Connection sum OK: $sumconn connections|conn=$sumconn;$WARN;$CRIT"
    exit 0
fi
That way, whenever you probe an individual server, you'll save the data for the new plugin; the new plugin just picks up whatever data is there, which makes it extremely short. Of course, the summary's output will lag behind a bit, but you can minimize that effect by setting the normal_check_interval of the individual services low enough.
If you want to get fancy, add code to remove files older than a certain threshold from the cache directory. Or you could even remove the individual services from your Nagios configuration and call the individual-server plugin from the summation plugin for each server, if you're really uninterested in the per-server connection counts.
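For that cleanup, a single find invocation at the top of the summary plugin would do; the 15 minutes here is an arbitrary staleness threshold, adjust it to your check interval:
# drop cached counts that haven't been refreshed recently
find /some/dir -name 'connections.*' -mmin +15 -delete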
EDIT:
To solve the NRPE problem, create a check_nrpe_and_save plugin like this:
#!/bin/bash
output=$($NAGIOS_USER1/check_nrpe "$@")
rc=$?
nconnections=$(echo "$output" | head -1 | sed 's/.*you have \([0-9]*\) connections.*/\1/')
echo $nconnections > /some/dir/connections.$NAGIOS_HOSTNAME
echo "$output"
exit $rc
Create a new define command entry for this script, and use the new command in your service definitions. You'll have to adjust the sed pattern to match what your plugin outputs. If you don't have the number of connections in your regular output, an expression like .*connections=\([0-9]*\);.* should work. This check_nrpe_and_save plugin should behave just like check_nrpe: in particular, it outputs the same string and returns the same exit code, while also writing to the special file.

Separating 'body' of domain name from extension - DOS shell

I tried everything possible, but still failed. I thought I had it at the point which I'll post as my final attempt, but it still isn't good [enough].
A script is being passed three arguments: domain name, username and password.
But the problem is that I need the domain separated into "domain" + ".com" format. Two variables.
I tried to split it using the name/extension cheat, but it doesn't work quite well.
Check the simple code:
@echo off
echo.
set domain=%~n1
set ext=%~x1
echo %DOMAIN%
echo %EXT%
echo.
When you try it, you get:
D:\Scripts\test>test.bat domain.com
domain
.com
D:\Scripts\test>test.bat domain.co.uk
domain.co
.uk
The first one obviously does work, but only because I'm able to cheat my way through.
String operations in the DOS shell are a pain in the ass. I might be able to convince
the script writer to pass me 4 arguments instead of 3... but in case that fails... HELP!
Windows ships with the Windows Script Host, which lets you run JavaScript.
Change the batch file to:
@echo off
cscript //Nologo test.js %*
Create test.js:
if (WScript.Arguments.Length > 0) {
    var arg = WScript.Arguments.Item(0);
    var index = arg.indexOf('.');
    if (index != -1) {
        var domain = arg.substring(0, index);
        var ext = arg.substring(index);
        WScript.Echo(domain);
        WScript.Echo(ext);
    } else WScript.Echo("Error: Argument has no dots: " + arg);
} else WScript.Echo("Error: No argument given");
And you can use it:
C:\Documents and Settings\Waqas\Desktop>test.bat domain.com
domain
.com
C:\Documents and Settings\Waqas\Desktop>test.bat domain.co.uk
domain
.co.uk
And that does what I think you wanted.
If you want to automate something (as stated in another answer), my solution would be to use appropriate tools. Install a Perl runtime or something else you're comfortable with. Or use Windows PowerShell.
Also, unless you supply your script with a list of valid top-level domains, there is NO WAY, in any language, for your script to decide whether test.co.uk should be split as test and co.uk or test.co and uk. The only feasible possibility is to make sure you only get second-level domains without sub-domain parts. In that case, simply split at the first dot.
BTW: I'm curious why you would want to automate website creation in a Windows shell script. You aren't doing anything nasty, are you?
