I have installed Nagios (Nagios® Core™ Version 4.2.2) on a Linux server. I have written a JIRA URL check using check_http for an HTTPS URL.
It should return HTTP 200, but it returns HTTP 302.
[demuc1dv48:/pkg/vdcrz/Nagios/libexec][orarz]# ./check_http -I xx.xx.xx -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT
SSL Version: TLSv1
HTTP OK: HTTP/1.1 302 Found - 296 bytes in 0.134 second response time |time=0.134254s;;;0.000000 size=296B;;;
So I configured the same in the Nagios configuration file:
define command{
        command_name    check_https_jira_prod
        command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
}
Now my JIRA server is down, but this is not reflected in the Nagios check. The Nagios response still shows HTTP code 302 only.
How can I fix this issue?
You did not specify, but I assume you defined your command in the Nagios central server's commands.cfg configuration file. You also need to define a service in services.cfg, as services use commands to run scripts.
If you are running your check_http check from a different server, you also need to define it in the nrpe.cfg configuration file on that remote machine and then restart nrpe.
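For example, a hypothetical nrpe.cfg entry could look like this (the plugin path and command name are placeholders; adjust them to your installation):
command[check_https_jira_prod]=/usr/local/nagios/libexec/check_http -I xxx.xxx.xxx.com -u /secure/Dashboard.jspa -S
The central server then runs it through check_nrpe instead of calling check_http locally.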
As a side note, from the output you've shared, I believe you're not using the check_http Nagios plugin's flags correctly.
From your post:
check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
From ./check_http -h:
-I, --IP-address=ADDRESS
IP address or name (use numeric address if possible to bypass DNS lookup).
You are using a host name instead (xxx.xxx.xxx.com).
-S, --ssl=VERSION
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation (1 = TLSv1, 2 = SSLv2, 3 = SSLv3).
You specified CONNECT, which is not one of the valid VERSION values.
You can't get code 200 unless you set the follow parameter in the check_http script.
I suggest you use something like this:
./check_http -I jira-ex.telefonica.de -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
The -f follow is mandatory for your use case.
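Putting that together, here is a sketch of what the command and a matching service definition could look like. Note that -u here takes the URI path only, and that host_name, service_description and the generic-service template are placeholders from a default Nagios install, not from your configuration:
define command{
        command_name    check_https_jira_prod
        command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u /secure/Dashboard.jspa -S -f follow
}
define service{
        use                     generic-service
        host_name               jira-prod
        service_description     JIRA Dashboard HTTPS
        check_command           check_https_jira_prod
}
With -f follow the plugin ends up on the final page, so a healthy JIRA returns 200 and the service goes CRITICAL when the server is down, instead of staying green on the 302.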
I am trying to read/fetch this file:
https://blockchain-office.com/file.txt with a bash script over /dev/tcp, without using curl, wget, etc.
I found this example:
exec 3<>/dev/tcp/www.google.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.google.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
I changed this to my needs like so:
exec 3<>/dev/tcp/www.blockchain-office.com/80
echo -e "GET / HTTP/1.1\r\nhost: http://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I try to run it I receive:
400 Bad Request
Your browser sent a request that this server could not understand
I think this is because strict SSL / HTTPS-only connections are enabled.
So I changed it to:
exec 3<>/dev/tcp/www.blockchain-office.com/443
echo -e "GET / HTTP/1.1\r\nhost: https://www.blockchain-office.com\r\nConnection: close\r\n\r\n" >&3
cat <&3
When I try to run it I receive:
400 Bad Request
Your browser sent a request that this server could not understand.
Reason: You're speaking plain HTTP to an SSL-enabled server port.
Instead use the HTTPS scheme to access this URL, please.
So I can't even get a normal connection, let alone fetch the file!
None of these posts fit; it looks like SSL/TLS is the problem and only HTTP on port 80 works if I don't use curl, wget, lynx, openssl, etc.:
how to download a file using just bash and nothing else (no curl, wget, perl, etc.)
Using /dev/tcp instead of wget
How to get a response from any URL?
Read file over HTTP in Shell
I need a solution to get/read/fetch a plain txt file from a domain over HTTPS using only /dev/tcp, no other tools like curl, and to output it in my terminal or save it in a variable without wget, etc. Is that possible, and how? Or is there another solution using only the standard terminal utilities?
You can use openssl s_client to perform the equivalent operation but delegate the SSL part:
#!/bin/sh
host='blockchain-office.com'
port=443
path='/file.txt'

# CRLF sequence, used as IFS so the trailing carriage return is stripped
# from each header line as it is read
crlf="$(printf '\r\n_')"
crlf="${crlf%?}"

{
    printf '%s\r\n' \
        "GET ${path} HTTP/1.1" \
        "host: ${host}" \
        'Connection: close' \
        ''
} |
openssl s_client -quiet -connect "${host}:${port}" 2>/dev/null | {
    # Skip headers by reading up until encountering a blank line
    while IFS="${crlf}" read -r line && [ -n "$line" ]; do :; done
    # Output the raw body content
    cat
}
Instead of cat to output the raw body, you may want to check some headers like Content-Type and Transfer-Encoding, perhaps handle chunked or MIME-encoded content, and then decode the raw content into something usable.
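If you do want to inspect those headers, here is a minimal sketch on top of the script above (same host and path; it assumes your awk understands /dev/stderr, which gawk, mawk and busybox awk all do). It sends the headers to stderr and the raw body to stdout:
#!/bin/sh
host='blockchain-office.com'
path='/file.txt'
printf 'GET %s HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n' "$path" "$host" |
openssl s_client -quiet -connect "${host}:443" 2>/dev/null |
awk 'body    { print; next }            # body lines go to stdout
     /^\r?$/ { body = 1; next }          # first blank line ends the headers
             { print > "/dev/stderr" }'  # header lines go to stderr
You can then grep the stderr stream for Content-Type or Transfer-Encoding before deciding how to decode the body.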
After all the comments and research, the answer is no: we can't get/fetch files using only the standard shell tools like /dev/tcp, because we can't handle SSL/TLS without handling the complete handshake.
It is only possible over plain HTTP on port 80.
I don't think bash's /dev/tcp supports SSL/TLS.
If you use /dev/tcp for an HTTP/HTTPS connection you have to manage the complete exchange yourself, including the SSL/TLS handshake, HTTP headers, chunking and more. Or you use curl/wget, which manage it for you.
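For comparison, when external tools are allowed the whole exercise collapses to a one-liner; either of the lines below fetches the same file from the question:
curl -s https://blockchain-office.com/file.txt
wget -qO- https://blockchain-office.com/file.txt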
Then the shell is the wrong tool, because it is not capable of performing any of the SSL handshake without using external resources/commands. Take what you want and can from what I showed you here as the cleanest and most portable POSIX-shell implementation of a minimal HTTP session through SSL. And then maybe it is time to consider alternative options (not using HTTPS, or using a language with built-in or standard-library SSL support).
We will use curl, wget and openssl in separate Docker containers now.
There may still be requirements in the future that will decide whether we keep only one of them or all of them.
We will use the script from @Léa Gris in a Docker container too.
Hi, my country has blocked google.com. However, I have a virtual machine outside the country which has access to Google. It has nginx and HAProxy installed. Based on my limited understanding, these reverse proxies can proxy to internal servers, but is there any way to have them proxy to google.com directly?
Thanks so much.
Instead of using nginx or HAProxy to proxy some URL or google.com, what you should do is use your VM as a SOCKS proxy for the browser. Execute the following on your machine:
$ ssh -D 8123 -f -C -q -N sammy@example.com
Explanation of arguments
-D: Tells SSH that we want a SOCKS tunnel on the specified port number (you can choose a number between 1025 and 65535)
-f: Forks the process to the background
-C: Compresses the data before sending it
-q: Uses quiet mode
-N: Tells SSH that no command will be sent once the tunnel is up
This will open a SOCKS proxy on 127.0.0.1:8123; you can set this in your browser and browse Google through your server.
For a more detailed article, refer to:
https://www.digitalocean.com/community/tutorials/how-to-route-web-traffic-securely-without-a-vpn-using-a-socks-tunnel
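If you want to verify the tunnel before touching the browser settings, a quick check from the command line (curl is only used here for testing; the browser itself does not need it):
# with the ssh -D 8123 tunnel from above already running:
curl --socks5-hostname 127.0.0.1:8123 -I https://www.google.com/
A successful response confirms that traffic is going out through your VM.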
I've got a very strange problem.
There's a cron job on the server to run a script daily:
wget -O /dev/null --timeout=300 --tries=1 "http://website.com/script"
It had all been working well until about two weeks ago, when I started receiving errors:
--2016-07-13 09:45:01-- http://website.com/script
Resolving website.com (website.com)... 11.22.33.44
Connecting to website.com (website.com)|11.22.33.44|:80... failed: Connection timed out.
Giving up.
Here is some information for this question:
The cron job runs on the same server that hosts http://website.com.
I can access the script (http://website.com/script) correctly from browser on my desktop.
The server is CentOS 7, with WHM and cPanel installed.
Does anyone know what the issue could be, or how I am supposed to identify it?
Thanks
If the issue is still unresolved...
You could try running wget in debug mode to see if you get some more info.
wget -dv -O /dev/null --timeout=300 --tries=1 "http://website.com/script"
Also, confirm whether the resolved IP "11.22.33.44" belongs to one of the server's NICs:
ip a s (ip address show) or
ifconfig -a
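Or as a single check (11.22.33.44 being the placeholder address from the question):
ip -o addr show | grep -F '11.22.33.44' || echo "address is not configured on any local NIC"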
If the IP is not listed, it could be that "11.22.33.44" is a public-facing address of the company's firewall, and that the firewall is directing requests on port 80 from the outside/internet (where your browser is) to that specific server. The firewall/NAT/proxy could be configured not to allow requests coming from inside the network to reach the external IP of the firewall and get back in.
If this is the case, you could try changing your wget command to use the internal IP address, something like this (still using -dv for debugging; remove it afterwards):
wget -dv -O /dev/null --timeout=300 --tries=1 --header="Host: website.com" http://127.0.0.1/script
Note 1: the --header="Host: website.com" option tells your web server which site you want to reach.
Note 2: you may have to change the IP 127.0.0.1 (the localhost address) to one of the server's NIC addresses.
If the website is up, try a different command:
*/10 * * * * /usr/bin/wget -q -O temp.txt http://website.com/script
Try adding -H
wget -H -O /dev/null --timeout=300 --tries=1 "http://website.com/script"
How can I list, from the command line, the URL requests made from the server (a *nix machine) to another machine?
For instance, I am on the command line of server ALPHA_RE.
I do a ping to google.co.uk and another ping to bbc.co.uk.
I would like to see, from the prompt:
google.co.uk
bbc.co.uk
so not the IP address of the machine I am pinging, and NOT a URL from servers that pass my request on to google.co.uk or bbc.co.uk, but the actual final URLs.
Note that only packages available in the normal Ubuntu repositories can be used, and it has to work from the command line.
Edit
The ultimate goal is to see what API URLs a PHP script (run by a cron job) requests, and what API URLs the server requests 'live'.
These mainly make GET and POST requests to several URLs, and I am interested in knowing the params:
Does it make requests to:
foobar.com/api/whatisthere?and=what&is=there&too=yeah
or to :
foobar.com/api/whatisthathere?is=it&foo=bar&green=yeah
And do the cron jobs or the server make any other GET or POST requests?
And all of that regardless of what response (if any) these APIs give.
Also, the API list is unknown, so you cannot grep for one particular URL.
Edit:
(The OLD ticket specified: Note that I cannot install anything on that server (no extra packages, I can only use the "normal" commands like tcpdump, sed, grep, ...) // but as getting this information with tcpdump is pretty hard, I have since made installing packages possible.)
You can use tcpdump and grep to get info about network traffic from the host; the following command line should get you all lines containing Host:
tcpdump -i any -A -vv -s 0 | grep -e "Host:"
If I run the above in one shell and start a Links session to stackoverflow I see:
Host: www.stackoverflow.com
Host: stackoverflow.com
If you want to know more about the actual HTTP requests, you can also add patterns to the grep for GET, PUT or POST requests (e.g. -e "GET"), which can get you some info about the relative URL (this should be combined with the host determined earlier to get the full URL).
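For example (this exact combination is my own suggestion, not taken verbatim from the output above):
tcpdump -i any -A -vv -s 0 | grep -E 'Host:|GET |POST |PUT '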
EDIT:
Based on your edited question I have tried to make some modifications.
First, a tcpdump approach:
[root@localhost ~]# tcpdump -i any -A -vv -s 0 | egrep -e "GET" -e "POST" -e "Host:"
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
E..v.[#.#.......h.$....P....Ga .P.9.=...GET / HTTP/1.1
Host: stackoverflow.com
E....x#.#..7....h.$....P....Ga.mP...>;..GET /search?q=tcpdump HTTP/1.1
Host: stackoverflow.com
And an ngrep one:
[root@localhost ~]# ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST"
GET //meta.stackoverflow.com HTTP/1.1..Host: stackoverflow.com..User-Agent:
GET //search?q=tcpdump HTTP/1.1..Host: stackoverflow.com..User-Agent: Links
My test case was running links stackoverflow.com, putting tcpdump in the search field and hitting enter.
This gets you all the URL info on one line. A nicer alternative might be to simply run a reverse proxy (e.g. nginx) on your own server, modify the hosts file (as shown in Adam's answer), have the reverse proxy redirect all queries to the actual host, and use the logging features of the reverse proxy to get the URLs from there; the logs would probably be a bit easier to read.
EDIT 2:
If you use a command line such as:
ngrep -d any -vv -w byline | egrep -e "Host:" -e "GET" -e "POST" --line-buffered | perl -lne 'print $3.$2 if /(GET|POST) (.+?) HTTP\/1\.1\.\.Host: (.+?)\.\./'
you should see the actual URLs.
A simple solution is to modify your '/etc/hosts' file to intercept the API calls and redirect them to your own web server:
127.0.0.1 api.foobar.com
I am using SNMP to query and set some OIDs in IPv6 mode, with the command below. I have checked and configured snmpd to listen on udp6:161.
snmpget -cpublic -v2c udp6:[2001:db8:3c4d::41a9:8e4e:a094:3840] .1.3.6.1.4.1.1429.5.1.1.2.5.6.0
It gives this result:
Timeout: No Response from udp6:[2001:db8:3c4d::41a9:8e4e:a094:3840]
The given IP address is also alive when checked using ping. I changed the conf file to include rwcommunity6 and rocommunity6. What am I doing wrong?
As Cougar said in the comment, you must tell snmpd to listen on the IPv6 address. By default, snmpd listens only on IPv4 UDP. To get it to listen on multiple transports, you should specify each:
snmpd udp: udp6:
for example. Also, because the agent won't respond if the incoming packet fails authorization, you can always run snmpd with the dump flag (-d) to show what traffic it is receiving. If it's not receiving the packets, you've found one problem; but if it is receiving them and not responding, you've found another. Make sure you run it in the foreground (-f) and with logging to stderr (-Le):
snmpd -f -Le -d udp: udp6:
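Once snmpd is listening on both transports, you can verify the IPv6 path directly; the address and community string below are the ones from the question, and the OID is the standard sysDescr:
snmpget -v 2c -c public 'udp6:[2001:db8:3c4d::41a9:8e4e:a094:3840]' .1.3.6.1.2.1.1.1.0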
Does it work with this command?
snmpget -v 2c -c public localhost .1.3.6.1.2.1.1.1.0
It should give the system description. If yes, then it has been set up correctly. Otherwise you need to set it up using the command snmpconf -g basic_setup.