wget connection timed out on same server - linux

I've got a very strange problem.
There's a cron job on the server to run a script daily:
wget -O /dev/null --timeout=300 --tries=1 "http://website.com/script"
It was all working well until about two weeks ago, when I started receiving errors:
--2016-07-13 09:45:01-- http://website.com/script
Resolving website.com (website.com)... 11.22.33.44
Connecting to website.com (website.com)|11.22.33.44|:80... failed: Connection timed out.
Giving up.
Here is some relevant information:
The cron job runs on the same server that hosts http://website.com.
I can access the script (http://website.com/script) correctly from a browser on my desktop.
The server is CentOS 7, with WHM and cPanel installed.
Does anyone know what the issue could be, or how I should go about identifying it?
Thanks

If the issue is still unresolved...
You could try running wget in debug mode to see if you get some more info.
wget -dv -O /dev/null --timeout=300 --tries=1 "http://website.com/script"
Also, confirm whether the resolved IP "11.22.33.44" belongs to one of the server's NICs:
ip a s (ip address show) or
ifconfig -a
If the IP is not listed, it could be that "11.22.33.44" is a public-facing address of the company's firewall, and that the firewall is directing requests on port 80 from the outside/internet (where your browser is) to that specific server. The firewall/NAT/proxy could also be configured to not allow requests that come from inside the network, reach the external IP of the firewall, and try to get back in.
If this is the case, you could try changing your wget command to use the internal IP address, something like this (still using -dv for debugging; remove it afterwards):
wget -dv -O /dev/null --timeout=300 --tries=1 --header="Host: website.com" http://127.0.0.1/script
Note 1: the --header="Host: website.com" tells your web server which site you want to reach.
Note 2: you may have to change the IP 127.0.0.1 (the localhost address) to one of the server's NIC addresses.
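A related idea, untested and with a placeholder address: if the loopback request works, you could instead pin the hostname to an internal NIC address in /etc/hosts, so the cron job can keep its original URL. Check the server's actual address with ip a s first.
# /etc/hosts -- map the site to an internal NIC address (192.168.1.10 is a placeholder)
192.168.1.10    website.com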

If the website is up, try a different command:
*/10 * * * * /usr/bin/wget -q -O temp.txt http://website.com/script

Try adding -H
wget -H -O /dev/null --timeout=300 --tries=1 "http://website.com/script"

Related

How to configure https_check URL in nagios

I have installed Nagios (Nagios® Core™ version 4.2.2) on a Linux server. I have written a JIRA URL check using check_http for an HTTPS URL.
It should get a 200 response, but it gives HTTP code 302.
[demuc1dv48:/pkg/vdcrz/Nagios/libexec][orarz]# ./check_http -I xx.xx.xx -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT
SSL Version: TLSv1
HTTP OK: HTTP/1.1 302 Found - 296 bytes in 0.134 second response time |time=0.134254s;;;0.000000 size=296B;;;
So I configured the same in the nagios configuration file.
define command{
command_name check_https_jira_prod
command_line $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
}
Now my JIRA server is down, but this is not reflected in the Nagios check. The Nagios response still shows HTTP code 302 only.
How to fix this issue?
You did not specify, but I assume you defined your command in the Nagios central server's commands.cfg configuration file; you also need to define a service in services.cfg, as services use commands to run scripts.
If you are running your check_http check from a different server, you also need to define it in the nrpe.cfg configuration file on that remote machine and then restart nrpe.
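For reference, a minimal service definition tying a host to that command could look like the sketch below; host_name and service_description are assumptions you would adjust to your setup:
define service{
        use                     generic-service       ; default template shipped with Nagios Core
        host_name               jira-prod             ; hypothetical host object
        service_description     JIRA HTTPS check
        check_command           check_https_jira_prod
}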
As a side note, from the output you've shared, I believe you're not using the check_http Nagios plugin's flags correctly.
From your post:
check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
From ./check_http -h:
-I, --IP-address=ADDRESS
IP address or name (use numeric address if possible to bypass DNS lookup).
You are using a host name instead (xxx.xxx.xxx.com).
-S, --ssl=VERSION
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation (1 = TLSv1, 2 = SSLv2, 3 = SSLv3).
You specified CONNECT.
You can't get code 200 unless you set the follow parameter in the check_http script.
I suggest you use something like this:
./check_http -I jira-ex.telefonica.de -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
The -f follow is mandatory for your use case.

My IP changes dynamically. How can I get updated with the latest IP?

My ISP provides dynamic IP addresses. I have forwarded a port to a Raspberry Pi, which I access through SSH and also use as a web server. The problem is that the IP changes every 3-4 days. Is there any way or script so that I can be informed of, or updated with, the new IP address?
Thank You.
You can write a script like this:
#!/bin/bash
# Grab the external IP from checkip.dyndns.org and strip the surrounding HTML
OUT=$(wget http://checkip.dyndns.org/ -O - -o /dev/null | cut -d: -f 2 | cut -d\< -f 1)
echo "$OUT" > /root/ipfile
Set a cron job to execute this every three hours or so, and configure your MTA to send the file /root/ipfile to your email address (you can use cron for that too). mutt can be a useful tool for attaching the file and handling the email delivery.
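For example, a sketch assuming the script above is saved as /root/checkip.sh and mutt is set up; the recipient address is a placeholder:
# crontab entries: run the IP check every 3 hours, mail the result a few minutes later
0 */3 * * * /root/checkip.sh
5 */3 * * * mutt -s "Current IP" -a /root/ipfile -- you@example.com < /dev/null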

Linux send URL my IP address on startup

So, I'm trying to write a simple bash script to send my internal IP address to a website of mine on startup. I am on a network with DHCP, so I don't always know what the IP address of my Raspberry Pi will be after I do a reboot over ssh. I figured I could fix this by sending my website the current IP on startup. I haven't written many bash scripts, and I'm not really sure how to send data to my website. Right now I was just trying in the terminal this:
wget -qO- http://http://mywebsite.com/private/CurrentIP.php?send=$(/sbin/ifconfig eth0|grep 'inet addr:')
But I'm not having any luck. I don't actually know much about Linux, and I'm trying to learn; that's why I got the Raspberry Pi, actually. Anyway, can someone point me in the right direction?
I already know I need to put it in /etc/init.d/.
You could do this:
IP_ADDR=$(ifconfig eth0 | sed -rn 's/^.*inet addr:(([0-9]+\.){3}[0-9]+).*$/\1/p')
wget -q -O /dev/null http://mywebsite.com/private/CurrentIP.php?send=${IP_ADDR}
...but if your machine is stuck behind NAT, $IP_ADDR won't be your externally-visible address. Might want to use $_SERVER['REMOTE_ADDR'] in your PHP instead of/in addition to this to get the address for your client that your server sees.
Edit: Sounds like you want to be able to find your Raspberry Pi on your local (DHCP-managed) network after reboots. Have you considered using Multicast DNS instead?
How it works in practice: Let's say you've set the hostname of your RasPi to gooseberry. If you've enabled a multicast DNS server on that machine, other computers on the same network segment that can send multicast DNS queries will be able to find it at the domain name gooseberry.local. This is a peer-to-peer protocol and not dependent on gooseberry receiving any specific address via DHCP - so if it reboots and receives a new address, other machines should still be able to find it.
Mac OS X has this enabled out of the box; this can be enabled on most Linux distros (on Debian/Ubuntu you'd install the avahi-daemon and libnss-mdns packages); not sure about Windows, but a quick Google shows encouraging results.
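If you want to try it, the setup is short; this is a sketch for Debian/Raspbian, and the pi user name is an assumption:
# On the Raspberry Pi: enable multicast DNS
sudo apt-get install avahi-daemon libnss-mdns
# From another machine on the same network segment:
ssh pi@gooseberry.local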
This worked for me (wget part untested, but it finds the IP address):
interface="eth0"
# Extract the IPv4 address from the ifconfig output (dots escaped so they match literally)
ip_addr=$(ifconfig ${interface} | sed -rn 's/^.*inet *([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}).*$/\1/p')
wget -q -O /dev/null http://mywebsite.com/private/CurrentIP.php?send=${ip_addr}
Can't you use:
hostname --ip-address
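One caveat with that: on Debian-based systems, hostname --ip-address resolves the hostname through /etc/hosts and may return 127.0.1.1 rather than the interface address. hostname -I (capital i) prints the addresses actually assigned to the interfaces, so a sketch reusing the asker's URL could be:
# Take the first address assigned to the machine's interfaces
IP_ADDR=$(hostname -I | awk '{print $1}')
wget -q -O /dev/null "http://mywebsite.com/private/CurrentIP.php?send=${IP_ADDR}"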

Domain or IP exists check in Linux

I want to know whether a given website or IP address is online or offline. I researched a lot, but all I could find was advice to install some software or to use the ping command.
I did this test:
ping -c 5 -n example.com
It outputs the expected result, but when I do the following, where the website does not exist, the result is almost the same as if the website existed, with 0% packet loss.
ping -c 5 -n examplesurenotexists.com
I am confused by this. Is there a better way to do this task?
If you want to know whether a website is online or offline, simply check the website:
if curl -s http://www.alfe.de >/dev/null
then
echo "online"
else
echo "offline"
fi
Using ping instead would not test the HTTP protocol (which is what websites speak) but the ICMP protocol; the two are largely independent of each other (though of course, if the host is down entirely, neither will work). There are sites that still react to ICMP while the HTTP server is down (this is rather typical), and there are sites that won't react to ICMP even though the HTTP server is up and functioning perfectly well.
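To see that independence in practice, you can probe both protocols side by side; a small sketch, reusing the example host from above:
host=www.alfe.de
# ICMP reachability (1 probe, 2-second timeout) vs. HTTP availability
ping -c 1 -W 2 "$host" >/dev/null 2>&1 && echo "ICMP: reachable" || echo "ICMP: unreachable"
curl -s --max-time 5 "http://$host/" >/dev/null && echo "HTTP: online" || echo "HTTP: offline"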

Changing network ip in a bash script started from a NFS-mounted folder

I wrote a simple Bash script to change the network address of a Linux Host:
#!/bin/sh
REMOTE_HOST=192.168.2.127 # Default Host address
NEW_IP=192.168.30.33 # New IP I want to set
NEW_GW=192.168.30.1 # New Gateway I want to set
sudo ifconfig eth0 192.168.2.1 # Moving to the right network...
#ping $REMOTE_HOST -c 3 # I can correctly ping the host here...
ssh-copy-id root@${REMOTE_HOST} > /dev/null # ...for my comfort...
# Setting the network with new values for the IP addr and the GW...
COMMAND="sed -i 's#address *\\([0-9.]\\+\\)#address ${NEW_IP}#' /etc/network/interfaces\
&& sed -i 's#gateway *\\([0-9.]\\+\\)#gateway ${NEW_GW}#' /etc/network/interfaces"
ssh root@${REMOTE_HOST} $COMMAND
# done!
# Now restart the network services:
ssh root#${REMOTE_HOST} "/etc/init.d/networking restart &" & # (Note the 2nd '&' !!!)
# Come back to my old IP
sudo ifconfig eth0 192.168.30.10
sudo route add default gw 192.168.30.1
This script works almost perfectly, but:
1) If I run it from my home folder, there are no problems; if I run it from an NFS-shared folder, the script hangs for a minute or two before ending correctly.
2) If I omit the second '&' when restarting the network on the host, the command never returns.
The questions are:
1) What causes the long wait (NFS, the different IP address, the different gateway)? And is it possible to work around it?
2) Why does it happen? How could I avoid it?
Thanks for any kind of help and sorry for my bad English!
You're restarting networking services, which drops all active connections.
Bash reads the file you're running line by line. Since the script lives on NFS (a Network File System), restarting networking terminates the connection to the file, so the system cannot continue executing the lines after the networking restart until the connection is re-established.
Instead, you should first make a local copy of the entire script and then run it locally.
You could also code a script for that ;-)
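A minimal sketch of that workaround, with a placeholder NFS path for wherever the script actually lives:
# Copy the script off NFS so the restart can't cut Bash off from its own source file
cp /mnt/nfs/change_ip.sh /tmp/change_ip.sh
chmod +x /tmp/change_ip.sh
/tmp/change_ip.sh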
