Checking whether a domain or IP address is online in Linux

I want to know whether a given website or IP address is online or offline. I have researched a lot, but all I can find suggests installing some software or using the ping command.
I did this test:
ping -c 5 -n example.com
It outputs the expected result, but when I do the following with a website that does not exist, the result is almost the same as if the website existed, with 0% packet loss. Please see the attached screenshot.
ping -c 5 -n examplesurenotexists.com
I am confused by this. Is there a better way to do this task?

If you want to know whether a website is online or offline, simply check the website:
if curl -s http://www.alfe.de >/dev/null
then
echo "online"
else
echo "offline"
fi
Using ping instead would not test the HTTP protocol (which is what websites speak) but the ICMP protocol; the two are independent of each other (though of course, if the host is down, neither will work). There are sites which still react to ICMP while the HTTP server is down (this is rather typical), and there are sites which won't react to ICMP although the HTTP server is up and running perfectly well.
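If it helps, the check can be extended to report both layers separately. This is only a sketch, assuming curl and ping are installed; the host-extraction lines are a rough heuristic, not robust URL parsing:

```shell
# Report HTTP and ICMP status independently for a URL.
check_site() {
    url="$1"
    host="${url#*://}"; host="${host%%/*}"   # crude host extraction from the URL
    if curl -s --max-time 10 --head "$url" >/dev/null 2>&1; then
        echo "HTTP up"
    else
        echo "HTTP down"
    fi
    if ping -c 1 -w 5 "$host" >/dev/null 2>&1; then
        echo "ICMP up"
    else
        echo "ICMP down"
    fi
}
```

Running, say, check_site http://www.alfe.de prints one line per protocol, which makes the "ICMP up but HTTP down" case visible instead of hidden.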


How to run ssh over an existing TCP connection

I want to be able to SSH to a number of Linux devices at once, each behind a different NAT. I can't configure the networks that they are on. However, I'm having trouble getting ssh to go over an existing connection.
I have full control over both my client and the devices. Here's the process so far:
On my client, I first run
socat TCP-LISTEN:5001,pktinfo,fork EXEC:./create_socket.sh,fdin=3,fdout=4,nofork
Contents of ./create_socket.sh:
ssh -N -M -S "~/sockets/${SOCAT_PEERADDR}" -o "ProxyCommand=socat - FD:3!!FD:4" "root@${SOCAT_PEERADDR}"
On the device, I'm running
socat TCP:my_host:4321 TCP:localhost:22
However, nothing comes in or out of FD:3!!FD:4, I assume because the ProxyCommand is a subprocess. I've also tried setting fdin=3,fdout=3 and changing ./create_socket.sh to:
ssh -N -M -S "~/sockets/${SOCAT_PEERADDR}" -o "ProxyUseFdpass=yes" -o "ProxyCommand=echo 3" "root@${host}"
This prints an error:
mm_receive_fd: no message header
proxy dialer did not pass back a connection
I believe this is because the fd should be sent in some way using sendmsg, but the fd doesn't originate from the subprocess anyways. I'd like to make it as simple as possible, and this feels close to workable.
You want to turn the client/server model on its head and make a generic server to spawn a client on-demand and in-response-to an incoming unauthenticated TCP connection from across a network boundary, and then tell that newly-spawned client to use that unauthenticated TCP session. I think that may have security considerations that you haven't thought of. If a malicious person spams connections to your computer, your computer will spawn a lot of SSH instances to connect back and these processes can take up a lot of local system resources while authenticating. You're effectively trying to set up SSH to automatically connect to an untrusted (unverified) remote-initiated machine across a network boundary. I can't stress how dangerous that could be for your client computer. Using the wrong options could expose any credentials you have or even give a malicious person full access to your machine.
It's also worth noting that the scenario you're asking to do, building a tunnel between multiple devices to multiplex additional connections across an untrusted network boundary, is exactly the purpose of VPN software. Yes, SSH can build tunnels. VPN software can build tunnels better. The concept would be that you'd run a VPN server on your client machine. The VPN server will create a new (virtual) network interface which represents only your devices. The devices would connect to the VPN server and be assigned an IP address. Then, from the client machine, you'd just initiate SSH to the device's VPN address and it will be routed over the virtual network interface and arrive at the device and be handled by its SSH daemon server. Then you don't need to muck around with socat or SSH options for port forwarding. And you'd get all the tooling and tutorials that exist around VPNs. I strongly encourage you to look at VPN software.
If you really want to use SSH, then I strongly encourage you to learn about securing SSH servers. You've stated that the devices are across network boundaries (NAT) and that your client system is unprotected. I'm not going to stop you from shooting yourself in the foot but it would be very easy to spectacularly do so in the situation you've stated. If you're in a work setting, you should talk to your system administrators to discuss firewall rules, bastion hosts, stuff like that.
Yes, you can do what you've stated. I strongly advise caution though. I advise it strongly enough that I won't suggest anything which would work with that as stated. I will suggest a variant with the same concepts but more authentication.
First, you've effectively set up your own SSH bounce server but without any of the common tooling compatible with SSH servers. So that's the first thing I'd fix: use SSH server software to authenticate incoming tunnel requests by using ssh client software to initiate the connection from the device instead of socat. ssh already has plenty of capabilities to create tunnels in both directions and you get authentication bundled with it (with socat, there's no authentication). The devices should be able to authenticate using encryption keys (ssh calls these identities). You'll need to connect once manually from the device to verify and authorize the remote encryption key fingerprint. You'll also need to copy the public key file (NOT the private key file) to your client machine and add it to your authorized_keys files. You can ask for help on that separately if you need it.
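For the key setup, here is a minimal sketch; the file paths, key comment, and host name are placeholders of my own, not taken from the question:

```shell
# On the device: create a dedicated tunnel identity (no passphrase,
# since the reconnect loop must run unattended).
rm -f /tmp/tunnel.id_ed25519 /tmp/tunnel.id_ed25519.pub
ssh-keygen -t ed25519 -N '' -f /tmp/tunnel.id_ed25519 -C 'device tunnel identity'
# Then copy the PUBLIC half (never the private key) to the client and
# append it to ~/.ssh/authorized_keys there, e.g.:
#   ssh-copy-id -i /tmp/tunnel.id_ed25519.pub WavesAtParticles@clienthost
```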
A second issue is that you appear to be using fd3 and fd4. I don't know why you're doing that. If anything, you should be using fd0 and fd1 since these are stdin and stdout, respectively. But you don't even need to do that if you're using socat to initiate a connection. Just use - where stdin and stdout are meant. It should be completely compatible with -o ProxyCommand without specifying any file descriptors. There's an example at the end of this answer.
The invocation from the device side might look like this (put it into a script file):
IDENTITY=/home/WavesAtParticles/.ssh/tunnel.id_rsa # on device
REMOTE_SOCKET=/home/WavesAtParticles/.ssh/$(hostname).sock # on client
REMOTEUSER=WavesAtParticles # on client
REMOTEHOST=remotehost # client hostname or IP address accessible from device
while true
do
    echo "$(date -Is) connecting"
    #
    # Set up your SSH tunnel. Check stderr for known issues.
    ssh \
        -i "${IDENTITY}" \
        -R "${REMOTE_SOCKET}:127.0.0.1:22" \
        -o ExitOnForwardFailure=yes \
        -o PasswordAuthentication=no \
        -o IdentitiesOnly=yes \
        -l "${REMOTEUSER}" \
        "${REMOTEHOST}" \
        "sleep inf" \
        2> >(
            read -r line
            if echo "${line}" | grep -q "Error: remote port forwarding failed"
            then
                ssh \
                    -i "${IDENTITY}" \
                    -o PasswordAuthentication=no \
                    -o IdentitiesOnly=yes \
                    -l "${REMOTEUSER}" \
                    "${REMOTEHOST}" \
                    "rm ${REMOTE_SOCKET}" \
                    2>/dev/null # convince me this is wrong
                echo "$(date -Is) removed stale socket"
            fi
            #
            # Re-print stderr to the terminal
            >&2 echo "${line}" # the stderr line we checked
            >&2 cat -          # and any unused stderr messages
        )
    echo "disconnected"
    sleep 30
done
Remember, blindly copying and pasting shell scripts is risky. At a minimum, I recommend you read man ssh and man ssh_config, and check the script against shellcheck.net. The intent of the script is:
In a loop, have your device (re)connect to your client to maintain your tunnel.
If the connection drops or fails, then reconnect every 30 seconds.
Run ssh with the following parameters:
-i "${IDENTITY}": specify a private key to use for authentication.
-R "${REMOTE_SOCKET}:127.0.0.1:22": specify a connection forwarder which accepts connections on the remote side at /home/WavesAtParticles/.ssh/$(hostname).sock and then forwards them to the local side by connecting to 127.0.0.1:22.
-o ExitOnForwardFailure=yes: if the remote side fails to set up the connection forwarder, then the local side should emit an error and die (and we check for this error in a subshell).
-o PasswordAuthentication=no: do not fall back to a password request, particularly since the local user isn't here to type it in
-o IdentitiesOnly=yes: do not use any default identity nor any identity offered by any local agent. Use only the one specified by -i.
-l "${REMOTEUSER}": log in as the specified user.
"${REMOTEHOST}": the remote host, i.e. your client machine, that you want the device to connect to.
"sleep inf": sleep forever, so the session (and the forwarding it carries) stays open.
If the connection failed because of a stale socket, then work around the issue by:
Log in separately
Delete the (stale) socket
Print today's date indicating when it was deleted
Loop again
There's an option which is intended to make this error handling redundant: StreamLocalBindUnlink. However, the option does not work correctly and has had a bug open for years. I imagine that's because there really aren't many people who use ssh to forward over unix domain sockets. It's annoying, but not difficult to work around.
Using a unix domain socket should limit connectivity to whoever can reach the socket file (which should be only you and root if it's placed in your ${HOME}/.ssh directory and the directory has correct permissions). I don't know if that's important for your case or not.
On the other hand you can also simplify this a lot if you're willing to open a TCP port on 127.0.0.1 for each device. But then any other user on the same system can also connect. You should specifically listen on 127.0.0.1 which would then only accept connections from the same host to prevent external machines from reaching the forwarding port. You'd change the ${REMOTE_SOCKET} variable to, for example, 127.0.0.1:4567 to listen on port 4567 and only accept local connections. So you'd lose the named socket capability and permit any other user on the client machine to connect to your device, but gain a much simpler tunnel script (because you can remove the whole bit about parsing stderr to remove a stale socket file).
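To illustrate that trade-off, the device-side loop from above might shrink to something like the following sketch (same placeholder names as before; port 4567 is arbitrary):

```shell
# Sketch: the same reverse tunnel, but over loopback TCP port 4567,
# so there is no stale socket file and no stderr parsing to clean one up.
tunnel_loop() {
    while true
    do
        echo "$(date -Is) connecting"
        ssh \
            -i "${IDENTITY}" \
            -R "127.0.0.1:4567:127.0.0.1:22" \
            -o ExitOnForwardFailure=yes \
            -o PasswordAuthentication=no \
            -o IdentitiesOnly=yes \
            -l "${REMOTEUSER}" \
            "${REMOTEHOST}" \
            "sleep inf"
        echo "$(date -Is) disconnected"
        sleep 30
    done
}
# run as: IDENTITY=... REMOTEUSER=... REMOTEHOST=... tunnel_loop
```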
As long as your device is online (can reach your workstation's incoming port) and is running that script, and the authentication is valid, then the tunnel should also be online or coming-online. It will take some time to recover after a loss (and restore) of network connectivity, though. You can tune that with ConnectTimeout, TCPKeepAlive, and ServerAliveInterval options and the sleep 30 part of the loop. You could run it in a tmux session to keep it going even when you don't have a login session running. You could also run it as a system service on the device to bring it online even after recovering from a power failure.
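As for running it as a system service, a minimal systemd unit might look like the following; the unit name and script path are assumptions of mine, so adapt them to the device:

```ini
# /etc/systemd/system/reverse-tunnel.service (hypothetical path)
[Unit]
Description=Persistent reverse SSH tunnel to the client machine
Wants=network-online.target
After=network-online.target

[Service]
User=WavesAtParticles
ExecStart=/home/WavesAtParticles/bin/tunnel.sh
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
```

After installing it, systemctl enable --now reverse-tunnel would start the tunnel at boot and restart it 30 seconds after any exit.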
Then from your client, you can connect in reverse:
ssh -o ProxyCommand='socat - unix-connect:/home/WavesAtParticles/remotehost.sock' -l WavesAtParticles .
In this invocation, you'll start ssh. It will then set up the proxycommand using socat. It will take its stdin/stdout and relay it through a connected AF_UNIX socket at the path provided. You'll need to update the path for the remote host you expect. But there's no need to specify file descriptors at all.
If ssh complains:
2019/08/26 18:09:52 socat[29914] E connect(5, AF=1 "/home/WavesAtParticles/remotehost.sock", 7): Connection refused
ssh_exchange_identification: Connection closed by remote host
then the tunnel is currently down and you should investigate the remotehost device's connectivity.
If you use the remote forwarding option with a TCP port listening instead of a unix domain socket, then the client-through-tunnel-to-remote invocation becomes even easier: ssh -p 4567 WavesAtParticles@localhost.
Again, you're trying to invert the client/server model and I don't think that's a very good idea to do with SSH.
I’m going to try this today:
http://localhost.run/
It seems like what you are looking for.
Not to answer your question but helpful for people who may not know:
Ngrok is the easiest way I’ve found. They handle web servers as well as TCP connections. I’d recommend installing it through Homebrew.
https://ngrok.com/product
$ ngrok http 5000
In the terminal for http, 5000 being the port of your application.
$ ngrok tcp 5000
In the terminal for tcp.
It’s free for testing (with random, changing domains).
For TCP connections, remove "http://" from the web address to get the IP address. Sorry, I can’t remember exactly. I think the client ports to 80, and I believe you can change that by adding port 5001 or something; google it to double-check.

wget connection timed out on same server

I've got a very strange problem.
There's a cron job on the server to run a script daily:
wget -O /dev/null --timeout=300 --tries=1 "http://website.com/script"
It was all working well until about two weeks ago, when I started receiving errors:
--2016-07-13 09:45:01-- http://website.com/script
Resolving website.com (website.com)... 11.22.33.44
Connecting to website.com (website.com)|11.22.33.44|:80... failed: Connection timed out.
Giving up.
Here is some information for this question:
The cron job runs on the same server that hosts http://website.com.
I can access the script (http://website.com/script) correctly from browser on my desktop.
The server is CentOS 7, with WHM and cPanel installed.
Does anyone know what the issue could be? Or how am I supposed to identify the issue?
Thanks
If the issue is still unresolved:
You could try running wget in debug mode to see if you get some more info.
wget -dv -O /dev/null --timeout=300 --tries=1 "http://website.com/script"
Also, confirm whether the resolved IP 11.22.33.44 belongs to one of the server's NICs.
ip a s (ip address show) or
ifconfig -a
If the IP is not listed, it could be that 11.22.33.44 is a public-facing address of the company's firewall, and that the firewall is directing requests on port 80 from the outside/internet (where your browser is) to that specific server. The firewall/NAT/proxy could be configured not to allow requests coming from inside the network to reach the external IP of the firewall and get back in.
If this is the case, you could try changing your wget call to use the internal IP address, something like this (still using -dv for debugging; remove it afterwards):
wget -dv -O /dev/null --timeout=300 --tries=1 --header="Host: website.com" http://127.0.0.1/script
Note 1: --header="Host: website.com" tells your web server which site you want to reach.
Note 2: you may have to change the IP 127.0.0.1 (the localhost address) to one of the server's NIC addresses.
If the website is up, try a different command.
*/10 * * * * /usr/bin/wget -q -O temp.txt http://website.com/script
Try adding -H
wget -H -O /dev/null --timeout=300 --tries=1 "http://website.com/script"

Check for specific string within wget result set and update the log based on that

I have a permanent VPN connection to a server in Germany. I have intermittent outages where the VPN connection drops and traffic falls back to the default broadband ISP path. I am trying to track these outages by using the way Google responds depending on where your connection originates. If I connect from my default connection in the US, I get the standard google.com server, but if I connect over the VPN server in Germany, a google.com connection attempt resolves to the google.de site instead of google.com. This is a suitable criterion for seeing whether the connection is down.
So, if I issue a wget against www.google.com, the result will include either google.de, indicating that Google detects the connection is coming from Germany, or google.com, which indicates the connection is coming from the U.S., meaning for my purposes that the VPN connection is down. I can't figure out the proper syntax for the wget and the grep that follows to make this determination in the script.
The script I came up with doesn't seem to work consistently. When it is executed by cru (cron) it reports U.S.; when I execute it interactively, I get Germany.
Any suggestions?
rm index*
wget -nv google.com 2 > /jffs/user/google.txt
cat google.txt | grep google.de
if [[ $? -eq 0 ]]; then
echo "$(date) Germany" >> /jffs/user/google.log
else
echo "$(date) U.S." >> /jffs/user/google.log
fi
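One hedged guess at the inconsistency: cron runs with a different working directory and environment than an interactive shell, so relative paths like google.txt may not point where you expect, and note the stray space in 2 > , which makes the shell pass 2 to wget as a second URL instead of redirecting stderr. A sketch that factors the check into a small function and uses absolute paths throughout (same paths as in the question):

```shell
# Decide Germany vs. U.S. from saved wget output.
# wget -nv writes its status line (including the final URL) to stderr.
classify_location() {
    if grep -q 'google\.de' "$1"; then
        echo "Germany"
    else
        echo "U.S."
    fi
}
# Intended cron usage (absolute paths, since cron's working directory differs):
#   wget -nv -O /dev/null google.com 2> /jffs/user/google.txt
#   echo "$(date) $(classify_location /jffs/user/google.txt)" >> /jffs/user/google.log
```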

Linux send URL my IP address on startup

So, I'm trying to write a simple bash script to send my internal IP address to a website of mine on startup. I am on a network with DHCP, so I don't always know what the IP address of my Raspberry Pi will be after I do a reboot over ssh. I figured I could fix this by sending my website the current IP on startup. I haven't written many bash scripts, and I'm not really sure how to send data to my website. Right now I was just trying in the terminal this:
wget -qO- http://http://mywebsite.com/private/CurrentIP.php?send=$(/sbin/ifconfig eth0|grep 'inet addr:')
But I'm not having any luck. I don't actually know much about linux, and I'm trying to learn. That's why I got the raspberry pi actually. Anyway, can someone head me in the right direction?
I already know I need to put it in /etc/init.d/.
You could do this:
IP_ADDR=$(ifconfig eth0 | sed -rn 's/^.*inet addr:(([0-9]+\.){3}[0-9]+).*$/\1/p')
wget -q -O /dev/null http://mywebsite.com/private/CurrentIP.php?send=${IP_ADDR}
...but if your machine is stuck behind NAT, $IP_ADDR won't be your externally-visible address. Might want to use $_SERVER['REMOTE_ADDR'] in your PHP instead of/in addition to this to get the address for your client that your server sees.
Edit: Sounds like you want to be able to find your Raspberry Pi on your local (DHCP-managed) network after reboots. Have you considered using Multicast DNS instead?
How it works in practice: Let's say you've set the hostname of your RasPi to gooseberry. If you've enabled a multicast DNS server on that machine, other computers on the same network segment that can send multicast DNS queries will be able to find it at the domain name gooseberry.local. This is a peer-to-peer protocol and not dependent on gooseberry receiving any specific address via DHCP - so if it reboots and receives a new address, other machines should still be able to find it.
Mac OS X has this enabled out of the box; this can be enabled on most Linux distros (on Debian/Ubuntu you'd install the avahi-daemon and libnss-mdns packages); not sure about Windows, but a quick Google shows encouraging results.
This worked for me (wget part untested, but it finds IP address):
interface="eth0"
ip_addr=$(ifconfig ${interface} | sed -rn 's/^.*inet *([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}).*$/\1/p')
wget -q -O /dev/null http://mywebsite.com/private/CurrentIP.php?send=${ip_addr}
Can't you use:
hostname --ip-address

Bash script to (more or less) reliably check if the Internet is up

I need a Bash (or a plain shell) script to put in a cronjob that every minute checks if the Internet is up.
This is how I did it:
#! /bin/sh
host1=google.com
host2=wikipedia.org
curr_date=`date +"%Y%m%d%H%M"`
echo -n "${curr_date};"
((ping -w5 -c3 $host1 || ping -w5 -c3 $host2) > /dev/null 2>&1) &&
echo "up" || (echo "down" && exit 1)
How would you do it? Which hosts would you ping?
Clarifications:
By "internet is up", I mean my internet connection.
By "up", I mean to have usable connection (doesn't really matter if we are talking about the DNS being down or the connection is really really slow [mind the -w for timeout]). That is also why I didn't include any IP but only hosts.
Should I also ping Stack Overflow? I mean, if I can't access Google, Wikipedia or Stack Overflow, I don't want Internet :p
That one seems like a good solution. Just add a few more hosts, and maybe some pure IP hosts so you don't rely on DNS functioning (which in itself depends on your definition of "up").
Thanks for your code, it works great, I've left only one line actually:
((ping -w5 -c3 8.8.8.8 || ping -w5 -c3 4.2.2.1) > /dev/null 2>&1) && echo "up" || (echo "down" && exit 1)
What portion of Internet connectivity are you looking to check? DHCP? DNS? Physically being plugged into a jack? Kernel recognizing the presence of the NIC?
You can manually query your ISP's DNS server(s) by using the host(1) command. This is generally a good indication of whether your router has lost its connection to the ISP.
You can query what interfaces your kernel has by using netstat(8) or ifconfig(8).
You can get detailed statistics about the interface using ifstat.
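Each of those layers can be probed from a shell script; here is a rough sketch (Linux-specific: it reads /sys and /proc, and assumes getent and ping exist):

```shell
# Each probe exits 0 if that layer looks OK.
have_iface()  { [ -e "/sys/class/net/$1" ]; }          # kernel recognizes the NIC
have_route()  { awk '$2 == "00000000" { f=1 } END { exit !f }' /proc/net/route; }  # default gateway present
can_resolve() { getent hosts "$1" >/dev/null 2>&1; }   # name resolution works
can_ping()    { ping -w5 -c1 "$1" >/dev/null 2>&1; }   # ICMP round trip succeeds
```

Checking them in order, e.g. have_iface eth0 && have_route && can_resolve google.com && can_ping 8.8.8.8, narrows down which layer actually failed.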
