is nginx or haproxy able to do proxy stuff to google.com? - node.js

Hi, my country has blocked google.com. I have a virtual machine outside the country which has access to Google, and it has nginx and HAProxy installed. Based on my limited understanding, these reverse proxies can proxy to internal servers, but is there any way to have them proxy to google.com directly?
Thanks so much.

Instead of using NGINX or HAProxy to proxy some URL or google.com, what you should do is use your VM as a proxy for the browser. Execute the following on your machine:
$ ssh -D 8123 -f -C -q -N sammy@example.com
Explanation of arguments
-D: Tells SSH that we want a SOCKS tunnel on the specified port number (you can choose a number between 1025 and 65535)
-f: Forks the process to the background
-C: Compresses the data before sending it
-q: Uses quiet mode
-N: Tells SSH that no command will be sent once the tunnel is up
This will open a SOCKS proxy on 127.0.0.1:8123; set this in your browser and you can open Google through your server.
For a more detailed article, refer to:
https://www.digitalocean.com/community/tutorials/how-to-route-web-traffic-securely-without-a-vpn-using-a-socks-tunnel
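Once the tunnel is up, you can verify it from the same machine before touching any browser settings. A quick check with curl (assuming curl is installed, and using port 8123 from above) looks like this:

```
# Route a request through the SOCKS5 tunnel; if this prints a status
# code like 200 or 301, the proxy works and traffic exits via your VM.
curl --socks5-hostname 127.0.0.1:8123 -s -o /dev/null -w '%{http_code}\n' https://www.google.com/
```

The --socks5-hostname variant also resolves DNS through the tunnel, which matters when the blocked site is blocked at the DNS level.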


Programmatically set DNS servers (Windows, macOS)

I need to programmatically set the host's DNS servers on its active network interfaces (Wi-Fi, Ethernet, etc.) on both Windows and macOS, and as a bonus Linux.
I want to avoid having to manually update/pollute /etc/hosts for my Kubernetes services I am running on my ingress.
Currently, my process is to manually set the DNS servers for each person on my team running our app.
The problem is that this is a manual process, and I am having trouble automating it because the command outputs are oddly formatted and hard to parse. This means I am unable to determine which network interface is the proper one to use.
Essentially, what needs to be done is the following (on both platforms):
Get the active networks name
Set the DNS servers for the active network to 127.0.0.1 & 8.8.4.4
What is being done manually currently
macOS:
networksetup -setdnsservers Wi-Fi 127.0.0.1 8.8.8.8
sudo killall -HUP mDNSResponder
127.0.0.1 is the local DNS server running on node that serves the A record for the service
8.8.8.8 is Google's Public DNS Server
Currently, I am assuming the user on macOS is using the "Wi-Fi" network, but I'd like to determine this programmatically.
Windows
As administrator:
netsh interface show interface
Locate the network connection for which you want the DNS server changed (e.g. WiFi).
netsh interface ipv4 add dns "WiFi" 127.0.0.1 index=1
netsh interface ipv4 add dns "WiFi" 8.8.8.8 index=2
ipconfig /flushdns
On macOS, I don't think this will do what you want. When you configure multiple DNS servers on macOS, the system resolver doesn't try them in order; it fires off requests semi-randomly between the available servers. This means it'll sometimes send requests for your private domains to the public (Google) server, get told there's no such domain, and stop there. Or it'll send requests for public sites to the localhost DNS and, if that doesn't respond properly, decide that site doesn't work. Basically, the macOS resolver doesn't do failover.
Are your private servers under a non-standard TLD or something like that? If so, you might be able to do the job by adding a file under /etc/resolver/ to redirect queries for that TLD to the private DNS server.
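For example, if the private services lived under a hypothetical .internal TLD, a one-line file would do it (the TLD name here is an assumption; match it to whatever your ingress actually serves):

```
# /etc/resolver/internal -- macOS sends queries for *.internal here
nameserver 127.0.0.1
```

Files under /etc/resolver/ are consulted per-domain, so the semi-random server selection described above doesn't apply: only queries for the matching TLD ever reach the local server, and everything else uses the normal system DNS.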
Anyway, in case it is useful, here's a way to detect the primary (active) network interface and set its DNS servers in macOS:
#!/bin/bash

# Find the device (e.g. en0) that owns the default route.
interfaceDevice=$(netstat -rn | awk '($1 == "default") {print $6; exit}')
if [[ -z "$interfaceDevice" ]]; then
    echo "Unable to get primary network interface device" >&2
    exit 1
fi

# Map the device back to its hardware-port name (e.g. "Wi-Fi").
interfaceName=$(networksetup -listallhardwareports | grep -B1 "Device: $interfaceDevice\$" | sed -n 's/^Hardware Port: //p')
if [[ -z "$interfaceName" ]]; then
    echo "Unable to get primary network interface name" >&2
    exit 1
fi

networksetup -setdnsservers "$interfaceName" 127.0.0.1 8.8.8.8
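The grep -B1 / sed pipeline is the non-obvious part of that script, so here is the same extraction run against canned networksetup -listallhardwareports output (the sample text is an assumed, typical format of that command's output):

```shell
# Two hardware ports, as networksetup would print them (assumed format).
sample='Hardware Port: Wi-Fi
Device: en0
Ethernet Address: aa:bb:cc:dd:ee:ff

Hardware Port: Thunderbolt Ethernet
Device: en4
Ethernet Address: 11:22:33:44:55:66'

# grep -B1 prints the line *before* the match as well as the match;
# sed then keeps only the "Hardware Port:" line and strips the prefix.
name=$(printf '%s\n' "$sample" | grep -B1 'Device: en0$' | sed -n 's/^Hardware Port: //p')
echo "$name"   # Wi-Fi
```

That "Wi-Fi" string is the friendly name that networksetup -setdnsservers expects, mapped from the en0 device that owns the default route.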

How to run ssh over an existing TCP connection

I want to be able to SSH to a number of Linux devices at once, each behind a different NAT. I can't configure the networks they are on. However, I'm having trouble getting ssh to go over an existing connection.
I have full control over both my client and the devices. Here's the process so far:
On my client, I first run
socat TCP-LISTEN:5001,pktinfo,fork EXEC:./create_socket.sh,fdin=3,fdout=4,nofork
Contents of ./create_socket.sh:
ssh -N -M -S "~/sockets/${SOCAT_PEERADDR}" -o "ProxyCommand=socat - FD:3!!FD:4" "root@${SOCAT_PEERADDR}"
On the device, I'm running
socat TCP:my_host:4321 TCP:localhost:22
However, nothing comes in or out of FD:3!!FD:4, I assume because the ProxyCommand is a subprocess. I've also tried setting fdin=3,fdout=3 and changing ./create_socket.sh to:
ssh -N -M -S "~/sockets/${SOCAT_PEERADDR}" -o "ProxyUseFdpass=yes" -o "ProxyCommand=echo 3" "root@${host}"
This prints an error:
mm_receive_fd: no message header
proxy dialer did not pass back a connection
I believe this is because the fd should be passed in some way using sendmsg, but the fd doesn't originate from the subprocess anyway. I'd like to make it as simple as possible, and this feels close to workable.
You want to turn the client/server model on its head and make a generic server spawn a client on demand, in response to an incoming unauthenticated TCP connection from across a network boundary, and then tell that newly-spawned client to use that unauthenticated TCP session. I think that has security considerations you haven't thought of. If a malicious person spams connections to your computer, your computer will spawn a lot of SSH instances to connect back, and these processes can take up a lot of local system resources while authenticating. You're effectively trying to set up SSH to automatically connect to an untrusted (unverified) remote-initiated machine across a network boundary. I can't stress enough how dangerous that could be for your client computer. Using the wrong options could expose any credentials you have or even give a malicious person full access to your machine.
It's also worth noting that the scenario you're asking to do, building a tunnel between multiple devices to multiplex additional connections across an untrusted network boundary, is exactly the purpose of VPN software. Yes, SSH can build tunnels. VPN software can build tunnels better. The concept would be that you'd run a VPN server on your client machine. The VPN server will create a new (virtual) network interface which represents only your devices. The devices would connect to the VPN server and be assigned an IP address. Then, from the client machine, you'd just initiate SSH to the device's VPN address and it will be routed over the virtual network interface and arrive at the device and be handled by its SSH daemon server. Then you don't need to muck around with socat or SSH options for port forwarding. And you'd get all the tooling and tutorials that exist around VPNs. I strongly encourage you to look at VPN software.
If you really want to use SSH, then I strongly encourage you to learn about securing SSH servers. You've stated that the devices are across network boundaries (NAT) and that your client system is unprotected. I'm not going to stop you from shooting yourself in the foot but it would be very easy to spectacularly do so in the situation you've stated. If you're in a work setting, you should talk to your system administrators to discuss firewall rules, bastion hosts, stuff like that.
Yes, you can do what you've stated. I strongly advise caution though. I advise it strongly enough that I won't suggest anything which would work with that as stated. I will suggest a variant with the same concepts but more authentication.
First, you've effectively set up your own SSH bounce server but without any of the common tooling compatible with SSH servers. So that's the first thing I'd fix: use SSH server software to authenticate incoming tunnel requests by using ssh client software to initiate the connection from the device instead of socat. ssh already has plenty of capabilities to create tunnels in both directions and you get authentication bundled with it (with socat, there's no authentication). The devices should be able to authenticate using encryption keys (ssh calls these identities). You'll need to connect once manually from the device to verify and authorize the remote encryption key fingerprint. You'll also need to copy the public key file (NOT the private key file) to your client machine and add it to your authorized_keys files. You can ask for help on that separately if you need it.
A second issue is that you appear to be using fd3 and fd4. I don't know why you're doing that. If anything, you should be using fd0 and fd1 since these are stdin and stdout, respectively. But you don't even need to do that if you're using socat to initiate a connection. Just use - where stdin and stdout are meant. It should be completely compatible with -o ProxyCommand without specifying any file descriptors. There's an example at the end of this answer.
The invocation from the device side might look like this (put it into a script file):
IDENTITY=/home/WavesAtParticles/.ssh/tunnel.id_rsa          # on device
REMOTE_SOCKET=/home/WavesAtParticles/.ssh/$(hostname).sock  # on client
REMOTEUSER=WavesAtParticles                                 # on client
REMOTEHOST=remotehost   # client hostname or IP address accessible from device

while true
do
    echo "$(date -Is) connecting"
    #
    # Set up your SSH tunnel. Check stderr for known issues.
    ssh \
        -i "${IDENTITY}" \
        -R "${REMOTE_SOCKET}:127.0.0.1:22" \
        -o ExitOnForwardFailure=yes \
        -o PasswordAuthentication=no \
        -o IdentitiesOnly=yes \
        -l "${REMOTEUSER}" \
        "${REMOTEHOST}" \
        "sleep inf" \
        2> >(
            read -r line
            if echo "${line}" | grep -q "Error: remote port forwarding failed"
            then
                ssh \
                    -i "${IDENTITY}" \
                    -o PasswordAuthentication=no \
                    -o IdentitiesOnly=yes \
                    -l "${REMOTEUSER}" \
                    "${REMOTEHOST}" \
                    "rm ${REMOTE_SOCKET}" \
                    2>/dev/null # convince me this is wrong
                echo "$(date -Is) removed stale socket"
            fi
            #
            # Re-print stderr to the terminal
            >&2 echo "${line}" # the stderr line we checked
            >&2 cat - # and any unused stderr messages
        )
    echo "disconnected"
    sleep 30
done
Remember, blindly copying and pasting shell scripts is risky. At a minimum, I recommend you read man ssh and man ssh_config, and check the script against shellcheck.net. The intent of the script is:
In a loop, have your device (re)connect to your client to maintain your tunnel.
If the connection drops or fails, then reconnect every 30 seconds.
Run ssh with the following parameters:
-i "${IDENTITY}": specify a private key to use for authentication.
-R "${REMOTE_SOCKET}:127.0.0.1:22": specify a connection forwarder which accepts connections on the remote side at /home/WavesAtParticles/.ssh/$(hostname).sock and forwards them to the local side by connecting to 127.0.0.1:22.
-o ExitOnForwardFailure=yes: if the remote side fails to set up the connection forwarder, then the local side should emit an error and die (and we check for this error in a subshell).
-o PasswordAuthentication=no: do not fall back to a password prompt, particularly since the local user isn't there to type it in.
-o IdentitiesOnly=yes: do not use any default identity nor any identity offered by any local agent. Use only the one specified by -i.
-l "${REMOTEUSER}": log in as the specified user.
remotehost, e.g. your client machine that you want a device to connect to.
Sleep forever
If the connection failed because of a stale socket, then work around the issue by:
Log in separately
Delete the (stale) socket
Print today's date indicating when it was deleted
Loop again
There's an option intended to make this error handling redundant: StreamLocalBindUnlink. However, the option does not work correctly and has had a bug open for years. I imagine that's because there really aren't many people who use ssh to forward over Unix domain sockets. It's annoying, but not difficult to work around.
Using a unix domain socket should limit connectivity to whoever can reach the socket file (which should be only you and root if it's placed in your ${HOME}/.ssh directory and the directory has correct permissions). I don't know if that's important for your case or not.
On the other hand you can also simplify this a lot if you're willing to open a TCP port on 127.0.0.1 for each device. But then any other user on the same system can also connect. You should specifically listen on 127.0.0.1 which would then only accept connections from the same host to prevent external machines from reaching the forwarding port. You'd change the ${REMOTE_SOCKET} variable to, for example, 127.0.0.1:4567 to listen on port 4567 and only accept local connections. So you'd lose the named socket capability and permit any other user on the client machine to connect to your device, but gain a much simpler tunnel script (because you can remove the whole bit about parsing stderr to remove a stale socket file).
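A device-side sketch of that TCP variant (the port 4567 is an arbitrary assumption, and the reconnect loop from the script above still applies around it):

```
# Listen on the client's 127.0.0.1:4567 and forward each connection
# back to this device's own sshd; no socket file, so nothing to clean up.
ssh -i "${IDENTITY}" \
    -R 127.0.0.1:4567:127.0.0.1:22 \
    -o ExitOnForwardFailure=yes \
    -o PasswordAuthentication=no \
    -o IdentitiesOnly=yes \
    -l "${REMOTEUSER}" "${REMOTEHOST}" "sleep inf"
```

The explicit 127.0.0.1 bind address is what keeps the forwarded port off external interfaces, for the reasons given above.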
As long as your device is online (can reach your workstation's incoming port) and is running that script, and the authentication is valid, then the tunnel should also be online or coming-online. It will take some time to recover after a loss (and restore) of network connectivity, though. You can tune that with ConnectTimeout, TCPKeepAlive, and ServerAliveInterval options and the sleep 30 part of the loop. You could run it in a tmux session to keep it going even when you don't have a login session running. You could also run it as a system service on the device to bring it online even after recovering from a power failure.
Then from your client, you can connect in reverse:
ssh -o ProxyCommand='socat - unix-connect:/home/WavesAtParticles/remotehost.sock' -l WavesAtParticles .
In this invocation, you'll start ssh. It will then set up the proxycommand using socat. It will take its stdin/stdout and relay it through a connected AF_UNIX socket at the path provided. You'll need to update the path for the remote host you expect. But there's no need to specify file descriptors at all.
If ssh complains:
2019/08/26 18:09:52 socat[29914] E connect(5, AF=1 "/home/WavesAtParticles/remotehost.sock", 7): Connection refused
ssh_exchange_identification: Connection closed by remote host
then the tunnel is currently down and you should investigate the remotehost device's connectivity.
If you use the remote forwarding option with a TCP port listening instead of a unix domain socket, then the client-through-tunnel-to-remote invocation becomes even easier: ssh -p 4567 WavesAtParticles@localhost.
Again, you're trying to invert the client/server model and I don't think that's a very good idea to do with SSH.
I’m going to try this today:
http://localhost.run/
It seems like what you are looking for.
Not to answer your question but helpful for people who may not know:
Ngrok is the easiest way I've found. They do web servers as well as TCP connections. I'd recommend installing it through Homebrew.
https://ngrok.com/product
$ ngrok http 5000
In the terminal for http, 5000 being the port of your application.
$ ngrok tcp 5000
In the terminal for tcp.
It's free for testing (random, changing domains).
For TCP connections, remove "http://" from the web address to get the IP address. Sorry, I can't remember the details; I think the client maps to port 80 by default, and I believe you can change that by adding a port such as 5001. Google it to double-check.

How to configure https_check URL in nagios

I have installed Nagios (Nagios® Core™ Version 4.2.2) on a Linux server. I have written a JIRA URL check using check_http for an HTTPS URL.
It should get a 200 response, but it gives HTTP code 302.
[demuc1dv48:/pkg/vdcrz/Nagios/libexec][orarz]# ./check_http -I xx.xx.xx -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT
SSL Version: TLSv1
HTTP OK: HTTP/1.1 302 Found - 296 bytes in 0.134 second response time |time=0.134254s;;;0.000000 size=296B;;;
So I configured the same in the Nagios configuration file:
define command{
    command_name    check_https_jira_prod
    command_line    $USER1$/check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
}
Now my JIRA server is down, but this is not reflected in the Nagios check. The Nagios response still shows HTTP code 302 only.
How to fix this issue?
You did not specify, but I assume you defined your command in the Nagios central server's commands.cfg configuration file. You also need to define a service in services.cfg, as services use commands to run scripts.
If you are running your check_http check from a different server, you also need to define it in the nrpe.cfg configuration file on that remote machine and then restart nrpe.
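For completeness, a minimal services.cfg entry tying that command to a host might look like this (the host_name and the use template are assumptions; adjust them to your own object definitions):

```
define service{
    use                     generic-service
    host_name               jira-prod
    service_description     JIRA HTTPS Dashboard
    check_command           check_https_jira_prod
}
```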
As a side note, from the output you've shared, I believe you're not using the flags that the check_http Nagios plugin supports correctly.
From your post:
check_http -I xxx.xxx.xxx.com -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S CONNECT -e 'HTTP/1.1 302'
From ./check_http -h:
-I, --IP-address=ADDRESS
IP address or name (use numeric address if possible to bypass DNS lookup).
You are using a host name instead (xxx.xxx.xxx.com).
-S, --ssl=VERSION
Connect via SSL. Port defaults to 443. VERSION is optional, and prevents auto-negotiation (1 = TLSv1, 2 = SSLv2, 3 = SSLv3).
You specified CONNECT, which is not one of the valid values.
You can't get code 200 unless you set the follow parameter in the check_http command.
I suggest using something like this:
./check_http -I jira-ex.telefonica.de -u https://xxx.xxx.xxx.com/secure/Dashboard.jspa -S -f follow
The -f follow is mandatory for your use case.

SSH port forwarding on Google Compute Engine

I am trying to forward traffic with Google Compute instances, but with no luck.
Here is the scenario:
I currently have 2 instances: main-server and mini-server-1.
I want to ssh into mini-server-1 from main-server and create dynamic port forwarding like so:
gcloud compute ssh "mini-server-1" --zone="us-central1-f" --ssh-flag="-D:5551" --ssh-flag="-N" --ssh-flag="-n" &
I have this error:
bind: Cannot assign requested address
I tried: ssh -N username@mini-server-1 -D 5551 & (with all IPs: internal, external, and the hostname).
When I run netstat I can see that the ports are free.
Here is wget with the proxy, from main-server:
wget google.com -e use_proxy=yes -e http_proxy=127.0.0.1:5551
Connecting to 127.0.0.1:5551... connected.
Proxy request sent, awaiting response...
Does someone know how I can achieve this?
Much simpler syntax:
gcloud compute ssh my-vm-name --zone=europe-west1-b -- -NL 4000:localhost:4000
You can pass as many options as you want:
-NL 8080:localhost:80 -NL 8443:localhost:443
https://cloud.google.com/solutions/connecting-securely
https://cloud.google.com/community/tutorials/ssh-tunnel-on-gce
https://cloud.google.com/community/tutorials/ssh-port-forwarding-set-up-load-testing-on-compute-engine
Run the command with the debug flag to get more information:
gcloud compute ssh --ssh-flag=-vvv "mini-server-1" \
--zone="us-central1-f" \
--ssh-flag="-D:5551" \
--ssh-flag="-N" \
--ssh-flag="-n" &
And, as mentioned in my comment before, use https_proxy.
gcloud compute ssh --ssh-flag="-L 2222:localhost:8080" --zone "us-central1-a" "your_instance_name"
With this command you can port-forward and connect, from your local PC, to a port on a particular VM instance.
2222 is the local port.
8080 is the remote port where the application is running.
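With that tunnel running, a quick way to confirm it works (assuming something is actually listening on the VM's port 8080) is to hit the local end:

```
# In a second terminal on the local PC; the request is carried over
# the SSH tunnel and answered by the app on the VM's port 8080.
curl http://localhost:2222/
```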

ssh port forwarding (tunneling) in linux

I have a specific scenario that I want to solve. I currently connect to a host via port forwarding:
laptop -> gateway -> remote_server_1
and another host:
laptop -> remote_server_2
with passwordless login working on both. Neither of the remote servers is visible to the outside world. Now I'm running a service on remote_server_2 that I'd like to be able to access on remote_server_1. I presume I have to set up reverse port forwarding from remote_server_1 to my laptop, and then on to remote_server_2, but I'm not sure how to do this. Has anyone come across this situation before?
Edit:
The full solution in case anyone else needs it:
mylaptop$ ssh -L 3001:localhost:3000 server_2
server_2$ netcat -l 3000
Then set up the tunnel via the gateway to server_1:
ssh -t -t -L 3003:server_1:22 gateway
Then access it from server_1:
ssh -R 3002:localhost:3001 -p3003 localhost
echo "bar" | nc localhost 3002
and hey presto, server_2 shows bar :-)
You have to do exactly as you've described. Set up the server on server_2.
mylaptop$ ssh -L 3001:localhost:3000 server_2
server_2$ netcat -l 3000
Then access it from server_1.
mylaptop$ ssh -R 3002:localhost:3001 server_1
server_1$ echo "foo" | netcat localhost 3002
server_2 will show foo.
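As a side note, on reasonably recent OpenSSH (7.3+) the laptop -> gateway -> server_1 leg can be collapsed with ProxyJump instead of the separate -t -t -L tunnel, which would shorten the approach above to something like this (host names as in the question; treat it as a sketch):

```
# Jump through the gateway and set up the reverse forward in one step:
ssh -J gateway -R 3002:localhost:3001 server_1
```

This avoids allocating a local listening port (3003) just to reach server_1.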
