How to get the IP of an outside connection using telnet, or perhaps a misconfiguration of memcached allowed IPs - linux

Server A runs memcached; Server B needs to use memcached. If I leave memcached open to all IPs (the default), Server B can:
telnet server.a.etc 11211
and get in. If I ping Server B, I get an IP, 153.353.234.23 (example), and I put that in my memcached options
OPTIONS="-l 153.353.234.23"
restart, and now Server B can't connect over telnet any longer. I assume this is because it is perhaps using a different IP address for this connection? Some other error? Either way, I'd like to be able to see the IP of Server B when it connects.
I enabled memcached logging
-vvv >/var/log/memcached.log 2>&1
and I see logs getting generated, but not the IP addresses of the connecting machines.

With memcached -l you are instructing the memcached server to bind to a specific IP address of the server. This is useful if you want memcached to accept data only on a specific IP or network interface of your server node.
You can leave the default to make memcached bind to any IP address of the server (0.0.0.0 on IPv4).
I don't think memcached has a command-line switch to log the source IP.
You can use the command
netstat -natp | grep 11211 | grep ESTABLISHED
on a Linux system to get the established connections to memcached and the IPs of the clients.
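For example (10.0.0.5 is just a placeholder for one of Server A's own interface addresses; -l must not be given Server B's address):
OPTIONS="-l 10.0.0.5"    # bind only to this local interface of Server A
OPTIONS="-l 0.0.0.0"     # default behaviour: listen on all IPv4 interfaces
Then, while Server B is connected via telnet, run the netstat command above on Server A; the Foreign Address column shows the IP Server B is connecting from.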

To see which public IP you are connecting from, you can:
telnet telnetmyip.com
or:
ssh telnetmyip.com
or:
curl telnetmyip.com

Related

How to access a host port (bind with ssh -R) from a container?

Using Docker 1.12.1, I face a strange behaviour trying to access a host port created with ssh -R.
Basically I try to access a service running on port 12345 on my local machine from a docker container running on a server.
I opened an ssh connection with ssh -R *:12345:localhost:12345 user@server to open port 12345 on the server that forwards to port 12345 on my local machine.
Now when I try curl https://172.17.42.1:12345 inside the container (172.17.42.1 is the IP to access the docker host from the docker container) I get:
root@f6873fe1109b:/# curl https://172.17.42.1:12345
curl: (7) Failed to connect to 172.17.42.1 port 12345: Connection refused
But on the server the command curl http://localhost:12345 succeeds (i.e. no Connection refused):
server$ curl http://localhost:12345
curl: (52) Empty reply from server
I don't really understand how the port binding done with ssh differs from a test with nc on the server (which works):
# on server
nc -l -p 12345
# inside a container
root@f6873fe1109b:/# curl http://172.17.42.1:12345
curl: (52) Empty reply from server
NB: the container was started with docker run -it --rm maven:3-jdk-8 bash.
What can I do to allow my container to access the host port corresponding to an ssh binding?
From man ssh:
-R [...]
... Specifying a remote bind_address will only succeed if the server's GatewayPorts option is enabled
And man sshd_config:
GatewayPorts
Specifies whether remote hosts are allowed to connect to ports forwarded for the client. By default, sshd(8) binds remote port forwardings to the loopback address. This prevents other remote hosts from connecting to forwarded ports. GatewayPorts can be used to specify that sshd should allow remote port forwardings to bind to non-loopback addresses, thus allowing other hosts to connect. The argument may be “no” to force remote port forwardings to be available to the local host only, “yes” to force remote port forwardings to bind to the wildcard address, or “clientspecified” to allow the client to select the address to which the forwarding is bound. The default is “no”.
This means that a default sshd installation only allows forwards that bind to the loopback interface. If you want to allow forwards on other interfaces than loopback, you need to set the GatewayPorts option to yes or clientspecified in your /etc/ssh/sshd_config.
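A minimal sketch of that change (the reload command varies by distribution):
# on the server, in /etc/ssh/sshd_config
GatewayPorts clientspecified
# reload sshd so it picks up the change
sudo systemctl reload sshd
# from the local machine, re-create the forwarding, binding explicitly
# to the docker bridge address from the question
ssh -R 172.17.42.1:12345:localhost:12345 user@server
After that, curl http://172.17.42.1:12345 from inside the container should reach the forwarded port instead of being refused.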

How can I find where node.js is running?

I have a VPS with node.js installed. I already uploaded a basic example to test it on the server, so I tried doing this:
I access by SSH, navigate to my project folder and run
node app.js
I get this message
Express server listening on port 8080
I thought I could see my app at
example.com:8080 or server.example:8080... but nothing. Then I tried with the info from os.networkInterfaces(); and os.host(); and still nothing happened.
Could you help me out? As you can see I am a total noob at node.js. What am I doing wrong? What should I do before running my app? Something related to DNS? I have no idea.
How do you ssh to your host? With IP or name? Is it something like:
ssh root@example.com
If so, then at least you know your DNS is ok.
Once on the server, do a
netstat -a
If you find *:8080, then your server is listening on the default IP. If you see something like 12.23.45.67:8080, then that number is the IP your server is listening on.
ifconfig
will give you the server's IP. This should be the same as the IP of example.com. If not, then maybe there is some router/firewall in front of your server and you have to configure it to allow port 8080 to reach your server.
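For example, to narrow those checks down to port 8080 (the curl line is just an extra local sanity check, not something from the answer above):
netstat -an | grep 8080
ifconfig
curl http://localhost:8080
If curl answers locally but example.com:8080 still does not respond, the problem is DNS or a firewall/router in front of the server rather than the app itself.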
If someone ever has the same problem, this is how I solved it on CentOS:
Open this file:
/etc/csf/csf.conf
Add the required port (8080 here):
# Allow incoming TCP ports
TCP_IN = "20,21,22,25,53,80,110,143,443,465,587,993,995,26,8080"
# Allow outgoing TCP ports
TCP_OUT = "20,21,22,25,37,43,53,80,110,113,443,587,873"
Restart csf:
csf -r

SSH Agent forward specific keys rather than all registered ssh keys

I am using agent forwarding, and it works fine. But the ssh client is sharing all registered (ssh-add) keys with the remote server. I have personal keys that I don't want to share with the remote server. Is there a way to restrict which keys are being forwarded?
I have multiple github accounts and aws accounts. I don't want to share all the ssh keys.
It looks like it is possible with OpenSSH 6.7 - it supports unix socket forwarding. We could start a secondary ssh-agent with specific keys and forward its socket to the remote host. Unfortunately this version is not available for my server/client systems at the time of writing.
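For reference, a rough sketch of that unix-socket approach with OpenSSH >= 6.7 (the key path ~/.ssh/id_github_work and the remote socket path /home/userA/tmp/fwd-agent.socket are made-up placeholders; the remote directory must exist and the socket path must not):
ssh-agent sh -c '
  ssh-add ~/.ssh/id_github_work     # add only the key you want to expose
  ssh -R /home/userA/tmp/fwd-agent.socket:$SSH_AUTH_SOCK userA@hostA
'
# then, inside that session on hostA:
export SSH_AUTH_SOCK=$HOME/tmp/fwd-agent.socket
Note that authentication to hostA itself also goes through this secondary agent in this sketch.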
I have found another possible solution, using socat and standard SSH TCP forwarding.
Idea
On the local host we run a secondary ssh-agent with only the keys we want to see on the remote host.
On the local host we set up forwarding of TCP connections on some port (portXXX) to the secondary ssh-agent's socket.
On the remote host we set up forwarding from some socket to some TCP port (portYYY).
Then we establish an ssh connection with port forwarding from the remote's portYYY to the local portXXX.
Requests to ssh agent go like this:
local ssh-agent (secondary)
^
|
v
/tmp/ssh-.../agent.ZZZZZ - agent's socket
^
| (socat local)
v
localhost:portXXX
^
| (ssh port forwarding)
v
remote's localhost:portYYY
^
| (socat remote)
v
$HOME/tmp/ssh-agent.socket
^
| (requests for auth via agent)
v
SSH_AUTH_SOCK=$HOME/tmp/ssh-agent.socket
^
| (uses SSH_AUTH_SOCK variable to find agent socket)
v
ssh
Drawbacks
It is not completely secure, because the ssh-agent becomes partially available over TCP: users of the remote host can connect to your local agent on 127.0.0.1:portYYY, and other users of your local host can connect on 127.0.0.1:portXXX. But they will only see the limited set of keys you manually added to this agent. And, as AllenLuce mentioned, they can't grab the keys; they can only use them for authentication while the agent is running.
socat must be installed on the remote host. But it looks like it is possible to simply upload a precompiled binary (I tested it on FreeBSD and it works).
No automation: keys must be added manually via ssh-add, the forwarding requires 2 extra processes (socat) to be run, and multiple ssh connections must be managed manually.
So, this answer is probably just a proof of concept and not a production solution.
Let's see how it can be done.
Instruction
Client side (where ssh-agent is running)
Run a new ssh-agent. It will be used only for the keys you want to see on the remote host.
$ ssh-agent # below is ssh-agent output, DO NOT ACTUALLY RUN THESE COMMANDS BELOW
SSH_AUTH_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982; export SSH_AUTH_SOCK;
SSH_AGENT_PID=22983; export SSH_AGENT_PID;
It prints some variables. Do not set them: you will lose your main ssh agent. Set another variable with the suggested value of SSH_AUTH_SOCK:
SSH_AUTH_SECONDARY_SOCK=/tmp/ssh-qVnT0UsgV6yO/agent.22982
Then establish forwarding from some TCP port to our ssh-agent socket locally:
PORT=9898
socat TCP4-LISTEN:$PORT,bind=127.0.0.1,fork UNIX-CONNECT:$SSH_AUTH_SECONDARY_SOCK &
socat will run in the background. Do not forget to kill it when you're done.
Add some keys using ssh-add, but run it with the modified environment variable SSH_AUTH_SOCK:
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add
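Optionally, double-check which keys the secondary agent now holds; it should list only the keys you just added, not those of your main agent:
SSH_AUTH_SOCK=$SSH_AUTH_SECONDARY_SOCK ssh-add -l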
Server side (remote host)
Connect to the remote host with port forwarding. Your main (not secondary) ssh agent will be used for auth on hostA (but it will not be available from hostA, as we do not forward it).
home-host$ PORT=9898 # same port as above
home-host$ ssh -R $PORT:localhost:$PORT userA@hostA
On the remote host, establish forwarding from an ssh-agent socket to the same TCP port as on your home host:
remote-host$ PORT=9898 # same port as on home host
remote-host$ mkdir -p $HOME/tmp
remote-host$ SOCKET=$HOME/tmp/ssh-agent.socket
remote-host$ socat UNIX-LISTEN:$SOCKET,fork TCP4:localhost:$PORT &
socat will run in the background. Do not forget to kill it when you're done. It does not exit automatically when you close the ssh connection.
Connection
On the remote host, set the environment variable so that ssh knows where the agent socket (from the previous step) is. This can be done in the same ssh session or in a parallel one.
remote-host$ export SSH_AUTH_SOCK=$HOME/tmp/ssh-agent.socket
Now it is possible to use secondary agent's keys on remote host:
remote-host$ ssh userB@hostB # uses secondary ssh agent
Welcome to hostB!
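As an optional check, ssh-add -l on the remote host should list only the keys you added to the secondary agent:
remote-host$ ssh-add -l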
The keys themselves are not shared by forwarding your agent. What's forwarded is the ability to contact the ssh-agent on your local host. Remote systems send challenge requests through the forwarding tunnel. They do not request the keys themselves.
See http://www.unixwiz.net/techtips/ssh-agent-forwarding.html#fwd for a graphical explanation.

SSHD accept connections only from its own tunnel

I have a machine behind a firewall which is accessible from my server via a reverse SSH tunnel.
I would now like to prevent everyone from logging in to the machine IF they came from the ssh tunnel connection. Is this possible?
You can use the Match directive in sshd_config to not provide a shell for connections coming from a specific IP address:
Match Address 10.1.1.1/32
ForceCommand /bin/false
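One practical note: Match blocks apply until the next Match keyword, so this snippet usually goes at the end of /etc/ssh/sshd_config, and it only takes effect after sshd is reloaded, e.g.:
systemctl reload sshd
(the exact reload command depends on your distribution).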

NRPE remote host setup on amazon ec2

I have been trying to monitor a remote server using Nagios NRPE.
The remote host is an Amazon EC2 instance where I have installed the NRPE daemon under xinetd.
I have added my Nagios server's IP to the only_from property in /etc/xinetd.d/nrpe.
I have added the entry in /etc/services.
I have made changes in iptables as well.
I have added an entry for TCP port 5666 in my security group too.
These commands work properly:
$ netstat -at | grep nrpe
$ /usr/local/nagios/libexec/check_nrpe -H localhost
I have set up the Nagios server and the check_nrpe plugin on my local machine.
But whenever I run:
/usr/local/nagios/libexec/check_nrpe -H <"amazon-ec2-IP-address">
I get the following error:
connect to address <"amazon-ec2-IP-address"> port 5666: Connection refused ......
connect to host <"amazon-ec2-IP-address"> port 5666: Connection refused
I have tried making another Linux machine on my LAN the NRPE client and the command worked, but not for the Amazon EC2 instance.
If anyone has the solution for this issue, please do share ASAP.
Make sure you have opened up port 5666 in the Security Group linked to the EC2 instance.
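If you prefer doing that from the AWS CLI, here is a sketch (the security-group ID and the Nagios server address are placeholders to replace with your own):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 5666 \
    --cidr 203.0.113.10/32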
