In a controlled environment I used evilgrade, a payload I had created, ettercap, and netcat to perform a MITM attack and get my target device to install my payload as an "update", and it worked. After getting the target device to do this successfully, I began writing a Python script to automate the process rather than memorize the commands, and now it does not work.
As stated, everything worked flawlessly the first time through, but since that first successful connection to the target device I have not been able to replicate it. I have tried uninstalling/purging all of the applications (except ettercap) and doing a fresh install, as well as returning all config files such as etter.dns and etter.conf to their defaults, before trying to replicate.
The evilgrade CLI shows this:
start
evilgrade>
[3/5/2017:9:40:33] - [WEBSERVER] - Webserver ready. Waiting for connections ...
evilgrade>
[3/5/2017:9:40:33] - [DNSSERVER] - DNS Server Ready. Waiting for Connections ...
evilgrade>
The netcat CLI shows this:
root@oxYMmCIZ:~# nc -l -p 444 -v
Listening on [0.0.0.0] (family 0, port 444)
The first time netcat ran I saw what looked like encrypted data going across the CLI (which I assume meant traffic was indeed coming through), but I did not use -v that time, so I did not see anything except those squares.
This is my first question here, and I feel like I should state that I am self-taught, so if I misuse any terms feel free to let me know. I know how to make stuff work, not how to explain it in textbook terms :)
I have a VPS running Ubuntu 20.04 that I'm trying to set up as an SSH server.
On my first try I got overrun by Chinese bots. I deleted everything and started from scratch.
I installed and set up fail2ban; it is currently at about 2000 banned IPs.
I removed root login and set up a new random username with a 12-character password.
But sometimes when I run netstat -tpna I still get results like this:
These are again Chinese IP addresses. These two disappeared after a couple of seconds.
Is there something I'm missing here? Are these two really connected to my server? How?
Or is it just that I don't really understand how netstat works?
I am indeed planning to remove password login and use just SSH keys, but only after I finish setting up the VPS.
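(For reference, the key-only setup I'm planning boils down to something like the following in /etc/ssh/sshd_config, assuming the default Ubuntu paths; I have not applied it yet.)
# /etc/ssh/sshd_config (relevant lines only)
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes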
Thank you for your help.
On my first try I got overrun by Chinese bots
Regardless of the IDS (fail2ban etc.), it is always advisable to change the port sshd listens on to something else (if you are not necessarily bound to port 22 for some reason).
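For example, a minimal sketch (assuming the stock /etc/ssh/sshd_config location and a systemd-based system; the port number is just an example):
# /etc/ssh/sshd_config
Port 2222
# validate the config and restart the daemon (the unit may be named sshd on other distributions):
sshd -t && systemctl restart ssh
# if you change the port, also set port = 2222 in fail2ban's [sshd] jail and open it in your firewall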
I still get results like this ... ESTABLISHED ...
This can basically have two major reasons:
These IPs are not (yet) banned, due to some missing rule or a not fully covered "attack" vector. Did you check whether the IPs were really banned? (A quick way to check is sketched after the second point below.) See also the first answer in the wiki :: FAQ.
If there is no finding for these IPs at all in fail2ban.log ([sshd] Found 192.0.2.1), you may also try to set mode = aggressive for the jail. Also check whether your fail2ban version or your sshd filter is too old; e.g. here is the current filter for the latest v0.10. There is no guarantee that this filter would work with your fail2ban version, even if you also have v0.10 but a different release (for example the latest 0.10.6 vs. 0.10.2 on your side), so be careful about updating only the filter.
If they really are banned by fail2ban, it may depend on its banning action (and on the configuration of the net-filter and network subsystem). Sometimes only new packets are rejected or dropped, while already established connections are not affected. This does not imply that the intruder is able to continue the attack.
However, if the IP is banned (so [sshd] Ban 192.0.2.1 is there) but you keep seeing Already banned for this address in fail2ban.log later (and you see new attempts or even connections from this IP), then your banning action may not be working properly (some errors in the log after every ban, the wrong port being protected, net-filter white-listing rules in place, etc.).
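A quick way to check all of this (just a sketch; replace the example IP, and note that the chain name depends on your banaction):
# is the address currently in the jail's ban list?
fail2ban-client status sshd
# what did fail2ban log for this address (Found / Ban / Already banned)?
grep '192.0.2.1' /var/log/fail2ban.log
# and is there really a net-filter rule for it? (f2b-sshd is the default chain for the iptables actions)
iptables -L f2b-sshd -n | grep '192.0.2.1'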
Are these 2 really connected to my server? How?
The connection may remain established for some time (even if the IP got banned), if it is not dropped (killed) and neither side closes it.
As already said, it depends.
You can surely add some custom second action that kills all the connections from the IP (e.g. using tcpkill, killcx, ss or whatever), but it is not needed if you don't see any communication from its side (all packets are rejected, so no new attempts in the log and no new connections from this IP later, etc.).
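For example (sketch only; ss -K needs a kernel with CONFIG_INET_DIAG_DESTROY enabled, and the interface name for tcpkill is just an assumption):
# kill all established connections from the banned address:
ss -K dst 192.0.2.1
# or with dsniff's tcpkill:
tcpkill -i eth0 host 192.0.2.1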
Anyway, if they disappeared after a couple of seconds as you said (I guess first going into the TIME_WAIT state), then it is a good sign. But better to check it with something like:
tail -f /var/log/fail2ban.log /var/log/auth.log | grep sshd
# or for journal:
journalctl -f | grep sshd
so you can see what happens before and after the ban of some address (for example whether it remains silent afterwards).
I've been using the following format for sending AT commands from the Application processor (AP) to the modem subsystem:
cat /dev/smd7 &
echo -e "ati\r" > /dev/smd7
It has always worked, but I never realized where the /dev/smd7 port comes from. After looking it up online, it seems to refer to the port used by the Radio Interface Layer (RIL), but how does one confirm that?
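The only checks I could think of are the following (a sketch, assuming lsof/fuser and dmesg are available on the AP's Linux image; I would expect a RIL daemon such as rild to hold the port, but that is an assumption):
# which process currently has the port open?
lsof /dev/smd7
fuser -v /dev/smd7
# which SMD character devices did the kernel register?
ls -l /dev/smd*
dmesg | grep -i smd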
I have lately started getting timeout issues and I'm starting to wonder if the port may be incorrect or might have changed:
read error: Network dropped connection on reset
Note that the AP is running Linux.
Edit:
Looks like the modem keeps rebooting, hence the read error. But it seems to happen after running the two commands above.
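To check whether the reboot really coincides with the echo, I'm watching the kernel log while resending the command (a sketch; the exact subsystem-restart messages depend on the SoC/kernel, so the grep pattern is a guess):
# follow the kernel log (use cat /proc/kmsg if dmesg -w is not available)
dmesg -w | grep -iE 'smd|modem|subsys' &
# then resend the AT command and watch for modem restart messages
echo -e "ati\r" > /dev/smd7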
I have a CentOS 7 server running several Node scripts at specific times with crontab.
The scripts are supposed to make a few web requests before exiting, which works fine at all times on my local machine (running Mac OS X).
However, on the server the Node script sometimes seems to stall around a web request and nothing more happens, leaving the process hanging and taking up memory on the server. Since the script works on my machine, I'm guessing there is some issue on the server. I looked at netstat -tnp and found that the stalled PIDs have left connections open in the ESTABLISHED state, without sending or receiving any data. The connections are left like this:
tcp 0 0 x.x.x.x:39448 x.x.x.x:443 ESTABLISHED 17143/node
It happens on different ports, with different PIDs, in different scripts and to different IP addresses.
My guess is that the script stalls because Node is waiting for some I/O operation (the request) to finish, but I can't find any reason why this would happen. Has anyone else had issues with Node leaving connections open at random?
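For what it's worth, this is roughly how I have been inspecting a stalled process (a sketch; PID 17143 is just the one from the netstat output above, and strace has to be installed):
# show established HTTPS connections together with their TCP timers and owning process
ss -tnpo state established '( dport = :443 )'
# attach to the stalled process to see which syscall it is blocked in
strace -p 17143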
This problem was apparently not related to any OS or Node setting. Our server provider had made a change to their network, which caused massive packet loss between the router and server. They reverted the change for us and now it's working again.
TL;DR: Why does the ssh client for OpenSSH 6.x send the string "OpenSSH_6.2p2" immediately when connecting, and OpenSSH 5.x client does not send anything?
I am trying to get an ssh tunnel working via an HTTP/S proxy. I can get a TCP connection which is properly tunneled, using an http CONNECT request. It works correctly with the SSH client on my Mac OSX 10.9, but does not work with an older Mac running an older OSX.
This led to the following oddity, which I am at a loss to explain. (This may be a foolish question to someone familiar with the SSH protocol, but after searching for a bit I cannot find a simple explanation of what the protocol is supposed to look like, and I am hoping not to have to read the entire RFC in order to debug this; thus this post.)
On Mac OSX 10.9 with OpenSSH_6.2p2:
Terminal 1:
nc -l 127.0.0.1 5000
Terminal 2:
ssh test@127.0.0.1 -p 5000
Terminal 1 then outputs:
OpenSSH_6.2p2
So this newer client transmitted that string upon connection.
On CentOS 6.3 with OpenSSH_5.3p1:
Terminal 1 and 2 commands exactly the same as above.
But terminal 1 does not output anything. Looks like this older client didn't send anything upon connection.
The TCP connection itself is working correctly from everything I can tell. It seems to be a protocol difference. But these are both apparently using the SSH "version 2" protocol.
These two machines seem to be able to SSH to each other without trouble. However, there is something odd happening with my tunnel, and I'm trying to understand what the protocol is looking for so I can debug it.
Does anyone understand what is going on here? Or perhaps know where there is a simple 1,2,3 type explanation of which side sends what for this protocol and any info on version differences?
It's covered here: https://www.rfc-editor.org/rfc/rfc4253#section-4.2
Both the client and the server are supposed to send their version strings upon connection. However, it appears that in the earlier versions the client waits for the server before sending its string, which seems like an implementation detail that is technically a bug (presumably fixed in 6.x) but doesn't normally create a problem in practice.
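You can see this by playing the server by hand and sending the older client an identification string first (a sketch: the banner text is made up, nc flags differ between variants, and the connection will of course fail once key exchange starts):
# terminal 1: send a fake server banner as soon as the client connects
printf 'SSH-2.0-FakeServer\r\n' | nc -l 127.0.0.1 5000
# terminal 2: the 5.x client should now answer with its own SSH-2.0-... line,
# which shows up in terminal 1
ssh -v test@127.0.0.1 -p 5000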
If anyone is interested, here is what I was trying to solve: https://github.com/bradleypeabody/proxyman/blob/master/README.md
I am interested in finding out when things SSH into my boxen to create a reverse tunnel. Currently I'm using a big hack - just lsof with a few lines of script. So my goal is to see when a socket calls bind() and, ideally, get the port it binds to (it's listening locally since it's a reverse tunnel) and the remote host that I would be connecting to. My lsof hack is basically fine, except I don't get instant notifications and it's rather... hacky :)
This is easy for files; once a file does just about anything, inotify can tell me in Linux. Of course, other OSs have a similar capability.
I'm considering simply tailing the SSHD logs and parsing the output, but my little "tunnel monitor" daemon needs to be able to figure out the state of the tunnels at any point in time, even if it hasn't been running the whole time SSHD has.
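For the "state at any point in time" part, the closest I have to something non-hacky is just asking the kernel which sockets sshd currently owns (a sketch; needs iproute2's ss and root to see the process names):
# listening sockets created by reverse tunnels (-R)
ss -tlnp | grep '"sshd"'
# established connections, to see the remote end of each tunnel
ss -tnp | grep '"sshd"'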
I have a pretty evil hack I've been considering as well. It's a script that invokes GDB on /usr/sbin/sshd and then sets a breakpoint on bind. Then it runs it with the options -d -p <listening port> (running a separate SSHD for these tunnels is fine). Then it waits for that breakpoint to get hit and uses GDB's output to get the remote host's IP address and the local IP on which SSH is now listening. Again, that's text parsing and opens some other issues.
Is there a "good" way to do this?
I would use SystemTap for a problem like this. You can use it to probe the kernel to see when a bind is done by any process on the system. http://sourceware.org/systemtap/
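A minimal sketch of the kind of probe I mean (assumes SystemTap plus matching kernel debuginfo is installed; the filter and output format are just an example):
stap -e 'probe syscall.bind {
  if (execname() == "sshd")
    printf("%s pid=%d %s\n", execname(), pid(), argstr)
}'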