Running CentOS 6.7 on several VMs. On Friday I ran a test that sent data from machine 1 to machine 2, and the test was successful. Today (no changes on either end that anybody will admit to) I can't repeat the test: the Python script that sends the data gives me a traceback that says 'no route to host.' I can ping both ways (1 to 2, 2 to 1) and ssh both ways. nmap from 1 to 2 doesn't show the port I sent to on Friday, but nmap on 2 against 127.0.0.1 shows the port is open. Of the 12 machines on this network, none can send data to this machine on this port, where all could before. No changes in the sending script, no changes in the VM network (that I am aware of); as I said, theoretically no changes anywhere, but obviously something changed. If it matters, machine 2 is running Splunk 6.3.1 with a modular input that receives the data over port 2008, and it worked great until today. I have disabled and re-enabled the modular input and restarted Splunk.
Not really sure where to look next.
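Worth noting: 'no route to host' while ping and ssh both work is the classic signature of an iptables REJECT rule (a stock CentOS 6 firewall ends with a REJECT that answers icmp-host-prohibited), or of the listener being bound to 127.0.0.1 only, which would also match nmap seeing the port from localhost but not remotely. A quick sketch of both checks on machine 2, assuming port 2008 as above:
# Is the listener bound to 0.0.0.0 (all interfaces) or only 127.0.0.1?
netstat -tlnp | grep 2008
# Is there a REJECT rule ahead of any ACCEPT for port 2008?
iptables -L INPUT -n -v --line-numbers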
I have a VPS running Ubuntu 20.04 that I'm trying to set up as an SSH server.
On my first try I got overrun by Chinese bots. I deleted everything and started from scratch.
I installed and set up fail2ban; currently at about 2,000 banned IPs.
I disabled root login and set up a new random username with a 12-character password.
But sometimes when I run netstat -tpna I still get results like this:
These are again Chinese IP addresses. These two disappeared after a couple of seconds.
Is there something I'm missing here? Are these two really connected to my server? How?
Or is it just that I don't really understand how netstat works?
I am indeed planning to disable password login and use only SSH keys, but after I finish setting up the VPS.
Thank you for your help.
On my first try I got overrun by Chinese bots
Regardless of any IDS (fail2ban etc.), it is always advisable to change the port sshd listens on to something else (unless you are bound to port 22 for some reason).
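For example, a minimal sketch of the change (2222 is just an example; any free high port works):
# /etc/ssh/sshd_config
Port 2222
After restarting sshd, remember to set port = 2222 in the [sshd] jail too, otherwise fail2ban keeps banning the default SSH port.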
I still get results like this ... ESTABLISHED ...
This can basically have two major causes:
These IPs are not (yet) banned, due to some missing rule or a not fully covered "attack" vector. Did you check that the IPs were really banned? See also the first answer in the wiki :: FAQ.
If there are no findings for these IPs at all in fail2ban.log (no [sshd] Found 192.0.2.1 lines), you may also try setting mode = aggressive for the jail (see the snippet after this list). Also check whether your fail2ban version or your sshd filter is too old; e.g. here is the current filter for the latest v0.10. There is no guarantee that this filter will work with your fail2ban version, even if you also have v0.10 but a different release (for example the latest 0.10.6 vs. 0.10.2 on your side), so be careful about updating the filter alone.
If they are really banned by fail2ban, it may depend on the banning action (and the config of the net-filter and network subsystem). Sometimes only new packets are rejected or dropped, while already established connections are not affected. This does not imply that the intruder is able to continue the attack.
However, if the IP is banned (so [sshd] Ban 192.0.2.1 is there) but you keep seeing Already banned for this address in fail2ban.log afterwards (and you see new attempts or even connections from this IP), then your banning action may not be working properly (errors in the log after every ban, the wrong port being protected, net-filter whitelisting rules in place, etc.).
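As for the snippet referenced above, a minimal sketch for /etc/fail2ban/jail.local (assuming fail2ban v0.10+, where the sshd filter has a mode parameter):
[sshd]
enabled = true
mode = aggressive
Reload it afterwards with fail2ban-client reload sshd.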
Are these two really connected to my server? How?
The connection may remain established for some time (even if the IP got banned), if it is not dropped (killed) and neither side closes it.
As already said, it depends.
You can surely add a custom second action that kills all connections from the IP (e.g. using tcpkill, killcx, ss or whatever), but it is not needed if you don't see any communication from its side (all packets are rejected, so no new attempts in the log and no new connections from this IP later, etc.).
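For illustration, with a recent iproute2 and a kernel with SOCK_DESTROY support (roughly 4.9+), established connections from a banned address could be killed like this (192.0.2.1 standing in for the banned IP):
ss -K dst 192.0.2.1
tcpkill or killcx would do the same job on older systems.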
Anyway, if they disappeared after a couple of seconds as you said (I guess first going into the TIME_WAIT state), then it is a good sign. But better to check it with something like:
tail -f /var/log/fail2ban.log /var/log/auth.log | grep sshd
# or for journal:
journalctl -f | grep sshd
so you can see what happens before and after the ban of some address (for example, whether it remains silent afterwards).
Environment:
Windows 10 > Linux (Ubuntu Server) via LAN
PhpStorm
Followed https://xdebug.org/wizard.php
and https://www.jetbrains.com/help/phpstorm/zero-configuration-debugging.html
php.ini (/etc/php/7.2/fpm/php.ini, as I'm using Nginx) has:
zend_extension = /usr/lib/php/20170718/xdebug.so
xdebug.remote_enable = On
xdebug.remote.connect_back = 1
;xdebug.remote_host=192.168.56.1 ; commented out, copied from another PC with VBox (Xdebug working there), but left for reference
xdebug.remote_port=24680 ; port 9000 is usually occupied by FPM, so a port change is recommended
xdebug.remote_autostart=1
xdebug.idekey=PHPSTORM
xdebug.remote_log="/tmp/xdebug.log"
* EDIT: using xdebug.remote_host=192.168.0.201 works, but I want to debug from multiple network locations, preferably also over WAN *
I have set up bookmarklets as per the PhpStorm link and clicked the bookmarklet:
javascript:(/** @version 0.5.2 */function() {document.cookie='XDEBUG_SESSION='+'PHPSTORM'+';path=/;';})()
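For what it's worth, the same session cookie can be sent from a shell, which is handy for testing without a browser (the URL is a placeholder):
curl -b XDEBUG_SESSION=PHPSTORM http://your-server/index.php
Though with xdebug.remote_autostart=1 as above, every request should attempt a debug session even without the cookie.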
In the IDE itself the View > Debug window is greyed out, but "allow incoming connections" is all green. Languages & Frameworks > PHP > Debug is set to stop at the first line and has the same port number, 24680.
In setups on other systems I have at least had it flagged that path mappings need attention, but here I simply cannot get to any debug view.
tail -f /tmp/xdebug.log
gives:
Log opened at 2018-08-24 21:52:05
I: Connecting to configured address/port: localhost:24680.
W: Creating socket for 'localhost:24680', poll success, but error: Operation now in progress (29).
W: Creating socket for 'localhost:24680', poll success, but error: Operation now in progress (29).
E: Could not connect to client. :-(
Log closed at 2018-08-24 21:52:05
Showing response.
Obviously something is missing in connecting back to PhpStorm on the Windows client.
Tested with the Windows firewall off.
I will also need to connect remotely to this server via port forwarding at some point; however, all this initial setup is on the LAN.
When I mention setups on other systems, they are physically separate (i.e. a MacBook talking to its own VBox). This setup is a Windows machine talking to a real Linux server on the same LAN. SSH is not used here.
php.ini is /etc/php/7.2/fpm/php.ini (as I'm using Nginx).
Anyone got any idea?
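For what it's worth, the log's "Connecting to configured address/port: localhost" line says the connect-back setting never took effect (in Xdebug 2.x the directive is spelled xdebug.remote_connect_back, with underscores), so that is worth double-checking. Independently, a quick reachability test from the server towards the IDE machine rules out network issues (assuming the Windows box is 192.168.0.201, as in the edit above):
nc -vz 192.168.0.201 24680
For the later WAN case, a common approach is an SSH remote tunnel opened from the IDE machine (ssh -R 24680:localhost:24680 user@server), so the server-side Xdebug can always connect back to localhost:24680.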
So, in a controlled environment, I used evilgrade, a payload I had created, ettercap, and then netcat to perform a MITM attack and get my target device to install my payload as an "update", and it worked. I got my target device to do this successfully, so I began writing a Python script to automate the process rather than memorizing the commands, and now it does not work. As stated before, everything worked flawlessly the first time through, but since I first established a connection to the target device I cannot replicate it. I have tried uninstalling/purging all of the applications (except ettercap) and doing a fresh install, as well as returning all config files like etter.dns and etter.conf to their defaults, before trying to replicate.
The evilgrade CLI shows this:
start
evilgrade>
[3/5/2017:9:40:33] - [WEBSERVER] - Webserver ready. Waiting for connections ...
evilgrade>
[3/5/2017:9:40:33] - [DNSSERVER] - DNS Server Ready. Waiting for Connections ...
evilgrade>
The netcat CLI shows this:
root@oxYMmCIZ:~# nc -l -p 444 -v
Listening on [0.0.0.0] (family 0, port 444)
The first time netcat ran I saw what looked like something encrypted going across the CLI (I assume this showed that traffic was indeed coming through), but I did not use -v that time, so I did not see anything but the squares.
This is my first question here, and I feel like I should state that I am self-taught, so if I misuse any terms feel free to let me know. I know how to make stuff work, not explain it by the book's terms :)
I have a CentOS 7 server running several Node scripts at specific times with crontab.
The scripts are supposed to make a few web requests before exiting, which works fine at all times on my local machine (running Mac OS X).
However, on the server it sometimes seems like the Node script stalls on a web request and nothing more happens, leaving the process running and taking up memory on the server. Since the script works on my machine, I'm guessing there is some issue on the server. I looked at netstat -tnp and found that the stalled PIDs have left connections open in the ESTABLISHED state, without sending or receiving any data. The connections are left like this:
tcp 0 0 x.x.x.x:39448 x.x.x.x:443 ESTABLISHED 17143/node
It happens on different ports, different PIDs, different scripts, and to different IP addresses.
My guess is that the script stalls because Node is waiting for some I/O operation (the request) to finish, but I can't find any reason why this would happen. Has anyone else had issues with Node leaving connections open at random?
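As a stopgap while hunting the cause, coreutils timeout can cap each cron job so a hung request cannot hold the process (and its memory) forever. A sketch, with the schedule, script path, and 10-minute limit all placeholders:
*/15 * * * * timeout -k 30 600 /usr/bin/node /path/to/script.js
timeout sends SIGTERM after 600 seconds; -k 30 follows up with SIGKILL if the process ignores it.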
This problem was apparently not related to any OS or Node setting. Our server provider had made a change to their network, which caused massive packet loss between the router and server. They reverted the change for us and now it's working again.
I have set up a CentOS 6.4 server (minimal install) which is connected to the network through an Ethernet cable. The problem is that when the network link goes down, the status change is not automatically detected; if I type "ifconfig", the interface still keeps its IP address (which is assigned by a DHCP server). After the link has been down for some time the interface loses the address, but when the link comes up again the network connection is not automatically restored like it would be on a desktop computer. Even the command "dhclient eth0" does not always restore things, and I have to restart the whole network service with "/etc/init.d/network restart".
Is there any way to automatically detect network status changes, as happens in desktop installations? I'm thinking about a cron script that pings a server outside my network every 5 minutes and restarts the network service if it doesn't get any response, but this does not sound very efficient... is there another way?
EDIT: I realized I have not explained the situation correctly. My network topology is: server --> switch --> router --> external network (the router is another CentOS server with DHCPD).
For some reason (that I don't get), when the router goes down and reboots, the other server becomes unreachable, and I have to manually restart the network service on it. So the link does not actually go down (the switch keeps it up); the status change is at the IP level.
You can check whether you have NetworkManager enabled. I usually don't use it on servers, but it can help you in this case because it will automatically monitor the connections (it is quite common in desktop installations).
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/ch-NetworkManager.html
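On CentOS 6 that boils down to something like the following sketch (package and service names per the Red Hat guide above):
yum install NetworkManager
chkconfig NetworkManager on
service NetworkManager start
and making sure the interface is actually under its control (NM_CONTROLLED=yes in /etc/sysconfig/network-scripts/ifcfg-eth0).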
To look for messages indicating an issue with the NIC just check the kernel ring buffer using the dmesg command.
For example when the cable is disconnected from a given interface this is what I get:
igb: eth1 NIC Link is Down
The first word will depend on the name of your network driver.
You could also configure the system to log these messages to /var/log/messages (I am not sure whether they appear there by default). Then it would just be a matter of monitoring the log, looking for similar messages, and restarting the network service.
In any case NetworkManager, if it is not already enabled, should be the easier solution.
The bonding driver has an option called miimon for monitoring the network interface's link status. ethtool will give you the link status directly:
$ ethtool eth0
...
Link detected: yes
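Building on that, the link status lends itself to a small cron-run watchdog along the lines the question already suggests (a sketch; the interface name and restart command are assumptions for CentOS 6):
#!/bin/bash
# Restart networking when eth0 reports no carrier.
if ethtool eth0 | grep -q 'Link detected: no'; then
    /etc/init.d/network restart
fi
Note this only catches a real carrier loss; for the IP-level failure described in the edit, a ping test against the router (restarting the network service on failure) is the closer match.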