IP Tables script for reading logs - linux

Need some help designing a bash script for grepping IP addresses that look dodgy out of auth.log and apache.log so I can automatically add them to iptables.
Thinking of grepping both of these logs, but I need a way to decide which addresses are dodgy.
At the moment I have an iptables rule in place for SSH that blocks incoming connections, but I need to block all these requests for w00t, phpadmin, etc.
Cheers

If for some reason you don't want to use an already-made tool for such a task, like fail2ban, you can use the regexes provided in that tool as an excellent starting point.
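Building on that, here is a minimal sketch of the grep-and-block approach. The log locations (`/var/log/auth.log`, `/var/log/apache2/access.log`) are Debian/Ubuntu-style assumptions, the patterns are only examples, and the script must run as root to call iptables:

```shell
#!/bin/sh
# Sketch: extract suspicious IPs from assumed log locations and block them.
# Adjust paths and patterns for your own system before using.

AUTH_LOG=/var/log/auth.log
APACHE_LOG=/var/log/apache2/access.log

# Failed SSH password attempts: lines like
#   "Failed password for invalid user foo from 1.2.3.4 port 22 ssh2"
grep 'Failed password' "$AUTH_LOG" \
    | grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' \
    | awk '{print $2}' > /tmp/dodgy.txt

# Apache requests probing for common exploit paths (w00t, phpMyAdmin, ...);
# in combined log format the client IP is the first field.
grep -iE 'w00t|phpmyadmin' "$APACHE_LOG" \
    | awk '{print $1}' >> /tmp/dodgy.txt

# De-duplicate and block each address.
sort -u /tmp/dodgy.txt | while read -r ip; do
    iptables -A INPUT -s "$ip" -j DROP
done
```

Whitelisting your own addresses before the loop is strongly advisable, or one bad pattern match can lock you out.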

IPTables rules being applied multiple times at startup

Specifically talking about an Ubuntu 10.04 LTS server (Lucid Lynx), although it's probably applicable to other Linux distributions.
I was trawling through the logs for a few websites, doing some spring cleaning so to speak, and noticed a few IP addresses that have been acting dodgy, so I wanted to add them to the blacklist.
Basically I got playing around with iptables; the blacklist of IPs is just a text file. I then created a shell script to loop through the text file and block each IP address in iptables.
This worked fine when the shell script was run manually, but obviously I wanted it to run automatically at start-up, for whenever the server may be rebooted. So I included the shell script in
Code:
/etc/network/if-pre-up.d/iptables
So it now looks like
Code:
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.up.rules
sh /etc/addBlacklist.sh
So I rebooted the server and the blacklist rules were applied, but it seems like they were applied multiple times, as in duplicate lines appearing when iptables -L is run.
Just wondering if anyone would know the reason for this?
I suppose it doesn't really matter in the grand scheme of things but I'm curious.
Never did find out why they were being applied multiple times, but I just removed the separate blacklist file and amalgamated it into the iptables.up.rules file.
Not as pretty but stops the duplication.
Just add iptables -F at the start of the script so that when the script starts, it automatically flushes the old entries and then blocks the IPs again.
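The duplication can also be avoided without merging the files or flushing everything: put the blacklist in its own iptables chain and flush only that chain on each run. A sketch, where the chain name and the /etc/blacklist.txt path (one IP per line) are assumptions:

```shell
#!/bin/sh
# Sketch: idempotent blacklist loader using a dedicated chain, so re-running
# the script at boot (or any number of times) never produces duplicate rules.

BLACKLIST=/etc/blacklist.txt
CHAIN=BLACKLIST

# Create the chain if it doesn't exist yet, then flush only that chain.
iptables -N "$CHAIN" 2>/dev/null || true
iptables -F "$CHAIN"

# Hook the chain into INPUT exactly once (-C checks for an existing rule).
iptables -C INPUT -j "$CHAIN" 2>/dev/null || iptables -I INPUT -j "$CHAIN"

# Add one DROP rule per non-empty line of the blacklist file.
while read -r ip; do
    [ -n "$ip" ] && iptables -A "$CHAIN" -s "$ip" -j DROP
done < "$BLACKLIST"
```

This way iptables-restore and the blacklist stay independent, and iptables -F is never run against the whole ruleset.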

IPTables rules to Deny ALL traffic for an IP range with CRON

I currently have a ClearOS gateway server in transparent proxy mode. It has a lot of great tools, but I have some issues preventing me from forcing the use of the web proxy directly, which would give me control over scheduling using ACLs....
Long story short: I need to find a way to block ALL traffic for specific IP addresses and IP ranges. I have looked into this and I need to do it through the firewall (iptables). Ideally, I would like to set up a cron job to swap out the iptables rules to start and stop all traffic for the specified IP ranges/addresses.
I saw someone who talked about doing this in a forum, but they gave no details. They suggested having multiple iptables config files for the different conditions and then using cron to swap them. I am not sure whether a cron job that uses an iptables command to add/remove a rule or set of rules is possible or preferable. In any case, I am a novice at this specifically, but I am savvy enough to get in and get my hands dirty....
If you have information on how to do this I thank you in advance. I am not looking for alternative methods at this point as I have looked at just about all of them. Going with the firewall and cron is my goal.
Thanks!
Summing this up for those who may come across this issue.... I was able to solve it with some help from the ClearOS forum and a couple of really helpful guys... Here is the link so I don't have to replicate the entire thread here:
http://www.clearfoundation.com/component/option,com_kunena/Itemid,232/catid,7/func,view/id,46918/
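For anyone who lands here without reading the linked thread, the cron-plus-iptables idea can be sketched roughly as follows. The range, script paths, and schedule are placeholders (not taken from the linked thread), and using -C keeps repeated cron runs from stacking duplicate rules:

```shell
# Sketch: toggle a DROP rule for an IP range on a schedule via cron.
# 192.168.1.0/24 and the script paths below are assumed examples.

# --- /usr/local/sbin/block-range.sh ---
#!/bin/sh
# -C tests whether the rule already exists; only insert it if it doesn't.
iptables -C FORWARD -s 192.168.1.0/24 -j DROP 2>/dev/null \
    || iptables -I FORWARD -s 192.168.1.0/24 -j DROP

# --- /usr/local/sbin/unblock-range.sh ---
#!/bin/sh
# -D deletes the matching rule; ignore the error if it's already gone.
iptables -D FORWARD -s 192.168.1.0/24 -j DROP 2>/dev/null || true

# --- root crontab: block from 22:00 to 06:00 ---
# 0 22 * * * /usr/local/sbin/block-range.sh
# 0 6  * * * /usr/local/sbin/unblock-range.sh
```

Adding/removing single rules like this is usually simpler and safer than swapping whole config files, since the rest of the ruleset is never touched.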

How to detect ftp connection

I'm using vsftpd and I want to write a shell script that will detect a connection to my server and send me an email saying who logged in and when.
I don't know where to start. Can someone point me in the right direction?
Thanks
Read the log.
http://www.redhat.com/docs/manuals/enterprise/RHEL-3-Manual/ref-guide/s1-ftp-vsftpd-conf.html
Enable the transfer log.
Read the file.
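A minimal sketch of that read-the-log approach, assuming vsftpd is logging to /var/log/vsftpd.log (the path, the "OK LOGIN" pattern, and the availability of a mail command are assumptions to verify against your own setup):

```shell
#!/bin/sh
# Sketch: watch the vsftpd log and mail an alert on each successful login.
# Requires vsftpd logging to be enabled in vsftpd.conf.

LOG=/var/log/vsftpd.log
ADMIN=you@example.com   # assumed recipient address

# tail -F survives log rotation; -n0 skips existing lines.
tail -Fn0 "$LOG" | while read -r line; do
    case "$line" in
        *"OK LOGIN"*)
            # The log line already contains the timestamp, user and client IP.
            printf '%s\n' "$line" | mail -s "FTP login on $(hostname)" "$ADMIN"
            ;;
    esac
done
```

Run it in the background (or from a supervisor) so it keeps watching; each matching line produces one email.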
I'm not familiar with vsftpd, but you could have your shell script look at the output of netstat to see if you've got any connections on port 21 (the default ftp port).
The most reliable way is log analysis. If you use a tool like OSSEC (free and open source), it can run scripts or generate email alerts when logins, logouts, failed logins, etc. happen.
link: http://www.ossec.net
Same applies for "fail2ban", though the purpose of this thing is something else (you guessed it).
J.

How to find connected hosts on a network (VPN or LAN)

I'm looking for possible solutions to the following need:
I have a VPN configured (using OpenVPN on Linux, BTW), and I want to know at any moment which hosts are connected to it. I realize it's probably the same problem as finding which hosts are connected to a LAN, so any of those solutions might do the job...
The fact is that I once used a Hamachi VPN on Linux, and with it I could see which hosts were connected to a particular network I belonged to, so I was wondering if something similar is possible with OpenVPN (or any VPN and/or LAN).
Preferably I'm looking for open-source/free software solutions, or hints to program it myself (in the simplest way possible; not that I don't know how to program, but I'm trying to achieve this in a simple manner). But if there are no open-source/free solutions, any other one might do...
Thanks a lot!
Javier,
Mexico city
An easy way to do this with OpenVPN in linux is to use the client-connect and client-disconnect scripts on the server end to maintain a list for you. The client-connect script can log the $common_name environment variable (and also its $trusted_ip, if you like) each time a client connects, and the client-disconnect script can remove that client from the list.
If you also write both connections and disconnections to a different time-stamped log, you'll have a permanent record of the time and duration of each connection.
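A rough sketch of that pair of scripts. The list location is an assumption, and OpenVPN needs `script-security 2` in the server config before it will run external scripts:

```shell
# In the OpenVPN server config (assumed paths):
#   script-security 2
#   client-connect    /etc/openvpn/on-connect.sh
#   client-disconnect /etc/openvpn/on-disconnect.sh

# --- /etc/openvpn/on-connect.sh ---
#!/bin/sh
# OpenVPN exports $common_name and $trusted_ip in the environment.
LIST=/var/run/openvpn-clients
echo "$common_name $trusted_ip $(date '+%F %T')" >> "$LIST"

# --- /etc/openvpn/on-disconnect.sh ---
#!/bin/sh
# Drop this client's line from the list.
LIST=/var/run/openvpn-clients
grep -v "^$common_name " "$LIST" > "$LIST.tmp" && mv "$LIST.tmp" "$LIST"
```

Also worth knowing: OpenVPN's built-in `status` directive writes a periodically refreshed list of connected clients to a file, which may be enough on its own if you only need a current snapshot.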

Automated Script for testing Network Connectivity in Linux

I have a requirement to test network connectivity to around 30 servers on different ports as part of a new firewall rules implementation. After the rules are in place I need to check whether connectivity is successful, and I need to test the same rules from 3 servers, so I am looking for some way to automate this. Currently I use telnet to test connectivity, but this is too slow. I am open to a shell script or an Ant script. The end result should be a log file listing the server and port to which the connect attempt was made, and the status of the attempt (success/failure).
I believe nmap can do it. It can scan selected/all ports and generate a report.
Ping may help, or even curl? Please describe a scenario that == "It's dead, Jim!", if the script checking should not block.
Nagios can probably do what you want.
http://www.nagios.org/
If you don't mind a Perl solution, Net::Ping is pretty helpful. I use this for testing SSH connectivity to servers in our test environment.
Try fping. Very simple and likely gives you most of what you're looking for. If you block ICMP or want to do something with ssh or telnet, then you should look at nagios as Brian Lindauer answered.
Get a list of hosts that are up:
fping -a -f hostlist.txt
Get a list of hosts that are down:
fping -u -f hostlist.txt
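Since the question asks for per-port checks with a success/failure log, here is a sketch using netcat's zero-I/O mode with a short timeout instead of telnet. The targets.txt format (one "host port" pair per line) and log file name are assumptions, and nc flags vary slightly between netcat variants:

```shell
#!/bin/sh
# Sketch: check a list of "host port" pairs and log the result of each attempt.
# Run the same script from each of the 3 source servers.

TARGETS=targets.txt       # one "host port" pair per line (assumed format)
LOGFILE=connectivity.log

while read -r host port; do
    # -z: just test the connection, send no data; -w 3: 3-second timeout.
    if nc -z -w 3 "$host" "$port" 2>/dev/null; then
        status=success
    else
        status=failure
    fi
    echo "$(date '+%F %T') $host:$port $status" >> "$LOGFILE"
done < "$TARGETS"
```

With a 3-second timeout the whole 30-server run finishes in well under two minutes even if everything is unreachable, which is the main win over interactive telnet.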