Specifically talking about an Ubuntu 10.04 LTS server (Lucid Lynx), although it's probably applicable to other Linux versions.
I was trawling through the logs for a few websites, doing some spring cleaning so to speak, and noticed a few IP addresses that have been acting dodgy, so I wanted to add them to the blacklist.
Basically I got playing around with iptables; the blacklist of IPs is just a text file. I then created a shell script to loop through the text file and block each IP address in iptables.
This worked fine when the shell script was run manually. But obviously I wanted it to run automatically at start-up, for whenever the server may be rebooted. So I called the shell script from
Code:
/etc/network/if-pre-up.d/iptables
So it now looks like
Code:
#!/bin/sh
/sbin/iptables-restore < /etc/iptables.up.rules
sh /etc/addBlacklist.sh
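For reference, the blacklist script itself can be a minimal loop. This is only a sketch: the file paths are assumptions, the blacklist is assumed to be one IP per line, and IPTABLES is overridable (set IPTABLES=echo for a dry run) rather than being the poster's actual script.

```shell
#!/bin/sh
# Sketch of a blacklist script like /etc/addBlacklist.sh: append a DROP
# rule for each IP in the blacklist file (one address per line).
# Paths are placeholders; set IPTABLES=echo to dry-run without
# touching the live firewall.
IPTABLES=${IPTABLES:-/sbin/iptables}
BLACKLIST=${BLACKLIST:-/etc/blacklist.txt}

block_ips() {
    # $1 = blacklist file
    while read -r ip; do
        # skip blank lines and comments
        case "$ip" in ''|\#*) continue ;; esac
        "$IPTABLES" -A INPUT -s "$ip" -j DROP
    done < "$1"
}

if [ -r "$BLACKLIST" ]; then
    block_ips "$BLACKLIST"
fi
```

Because the rules are appended with -A, running this twice without flushing first produces exactly the duplicate entries described below.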
So I rebooted the server and the blacklist rules were applied, but it seems like they were applied multiple times: duplicate lines appear when iptables -L is run.
Just wondering if anyone would know the reason for this?
I suppose it doesn't really matter in the grand scheme of things but I'm curious.
Never did find out why they were being applied multiple times, but I just removed the separate blacklist file and amalgamated it into the iptables.up.rules file.
Not as pretty but stops the duplication.
Just add iptables -F at the start of the script, so when the script runs it flushes the old entries first and then blocks the IPs again. (Scripts in /etc/network/if-pre-up.d are run once for each interface that comes up, which is why the rules were appended more than once.)
Related
So, following advice from all over the Internet, including Tor documentation, I'm trying to force US-only exit nodes by editing the torrc file like so:
StrictNodes 1
ExitNodes {US}
But I'm still getting exit nodes from Western Europe and Australia as well as the US. I'm using the Vidalia bundle, though I'm starting Tor and Polipo from the command line programmatically and executing HttpWebRequests via Polipo. Any thoughts? I really, really need the exit nodes to only be from the US, and I'm really surprised this isn't working. Thanks.
I appear to have fixed the problem by adding this argument when I start Tor from the command line:
-f C:\Users\Frank\AppData\Local\Vidalia\torrc
I'm not sure why Tor wasn't using this config file by default, but now that I'm pointing to it explicitly, it appears to be following the StrictNodes directive. Thanks.
Need some help designing a bash script for grepping IP addresses out of auth.log and apache.log that look dodgy, so I can automatically add them to the IP blacklist.
Thinking of grepping both of these logs, but I need to know which requests count as dodgy.
At the moment I have an iptables rule in place for SSH that blocks incoming connections, but I need to block all these requests for w00t, phpMyAdmin, etc.
Cheers
If for some reason you don't want to use an already-made tool for such a task, like fail2ban, you can use the regexps provided in that tool as an excellent starting point.
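A hand-rolled version can be as simple as the sketch below: it pulls the unique client IPs whose requests match a few common probe strings out of an access log. The log path and patterns here are illustrative assumptions, not a complete ruleset; fail2ban's filters cover far more.

```shell
#!/bin/sh
# Sketch: list unique client IPs whose requests match common probe
# patterns (w00t, phpMyAdmin, /etc/passwd). The patterns and the log
# path are assumptions -- borrow fail2ban's regexps for real coverage.
suspect_ips() {
    # $1 = Apache access log; the first field is the client IP
    grep -iE 'w00t|phpmyadmin|/etc/passwd' "$1" | awk '{print $1}' | sort -u
}

# Example (paths are placeholders):
#   suspect_ips /var/log/apache2/access.log >> /etc/blacklist.txt
```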
In my university there's a certain WLAN network open to students and employees. To use it, however, one must first log in via a website using your own username and password. This can also be done by submitting an HTTP request with the right POST data to the same website. I already have a shell script that does this, but I'm still curious as to whether it would be possible to have this script run automagically every time my computer connects to the university WLAN. Is it possible to do this in some semi-easy way?
I know that NetworkManager (which is used in Ubuntu) exposes a DBUS interface -- I would suspect there is an event for network connected / disconnected which you could use. Try checking the NetworkManager DBUS Interface spec.
If you've never worked with DBUS before, fear not: there are bindings for pretty much every language. I'm sure there's even a CLI client you could invoke from a shell script. This blog entry shows how to detect a new connection from NetworkManager with Python; it might be a good starting point.
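Simpler than talking DBUS directly: NetworkManager also runs executable hooks from /etc/NetworkManager/dispatcher.d/ whenever an interface goes up or down, passing the interface and action as arguments. A sketch, where the SSID and the login-script path are placeholders:

```shell
#!/bin/sh
# Sketch of a NetworkManager dispatcher hook (install it, executable,
# in /etc/NetworkManager/dispatcher.d/). NetworkManager invokes it as:
#     <script> <interface> <action>
# The SSID and the login-script path below are placeholders.
TARGET_SSID="UniversityWLAN"

should_login() {
    # $1 = action ("up", "down", ...), $2 = SSID we just associated with
    [ "$1" = "up" ] && [ "$2" = "$TARGET_SSID" ]
}

if should_login "$2" "$(iwgetid -r "$1" 2>/dev/null)"; then
    sh /usr/local/bin/wlan-login.sh   # placeholder for your existing POST script
fi
```

iwgetid (from wireless-tools) prints the current ESSID; the hook only fires the login script when the "up" event is for the university network.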
You might write a simple script that runs iwconfig and processes its output. If the name of the network is found (with a regex, for example), you send the request.
I don't think you can trigger the script at the moment you actually connect to the network, but you can add it to cron so it is executed periodically (note that cron's finest granularity is one minute, not seconds).
Here's a document you may find helpful: https://help.ubuntu.com/community/CronHowto
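The polling approach above can be sketched like this; the SSID, login URL, and credentials are all placeholders, and the ESSID is parsed out of iwconfig's usual `ESSID:"..."` output:

```shell
#!/bin/sh
# Sketch of the cron-polled approach: parse the current ESSID out of
# iwconfig output and fire the login POST when the university network
# appears. SSID, URL, and credentials are placeholders.
TARGET="UniversityWLAN"

parse_ssid() {
    # Pull the first ESSID:"..." value from iwconfig-style output on stdin
    sed -n 's/.*ESSID:"\([^"]*\)".*/\1/p' | head -n 1
}

if [ "$(iwconfig 2>/dev/null | parse_ssid)" = "$TARGET" ]; then
    curl -s -d 'user=me&pass=secret' 'https://wlan-login.example.edu/' >/dev/null
fi
```

A crontab entry like `* * * * * /usr/local/bin/check-wlan.sh` would then run it every minute.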
Is there a way to execute commands using directory traversal attacks?
For instance, I can access a server's /etc/passwd file like this
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
... and get the output?
To be clear here: I've found the vuln in our company's server. I'm looking to raise the risk level (or bonus points for me) by proving that it may give an attacker complete access to the system.
chroot on Linux is easily breakable (unlike on FreeBSD). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
If you are able to view /etc/passwd because the document root or a Directory access directive is misconfigured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application passes user input (a filename) to calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes, it is possible (the first question), if the application is really, really bad (in terms of security).
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments, as they were deemed sarcastic and blunt. OK, now that more information has come from gAMBOOKa about this (Apache with Fedora, which you should have put into the question), I would suggest:
Post to Apache forum, highlighting you're running latest version of Apache and running on Fedora and submit the exploit to them.
Post to Fedora's forum, again, highlighting you're running the latest version of Apache and submit the exploit to them.
It should be noted: include your httpd.conf when posting to both of those forums.
To minimize access to passwd files, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible outside of the sandbox. Have you a spare box lying around to experiment with? Even better, use VMware to simulate the environment you are using for Apache/Fedora (try to get it IDENTICAL), make the httpd server run within VMware, and remotely access the virtual machine to check if the exploit is still visible. Then chroot/sandbox it and re-run the exploit again.
Document the step-by-step process to reproduce it, and include a recommendation until a fix is found. Meanwhile, if there is minimal impact to the webserver running in a sandboxed/chrooted environment, push them to do so.
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd then the server must be poorly configured...
If you really want to execute commands, then you need to know whether the PHP script running on the server contains something like a system() call, so that you can pass commands through the URL,
e.g.: url?command=ls
Try to view the .htaccess files... that may do the trick.
I have got a requirement to test network connectivity to around 30 servers on different ports, as part of a new firewall rules implementation. After the rules are in place I need to check whether the connectivity is successful or not, and I need to test the same rules from 3 servers. So I am looking at some way to automate this. Currently I use telnet to test connectivity, but this is too slow; I am open to a shell script or Ant script. The end result should be a log file listing the server and port to which the connect attempt was made, and the status of the attempt (success/failure).
I believe nmap can do it. It can scan selected/all ports and generate a report.
Ping may help, or even curl? Please describe a scenario that == "It's Dead, Jim!", if the script's checking should not block.
Nagios can probably do what you want.
http://www.nagios.org/
If you don't mind a Perl solution, Net::Ping is pretty helpful. I use this for testing SSH connectivity to servers in our test environment.
Try fping. Very simple and likely gives you most of what you're looking for. If you block ICMP or want to do something with ssh or telnet, then you should look at nagios as Brian Lindauer answered.
Get a list of hosts that are up:
fping -a -f hostlist.txt
Get a list of hosts that are down:
fping -u -f hostlist.txt
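If you need per-port results rather than just host up/down, the telnet-style check from the question can be scripted non-interactively with netcat. A sketch, where the input file format ("host port" pairs, one per line), the 2-second timeout, and the log file name are all assumptions:

```shell
#!/bin/sh
# Sketch: TCP connect-test every "host port" pair in a file with netcat
# and log SUCCESS or FAILURE for each. File format, timeout, and log
# path are assumptions; adjust to taste.
check_port() {
    # $1 = host, $2 = port; -z = connect only, -w 2 = 2-second timeout
    if nc -z -w 2 "$1" "$2" 2>/dev/null; then
        echo "$1:$2 SUCCESS"
    else
        echo "$1:$2 FAILURE"
    fi
}

HOSTFILE="${1:-hostlist.txt}"
if [ -r "$HOSTFILE" ]; then
    while read -r host port; do
        if [ -n "$host" ]; then
            check_port "$host" "$port"
        fi
    done < "$HOSTFILE" >> connectivity.log
fi
```

Run from each of the 3 source servers against the same host list and diff the resulting logs to verify the firewall rules behave consistently.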