IIS: Auto-blacklisting flooding IPs?

Right now, we have a site that is being flooded and is very slow. Normally it works well, but someone decided to flood it today. We have other sites on the server and they are not directly affected. We have limited the amount of CPU that the flooded site may use.
We really need a solution to prevent this kind of flooding.
I have read this page:
http://www.acunetix.com/blog/articles/8-tips-secure-iis-installation/
It talks about temporarily denying IP addresses, but it does not say for how long the IPs are denied access.
In addition, I would rather have the IPs auto-blocked in IIS, or even better in the Windows Firewall. Is this possible?
Can someone help so the site can start behaving normally again? :-)
We are using IIS version 8.5 :-)
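For reference, IIS 8.x ships with a built-in Dynamic IP Restrictions feature that can temporarily deny clients by request rate; the following is only a rough sketch from an elevated command prompt, with illustrative thresholds you would need to tune (and verify against your IIS version):
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" -section:system.webServer/security/dynamicIpSecurity /denyByRequestRate.enabled:"True" /denyByRequestRate.maxRequests:"50" /denyByRequestRate.requestIntervalInMilliseconds:"1000" /commit:apphost
Note that this blocks at the IIS level only; pushing the offending IPs into the Windows Firewall would need a separate script or tool on top of it.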

Related

Windows Active Directory Domain setup remotely through univention using samba4

I have a slight problem; first, a bit of back story. Recently I've been trying to test out Univention, which is a Linux distribution with the goal of being able to replace Microsoft Active Directory.
I tested it locally and all went reasonably well after a few minor issues. I then decided to test it remotely, as the company wants to allow remote users to access this, so I used myhyve.com to host it, and it has now been set up successfully and works reasonably well.
However, my main problem is DNS-based: when trying to connect to the domain, the only way Windows will recognise it is by editing the network adapter and setting the IPv4 DNS server address to the IP address of the server hosting the Univention Active Directory replacement. Although this does allow everything to work, it's not ideal, and DNS lookups on the internet take considerably longer. I was wondering if anyone has done something similar, encountered this problem before, and knows a workaround. I want to avoid setting up a VPN if possible.
After initially registering the computer on the domain, I am able to remove the DNS server address and just use a couple of amendments to the HOSTS file to keep it running, but this still leads to issues connecting to the domain controller sometimes and is not ideal. Any ideas and suggestions would be greatly received.
Michael
For the HOSTS entries, the most likely issue is that there are several service records a computer in the domain needs. I'm not sure whether these can be provided via the HOSTS file or not, but you'll definitely have authentication issues if they are missing. To see the records your domain is using, issue the following command on the UCS system:
/usr/share/univention-samba4/scripts/check_essential_samba4_dns_records.sh
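If you want to spot-check one of those records from a Windows client, you can query the SRV entries directly; a quick example (the domain name is a placeholder for your actual AD domain):
nslookup -type=SRV _ldap._tcp.dc._msdcs.yourdomain.intranet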
For the slow resolution of DNS records, there are several places you could start looking. My first test would be whether you are using a forwarder for external DNS requests, and whether that forwarder responds reasonably quickly. To check if you are using one, type:
ucr search dns/forwarder
If you get a valid IP for any of the UCR variables dns/forwarder1, dns/forwarder2, or dns/forwarder3, you are forwarding your DNS requests to a different server. If all of them are empty or not valid IPs, then your server is doing the resolution itself.
Not using a forwarder is often slow, as the DNS server's caching is optimized for AD operations, like round-robin load balancing. Likewise, a number of ISPs require you to use a forwarder to minimize DNS traffic. You can simply define a forwarder using ucr; I use Google's public IPv4 resolver in this example:
ucr set dns/forwarder1='8.8.8.8'
The other scenario might be a slow forwarder. To check, try to query the forwarder directly using the following command:
dig univention.com @$(ucr get dns/forwarder1)
If it takes a long time, then there is nothing the UCS server can do; you'll simply have to set a different forwarder with the ucr command above.
If neither of the above helps, the next step would be to check whether there are error messages from the named daemon in the syslog. Normally these appear when software has been removed manually or the firewall configuration has changed.
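A quick way to pull those messages out, assuming named logs to the default /var/log/syslog on the UCS system:
grep named /var/log/syslog | tail -n 50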
Kevin
Sponsored post, as I work for Univention North America, Inc.

Cannot access websites on apache from outside the server

I have a Debian 7.5-based Ubuntu server running Apache 2.2.22.
It's a rather vanilla XAMPP install used as a basic web server.
It used to work fine and I have no idea why it suddenly stopped working (there was some maintenance today, but it worked when I left it; I changed partition sizes with GParted).
When I try to access a website from the server (tried with w3m) all is working OK, including PHP and MySQL access.
When I try to access the same host (using a domain) from the outside, the browser keeps loading for a long while, eventually (after a few minutes) saying the page could not be loaded.
I made sure the ports are open and accessible using an outside scanner.
So I'm sure Apache is available (it works from inside the network; websites load over SSH using w3m, and ping works).
I'm sure the server is connected to the web (I can use PuTTY to SSH in).
The hostname resolves to the correct IP (but won't ping from outside, only from inside).
The ports seem to be open (scanned and got OK for port 80).
I'm not an IT professional, so if there is info I can add that could help, just ask away.
I would really appreciate any ideas or direction.
Thanks!
I still suspect the UFW/iptables firewall is blocking all incoming connections... Please go through this article and double-check:
http://www.cyberciti.biz/faq/ubuntu-server-disable-firewall/
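A couple of quick, read-only checks from the shell (these only display the current rules and change nothing):
sudo ufw status verbose
sudo iptables -L -n -v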
If you're sure that the firewall config is OK, please try packet capturing with Wireshark to see what's going on underneath.
How to install Wireshark: http://www.youtube.com/watch?v=sOTCRqa8U9Y
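If installing Wireshark on a headless server is awkward, tcpdump gives a similar view from the command line; for example (replace eth0 with your actual interface):
sudo tcpdump -i eth0 -nn port 80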
Thanks for the help,
Oddly enough, it just started working again after 12 hours of not working.
A friend of mine, an IT person, just called to try and help; he simply connected (5 minutes after I tried) and said it's all working for him.
I tried, and it's working for me also.
I have no idea why it stopped working, or why it is working now.
I think it might be an ISP problem or a router issue... The server is in our offices so I guess it could be both. I just don't understand why SSH would work and HTTP wouldn't.

Slow website even though VPS is up and running

Sorry if this is a bit of a newbie question, but I am quite new to VPS hosting and the relatively more complicated setup. I have a VPS set up, and once or twice a day the site fails to load for about 10 minutes. Then, when it comes back online, it's fine after that. Upon logging in to Plesk, the server is up and running, with very low CPU usage (0.10, dropping to 0.00 after a few minutes) and around 18% RAM usage.
MySQLAdmin loads up fine.
So it seems the VPS is running fine.
Is there maybe another reason? The domain is with Daily.co.uk and the VPS is with LCN.com. Could there be another problem somewhere? On daily.co.uk, there are two nameservers set: ns0.etc*** and ns1.etc***. I did a tracert from the Windows command prompt; it traced down to the server, with two timeouts.
I also ran a check on http://dnscheck.pingdom.com/ while the site was slow, and it came back fine except for this: "Too few IPv4 name servers (1). Only one IPv4 name server was found for the zone. You should always have at least two IPv4 name servers for a zone to be able to handle transient connectivity problems."
Any help would be appreciated. I have tried searching but with no luck.
The recommended diagnostic tool for the issue you are experiencing is called dig.
On a Windows system, this tool is not available out of the box, but it can be downloaded from http://members.shaw.ca/nicholas.fong/dig/
Once you have installed it, you'll want to run it from the command prompt with the following syntax:
C:> dig -insert your domain here- +trace
This will show you how DNS resolution happens from your location to the requested endpoint. Chances are, the warning you received is correct. Most DNS setups assign several name servers to a domain registration so that delegated name servers can be round-robined in the event that one becomes unresponsive.
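To see how many name servers the zone actually publishes, you can also run something like this (yourdomain.co.uk is a placeholder for your real domain):
dig NS yourdomain.co.uk +short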
My personal recommendation would be to outsource the DNS to a managed provider. Doing so will increase the availability of the zone, and reduce latency.

FTP suddenly refuses connection after multiple & sporadic file transfers

I have an issue that my idiot web host support team cannot solve, so here it is:
When I'm working on a site and uploading many files here and there (small files, most of them a few dozen lines at most; mostly PHP and JS files, with some PNG and JPG files), after multiple uploads in a very short timeframe, the FTP server chokes on me. It cuts me off with a "connection refused" error from the server end, as if I were brute-force attacking the server or trying to overload it. Then, after 30 minutes or so, it seems to work again.
I have a dedicated server with inmotion hosting (which I do NOT recommend, but that's another story - I have too many accounts to switch over), so I have access to all logs etc. if you want me to look.
Here's what I have as settings so far:
I have my own IP on the whitelist in the firewall.
FTP settings allow a maximum of 2000 connections at a time (which I am nowhere near hitting; most of the accounts I manage myself, without client access allowed)
Broken Compatibility ON
Idle time 15 mins
On regular port 21
regular FTP (not SFTP)
access to a sub-domain of a major domain
Anyhow, this is very frustrating because I have to pause my web development work in the middle of an update. Restarting FTP in WHM doesn't seem to resolve it right away either; I just have to wait. However, when I try to access the website directly through the browser, or use ping/traceroute commands to see if I can reach it, there's no problem. Just the FTP is cut off.
The FTP server is configured for this behavior. If you cannot change its configuration (or switch to another FTP server program on the server), you can't avoid it.
For example, vsftpd has many such configuration switches.
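If it does turn out to be vsftpd on your box, these are the sort of switches in /etc/vsftpd.conf that govern this kind of throttling (the values are purely illustrative; check the vsftpd.conf man page for your version):
max_per_ip=10
max_clients=200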
Moving to something else like scp or SSH should help.
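For example, a whole directory can be pushed over SSH in one go (the paths, user, and hostname are placeholders):
scp -r ./site user@example.com:/home/user/public_html/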
(I'm not sure that calling your web support team idiots will help you.)

Securing a linux webserver for public access

I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly well versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even just so I can create some tools I can use while I am working in the office. All the experience I've had configuring Java EE sites has been for intranet applications, where we were told not to focus on securing the pages for external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
This article has some of the best ways to lock things down:
http://www.petefreitag.com/item/505.cfm
Some highlights:
Make sure no one can browse the directories (see the snippet after this list)
Make sure only root has write privileges to everything, and only root has read privileges to certain config files
Run mod_security
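For the directory-browsing point above, a minimal Apache snippet looks like this (the path is illustrative; put it in the relevant vhost, or use just the Options line in a .htaccess file):
<Directory /var/www/html>
    Options -Indexes
</Directory>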
The article also takes some pointers from this book:
Apache Security (O'Reilly Press)
As far as distros go, I've run Debian and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just SSH'd into it whenever I needed anything. That is a simple way to keep overhead down. Alternatively, Ubuntu has some nice GUI tools that make it easy to manage Apache/MySQL/PHP.
It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:
1) Security through obscurity
Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.
On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run an SSH server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server, then great: shift that to an obscure port too.
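Changing the SSH port is a one-line edit in /etc/ssh/sshd_config followed by a service restart (2222 is just an example; remember to allow the new port through your firewall before restarting):
Port 2222
sudo service ssh restart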
2) Package management
Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.
One thing you should be sure to consider is which ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (HTTP) or FTP, make sure you at least know what you are serving to the world and who can do what with it. I don't know a lot about FTP, but there are millions of great Apache tutorials just a Google search away.
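With ufw, that kind of default-deny policy is only a few commands (adjust the ports to whatever you actually run; the example mirrors the 22/123 setup above):
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 123/udp
sudo ufw enable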
Bit-Tech.Net ran a couple of articles on how to set up a home server using Linux. Here are the links:
Article 1
Article 2
Hope those are of some help.
#svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.
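These days the same idea can be scripted with the AWS CLI; a rough sketch, where the security group ID and source IP are placeholders:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.10/32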
It's safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and keep your wits about your configuration (i.e., avoid using root for everything, and make sure you keep your software up to date).
On that note, even though this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick with Ubuntu (get Ubuntu Server here); in my experience, it's the quickest way to get answers when asking questions on forums (not sure what to say about uptake, though).
My home server's security, BTW, kind of benefits (I think, or I like to think) from not having a static IP (it runs on DynDNS).
Good luck!
/mp
Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.
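The relevant sshd_config directives, in case they help (only turn off password authentication once you are sure key-based login works):
PermitRootLogin no
PasswordAuthentication no
Then restart the SSH service for the change to take effect.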
That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)
You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production, and only pay for the space, time, and bandwidth you use.
If you do run a Linux server from home, install OSSEC on it for a nice lightweight IDS that works really well.
[EDIT]
As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports.
If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.
There are plenty of ways to do this that will work just fine. I would usually just use an .htaccess file. Quick to set up and secure enough. Probably not the best option, but it works for me. I wouldn't put my credit card numbers behind it, but other than that I don't really care.
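For completeness, a minimal Basic-auth setup of that kind looks roughly like this (paths and the user name are illustrative, and htpasswd comes from the apache2-utils package on Debian/Ubuntu):
sudo htpasswd -c /etc/apache2/.htpasswd myuser
Then, in the .htaccess of the directory you want to protect:
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user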
Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources.
Your whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.
If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.
