FTP suddenly refuses connection after multiple & sporadic file transfers - Linux

I have an issue that my idiot web host support team cannot solve, so here it is:
When I'm working on a site and uploading many files here and there (small files, most of them a few dozen lines at most - mostly PHP and JS files, with some PNG and JPG images), after multiple uploads in a very short timeframe the FTP chokes on me. The server cuts me off with a "refused connection" error, as if I were brute-force attacking it or trying to overload it. Then after 30 minutes or so it seems to work again.
I have a dedicated server with inmotion hosting (which I do NOT recommend, but that's another story - I have too many accounts to switch over), so I have access to all logs etc. if you want me to look.
Here's what I have as settings so far:
I have my own IP on the whitelist in the firewall.
FTP settings have a maximum of 2000 connections at a time (which I am nowhere near hitting - most of the accounts I manage myself, without client access allowed)
Broken Compatibility ON
Idle time 15 mins
On regular port 21
regular FTP (not SFTP)
access to a sub-domain of a major domain
Anyhow, this is very frustrating because I have to pause my web development work in the middle of an update. Restarting FTP in WHM doesn't seem to resolve it right away either - I just have to wait. However, when I try to access the website directly through the browser, or use ping/traceroute to see if I can reach it, there's no problem - only FTP is cut off.

The FTP server is configured to behave this way. If you cannot change its configuration (or switch to a different FTP server program on the server), you can't avoid it.
For example, vsftpd has many configuration switches that control this kind of per-client limiting.
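If the daemon in question happens to be vsftpd (that's an assumption on my part - a cPanel/WHM box may well be running Pure-FTPd instead), these are the kinds of switches I mean; the values below are purely illustrative:

    # /etc/vsftpd.conf - example limits that can produce a refused connection
    # when one client opens many sessions in a short time
    # total simultaneous clients allowed
    max_clients=2000
    # simultaneous connections allowed from a single source IP
    max_per_ip=10
    # kill the session after this many failed logins
    max_login_fails=3
    # seconds to pause after a failed login attempt
    delay_failed_login=1
    # 15 minutes, matching the "Idle time 15 mins" setting above
    idle_session_timeout=900

If a client (or an FTP program opening many parallel transfers) trips a per-IP limit, the server rejects the extra sessions until the existing ones close or time out.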
Moving to something else like scp or SSH should also help.
(I'm not sure that calling your web host's support team idiots will help you, though.)

Related

IIS shared config on network drive - if network is down for a bit, IIS doesn't recover

We have several servers using a shared IIS config stored on network storage. After access to that storage goes down for a few seconds (and then comes back), IIS doesn't work until you run iisreset.
The problem seems to be that the local app pool config files become corrupted. To be more precise, the error given is "Configuration file is not well-formed XML", and if you look at the app pool config file, it no longer contains an actual config.
Now, trying to solve this we came across the "Offline Files" feature and tried it for the shared applicationHost.config, but it wouldn't sync (it says another process is using the file, which is strange - I can easily change and save it).
The shared path starts with an IP (like \\1.2.3.4\...) - perhaps that's the issue (I can't figure out why it would be; just out of ideas at this point)?
Basically, I have two questions:
1) If the shared config is unavailable for a bit, how do we make IIS recover and not be left with corrupt files until an iisreset?
2) Any other ideas to prevent this situation altogether?
We did manage to get Offline Files to work - the problem was that the network drive is served over Samba and had to have oplocks on; otherwise it kept saying it couldn't sync because the file was in use by another process.
Now IIS does recover - in fact, it doesn't go down with the drive at all. However, since our websites are also on that drive, they are unavailable during a network outage (which is expected). The last strange thing is that it takes IIS about a minute to "see" them again after the drive comes back online.
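For reference, a share definition that grants the oplocks Offline Files needs looks roughly like this - the share name and path are placeholders, and the oplock settings are the relevant part:

    # /etc/samba/smb.conf (sketch)
    [iisconfig]
        path = /srv/iisconfig
        read only = no
        # client-side caching (Offline Files) relies on oplocks being granted
        oplocks = yes
        level2 oplocks = yes
        # let Samba handle oplocks itself instead of the kernel
        kernel oplocks = no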

IIS: Auto-blacklisting flooding IPs?

Right now, we have a site that is being flooded, and the site is very slow. Normally it works well, but someone decided to flood it today. We have other sites on the server and they are not directly affected. We have limited the amount of CPU that the flooded site may use.
We really need a solution to avoid flooding of sites like this case.
I have read this page:
http://www.acunetix.com/blog/articles/8-tips-secure-iis-installation/
It talks about temporarily denying IP addresses, but it does not say for how long the IPs are denied access.
In addition, I would rather have the IPs auto-blocked in IIS, or even better in the Windows firewall. Is this possible?
Can someone help so the site can start behaving normally again? :-)
We are using IIS version 8.5 :-)
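One avenue worth checking: IIS 8.5 ships with Dynamic IP Restrictions built into the "IP Address and Domain Restrictions" feature, which can temporarily deny clients by request rate or by concurrent requests. A hedged sketch with appcmd - the site name and thresholds are only examples and need tuning against real traffic:

    REM Enable dynamic denial by request rate and by concurrent requests
    REM ("Default Web Site" and the numbers are placeholders)
    %windir%\system32\inetsrv\appcmd.exe set config "Default Web Site" ^
      -section:system.webServer/security/dynamicIpSecurity ^
      /denyByRequestRate.enabled:"True" ^
      /denyByRequestRate.maxRequests:"50" ^
      /denyByRequestRate.requestIntervalInMilliseconds:"1000" ^
      /denyByConcurrentRequests.enabled:"True" ^
      /denyByConcurrentRequests.maxConcurrentRequests:"20" ^
      /commit:apphost

Note that this blocks offenders inside IIS; pushing the blocks into the Windows firewall automatically would take extra scripting on top of it.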

Cannot access websites on apache from outside the server

I have a Debian 7.5-based Ubuntu server running Apache 2.2.22.
It's a rather vanilla XAMP install used as a basic web server.
It used to work fine and I have no idea why it suddenly stopped working (there was some maintenance today, but it still worked when I left it - I changed partition sizes with GParted).
When I try to access a website from the server (tried with w3m) all is working OK, including PHP and MySQL access.
When I try to access the same host (using a domain) from the outside, the browser keeps loading for a long while and eventually (after a few minutes) says the page could not be loaded.
I made sure the ports are open and accessible with an outside scanner.
So I'm sure Apache is running (it works from inside the network: websites load over SSH using w3m, and ping works).
I'm sure the server is connected to the web (I can SSH in with PuTTY).
The host resolves to the correct IP (but it won't answer ping from outside, only inside).
The ports seem to be open (scanned and got OK for port 80).
I'm not an IT professional, so if there is info I can add that would help, just ask away.
would really appreciate any idea or direction.
Thanks!
I still suspect the UFW/iptables firewall is blocking all incoming connections... Please go through this article and double check
http://www.cyberciti.biz/faq/ubuntu-server-disable-firewall/
If you're sure that the firewall config is OK, please try packet capturing with Wireshark to see what's going on underneath.
How to install: http://www.youtube.com/watch?v=sOTCRqa8U9Y
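If it's easier than a GUI capture, a few quick checks from the server itself (the interface name below is an assumption - adjust to yours):

    # See whether UFW or raw iptables rules are dropping inbound port 80
    sudo ufw status verbose
    sudo iptables -L -n -v --line-numbers

    # Watch for incoming SYNs while someone browses to the site from outside;
    # replace eth0 with your actual interface
    sudo tcpdump -n -i eth0 'tcp port 80'

If the SYNs never show up in tcpdump, the traffic is being dropped before it reaches the box (router or ISP); if they show up but get no reply, look at the local firewall or Apache's Listen configuration.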
Thanks for the help,
Oddly enough - it just started working again after 12 hours of not working.
A friend of mine, an IT person, just called to try and help; he simply connected (5 minutes after I tried) and said it was all working for him.
I tried, and it's working for me also.
Have no idea why it stopped working, and why it is working now.
I think it might be an ISP problem or a router issue... The server is in our offices so I guess it could be both. I just don't understand why SSH would work and HTTP wouldn't.

FTP configuration for WordPress

I've installed a WordPress instance on a Linux server, and I need to give it FTP access in order to install plugins and execute automatic backup/restores. I've just installed vsftpd, and started the service, but now what?
How do I figure out/set what the username/pass is?
Should I allow anonymous access?
Is the hostname just 'localhost'?
Any advice would be appreciated. I've never messed with FTP on linux before. Thanks-
Your question is a little unclear because you don't specify what aspect of WordPress "wants" FTP access. If you got WP installed, you clearly already have at least some access to the machine. That said, I'll try to answer around that ambiguity.
Your questions in order, then some general thoughts:
How do I figure out/set what the username/pass is?
Remember that the man page for a program is a good first stop. A good man page will also contain a FILES or "SEE ALSO" section near the bottom that will point you to relevant config files.
In this case, "man vsftpd" mentions /etc/vsftpd.conf, so you can then do "man vsftpd.conf" to get info on how to configure it.
VSFTPD is configurable, and can allow users to log in in several ways. In the man page, check out "guest_enable" and "guest_username", "local_enable" and "user_sub_token".
The easiest route for your single-user usage is probably configuring local_enable; your username and password would then be whatever they are in /etc/passwd.
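A minimal sketch of that local-user setup, assuming a standard /etc/vsftpd.conf (restart vsftpd after editing):

    # /etc/vsftpd.conf (sketch)
    # no anonymous logins
    anonymous_enable=NO
    # allow users from /etc/passwd to log in with their system password
    local_enable=YES
    # permit uploads and file changes (WordPress needs to write)
    write_enable=YES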
Should I allow anonymous access?
No. Since you're using this to admin your Wordpress, there's no reason anyone else should be using this FTP. VSFTPD has this off by default.
Is the hostname just 'localhost'?
Depends where you're coming from. 'localhost' maps back to the loopback, i.e. the same physical machine you're on. So if you need to put FTP configuration information for Server A into a WordPress configuration file on Server A, then 'localhost' is perfectly acceptable. If you're trying to configure the pasv_addr_resolve/pasv_address options of VSFTPD, then no, you'll want to either pass in the fully qualified name of Server A (serverA.mydomain.com), or leave it off and rely on the IP address.
EDIT: I actually forgot the critical disclaimer to never send credentials over plain FTP. Plain old FTP (meaning not SFTP) sends your username and password in cleartext. I didn't install VSFTPD and play with it, but you'll want to make sure that there is some form of encryption happening when you connect. Try hitting it with WinSCP (from Windows) or sftp (from Linux) to make sure you're getting encrypted SFTP rather than plaintext FTP.
Apologies if you already knew that ;)
You would probably get better answers on Server Fault.
That said:
vsftpd should use your local users by default, and drop you in that user's home directory on login.
disable anonymous access if you don't need it, I don't think wordpress will care but your server will be safer.
yes, or 127.0.0.1, or your public IP if you think you might split the front and back end some day.
WordPress does not natively support SFTP. You can get around this two ways:
chmod/chown the permissions in the appropriate directories to allow the normal, automatic update to work correctly (a sketch follows below). This is the approach most certain to work, as long as it doesn't trip over any local security policies.
Try hacking it in yourself. There have been any number of threads on this at the WordPress.org forums. Here is a recent one which is also talking about non-standard ports. Here is an article about how to try to get it working on Debian Lenny (which also addresses the non-standard port issue).
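A sketch of that first option, assuming Apache runs as www-data and WordPress lives in /var/www/html - adjust the user, group, and path to your setup:

    # Give the web server user ownership of the content WordPress must write to
    sudo chown -R www-data:www-data /var/www/html/wp-content
    # Sane directory/file permissions
    sudo find /var/www/html/wp-content -type d -exec chmod 755 {} \;
    sudo find /var/www/html/wp-content -type f -exec chmod 644 {} \;

With ownership arranged like that, WordPress can usually write plugins and updates directly and stops prompting for FTP credentials at all.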

Securing a linux webserver for public access

I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly well versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even if just so I can create some tools I can use while I am working in the office. All the experience I've had with configuring Java EE sites has been for intranet applications, where we were told not to focus on securing the pages against external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
This article has some of the best ways to lock things down:
http://www.petefreitag.com/item/505.cfm
Some highlights:
Make sure no one can browse the directories (see the example below)
Make sure only root has write privileges to everything, and only root has read privileges to certain config files
Run mod_security
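For the directory-browsing point, the Apache side typically looks something like this (the path is an example - point it at your actual document root; the mod_security setup is distro-specific, so it isn't shown):

    # Example only - adjust the path to your document root
    <Directory /var/www/html>
        # no automatic directory listings
        Options -Indexes
        # ignore .htaccess overrides unless you explicitly need them
        AllowOverride None
    </Directory>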
The article also takes some pointers from this book:
Apache Security (O'Reilly Press)
As far as distros go, I've run Debian and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just SSH'd into it whenever I needed anything. That is a simple way to keep overhead down. Or Ubuntu has some nice GUI tools that make it easy to control Apache/MySQL/PHP.
It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:
1) Security through obscurity
Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.
On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run an SSH server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server then great, shift that to an obscure port too.
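For the SSH part, that's a one-line change in sshd_config - the port number below is arbitrary, and you should open it in your firewall and keep an existing session alive while you test:

    # /etc/ssh/sshd_config (sketch) - an arbitrary high port instead of 22
    Port 22022

Afterwards you connect with something like ssh -p 22022 user@yourhost.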
2) Package management
Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.
One thing you should be sure to consider is which ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (HTTP) or FTP, make sure you at least know what you are serving to the world and who can do what with it. I don't know a lot about FTP, but there are millions of great Apache tutorials just a Google search away.
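On an Ubuntu box with ufw, that "only SSH and NTP" policy might look like the sketch below; add 80/tcp only once you deliberately start serving web traffic:

    # Default-deny inbound, allow outbound, then open only what you run
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 22/tcp     # SSH
    sudo ufw allow 123/udp    # ntpd
    # sudo ufw allow 80/tcp   # uncomment only when you actually serve HTTP
    sudo ufw enable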
Bit-Tech.Net ran a couple of articles on how to set up a home server using Linux. Here are the links:
Article 1
Article 2
Hope those are of some help.
#svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.
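With the current AWS CLI, that workflow is roughly the following - the security group ID and IP address are placeholders:

    # Allow SSH from the coffee shop's IP for the duration of the demo
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 --protocol tcp --port 22 \
        --cidr 203.0.113.7/32

    # Revoke it again afterwards
    aws ec2 revoke-security-group-ingress \
        --group-id sg-0123456789abcdef0 --protocol tcp --port 22 \
        --cidr 203.0.113.7/32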
It's safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and keep your wits about your configuration (i.e., avoid using root for everything and make sure you keep your software up to date).
On that note, albeit this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick with anything Ubuntu (get Ubuntu Server here); in my experience it's the quickest way to get answers when asking questions on forums (not sure what to say about uptake, though).
My home server security BTW kinda benefits (I think, or I like to think) from not having a static IP (runs on DynDNS).
Good luck!
/mp
Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.
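The relevant sshd_config changes are a short sketch like this - test with a second session open before you log out, and only disable password auth once your key login works:

    # /etc/ssh/sshd_config (sketch)
    # no direct root logins - su or sudo after logging in as a normal user
    PermitRootLogin no
    # prefer keys; turn passwords off only after confirming key login works
    PubkeyAuthentication yes
    PasswordAuthentication no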
That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)
You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production. And only pay for the space,time and bandwidth you use.
If you do run a Linux server from home, install ossec on it for a nice lightweight IDS that works really well.
[EDIT]
As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports.
If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.
There are plenty of ways to do this that will work just fine. I would usually just use a .htaccess file. Quick to set up and secure enough. Probably not the best option, but it works for me. I wouldn't put my credit card numbers behind it, but other than that I don't really care.
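A basic-auth setup of that kind usually amounts to a password file plus four lines of .htaccess - the paths and username below are examples, and the protected directory needs AllowOverride AuthConfig (or similar) for the .htaccess to be honored:

    # Create the password file (prompts for the password)
    sudo htpasswd -c /etc/apache2/.htpasswd myuser

Then, in the directory you want to protect, an .htaccess containing:

    AuthType Basic
    AuthName "Private"
    AuthUserFile /etc/apache2/.htpasswd
    Require valid-user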
Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources.
Your whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.
If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.
