Remove Gitorious excessive usage restriction

I've installed a new local Gitorious server, but when I configure the users/projects/permissions I get:
Slow down!
Seems like you are just a tad busy, creating all those records that fast. Too much coffee?
To prevent abuse, we have denied your request due to excessive usage.
Feel free to contact us if you believe you have received this message in error.
That's great and all for later, but right now I really need to configure a bunch of stuff. How can I remove this restriction?

set:
disable_record_throttling: true
in gitorious.yml
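A minimal sketch of where that flag lives, assuming a config keyed by Rails environment (the production: key and file path are assumptions; some installs keep settings at the top level):

# config/gitorious.yml (path may vary by install)
production:
  disable_record_throttling: true

Restart the Gitorious services afterwards so the new setting is picked up.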

What do unrecognized GET requests mean?

It's a new Amazon EC2 instance; it's live maybe a couple of hours a day. I've just installed Node.js and am still playing with it. Then I get this in my PuTTY SSH window:
I don't know what those last two requests mean. I don't have a robots.txt and I definitely don't have any HTML file whatsoever (it's all in Jade).
Do I have to be concerned?
It's probably a bot. Generally you don't need to worry about it, unless it's generating too many requests, to the point that it causes performance issues for your application.

TortoiseSVN Error: Could not send request body: an existing connection was forcibly closed by the remote host

Let me preface this by saying I have basically 0 knowledge of web development. That being said, I'll still try to provide you with as much information as I possibly can. Our client is using IIS7 on a Windows Server 2008 R2 machine. The TortoiseSVN error they're getting is this:
Error: Could not send request body: an existing connection was forcibly closed by the remote host.
Using the powers of Google, it seems there are two possible things that could be occurring here. As it's a 4 GB file, I've seen people mention that it could be a configuration issue: the timeout could be a little short, or I might need to enable a setting somewhere to allow committing larger files. Or it could be a network issue. It might be useful to note that they can commit smaller files.
I've already tried disabling the firewall, as well as the antivirus, on the server and having them retry, but that didn't work. They are uploading from a desktop to the server, and both are on the same network through a gigabit switch. I'm sure I'm missing useful information, but I'm a total noob to web dev, their setup, and what they're actually trying to do. If you need any more information from me I'll be glad to provide it.
The problem could be overly strict timeout options configured in Apache's reqtimeout module. I simply disabled it:
a2dismod reqtimeout
/etc/init.d/apache2 restart
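If you'd rather keep the module than disable it outright, raising its limits may be enough. A sketch, assuming the Debian/Ubuntu config layout (the values are illustrative, not tuned for any particular workload):

# /etc/apache2/mods-available/reqtimeout.conf
# Give clients 60s to send headers; allow large request bodies up to 600s
RequestReadTimeout header=60,MinRate=500 body=600,MinRate=500

Restart Apache afterwards, as above, for the change to take effect.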
Credit to: https://serverfault.com/questions/297562/svn-https-problem-could-not-read-status-line-connection-was-closed-by-ser

If I download a hacked Joomla website on my laptop to fix it

If I download a hacked website to my laptop to fix it and run PHP code that someone else could have modified, am I risking damage to my local computer?
Let's say I need to assign some privileges to run a MySQL database; that could be potentially dangerous, right?
It is a hacked Joomla website.
You cannot be sure what can happen. For maximum protection, I recommend putting everything in a virtual machine and then disabling its internet access.
Yes, there is a risk: the PHP code will have the same permissions as the user running the code on the computer. If you give the PHP code access to a database, it will be able to do anything the MySQL user can do.
If you're going for 100% safety, run all of it in a virtual machine to avoid accidents with your actual laptop.
Update: of course, a good first step would be to diff the PHP code against the official Joomla! code of the appropriate version, to identify differences between the two.
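A minimal sketch of that comparison, assuming a clean copy of the same Joomla release has been unpacked into joomla-clean/ (both directory names here are hypothetical):

# List files that differ from the pristine release
diff -qr joomla-clean/ site-download/

# Inspect the actual changes in any suspicious file
diff -u joomla-clean/index.php site-download/index.php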
That depends.
If the hacker put in a malicious script (JS/HTML) that exploits vulnerabilities in your browser, or something similar, then you may damage your machine.
Usually, modified PHP files provide backdoors (also known as shells), or proxies, or something similar. They are used for remote access and are not usually intended to break the machine. However, that's not always true.
If your site was running in a Unix environment and your laptop runs Windows, the risk is lower.
I would recommend at least using a firewall. For full protection, you should do everything inside a virtual machine.
Use any file-comparison tool to find the modified places.
As for the database, use only a local copy. When you've corrected everything, replace the version on the server with it.
When code has been modified by someone else, running/executing it is always dangerous. Therefore, you must take care that it can't be executed:
Don't download with a web browser. Use a tool that just makes a binary copy, like rsync or wget, or log into your server, create a ZIP archive of the modified scripts, and then download that (see the sketch after this list).
Always make a backup copy of everything before you look at it. That includes the database, all scripts, HTML pages, templates, everything.
Run the code on an isolated computer (no network connection). If you don't have a spare laptop, run it in a virtual machine with networking turned off. This isn't as secure as the first option because virtual machines have bugs, too, but it's better than nothing.
Never execute the code unless you know it's safe. First, compare it against a known good copy. If there isn't one, read the code and try to figure out what it does. If that's beyond your limits, mark it down as experience, scrap the whole thing and start from scratch.
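A minimal sketch of the binary-copy step from the first item above, assuming SSH access to the server (the host and paths are hypothetical):

# Copy the site byte-for-byte without rendering or executing anything
rsync -avz user@example.com:/var/www/site/ ./site-download/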
You don't want users of your site to sue you when they get hacked because you failed to remove all the malicious code, do you?
The bad code might not be in the scripts; if your site is vulnerable to script injection, then it can be in the database and only be visible when the pages are rendered. If this is the case, fix all places where database values are pasted into the HTML verbatim before you try to view them in a web browser.
Joomla hacks are usually pretty straightforward (but time-consuming) to clean up (old Joomla versions can be pretty vulnerable to attack). Follow some of the tips here to keep yourself safe, and remember to:
Replace all the Joomla system files with the latest version from Joomla!
If you have a fairly recent backup, it would be much easier to just remove the hacked site, restore the backup, and then update it to the latest version of Joomla to help secure it.

How do I secure a production server after inheriting it from the previous development vendor?

We received access to the environment, but I now need to go through the process of securing it so that the previous vendor can no longer access it, or the Web applications running on it. This is a Linux box running Ubuntu. I know I need to change the following passwords:
SSH
FTP
MySQL
Control Panel Admin
Primary Application Admin
However, how do I really know I've completely secured the system using best practices, and am I missing anything else that I need to do other than just changing passwords?
Three simple steps:
Back up configurations and source files from the web server, plus dumps of the SQL tables (see the sketch below)
Reinstall the operating system
Follow standard hardening steps on the fresh OS
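A sketch of step 1, assuming a conventional Apache/MySQL layout (all paths and credentials here are assumptions):

# Archive the web root and the web server configuration
tar czf site-backup.tar.gz /var/www /etc/apache2

# Dump every database for restoration onto the clean install
mysqldump --all-databases -u root -p > all-databases.sql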
Regardless of who it was, they could have installed any old crap on there (rootkits) that you can't configure away.
You will probably get more responses at serverfault.com on these kinds of questions.
There are several things you can do to secure SSH by editing your sshd_config file, which is usually in /etc/ssh/:
Disable root logins:
PermitRootLogin no
Change the SSH port from the default of 22:
Port 9222
Manually specify which accounts can log in (note the list is space-separated):
AllowUsers andrew jane doe
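After editing sshd_config, it's worth validating the file and reloading the daemon. A sketch assuming Ubuntu's service name:

# Check the config for syntax errors, then restart sshd
sudo sshd -t && sudo service ssh restart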
SecurityFocus has a good article about securing MySQL, although it's a bit dated.
The best thing you could do would be to reinstall, and to make sure that when you bring files over from the old system to the new one, it is just data, not executables that could be nasty. If that is too much, change all the passwords, watch the logs for a few weeks, and use iptables to block the former vendor. Given that there could be a rootkit at the kernel level, it's probably a good idea to swap the kernel out too, and to watch traffic coming out of the box for anything that might be going to the vendor. It really is a hassle to take someone else's machine and declare it safe; I would go as far as to say it is nearly impossible.
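A minimal iptables sketch for the blocking step (the address range here is a documentation placeholder; substitute the vendor's actual addresses):

# Drop all inbound traffic from the former vendor's network
iptables -A INPUT -s 203.0.113.0/24 -j DROP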
Side note: this isn't really programming related, so it probably shouldn't be on this site.

Securing a Linux web server for public access

I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well).
I'm fairly versed in setting up Tomcat on Linux for serving Java EE applications, but I'd like to be able to open this server up, even if just so I can create some tools I can use while I am working in the office. All the experience I've had configuring Java EE sites has been for intranet applications, where we were told not to focus on securing the pages for external users.
What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic?
This article has some of the best ways to lock things down:
http://www.petefreitag.com/item/505.cfm
Some highlights:
Make sure no one can browse the directories (see the sketch after this list)
Make sure only root has write privileges to everything, and only root has read privileges to certain config files
Run mod_security
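A minimal sketch of the directory-browsing lockdown for an Apache vhost (the path is an assumption):

# Disable automatic directory listings and per-directory overrides
<Directory /var/www/html>
    Options -Indexes
    AllowOverride None
</Directory>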
The article also takes some pointers from this book:
Apache Security (O'Reilly Press)
As far as distros go, I've run Debian and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just SSH'd into it whenever I needed anything. That is a simple way to keep overhead down. Ubuntu also has some nice GUI tools that make it easy to control Apache/MySQL/PHP.
It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity:
1) Security through obscurity
Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had.
On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run an SSH server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server, then great, shift that to an obscure port too.
2) Package management
Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides.
One thing you should be sure to consider is which ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (HTTP) or FTP, make sure you know at least what you are serving to the world and who can do what with it. I don't know a lot about FTP, but there are millions of great Apache tutorials just a Google search away.
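A sketch of that allow-list using ufw on Ubuntu (assuming ufw is installed; open only the ports for services you actually run):

# Deny everything inbound, then open just SSH and NTP
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 123/udp
sudo ufw enable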
Bit-Tech.Net ran a couple of articles on how to set up a home server using Linux. Here are the links:
Article 1
Article 2
Hope those are of some help.
#svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL.
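A sketch of that workflow with the modern AWS CLI (the security group ID and client IP are hypothetical; the original EC2 API exposed the same operation):

# Temporarily allow SSH from the coffee shop's current IP
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 198.51.100.7/32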
It's safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and your wits up about your configuration (i.e., avoid using root for everything, and make sure you keep your software up to date).
On that note, albeit this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick with anything Ubuntu (get Ubuntu Server here); in my experience, it's the quickest for getting answers when asking questions on forums (not sure what to say about uptake, though).
My home server's security, by the way, kinda benefits (I think, or I like to think) from not having a static IP (it runs on DynDNS).
Good luck!
/mp
Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server.
That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =)
You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production, and only pay for the space, time, and bandwidth you use.
If you do run a Linux server from home, install OSSEC on it for a nice lightweight IDS that works really well.
[EDIT]
As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written into their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem), some ISPs don't allow any traffic over port 80 or 25, so you will have to use alternate ports.
If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well.
There are plenty of ways to do this that will work just fine. I would usually just use a .htaccess file. Quick to set up and secure enough. Probably not the best option, but it works for me. I wouldn't put my credit card numbers behind it, but other than that I don't really care.
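A minimal sketch of that .htaccess approach using HTTP basic auth (the password-file path is an assumption; create it first with the htpasswd utility):

# .htaccess in the directory to protect
AuthType Basic
AuthName "Private tools"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user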
Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources.
Your whole approach to an externally-available server should be very conservative and thorough. It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid.
If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.
