Protecting web-server from ssh brute force attacks - security

I have a server acting as a reverse-proxy connected directly to the internet. I access this computer through ssh on a non-standard port. It doesn't seem too secure. If someone found the ssh port they could brute-force it and gain access to the computer.
Is there a more secure way to configure this?

Brute forcing SSH is very slow and time-consuming, by design. With OpenSSH (most implementations are similar) there is a couple-second delay after submitting an incorrect password, and after three failures the connection is dropped. This makes it unbelievably slow to brute force a password of even moderate entropy. Generally, brute forcing of SSH is not a problem so long as your server doesn't have a ton of threads accepting connections and your password is sufficiently complex.
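Those limits are also tunable if you want to tighten them. A minimal sshd_config sketch, assuming stock OpenSSH; the values shown are just one reasonable choice:

    # /etc/ssh/sshd_config -- reduce brute-force exposure
    PermitRootLogin no      # never expose root to password guessing
    MaxAuthTries 3          # drop the connection after 3 failed attempts
    LoginGraceTime 30       # give up on clients that stall the handshake
    # then reload: systemctl reload sshd (or service ssh reload)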
Nevertheless, @demure's suggestion to use denyhosts or sshguard would not only put your mind at ease, it would also help you detect attacks and take action if necessary. @Steve's suggestion to use public key authentication is also an excellent preventive measure. If you're truly paranoid, consider requiring both a password and a public key to achieve two-factor authentication: something you have and something you know.
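For reference, a sketch of that two-factor setup, assuming OpenSSH 6.2 or newer (which added the AuthenticationMethods directive):

    # /etc/ssh/sshd_config -- require the key first, then the password
    PubkeyAuthentication yes
    PasswordAuthentication yes
    AuthenticationMethods publickey,password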

I would recommend a tool like denyhosts or sshguard, which watch your logs for failed logins and auto-ban IPs based on the rules you set.
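For instance, sshguard typically sits behind a firewall chain that it fills with blocks. A minimal sketch, assuming the iptables backend:

    # create the chain sshguard will populate with bans
    iptables -N sshguard
    # send inbound ssh traffic through it (use your real ssh port here)
    iptables -A INPUT -p tcp --dport 22 -j sshguard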

I use fail2ban, and I also wrote a utility myself in C# using Mono to add any IP that attacks my SSHD to /etc/hosts.deny as well. Now I see there are some persistent attackers whose IP addresses keep getting refused. They always come back the next day, though. Digital Ocean has a good tutorial for setting up fail2ban here.
Edit: I changed my login from username/password to certificate-based authentication, and that seems to have stopped most of the attacks, but some people even try to brute-force the certificate. I have since updated my code based on the new logs, and that seems to be working fine.
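If you go the fail2ban route, the heart of the setup is one small jail. A minimal sketch, assuming a recent fail2ban where the stock sshd filter is present; the values are placeholders to adjust:

    # /etc/fail2ban/jail.local
    [sshd]
    enabled = true
    # your non-standard ssh port
    port = 2222
    # ban for one day after 3 failures
    maxretry = 3
    bantime = 86400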

Brute-force SSH attacks can suck the resources from a server. The problem can be especially exasperating on low-powered servers with a minimal amount of processor (CPU) and memory (RAM).
We developed a solution that downloads IP address black lists of known SSH attackers and adds them to the /etc/hosts.deny file.
The solution is released as an open-source project named am-deny-hosts on GitHub. The project consists of a set of shell scripts that do the job without taking up a lot of time, CPU, or memory.
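The actual scripts live in the repository; the core idea is only a few lines. A hypothetical minimal version, with a placeholder blacklist URL you would swap for a real feed:

    #!/bin/sh
    # fetch a list of known SSH attacker IPs and deny them to sshd
    BLACKLIST_URL="https://example.com/ssh-attackers.txt"   # placeholder feed
    curl -s "$BLACKLIST_URL" | while read -r ip; do
        # append each address once; hosts.deny lines look like "sshd: 1.2.3.4"
        grep -q "sshd: $ip" /etc/hosts.deny || echo "sshd: $ip" >> /etc/hosts.deny
    done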

Related

What security risks are posed by using a local server to provide a browser-based gui for a program?

I am building a relatively simple program to gather and sort data input by the user. I would like to use a local server running through a web browser for two reasons:
HTML forms are a simple and effective means for gathering the input I'll need.
I want to be able to run the program off-line and without having to manage the security risks involved with accessing a remote server.
Edit: To clarify, I mean that the application should be accessible only from the local network and not from the Internet.
As I've been seeking out information on the issue, I've encountered one or two remarks suggesting that local servers have their own security risks, but I'm not clear on the nature or severity of those risks.
(In case it is relevant, I will be using SWI-Prolog for handling the data manipulation. I also plan on using the SWI-Prolog HTTP package for the server, but I am willing to reconsider this choice if it turns out to be a bad idea.)
I have two questions:
What security risks does one need to be aware of when using a local server for this purpose? (Note: In my case, the program will likely deal with some very sensitive information, so I don't have room for any laxity on this issue).
How does one go about mitigating these risks? (Or, where should I look to learn how to address this issue?)
I'm very grateful for any and all help!
There are security risks with any solution. You can use tools proven over the years and still be hacked one day (I speak from my own experience), or you can pay a lot for a security solution and never be hacked. So you always need to weigh the effort against the impact.
Basically, you need to protect four "doors" in your case:
1. Authorization (password interception or, for example, improper usage of cookies)
2. The HTTP protocol
3. Application input
4. Other ways to access your database (not via HTTP; for example, through an SSH port with a weak password, or by taking your computer or hard disk. In some cases you need to properly encrypt the volume)
Doors 1 and 4 are not specific to Prolog, but door 4 is the only one with concerns specific to local servers.
Protecting at the HTTP protocol level means not allowing requests that can take control of your SWI-Prolog server. For this purpose I recommend installing a reverse proxy like nginx, which can prevent attacks at this level, including some types of DoS. The browser will contact nginx, and nginx will forward the request to your server only if it is a correct HTTP request. You can use any other server instead of nginx if it has similar features.
You need to install a proper SSL key and enable SSL (HTTPS) in your reverse-proxy server, not in your SWI-Prolog server. HTTPS will encrypt all the information on the wire, and the proxy will talk to SWI-Prolog over plain HTTP.
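A minimal sketch of that arrangement, assuming SWI-Prolog listens on 127.0.0.1:8080 (a placeholder port) and nginx terminates HTTPS in front of it; host and file names are placeholders:

    # /etc/nginx/sites-available/prolog-app
    server {
        listen 443 ssl;
        server_name example.com;
        ssl_certificate     /etc/ssl/certs/app.crt;
        ssl_certificate_key /etc/ssl/private/app.key;
        location / {
            proxy_pass http://127.0.0.1:8080;   # plain HTTP to swi-prolog
        }
    }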
Think about authorization. There are methods that can be broken very easily, so you need to study this topic; there is a lot of information available. I think it is the most important part.
The application-input problem: the famous example is SQL injection. Study the examples. All good web frameworks have "entry" procedures to sanitize all possible injections; take existing code and adapt it to Prolog.
Also, test all input fields with very long strings, different charsets, etc.
As you can see, security is not easy, but you can choose a level of effort appropriate to the impact of being hacked.
Also, think about the likely attacker. If somebody is specifically interested in getting your information, all the methods mentioned are good, but that is a rare case. Most often, hackers just scan the internet and try to apply known exploits to every server they find. In that case your best friends are honeypots and Prolog itself, because the probability of a hacker bothering with SWI-Prolog internals is extremely low (the hacker would need to study the server code well to find a door).
So I think you will find adequate methods to protect all sensitive data.
But please, never use passwords built from combinations of dictionary words, and never use the same password for more than one purpose; that is the most important rule of security. For the same reason you shouldn't give your users access to all information; protection should be part of the application-level design.
The concerns specific to a local server are a good firewall, proper network setup, and encryption of the hard-drive partition if your local server could be stolen by a "hacker".
But if you mean the application should be accessible only from your local network and not from the Internet, you need much less effort: mainly you need to check your router/firewall setup and the 4th door in my list.
If you have a very limited number of known users, you can simply have them use a VPN and not protect your server as you would for "global" access.
I'd point out that my post was about a security issue with using port forwarding in Apache to access a Prolog server.
And I do know of a successful prolog injection DOS attack on a SWI-Prolog http framework based website. I don't believe the website's author wants the details made public, but the possibility is certainly real.
Obviously this attack vector is only possible if the site evaluates Turing complete code (or code which it can't prove will terminate).
A simple security precaution is to check the Request object and reject requests from anything but localhost.
I'd point out that the pldoc server only responds by default on localhost.
- Anne Ogborn
I think the SWI-Prolog http package is an excellent choice. Jan Wielemaker put much effort into making it secure and scalable.
I don't think you need to worry about SQL injection; indeed, it would be strange to rely on SQL when you have the power of Prolog at your fingertips...
Of course, you need to properly manage the HTTP access in your server...
Just this morning there was an interesting post on the SWI-Prolog mailing list about this topic: Anne Ogborn shares her experience...

Is basic HTTP auth in CouchDB safe enough for replication across EC2 regions?

I can appreciate that seeing "basic auth" and "safe enough" in the same sentence is a lot like reading "Is parachuting without a parachute still safe?", so I'll do my best to clarify what I am getting at.
From what I've seen online, people typically describe basic HTTP auth as being unsecured due to the credentials being passed in plain text from the client to the server; this leaves you open to having your credentials sniffed by a nefarious person or man-in-the-middle in a network configuration where your traffic may be passing through an untrusted point of access (e.g. an open AP at a coffee shop).
To keep the conversation between you and the server secure, the solution is typically to use an SSL-based connection, where your credentials might still be sent in plain text, but the communication channel between you and the server is itself secured.
So, onto my question...
In the situation of replicating one CouchDB instance from an EC2 instance in one region (e.g. us-west) to another CouchDB instance in another region (e.g. singapore) the network traffic will be traveling across a path of what I would consider "trusted" backbone servers.
Given that (and assuming I am not replicating highly sensitive data), would anyone/everyone consider basic HTTP auth for CouchDB replication sufficiently secure?
If not, please clarify what scenarios I am missing here that would make this setup unacceptable. I do understand for sensitive data this is not appropriate, I just want to better understand the ins and outs for non-sensitive data replicated over a relatively-trusted network.
Bob is right that it is better to err on the side of caution, and he could be right in this case (see details below), but the problem with his general approach is that it ignores the cost of paranoia. It leaves "peace dividend" money on the table. I prefer Bruce Schneier's assessment that security is a trade-off.
Short answer
Start replicating now! Do not worry about HTTPS.
The greatest risk is not wire sniffing but your own human error, followed by software bugs, either of which could destroy or corrupt your data. Make a replica! If you will be replicating regularly, plan to move to HTTPS or something equivalent (an SSH tunnel, stunnel, a VPN).
Rationale
Is HTTPS easy with CouchDB 1.1? It is as easy as HTTPS can possibly be; in other words, no, it is not easy.
You have to make an SSL key pair, purchase a certificate or run your own certificate authority—you're not foolish enough to self-sign, of course! The user's hashed password is plainly visible from your remote couch! To protect against cracking, will you implement bi-directional SSL authentication? Does CouchDB support that? Maybe you need a VPN instead? What about the security of your key files? Don't check them into Subversion! And don't bundle them into your EC2 AMI! That defeats the purpose. You have to keep them separate and safe. When you deploy or restore from backup, copy them manually. Also, password-protect them so if somebody gets the files, they can't steal (or worse, modify!) your data. When you start CouchDB or replicate, you must manually input the password before replication will work.
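For concreteness, the key-pair step alone looks something like this (OpenSSL assumed; file names are placeholders):

    # generate a private key and a certificate signing request
    openssl genrsa -out couch.key 2048
    openssl req -new -key couch.key -out couch.csr
    # send couch.csr to a CA, or (for testing only) self-sign:
    openssl x509 -req -in couch.csr -signkey couch.key -days 365 -out couch.crt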
In a nutshell, every security decision has a cost.
A similar question is, "Should I lock my house at night?" It depends. Your profile says you are in Tucson, so you know that some neighborhoods are safe while others are not. Yes, it is always safer to lock all of your doors all of the time. But what is the cost to your time and mental health? The analogy breaks down a bit because the time invested in worst-case security preparedness is much greater than twisting a bolt lock.
Amazon EC2 is a moderately safe neighborhood. The major risks are opportunistic, broad-spectrum scans for common errors. Basically, organized crime is scanning for common SSH accounts and web apps like Wordpress, so they can steal a credit-card or other database.
You are a small fish in a gigantic ocean. Nobody cares about you specifically. Unless you are specifically targeted by a government or organized crime, or somebody with resources and motivation (hey, it's CouchDB—that happens!), then it's inefficient to worry about the boogeyman. Your adversaries are casting broad nets to get the biggest catch. Nobody is trying to spear-fish you.
I look at it like high-school integral calculus: measuring the area under the curve. Time goes to the right (x-axis). Risky behavior goes up (y-axis). When you do something risky, you save time and effort, but the graph spikes upward. When you do something the safe way, it costs time and effort, but the graph moves down. Your goal is to minimize the long-term area under the curve, but each decision is case-by-case. Every day, most Americans ride in automobiles: the single most risky behavior in American life. We intuitively understand the risk-benefit trade-off. Activity on the Internet is the same.
As you imply, basic authentication without transport layer security is 100% insecure. Anyone on EC2 that can sniff your packets can see your password. Assuming that no one can is a mistake.
In CouchDB 1.1, you can enable native SSL. In earlier versions, use stunnel. Adding SSL/TLS protection is so simple that there's really no excuse not to.
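A minimal stunnel sketch for the pre-1.1 case, assuming CouchDB on its default port 5984 and a PEM certificate already prepared:

    ; /etc/stunnel/couchdb.conf
    cert = /etc/stunnel/couch.pem
    [couchdb]
    accept  = 6984
    connect = 127.0.0.1:5984

Start it with stunnel /etc/stunnel/couchdb.conf and point replication at https://host:6984/ instead of the plain port.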
I just found this statement from Amazon which may help anyone trying to understand the risk of packet sniffing on EC2.
Packet sniffing by other tenants: It is not possible for a virtual instance running in promiscuous mode to receive or "sniff" traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. This includes two virtual instances that are owned by the same customer, even if they are located on the same physical host. Attacks such as ARP cache poisoning do not work within EC2. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers should encrypt sensitive traffic.
http://aws.amazon.com/articles/1697

Allow RDP to public webserver?

Is it a huge security flaw to allow users to connect to your server via Remote Desktop? Right now I have a setup where I only allow a couple of IP addresses to connect via the RDP port, but I am thinking of removing this restriction and allowing all IPs to connect, so I can RDP from my iPhone if there is some problem while I'm not at home.
So, as long as I have a secure password, do you guys think this is a bad idea? Is there anything else I can do to make it a bit more secure but still be able to connect from "wherever"? Is it, for example, possible to set up a page that I must visit that "allows anyone to log in for 2 hours"? Some kind of security-by-obscurity thingy?
Thankful for any help I can get.
Maybe you should post this question to Server Fault. But anyway:
If you are using only user/password as the access method, then it will be very easy for an attacker to lock out your user (or all users; they don't even need terminal access rights, just enough failed attempts to trigger the lockout). So yes, it will be a huge security flaw. There are lots of ways to protect against this threat and still make RDP available from wherever, but I am not familiar with any of them.
It's very common to implement two-factor authentication for any remote access to corporate servers. In many companies you'll see RSA tokens used as the second factor, although I prefer to use SMS; it doesn't matter, as long as you have two factors in play out of: something you know, something you have, something you are.
If your company doesn't want to implement a second factor then I still wouldn't recommend a publicly exposed RDP interface. It's open to brute force attacks, OS exploits or just plain old Denial of Service (if I blast your public interface with traffic then it will slow down legitimate machine use within your company). At a minimum I would look into tunneling over SSH, maybe with a client-side certificate authentication, or I would implement port knocking to get at the server interface in the first place.
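A sketch of the SSH-tunnel option mentioned above: keep only SSH exposed and forward RDP through it (host names are placeholders):

    # forward local port 3389 to the server's RDP port through an ssh bastion
    ssh -L 3389:internal-server:3389 user@gateway.example.com
    # then point the RDP client (or the iPhone app) at localhost:3389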
It is a security flaw, but not such a huge one. The traffic is encrypted, and reading a user name or password from it is not as immediate as with text-based protocols such as FTP. It is just a little bit less secure than SSH.
It obviously has the same flaws as any other remote access (possible brute-force or DoS attacks). You should also use a non-default account name, to avoid simplifying the task for attackers.
Your idea of opening access only after visiting some page is not bad either. It looks like a variant of the classic port-knocking mechanism (but be careful to avoid opening a bigger hole).

Client Server socket security

Assume we have a server S and a few clients (C); whenever a client updates the server, an internal database on the server is updated and replicated to the other clients. This is all done using sockets in an intranet environment.
I believe that an attacker could fairly easily sniff this plain-text traffic. My colleagues believe I am overly paranoid because we are behind a firewall.
Am I being overly paranoid? Do you know of any exploit (link please) that took advantage of a situation such as this, and what can be done differently? The clients were rewritten in Java, but the server still uses C++.
Is there anything in code that can protect against such an attack?
Inside your company's firewall, you're fairly safe from direct hack attacks from the outside. However, statistics that I won't trouble to dig out claim that most of the damage to a business's data is done from the INside. Most of that is simple accident, but there are various reasons for employees to be disgruntled without being found out; and if your data is sensitive, they could hurt your company this way.
There are also boatloads of laws about how to handle personal ID data. If the data you're processing is of that sort, treating it carelessly within your company could also open your company up to litigation.
The solution is to use SSL connections. You want to use a pre-packaged library for this. You provide private/public keys for both ends and keep the private keys well hidden with the usual file access privileges, and the problem of sniffing is mostly taken care of.
SSL provides both encryption and authentication. Java has it built in and OpenSSL is a commonly used library for C/C++.
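Before touching the Java or C++ code, you can prototype the encrypted channel from the command line with OpenSSL's test server and client (the certificate files are assumed to already exist):

    # terminal 1: a TLS-wrapped test server on port 9443
    openssl s_server -accept 9443 -cert server.crt -key server.key
    # terminal 2: connect as a client and type; traffic is now encrypted
    openssl s_client -connect localhost:9443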
Your colleagues are naïve.
One high-profile attack occurred at Heartland Payment Systems, a credit card processor that one would expect to be extremely careful about security. Assuming that internal communications behind their firewall were safe, they failed to use something like SSL to ensure their privacy. Hackers were able to eavesdrop on that traffic, and extract sensitive data from the system.
Here is another story with a little more description of the attack itself:
Described by Baldwin as "quite a sophisticated attack," he says it has been challenging to discover exactly how it happened. The forensic teams found that hackers "were grabbing numbers with sniffer malware as it went over our processing platform," Baldwin says. "Unfortunately, we are confident that card holder names and numbers were exposed." Data, including card transactions sent over Heartland's internal processing platform, is sent unencrypted, he explains. "As the transaction is being processed, it has to be in unencrypted form to get the authorization request out."
You can do many things to prevent a man-in-the-middle attack. For most internal data within a firewall/IDS-protected network, you really don't need to secure it. However, if you do wish to protect the data, you can do the following:
Use PGP encryption to sign and encrypt messages
Encrypt sensitive messages
Use keyed hash functions (HMACs) to verify that a message has not been modified in transit (see the sketch after this list)
It is good standard operating procedure to secure all data; however, securing data has very large costs. With secure channels you need a certificate authority, and you must allow for extra processing on all machines involved in the communication.
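For the integrity item in the list above, a minimal command-line sketch; the payload and key are placeholders, and both ends must share the key:

    # sender and receiver both compute this and compare the results
    echo -n "replicated-record-payload" | openssl dgst -sha256 -hmac "shared-secret-key"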
You're being paranoid. You're talking about data moving across an (ideally) secured internal network.
Can information be sniffed? Yes, it can. But it would be sniffed by someone who has already breached network security and gotten inside the firewall, and that can be done in innumerable ways.
Basically, for the VAST majority of businesses, there is no reason to encrypt internal traffic. There are almost always far, far easier ways of getting information from inside the company without even approaching "sniffing" the network. Most such "attacks" are by people who are simply authorized to see the data in the first place and already have a credential.
The solution is not to encrypt all of your traffic; the solution is to monitor and limit access, so that if any data is compromised, it is easier to detect who did it and what they had access to.
Also consider that the sysadmins and DBAs pretty much have carte blanche over the entire system anyway, since inevitably someone needs that kind of access. It's simply not practical to encrypt everything to keep it away from prying eyes.
Finally, you're making a big fuss about something that is just as likely written on a sticky note tacked to the bottom of someone's monitor anyway.
Do you have passwords on your databases? I certainly hope the answer is yes, and nobody would claim that password-protecting a database is overly paranoid. So why wouldn't you have at least the same level of security* on the same data flowing over your network? Just like an unprotected DB, an unprotected data flow over the network is vulnerable not only to sniffing but is also mutable by a malicious attacker. That is how I would frame the discussion.
*By the same level of security I mean using SSL, as some have suggested, or simply encrypting the data using one of the many available encryption libraries if you must use raw sockets.
Just about every "important" application I've used relies on SSL or some other encryption methodology.
Just because you're on the intranet doesn't mean you may not have malicious code running on some server or client that may be trying to sniff traffic.
The minimum an attacker requires is access to a device inside your network from which the traffic between a client and a server (or the entire traffic) can be sniffed.
Anyway, if the attacker is already inside, sniffing should be just one of the problems you'll have to take into consideration.
There are not many companies I know of which use secure sockets between clients and servers inside an intranet, mostly because of the higher costs and lower performance.

Which subversion server type is best?

Subversion has multiple server types:
svnserve daemon
svnserve via xinetd
svn over ssh
http-based server
direct access via file:/// URLs
Which one of these is best for a small Linux system (one to two users)?
http:
+ very flexible and easy to administer
+ no network problems (port 80)
+ third-party authentication (e.g. LDAP, Active Directory)
+ native Unix and Windows support
+ WebDAV support for editing without an svn client
- slow, as each action triggers a new HTTP action; approx. 5-8 times slower than svn://
- especially slow on history
- no encryption of transferred data
https:
+ same as http
+ encryption of transferred data
svn:
+ fastest transfer
- no password encryption in the standard setup: passwords are readable by the admin
- firewall problems, as no standard port is used
- a daemon service has to be started
- no encryption of transferred data
svn+ssh:
+ nearly as fast as svn://
+ no daemon service needed
+ encryption of passwords
+ encryption of the transfer
- no Windows OS comes with built-in SSH components, so third-party tools are essential
One of those options is definitely the worst: direct file access. Don't use it; use one of the server-based methods instead.
However, whether to use HTTP or svnserve is entirely a matter of preference. In fact, you can use both simultaneously; the write-lock on the repo ensures that you won't corrupt anything if you use one and then the other.
My preference is simply to use Apache though: http is more firewall- and internet-friendly, it is easier to hook into LDAP or other authentication mechanisms, and you get features like WebDAV too. The performance may be less than svnserve, but it's not particularly noticeable (the transfer of data across the network makes up the bulk of any performance cost).
If you need security for file transfers, then svn+ssh or Apache over https is your choice.
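For a one-or-two-user Linux box, either server-based option is only a couple of commands. A minimal sketch, with placeholder paths and host names:

    # create a repository
    svnadmin create /srv/svn/myrepo
    # option 1: the svn:// daemon (default port 3690)
    svnserve -d -r /srv/svn
    svn checkout svn://host/myrepo
    # option 2: tunnel over ssh -- no daemon needed, uses Unix accounts
    svn checkout svn+ssh://user@host/srv/svn/myrepo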
Check out FLOSS Weekly Episode 28. Greg Stein is one of the inventors of the WebDAV protocol for SVN and discusses the trade-offs. My takeaway is that svn:// is faster, but the http/WebDAV implementation is just fine for almost all purposes.
I've always used XInetD and HTTP.
HTTP also had WebDAV going on, so I could browse the source online if I wanted (or you can require a VPN if you wanted encryption and a dark-net type thing).
It really depends on what restrictions (if any) you're under.
Is it only going to be on a LAN? Will you need access outside of your LAN?
If so, will you have a VPN?
Do you have a static IP address and are you allowed to forward ports?
If you aren't under any restrictions, I would suggest going with xinetd (if you have xinetd installed, the daemon if you don't), and then, if you need remote access, the http-based server (you can also encrypt using HTTPS if you don't want a plain-text username/password sent across).
Most other options are more effort with less benefit.
It's an SVN Repo -- you can always pack your bags and change things if you don't like it.
For ease of administration and security, we use svn+ssh for anything that requires commit access. We have set up HTTP based access for anonymous (read only) access to some open-source code, and it is much faster; the problem with svn+ssh is that it has to start up an ssh connection and a whole new svnserve for each user for every operation, which can get to be pretty slow after a while.
So, I'd recommend:
http for anonymous connections
svn+ssh if you need something secure and relatively quick and easy (assuming your users already have ssh set up and your users have access to the server)
https if you need something faster, secure, and you don't mind the extra overhead of administering it (or if you don't already have ssh set up or don't want to deal with Unix permissions)
I like SlikSVN: it runs as a service in Windows, takes two minutes to set up, and then you can forget about it.
It also comes with the client tools, but download TortoiseSVN as well.
If you are going to be using the server only on the local machine and understand unix permissions, using file:// urls will be fast, simple and secure. Likewise, if you understand unix permissions and ssh and need to access it remotely, ssh will work great. While I see somebody else mentions it as "worst", I'm pretty sure that's simply due to the need to understand unix permissions.
If you do not like or understand unix permissions, you need to go with svnserve or http. I would probably choose to run it in xinetd, personally.
Finally, if you have firewall or proxy issues, you may need to consider using http. It's much more complicated, and I don't think you're going to see the benefits, so I'd put it last on your list.
I would recommend the http option, since I'm currently using svn+ssh and it appears to be the red-headed stepchild of the available protocols: 3rd-party tool support is consistently worse for svn+ssh than it is for http.
I've been responsible for administering both svnserve and Apache+SVN for my development teams, and I prefer the http-based solution for its flexibility. I know next to nothing about system administration, I'm a software guy after all, and I liked being able to hand authentication and authorization over to Apache.
Both times the teams were about 10-15 people and both methods worked equally well. If you're planning for any expansion in the future, you might consider the http-based solution. It's just as easy to configure as svnserve, and if you're not going to expose the server to the Internet, then you don't have to worry too much about securing and administering Apache either.
As a user of SVN, I prefer the http-based integration with Apache. I like being able to browse the repository with my web browser.
I am curious why NOT FSFS (i.e. direct file:// access)? Important context: I am managing Windows systems.
I have done many projects with SVN, and almost all of them ran from FSFS. The biggest repository was around 70 GB (extreme), and the largest number of repositories was around 700.
We never had any issues, even though we hosted them on Windows, NetApp, and many other storage systems. Most of the time when I asked why FSFS was NOT used, the only problem was that people simply didn't trust it.
Advantages:
No backend required (or dedicated server)
Fast and reliable
Hook scripts are supported
NTFS permissions are used
Easy to understand, easy to support, easy to manage
Disadvantages:
Not-so-easy access from outside your network (requires a VPN)
Permissions only at repository level (Read, Read/Write)
Hook scripts run under the current user's credentials (which is sometimes an advantage, sometimes a disadvantage)
Martin
