I've found other answers to my question on StackOverflow, but they are old, and things have changed a lot when it comes to security.
I would like a year 2017 re-evaluation of the answer to this question:
Inside my company, should my database require TLS connections from our internal client applications? My database has customer data that would constitute a Yahoo-sized public relations disaster if it leaked. But my client applications are firewall'd away from any public internet access.
TLS is used to prevent someone from sniffing traffic. But from my understanding (correct me if I'm wrong), the only way to sniff traffic is to be a root user on the source or destination server (assuming we don't have shared Ethernet hubs and such).
There may be another way to sniff traffic if someone has direct physical access to the network switches in our datacenter.
But trying to protect ourselves against root users and datacenter employees seems overkill to me, especially given that TLS does come at a considerable cost in terms of performance, complexity, and maintainability.
Or is this just one of those situations where we need to be absolute in our security choices rather than try to reason cost-benefit, because cost-benefit is impossible to calculate accurately when it comes to security breaches anyway?
Thanks! :)
Using TLS for an internal network helps simplify the threat model. More would have to go wrong in order for the transmitted data to be leaked to an adversary.
An adversary who has compromised any system on the same network as the database can use ARP spoofing to reroute traffic. From a threat modeling perspective, this increases the attack surface, and every system on the same network needs to have the same security guarantee. Many routers have a configuration option that protects the ARP table from such attacks - and this option can be disabled by a firmware update. Using TLS is defense-in-depth because this measure protects core assets, even when other systems fail.
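To make the defense concrete: requiring TLS means the client refuses both unencrypted connections and servers that cannot prove their identity, which is exactly what defeats an ARP-spoofing man-in-the-middle. A minimal sketch of the client side, assuming a PostgreSQL-style database (hostnames, paths, and credentials are hypothetical):

```python
import psycopg2  # assumes the psycopg2 PostgreSQL driver

# sslmode="verify-full" rejects unencrypted connections AND rejects any
# server whose certificate doesn't match the hostname, so a connection
# rerouted to an attacker's machine fails the handshake.
conn = psycopg2.connect(
    host="db.internal.example.com",    # hypothetical internal host
    dbname="customers",
    user="app",
    password="...",
    sslmode="verify-full",
    sslrootcert="/etc/ssl/certs/internal-ca.pem",  # your internal CA bundle
)
```

Note that verification, not just encryption, is the point: an adversary who reroutes traffic can happily terminate TLS themselves unless the client checks the certificate.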
I have an Ubuntu droplet running a webserver. It's serving dynamic pages.
The backend is written in python with fastapi + uvicorn.
Generally speaking, security can often affect performance. As this paper points out
Network security and network performance are inversely related.
It goes on to say that a firewall does indeed have a negative impact on performance.
As seen from the result of the simulation, network performance is adversely affected when firewall is implemented.
I am concerned about speed. I want it to be lightning fast. That's why I've chosen ASGI.
Does it make sense to set up a firewall in this scenario? There is no input of user data (forms and the like) anywhere on the website.
No one can tell based on the information provided whether you should use a firewall or not. There are too many moving parts.
Have you done any risk assessment for your application? What kind of data is your application processing, and how sensitive is it? What are your requirements regarding Confidentiality, Integrity and Availability? Have you done any hardening already? Has someone evaluated the security of your application? Do you require authentication and access control? Which firewall would you like to have (normal, next gen, application firewall) and what for?
When you look at the definition of application security (an application should offer Confidentiality, Integrity and Availability of the data at rest, in transit and in processing), you notice that Confidentiality and Integrity go against the Availability principle. The very core of application security contains a contradiction. By improving one aspect of security (putting up a firewall) you might worsen another (legitimate users may not be able to access the site, everything gets slower).
Having said that: please either do a threat and risk analysis of your application, or have a look at the OWASP Application Security Verification Standard and go over the Level 1 requirements.
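Whatever you decide about the firewall, one cheap hardening step for this kind of deployment is to keep uvicorn off the public interface entirely and let a TLS-terminating reverse proxy face the internet. A minimal sketch, with the app and port as placeholders:

```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def index():
    return {"status": "ok"}

if __name__ == "__main__":
    # Binding to 127.0.0.1 means only processes on the droplet itself
    # (e.g. an nginx reverse proxy) can reach the app; nothing else is
    # exposed directly, regardless of firewall rules.
    uvicorn.run(app, host="127.0.0.1", port=8000)
```

This costs essentially nothing in performance and narrows the attack surface independently of the firewall question.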
I can appreciate that seeing "basic auth" and "safe enough" in the same sentence is a lot like reading "Is parachuting without a parachute still safe?", so I'll do my best to clarify what I am getting at.
From what I've seen online, people typically describe basic HTTP auth as being unsecured due to the credentials being passed in plain text from the client to the server; this leaves you open to having your credentials sniffed by a nefarious person or man-in-the-middle in a network configuration where your traffic may be passing through an untrusted point of access (e.g. an open AP at a coffee shop).
To keep the conversation between you and the server secure, the solution is typically to use an SSL-based connection: your credentials might be sent in plain text, but the communication channel between you and the server is itself secured.
So, onto my question...
In the situation of replicating one CouchDB instance from an EC2 instance in one region (e.g. us-west) to another CouchDB instance in another region (e.g. singapore) the network traffic will be traveling across a path of what I would consider "trusted" backbone servers.
Given that (assuming I am not replicating highly sensitive data) would anyone/everyone consider basic HTTP auth for CouchDB replication sufficiently secure?
If not, please clarify what scenarios I am missing here that would make this setup unacceptable. I do understand for sensitive data this is not appropriate, I just want to better understand the ins and outs for non-sensitive data replicated over a relatively-trusted network.
Bob is right that it is better to err on the side of caution, but I disagree. Bob could be right in this case (see details below), but the problem with his general approach is that it ignores the cost of paranoia. It leaves "peace dividend" money on the table. I prefer Bruce Schneier's assessment that it is a trade-off.
Short answer
Start replicating now! Do not worry about HTTPS.
The greatest risk is not wire sniffing, but your own human error, followed by software bugs, either of which could destroy or corrupt your data. Make a replica! If you will be replicating regularly, plan to move to HTTPS or something equivalent (SSH tunnel, stunnel, VPN).
Rationale
Is HTTPS easy with CouchDB 1.1? It is as easy as HTTPS can possibly be, or in other words, no, it is not easy.
You have to make an SSL key pair, purchase a certificate or run your own certificate authority—you're not foolish enough to self-sign, of course! The user's hashed password is plainly visible from your remote couch! To protect against cracking, will you implement bi-directional SSL authentication? Does CouchDB support that? Maybe you need a VPN instead? What about the security of your key files? Don't check them into Subversion! And don't bundle them into your EC2 AMI! That defeats the purpose. You have to keep them separate and safe. When you deploy or restore from backup, copy them manually. Also, password-protect them so if somebody gets the files, they can't steal (or worse, modify!) your data. When you start CouchDB or replicate, you must manually input the password before replication will work.
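For contrast, once all of that is in place, the replication call itself is the easy part. A hedged sketch of kicking off replication over HTTPS via CouchDB's _replicate endpoint (hosts, port, and credentials are made up):

```python
import requests

SOURCE = "https://admin:secret@couch-us-west.example.com:6984/mydb"
TARGET = "https://admin:secret@couch-singapore.example.com:6984/mydb"

resp = requests.post(
    "https://couch-us-west.example.com:6984/_replicate",
    json={"source": SOURCE, "target": TARGET, "continuous": True},
    # Point at your own CA bundle rather than disabling verification;
    # unverified HTTPS buys you very little against a man-in-the-middle.
    verify="/etc/ssl/certs/my-internal-ca.pem",
)
resp.raise_for_status()
print(resp.json())
```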
In a nutshell, every security decision has a cost.
A similar question is, "Should I lock my house at night?" It depends. Your profile says you are in Tucson, so you know that some neighborhoods are safe, while others are not. Yes, it is always safer to lock all of your doors all of the time. But what is the cost to your time and mental health? The analogy breaks down a bit because the time invested in worst-case security preparedness is much greater than twisting a bolt lock.
Amazon EC2 is a moderately safe neighborhood. The major risks are opportunistic, broad-spectrum scans for common errors. Basically, organized crime is scanning for common SSH accounts and web apps like Wordpress, so they can steal a credit card or other database.
You are a small fish in a gigantic ocean. Nobody cares about you specifically. Unless you are specifically targeted by a government or organized crime, or somebody with resources and motivation (hey, it's CouchDB—that happens!), it's inefficient to worry about the boogeyman. Your adversaries are casting broad nets to get the biggest catch. Nobody is trying to spear-fish you.
I look at it like high-school integral calculus: measuring the area under the curve. Time goes to the right (x-axis). Risky behavior goes up (y-axis). When you do something risky, you save time and effort, but the graph spikes upward. When you do something the safe way, it costs time and effort, but the graph moves down. Your goal is to minimize the long-term area under the curve, but each decision is case-by-case. Every day, most Americans ride in automobiles: the single most risky behavior in American life. We intuitively understand the risk-benefit trade-off. Activity on the Internet is the same.
As you imply, basic authentication without transport layer security is 100% insecure. Anyone on EC2 that can sniff your packets can see your password. Assuming that no one can is a mistake.
In CouchDB 1.1, you can enable native SSL. In earlier versions, use stunnel. Adding SSL/TLS protection is so simple that there's really no excuse not to.
I just found this statement from Amazon which may help anyone trying to understand the risk of packet sniffing on EC2.
Packet sniffing by other tenants: It is not possible for a virtual instance running in promiscuous mode to receive or "sniff" traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. This includes two virtual instances that are owned by the same customer, even if they are located on the same physical host. Attacks such as ARP cache poisoning do not work within EC2. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers should encrypt sensitive traffic.
http://aws.amazon.com/articles/1697
Anytime username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc.). But that leaves the endpoints potentially vulnerable.
Realistically, which is at greater risk of intrusion?
Transport layer: Compromised via wireless packet sniffing, malicious wiretapping, etc.
Transport devices: Risks include ISPs and Internet backbone operators sniffing data.
End-user device: Vulnerable to spyware, key loggers, shoulder surfing, and so forth.
Remote server: Many uncontrollable vulnerabilities including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more.
My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the endpoints. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Have they been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks.
My question is this: should I be worried if a service I'm using or developing doesn't use SSL? Sure, it's low-hanging fruit, but there is a lot more fruit up above.
By far the biggest target in network security is the Remote server. In the case of a web browser and an HTTP server, the most common threats are in the form of XSS and XSRF. Remote servers are juicy targets for other protocols as well because they often have an open port which is globally accessible.
XSS can be used to bypass the Same-Origin Policy. A hacker can use it to fire off XMLHttpRequests to steal data from a remote server. XSS is widespread and easy for hackers to find.
Cross-Site Request Forgery (XSRF) can be used to change the password for an account on a remote server. It can also be used to hijack mail from your Gmail account. Like XSS, this vulnerability type is also widespread and easy to find.
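To make the XSRF defense concrete: the standard countermeasure is a per-session token that an attacker's cross-site page cannot read or guess. A minimal, framework-agnostic sketch (all names are illustrative):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = secrets.token_bytes(32)  # per-deployment secret, kept server-side

def csrf_token(session_id: str) -> str:
    # Bind the token to the session; a forged cross-site request can make
    # the browser send cookies, but it cannot read or forge this value.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def is_valid(session_id: str, submitted_token: str) -> bool:
    # Constant-time comparison avoids leaking the token byte by byte.
    return hmac.compare_digest(csrf_token(session_id), submitted_token)
```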
The next biggest risk is the "Transport layer", but I'm not talking about TCP. Instead, you should worry more about the other network layers, such as OSI Layer 1, the Physical Layer (e.g. 802.11b). Being able to sniff the wireless traffic at your local cafe can be incredibly fruitful if applications aren't properly using SSL. A good example is the Wall of Sheep. You should also worry about OSI Layer 2, the Data Link Layer: ARP spoofing can be used to sniff a switched wired network as if it were a wireless broadcast. OSI Layer 4 can be compromised with SSLStrip, which can still be used to this day to undermine TLS/SSL used in HTTPS.
Next up is the End-user device. Users are dirty: if you ever come across one of these "Users", tell them to take a shower! No seriously, users are dirty because they have lots of Spyware/Viruses/Bad Habits.
Last up is Transport devices. Don't get me wrong, this is an incredibly juicy target for any hacker. The problem is that serious vulnerabilities have been discovered in Cisco IOS and nothing has really happened; there hasn't been a major worm to affect any router. At the end of the day, it's unlikely that this part of your network will be directly compromised. Although, if a transport device is responsible for your security, as in the case of a hardware firewall, then misconfigurations can be devastating.
Let's not forget things like:
leaving logged-in sessions unattended
writing passwords on stickies
The real risk is stupid users.
They leave their terminals open when they go to lunch.
They are gullible in front of any service personnel doing "service".
They store passwords and passphrases on notes next to the computer.
And in great numbers, someone someday will install the next Killer App (TM) which takes down the network.
Through users, any of the risks you mention can be accomplished via social engineering.
Just because you think the other parts of your communications might be unsafe doesn't mean you shouldn't protect the bits that you can protect as best you can.
The things you can do are:
Protect your own end.
Give your message a good shot at surviving the internet, by wrapping it up warm.
Try to make sure that the other end is not an impostor.
The transport is where more people can listen in than at any other stage. (There can only be a maximum of 2 or 3 people standing behind you while you type in your password, but dozens could be plugged into the same router doing a man-in-the-middle attack, and hundreds could be sniffing your wifi packets.)
If you don't encrypt your message then anyone along the way can get a copy.
If you're communicating with a malicious/negligent end-point, then you're in trouble no matter what security you use; you have to avoid that scenario (authenticate them to you as well as you to them (server certs)).
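"Authenticate them to you" is largely built into modern TLS stacks. A sketch of a client that verifies the server's certificate and hostname before trusting the channel (host and port are placeholders):

```python
import socket
import ssl

# create_default_context() loads the system CA store and turns on both
# certificate and hostname verification, so impostor endpoints are
# rejected during the handshake.
context = ssl.create_default_context()

with socket.create_connection(("service.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="service.example.com") as tls:
        print(tls.version())                 # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])  # verified server identity
```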
None of these problems have been solved, or anywhere close. But going out there naked is hardly the solution.
I'm looking to implement a basic product activation scheme such that when the program is launched, it will contact our server via HTTP to complete the activation. I'm wondering if it is a big problem (especially with bigger companies or educational organizations) that firewalls will block the outgoing HTTP request and prevent activation. Any idea how big an issue this may be?
In my experience when HTTP traffic is blocked by a hardware firewall then there is more often than not a proxy server which is used to browse the internet. Therefore it is good practice to allow the user to enter proxy and authentication details.
The number of times I have seen applications fail due to not using the corporate proxy server, and therefore being blocked by the firewall, astonishes me.
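In practice that means the activation client should honor proxy settings instead of opening a direct connection. A rough sketch with Python's requests library (endpoint, key, and proxy details are placeholders):

```python
import requests

# Proxy details entered by the user; requests also honors the
# HTTP_PROXY/HTTPS_PROXY environment variables automatically.
proxies = {
    "http": "http://user:pass@proxy.corp.example:8080",
    "https": "http://user:pass@proxy.corp.example:8080",
}

resp = requests.post(
    "https://activation.example.com/activate",  # hypothetical endpoint
    json={"license_key": "XXXX-XXXX-XXXX"},
    proxies=proxies,
    timeout=10,  # fail fast so a blocked request doesn't hang the app
)
resp.raise_for_status()
```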
There are personal software solutions that purposely block outgoing connections. Check out Little Snitch. This program can set up rules that explicitly block your computer from making connections to certain domains, IPs and/or ports. A common use for this program is to stop one's computer from "phoning home" to an activation server.
I can't tell you how prevalent this will be, sorry. But I can give you one data point.
In this company, Internet access is granted on an as-needed basis. There is one product I have had to support which is wonderful for its purpose and reasonably priced, but I will never approve its purchase again: the licensing is too much of a hassle to be worth it.
I'd say that it may not be common, but if any one of your customers is a business, it's likely that you will encounter someone who tries to run your software behind a restricted internet connection or a proxy. Your software will need to handle this situation; otherwise you will have a pissed-off customer who cannot use your product, and you will lose the sale for sure.
If you are looking for a third-party tool, I've used InstallKey (www.lomacons.com) for product activations. It has functionality that allows for validating with and without an internet connection.
Assume we have a server S and a few clients C. Whenever a client updates the server, an internal database on the server is updated and replicated to the other clients. This is all done using sockets in an intranet environment.
I believe that an attacker can fairly easily sniff this plain text traffic. My colleagues believe I am overly paranoid because we are behind a firewall.
Am I being overly paranoid? Do you know of any exploit (link please) that took advantage of a situation such as this, and what can be done differently? The clients were rewritten in Java, but the server is still using C++.
Is there anything in code that can protect against such an attack?
Inside your company's firewall, you're fairly safe from direct hack attacks from the outside. However, statistics that I won't trouble to dig out claim that most of the damage to a business' data is done from the INside. Most of that is simple accident, but there are various reasons for employees to be disgruntled and not found out; and if your data is sensitive they could hurt your company this way.
There are also boatloads of laws about how to handle personal ID data. If the data you're processing is of that sort, treating it carelessly within your company could also open your company up to litigation.
The solution is to use SSL connections. You want to use a pre-packaged library for this. You provide private/public keys for both ends and keep the private keys well hidden with the usual file access privileges, and the problem of sniffing is mostly taken care of.
SSL provides both encryption and authentication. Java has it built in and OpenSSL is a commonly used library for C/C++.
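For a sense of scale, the server side of a TLS socket is only a few lines. A sketch in Python for brevity (Java's SSLServerSocket and OpenSSL's API follow the same shape; the certificate paths are hypothetical):

```python
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.pem", keyfile="server.key")

with socket.create_server(("0.0.0.0", 9000)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # TLS handshake happens here
        data = conn.recv(1024)            # application data, encrypted on the wire
        conn.sendall(b"ack")
```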
Your colleagues are naïve.
One high-profile attack occurred at Heartland Payment Systems, a credit card processor that one would expect to be extremely careful about security. Assuming that internal communications behind their firewall were safe, they failed to use something like SSL to ensure their privacy. Hackers were able to eavesdrop on that traffic, and extract sensitive data from the system.
Here is another story with a little more description of the attack itself:
Described by Baldwin as "quite a sophisticated attack," he says it has been challenging to discover exactly how it happened. The forensic teams found that hackers "were grabbing numbers with sniffer malware as it went over our processing platform," Baldwin says. "Unfortunately, we are confident that card holder names and numbers were exposed." Data, including card transactions sent over Heartland's internal processing platform, is sent unencrypted, he explains, "As the transaction is being processed, it has to be in unencrypted form to get the authorization request out."
You can do many things to prevent a man-in-the-middle attack. For most internal data within a firewall/IDS-protected network, you really don't need to secure it. However, if you do wish to protect the data, you can do the following:
Use PGP encryption to sign and encrypt messages
Encrypt sensitive messages
Use hash functions to verify that the message sent has not been modified.
It is good standard operating procedure to secure all data; however, securing data has very large costs. With secure channels you need to have a certificate authority and allow for extra processing on all machines that are involved in communication.
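One note on the third item above: a bare hash is not enough on its own, because an attacker who can modify the message can simply recompute the hash. A keyed MAC closes that gap. A small sketch (the shared key would be distributed out of band):

```python
import hashlib
import hmac

SHARED_KEY = b"32-byte secret distributed out of band"  # hypothetical

def tag(message: bytes) -> str:
    # An attacker without the key cannot produce a valid tag for a
    # modified message, unlike a plain unkeyed hash.
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

# Sender transmits the message together with its tag.
msg = b"UPDATE account 42 SET balance = 100"
wire = (msg, tag(msg))

# Receiver recomputes and compares in constant time; any in-transit
# modification of the message makes the comparison fail.
received_msg, received_tag = wire
assert hmac.compare_digest(tag(received_msg), received_tag)
```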
You're being paranoid. You're talking about data moving across an (ideally) secured internal network.
Can information be sniffed? Yeah, it can. But it's sniffed by someone who has already breached network security and gotten inside the firewall. That can be done in innumerable ways.
Basically, for the VAST majority of businesses, there is no reason to encrypt internal traffic. There are almost always far, far easier ways of getting information from inside the company, without even approaching "sniffing" the network. Most such "attacks" are from people who are simply authorized to see the data in the first place and already have a credential.
The solution is not to encrypt all of your traffic, the solution is to monitor and limit access, so that if any data is compromised, it is easier to detect who did it, and what they had access to.
Consider also that the sysadmins and DBAs pretty much have carte blanche to the entire system anyway, as inevitably someone always needs to have that kind of access. It's simply not practical to encrypt everything to keep it away from prying eyes.
Finally, you're making a big fuss about something that is just as likely written on a sticky note tacked to the bottom of someone's monitor anyway.
Do you have passwords on your databases? I certainly hope the answer to that is yes. Nobody would believe that password-protecting a database is overly paranoid. So why wouldn't you have at least the same level of security* on the same data flowing over your network? Just like an unprotected DB, unprotected data flowing over the network is vulnerable not only to sniffing but also to modification by a malicious attacker. That is how I would frame the discussion.
*By the same level of security I mean: use SSL as some have suggested, or simply encrypt the data using one of the many available encryption libraries if you must use raw sockets.
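If you do go the raw-socket route, the "encrypt the data" option can be quite small. A sketch using the cryptography package's Fernet recipe, which provides authenticated encryption so tampering is detected as well (key handling is simplified for illustration):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # generate once; share between endpoints out of band
f = Fernet(key)

token = f.encrypt(b"row 42: alice, 555-0199")  # sender side
plain = f.decrypt(token)  # receiver side; raises InvalidToken if tampered with
```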
Just about every "important" application I've used relies on SSL or some other encryption methodology.
Just because you're on the intranet doesn't mean there isn't malicious code running on some server or client that may be trying to sniff traffic.
The minimum an attacker needs is access to a device inside your network that offers the possibility of sniffing the entire traffic, or at least the traffic between a client and a server.
Anyway, if the attacker is already inside, sniffing should be just one of the problems you'll have to take into consideration.
There are not many companies I know of which use secure sockets between clients and servers inside an intranet, mostly because of the higher costs and lower performance.