Proof that BitTorrent is secure for deploying server code and data files to many servers - bittorrent

Is using a BitTorrent module to distribute server code and data files to internal server machines secure? Of course, the server code and data files are confidential and must never leave the private server network.
This is my rough proof: distributing files to many internal servers with a BitTorrent module (e.g. libtorrent) is secure as long as neither the Magnet link nor the .torrent file is disclosed to an attacker. Of course, the Magnet link or .torrent file will always be kept secure; thus, using BitTorrent is secure.
There is news that Facebook uses BitTorrent to deploy server code and data files to many internal servers.
Is my proof right? I have to present it to my colleagues.

There is no proof that data won't leave your network, because data leaving your network is entirely orthogonal to how you distribute it: any computer can get hacked and can send data to the outside world, no matter how the data was received.
However, if you're asking if you can
restrict your BitTorrent data to your network,
safeguard yourself against eavesdroppers within your network, and
protect your data against tampering mid-transfer,
you should be able to do so.
Firstly, you can contain data transfers by setting up your firewalls correctly:
Prevent access to the tracker from outside the network
Prevent access to the BitTorrent ports of all of your clients from outside the network
Disable DHT for your Torrents
Disable UPnP in your clients
If you do this correctly, even leaking the Magnet link or .torrent file will only reveal the filenames, checksums and your internal tracker and client IPs, but not any of the contents.
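For illustration, here is a minimal sketch of such a locked-down client using libtorrent's Python bindings; the setting names assume a recent (1.2+/2.x) python-libtorrent, and the interface address, file names and paths are hypothetical, not taken from the question.

```python
# Minimal sketch: a libtorrent session with DHT, local service discovery,
# UPnP and NAT-PMP disabled, bound only to an internal interface.
import libtorrent as lt

settings = {
    "enable_dht": False,     # no DHT: peers are learned only from the tracker
    "enable_lsd": False,     # no local service discovery broadcasts
    "enable_upnp": False,    # don't ask the gateway to open ports
    "enable_natpmp": False,  # likewise for NAT-PMP
    "listen_interfaces": "10.0.0.5:6881",  # bind only to the internal interface
}

ses = lt.session(settings)

info = lt.torrent_info("deploy.torrent")                    # hypothetical .torrent file
handle = ses.add_torrent({"ti": info, "save_path": "/srv/deploy"})

# The firewall rules from the list above still have to block port 6881 (and the
# tracker port) from outside the network; this only keeps the client itself
# from advertising anywhere beyond your tracker.
```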
To protect against a third party listening to your internal network data, you can enable Protocol Encryption. Note, however, that PE is only meant to obfuscate BitTorrent traffic; it is not 100% attack-resilient.
Lastly, BitTorrent comes with protection against a third party trying to inject their own data and distribute malicious packages in your network this way: it distributes checksums of every piece in the .torrent file. However, it is crucial that you prevent tampering with the .torrent file itself, which can be solved easily by serving it (and the tracker) over HTTPS.
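As a sketch of that last point, the .torrent file can be fetched over HTTPS with certificate verification and additionally pinned to a known digest before it is trusted; the URL and the expected hash below are placeholders.

```python
# Sketch: fetch the .torrent over HTTPS, verify the server certificate, and pin
# the file's SHA-256 (distributed out of band, e.g. via config management)
# before relying on the piece hashes inside it.
import hashlib
import requests

TORRENT_URL = "https://tracker.internal.example/deploy.torrent"  # hypothetical
EXPECTED_SHA256 = "0123...abcd"  # placeholder digest distributed out of band

resp = requests.get(TORRENT_URL, verify=True, timeout=10)
resp.raise_for_status()

digest = hashlib.sha256(resp.content).hexdigest()
if digest != EXPECTED_SHA256:
    raise RuntimeError("tampered or unexpected .torrent file")

with open("deploy.torrent", "wb") as f:
    f.write(resp.content)
# From here on, BitTorrent's own piece checksums protect the payload itself.
```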
Needless to say, using BitTorrent will definitely increase your attack surface, but it won't be inherently unsafe.
Twitter was also using BitTorrent to distribute files to their servers. They even open sourced their implementation, called murder.

As a dissent to Nils Werner's answer:
The obfuscation scheme (known as Protocol Encryption) built into many BitTorrent clients is explicitly not a scheme to ensure data confidentiality. It relies on a weak Diffie-Hellman exchange and the RC4 stream cipher, and is largely considered broken.
If you cannot trust the network link between the servers and your data needs to be kept confidential you should either encrypt it at rest before creating the torrents or use libtorrent's non-standard TLS transport feature.
If confidentiality against an internal attacker is not a concern (know your threat model!) and you only need integrity checking, then BitTorrent usually is sufficient, unless collision attacks are a concern: SHA-1 is now considered to retain only preimage resistance, not collision resistance.
If you're only concerned about accidental leaks to the internet, then neither of these points is particularly relevant. But you may want to use the private flag, which instructs the client to only use the tracker as a peer source (see the sketch below). The downside is that it also disables peer exchange and local service discovery, which may be useful even in internal networks.
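A rough sketch of creating such a private torrent with python-libtorrent follows; the directory, tracker URL and output name are assumptions for illustration only.

```python
# Sketch: build a torrent with the private flag set using python-libtorrent.
import libtorrent as lt

fs = lt.file_storage()
lt.add_files(fs, "/srv/deploy/release-1.2.3")       # directory to distribute

t = lt.create_torrent(fs)
t.add_tracker("https://tracker.internal.example/announce")
t.set_priv(True)   # private flag: peers come only from the tracker, no DHT/PEX/LSD

lt.set_piece_hashes(t, "/srv/deploy")               # parent directory of the payload

with open("release-1.2.3.torrent", "wb") as f:
    f.write(lt.bencode(t.generate()))
```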

Related

Transfer protocol for sending user uploaded files to a remote server?

I'm used to working with user-uploaded files to the same server, and transferring my own files to a remote server. But not transferring user-uploaded files to a remote server.
I'm looking for the best (industry) practice for selecting a transfer protocol in this regard.
My application is running Django on a Linux Server and the files live on a Windows Server.
Does it not matter which protocol I choose as long as it's secure (FTPS, SFTP, HTTPS)? Or is one better than the other in terms of performance/security specifically in regards to user-uploaded files?
Please do not link to questions that explain the differences of protocols, I am asking specifically in the context of user-uploaded files.
As long as you choose a standard protocol that provides (mutual) authentication, encryption and message authentication, there is not much difference security-wise. If all of this is provided by a layer of TLS in your chosen protocol (like in all of your examples), you can't make a big mistake on a design level (but implementation is key, many security bugs are bugs of implementation, and not design flaws). Such protocols might differ in the supported list of algorithms for different purposes though.
Performance-wise there can be quite significant differences; it depends on what you want to optimize for. If you choose HTTPS and don't reuse connections, you bear the overhead of the whole connection setup (TLS handshake, authentication and everything) for every transmitted file. (You can keep an HTTPS connection open across uploads via keep-alive, but you have to make sure your client actually does so; a sketch follows below.) Choosing FTPS/SFTP you will be able to keep a connection open and transmit as many files as you want, but you would probably need more complex error-handling logic (sometimes connections terminate without the underlying sockets noticing for a while, and so on). So in short, I think HTTPS would be more resilient, but secure FTP would be more performant for many small files.
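As a sketch of the connection-reuse point: a requests.Session in Python keeps the underlying HTTPS connection alive across uploads, so the TLS handshake isn't paid per file. The endpoint, token and file names below are hypothetical.

```python
# Sketch: reuse one HTTPS connection (keep-alive) for many small uploads.
import requests

UPLOAD_URL = "https://files.example.com/upload"       # placeholder endpoint
session = requests.Session()
session.headers["Authorization"] = "Bearer <token>"   # placeholder auth

for path in ["a.bin", "b.bin", "c.bin"]:
    with open(path, "rb") as f:
        resp = session.post(UPLOAD_URL, files={"file": (path, f)})
        resp.raise_for_status()   # add retry/error handling as needed

session.close()
```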
It's also an architecture question: by using HTTPS, you would be able to implement all of this in your application code, while something like FTP would mean depending on external components, which might be important from an operational point of view (think about how this will actually be deployed and whether there is already a devops function to manage proper operations).
Ultimately it's just a design decision you have to make, the above is just a few things that came to mind without knowing all the circumstances, and not at all a comprehensive list of things to consider.

When to use TLS for a database

I've found other answers to my question on StackOverflow, but they are old, and things have changed a lot when it comes to security.
I would like a year 2017 re-evaluation of the answer to this question:
Inside my company, should my database require TLS connections from our internal client applications? My database has customer data that would constitute a Yahoo-sized public relations disaster if it leaked. But my client applications are firewall'd away from any public internet access.
TLS is used to prevent someone from sniffing traffic. But from my understanding (correct me if I'm wrong) the only way sniffing traffic is possible is to be a root user on the source or destination server. (assuming we don't have shared Ethernet hubs and such)
There may be another way to sniff traffic if someone has direct physical access to the network switches in our datacenter.
But trying to protect ourselves against root users and datacenter employees seems overkill to me, especially given that TLS does come at a considerable cost in terms of performance, complexity, and maintainability.
Or is this just one of those situations where we need to be absolute in our security choices rather than try to reason cost-benefit, because cost-benefit is impossible to calculate accurately when it comes to security breaches anyway?
Thanks! :)
Using TLS for an internal network helps simplify the threat model. More would have to go wrong in order for the transmitted data to be leaked to an adversary.
An adversary who has compromised any system on the same network as the database can use ARP spoofing to reroute traffic. From a threat modeling perspective, this increases the attack surface, and every system on the same network needs to have the same security guarantee. Many routers have a configuration option that protects the ARP table from such attacks - and this option can be disabled by a firmware update. Using TLS is defense-in-depth because this measure protects core assets, even when other systems fail.
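As an illustration only, and assuming a PostgreSQL database accessed through psycopg2, requiring TLS with full certificate verification looks roughly like this (hostname, credentials and CA path are placeholders):

```python
# Sketch: require TLS and verify the server certificate + hostname, so a
# spoofed or ARP-poisoned host cannot impersonate the database.
import psycopg2

conn = psycopg2.connect(
    host="db.internal.example",
    dbname="customers",
    user="app",
    password="placeholder",           # load from a secrets store in practice
    sslmode="verify-full",            # TLS required + certificate and hostname check
    sslrootcert="/etc/ssl/internal-ca.pem",
)
```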

Best secure data transmission method?

I'm attempting to set up a small network of computers where 4 child nodes feed small snippets of data into 1 parent node. I need the data transmission between the nodes to be secure (secure as in, if the packets are intercepted it is not easy to determine their contents). Does anyone have suggestions? I looked into HTTPS POST and encrypting SOAP messages, but before I got my hands too dirty I wanted to get opinions from the crowd.
HTTP over TLS is fine. And, since there is already quite a bit of tooling around it, it should be quite an easy and productive approach.
Besides TLS (SSL), which provides transport-level encryption, you may want to authenticate the user using WS-Security with X.509 certificates. Another cheap security measure is to limit traffic between hosts by accepting messages only from known addresses/internal network hosts.
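A minimal sketch of such an HTTPS POST with mutual TLS, using Python's requests library; the URL, certificate paths and payload are assumptions for illustration:

```python
# Sketch: child node POSTs to the parent over HTTPS, presenting an X.509 client
# certificate and verifying the server against an internal CA.
import requests

resp = requests.post(
    "https://parent.internal.example/ingest",                # hypothetical parent node
    json={"sensor": "child-3", "value": 42},                 # made-up payload
    cert=("/etc/pki/child-3.crt", "/etc/pki/child-3.key"),   # client cert + key
    verify="/etc/pki/internal-ca.pem",                       # trust only the internal CA
    timeout=5,
)
resp.raise_for_status()
```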
Again, HTTP over TLS (i.e. HTTPS) is secure enough for most security needs, and since the network packets are already encrypted by HTTPS, you do not need to re-encrypt your message (unless you have an extraordinary situation).
I would suggest using TLS/SSL protocol.
There are some libraries available to implement this (depending on the programming language you are using):
-OpenSSL
-GnuTLS
-JSSE
-...
You also might want to check wikipedia on TLS/SSL for more information.
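For completeness, a tiny client sketch using Python's standard ssl module (which wraps OpenSSL, one of the libraries listed above); the host and port are placeholders:

```python
# Sketch: TLS client with certificate and hostname verification via the stdlib.
import socket
import ssl

context = ssl.create_default_context()   # verifies certificates and hostnames
# context.load_verify_locations("/etc/pki/internal-ca.pem")  # if using a private CA

with socket.create_connection(("parent.internal.example", 8443)) as raw:
    with context.wrap_socket(raw, server_hostname="parent.internal.example") as tls:
        tls.sendall(b"small snippet of data\n")
        print(tls.recv(1024))
```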
You probably want to look up TLS, which was built on its predecessor, the Secure Sockets Layer. Earlier versions derived keys using a combination of MD5 and SHA-1; TLS 1.2 replaced that combination with SHA-256-based constructions.
SHA-2 is one of the most widely trusted families of one-way hash functions (hashes, not encryption algorithms) to date.
You should also look into virtual private networks (VPNs), since it sounds like you need to maintain a secure session.
You could set up ssh tunnels, which (IMHO) are better for transmitting arbitrary data. The security is probably better since you can use public key crypto to secure messages. The system doesn't scale terribly well, but if you only have 4 nodes, this shouldn't be a problem.

Where are the real risks in network security?

Anytime a username/password authentication is used, the common wisdom is to protect the transport of that data using encryption (SSL, HTTPS, etc). But that leaves the end points potentially vulnerable.
Realistically, which is at greater risk of intrusion?
Transport layer: Compromised via wireless packet sniffing, malicious wiretapping, etc.
Transport devices: Risks include ISPs and Internet backbone operators sniffing data.
End-user device: Vulnerable to spyware, key loggers, shoulder surfing, and so forth.
Remote server: Many uncontrollable vulnerabilities including malicious operators, break-ins resulting in stolen data, physically heisting servers, backups kept in insecure places, and much more.
My gut reaction is that although the transport layer is relatively easy to protect via SSL, the risks in the other areas are much, much greater, especially at the end points. For example, at home my computer connects directly to my router; from there it goes straight to my ISP's routers and onto the Internet. I would estimate the risks at the transport level (both software and hardware) as low to non-existent. But what security does the server I'm connected to have? Have they been hacked into? Is the operator collecting usernames and passwords, knowing that most people use the same information at other websites? Likewise, has my computer been compromised by malware? Those seem like much greater risks.
My question is this: should I be worried if a service I'm using or developing doesn't use SSL? Sure, it's a low-hanging fruit, but there are a lot more fruit up above.
By far the biggest target in network security is the remote server. In the case of a web browser and an HTTP server, the most common threats are in the form of XSS and XSRF. Remote servers are juicy targets for other protocols as well, because they often have an open port that is globally accessible.
XSS can be used to bypass the same-origin policy. This can be used by a hacker to fire off XMLHttpRequests to steal data from a remote server. XSS is widespread and easy for hackers to find.
Cross-site request forgeries (XSRF) can be used to change the password for an account on a remote server. They can also be used to hijack mail from your Gmail account. Like XSS, this vulnerability type is also widespread and easy to find.
The next biggest risk is the "transport layer", but I'm not talking about TCP. Instead you should worry more about the other network layers, such as OSI layer 1, the physical layer (e.g. 802.11b): being able to sniff the wireless traffic at your local cafe can be incredibly fruitful if applications aren't properly using SSL. A good example is the Wall of Sheep. You should also worry about OSI layer 2, the data link layer: ARP spoofing can be used to sniff a switched wired network as if it were a wireless broadcast. Higher up the stack, SSLStrip can still be used to this day to undermine the TLS/SSL used in HTTPS.
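As one hedged example of countering SSLStrip-style downgrades: for a web application served over HTTPS (Django is used here purely for illustration), forced redirects plus HSTS are the usual mitigation. The values shown are illustrative, not a recommendation for any particular deployment.

```python
# settings.py sketch (Django): redirect to HTTPS and send an HSTS header so
# browsers refuse to fall back to plain HTTP on later visits.
SECURE_SSL_REDIRECT = True            # 301 from http:// to https://
SECURE_HSTS_SECONDS = 31536000        # one year of Strict-Transport-Security
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True            # only if you intend to submit to the preload list
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True             # also relevant to the XSRF point above
```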
Next up is the end-user device. Users are dirty; if you ever come across one of these "users", tell them to take a shower! No, seriously: users are dirty because they have lots of spyware/viruses/bad habits.
Last up is transport devices. Don't get me wrong, this is an incredibly juicy target for any hacker. The problem is that serious vulnerabilities have been discovered in Cisco IOS and nothing much has really happened; there hasn't been a major worm to affect routers. At the end of the day it's unlikely that this part of your network will be directly compromised. Although, if a transport device is responsible for your security, as in the case of a hardware firewall, then misconfigurations can be devastating.
Let's not forgot things like:
leaving logged-in sessions unattended
writing passwords on stickies
The real risk is stupid users.
They leave their terminals open when they go to lunch.
Gullible in front of any service personnel doing "service".
Storing passwords and passphrases on notes next to the computer.
In great numbers someone someday will install the next Killer App (TM) which takes down the network.
Through users, any of the risks you mention can be accomplished through social engineering.
Just because you think the other parts of your communications might be unsafe doesn't mean you shouldn't protect the bits that you can protect as best you can.
The things you can do are:
Protect your own end.
Give your message a good shot at surviving the internet by wrapping it up warm.
Try to make sure that the other end is not an impostor.
The transport is where more people can listen in than at any other stage. (There could only be a maximum of 2 or 3 people standing behind you while you type in your password, but dozens could be plugged into the same router doing a man-in-the-middle attack, and hundreds could be sniffing your Wi-Fi packets.)
If you don't encrypt your message then anyone along the way can get a copy.
If you're communicating with a malicious/negligent end-point, then you're in trouble no matter what security you use; you have to avoid that scenario (authenticate them to you as well as you to them, e.g. with server certificates).
None of these problems have been solved, or anywhere close. But going out there naked is hardly the solution.

Client Server socket security

Assume we have a server S and a few clients (C), and whenever a client updates the server, an internal database on the server is updated and replicated to the other clients. This is all done using sockets in an intranet environment.
I believe that an attacker can fairly easily sniff this plain text traffic. My colleagues believe I am overly paranoid because we are behind a firewall.
Am I being overly paranoid? Do you know of any exploit (link please) that took advantage of a situation such as this, and what can be done differently? The clients were rewritten in Java but the server is still using C++.
Is there anything in code that can protect against such an attack?
Inside your company's firewall, you're fairly safe from direct hack attacks from the outside. However, statistics that I won't trouble to dig out claim that most of the damage to a business' data is done from the INside. Most of that is simple accident, but there are various reasons for employees to be disgruntled and not found out; and if your data is sensitive they could hurt your company this way.
There are also boatloads of laws about how to handle personal ID data. If the data you're processing is of that sort, treating it carelessly within your company could also open your company up to litigation.
The solution is to use SSL connections. You want to use a pre-packaged library for this. You provide private/public keys for both ends and keep the private keys well hidden with the usual file access privileges, and the problem of sniffing is mostly taken care of.
SSL provides both encryption and authentication. Java has it built in and OpenSSL is a commonly used library for C/C++.
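A rough sketch of the server side of such a connection, shown with Python's standard ssl module purely for illustration (the same idea applies to JSSE on the Java clients and OpenSSL on the C++ server); certificate paths and the port are placeholders:

```python
# Sketch: TLS socket server that presents its own certificate and also requires
# a client certificate signed by the internal CA (mutual authentication).
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("/etc/pki/server.crt", "/etc/pki/server.key")
context.load_verify_locations("/etc/pki/internal-ca.pem")
context.verify_mode = ssl.CERT_REQUIRED           # reject clients without a valid cert

with socket.create_server(("0.0.0.0", 9443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()          # TLS handshake happens here
        data = conn.recv(4096)                    # e.g. the database update to replicate
        conn.close()
```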
Your colleagues are naïve.
One high-profile attack occurred at Heartland Payment Systems, a credit card processor that one would expect to be extremely careful about security. Assuming that internal communications behind their firewall were safe, they failed to use something like SSL to ensure their privacy. Hackers were able to eavesdrop on that traffic, and extract sensitive data from the system.
Here is another story with a little more description of the attack itself:
Described by Baldwin as "quite a sophisticated attack," he says it has been challenging to discover exactly how it happened. The forensic teams found that hackers "were grabbing numbers with sniffer malware as it went over our processing platform," Baldwin says. "Unfortunately, we are confident that card holder names and numbers were exposed." Data, including card transactions sent over Heartland's internal processing platform, is sent unencrypted, he explains, "As the transaction is being processed, it has to be in unencrypted form to get the authorization request out."
You can do many things to prevent a man-in-the-middle attack. For most internal data within a firewall/IDS-protected network, you really don't need to secure it. However, if you do wish to protect the data you can do the following:
Use PGP encryption to sign and encrypt messages
Encrypt sensitive messages
Use hash functions (ideally keyed, e.g. HMAC) to verify that the message sent has not been modified.
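As a sketch of that last item: a keyed hash (HMAC) rather than a bare hash is what actually stops an active attacker from recomputing the checksum after tampering. The key handling below is simplified for illustration.

```python
# Sketch: integrity protection with HMAC-SHA-256 over a shared secret.
import hashlib
import hmac

SHARED_KEY = b"distribute-out-of-band"   # placeholder; use a real secret store

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(message), tag)

msg = b"UPDATE inventory SET qty = 7 WHERE id = 42"   # made-up message
tag = sign(msg)                  # sent alongside the message
assert verify(msg, tag)          # receiver rejects anything that fails this check
```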
It is good standard operating procedure to secure all data; however, securing data has very large costs. With secure channels you need to have a certificate authority and allow for extra processing on all machines that are involved in communication.
You're being paranoid. You're talking about data moving across an, ideally, secured internal network.
Can information be sniffed? Yea, it can. But it's sniffed by someone who has already breached network security and got within the firewall. That can be done in innumerable ways.
Basically, for the VAST majority of businesses, no reason to encrypt internal traffic. There are almost always far far easier ways of getting information, from inside the company, without even approaching "sniffing" the network. Most such "attacks" are from people who are simply authorized to see the data in the first place, and already have a credential.
The solution is not to encrypt all of your traffic, the solution is to monitor and limit access, so that if any data is compromised, it is easier to detect who did it, and what they had access to.
Finally, consider that the sysadmins and DBAs pretty much have carte blanche over the entire system anyway, as inevitably someone always needs to have that kind of access. It's simply not practical to encrypt everything to keep it away from prying eyes.
Finally, you're making a big ado about something that is just as likely written on a sticky tacked on the bottom of someone's monitor anyway.
Do you have passwords on your databases? I certainly hope the answer to that is yes. Nobody would believe that password-protecting a database is overly paranoid, so why wouldn't you have at least the same level of security* on the same data flowing over your network? Just like an unprotected DB, an unprotected data flow over the network is vulnerable not only to sniffing, it is also mutable by a malicious attacker. That is how I would frame the discussion.
*By same level of security I mean use SSL as some have suggested, or simply encrypt the data using one of the many available encryption libraries around if you must use raw sockets.
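For the raw-socket case, here is a minimal sketch using the third-party cryptography package's Fernet recipe (symmetric AES plus an HMAC); the key handling is simplified and the message content is made up. Unlike TLS, this protects the bytes on the wire but does not by itself authenticate the peer.

```python
# Sketch: encrypt-and-authenticate application payloads before sending them
# over a plain socket, using a pre-shared Fernet key.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # in practice: generated once, shared securely
f = Fernet(key)

token = f.encrypt(b"replicate: row 42 -> qty 7")   # send this over the socket

try:
    plaintext = f.decrypt(token, ttl=60)           # receiver side; rejects stale data
except InvalidToken:
    plaintext = None                               # tampered, expired, or wrong key
```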
Just about every "important" application I've used relies on SSL or some other encryption methodology.
Just because you're on the intranet doesn't mean you may not have malicious code running on some server or client that may be trying to sniff traffic.
At a minimum, an attacker needs access to a device inside your network that offers the possibility to sniff the entire traffic, or at least the traffic between a client and a server.
Anyway, if the attacker is already inside, sniffing should be just one of the problems you'll have to take into consideration.
There are not many companies I know of which use secure sockets between clients and servers inside an intranet, mostly because of the higher costs and lower performance.
