Is there any point in encrypting files that users have uploaded to my site?

This is just a question to put out there, as I am unsure whether there is a benefit or not. I am thinking about encrypting the files that are uploaded to my site (as they could potentially contain users' names and addresses along with some other personal data). However, it occurred to me that the only way hackers could get hold of these files would be to gain control of the server, and if they did that they would have access to the encrypted files and also to the database connection strings etc. Once they have those, they could easily decrypt the files.
The traffic itself is protected with SSL, and I am not interested in using public-key encryption, as that is not something that would work for my users. So is it worth the extra effort and hassle of encrypting the uploaded files?

Related

How should I store a known/hard-coded password in the database?

I have a web app that uses known username and password combinations to log in to external servers. There are multiple username/password combinations used for different services. Right now they are essentially "hard-coded" into the website code, but I would like to move this information out of the code base for better security.
My initial thought is to store this data in the database that supports the website. I want to store it in a way that is not easily "hackable" (i.e. I'm not going to store it as plain text or as an MD5 hash). Should I follow the same format I use for the website users' passwords, where I use a random number generator to create a salt for each password and then store the hash of the password and salt combined, or would this be overkill?
Generally, storing passwords in the application code is a bad idea. Moving them outside the code has many advantages, including security.
Whether to store them in the database or in configuration files is a choice you have to make depending on your application.
For full security you should never store passwords in a retrievable form. But to log in to an external server, as in your case, you need the actual plain-text password, so a one-way hash will not work for you.
In our product we handle this situation by using two-way SSL certificates. It is very secure and there is no need to store the passwords at all.
If you really do need to store the passwords, then I suggest using a configuration file and letting your application read it. You can encrypt the passwords stored in the configuration file (which brings you back to the same question of how to protect the key). Access to the configuration file should be restricted (on Unix, file permission 600).
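A minimal sketch of that configuration-file approach, assuming Python and the third-party cryptography package; the service name, file name and password are made up, and protecting the Fernet key (restricted file permissions, an environment variable, or an OS key store) remains the open question noted above.

    # Sketch: keep service passwords encrypted in a config file, outside the code base.
    # Assumes the third-party "cryptography" package; names and values are illustrative.
    import configparser
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # keep this in a 600-permission file or an env var
    f = Fernet(key)

    # One-time setup: write the encrypted password into the config file.
    config = configparser.ConfigParser()
    config["billing-service"] = {"password": f.encrypt(b"s3cret").decode()}
    with open("services.ini", "w") as fh:
        config.write(fh)

    # At runtime: the application reads the file and decrypts the value it needs.
    config = configparser.ConfigParser()
    config.read("services.ini")
    plaintext = f.decrypt(config["billing-service"]["password"].encode())
    print(plaintext)                     # b's3cret'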
Alternatively, if your web application is Java, then you can consider using JNDI.
After more research, I've decided at this point to follow the ideas here:
Encrypt a Column of Data - SQL Server | Microsoft Docs
...and encrypt/decrypt in the database inside a stored procedure.

Web server that encrypts uploaded files, can withstand compromised server & decrypt to multiple users

There are generally two main methods of encrypting user-uploaded files.
The client can encrypt the file, send it over for the server to store, and on request the server returns the encrypted file and the client does the decryption. In this scenario, the server never has access to the keys. There are cons to this: the encryption scheme is visible to anyone in the web app's source, it is extra processing for the client, and so on.
The second scenario of course is when the client sends the file in plaintext (presumably over SSL), and the server manages the keys and encryption/decryption.
It seems to me that the most common form implemented is the latter, where the server manages the encryption/decryption for the client. This seems ineffectual to me. If the web server is compromised, even if the attacker only has web-app-level privileges, the encryption was pointless in the first place, since the attacker has access to all the keys just as the web app does, and thereby to the decrypted files. Is there a way to prevent this? Why would people even encrypt files to begin with, unless they did client-side encryption and never stored the keys on the server?
Also, as a second part to this question, is it feasible to allow multiple people access to an encrypted file (say a division within a company) if you used the former option (client-side encryption)? I would presume the users would have to share their keys among themselves, which poses another security risk.
SSL only protects the connection. If you want to prevent the server from peeking at your files, the files must be encrypted with a secret key the server does not know (scenario one).
For the second part, there are many papers discussing how to build such systems based on public-key cryptography. You may also look into more recent cryptographic research, such as broadcast encryption or attribute-based encryption.
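A minimal sketch of scenario one, where the key never leaves the client, assuming Python and the third-party cryptography package; the key handling and storage are placeholders for whatever the real client and server would do.

    # Sketch of scenario one: the client encrypts before upload, so the server only
    # ever stores ciphertext. Assumes the third-party "cryptography" package.
    from cryptography.fernet import Fernet

    # Client side: the key stays with the user (password-derived or stored locally).
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(b"name, address, other personal data")

    # Server side: store the ciphertext as an opaque blob; there is nothing to decrypt with.
    stored_blob = ciphertext

    # Client side, later: download and decrypt locally.
    plaintext = Fernet(key).decrypt(stored_blob)
    assert plaintext == b"name, address, other personal data"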

is client based online encryption practical?

I'm wondering whether a mechanism exists that allows client-to-client encryption. For example, when enabled, any information entered on one client can only be decrypted using a specific key.
Similar to how regular public key transactions work, but server agnostic.
A use case:
Everything on my Facebook profile is encrypted, and nobody would be able to view that information (not even Facebook). The users I give the key to would be able to decrypt that information.
This would allow complete control of data stored online.
The same idea can be applied for pictures uploaded to the internet.
One issue I see is having a practical mechanism to manage keys and a secure way to distribute keys to other users.
Has anyone done something like this before?
In the case of Facebook, I can imagine encrypting the data with OpenPGP keys into ASCII-armored (text) format. Then you can place the encrypted block on Facebook or anywhere else. Other users would take the block, decrypt it on the client side, and read it.
The same applies to other social networks and anywhere else you can store a block of text.
You can easily do the encryption in a client application, and even in JavaScript (if you manage to have the JavaScript load the local user's keys somehow).
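A rough sketch of that flow using the python-gnupg wrapper (a third-party package that drives an installed GnuPG binary); the recipient address and passphrase are made up, and a real deployment would do this in the client application rather than on any server.

    # Sketch: produce an ASCII-armored OpenPGP block that could be pasted into a
    # profile or post, and decrypt it on the receiving side. Assumes GnuPG is
    # installed, the python-gnupg wrapper, and that the keys are in the local keyring.
    import gnupg

    gpg = gnupg.GPG()  # uses the local user's keyring

    encrypted = gpg.encrypt("my private status update", recipients=["friend@example.org"])
    armored_block = str(encrypted)        # "-----BEGIN PGP MESSAGE-----" ... text block
    print(armored_block)

    # A recipient holding the matching private key decrypts the pasted block locally.
    decrypted = gpg.decrypt(armored_block, passphrase="their-key-passphrase")
    print(decrypted.data)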

B2B File Transfer

I have been asked to develop a highly secure B2B File Transfer system between three companies.
VPN is not an option, and they prefer to use common ports like 80, 443, etc., so no extra firewall configuration is needed.
I found solutions like OFTP2 and AS2 that seem sufficient, although I have some questions before I can decide:
Isn't HTTPS file transfer secure enough, so that I can use ASP.NET/C# to do the task?
What about existing tools like SFTP, rsync and other *nix tools?
What about using SOAP?
My main concern is to avoid exposing any clear data to the outside world.
All ideas are appreciated.
Thanks in advance.
If you use a block cipher like AES to encrypt the data and send the AES key encrypted with RSA, that will do the job. For the RSA part, you encrypt using the recipient's public key, which they send to you out of band (e.g. by courier), and they decrypt with their private key. This is secure provided each company keeps its private key secret; you have a key pair for each of the three companies. The AES layer is there if you are really paranoid and want to make sure that even someone who later obtained a private key still could not read data encrypted under a session key they never saw.
You should also sign all messages: compute a hash of the rest of the message and sign it with your private key (an RSA signature); the recipient verifies it with your public key, hashes the data themselves, and rejects the message if the two hashes do not match. This prevents man-in-the-middle tampering. Someone could only interfere if they obtained both the private keys and the AES key, and the estimated crack time for 2048-bit RSA is well over two billion years, so I think you're safe.
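A sketch of that hybrid scheme using Python's third-party cryptography package: a fresh AES-256-GCM key encrypts the payload, the recipient's RSA public key wraps the AES key, and the sender's RSA private key signs the ciphertext. The key pairs are generated inline only to keep the example self-contained; in practice the public keys are exchanged out of band as described.

    # Sketch of the hybrid AES + RSA scheme described above, using the third-party
    # "cryptography" package. Keys are generated inline only for self-containment.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    file_data = b"invoice batch for company B"

    # 1. Encrypt the payload with a fresh AES-256-GCM key.
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, file_data, None)

    # 2. Wrap the AES key with the recipient's RSA public key (OAEP padding).
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = recipient_key.public_key().encrypt(aes_key, oaep)

    # 3. Sign the ciphertext with the sender's RSA private key so the recipient
    #    can verify who sent it (the "sign the hash with your private key" step).
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = sender_key.sign(ciphertext, pss, hashes.SHA256())

    # Recipient side: verify the signature, unwrap the AES key, decrypt the payload.
    sender_key.public_key().verify(signature, ciphertext, pss, hashes.SHA256())
    recovered_aes_key = recipient_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_aes_key).decrypt(nonce, ciphertext, None) == file_data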
Technically you can always do scp/rsync over SSH, if port 22 is among the whitelisted ports. If not, you can run an SSH daemon on 80/443, etc.
To answer your question: yes, HTTPS/SFTP are secure enough, and so is rsync if done over an encrypted channel (see http://troy.jdmz.net/rsync/index.html).
Another thing you can explore is stunnel (http://www.stunnel.org/).
I can think of more than one way to go about it; it totally depends on your servers' OS and any other restrictions you may have.
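For the SFTP route, a minimal sketch using the third-party paramiko library; the host, port, credentials and paths are placeholders (port 443 only because the question restricts traffic to common ports).

    # Sketch: push a file over SFTP (SSH) using the third-party "paramiko" library.
    # Host, key path and file names are placeholders.
    import paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()       # trust only hosts already in known_hosts
    client.connect("partner.example.com", port=443, username="transfer",
                   key_filename="transfer_id_rsa")

    sftp = client.open_sftp()
    sftp.put("outgoing/invoices.zip", "/inbound/invoices.zip")  # encrypted in transit
    sftp.close()
    client.close()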
The main issue with SSL is certificate validation. By default, any certificate matching the target domain and signed by any of a plethora of CAs is considered valid. If you are paranoid, you should check the certificate used on the connection directly against a certificate stored in your configuration.
Using a DHE handshake to achieve perfect forward secrecy would also be nice, but the built-in SSL API in .NET doesn't expose a way to enforce that, so you might or might not get DHE depending on the version of Windows and .NET.
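One way to do that direct check is to compare the fingerprint of the certificate the server actually presents against a fingerprint stored in your configuration. A sketch with the Python standard library; the host and expected fingerprint are placeholders.

    # Sketch: pin the partner's certificate by comparing its SHA-256 fingerprint
    # against a value stored in configuration. Host and fingerprint are placeholders.
    import hashlib
    import ssl

    EXPECTED_FINGERPRINT = "d4c9d9027326271a89ce51fcaf328ed673f17be33469ff979e8ab8dd501e664f"

    pem_cert = ssl.get_server_certificate(("partner.example.com", 443))
    der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
    fingerprint = hashlib.sha256(der_cert).hexdigest()

    if fingerprint != EXPECTED_FINGERPRINT:
        raise RuntimeError("server certificate does not match the pinned fingerprint")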
Another good choice is tunneling something over SSH. For example SCP is an existing file copying utility that does this.
OK: you don't want to expose the contents of the files exchanged between the three parties to anyone else.
There are two things to consider:
1) Protect the transport. Here, the files are sent over an encrypted link: you're basically putting the normal bits into an encrypted tunnel so that no one can snoop on the link. This is usually done using SFTP for company-to-company communications, with keys exchanged and authenticated out of band before any transfers occur.
2) Protect the files. Here, each file is encrypted independently and then transported to the destination. You encrypt the files before they leave your network, and they are decrypted once they arrive at their destination. This is usually done using PGP for company-to-company communications, with the PGP keys exchanged and authenticated out of band before any transfers occur.
If you protect the transport, you're just sending the data through a protected pipe, linking the companies. Once the file is received, it's not encrypted (it's only encrypted through the pipe). If you protect the file, you are block-encrypting files themselves, so it's more of a process to encrypt and decrypt the files; only the actual process/system that has the PGP keys at the receiving end can decrypt the file.
So, what do you want to do? That's a risk decision. If you're only concerned about someone intercepting the file contents that is not company A or B (or C), you need to protect the transport (SFTP, et al). If you're concerned about protecting each file independently and making sure that only specific processes at the receiving end can decrypt the file, you want to protect the files. If the data is very sensitive, and under high risk, you may want to do both.
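For option 2, a rough sketch using the python-gnupg wrapper (a third-party package that drives an installed GnuPG binary); the file names and recipient address are made up, and it assumes the partner's public key has already been imported and verified out of band.

    # Sketch of the "protect the files" option: PGP-encrypt each file before it
    # leaves the network. Assumes GnuPG plus the python-gnupg wrapper, and that the
    # partner's public key is in the local keyring.
    import gnupg

    gpg = gnupg.GPG()

    with open("payroll.csv", "rb") as fh:
        result = gpg.encrypt_file(fh, recipients=["edi@companyb.example"],
                                  output="payroll.csv.gpg")

    if not result.ok:
        raise RuntimeError(result.status)
    # payroll.csv.gpg can now go over SFTP/HTTPS; only company B's private key opens it.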
Some very good points have been made about the security issues of developing your own file transfer programs. There are software security, network security, and user authentication issues involved here. Understanding all the various encryption algorithms and security rules takes years, and just keeping up with the intricate changes in digital security standards and laws is a time-consuming endeavor for a development team.
Another option is that there are several very good and affordable managed file transfer (MFT) solutions that have already been developed and that address all of these security issues. They have also mastered the workflow of file transfer management, which makes the process much easier on the IT staff. One of the MFT solutions that I've used for the past few years is Linoma Software's GoAnywhere product. It has saved our team months of time and headache, allowing us to focus on our core business.
I hope this helps...

Secure file server

Introduction
I want to create a Java web application for storing and backing up user files, similar to Dropbox. One interesting Dropbox feature is that it can detect whether a certain file already exists on the server. For example, if one user uploads a file to the server, another user who tries to upload the same file will not need to upload the content again; the server only needs to mark that this user has the same file. This saves bandwidth and space and increases speed in many ways.
The most basic solution to this problem is to use a file hash string, e.g. SHA-1, MD5, etc., to identify the file. The client software checks whether a certain hash exists on the server. If it does, it can skip the upload and mark that the user has the same file.
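A minimal sketch of that hash check, assuming Python; the in-memory set and the file name stand in for whatever the server actually stores.

    # Sketch: content-addressed dedup check. The client hashes the file and asks the
    # server whether that digest is already known.
    import hashlib

    known_hashes = set()          # digests of files the server already has

    def file_digest(path):
        h = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    digest = file_digest("holiday-photos.zip")
    if digest in known_hashes:
        print("skip upload, just record ownership")
    else:
        print("upload needed")
        known_hashes.add(digest)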
Problem
The web application is implemented with a REST architecture so that users can easily write their own client software to upload their files. For security reasons, SSL is enabled for all transactions. But my biggest security concern is users claiming that they have a file without actually owning it, which is possible if I use SHA-1 or any other standard hash algorithm. This cannot be prevented by SSL or encryption: if a user manages to get hold of the hash string (the MD5 and SHA-1 hashes of many files can be found by googling), he can mark that he has the file using the REST service on the web application.
So one possible solution is for the server to request a set of random bytes from the file in addition to the hash of the whole file. Here are example steps:
The client checks whether a certain hash exists on the server. If the file already exists, the server returns the positions of the random bytes it requires.
The client sends the requested bytes. Client software will not be able to respond without having the actual file.
In this way, it saves bandwidth while ensuring that the user actually owns the file they want to upload (a rough sketch of these steps follows).
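A rough sketch of those steps in Python; the file paths and sample count are placeholders, and a real implementation would run the two halves on the client and the server respectively.

    # Sketch of the random-byte challenge described above. The server picks random
    # offsets into its stored copy; only a client holding the real file can answer.
    import os
    import secrets

    def make_challenge(file_size, samples=16):
        # Server: choose random positions within the file.
        return [secrets.randbelow(file_size) for _ in range(samples)]

    def read_bytes_at(path, offsets):
        # Client (and server, for comparison): return the byte at each offset.
        with open(path, "rb") as fh:
            out = []
            for off in offsets:
                fh.seek(off)
                out.append(fh.read(1))
            return b"".join(out)

    server_copy = "stored/abc123.bin"          # server's existing copy of the file
    client_copy = "holiday-photos.zip"         # what the client claims to own

    offsets = make_challenge(os.path.getsize(server_copy))
    if read_bytes_at(client_copy, offsets) == read_bytes_at(server_copy, offsets):
        print("claim accepted: mark the user as owning the file")
    else:
        print("claim rejected")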
Question
I am no expert in web security, so I have no idea whether this is a good idea or not. I have read articles saying that implementing your own fancy scheme can reduce security, because it cannot be properly tested and the extra information it leaks may give an attacker a way in.
Does anyone have any comments on the process?
Will it reduce security?
Does anyone have an idea for solving this problem differently?
I understand that there might not be an exact answer to this question, but I would like to hear from anyone who has encountered the same problem and found a good solution to it.
Rather than asking the client to upload some random bytes of the file's contents, it may be better to ask the client to upload the hash of a random region of the file. That way you can use a wider range of sizes for the region you ask the client to verify.
Better yet, though, may be to send the client a random number and require the client to compute an HMAC of the entire file's contents using that number as the key. This is more computationally-expensive since the server must compute the HMAC too, but it verifies that the client has the entire file, not just a small portion of it.
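A sketch of that HMAC challenge using only the standard library; the file paths are placeholders for the client's local file and the server's stored copy.

    # Sketch of the HMAC challenge: the server issues a fresh random key, the client
    # returns HMAC(key, whole file), and the server checks it against its own copy.
    import hashlib
    import hmac
    import secrets

    def file_hmac(path, key):
        mac = hmac.new(key, digestmod=hashlib.sha256)
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                mac.update(chunk)
        return mac.digest()

    challenge_key = secrets.token_bytes(32)                           # server -> client
    client_answer = file_hmac("holiday-photos.zip", challenge_key)    # client side
    server_answer = file_hmac("stored/abc123.bin", challenge_key)     # server side

    if hmac.compare_digest(client_answer, server_answer):
        print("client proved possession of the whole file")
    else:
        print("claim rejected")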
One unavoidable side effect of this hash feature, even with a verification scheme, is that it reveals that a copy of the file already exists somewhere on the server. That by itself may be sensitive information.
For the most stringent privacy protection, you should forego this feature and make each user upload their own copy of the file. You can use hash comparison on the server to avoid storing multiple copies of the file, transparently to the clients.
