I am setting up a server where some important code will reside. I want to make sure the code is unreachable in case the HD is stolen. I know you can never be completely sure, but reasonably secure would do.
Which method could I use?
For example, how do I mount an encrypted filesystem at boot without human interaction?
Thank you very much for your help.
I do not know whether any of the encrypted filesystem solutions support this out of the box, but one solution would be to have the server contact another server at boot to fetch the key. You could even imagine splitting the key between several servers, so that the server has to contact n out of m of them to reconstruct the key.
If you place those key servers in different locations, you stay reasonably safe even if (n-1) of the servers are stolen.
An attacker who strikes while the server is still connected to the network could of course still obtain the encryption key, but this setup would make you secure against simple theft.
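A minimal sketch of that idea, assuming the data sits in a LUKS volume unlocked with cryptsetup and that each key server simply serves its share of the key over HTTPS (the URLs, device and mount point below are made up). For simplicity it XOR-splits the key so that every share is required; a real "n out of m" scheme would need proper secret sharing, e.g. Shamir's.

```python
#!/usr/bin/env python3
"""Fetch key shares from remote servers, rebuild the key, unlock and mount.

Hypothetical sketch: share URLs, device and mount point are placeholders,
and all shares are required (XOR split), not a true k-of-n scheme.
"""
import subprocess
import urllib.request

SHARE_URLS = [                      # one share per key server (placeholders)
    "https://keysrv1.example.com/share",
    "https://keysrv2.example.com/share",
    "https://keysrv3.example.com/share",
]
DEVICE = "/dev/sdb1"                # the LUKS-encrypted partition
NAME = "securedata"                 # becomes /dev/mapper/securedata
MOUNTPOINT = "/srv/secure"

def fetch_share(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def combine(shares):
    # XOR all shares together; shares are assumed to be equal length,
    # and every single one is needed to recover the key.
    key = bytearray(shares[0])
    for share in shares[1:]:
        for i, byte in enumerate(share):
            key[i] ^= byte
    return bytes(key)

def main():
    key = combine([fetch_share(url) for url in SHARE_URLS])
    # cryptsetup reads the key from stdin when --key-file=- is given.
    subprocess.run(["cryptsetup", "open", DEVICE, NAME, "--key-file=-"],
                   input=key, check=True)
    subprocess.run(["mount", f"/dev/mapper/{NAME}", MOUNTPOINT], check=True)

if __name__ == "__main__":
    main()
```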
Mounting an encrypted filesystem without human intervention will ultimately weaken your security: if the machine can unlock the disk on its own, a thief just needs to steal the whole server. That said, it is perfectly doable on any Linux-based system using dm-crypt, and there are many online tutorials showing you how to do it.
If this is for a file server, you may want to consider using FreeNAS. It is a BSD-based NAS operating system and it includes the ability to encrypt the disks, amongst other things. You will need to enter a password through the web interface to mount the disks.
The open source TrueCrypt creates a virtual disk within a file and mounts it like a real drive, or it can encrypt an entire drive. Encryption is transparent and fast. I have used it; it works in real time. It might make things easier.
What you want is called Full Disk Encryption: a complete partition/filesystem is encrypted, and it is decrypted by the OS (or third-party software) when it's mounted.
There are many implementations, and at least MS Windows and Linux ship one as part of the OS.
See the Wikipedia article for details.
Being able to mount it without human intervention could be problematic; after all, the whole point is that the HD cannot be read without human (i.e. your) intervention :-). You might be able to do this with some hardware token, but then that could also be stolen. So that requirement might not be doable.
Mounting without human interaction is possible using a hardware token, but you need to guard against someone stealing the token along with your server.
You could get some safety with a built-in GPS and a ten-minute backup battery or something (forget the key if power is lost for more than ten minutes or the server is moved). You can make it work somehow, but it will be insanely expensive.
You probably want a less involved solution, like this:
Boot from a regular partition
Set up encrypted swap with a randomized key on startup (important!)
Set up /tmp and similar locations on an encrypted partition or in RAM (important!)
Mount the encrypted data partition by logging in over ssh
Human intervention is still required, but you can do it from home over ssh (see the sketch below).
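For that last step, a small sketch of what you might run on the server over ssh (e.g. ssh -t root@server python3 /root/unlock.py), assuming a LUKS partition; the device, mapper name and mount point are placeholders.

```python
#!/usr/bin/env python3
"""Unlock and mount the encrypted data partition interactively over ssh.

Sketch only: /dev/sdb1, 'data' and /srv/data are placeholders for your setup.
"""
import getpass
import subprocess

DEVICE = "/dev/sdb1"
NAME = "data"                       # becomes /dev/mapper/data
MOUNTPOINT = "/srv/data"

passphrase = getpass.getpass("LUKS passphrase: ")

# Feed the passphrase to cryptsetup on stdin so it never touches the disk.
subprocess.run(["cryptsetup", "open", DEVICE, NAME, "--key-file=-"],
               input=passphrase.encode(), check=True)
subprocess.run(["mount", f"/dev/mapper/{NAME}", MOUNTPOINT], check=True)
print(f"{MOUNTPOINT} mounted.")
```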
Thank you very much for your helping answers.
I'll try a TrueCrypt container which uses several distributed keyfiles (and no password).
A script will retrieve the keyfiles, mount the volume, and then delete the keyfiles, roughly like the sketch below.
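Something along these lines; the URLs and paths are made up, and the truecrypt options shown are only illustrative, so check your version's command-line help before relying on them.

```python
#!/usr/bin/env python3
"""Fetch the distributed keyfiles, mount the TrueCrypt volume, delete the keyfiles.

Rough sketch: URLs and paths are placeholders, and the truecrypt flags below
are illustrative only -- verify them against your TrueCrypt version.
"""
import os
import subprocess
import tempfile
import urllib.request

KEYFILE_URLS = [                    # wherever the keyfiles are distributed
    "https://host1.example.com/key1",
    "https://host2.example.com/key2",
]
CONTAINER = "/srv/container.tc"
MOUNTPOINT = "/mnt/secret"

keyfiles = []
try:
    # 1. Retrieve the keyfiles into temporary files.
    for url in KEYFILE_URLS:
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as out, urllib.request.urlopen(url) as resp:
            out.write(resp.read())
        keyfiles.append(path)

    # 2. Mount the volume (adjust the flags to your TrueCrypt version).
    subprocess.run(["truecrypt", "--text", "--non-interactive",
                    "--keyfiles=" + ",".join(keyfiles),
                    CONTAINER, MOUNTPOINT], check=True)
finally:
    # 3. Delete the keyfiles again. Note that plain deletion does not
    #    scrub the underlying disk blocks.
    for path in keyfiles:
        os.remove(path)
```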
Since we are only a small bunch, another option could be to programmatically encrypt/decrypt the data on the client side just before writing/reading. But that seems somewhat tiresome to me.
Then, what about having a keyfile on a terminal server?
So many questions!
Thank you once more for your help.
Now... I just remembered about cold boot attacks. Do we really need guns? Are we that doomed?
Say I have an application that should run on a VPS. The app uses a configuration file that contains very important private keys, in the sense that no one else should ever have access to them. I know VPS providers can easily access my files. So how can I "hide" the sensitive data from malicious access while still keeping it usable for the app?
I believe encryption will be of no help, since the decryption has to be done on the same machine. Also, I know that running my own private server would be the obvious answer, but unfortunately that's not an option.
You cannot really solve this problem. Whatever workaround you find, someone with access to the machine can repeat the same steps. You can only solve it if you have full control over the server (both hardware and software); otherwise it's a lost battle.
Some links:
https://cheatsheetseries.owasp.org/cheatsheets/Key_Management_Cheat_Sheet.html
https://owaspsamm.org/model/implementation/secure-deployment/stream-b/
https://security.stackexchange.com/questions/223457/how-to-store-api-keys-when-algo-trading
You can browse Security SE for some direction and ask a more targeted question.
This problem can be mitigated by using your own servers, using specialized hardware for key storage, trusting your hosting or cloud provider, and using well-designed security protocols.
But the VPS provider doesn't know how your app decrypts the keys in the file. Perhaps your app has a decryption key embedded in it, or maybe it is something even simpler. Without decompiling your app they are no closer to learning the secrets. Of course, if your "app" is just a few scripts, they can work it out.
For example, if the first key in the file is a customerID, they don't know that all the other keys are simply XOR'ed against a hash of your customerID; they don't even know which hashing algorithm you used.
OK, that might be too simplistic if you used one of the few well-known hashes, but if there are only a few clients, it can be enough.
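A toy version of that scheme, purely for illustration (and, as said, it is obscurity rather than real cryptography; the names and values are made up).

```python
#!/usr/bin/env python3
"""Toy version of the 'XOR against a hash of the customer ID' idea.

Illustrative only: this is obfuscation, not strong encryption.
"""
import hashlib

def _mask(customer_id, length):
    # Stretch the hash of the customer ID to the required length.
    mask = b""
    counter = 0
    while len(mask) < length:
        mask += hashlib.sha256(f"{customer_id}:{counter}".encode()).digest()
        counter += 1
    return mask[:length]

def obscure(customer_id, secret):
    return bytes(a ^ b for a, b in zip(secret, _mask(customer_id, len(secret))))

# XOR is symmetric, so the same function recovers the secret.
deobscure = obscure

if __name__ == "__main__":
    cid = "customer-42"                     # the one value stored in the clear
    hidden = obscure(cid, b"super-secret-api-key")
    assert deobscure(cid, hidden) == b"super-secret-api-key"
```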
Obviously, they could be listening to the network traffic your app is sending, but then that should be end-to-end encrypted already, if you are that paranoid.
We have this computer code which requires anyone who has access to it to pay a license fee. We will pay the fee for our developers, but the vendor wants our sysadmins to be licensed too, as they can see the code archives. If the code were stored encrypted in the archives, the sysadmins could see the files but not their contents.
So does any software version control system support encryption, so that only the people checking out the code need the key and are able to see the files decrypted?
I was thinking it wouldn't be hard to add this to pserver and CVS, but if it has already been done elsewhere, why reinvent the wheel?
Any insight would be helpful.
There is no way to set up a source control system that can perform server-side diffs in a way that would prevent a sysadmin from at least theoretically accessing the contents. (i.e.: The source control system would not be able to store the decryption key in a place that the sysadmin couldn't access.) Unless your sysadmins habitually browse the source control database contents, such a system should have no practical difference from an unencrypted system from the perspective of your vendor.
The only way to make the source control database illegible to a server admin is to encrypt files on the client before submitting them to the server. For this to meet the desired goal, the decryption keys would need to be inaccessible to the admins, which is unlikely to be practical in most organizations since server admins typically have admin access on all client machines as well. Ignoring this picky detail, it would also mean that all your source control system would ever see is encrypted binaries, which means no server-side diff or blame. It also means potentially horrible bloat of your database size, since every file will require complete replacement on each commit. Are you really willing to sacrifice usability of your source control system in order to save licensing fees and/or placate this vendor?
Basically, you want to give all your developers some secret key that they plug into the encryption/decryption routines of git's smudge and clean filters. And you want an encryption scheme that is capable of performing deltas.
First, see Encrypted version control for some examples in git. As written, this can dramatically increase disk usage. However, there are ways to make the encryption more "diff-friendly" at the cost of some security; see diph for an example of how you might attack that. Also, any system that uses AES-ECB mode would diff quite well. (You generally shouldn't use AES-ECB mode because of its security flaws... one of those security flaws is that it diffs quite well... hey, that's what you wanted, so this seems a reasonable exception.)
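If you go the smudge/clean route, the filter itself can be a tiny script. Below is a sketch assuming a hypothetical crypt_filter.py wired in via .gitattributes, a key kept outside the repository, and Python's cryptography package; note that Fernet picks a random IV each time, so identical content re-encrypts differently on every commit, which is exactly the disk-usage problem mentioned above.

```python
#!/usr/bin/env python3
"""git clean/smudge filter sketch: encrypt on commit, decrypt on checkout.

Assumed wiring (names are made up):
  .gitattributes:  secret/**  filter=crypt
  git config filter.crypt.clean  "python3 crypt_filter.py clean"
  git config filter.crypt.smudge "python3 crypt_filter.py smudge"
The key (generated once with Fernet.generate_key()) lives in
~/.git-crypt.key, outside the repository.
"""
import pathlib
import sys

from cryptography.fernet import Fernet   # pip install cryptography

KEY_FILE = pathlib.Path.home() / ".git-crypt.key"

def main():
    mode = sys.argv[1]                    # "clean" or "smudge"
    fernet = Fernet(KEY_FILE.read_bytes().strip())
    data = sys.stdin.buffer.read()
    if mode == "clean":                   # working tree -> repository
        out = fernet.encrypt(data)
    else:                                 # repository -> working tree
        out = fernet.decrypt(data)
    sys.stdout.buffer.write(out)

if __name__ == "__main__":
    main()
```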
I am currently working on a Java search project that will be deployed to clients' local servers. The project contains some valuable data that we hope cannot be accessed directly on the machine, but only through the project's services/APIs. The data will be updated on a daily basis and needs to be available for querying 24/7.
I am thinking of eCryptfs, but after some testing it seems that once the encrypted data is mounted under the service user, say 'root1', and since I have to keep it mounted to support queries, all the other logged-in users can access the decrypted data without a password. Is there any way to support my scenario? Thanks.
If your users don't have root access, you can simply store the encryption key in a file and deny read access to other users.
If your users do have root access, there is nothing you can do.
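A minimal sketch of the non-root case, assuming the service runs under its own user; the key path is made up.

```python
#!/usr/bin/env python3
"""Create a key file that only the service user can read (paths are made up)."""
import os
import secrets

KEY_PATH = "/srv/search/.datakey"

# O_EXCL makes this fail if the file already exists, so an existing key is
# never silently overwritten; 0o600 means owner read/write only.
fd = os.open(KEY_PATH, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "wb") as f:
    f.write(secrets.token_bytes(32))      # 256-bit random key

print(oct(os.stat(KEY_PATH).st_mode & 0o777))   # should print 0o600
```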
EDIT:
Under most circumstances, someone with a root account can do anything that the other users can do. So even if you did set up r/w permissions on a file for only a certain user (which is very possible), it would be rather pointless: someone with sudo/root access could just run sudo su USER, where USER is the account with the r/w permissions. I think a better way to go about this is to look at options that users do not have control over.
The first thing that came to mind was compiled programs. While they are not really meant for holding secure information, you could compile a simple program that outputs a little bit of the information after a time delay (to prevent someone from running it continuously and compiling all of the data they get from it). Actually, modifying your Java program might be easier; just have it store the information as an enormous string or something. :D
These open source Java obfuscators will make it harder (but certainly not impossible) to reverse engineer your program and, along with it, the data inside.
A more secure option would be to write a C program, compile it, and have it output the information (after a time delay) that the Java code can then manage. To make it harder to decompile, you could add some encryption to the embedded strings, so that if the decompiler messes up any part of it, the result is still worthless to them.
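The drip-feed idea, sketched in Python for brevity (the suggestion above is to do it in C or inside the Java service itself); the embedded blob, the XOR key and the delay are placeholders, and this only slows an attacker down rather than stopping them.

```python
#!/usr/bin/env python3
"""Sketch of the 'output the data slowly, lightly obfuscated' idea.

Placeholders throughout; this raises the effort needed to scrape the data,
it does not make the data secure.
"""
import itertools
import time

_KEY = b"not-really-a-secret"
# Data shipped inside the program, XOR-obfuscated so it is not plain text
# in the distributed file.
_BLOB = bytes(b ^ k for b, k in zip(b"record one\nrecord two\nrecord three\n",
                                    itertools.cycle(_KEY)))

def records():
    clear = bytes(b ^ k for b, k in zip(_BLOB, itertools.cycle(_KEY)))
    yield from clear.decode().splitlines()

if __name__ == "__main__":
    for line in records():
        print(line)
        time.sleep(5)     # rate-limit output so bulk extraction is slow
```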
Final verdict: Nothing is really 100% secure when it is stored on someone else's computer(s) but, then again, neither is it 100% secure on your own server. I would suggest looking into other options, but if you have no other option and you have legal protection on the information, this might work for you.
I was wondering whether a login system would be useful where you have to upload a certain file and the server then verifies that it is identical to the one stored on the server.
I was thinking that, in its favour, the "password" (the file) could be quite large without you having to remember it.
It would also mean that you would still have to require a login name.
On the other hand, one disadvantage would be that you would have to "carry around" the file everywhere in order to log in.
I don't want to turn this into a philosophical question, rather a programming one.
I'm trying to assess the usability, safety/vulnerabilities, etc.
Is this or something similar done?
I am definitely not a security expert, but here are some thoughts.
This sounds somewhat similar to public key encryption. If you look into how that works, I think you will get a sense of the same sort of issues. For example, see http://en.wikipedia.org/wiki/Public-key_encryption
In addition to the challenge of users having to carry the file around with them, another issue is how to keep that file secure. What if somebody's computer or thumb drive is stolen? A common approach with public-key encryption is to encrypt the private key itself, and require a password to use it. Unless you provide the file in a form which requires this, you are counting on your users to protect the file. Even if you are willing to count on them, there is the question of how to give them the tools they need so they can protect the file.
Note that, just like passwords, these files would be vulnerable if a user used one to log in from a public machine (which might have all sorts of spyware on it). It's an open question whether a file-based system might slip past the spyware, since it might not be looking for that. However, that is not so different from security by obscurity.
Also, you would want to make sure that you hash or encrypt the files on your system. Otherwise, you would be doing the equivalent of storing passwords in plain text, which would open the possibility of someone hacking your system and then being able to log in as any user.
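A sketch of that last point, assuming the server keeps only a salted hash of each uploaded key file; the in-memory dict stands in for a real user database.

```python
#!/usr/bin/env python3
"""Sketch: store only a salted hash of each user's key file, never the file itself."""
import hashlib
import hmac
import os

_users = {}                                 # username -> (salt, digest)

def _digest(salt, file_bytes):
    return hashlib.sha256(salt + file_bytes).digest()

def register(username, file_bytes):
    salt = os.urandom(16)
    _users[username] = (salt, _digest(salt, file_bytes))

def login(username, file_bytes):
    if username not in _users:
        return False
    salt, stored = _users[username]
    # Constant-time comparison, the same habit as with password hashes.
    return hmac.compare_digest(stored, _digest(salt, file_bytes))

if __name__ == "__main__":
    keyfile = os.urandom(4096)              # the large "password" file
    register("alice", keyfile)
    assert login("alice", keyfile)
    assert not login("alice", os.urandom(4096))
```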
What you are describing matches the physical factor of a two-factor (password + physical factor) authentication system. But it cannot be a replacement for the password, because a password is something you know and a file is something you have. If you turn the password into a file, you lose one factor and somehow have to compensate for that :-) maybe by using something you are.
I was at a meeting recently for our startup. For half an hour, I listened to one of the key people on the team talk about timelines, the market, confidentiality, being there first and so on. But I couldn't help asking myself: all that talk about confidentiality is nice, but there isn't much talk about physical security. This thing we're working on is web-hosted. What if, after uploading it to the web host, someone walks into the server room (I don't even know where that is) and grabs a copy of the code and the database? The database is encrypted, but with access to the machine, you'd have the key.
What do the big boys do to guard their code from being stolen? Is it common for startups to host it themselves in some private data center, or what? Does anyone have facts about what known startups have done, like Digg, etc.? Does anyone have firsthand experience with this issue?
Very few people are interested in seeing your source code, and the sysadmins working at your host are most likely not among them. It's probably not the case that they can copy your code, paste it on another host and be up and running, stealing your customers in 42 minutes.
People might be interested in seeing the contents of your DB if you're storing things like user contact information (or even more extreme, financial information). How do you protect against this? Do the easy, host independent things (like storing passwords as hashes, offloading financial data to financial service providers, HTTPS/SSL, etc.) and make sure you use a host with a good reputation. Places like Amazon (with AWS) and RackSpace would fail quickly if it got out that they regularly let employees walk off with customer (your) data.
How do the big boys do it? They have their own infrastructure (places like Google, Yahoo, etc.) or they use one of the major players (Amazon AWS, Rackspace, etc.).
How do other startups do it? I remember hearing that Stack Overflow hosts their own infrastructure (details, anyone?). This old piece on Digg indicates that they run their own too. These two instances do not mean that all (or even most) startups have an internal infrastructure.
Most big players in the hosting biz have a solid security policy on their servers. Some very advanced technology goes into securing most high end data centers.
Check out the security at the host that I use
http://www.liquidweb.com/datacenter/
What if, after uploading it to the web host, someone walks into the server room (I don't even know where that is) and grabs a copy of the code and the database? The database is encrypted, but with access to the machine, you'd have the key.
Then you're screwed :-) Even colo or rented servers should be under an authorized-access-only policy that is physically enforced at the site. Of course, that doesn't prevent anyone from obtaining the "super secret" code by other means. For that, hire expensive lawyers and get insurance.
By sharing a system with other users' accounts you have more to worry about. It can be done without ever having a problem, but you are less secure than if you controlled the entire system.
Make sure your code is chmod 500, or even chmod 700; as long as the last two digits are zeros, you are better off. If you do chmod 777, then everyone on the system will be able to access your files.
However, there are still problems. A vulnerability in the Linux kernel would give an attacker access to all accounts, and a vulnerability in MySQL would give an attacker access to all databases. With your own system, you don't have to worry about other users on the same machine exploiting these.