How safe is code hosted elsewhere - security

I was at a meeting recently for our startup. For half an hour, I was listening to one of the key people on the team talk about timelines, the market, confidentiality, being there first and so on. But I couldn't help asking myself: all that talk about confidentiality is nice, but there isn't much talk about physical security. This thing we're working on is web-hosted. What if, after we upload it to the web host, someone walks into the server room (we don't even know where that is) and grabs a copy of the code and the database? The database is encrypted, but with access to the machine, you'd have the key.
What do the big boys do to guard their code from being stolen? Is it common for startups to host it themselves in a private data center? Does anyone have facts about what known startups have done, like Digg, etc.? Does anyone have firsthand experience with this issue?

Very few people are interested in seeing your source code, and the sysadmins working at your host are most likely not among them. Even then, it's probably not the case that they could copy your code, paste it on another host, and be up and running, stealing your customers, in 42 minutes.
People might be interested in seeing the contents of your DB if you're storing things like user contact information (or, more seriously, financial information). How do you protect against this? Do the easy, host-independent things (store passwords as salted hashes, offload financial data to financial service providers, use HTTPS/SSL, etc.) and make sure you use a host with a good reputation. Places like Amazon (with AWS) and Rackspace would fail quickly if it got out that they regularly let employees walk off with customer (your) data.
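To make the "store passwords as hashes" advice concrete, here is a minimal sketch using only Python's standard library; the iteration count and salt size are assumptions for illustration, not a vetted policy:

    import hashlib
    import hmac
    import os

    ITERATIONS = 200_000  # assumed work factor; tune for your own hardware

    def hash_password(password):
        """Return (salt, digest) to store instead of the plain-text password."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, stored_digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
        return hmac.compare_digest(candidate, stored_digest)  # constant-time comparison

Even if someone walks off with a dump of the users table, they get salts and digests rather than reusable passwords.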
How do the big boys do it? They have their own infrastructure (places like Google, Yahoo, etc.) or they use one of the major players (Amazon AWS, Rackspace, etc.).
How do other startups do it? I remember hearing that Stack Overflow hosts their own infrastructure (details, anyone?). This old piece on Digg indicates that they run themselves too. These two instances do not mean that all (or even most) startups have an internal infrastructure.

Most big players in the hosting biz have a solid security policy on their servers. Some very advanced technology goes into securing most high end data centers.
Check out the security at the host that I use
http://www.liquidweb.com/datacenter/

What if, after we upload it to the web host, someone walks into the server room (we don't even know where that is) and grabs a copy of the code and the database? The database is encrypted, but with access to the machine, you'd have the key.
Then you're screwed :-) Even colo or rented servers should be under an authorized-access-only policy that is physically enforced at the site. Of course that doesn't prevent anyone from obtaining the "super secret" code by other means. For that, hire expensive lawyers and get insurance.

By sharing a system with other user accounts, you have more to worry about. It can be done without ever having a problem, but you are less secure than if you controlled the entire system.
Make sure your code is chmod 500, or even chmod 700; as long as the last two digits are zeros, you're better off (a sketch of setting this from a script follows below). If you do a chmod 777, everyone on the system will be able to access your files.
However, there are still problems. A vulnerability in the Linux kernel would give an attacker access to all accounts. A vulnerability in MySQL would give an attacker access to all databases. By having your own system, you don't have to worry about these shared-host attacks.
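If you would rather set those permissions from a deployment script than by hand, here is a minimal sketch; the directory path is a made-up example:

    import os
    import stat

    CODE_DIR = "/home/myapp/public_html"  # hypothetical location of your code

    # stat.S_IRWXU == 0o700: owner can read/write/execute, group and others get nothing.
    os.chmod(CODE_DIR, stat.S_IRWXU)
    for root, dirs, files in os.walk(CODE_DIR):
        for name in dirs + files:
            os.chmod(os.path.join(root, name), stat.S_IRWXU)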

Related

Client Server Security Architecture

I would like to get my head around how best to set up a client-server architecture where security is of utmost importance.
So far I have the following, which I hope someone can tell me is good enough, or whether there are other things I need to think about. Or whether I have the wrong end of the stick and need to rethink things.
Use SSL certificate on the server to ensure the traffic is secure.
Have a firewall set up between the server and client.
Have a separate sql db server.
Have a separate db for my security model data.
Store my passwords in the database using a secure hashing function such as PBKDF2.
Passwords generated using a salt which is stored in a different db to the passwords.
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
I would really like to know whether there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
I have tried searching for some diagrams which could help me understand but I cannot find any which seem to be appropriate.
Thanks in advance
Hardening your architecture can be a challenging task, and sharding your services across multiple servers and over-engineering your architecture for the semblance of security could prove to be your largest security weakness.
However, a number of questions arise when you come to design your IT infrastructure which can't be answered in a single SO answer (I will try to find some good white papers and append them).
There are a few things I would advise, which are somewhat opinionated and backed up by my own thinking around them.
Your Questions
I would really like to know whether there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
Go with the cloud. You do not need to store things on physical servers anymore unless you have current business processes running core business functions that already work on local physical machines.
Running physical servers increases your system administration burden with things such as HDD encryption and physical security requirements, which can be misconfigured or ignored entirely.
Use SSL certificate on the server to ensure the traffic is secure.
This is normally a no-brainer and I would go with a straight "yes"; however, you must take the context into consideration. If you are running something such as a blog or documentation site that does not transfer any sensitive information at any point through HTTP, then why use HTTPS? HTTPS has its own overhead; it's minimal, but it's still there. That said, if in doubt, enable HTTPS.
Have a firewall set up between the server and client.
That is suggested; you may also want to opt for a service such as CloudFlare WAF, though I haven't personally used it.
Have a separate sql db server.
Yes, though not necessarily for security purposes. Database servers and web application servers have different hardware requirements, and optimizing both on the same box is not very feasible. Additionally, having them on separate boxes improves your scalability quite a bit, which will be beneficial in the long run.
From a security perspective, it's mostly another illusion: "If I have two boxes and the attacker compromises one [the web application server], he won't have access to the database server."
At first sight this might seem to be the case, but it rarely is. Compromising the web application server is still almost a guaranteed game over. I will not go into much detail on this (unless you specifically ask me to); however, it's still a good idea to keep both services separate from each other on their own boxes.
Have a separate db for my security model data.
I'm not sure I understood this; what security model are you referring to exactly? Care to share a diagram or two (maybe an ERD) so we can get a better understanding?
Store my passwords in the database using a secure hashing function such as PBKDF2.
Obvious yes. What I am about to say, however, is controversial and may be flagged by some people (it's a bit of a hot debate): I recommend using BCrypt instead of PBKDF2, due to BCrypt being slower to compute (and therefore slower to crack).
See - https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage
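As a sketch of that recommendation, using the third-party bcrypt package for Python (pip install bcrypt); the work factor shown is an assumption:

    import bcrypt

    def hash_password(password):
        # gensalt() generates a random salt; rounds=12 is an assumed work factor.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

    def check_password(password, stored_hash):
        # The salt (and work factor) is embedded in stored_hash, so nothing extra is stored.
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

Note that the value bcrypt returns already contains the salt, which is relevant to the next point.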
Passwords generated using a salt which is stored in a different db to the passwords.
If you use BCrypt, I don't see why this is required (I may be wrong), since the salt is stored alongside the hash. I go into more detail on username and password hashing in the following StackOverflow answer, which I recommend you read - Back end password encryption vs hashing
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
This purely depends on your goals, budget and requirements. I would personally go for AWS, however you should read some more on alternative platforms such as Google Cloud Platform before making your decision.
Last Remarks
All of the things you mentioned are important, and it's good that you are even considering them (most people just ignore such questions or go with the most popular answer); however, there are a few additional things I want to point out:
Internal Services - Make sure that no unneeded services or processes are running on the server, especially in production. Such services are often running old versions of their software (since you won't be administering them) that could be used as an entry point to compromise your server.
Code Securely - This may seem like another no-brainer yet it is still overlooked or not done properly. Investigate what frameworks you are using, how they handle security and whether they are actually secure. As a developer (and not a pen-tester) you should at least use an automated web application scanner (such as Acunetix) to run security tests after each build that is pushed to make sure you haven't introduced any obvious, critical vulnerabilities.
Limit Exposure - Goes somewhat hand-in-hand with my first point. Make sure that services are only exposed to the other services that depend on them and nothing else. As a rule of thumb, keep everything closed and open up gradually only when strictly required (see the sketch just below this list).
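As a sketch of the "keep everything closed" rule on AWS, here is what opening only HTTPS on a web-tier security group might look like with boto3 (the AWS SDK for Python); the group ID is a placeholder and this is illustrative, not a complete firewall policy:

    import boto3

    ec2 = boto3.client("ec2")

    # Expose only HTTPS (443) to the world on the hypothetical web-tier group;
    # SSH, the database port, etc. stay closed or are opened only to specific groups.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder security group ID
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS only"}],
        }],
    )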
My last few points may come off as broad. The intention is to keep a certain philosophy in mind when developing your software and infrastructure, rather than a fixed checklist of boxes to tick.
There are probably a few things I have missed out. I will update the answer accordingly over time if need be. :-)

Is basic HTTP auth in CouchDB safe enough for replication across EC2 regions?

I can appreciate that seeing "basic auth" and "safe enough" in the same sentence is a lot like reading "Is parachuting without a parachute still safe?", so I'll do my best to clarify what I am getting at.
From what I've seen online, people typically describe basic HTTP auth as being unsecured due to the credentials being passed in plain text from the client to the server; this leaves you open to having your credentials sniffed by a nefarious person or man-in-the-middle in a network configuration where your traffic may be passing through an untrusted point of access (e.g. an open AP at a coffee shop).
To keep the conversation between you and the server secure, the solution is typically to use an SSL-based connection: your credentials might still be sent in plain text, but the communication channel between you and the server is itself secured.
So, onto my question...
In the situation of replicating one CouchDB instance from an EC2 instance in one region (e.g. us-west) to another CouchDB instance in another region (e.g. singapore) the network traffic will be traveling across a path of what I would consider "trusted" backbone servers.
Given that (assuming I am not replicating highly sensitive data) would anyone/everyone consider basic HTTP auth for CouchDB replication sufficiently secure?
If not, please clarify what scenarios I am missing here that would make this setup unacceptable. I do understand for sensitive data this is not appropriate, I just want to better understand the ins and outs for non-sensitive data replicated over a relatively-trusted network.
Bob is right that it is better to err on the side of caution, but I disagree with his conclusion. Bob could be right in this case (see details below), but the problem with his general approach is that it ignores the cost of paranoia. It leaves "peace dividend" money on the table. I prefer Bruce Schneier's assessment that it is a trade-off.
Short answer
Start replicating now! Do not worry about HTTPS.
The greatest risk is not wire sniffing but your own human error, followed by software bugs, either of which could destroy or corrupt your data. Make a replica! If you will be replicating regularly, plan to move to HTTPS or something equivalent (SSH tunnel, stunnel, VPN).
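To make "start replicating now" concrete, here is a minimal sketch that kicks off a one-off replication through CouchDB's _replicate endpoint using the Python requests library; the hostnames, database name, and credentials are placeholders, and the target URL can simply become https:// once you move to TLS:

    import requests

    SOURCE = "http://localhost:5984/mydb"                          # local database (placeholder name)
    TARGET = "http://admin:secret@couch-sg.example.com:5984/mydb"  # remote database, basic auth in the URL

    resp = requests.post(
        "http://localhost:5984/_replicate",
        json={"source": SOURCE, "target": TARGET, "create_target": True},
        auth=("admin", "secret"),  # credentials for the local CouchDB
    )
    resp.raise_for_status()
    print(resp.json())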
Rationale
Is HTTPS easy with CouchDB 1.1? It is as easy as HTTPS can possibly be, or in other words: no, it is not easy.
You have to make an SSL key pair and purchase a certificate or run your own certificate authority (you're not foolish enough to self-sign, of course!). The user's hashed password is plainly visible from your remote couch! To protect against cracking, will you implement bi-directional SSL authentication? Does CouchDB support that? Maybe you need a VPN instead? What about the security of your key files? Don't check them into Subversion! And don't bundle them into your EC2 AMI! That defeats the purpose. You have to keep them separate and safe. When you deploy or restore from backup, copy them manually. Also, password-protect them so that if somebody gets the files, they can't steal (or worse, modify!) your data. And then, when you start CouchDB or begin a replication, you must manually input the password before it will work.
In a nutshell, every security decision has a cost.
A similar question is, "Should I lock my house at night?" It depends. Your profile says you are in Tucson, so you know that some neighborhoods are safe, while others are not. Yes, it is always safer to lock all of your doors all of the time. But what is the cost to your time and mental health? The analogy breaks down a bit because the time invested in worst-case security preparedness is much greater than twisting a bolt lock.
Amazon EC2 is a moderately safe neighborhood. The major risks are opportunistic, broad-spectrum scans for common errors. Basically, organized crime is scanning for common SSH accounts and web apps like WordPress, so they can steal a credit card or other database.
You are a small fish in a gigantic ocean. Nobody cares about you specifically. Unless you are specifically targeted by a government or organized crime, or somebody with resources and motivation (hey, it's CouchDB—that happens!), then it's inefficient to worry about the boogeyman. Your adversaries are casting broad nets to get the biggest catch. Nobody is trying to spear-fish you.
I look at it like high-school integral calculus: measuring the area under the curve. Time goes to the right (x-axis). Risky behavior goes up (y-axis). When you do something risky, you save time and effort, but the graph spikes upward. When you do something the safe way, it costs time and effort, but the graph moves down. Your goal is to minimize the long-term area under the curve, but each decision is case-by-case. Every day, most Americans ride in automobiles: the single most risky behavior in American life. We intuitively understand the risk-benefit trade-off. Activity on the Internet is the same.
As you imply, basic authentication without transport layer security is 100% insecure. Anyone on EC2 that can sniff your packets can see your password. Assuming that no one can is a mistake.
In CouchDB 1.1, you can enable native SSL. In earlier versions, use stunnel. Adding SSL/TLS protection is so simple that there's really no excuse not to.
I just found this statement from Amazon which may help anyone trying to understand the risk of packet sniffing on EC2.
Packet sniffing by other tenants: It is not possible for a virtual instance running in promiscuous mode to receive or "sniff" traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. This includes two virtual instances that are owned by the same customer, even if they are located on the same physical host. Attacks such as ARP cache poisoning do not work within EC2. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers should encrypt sensitive traffic.
http://aws.amazon.com/articles/1697

Steps to protect sensitive information in a MySQL Database

I consider myself to be quite a good programmer, but I know very little about server administration. I'm sorry if these questions are noobish, but I would really appreciate some advice or links on steps I can take to make this more secure.
I've completed a project for a client that involves storing some very sensitive information, i.e. personal details of big donors. From a programming perspective it's protected using user authentication.
I don't mind spending some money if it means the info will be more secure, what other steps should I take?
Can the database be encrypted somehow, so that even if the server is compromised people can't just dump the MySQL DB and have everything?
Is it worth purchasing an SSL certificate?
The site is currently hosted on a personal hosting plan with a reasonably trustworthy host. Would a virtual private server be more secure? Are there special hosts I can use that take additional steps to protect info (i.e. would it be more secure on Amazon S3)?
As a side note to the specific question, I would recommend reading some books on computer/programming security. Some good ones are 19 Deadly Sins of Software Security and Writing Solid Code.
You don’t need to encrypt the database itself, just encrypt the data before storing it. (Make sure to use real, cryptographically-secure algorithms instead of making one up yourself.)
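A minimal sketch of "encrypt the data before storing it", using the third-party cryptography package for Python (pip install cryptography); where you keep the key (ideally somewhere other than the database server) and the field being encrypted are assumptions for illustration:

    from cryptography.fernet import Fernet

    # Generate once and store the key somewhere safer than the database itself.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt a sensitive field before writing it to MySQL.
    token = fernet.encrypt("Jane Donor, +1 555 0100".encode("utf-8"))

    # Decrypt it only when an authenticated user needs to read it back.
    plaintext = fernet.decrypt(token).decode("utf-8")

A dump of the table then yields only ciphertext, as long as the key itself was not stored alongside it.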
Using SSL is definitely an important step if you want to avoid MITM attacks or snooping. A certificate allows you to use SSL without having to take extra steps like installing a self-signed one on each of the client systems (not to mention other benefits like revocation of compromised certs and such).
It depends on just how sensitive the information is and how bad leakage would be. You may want to read some reviews of hosts to get an idea of how good the host is. (If possible, sort the reviews ascending by rating and look at the bad reviews to see if they are objective problems that could apply to you and/or have to do with security, or if they are just incidental or specific issues to that reviewer.) As for the “cloud”, you would kind of be taking a chance since real-world security and privacy of it has yet to be determined. Obviously, if you do go with it, you’ll want a notable, trustworthy host like Amazon or Microsoft since they have benefits like accountability and work constantly and quickly to fix any problems.
HTH

Are services like AWS secure enough for an organization that is highly responsible for its clients' privacy?

Okay, so we have to store our clients' private medical records online, and the web site will also have a lot of requests, so we have to use some scaling solution.
We can have our own share of a datacenter and run something like Zend Server Cluster Manager on it, but services like Amazon EC2 look a lot easier to manage, and they are considerably cheaper too. We just don't know if they are secure enough!
Are they?
Any better solutions?
More info: I know that there is a reference server and it's highly secured; without it, even the decrypted data on the cloud server would be useless - just a bunch of meaningless numbers that aren't even linked to each other.
Making the question more clear: Are there any secure storage and process service providers that guarantee there won't be leaks from their side?
First off, you should contact AWS and explain what you're trying to build and the kind of data you deal with. As far as I remember, they have regulations in place to accommodate most if not all the privacy concerns.
E.g., in Germany such a thing is called an "Auftragsdatenvereinbarung" (a data processing agreement). I have no idea how this relates and translates to other countries. AWS offers this.
But no matter whether you go with AWS or another cloud computing service, the issue stays the same. Therefore, what is possible is probably best answered by a lawyer, and based on that hopefully well-educated (and expensive) recommendation, I'd go cloud shopping, or maybe not. If you're in the EU, there are a ton of regulations, especially in regard to medical records -- some countries add even more on top.
From what I remember, it's basically required to have end-to-end encryption when you deal with these things.
Last but not least, security also depends on the setup, the application, etc.
For complete and full security, I'd recommend a system that is not connected to the Internet. All others can fail.
You should never outsource highly sensitive data. Your company, and only your company, should have access to it - in both software and hardware terms. Even if your hosting provider is generally trusted, someone there might simply steal hardware.
Depending on the size of your company, you should have your own servers - preferably inaccessible even to the technicians in your data center (supposing you don't own the data center ;).
So the more important the data is, the fewer outside people should have access to it by any means. In the best case, you can name every person who has access to it in any way.
(Update: This might not apply to anonymous data, but as you're speaking of customers I don't think that applies here?)
(On third thought: there are probably also laws to take into consideration regarding how you have to handle that kind of information ;)

How to enforce locking workstation when leaving? Is this important?

Within your organization, is every developer required to lock his workstation when leaving it?
What do you see as the risks when workstations are left unlocked, and how important do you think such risks are compared to "over-the-wire" (network hacking) security risks?
What policies do you think are most effective for enforcing workstation locking? (The policies might be either "technical", like a domain group policy setting that makes screensavers lock the machine, or "social", like applying penalties to those who do not lock, or encouraging goating.)
The primary real-world risk is your co-workers "goating" you. You can enforce locking by setting a group policy to run the screen saver after X minutes, which can lock the computer as well.
For me, this has become habit. On a Windows machine, pressing Windows-L is a quick way of locking the machine.
The solution might be social rather than technical. Convince people that they don't want anyone else reading their email or spoofing their accounts when they step away.
In my org (government), yes. We deal with sensitive data (medical and SSN). It's instinctual for me: Windows+L every time I walk away.
The policy is both social and technical. On the social side, we're reminded that personal security is important, and everyone is aware of the data with which we are privy. On the technical side, the workstations use a group policy that turns on the screensaver after 2 minutes, with "On resume, password protect" turned on (and unable to be turned off).
No, but I'm an organization of 1 - the last time I worked in a large organization, we were not required to, but encouraged to. If I were in an environment with other people, I would probably lock my workstation now when I left it.
While certainly people with physical access can add hardware keyloggers, locking it does add an additional level of security. Depending on the type of organization you are, I think the risks are more from internal organizational snooping than over-the-wire attacks.
I used to work at a very large corp where the workstation required your badge to be inserted into it to work. You weren't allowed to move around the building without that badge on you (you needed it to open the doors anyway). Taking the badge out of the workstation's smartcard reader logged you out automatically.
Off topic but even neater: the workstations were more like "networkstations" (not that such a system is necessary for the badge scheme I've just described), and the badge held your session. Pop it into another workstation in another building and there's your session, just as you left it when you pulled the badge on the other computer.
So they basically solved the issue by physically forcing people to log off their workstation, which I think is the best way to enforce any kind of security-critical policy. People forget, it's human nature.
The only place I have seen where this is truly important is government, defense, and medical facilities. The best way to enforce it is through user policies on Windows and "dot files" on Unix systems where a screensaver and timeout are pre-chosen for you when you log in and you aren't allowed to change them.
I never lock my workstation.
When my coworker and friend mocked me and threatened to send embarrassing emails from my machine, I mocked him back for thinking that locking does ANYTHING when I have physical access to his machine, and I linked him to this URL:
http://www.google.com/search?hl=en&q=USB+keylogger
I don't work with any sensitive data my coworkers wouldn't already have equal access to, but I am doubtful of the effectiveness of workstation locking against a determined snoopish coworker.
edit: the reason I don't lock is that I used to, but it kept creating weird instabilities in Windows. I reboot only on demand, so I expect my machine to run for months without becoming unstable, and locking was getting in the way.
Goating can get you fired, so I don't recommend it. However, if that's not the case where you work, it can be effective, even if it's just a broad email that says, "I will always lock my workstation from now on."
At the very least, machines should lock themselves after X minutes of inactivity, and this should be set via group policy.
Security is about raising the bar, making a greater amount of effort necessary to accomplish something bad. Not locking your workstation at all lowers that bar.
We combine social and technical methods to encourage IT people to lock their PCs: default screen saver/locking settings plus the threat of goating. (The last place I worked actually locked the screen saver settings.)
One thing to keep in mind is that if you have applications (particularly if they are SSO) that track activity, changes, or both, the data you collect may be less valuable if you can't be sure the user recorded in the data is the user who actually made those changes.
Even at a company like ours, where there isn't a lot of company-related sensitive information available to most users, there's certainly potential for someone to acquire NBR data from another employee via an unlocked workstation. How many people save passwords to websites on their computer? Amazon? Fantasy football? (A dangerous goating technique: drop a key player from someone else's roster. It's really only funny if the commish is in on it with you, so the player can be restored ...)
Another thing to consider is that you can't be sure that everyone in your building belongs there. It's much easier to hack into a network if you're actually in the building: of course the vast majority of people in the building are there because they're allowed to be, but you really don't want to be the victimized company when that one guy in a million does get into the building. (It doesn't even have to be an intentionally bad guy: it could be somebody's kid, a friend, a relative ...) Of course the employee who let that person in could also let that person use their computer, but that kind of attack is much more difficult to stop.
Locking your workstation each time you go for a coffee means that you type your password 10 times per day rather than once. And everyone around you can see you type it. And once they have that password, they can impersonate you from remote computers, which is far more difficult to prove than using your PC in the office with everyone watching. So surely locking your workstation is actually more of a security risk?
I'm running Pageant and have my SSH public key distributed across all the servers here. Whoever sits down at my workstation can basically log into any account anywhere with my keys.
Therefore I always lock my machine, even for a 30s break. (Windows-L is basically the only Windows-key based shortcut I know.)
I personally think the risk is low, but in my experience most of the time it's not a matter of opinion -- it's often a requirement for big corporate or government clients who will actually come in and audit your security. In that case, some kind of technical (group policy) solution would be best, because you can actually prove you are complying with the requirement. I would also do it in cases where there is a legal privacy requirement (like medical data and HIPAA).
I worked at a place where the people who supplied some of our equipment were from a company in direct competition with us. They were in the building when the equipment required maintenance. An email would go out every now and then saying they would be there, please lock your machine when you're not at it. If a competitor got our source because a developer forgot to lock their machine, the developer would be looking for a new job.
We are required to at work, and we enforce it ourselves. Mass chats are started professing love for people, emails are sent, backgrounds are changed, etc. Gotta love the first day when it happens to a new hire, everyone is sure to leave a nice note :)
The place I used to work had a policy on always locking your workstation. They enforced it by setting up a company wide mailing list - if you left your workstation unlocked, your co-workers would send an embarrassing mail to the list from your account, then lock your machine. It was kind of funny, and also kind of annoying, but it generally worked.
You could start sitting down at people's workstations and loading up [insert anything bad here] right after they walk away. That will work I'm sure.
In some government offices I've visited that may have members of the public walking about, the staff have smartcards that plug into a USB reader on the PC. The card is on a necklace around the user's neck and locks the workstation when it's removed.
The owner of my company (and a developer), will make a minor change to your code window if you left your computer unlocked, making you go crazy wondering why your code isn't working until you find it.
Have to say, I never keep my computer unlocked after hearing about that prank, I go crazy enough as it is with some of my code.
You could rig up a simple, foolproof way: have a fingerprint reader plugged into the computer and programmed for your password, then wear a necklace with a USB receiver. If you move away from the workstation, the screensaver actively locks it; when you come back within range, swipe your finger on the fingerprint reader to unlock it. I think that would be quite a cheap way of doing it: simple, unintrusive, and clutter-free, with no forgetting to lock via 'WinKey+L'.
Hope this helps,
Best regards,
Tom.
Is it important, or is it a good habit? Locking your computer may not be important, say, in your own house, but if you are at a client's office and walk away, then I would say it is important.
As for how to enforce...
After reading Jeff's blog entry Don't Forget To Lock Your Computer, I like to change co-workers' desktop backgrounds to...
Needless to say, co-workers started locking their computers.
GateKeeper is an easy solution to this. It locks the workstation automatically when the user walks away, and unlocks automatically when the user comes back within range of the computer. It can also require two factor authentication and other methods of lock/unlock.
