Secure version control - security

I would like to have your opinion about the subject "version control",
but focusing on security.
Some common features:
allowing access to the source code through clients only
(no way to access the source code on the server directly)
granting permission to access only the source code a developer is allowed to modify
(i.e. a developer should be able to access only the source code related to his own project).
So it should be possible to create user groups and grant different levels of access.
tracking modifications, check-ins, and check-outs, and the developers who made them...
...and, surely, I am forgetting something.
What are the most "paranoid" version control systems that you know of?
Which features do they implement?
My aim is to create an environment for developing applications that manage sensitive data: credit cards, passwords, and so on...
A malicious developer may insert a backdoor or intentionally alter some security feature, so access to the source code should be strictly controlled.
I must confess that my knowledge of version control systems is poor, so I fear that customizing SVN could be a hard task for me.
Thanks

Perforce is widely used in the finance industry, where security of code is sometimes an issue.
You can set up gatekeepers and access controls to restrict visibility of code and produce audit trails for various activities for SOX compliance.
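For what it's worth, those access controls live in the Perforce protections table, edited with "p4 protect". A minimal, hypothetical excerpt (the group names and depot paths are made up) might look like this:

# Hypothetical excerpt from a Perforce protections table (edit with: p4 protect)
# Devs may write only their project, auditors read it, and the key-material subtree is excluded
write  group  payments-dev  *  //depot/payments/...
read   group  auditors      *  //depot/payments/...
list   group  payments-dev  *  -//depot/payments/keys/...
super  user   p4admin       *  //...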

The ones that can do what you want are not the ones you would actually want to use. For example, ClearCase or Serena Dimensions can do all of the above... but you'd be bonkers to want to use them. (Ah, I hear you say, I'm the admin so I don't have to take that pain. Well, these also require lots of care and attention - we had 8 ClearCase admins at the last company I worked for. You don't want the nightmare of continually helping users with them.)
So. You can have the horrible ones, or you could just use the friendly, easy-to-use SVN and implement your own checkout tracking (using the http transport and the Apache logs), and slap access control permissions on every directory. You'd also have to secure the end repository on disc, but you have to do this with every SCM; even something like Dimensions stores its database in Oracle - if you had access to the Oracle instance, you could fiddle with the saved bits, so you have to secure that anyway.
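To make the SVN suggestion concrete: with SVN served over Apache, the per-directory permissions are a plain mod_authz_svn file. A minimal sketch, with made-up group and repository names:

[groups]
payments-devs = alice, bob

# Only the payments team can see or change the payments project; everyone else gets nothing
[payments:/]
@payments-devs = rw
* =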

Perforce has those features and is a really good product imho.

Use a well-known, industry-standard system like Subversion. It can control access to individual projects very simply, and using the web server's authz configuration it can control individual access to specific files in each project.
The only non-standard issue is logging check-outs, but the web server can easily log this information for you.
Your users will thank you.
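For the check-out logging part, mod_dav_svn can feed Apache a high-level description of each Subversion operation; a sketch of the httpd.conf line (the log path is illustrative):

# Records who ran which SVN operation (checkout, update, commit, ...) and when
CustomLog /var/log/apache2/svn_ops.log "%t %u %{SVN-ACTION}e" env=SVN-ACTION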

GitHub is a wrapper around Git that provides these features for a Git server. Compared to a raw Git server, it notably includes access control, and it also has useful web interfaces to the code for authorised users.

Related

Centralized vs. Distributed version control security

As my company begins to further explore moving from centralized version control tools (CVS, SVN, Perforce, and a host of others) to offering teams distributed version control tools (Mercurial, in our case), I've run into a problem:
The Problem
A manager has raised the concern that distributed version control may not be as secure as our CVCS options because the repo history is stored locally on the developer's machine.
It's been difficult to nail down his exact security concern, but I've gathered that it centers on the fact that a malicious employee could steal not only the latest intellectual property but our whole history of changes just by copying a single folder.
The Question(s)
Do distributed version control systems really introduce new security concerns for projects?
Is it easier to maliciously steal code?
Does the complete history represent an additional threat that the latest version of the code does not?
My Thoughts
My take is that the idea that the centralized model is more secure may be mistaken; the history only seems safer because it sits off on its own box. Given that users with even read access to a centralized repo can selectively extract snapshots of the project at any key revision, I'm not sure the DVCS model makes it all that much easier. Also, most CVCS tools allow you to extract the whole repo's history with a single command so that you can import it into other tools.
I think the other issue is just how important the history is compared to the latest version. Granted, someone could have checked in a top-secret file and then deleted it, and in that case the history would pretty quickly become significant. But even in that scenario a CVCS user could check out that top-secret revision with a single command.
I'm sure I could be missing something or downplaying risks as I'm eager to see DVCS become a fully supported tool option. Please contribute any ideas you have on security concerns.
If you have read access to a CVCS, you have enough permissions to convert the repo to a DVCS, which people do all the time. No software tool is going to protect you from a disgruntled employee stealing your code, but a DVCS has many more options for dealing with untrusted contributors, such as a gatekeeper workflow. Hence its widespread use in open source projects.
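To illustrate how low the bar already is with a CVCS, anyone with read access can pull the complete history into a local Git repository with a single command (the URL here is hypothetical):

# Mirrors every revision of the SVN project into a local Git repository
git svn clone http://svn.example.com/repos/project --stdlayout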
You are right in that distributed version control does not really introduce any new security concerns, since the developer already has access to the code in both cases. I can only think that since it is easier to work offline and offsite with Git, developers might be more tempted to do so than with a centralized system. I would push to enforce encryption on all corporate laptops that hold code.
Not really easier, just the same. If you enable logging, you will have the same information about when the code is accessed.
I personally do not think so. It might represent the thought process leading to certain decisions but not necessarily more.
It comes down to knowledge of how to implement security measures in both cases. If you have more experience with one system than with another, you are more likely to put more safeguards in place to prevent such loss, but at the end of the day you are trusting your developers with the code the minute you allow them access to it. No way around that.
DVCS provides various protections against unauthorized writing, which is why it is popular with open-source teams. It has several frustrating limitations for controlling reading; open-source teams do not care about this.
The first problem is that most DVCS encourage many copies of the full source. The typical granularity is the full repo. This can include many unneeded branches and even entire other projects, besides the concern of history (along with searchable commit comments that can make the code even more useful to the attacker). CVCS encourages developers to copy as little as possible to their desktop, since the less they copy, the faster it works. The less you put on mobile devices, the easier it is to secure.
When DVCS is implemented with many devices acting as servers, it is much more difficult to implement effective network security. Attacking a local CVCS workspace requires the attacker to gain access to the filesystem. Attacking a DVCS node generally requires attacking the DVCS itself on any device hosting the information (and remember: the folks who maintain most DVCS's are opensource guys; they don't care nearly as much about read controls). The more devices that host repositories, the more likely that users will set up anonymous read access (which again, DVCS encourages because of its opensource roots). This greatly simplifies the job of an attacker who is doing random sweeps.
CVCS that are based on URLs (like Subversion) open the opportunity for quite fine-grained access control, such as per-branch access. DVCS tends to fight this kind of access control.
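As a rough illustration, a Subversion authz fragment scoped to a single branch might look like this (all names hypothetical):

# Only release managers may commit to the release branch; developers get read-only
[project:/branches/release-1.0]
@release-managers = rw
@developers = r
* =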
I know developers like DVCS, but there's no way it can be secured as effectively as CVCS. Most environments do a terrible job of securing their CVCS, and if that's the case then it doesn't matter which you use. But if you take access control seriously, you can have much greater control with CVCS as part of a broader least-privilege infrastructure.
Many may argue that there's no reason to protect source code. That's fine and people can argue about it. But if you are going to protect your source code, the best implementation is to not copy the source to random laptops (which are very hard to secure well), and rather have developers mount it from a central server. CVCS works well this way. DVCS makes no sense if you are going to keep it on a single server this way. If you are going to copy files to mobile devices, make sure you copy as little as possible. That's the opposite of DVCS.
There are a bunch of "security" issues; whether they are an issue depends on your setup:
There's more data floating around, which means the notional "attack surface" might be bigger (it depends on how you count).
But how much data does the "typical" developer check out? You might want to use a sparse checkout in svn (see the sketch after this list), but lazy people won't bother and some GUI tools don't support it, so they'll have all your code checked out anyway. Git users might be more likely to use multiple repos. This depends on you.
Authentication/access control might be better (and it might be worse!). This is largely a function of the VCS, not whether it is "D" or "C". svn:// is plaintext.
Is deleting files a priority, and how easy is it to do? An accidentally committed confidential file is more painful to remove in git if it happened in the distant past (but people might be more likely to notice).
Are you really going to notice a malicious user pulling the entire history instead of merely doing a checkout? It depends on how big your repository is and what your branches are like. It's easy for a full SVN checkout to take up more space than the repository itself due to branches.
Change history is generally not something you want to give away for free (even to people with a source code license), but how valuable is it? Maybe you have top-secret design methodologies or confidential information in your commit messages, but this seems unlikely.
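As mentioned above, a sparse checkout keeps the working copy down to what a developer actually needs; a minimal SVN sketch (the URL and module name are made up):

# Start with an empty working copy, then pull in only one module
svn checkout --depth empty http://svn.example.com/repos/project/trunk project
cd project
svn update --set-depth infinity src/moduleA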
And finally, security economics:
How much is the extra security worth?
How much is increased productivity worth?
How much is addressing the concerns of your developers worth?
(IIRC it turns out that users should ignore most security advice, because the expected cost is more than the expected benefit - this is especially true for things like certificates that expired yesterday. How much does it cost you to check the address bar every time you type in a password? How often do you catch a phishing attempt? What is the cost to you per thwarted phishing attempt? What is the cost per successful phish?)

Guarantee anonymity to users

I have programmed a system for internal behavior reporting for my company's intranet. I should not have access to its data (not being part of the controlling committee), but I do.
I've locked my account away from the data, but I could unlock it. I could store the data in an encrypted format, but, even if the key were chosen by someone else, I would have to store it (and the salt) somewhere, and could therefore read it and decrypt the data.
From a theoretical point of view (I'm not talking about a particular system or framework or utility), how can I not have access to the data stored in a system I have complete control of?
Seems to me that you could just set passwords such that only one user has access to the database, then allow someone else to set that password. It would make maintenance a bit more tricky, but then again a database shouldn't need a ton of maintenance on a tool like this once all is said, done, and thoroughly tested.
If this is internal, it would be nothing to set up a dedicated, physically secure WAMP or similar machine that's solely dedicated to this purpose. Have someone else tweak the root passwords and store them with the "committee" and you're off the hook, in theory.
I suppose if one were to be completely paranoid, one could build a web service to isolate the database completely on a separate network from the reporting functionality. In theory, you could set up the web service on a remote machine that your access has been removed from, then use the front-end to collect data and pass it to the web service. From there, it's completely out of your hands, with no "data out" web service to retrieve data.
Security is always a messy subject. I've worked in banking, ecommerce, and sports (drug testing) environments where I'm knee-deep in confidential data and it is more than just a bit scary. At some point, you just have to do the best you can do, document your safeguards, be "read in" on proper protocol and required background checks, do thorough testing with independent testers, and then just maintain complete transparency. In the IT world we have access to a ridiculous amount of information, and that's never going to go away.
The basic answer is Mandatory Access Control. The kind of access control most computer users are familiar with is Discretionary Access Control. In DAC (Discretionary Access Control), everything on the computer is owned by a user. Users can grant access to an object (file, service, peripheral, memory, etc.) to another user. Users can even transfer ownership of an object to another user. In MAC (Mandatory Access Control), at least some objects are not owned by any user. The rules governing how users can access or interact with these objects are fixed and unchangeable by any user.
In your example, the data generated by the reporting system should be protected by Mandatory Access Control, but the reporting system configuration may be owned by you. So you can control how the system behaves but not have access to the data it generates.
Microsoft began implementing MAC with Windows Vista. In Vista it was called Mandatory Integrity Control (MIC).
Linux can implement MAC with SELinux or AppArmor.
Mac OS X uses an implementation of the TrustedBSD MAC.
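To make the DAC/MAC distinction concrete on a Linux box with SELinux (the file paths and the type name below are hypothetical):

# DAC: the file's owner hands out access at will
chown reporter:committee /srv/reports/reports.db
chmod 640 /srv/reports/reports.db    # the owner decides who may read it

# MAC: the loaded SELinux policy decides, regardless of ownership
ls -Z /srv/reports/reports.db        # shows a security context, e.g. reports_data_t
# Unless the policy contains a rule allowing a domain to read reports_data_t,
# even the file's owner (or a confined admin shell) is denied.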
So, why isn't MAC used more often?
It takes effort. It is not easy to set up MAC, and it is hard to change once it is set up. It can be complicated. Most systems and services are built on the DAC model, and turning on MAC often makes services stop working.

Source code security on Trac

I have trac set up together with subversion. I want to allow some people to be able to add tickets, but I don't want them to access the repository. There will be other users who will be able to access the repo via trac. Currently I am using Apache 2 for authentication.
How secure is trac? How difficult is it for someone with limited access to access the source via trac?
I am not asking on how to disallow access to the source via trac. I know how to do that.
The question again is: How hard is it for someone without access to the source to hack in and get at the source?
If Trac itself has access to your repository and it gets compromised, the attacker has access to your repository by definition. In order to protect your repository from attackers taking over your Trac installation, you would need to block Trac itself from accessing your repository; this would, however, also prevent it from giving anybody access to the repository.
A compromised system still has all access permissions that it had before it was compromised, and whoever compromised it can make it do whatever they wish with its access permissions.
Funny thing, you actually can't rely on any answer provided here. I would say the correct approach is to conclude Trac is not that secure (just an assumption) and try to mitigate potential risks.
I assume your goal is to make sure "users" and "developers" can collaborate, but users will not be able to access sources under any circumstances (which is very good, by the way).
There are quite a lot of relevant recipes on the net, but I will provide the simplest one:
put your Trac behind Apache (you did that already)
use mod_rewrite (or equivalent Apache access rules; see the snippet after this list) to make sure "users" will not get access to [your URL]/browser, ...
configure Trac permissions as well
[paranoid mode], change default URLs in order to eliminate guessing
Basically, the idea is to filter users as early as possible in order not to rely on Trac's internal security.
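A hedged sketch of that second step, here using Apache's <LocationMatch> with a group file instead of mod_rewrite (the paths, realm, and group name are assumptions):

# Hide Trac's source-related views from everyone outside the "developers" group
<LocationMatch "^/trac/(browser|changeset|log|export)">
    AuthType Basic
    AuthName "Trac"
    AuthUserFile /etc/apache2/trac.htpasswd
    AuthGroupFile /etc/apache2/trac.groups
    Require group developers
</LocationMatch>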
You can also use OWASP Zed Attack Proxy Project to test Trac yourself:
The Zed Attack Proxy (ZAP) is an easy to use integrated penetration
testing tool for finding vulnerabilities in web applications.
It is designed to be used by people with a wide range of security experience
and as such is ideal for developers and functional testers who are new
to penetration testing. ZAP provides automated scanners as well as a
set of tools that allow you to find security vulnerabilities manually.
You can set permissions for every Trac user. For example, you can have user accounts that can only access the ticket system, but not the source browser, timeline or wiki.
In particular, you want to not grant the following permissions:
BROWSER_VIEW # View directory listings in the repository browser
LOG_VIEW # View revision logs of files and directories in the repository browser
FILE_VIEW # View files in the repository browser
CHANGESET_VIEW # View repository check-ins
I am not sure what you mean by "secure". Trac will enforce the permissions you have set for all its web access. It will not show the source browser pages to someone who does not have the proper permissions. In addition to that, you will have to configure SVN as well to not allow anonymous repository read access (otherwise they could by-pass Trac and access the repository directly).
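If the repository is also reachable over svn:// (not only through Apache), a minimal sketch of locking down svnserve.conf so nobody gets anonymous reads (assuming the stock svnserve layout):

# conf/svnserve.conf inside the repository
[general]
anon-access = none      # no anonymous read or write at all
auth-access = write     # authenticated users, per the password and authz files
password-db = passwd
authz-db = authz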
This is possible.
trac-admin /path/to/project permission remove <user> BROWSER_VIEW
trac-admin /path/to/project permission remove <user> LOG_VIEW
trac-admin /path/to/project permission remove <user> FILE_VIEW
trac-admin /path/to/project permission remove <user> CHANGESET_VIEW
This will remove all repository-related permissions. We use Trac; it works well and we haven't had any security problems as of yet.
Your question is hard to answer by nature. If you want to know about known security holes, you should check Trac's own bug tracker, or your distribution's. Debian doesn't report any security-related bugs in Trac, for example. So, to the best of my knowledge, it is impossible to crack Trac itself and gain access you haven't been granted.
Of course, that doesn't mean there are no security holes. It only means they haven't been found by good guys (who would have reported them). But it's the best you can get when it comes to security, short of doing (or hiring people to do) a full source audit.

BitRock InstallBuilder issues

I have recently been tasked with finding a suitable InstallShield replacement and I am leaning towards InstallBuilder over Install4J and InstallAnywhere. Has anyone come across any issues with creating installers that InstallBuilder has been unable to handle? For example, very strict security on the client machine.
*Comment added for additional clarity
For instance, a system that has all accounts disabled except the admin account, with a very unusual domain policy, for instance the inability to write files to the temp directory. Also, how extensible is your product? From playing around with it I notice it is purely XML, so is there any way to write extensions to the core?
This is Daniel from BitRock. Our installers do not need admin privileges on any platform (unless you explicitly require them) and can install as regular users. If you need to check permissions in the filesystem, registry, etc. from within the installer to see what is available, there is code to do that as well. I am not sure if the above answered your question; can you provide more details about what you mean by restricted security on the client side? We take great pride in our level of support, and we encourage you to contact our support team with any questions or suggestions you may have, just to see for yourself.
You should also take a look at InstallJammer just for comparison. It's a lot more open than most of the ones you mention and gives you the ability to do practically anything from within your installer.

How does your company do "Enterprise" Password Management?

We've talked about personal password management here, but how do you guys manage your passwords at a company-wide level?
I thought I'd report back after my week of searching...
I've settled on PassPack. I've been using it for a few days now for my personal passwords and I'm a total fanboy.
They use the Host-Proof Hosting pattern, so the only one who can access your stuff is you, and if you forget your password they can't help you.
They have some nice Offline apps written with Adobe AIR and Google Gears.
But, best of all, they fit my "enterprise" requirement because an upcoming release will support sharing within a trusted group.
Plus, I learned about The "Blog" of "Unnecessary" Quotation Marks in their forum.
We have managed to plan our company applications so they are mainly web-based and either open source or developed in-house. This then allowed us to use LDAP to hook into Active Directory for logging into our intranet. From there we modified the logins of the various products we use (MediaWiki, WordPress, SugarCRM, etc.) so that if the user is authenticated on the intranet, they are automatically logged into these other products as well.
This took some time: setting up the process and creating a script to set all the appropriate user details in each system when someone joins the company. However, we now have a situation where everyone only has to remember one password, removing the need to manage a growing list of passwords.
Obviously this may not be viable in many companies, but now that we have it set up, it was worth the effort.
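For reference, hooking an application into Active Directory usually comes down to an LDAP bind and lookup; a quick command-line sanity check of the directory might look like this (the host, base DN, and account are hypothetical):

# Bind as the user and fetch their entry; a successful bind means the password is valid
ldapsearch -H ldap://dc01.example.local -D "jsmith@example.local" -W \
    -b "dc=example,dc=local" "(sAMAccountName=jsmith)" cn memberOf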
We use Password Agent: http://www.moonsoftware.com/pwagent.asp
It stores everything from PC admin logins to website logins and product keys for products we all use.
We use Active Directory to store user credentials, and developed a custom library for desktop and web applications.
We are using the KeePass application with success.
We create a file per project and/or per business domain.
We share the password for the appropriate KeePass file among the people who should have access.
It's not the best solution. We also have Cyber-Ark software installed corporate-wide, but due to some strange configuration rules it does not work as well for us as the previous solution. It might also be related to the fact that we have an old version.
We maintain an in-house Lotus Notes database that stores absolutely everything from passwords to server change records. It is big, cumbersome, takes an age to load, and is generally not, uh, nice.
No, this is not a sane way to do it. :-|
Obviously I'm biased because I work there, but we use Enterprise Random Password Manager from Lieberman Software. Yes, we do actually dogfood our own tool in our own network. It has some nice features, like web accessibility with delegation, scheduled operation with retry, propagation to other things using accounts (services, COM+ apps, etc.), system/account discovery, Linux/Unix account management, etc.
I'm sure a salesperson could give a better pitch, but I am not one. I'd encourage you to check it out. :)
For passwords related to my work, I store them in a plain unencrypted passwords.txt file in my user storage area on the main company file server. Normally, other people in the company can't read files in my user storage area, so there is little risk of exposure. However, if something were to happen to me, then all my passwords for company related activities would be trivially available to others inside the company - just ask MIS.
This is a very different security model than what I use for my personal passwords, of course.
Just a heads up: Microsoft has a product for managing credentials/passwords/identity across varied systems: Identity Lifecycle Manager.
Secret Server is something that grew from an internal need (within our software company) to a viable product that is now used all over the world. It is web-based and allows you to store passwords and then securely share them with other users and groups (even AD users and groups). It is also able to actively reach out and change passwords on automatic schedules, even handling associated dependencies such as Windows Services for service accounts.
Enterprise Password Management (free 30 day trial).
Use Apache Directory Server, which is an LDAP-standard implementation.
You can manage the directory database using Apache Directory Studio so it's quite user friendly (or at least, admin-friendly).
Then you can hook the directory programmatically into any application that requires access to the credentials; LDAP client libraries are widely available on popular programming platforms such as Java, C++, PHP, Ruby, etc.
A business friend advised me to check out Passwork (https://passwork.me). They use the self-hosted version on their own servers; I found out that Passwork also has a SaaS offering.
So my colleagues and I store our company passwords in Passwork.
We had tried other enterprise password managers before but weren't able to trust them.
We had a look at a product that had these features:
Can give access privileges to passwords using roles.
Handles delegation.
Logs access to passwords.
Can randomize passwords.
Can automatically re-randomize a password X days after access to it.
Unfortunately, I couldn't recall its name when I posted this... It was "Secret Server".
