Is it possible to enable my users to create Perforce stream depots without giving them super access everywhere?
I just upgraded to Perforce Server 2011.1 and I am eager to use the new streams feature.
If I understand correctly, streams come with a couple of restrictions: 1) streams must live in special stream depots, and 2) stream depots contain branches at their top level.
We currently have a single local depot (called "mylocaldepot") which contains multiple projects. Given the above restrictions, and in keeping with the practices shown in the Perforce documentation, "one stream depot per project" seems more sensible. But creating depots requires super access. It would be more convenient if our users could freely create stream depots for their projects themselves, but I don't want to give everyone unrestricted super access everywhere, and I don't want them to accidentally delete our existing local depot.
Is it possible to set up Perforce permissions in such a way that users can be granted only the ability to create stream depots? Perhaps I could use p4 protect and some combination of permission settings like this?
write user * * //...
super user * * //*
super user * * -//mylocaldepot/...
BTW: I've found the best information on streams to be these two videos: Introduction to Streams and Streams for Codeline Management; and this document: Perforce Streams Adoption Guide
The super user must create the depot, but ordinary users can then create streams in that depot. You should definitely not give all your users super permission, and you should not need more than one stream depot (or, perhaps, a few).
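For example, the division of labour might look like this (the depot and stream names here are made up; only the first command requires super access):

p4 depot projectX                        # run as a super user; set "Type: stream" in the spec form
p4 stream -t mainline //projectX/main    # an ordinary user creates the stream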
You could also look into using the Perforce Broker to provide a project creation command. The broker can, behind the scenes, do the necessary steps to create a new stream depot, while not granting super access to the users. (The broker would need to be able to use a super account, of course, but that would be hidden from the users.)
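As a very rough sketch of that broker idea (the command name, the filter-protocol keys, and the validation rule are all assumptions; check the P4Broker documentation for the exact stdin dialog your version uses), a filter script running in a super user's environment might look something like this:

import re
import subprocess
import sys

def create_stream_depot(name):
    # Fetch a default depot spec, flip its type to "stream", and save it back.
    spec = subprocess.check_output(["p4", "depot", "-o", name]).decode()
    spec = re.sub(r"^Type:.*$", "Type:\tstream", spec, flags=re.MULTILINE)
    subprocess.run(["p4", "depot", "-i"], input=spec.encode(), check=True)

if __name__ == "__main__":
    # The broker hands the intercepted command's details to the filter on
    # stdin as "key: value" lines; the depot name is assumed to arrive as Arg0.
    request = dict(line.split(": ", 1) for line in sys.stdin if ": " in line)
    depot = request.get("Arg0", "").strip()
    if not re.fullmatch(r"[A-Za-z][\w.-]*", depot):
        print('action: REJECT')
        print('message: "Usage: p4 newstreamdepot <name>"')
    else:
        create_stream_depot(depot)
        print('action: RESPOND')
        print('message: "Stream depot %s created."' % depot)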
We have been setting up Perforce in the studio, and we decided to work with Streams for the sake of simplicity. One of the problems that I have been running into is not being able to remap more than one folder from the Stream into the same target folder in the Workspace.
I know about the Overlay Operator (+), but this isn't allowed when setting up the Stream View Path. I tried to do it with Workspace Remap, but it doesn't seem to be working.
Basically what I am trying to do is
Dev/FolderA/... Dev/...
+Dev/FolderB/... Dev/...
FolderA and FolderB have different files that don't share the same name, and my only interest in having them in the same folder is for Build purposes in the local drive.
Any ideas?
PS: I know this is similar to "perforce client spec - making different depot paths map to the same client workspace path", the difference being that that approach only works for traditional local depot views (not streams).
Thanks!
The "+" lines are called overlay mappings and they can't be used stream workspace view specs.
The streams framework has several constraints that raw Perforce does not. The simplicity you're looking for relies on these constraints. The other constraint you may have already noticed with views is that you can't have leading or embedded wildcards like '...this/example/...' or 'this/.../one'.
So if you're trying to make streams do exactly what you're used to doing with native Perforce, you could end up putting a lot of work into it for not a whole lot of gain. As with any framework, the best way to get the most out of streams is to start fresh and spin up a new workflow based on its strengths.
That being said, there is a sneaky trick you can try. You can create static, non-stream client views that map stream depot paths. Any mapping syntax you like can be used in non-stream client views.
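For example, a static client view along these lines could express the overlay (the stream and client names here are invented):

//Streams/Main/Dev/FolderA/... //myclient/Dev/...
+//Streams/Main/Dev/FolderB/... //myclient/Dev/...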
Non-stream views can be used to sync stream files and to work on them, but not to submit them. To submit files you'll have to switch your workspace to a stream client, submit the files, and switch the workspace back. As long as you don't re-sync between switching views your files won't be rearranged on local disk.
How well this works for you will depend on, among other things, which client tools you're using. Some tools may not allow non-stream clients to work on stream files. Other tools may allow it, but they might show inconsistent state because they don't expect you to be doing that. And of course, subverting the framework like this might just make things more complicated for you in the long run.
I want to implement a webapp - a feed that integrates data from various sources and displays them to users. A user should only be able to see the feed items that he has permissions to read (e.g. because they belong to a project that he is a member of). However, a feed item might (and will) be visible by many users.
I'd really like to use CouchDB (mainly because of the cool _changes feed and map/reduce views). I was thinking about implementing the app as a pure couchapp, but I'm having trouble with the permissions model. AFAIK, there are no per-document permissions in CouchDB and this is commonly implemented using per-user databases and replication.
But when there is a lot of overlap between what various users see, that would introduce a LOT of overhead...stuff would be replicated all over the place and duplicated in many databases. I like the elegance of this approach, but the massive overhead just feels like a dealbreaker... (Let's say I have 50 users and they all see the same data...).
Any ideas on that, please? Alternative solutions?
You can enforce read permissions as described in CouchDB Authorization on a Per-Database Basis.
For write permissions you can use validation functions as described in CouchDB: The Definitive Guide - Security.
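As a minimal sketch of the read side (the server URL, credentials, database, and role names are all made up; note that older CouchDB releases call the "members" section "readers"), each project database gets a _security object listing who may read it:

import requests  # third-party HTTP library

security = {
    "admins": {"names": [], "roles": ["project-x-admin"]},
    # Only these users/roles may read the database ("readers" in older CouchDB):
    "members": {"names": [], "roles": ["project-x-member"]},
}
resp = requests.put(
    "http://admin:secret@localhost:5984/project_x/_security",
    json=security,
)
resp.raise_for_status()

Users are then given the matching roles in their _users documents, and the _changes feed of each project database only reaches people who can open that database.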
You can create a database for each project and enforce the permissions there; the data is then shared efficiently between that project's users. If a user shares a feed himself and needs permissions on that as well, you can make the user into a "project" so the same logic applies everywhere.
Using this design you can authorize a user or a group of users (roles) for each project.
Other than handling your read auth in a proxy between your app and CouchDB (as victorsavu3 has already suggested), there are only two alternatives I can think of.
First is to just not care; disk is cheap, and while having multiple copies of the data may seem like a lot of unnecessary duplication, it massively simplifies your architecture and you get some automatic benefits, like easily scaling up to handle load (by just moving some of your users' DBs off to other servers).
Second is to split the shared data into a different DB. This will occasionally limit what you can do in views (e.g. no "linked documents"), but that is not a big deal in many situations.
I have been trying to understand the right way to use BackupRead and BackupWrite for backing up data on a computer, and especially how to restore it reliably.
Now I understand how to use the API and have been successful. However there's one thing that bothers me.
Besides the file content itself, you can back up any alternate data streams and also the security information (ACLs).
Now suppose I store the ACL data in the backup and the data later needs to be restored on a different machine or a newly set-up machine: what should I do with the SIDs referenced by the ACLs?
The SIDs are most likely no longer valid for that machine, so how should the right user be selected?
Looking at this on a bigger scale: say this is a computer with multiple users and hundreds or thousands of objects with different settings; it would be a mess to get the data restored with the security settings applied to it again.
Is this something the user of the software has to take care of himself if he wishes to back up the security settings, updating the SIDs accordingly, or what?
Additionally, BackupRead and BackupWrite give me the raw binary data of those items, which is not all too hard to use; however, this API obviously doesn't even try to address this issue.
Does anyone have an idea how a backup application should handle this situation? What are your thoughts, or any pointers to guidelines on this specific topic?
Thanks a lot.
I think you correctly understand the problems with backup and restore of data, and a correct understanding of a problem is half of its solution. I suppose that, like most users of Stack Overflow, you are mostly a software developer and not an administrator of a large network, so you see the problem from the developer's side rather than the administrator's. An administrator knows the restrictions on backing up and restoring ACLs and already works with them.
In general you should understand that the main purpose of a backup is to save the data and to restore it later on the same computer or server. Another standard case is restoring a backup from one server to another after a hardware change, in which case the old server no longer exists. Mostly one backs up servers and organizes work on the clients so that no important data is saved on the client computers.
In most cases the backed-up data has domain group SIDs, domain user SIDs, well-known SIDs, or SID aliases from the BUILTIN domain in its security descriptors. In those cases no SID changes are needed at all. If the administrator does want to make some changes to the ACLs, he can use existing utilities like SubInACL.exe.
If you write backup/restore software that you want to use for moving data together with its security information, you can include in the backup some additional meta-information about the local SIDs of the accounts/groups used in the saved security descriptors. In the restore software you can then provide the possibility of replacing SIDs in the saved security descriptors. Many years ago I wrote for one large customer some utilities to clean up the SIDs in the security descriptors of the file system, registry, and services after a domain migration. It was not so complex, so I suggest you could implement the same feature in your backup/restore software.
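A minimal sketch of that meta-information idea, assuming the pywin32 package and hypothetical helper names: at backup time, record a table mapping each string SID to its account name; at restore time, resolve the names again on the target machine.

import pywintypes
import win32security

def sid_table(string_sids):
    # Backup side: map each string SID found in the saved security
    # descriptors to its (domain, account name) pair.
    table = {}
    for s in string_sids:
        sid = win32security.ConvertStringSidToSid(s)
        name, domain, _kind = win32security.LookupAccountSid(None, sid)
        table[s] = (domain, name)
    return table

def remap_on_restore(table):
    # Restore side: resolve each saved account name back to a (possibly
    # different) SID on the target machine.
    mapping = {}
    for old_sid, (_domain, name) in table.items():
        try:
            new_sid, _dom, _kind = win32security.LookupAccountName(None, name)
            mapping[old_sid] = win32security.ConvertSidToStringSid(new_sid)
        except pywintypes.error:
            mapping[old_sid] = old_sid  # no such account here; leave unchanged
    return mapping

The mapping can then drive the SID replacement in the restored security descriptors, with a manual prompt for any names that did not resolve.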
I do believe the Backup* APIs are primarily intended for backing up and restoring on the same machine, which would render the SID problem irrelevant. However, assuming a scenario where you need to restore a backup on a new install, here are my thoughts on solutions.
For well-known SIDs such as Everyone, Creator Owner and so on, there isn't really any problem.
For domain dependent SIDs you can store them as is, and upon restore you could fixup the domain part, if needed. Likely you should store the domain name as well for such SIDs.
For local users and groups, you should at least store the user/group name for each SID. Fixup on restore could be partially automatic based on these names, or manual (assuming a user interface for the application), where you ask the user whether he wishes to map a given user to a new local user, convert those SIDs to a well-known SID, or keep them as they are.
Most of the issues related to such SIDs can (and probably typically will) be possible to handle automatically. I'd certainly appreciate a backup application that was smart enough to do the restore I asked it to and figure out that "Erik" on the old machine must be "Erik" on the new machine as well.
And as a side note: if you do decide to go with such a solution, remember how annoying it is to start an overnight data transfer just to come back to something 5% done, blocked on a popup it could just as easily have deferred :)
Can I encrypt shared files on a Windows server and allow only authenticated domain users to have access to these files?
The scenario as follows:
I have a software development company, and I would like to protect my source code from being copied by my programmers.
One problem is that some programmers use their own laptops to develop the company's software.
In this scenario it's impossible to prevent developers from copying the source code to their laptops.
In this case I thought about the following solution, but I don't know if it's possible to implement.
The idea is to encrypt the source code so that it is accessible (decrypted) only while developers are logged into the AD domain, i.e. if they are not logged into the AD domain, the source code would stay encrypted and be useless.
Can this be implemented?
What technology should be used?
It depends on how you understand "allow only authenticated domain users to have access to these files": it could mean anything from "permit selected Active Directory users to access EFS-encrypted files" to "encrypt the network traffic from a file share". There are many more ways to interpret your question. Most scenarios are possible, especially if you have an Active Directory-integrated PKI. I don't know what knowledge you have in this area. Do you know, for example, the main principles of how EFS works? (See, for example, http://go.microsoft.com/fwlink/?LinkID=85746 and http://technet.microsoft.com/en-us/library/bb457116.aspx.)
So if you write a short question, an answer could be much longer and still not give you the information you need.
Moreover, stackoverflow.com is a site for software development only. Probably https://serverfault.com/ or https://superuser.com/ is better suited for your question.
Best regards
UPDATED: EFS on the server is really not the best solution because of problems with data recovery on the server. If a user leaves the company or forgets his password, or if you want to restore backed-up data, or in other non-standard situations, you can be required to implement new special processes in your company if you use EFS on the server. If you don't, you can end up with encrypted data on the server which nobody can read. Because of this problem most large companies forbid EFS on servers. One uses local EFS or hard disk encryption on laptops, but only a well-designed NTFS permission system on the server.
It seems to me that you can solve all your permission problems without any EFS. For example, you can create on the server a directory with Change permission for CREATOR OWNER. Then every programmer in your company can create a subdirectory on the share and copy his project source into that subdirectory. He/she receives Change permission on the directory, but nobody else does. If you add to the root share directory a permission for Domain Administrators or for your account, then Domain Administrators or you will also have the corresponding permission to your programmers' data.
If several persons work on one project, you can create a directory for the project, create a corresponding group in Active Directory, add the persons who belong to the project as members of the group, and grant the group Change permission in NTFS. Only persons from the group will be able to access the directory.
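A rough sketch of those grants with icacls (the paths and group names are invented):

rem Root share: creators control what they create; admins see everything
icacls D:\SrcShare /grant "CREATOR OWNER:(OI)(CI)(IO)M" /grant "MYDOMAIN\Domain Admins:(OI)(CI)F"
rem Per-project directory, accessible to the project's AD group
icacls D:\SrcShare\ProjectX /grant "MYDOMAIN\ProjectX-Devs:(OI)(CI)M"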
Sorry if I am writing well-known things (I don't know your level of knowledge). I only want to give you some examples which show that all the problems you described in your question can be solved not with encryption, but just by granting permissions in the file system. Perhaps you should choose this way?
I would like to have your opinion about the subject "version control", but focusing on security.
Some common features:
allowing access to source code using clients only (no way to access the source code on the server directly)
granting permission to access only the source code which I am allowed to modify (i.e. a developer should be able to access only the source code related to his project)
So it should be possible to create user groups and grant different levels of access.
tracking modifications, check-ins, and check-outs, and the developers who made them...
...and, surely, I am forgetting something.
Which are the most "paranoid" version control systems that you know?
Which features do they implement?
My aim is creating an environment for developing applications that manage sensitive data: credit cards, passwords, and so on...
A malicious developer might insert a backdoor or intentionally weaken some security features, so access to the source code should be strictly controlled.
I must confess that my knowledge of version control systems is poor, so, I fear, customizing SVN could be a hard task for me.
Thanks
Perforce is widely used in the Finance Industry where security of code is sometimes an issue.
You can set up gatekeepers and access controls to restrict visibility of code, and produce audit trails of various activities for SOX compliance.
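As a sketch of what the access-control side might look like in a protections table (the group, depot, and user names are made up; since the last matching line wins, the administrative grant goes last):

write group projectA-devs * //depot/projectA/...
write group projectB-devs * //depot/projectB/...
super user p4admin * //...

Users who appear in no group can't even list the restricted paths; group membership itself is managed with p4 group.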
I know that the ones that can do all this are not the ones you'll want to use. For example, ClearCase or Serena Dimensions can do all of the above... but you'd be bonkers to want to use them. (Ah, I hear you say, I'm the admin so I don't have to take that pain. Well, these also require lots of care and attention: we had 8 ClearCase admins at the last company I worked for. You don't want the nightmare of continually helping users with them.)
So: you can have the horrible ones, or you could just use the friendly, easy-to-use SVN and implement your own checkout tracking (using the http transport and Apache logs), and slap access-control permissions on every directory. You'd also have to secure the end repository on disc, but you have to do this with every SCM; even something like Dimensions stores its database in Oracle, and if you had access to the Oracle instance, you could fiddle with the saved bits, so you have to secure that anyway.
Perforce has those features and is a really good product imho.
Use a well-known, industry-standard system like Subversion. It can control access to individual projects very simply, and using the web server's authz configuration it can control individual access to specific files in each project.
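For instance, with Apache's mod_authz_svn a path-based rule set might look like this (the group, user, and path names are invented):

[groups]
projecta-devs = alice, bob

[/projects/projectA]
@projecta-devs = rw
* =

The trailing "* =" line strips access from everyone not explicitly granted it, so developers outside the group can't even read the project.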
The only non-standard issue is logging check-outs, but the web server can easily log this information for you.
Your users will thank you.
GitHub is a wrapper around Git which provides these features for a Git server. Compared to raw Git servers, it notably includes access control, and it also has useful web interfaces to the code for authorized users.