What's the relationship between client and workspace in Perforce?

Yes, the company I'm working for is still using Perforce.
A workspace is a set of local files.
But what's the relationship between a p4 client and a workspace, please?

They're essentially synonyms. "Client" is usually shorthand for "client workspace", which is the only kind of workspace there is. So "client" = "workspace".
Related concepts include:
Client spec: the specification form for a workspace. This defines the:
Client root: the root folder of the workspace
Client view: the mapping between the workspace and the server repository (depots)
Client options: stuff like noclobber and rmdir that affect how you sync files
Local files: all of the files that are in your workspace
Client "have list": the server's records of which depot revisions your local files correspond to
If someone says just "client" or "workspace", they could be talking about the "workspace" as an aggregation of all of the above data, or they could be talking about the local files, or they could be talking about the client spec. Sometimes it might even be the client application (e.g. P4V, P4, or whatever you use to talk to the server and manage your workspace). It's usually pretty obvious from context.
Typically, P4V uses the term "workspace" whereas the command line client application (and the server API, which the CLI is a thin wrapper around) uses the term "client".
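The client spec form itself can be inspected with p4 client -o. A trimmed example of what the form looks like (the field names are real; the values here are made up):

```
$ p4 client -o
Client:  my-workspace
Owner:   alice
Root:    /home/alice/ws
Options: noallwrite noclobber nocompress unlocked nomodtime rmdir
View:
        //depot/... //my-workspace/...
```

The Root, Options, and View fields correspond directly to the client root, client options, and client view described above.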

Related

Read only access to svn repository via ssh (svn+ssh)

We want to make our Subversion repositories read-only. Doing this for a single repository in a Subversion instance did not work over ssh: ssh access appears to bypass svn's access controls.
We followed the suggestions here:
Read-only access of Subversion repository
Write access should have been restricted, but it wasn't; the repository is still writable despite the read-only changes.
The easiest way to restrict access (assuming there are no users who require write access) is to remove the w (write) bit on the files in the SVN repo.
chmod -R gou-w /path/to/svn-repo
That will prevent writes at the filesystem / OS level.
If some users still require write access, you can create separate svn+ssh endpoints for each user class that map to different users on the host server, using the group-write vs. other-write bits to determine which group is allowed to write:
groupadd writers-grp
chgrp -R writers-grp /path/to/svn-repo
chmod -R ug+w /path/to/svn-repo
chmod -R o-w /path/to/svn-repo
I would then register the SSH keys for writers against the writing user on the server, and prevent password access.
The "read-only" users could be allowed a well-known password.
This isn't as "clever" or "elegant" as configuring the SVN server configs, but it works pretty darned well as long as the users keep their SSH keys secret.
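One common way to wire up the writer keys (a sketch; the paths and key material are placeholders) is a forced command in the writing user's authorized_keys, so every connection runs svnserve in tunnel mode regardless of what command the client requests:

```
# ~writer/.ssh/authorized_keys -- one line per writer's public key
command="svnserve -t -r /path/to/repos --tunnel-user=alice",no-port-forwarding,no-pty ssh-rsa AAAA... alice@laptop
```

The --tunnel-user option makes commits show up under the right author name even though everyone logs in as the shared writer account.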
Restrict commit access with a start-commit hook.
Description
The start-commit hook is run before the commit transaction is even created. It is typically used to decide whether the user has commit privileges at all.
If the start-commit hook program returns a nonzero exit value, the commit is stopped before the commit transaction is even created, and anything printed to stderr is marshalled back to the client.
Input Parameter(s)
The command-line arguments passed to the hook program, in order, are:
Repository path
Authenticated username attempting the commit
Colon-separated list of capabilities that a client passes to the server, including depth, mergeinfo, and log-revprops (new in Subversion 1.5).
Common uses
Access control (e.g., temporarily lock out commits for some reason).
A means to allow access only from clients that have certain capabilities.

PouchDB sync deleted DB

I have a remote CouchDB named 'mydb' and a local PouchDB on the client side that syncs with it. The client can go offline and come back, so while the client was offline, I deleted the remote 'mydb', re-created one with the same name, and added some random new files to the new db.
When the client comes back online, will it sync back the old files and overwrite those with the same names?
If you need bi-directional replication you might do:
// use "sync"
localDB.sync(remoteDB)
// another option is to use "replicate" with both "to" and "from"
localDB.replicate.to(remoteDB)
localDB.replicate.from(remoteDB)
If you need uni-directional replication, you might do:
// use "replicate" with only "to"
localDB.replicate.to(remoteDB)
Take a look at this.

where to store admin password in sinatra + heroku app?

I have a small Sinatra app I'm running on Heroku that uses a single admin password, plus a couple API authentication keys.
Where's the best place to store these things? Do I put them in environment variables, and use
heroku config:add ADMIN_PASSWORD=foobar
? Or do I use a config file that contains them, and I simply don't commit the config file?
I stick API keys and that sort of thing in a config yaml, like so
development:
  twitter_api_key: stringstringstring
  chunky: bacon
production:
  twitter_api_key: gnirtsgnirtsgnirts
  foo: bar
then use Sinatra's builtin set to handle the data.
configure do
  yaml = YAML.load_file(settings.config + "/config.yaml")[settings.environment.to_s]
  yaml.each_pair do |key, value|
    set(key.to_sym, value)
  end
end
And I can then access them from the settings object. I'm not sure why you wouldn't commit the config file, though... there's no major security risk here, since only those paths that you've explicitly defined can be accessed via the web. I guess the admin password could be stored in the same manner if you don't want to put it in a database, but I would at least salt and hash it.
Just be careful not to step on Sinatra's Configuration settings when defining your own.
EDIT:
I think I just realized why you would prefer not to commit the config file. If you're working on an open source project, you certainly wouldn't want to commit the config file to your open source repo, but you would need to commit the file to Heroku in order for it to work. If this is the case, I'd either:
Use two separate local repos: one for the open source project, and one for the heroku project. Just set the open source project as an upstream repository in the Heroku project, then you can fetch changes.
Put both the API keys and encrypted/salted password in a database; MongoHQ offers a free tier to Heroku users as an addon for simple nosql storage using MongoDB.

how to safely receive files from end-users via rsync

I'd like to allow users of my web application to upload the contents of a directory via rsync. These are just users who've signed up online, so I don't want to create permanent unix accounts for them, and I want to ensure that whatever files they upload are stored on my server only under a directory specific to their account. Ideally, the flow would be something like this:
user says "I'd like to update my files with rsync" via authenticated web UI
server says "OK, please run: rsync /path/to/yourfiles uploaduser123abc@myserver:/"
client runs that, updating whatever files have changed onto the server
upload location is chrooted or something -- we want to ensure client only writes to files under a designated directory on the server
ideally, client doesn't need to enter a password - the 123abc in the username is enough of a secret token to keep this one rsync transaction secure, and after the transaction this token is destroyed - no more rsyncs until a new step 1 occurs.
server has an updated set of user's files.
If you've used Google AppEngine, the desired behavior is similar to its "update" command -- it sends only the changed files to appengine for hosting.
What's the best approach for implementing something like this? Would it be to create one-off users and then run an rsync daemon in a chroot jail under those accounts? Are there any libraries (preferably Python) or scripts that might do something like this?
You can run sshd chrooted and rsync normally; just use PAM to authenticate against an "alternate" auth db.
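A lighter-weight way to get the "only writes under their own directory" guarantee is a forced command in the one-off account's authorized_keys using the rrsync helper script that ships with rsync (often found under /usr/share/doc/rsync/scripts/). The paths and key below are placeholders; note that the -wo (write-only) option exists only in newer rrsync versions, while older ones only offer -ro:

```
# ~uploaduser123abc/.ssh/authorized_keys
command="rrsync -wo /srv/uploads/user123abc",no-port-forwarding,no-pty,no-X11-forwarding ssh-rsa AAAA... upload-key
```

Your web app would install this line when the user starts step 1 and delete it after the transfer completes, which gives you the one-shot-token behaviour without touching the system password database.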

perforce server settings

Is it possible to configure the Perforce server so that, by default, users can't check out a directory, instead of making everybody update their view spec to exclude that directory?
E.g.: if you want to check out //code/heavy/stuff, you must explicitly add that directory to your view spec, instead of adding a -//code/heavy/stuff to your spec.
You can install a trigger on the server that generates the default clientspec for a user. With this flexible tool, you can achieve a number of designs. The idea is that when a user creates a new clientspec, the server would fill it in with something other than the default //depot/... mapping for each depot.
One simple idea would be to define the default clientspec to include -//code/heavy/stuff mapping automatically.
Another more advanced idea would be to check to see what groups the user is a member of, and then auto-generate a clientspec appropriate for that user based on his group membership.
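A sketch of the first idea (the form-out trigger type and the %formfile% placeholder are standard Perforce trigger machinery; the script path and the exclusion itself are assumptions). You'd register it in p4 triggers with a line like: default-view form-out client "/p4/triggers/default-view.sh %formfile%", and the script would then edit the spec form before the user sees it:

```shell
#!/bin/sh
# form-out trigger sketch: the server writes the client spec form to a
# temp file and passes its path as %formfile%; edits made here are what
# the user receives.
FORMFILE="$1"

# Pull the client name out of the form so the right-hand side of the
# mapping is correct for this workspace.
CLIENT=$(sed -n 's/^Client:[[:space:]]*//p' "$FORMFILE" | head -n 1)

# View is the last field in the form, so appending a line extends it.
printf '\t-//code/heavy/stuff/... //%s/code/heavy/stuff/...\n' "$CLIENT" >> "$FORMFILE"
```

A production version would also check whether the spec already exists (so it only rewrites brand-new clients), but this shows the shape of the approach.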
If you just want to block people from getting it, then it might be possible with permission mapping.
But then they would never be able to access it, even if they changed their client spec.
Maybe you need to (re-)structure your repository so that the heavy part isn't in most users' client specs.
