Perforce: Set up workspace by copying an existing workspace?

My project is on the order of 100 GB per stream, with an additional 60 GB added to each workspace for local cache files.
Rather than downloading and rebuilding from the depot every time I need a workspace for a new stream, is there a way to copy a workspace I already have downloaded and set up, then have Perforce recognize it as part of a different stream?

Assuming your first workspace is clientA, rooted at /home/clientA, and your new workspace is going to be clientB, do:
cp -r /home/clientA/ /home/clientB/
p4 set P4CLIENT=clientB # or use P4CONFIG files
p4 client -t clientA
p4 sync -k @clientA
p4 clean
Now you have clientB set up as a copy of clientA -- the sync -k command tells the server "sync everything that clientA has, but don't send me the actual files; just pretend that I synced them." The p4 clean command should be a no-op, but if you somehow messed up the copy, or you had open files in clientA, it will fix things by forcing a re-sync of the files that are wrong.
Now that you've done that you can do:
p4 switch STREAMNAME
which will switch you to a different stream, syncing only the files that differ. Many people just have a single workspace and use p4 switch to hop between streams; it automatically shelves your work in progress, and you conserve local disk space by not having multiple copies of everything. (A good case for having multiple workspaces is when you have the space to spare and you don't want to rebuild those 60 GB of cache files each time your workspace contents change.)
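For the single-workspace approach above, the stream hop itself is just a couple of commands (a sketch against a live Perforce server; the stream names dev and main are placeholders for your own):

```
p4 switch -l      # list the streams you could switch to
p4 switch dev     # shelves any open files, then syncs only the differing files
p4 switch main    # switching back restores the shelved work in progress
```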

Related

Will rclone --ignore-existing prevent ransomware damages?

I back up files with rclone multiple times a day. I would like my backup server to be a recovery point from ransomware or any other error.
Am I correct that if I do a
rclone copy --ignore-existing
, my backup server is safe from ransomware? If all the files on my main server get encrypted, the file names would stay the same, and rclone wouldn't overwrite my backup server's files with the encrypted versions, because --ignore-existing ignores any size/time/checksum changes and skips files that already exist on the backup. So it won't transfer the encrypted files over my existing good copies?
I could then wipe my main server and copy everything back from the backup to restore it?
I just read the rclone documentation, and it looks like --ignore-existing exists almost specifically for this encryption scenario:
--ignore-existing
Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.
While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
So I think it will work to prevent that.
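The skip-existing behavior the docs describe can be simulated in a few lines of Python. This is a sketch of the semantics only, not rclone's actual code; the function name is made up for illustration:

```python
import shutil
from pathlib import Path

def copy_ignore_existing(src: Path, dst: Path) -> list[str]:
    """Copy files from src to dst, unconditionally skipping any path that
    already exists at the destination (--ignore-existing semantics:
    content, size, and mtime are never even compared)."""
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        if target.exists():
            continue  # already on the backup: leave it untouched
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(f, target)
        copied.append(str(f.relative_to(src)))
    return copied
```

Note the caveat this makes visible: files that did not yet exist on the backup when the ransomware ran would still arrive in encrypted form, and an interrupted partial transfer is never repaired.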

Make a "fork" of a depot repo in perforce

In Perforce I have a repository in the depot. I want to make a copy of this repo under tasks or streams -- in git terminology, to make a fork. How can I do that? I have write access to the repo in the depot.
Since I might confuse perforce terms, I will show with screenshot examples:
Under depot I have several folders like this:
I want to copy one of the folders under depot and paste it under streams, as shown here:
I'll give two different answers, neither of which uses the word "repo" or "fork", since those aren't Perforce terms and could mean two different things ("repo" could be a "depot" or a "server"; the confusion is compounded by the fact that people sometimes say "depot" to mean "server" if their server has only one depot):
To branch a path //depot/thing from your classic depot depot into a new stream on the same server:
Create a new stream depot: p4 depot -t stream streams
Create a new mainline stream: p4 stream -t mainline //streams/thing
Populate the stream from //depot: p4 populate //depot/thing/... //streams/thing/...
To clone that path from your shared server into a mainline stream on a new personal server:
p4 clone -f //depot/thing/...
(The p4 clone command automatically creates a stream depot, a mainline stream, and a client workspace on your personal server that will be created in the current working directory -- you should run this someplace outside of the client workspace that you use on the shared server.)
I just create a new depot in the GUI, then add a new stream (again from the GUI) and make it a top-level stream; it then asks if you want to branch an existing stream across. Select the depot/stream you need to fork and it will copy that into your new stream.

P4V - Duplicate workspace pointing to existing data

I was wondering if anyone had any advice on how to do the following task in P4V (I am not too familiar with P4V commands, so apologies if this is some basic command that I am missing).
Currently I have a workspace setup and data synced to my root
e.g. C:\Data\
I access this workspace from two different Windows machines (the data is on both machines at C:\Data).
Now I need to move the location where the data is stored on ONE of the machines and not the other (Machine A: C:\Data, Machine B: D:\Data).
Is this possible to do without having to sync all the data again from the server (there is a lot of it, and bandwidth is limited)?
My initial thought was to create another workspace pointing to another root, but I do not know how to get this new workspace to pick up the data files at that location.
Any help would be greatly appreciated
Thanks in advance
I don't know of a way to do this through P4V, but it can be done with the command line client. Here's the procedure.
After you have moved your files on machine B and created a new workspace (without performing an "update all"), you can pass the -k switch to the sync command to let the server know which files you have.
From the p4 sync documentation:
Keep existing workspace files; update the have list without updating the client workspace. Use p4 sync -k only when you need to update the have list to match the actual state of the client workspace.
And the command line help has this to say:
The -k flag updates server metadata without syncing files. It is intended to enable you to ensure that the server correctly reflects the state of files in the workspace while avoiding a large data transfer. Caution: an erroneous update can cause the server to incorrectly reflect the state of the workspace.
FYI: p4 flush is an alias for p4 sync -k
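Putting that together for the question's machine B, the command-line procedure might look like this (a sketch requiring a live Perforce server; the workspace name is hypothetical, and it assumes the files have already been moved to D:\Data):

```
p4 set P4CLIENT=myws-machineB   # hypothetical name for the new workspace
p4 client                       # set Root: D:\Data in the spec form, then save
p4 flush                        # i.e. p4 sync -k: update the have list, no transfer
p4 diff -se                     # optional sanity check: list files that differ
```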
You can also look at the AltRoots field in the workspace spec. You could have one root at C:\Data and the other at D:\Data. As raven mentioned, since the data lives on two separate disks you'll need to make sure it is kept in sync on both machines, although I assume you've already figured that part out since you've been running on two machines.
Any reason you can't just have one workspace per machine?

How to move a perforce depot between two different servers such that revision history is copied but user info and workspaces are not?

I need to copy a depot from one Perforce server to another. The file revision history needs to be intact but the user information and workspace information can not be copied to the new server.
I've tried a standard checkpoint creation and restore procedure, but if there exist users or workspaces with the same name on both servers, the source server will overwrite this info on the destination server. This is pretty bad if those user accounts and workspaces do not have exactly identical details.
The goal of this sort of operation is to allow two separate, disconnected groups to view a versioned source tree with revision history. Updates would be single directional with one group developing and one just viewing. Each group's network is completely enclosed, no outside connections of any kind.
Any ideas would be appreciated; I've been busting my brains on this one for a while.
EDIT:
Ultimately my solution was to install an intermediate Perforce server on the same machine as my source server. Using that, I could do a standard backup/restore from the source server to the intermediate server, then delete all the unwanted metadata on the intermediate server before backing up from the intermediate server to the final destination server. Pretty complicated, but it got the job done, and it can all be done programmatically in Windows PowerShell.
There are a few ways, but I think you are going about this one the hard way.
Continue to do what you are doing, but delete db.user, db.view (I think), and db.group. Then when you start the Perforce server it will recreate these, but they will be empty, which will make it hard for anyone to log in, so you'll have to recreate users/groups. I'm not sure whether you can take those db files from another server and copy them in; I've never tried that.
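A gentler variant of that first approach is to filter the unwanted records out of the checkpoint text before replaying it, rather than deleting live db files. This is a sketch only: the exact set of tables to drop (db.domain and db.view are where client workspaces live) is an assumption you should verify against your server version.

```
p4d -r /p4/source -jd source.ckp        # dump a checkpoint from the source
# Checkpoint records name their table, so unwanted tables can be grepped out
grep -v -e '@db\.user@' -e '@db\.group@' \
        -e '@db\.domain@' -e '@db\.view@' source.ckp > filtered.ckp
p4d -r /p4/dest -jr filtered.ckp        # replay the filtered checkpoint
```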
The MUCH easier way: make a replica. http://www.perforce.com/perforce/r10.2/manuals/p4sag/10_replication.html Make sure you look at the p4d -M flag so that it's a read-only replica. I assume you have a USB drive or something to move between networks, so you can issue a p4 pull onto the USB drive, then move the drive, and either run the replica off the USB drive or issue another p4 pull onto a final server. I've never tried this, but with some work it should be possible; you'll have to run a server off the USB drive to issue the final p4 pull.
You could take a look at perforce git fusion, and make some git clones.
You could also look at remote depots. Basically, you create a new depot on your destination server and point it at a depot on your source server. This works if you have a fast connection between the two servers. Protections, i.e. who has access to the new depot, are handled by the destination server. The source server can be set up to share it out read-only to the destination server. Here is some info:
http://answers.perforce.com/articles/KB_Article/Creating-A-Remote-Depot
Just make sure you test it during a slow period, as it can slow down the destination server. I tried it from 2 remote locations, both on the east coast US, and it was acceptable, but not too useful. If both servers are in the same building it would be fine.

Verify Perforce client file copies

I have a large Perforce depot, and I believe my client currently has about 2 GB of files that are in sync with the server. What's the best way to verify that my files are complete, in sync, and up to date at a given change level (which is perhaps newer than a handful of files currently on the client)?
I see the p4 verify command and its MD5s, but these seem to be the server's hashes for the various revisions of a file. Is there a way to compare the MD5 on the server with the MD5 of the required revision on my client?
I am basically trying to minimize bandwidth and time consumed to achieve a complete verification. I don't want to have to sync -f to a specific revision number. I'd just like a list of any files that are inconsistent with the change level I am attempting to attain. Then I can programmatically force a sync of those few files.
You want "p4 diff -se".
This should do an md5 hash of the client's file and compare it to the stored hash on the server.
Perforce is designed to work when you keep it informed about the checked-out status of all your files. If you or other programmers on your team are using Perforce and editing files that are not checked out, then that is the real issue you should fix.
There is p4 clean -n (equivalent to p4 reconcile -w -n),
which will also get you a list of files that p4 would update. Of course, you could also pass a changelist to align to.
You might want to disable the check for local files that it would delete, though!
If you don't have many incoming updates, one might consider an offline local manifest file with the sizes and hashes of all the files in the repository: iterate over it, checking for existence, size, and hash, and yield the missing or changed files.
In our company, with the p4 server on the intranet, checking via a local manifest is actually not much faster than asking for p4 clean -- but a little! And it uses no bandwidth at all. Over the internet or a VPN it's even better.
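The manifest idea above can be sketched in Python like this. The manifest format (one record with path, size, and md5 per file) and the function names are assumptions for illustration, not any particular tool's format:

```python
import hashlib
from pathlib import Path

def md5_of(path: Path) -> str:
    """MD5 of a file's contents, read in chunks to bound memory use."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(root: Path, manifest: list[dict]) -> list[str]:
    """Return the manifest paths that are missing or differ locally.
    Size is checked first, so the (slow) hash only runs when needed."""
    bad = []
    for entry in manifest:
        p = root / entry["path"]
        if not p.is_file():
            bad.append(entry["path"])           # missing: needs a sync
        elif p.stat().st_size != entry["size"]:
            bad.append(entry["path"])           # wrong size: changed
        elif md5_of(p) != entry["md5"]:
            bad.append(entry["path"])           # same size, different content
    return bad
```

The resulting list is exactly what you would feed to a forced sync of just those few files.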
