Is there a mapping between a workspace and its depot? - perforce

Is there a mapping between a workspace and a depot in Perforce? For example, if I have a workspace created at D:/myWorkspace, can I run a Perforce command to find the depot it was created from?

If you have a connection to the Perforce server this is very simple; run a command like p4 where //... or p4 client -o to see the depot(s) associated with the current client workspace.
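With a configured connection, either command looks like this (output formats as documented for the standard client; no server-specific assumptions beyond a working login):

```shell
# Show how each depot path maps through the client view to the local disk.
# Each mapping line prints a depot path, client path, and local path triple.
p4 where //...

# Or print the full workspace spec; the View: field lists the depot mappings.
p4 client -o
```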
If you're connected to the Perforce server but your connection settings don't name the workspace associated with the current directory, you'll need to use the p4 clients command: match the Host value to the client machine's hostname, then look for Root values that contain the directory. Note that it's possible to have multiple matches -- people will sometimes create a client with no Host value (allowing it to be used from any host) and/or a null Root value (allowing it to map any directory).
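That matching can be sketched as below, assuming tagged output from p4 -ztag clients (field names client, Host, Root are the standard tagged fields; Roots containing spaces would need more careful parsing than this awk does):

```shell
dir=$(pwd)
host=$(hostname)

# Scan all client specs; a blank line separates one tagged record from the next.
p4 -ztag clients | awk -v dir="$dir" -v host="$host" '
  /^\.\.\. client / { client = $3 }
  /^\.\.\. Host /   { wshost = $3 }
  /^\.\.\. Root /   { root = $3 }
  /^$/ {
    # An unset Host matches any machine; Root must be a prefix of the cwd.
    if ((wshost == "" || wshost == host) && root != "" && index(dir, root) == 1)
      print client
    client = wshost = root = ""
  }'
```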
If you don't even know the address of the Perforce server and it's not set in the environment, you might be out of luck; Perforce commands will automatically pick up the P4PORT setting from the system environment, the registry, P4CONFIG files, etc., but there isn't a guarantee that any given Perforce client machine will have a connection set up via one of these mechanisms (in the most perverse case, someone might specify the P4PORT on every command via the -p global flag).
When scripting Perforce commands it is generally reasonable to assume a correctly configured environment that includes valid P4PORT/P4USER/P4CLIENT settings, and error out if the user hasn't provided those. A script run from within a shell where the user is using the P4 CLI will already have a correctly configured environment, as will a script run from P4V as a "Custom Tool".
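A minimal sketch of that fail-fast preamble (it checks only environment variables; a real script might also consult `p4 set`, which folds in P4CONFIG files and the registry — the placeholder values at the bottom are for illustration only):

```shell
# Error out unless the Perforce environment is configured.
check_p4_env() {
  for var in P4PORT P4USER P4CLIENT; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "error: $var is not set; configure your Perforce connection" >&2
      return 1
    fi
  done
  echo "Perforce environment OK"
}

# Placeholder values for illustration only.
export P4PORT=perforce:1666 P4USER=alice P4CLIENT=alice_ws
check_p4_env
```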

Related

How to move Perforce to a different Hard Drive/SSD on Linux server

I recently bought a dedicated server that has 2x480GB SSD. I installed Ubuntu1604-desktop_64 on it, then installed Perforce following Michael Allar's tutorial: https://youtu.be/5krob9SlVu4. Everything went well; I populated the server with my files using P4V, but was surprised to see that I apparently only have 20GB of storage.
By using PuTTY, I connected to the server and with the df -h command, here's what it shows me :
Server space
From what I see, the Perforce server is on /dev/md1, and only has 20GB of storage. It seems it would be way better to have it on /dev/md2, that has 399GB available. Is there a way that I can transfer the Perforce server/depot to that drive instead?
Thank you!
You will need to log in to the server and move the actual files, and let Perforce know where you moved them to. The two directories you might be concerned with are:
the server root. This is defined by your P4ROOT environment variable, or the -r flag on the p4d startup command. The server root is where the database files (db.*) live. It's also by default where everything else lives, although in practice for best performance/reliability it's generally recommended to have the db on its own drive and configure checkpoints and archives to live elsewhere.
the depot(s). This is defined by the Map: field in the p4 depot spec. The depot is where actual file content lives (usually the bulk of the data in a Perforce server, and also infrequently accessed relative to the database -- it's pretty common to put the depot on a larger slower disk/RAID while having the db on an SSD). By default this is a relative path (and interpreted relative to P4ROOT), but you can set it to an absolute path.
Decide which of those you're moving, move it, and update the corresponding configuration (i.e. P4ROOT if you moved the server root, or the depot Map if you moved the depot).
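A sketch of the server-root move, assuming p4d runs under systemd as a service named perforce and that /mnt/big is the larger volume (both are placeholders; take a verified checkpoint before touching anything):

```shell
# 1. Checkpoint, then stop the server so the db.* files are quiescent.
p4d -r /opt/perforce/root -jc
sudo systemctl stop perforce

# 2. Copy the server root to the larger volume, then restart the server
#    with its P4ROOT (or -r flag) pointing at the new path.
sudo rsync -a /opt/perforce/root/ /mnt/big/perforce/root/
sudo systemctl start perforce

# 3. If instead you moved only a depot's archive files, edit its Map: field,
#    e.g. change "depot/..." to the absolute path "/mnt/big/depots/depot/...".
p4 depot depot
```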

Perforce. Copy from one server to another

We have two Perforce servers. I need to copy everything from a depot on one server to a depot on the other. The copy command doesn't work across different servers.
Is it possible?
You didn't mention if you just need the head revisions or if you need full history, whether this is a one-time request or part of a regular process, whether both servers are under your control, etc.
So some of this is speculation, but here are three possible approaches:
Create a workspace for each server, both pointing to the same place on your workstation. Sync the files from the source server, then submit them to the target server.
Create a remote depot on the target server, pointing to the source server. Then integrate the files from the remote depot to their desired location in the target server.
Use the P4Transfer utility: https://swarm.workshop.perforce.com/projects/perforce-software-p4transfer/
If none of these seem appropriate for you, perhaps you have special needs. There are a number of other options available, including special tools that need some assistance to use, but if you find you have such custom needs you should contact Perforce Technical Support for more precise guidance.
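The first approach can be sketched as follows, with placeholder ports and workspace names; both workspaces must share the same Root directory, and this copies head revisions only, not history:

```shell
# Pull the head revisions from the source server into the shared directory.
p4 -p source:1666 -c src_ws sync

# Detect those files as adds/edits against the target server, then submit.
p4 -p target:1666 -c dst_ws reconcile //...
p4 -p target:1666 -c dst_ws submit -d "One-time import from source server"
```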

P4V - Duplicate workspace pointing to existing data

I was wondering if anyone had any advice on how to do the following task in P4V (I am not too familiar with P4V commands, so apologies if this is some basic command that I am missing).
Currently I have a workspace setup and data synced to my root
e.g. C:\Data\
I access this workspace from two different Windows machines (the data is on both machines at C:\Data).
Now I need to move the location where the data is stored on ONE of the machines and not the other (Machine A: C:\Data, Machine B: D:\Data).
Is this possible to do without having to sync all the data again from the server (there is a lot of it, and I have bandwidth limitations)?
My initial thought was to create another workspace pointing to the new root, but I do not know how to get this new workspace to pick up the data files at that location.
Any help would be greatly appreciated
Thanks in advance
I don't know of a way to do this through P4V, but it can be done with the command line client. Here's the procedure.
After you have moved your files on machine B, and created a new workspace (without performing an "update all"), you can pass the -k switch to the sync command to let the server know what files you have.
From the web page to which I linked:
Keep existing workspace files; update the have list without updating
the client workspace. Use p4 sync -k only when you need to update the
have list to match the actual state of the client workspace.
And the command line help has this to say:
The -k flag updates server metadata without syncing files. It is
intended to enable you to ensure that the server correctly reflects
the state of files in the workspace while avoiding a large data
transfer. Caution: an erroneous update can cause the server to
incorrectly reflect the state of the workspace.
FYI: p4 flush is an alias for p4 sync -k
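Putting the procedure together for machine B, after the files have been moved to D:\Data (the workspace name is a placeholder):

```shell
# Create the new workspace with Root: D:\Data -- do not sync yet.
p4 client new_ws

# Record the files as "had" without transferring any data.
p4 -c new_ws sync -k        # equivalently: p4 -c new_ws flush

# Optional sanity check: list unopened files whose content differs
# from what the server now believes you have.
p4 -c new_ws diff -se
```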
You can also look at the AltRoots field in the workspace. You could have one root at c:\data and the other at d:\data. As raven mentioned since the data is living on two separate disks you'll need to make sure that the data is kept in sync on both machines, although I assume you've already figured this part out since you've been running on two machines.
Any reason you can't just have one workspace per machine?

Is there any JSch ChannelSftp function that works like the command 'cp'?

I am working with jsch-0.1.41, operating on resources on a remote Linux server via ChannelSftp. I find that there is no function providing functionality similar to the shell command "cp". I want to copy a file from one directory to another; both are remote directories on the Linux server.
If anything in my description is unclear, please point it out. Thanks.
The SFTP protocol doesn't offer such a command, and thus also JSch's ChannelSftp doesn't offer it.
You have basically two choices:
Use a combination of get and put, i.e. download the file and upload it again. You can do this without local storage (simply connect one of the streams to the other), but this still requires moving the data twice through the network (and encrypting/decrypting twice), where it wouldn't be really necessary. Use this only if the other way doesn't work.
Don't use SFTP, but use an exec channel to execute a copy command on the server. On unix servers, this command is usually named cp, on Windows servers likely copy. (This will not work if the server's administrator somehow limited your account to SFTP-only access.)
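For the second option, the exec channel simply runs an ordinary command on the server; the equivalent with the stock OpenSSH client (hostname and paths are placeholders) is:

```shell
# Copies the file entirely on the server side -- the file data
# never crosses the network.
ssh user@remote.example.com 'cp /data/src/file.txt /data/dst/file.txt'
```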

How should I completely mirror a server using rsync, without using the root password?

I use rsync to mirror a remote server. It is said that using the root password with rsync is dangerous, so I created a special rsync user. It seems to work fine, but it cannot copy some files because of file permissions. I want to mirror whole directories for backup, and I guess this cannot be done without using the root password; if root does not grant permissions on specific files, no other account can read them. Are there other solutions, and why shouldn't I use the root account with rsync? (I only do one-way copying, which does not affect the source.)
If you want the whole server, then yes, you need root. However, instead of "pulling" (where you have a cron on your local server that does "rsync remote local"), can you possibly do it by "push" (where you have a cron on the remote server that does "rsync local remote"?) In this case, you won't need to configure the remote server to accept inbound root connections.
One option is to use an ssh login as root, but using ssh pubkey authentication instead of a password. In general, pubkeys are the way to go if you want to automate this later.
You'll want to look into the PermitRootLogin sshd_config setting, in particular the without-password setting or, if you want to get even more sophisticated and (probably) secure, the forced-commands-only setting.
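A sketch of the pull side, assuming key-based root login is allowed on the remote (hostname, key path, and destination are placeholders):

```shell
# -a keeps permissions, ownership, and times; -H, -A, and -X additionally
# preserve hard links, ACLs, and extended attributes; --numeric-ids avoids
# uid/gid remapping across machines; --delete makes the mirror exact.
rsync -aHAX --numeric-ids --delete \
    -e 'ssh -i /root/.ssh/backup_key' \
    root@remote.example.com:/ /backup/remote-mirror/
```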
Some useful links:
http://troy.jdmz.net/rsync/index.html
http://www.debian-administration.org/articles/209
