I have to regularly do a clean Perforce sync to new hardware/virtual machines over the VPN. This can take hours as the project is quite large. Is there a way that I can simply copy an up-to-date tree from an existing client and tell Perforce to use this tree?
The Perforce Proxy is the right way to go, but if you really want to, there is a way to do what you asked via the sync command, with the -k switch:
The -k flag bypasses the client file update. It can be used to make the server believe that a client workspace already has the file. Typically this flag is used to correct the Perforce server when it is wrong about what files are on the client; use of this option can confuse the server if you are wrong about the client's contents.
p4 sync -k //depot/someProject/...
You can also use flush, which is a synonym for sync -k:
p4 flush //depot/someProject/...
Just be careful. Remember those last words, "...use of this option can confuse the server if you are wrong about the client's contents."
Perforce Proxy is almost certainly the way to go, assuming you can dedicate a local machine for this purpose.
A useful tip for a proxy is to get it to refresh its contents overnight: create a dummy client (perhaps on the proxy machine) and kick off a nightly task to do a sync - a normal sync will do; it does not need to be a clean one. This will ensure that any big changes people have checked in won't cause a massive lag the first time you need to do a local sync.
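For example, a minimal sketch of such a nightly refresh, assuming a throwaway client called proxy-warm and a cron entry on the proxy machine (client name, port, and depot path are illustrative):
# crontab entry: warm the proxy cache at 02:00 every night via the dummy client
0 2 * * * p4 -p proxyhost:1666 -c proxy-warm sync //depot/someProject/... > /dev/null 2>&1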
Note that you need a live VPN connection between the proxy and the server - the proxy still has to talk to the server to determine if it has the right versions cached. So the proxy needs a reasonably low latency link to the server, but at least you don't have to wait for actual file transfer.
Another alternative you may want to try is to use the compress option in your client specs (workspaces). This tells the server to compress each file before it gets sent, and your p4 client will decompress automatically. The trade-off here is CPU time on both the server and the client. However, given you want to sync several local clients, I think proxy will ultimately be the better solution.
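If you want to try the compress route, the change lives in the Options field of the client spec; a sketch (the other option values shown are just the defaults):
p4 client    # opens your client spec in an editor
# change the Options line from:
#   Options:  noallwrite noclobber nocompress unlocked nomodtime normdir
# to:
#   Options:  noallwrite noclobber compress unlocked nomodtime normdir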
No, but you shouldn't need to: why do you need to do a clean Perforce sync? What's wrong with a normal sync? If you need to clean the tree, then why not work on a copy of the tree?
One alternative might be to run a p4proxy on your end of the VPN connection, then unchanged files won't have to be transferred over the VPN.
If you only require an export - that is, you don't need to keep it up to date or submit changes from it - then you could simply copy an existing checkout and never use Perforce against that tree. But I don't know of any way of convincing the Perforce server that you have a checkout without p4 actually checking out the files.
My task is to sync folders between two computers: one is a Windows server, which acts as the host, and the other is a Linux-based server. The file transfer has to be secure and encrypted. Are there any free software tools that will help me do this?
Additionally, the syncing should happen automatically at a pre-decided interval.
I have a recollection that WinSCP can be invoked through the command line. There you have the option to synchronize folders (and the whole hierarchy therein). It may be worth trying (see the sketch below).
Total Commander also has FTP/SFTP capabilities, but I'm not sure you can invoke it through the command line.
One point to consider: if the process is to run automatically, you need to hard-code the username and password for the connection. That is where your security becomes compromised.
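As a rough sketch of the WinSCP route (host, credentials, paths, and host key are placeholders - check the WinSCP scripting docs for the exact syntax of your version):
# sync.txt - WinSCP script; SFTP runs over SSH, so the transfer is encrypted
open sftp://backupuser:secret@linuxserver.example.com/ -hostkey="ssh-ed25519 255 ..."
synchronize remote C:\data /srv/data
exit
# run it on a schedule, e.g. hourly from Windows Task Scheduler:
winscp.com /script=C:\sync\sync.txt /log=C:\sync\sync.log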
I need to copy a depot from one Perforce server to another. The file revision history needs to stay intact, but the user information and workspace information must not be copied to the new server.
I've tried a standard checkpoint creation and restore procedure, but if there exist users or workspaces with the same name on both servers, the source server will overwrite this info on the destination server. This is pretty bad if those user accounts and workspaces do not have exactly identical details.
The goal of this sort of operation is to allow two separate, disconnected groups to view a versioned source tree with revision history. Updates would be single directional with one group developing and one just viewing. Each group's network is completely enclosed, no outside connections of any kind.
Any ideas would be appreciated; I've been busting my brains on this one for a while.
EDIT:
Ultimately my solution was to install an intermediate Perforce server on the same machine as my source server. Using that, I could do a standard backup/restore from the source server to the intermediate server, then delete all unwanted metadata on the intermediate server before backing up from the intermediate server to the final destination server. Pretty complicated, but it got the job done, and it can all be done programmatically in Windows PowerShell.
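For anyone attempting the same thing, the rough shape of the steps I scripted was something like this (paths, port, checkpoint number, and the client/user names are illustrative placeholders):
# checkpoint the source server and restore it into the intermediate server's root
p4d -r C:\p4\source -jc
p4d -r C:\p4\intermediate -jr C:\p4\source\checkpoint.123
# start the intermediate server (in its own console or as a service)
p4d -r C:\p4\intermediate -p 1667
# strip the metadata that must not travel to the destination
p4 -p localhost:1667 client -d -f some_workspace
p4 -p localhost:1667 user -d -f some_user
# checkpoint the cleaned-up intermediate server; move that checkpoint to the
# destination network and restore it there with another p4d -jr
p4d -r C:\p4\intermediate -jc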
There are a few ways, but I think you are going about this one the hard way.
Continue to do what you are doing, but delete db.user, db.view (I think), and db.group. Then when you start the Perforce server it will recreate these, but they will be empty, which will make it hard for anyone to log in, so you'll have to create users/groups again. I'm not sure if you can take those db files from another server and copy them in; I've never tried that.
The MUCH easier way: make a replica. http://www.perforce.com/perforce/r10.2/manuals/p4sag/10_replication.html Make sure you look at the p4d -M flag so that it is a read-only replica. I assume you have a USB drive or something to move between networks, so you can just issue a p4 pull onto the USB drive, then move the drive and either run the replica off the USB drive or issue another p4 pull, pulling to a final server. I've never tried this, but with some work it should be possible; you'll have to run a server off the USB drive to issue the final p4 pull.
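I have not set this up myself, but a very rough sketch of the replica configuration (server name, port, paths, and checkpoint number are made up) might look like:
# on the master: tell the replica where to pull from and what to keep pulling
p4 configure set Replica1#P4TARGET=master:1666
p4 configure set Replica1#startup.1="pull -i 1"       # metadata
p4 configure set Replica1#startup.2="pull -u -i 1"    # archive/file content
# seed the replica root (e.g. on the USB drive) from a master checkpoint,
# then start it as a read-only replica
p4d -r /usb/replica -jr checkpoint.123
p4d -r /usb/replica -In Replica1 -p 1667 -M readonly -D readonly -d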
You could take a look at Perforce Git Fusion and make some Git clones.
You could also look at remote depots. Basically, you create a new depot on your destination server and point it at a depot on your source server. This works if you have a fast connection between the two servers. Protections are handled by the destination server, as to who has access to that new depot. The source server can be set up to share it out as read-only to the destination server. Here is some info (and a sketch of the depot spec follows below):
http://answers.perforce.com/articles/KB_Article/Creating-A-Remote-Depot
Just make sure you test it during a slow period, as it can slow down the destination server. I tried it from two remote locations, both on the US east coast, and it was acceptable but not too useful. If both servers are in the same building it would be fine.
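For reference, the remote depot spec on the destination server ends up looking roughly like this (server address and depot name are examples; the KB article above covers the protections entry needed on the source server):
p4 depot sourcecode      # on the destination server; then fill in the spec with:
# Type:     remote
# Address:  source-server.example.com:1666
# Map:      //depot/...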
I bought a new server and I want to move all the data (directories, sub directories, users, passwords, ..etc) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to make a clone of the disk from the old server onto the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
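A sketch of the dd-over-netcat variant (device names, host, and port are examples - double-check the target device, since dd will happily overwrite it):
# on the new server (receiving side): listen and write the incoming stream to its disk
nc -l -p 1234 | dd of=/dev/sdb bs=64K    # some netcat variants want "nc -l 1234"
# on the old server (sending side): read the disk and stream it across the network
dd if=/dev/sda bs=64K | nc newserver.example.com 1234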
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, Telnet, MysqlAdmin, or any other RDBMS client and export a dump file, then log in to the new server's SQL system and import that dump file.
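Assuming the users live in a MySQL database, the dump/restore step might look roughly like this (database name and credentials are placeholders):
# on the old server: export the database to a dump file
mysqldump -u root -p sitedb > sitedb-dump.sql
# on the new server: create the database and import the dump
mysql -u root -p -e "CREATE DATABASE sitedb"
mysql -u root -p sitedb < sitedb-dump.sql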
In any case, you should give more details about both servers so we can help you. For example: are they shared hosting or dedicated machines? What kind of access do you have to them? Also, knowing their operating system would help people reply to you accurately.
In principle, yes.
If the hardware is similar (i.e. just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then install the boot loader once more (the boot loader config usually changes when the hard disk size changes).
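A rough sketch of that copy-everything approach with rsync and GRUB (paths and device names are examples, and the exact GRUB commands vary by distribution):
# copy the whole filesystem, preserving permissions, ACLs, xattrs and hard links,
# while skipping pseudo-filesystems and mount points
rsync -aAXH --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*"} / root@newserver:/
# then, booted into (or chrooted into) the new system, reinstall the boot loader
grub-install /dev/sda
update-grub    # or grub-mkconfig -o /boot/grub/grub.cfg on non-Debian systems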
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet allows you to create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.
Hi, this may be a redundant question, but I have a hunch there is a tool for this - or there should be, and if there isn't I might just make it - or maybe I am barking up the wrong tree, in which case correct my thinking:
My problem is this: I am looking for some way to migrate large virtual disk drives off a server once a week over an internet connection of only moderate speed, in a solution that must be able to be throttled for bandwidth because the internet connection is always in use.
I thought about it and the problem is familiar: large files that can be moved, with throttling, in a way that easily survives disconnection and reconnection, etc. - the only solution I am familiar with that just does this perfectly is torrents.
Is there a way to automatically make torrents strategically and automatically "send" them to a client's download list remotely? I am working with a Windows Hyper-V host, but I use only Linux for the guests, and I could easily cook up a guest to do the copying, so consider it a Windows or Linux problem.
PS: the VHDs are "offline" copies of guest servers by the time I am moving them - consider them merely 20-30 GB dumb files.
PPS: I'd rather avoid spending money.
Bittorrent is an excellent choice, as it handles both incremental updates and automatic resume after connection loss very well.
To create a .torrent file automatically, use the btmakemetainfo script found in the original BitTorrent package, or one from the numerous rewrites (BitTornado, ...) -- all that matters is that it's scriptable. You should take care to set the "disable DHT" flag in the .torrent file.
You will need to find a tracker that allows you to track files with arbitrary hashes (because you do not know these in advance); you can either use an existing open tracker, or set up your own, but you should take care to limit the client IP ranges appropriately.
This reduces the problem to transferring the .torrent files -- I usually use rsync via ssh from a cronjob for that.
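A sketch of the sending side, using transmission-create rather than the original btmakemetainfo (tracker URL, paths, and host are placeholders; the principle is the same whichever scriptable torrent maker you pick):
# make a private (DHT/PEX disabled) torrent for this week's VHD export
transmission-create -p -t http://tracker.example.com:6969/announce \
    -o /exports/vm-backup.torrent /exports/vm-backup.vhd
# hand the tiny .torrent file to the receiving side; a client watching that
# directory picks it up and starts the throttled, resumable download
rsync -e ssh /exports/vm-backup.torrent user@remote.example.com:/srv/torrents/watch/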
For point-to-point transfers, BitTorrent is an expensive use of bandwidth. For 1:n transfers it is great, as the distribution of load allows each client's upload bandwidth to be shared with other clients, so the bandwidth cost is amortised and everyone gains...
It sounds like you have only one client in which case I would look at a different solution...
wget allows for throttling and can resume transfers where they left off if the FTP/HTTP server supports resuming... That is what I would use.
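For example, something along these lines (URL and rate cap are placeholders):
# -c resumes a partial download, --limit-rate caps the bandwidth used
wget -c --limit-rate=500k http://oldserver.example.com/exports/vm-backup.vhd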
You can use rsync for that (http://linux.die.net/man/1/rsync). Search for the --partial option in the man page and that should do the trick. When a transfer is interrupted, the unfinished result (file or directory) is kept. I am not 100% sure if it works with the ssh transport when you send from a local to a remote location (never checked that), but it should work with an rsync daemon on the remote side.
You can also use it to sync between two local storage locations.
rsync --partial [-r for directories] source destination
Edit: just confirmed that it also works with the ssh transport, so the caveat above no longer applies.
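Putting that together with the bandwidth cap the question asks for, a fuller invocation might look like this (host, paths, and the rate are examples):
# --partial keeps interrupted files so the transfer can resume,
# --bwlimit throttles to roughly 500 KB/s, and -e ssh keeps it encrypted
rsync --partial --progress --bwlimit=500 -e ssh \
    /exports/vm-backup.vhd user@remote.example.com:/srv/backups/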
I have a large Perforce depot and I believe my client currently has about 2 GB of files that are in sync with the server, but what's the best way to verify that my files are complete, in sync, and up to date at a given change level (which may be higher than the level of a handful of files currently on the client)?
I see the p4 verify command and its MD5s, but these just seem to be for the server's various revisions of each file. Is there a way to compare the MD5 on the server with the MD5 of the revision required on my client?
I am basically trying to minimize bandwidth and time consumed to achieve a complete verification. I don't want to have to sync -f to a specific revision number. I'd just like a list of any files that are inconsistent with the change level I am attempting to attain. Then I can programmatically force a sync of those few files.
You want "p4 diff -se".
This should do an MD5 hash of each client file and compare it to the stored hash on the server.
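Since you mentioned wanting to programmatically force-sync just the inconsistent files afterwards, the two steps chain together (the depot path is an example):
# unopened files whose content differs from what the server thinks you have
p4 diff -se //depot/someProject/... | p4 -x - sync -f
# unopened files missing from the client entirely
p4 diff -sd //depot/someProject/... | p4 -x - sync -f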
Perforce is designed to work when you keep it informed about the checked-out status of all your files. If you or other programmers on your team are using Perforce and editing files that are not checked out, then that is the real issue you should fix.
There is p4 clean -n (equivalent to p4 reconcile -w -n), which would also get you a list of files that p4 would update. Of course, you could also pass a changelist to align to.
You might want to disable checking for local files that it would delete, though!
If you don't have many incoming updates, you might consider an offline local manifest file with the sizes and hashes of all the files in the repository. Iterating over it and checking each file's existence, size, and hash yields the missing or changed files.
In our company, with the p4 server on the intranet, checking via a local manifest is actually not much faster than asking for p4 clean - but a little! And it uses no bandwidth at all. Over the internet and a VPN it is even better!
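A minimal sketch of the manifest idea with standard tools, here using hashes only (paths are examples, and the manifest is assumed to be generated on a machine known to be at the right change level):
# generate the manifest once, next to a known-good synced tree
cd /path/to/good-client && find . -type f -exec md5sum {} + > /tmp/manifest.md5
# on the machine being checked: report anything missing or changed
cd /path/to/client && md5sum -c --quiet /tmp/manifest.md5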