Perforce workspace disappears after inactivity

For a project I usually create several workspaces on the same host to work on different aspects of the project. However, I've found that workspaces I stop using for more than a couple of weeks disappear (I don't interact with them through the command line or GUI), and I get a 'clientroot missing' error. The workspace folder is still on my local drive. Is there a limit to how many workspaces one can create on a single host, or to how long a workspace can stay inactive before being deleted? Is there a way for me to get it back somehow?
Thanks!

This isn't normal behavior for Perforce. My guess is that your admin is running some sort of home-made cleanup script, which is probably unnecessary or at the very least overzealous (unless you're using the free version and are limited in how many workspaces you can create, in which case I'd suggest changing your workflow so you don't burn through such a limited resource).
If that is the case, you'll need to talk to your admin about exactly what the rules are and whether the workspaces are being archived in any way before they're purged.
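In the meantime, you can check from the command line whether the client spec itself is gone or merely broken, and recreate it without re-downloading anything. A rough sketch, where your_username, old_workspace and //depot/project/... are placeholders:
p4 clients -u your_username      # list the workspaces the server still knows about for you
p4 client -o old_workspace       # print the spec if it still exists
# If the spec is gone but the files are still on disk, recreate it with the
# same Root and View, then tell the server you already have the files:
p4 client old_workspace
p4 flush //depot/project/...     # alias for 'p4 sync -k': updates the have list, transfers nothing
If the admin's script archived the spec rather than deleting it, asking them to restore it is the safer route.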

Related

Cygwin intermittently loses its mapped drives in /cygdrive

So, I have a collection of Windows Server 2016 virtual machines that are used to run some tests in pairs. To perform these tests, I copy a selection of scripts and files from the network onto the machine before running them.
I'm basically using a selection of scripts that have existed around here since before my time, and while I would like to use other methods, so much of our infrastructure relies on these scripts that overhauling the system would be a colossal task.
First up, I sort out the mapped drives with
net use X: \\network\location1 /user:domain\user password
net use Y: \\network\location2 /user:domain\user password
and so on
Soon after, I use rsync to copy files from a location in /cygdrive/y/somewhere to /cygdrive/c/somewhere_else.
During the rsync, I will get errors that "files have vanished" (I'm currently unable to post the exact error; I will edit this later to include it). When I check what's currently in the /cygdrive directory, all I see is /cygdrive/c; everything else has disappeared.
I've tried making a symbolic link to /cygdrive/y in a different location, I've tried including /persistent:yes on the net use command, and I've changed the power settings on the network card so it doesn't sleep. None of these work.
I'm currently looking into the settings for the virtual machines themselves at this point, but I have some doubts, as we have other virtual Windows machines that do not seem to have this issue.
Has anyone heard of anything similar, and/or does anyone know of a decent method to troubleshoot this?
Right, so I've been working on this all day and finally noticed a positive change, but since my systems are in VMware's vCloud, this may not work for some people. It was simply a matter of having the VM turned off and upgrading the Virtual Hardware Version to the latest version. I have noticed with this, though, that upon a restart one of the first messages that comes up mentions that the computer is "disabling group policies".
I did a bit of research into this and found out that Windows 8 and 10 (no mention of any Windows Server machines) both automatically update Group Policies in the background, disconnecting and reconnecting mapped drives to recreate them.
It's possible that changing the Group Policy drive mapping from "recreate" to "update" would fix this issue, and that the Virtual Hardware upgrade happened to resolve it in a similar manner.
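As a stopgap while you chase the root cause, you could also make the copy step defensive: check that the mapping is still there before each attempt and re-run net use if it isn't. A minimal sketch using the paths from the question (credentials and locations are placeholders):
#!/bin/bash
# re-map Y: if /cygdrive/y has vanished, then retry the copy a few times
ensure_drive() {
    if [ ! -d /cygdrive/y ]; then
        net use Y: '\\network\location2' '/user:domain\user' password
    fi
}

for attempt in 1 2 3; do
    ensure_drive
    rsync -av /cygdrive/y/somewhere/ /cygdrive/c/somewhere_else/ && break
    echo "rsync attempt $attempt failed, retrying..." >&2
    sleep 10
done
This doesn't fix the underlying disconnect, but it should at least stop the nightly copies from dying on the "files have vanished" errors.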

P4V - Duplicate workspace pointing to existing data

I was wondering if anyone had any advice on how to do the following task in P4V (I am not too familiar with P4V commands, so apologies if this is some basic command that I am missing).
Currently I have a workspace set up and data synced to my root,
e.g. C:\Data\
I access this workspace from two different Windows machines (the data is on both machines at C:\Data).
Now, I need to move the location where the data is stored on ONE of the machines and not the other (Machine A: C:\Data, Machine B: D:\Data).
Is this possible to do without having to sync all the data again from the server (there is a lot of it, and we have bandwidth limitations)?
My initial thoughts were to create another workspace pointing to another root, but I do not know how to get this new workspace to pick up the data files at that location.
Any help would be greatly appreciated
Thanks in advance
I don't know of a way to do this through P4V, but it can be done with the command line client. Here's the procedure.
After you have moved your files on machine B, and created a new workspace (without performing an "update all"), you can pass the -k switch to the sync command to let the server know what files you have.
From the p4 sync documentation (the page I linked to):
Keep existing workspace files; update the have list without updating the client workspace. Use p4 sync -k only when you need to update the have list to match the actual state of the client workspace.
And the command line help has this to say:
The -k flag updates server metadata without syncing files. It is intended to enable you to ensure that the server correctly reflects the state of files in the workspace while avoiding a large data transfer. Caution: an erroneous update can cause the server to incorrectly reflect the state of the workspace.
FYI: p4 flush is an alias for p4 sync -k
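Concretely, the sequence on machine B might look something like this, where new_ws and //depot/project/... are placeholders:
# 1. move the files from C:\Data to D:\Data outside of Perforce
# 2. create a new workspace whose Root is D:\Data and whose View matches the old workspace
p4 client new_ws
p4 set P4CLIENT=new_ws
# 3. tell the server you already have everything, without transferring any files
p4 flush //depot/project/...
# 4. optional sanity check: list unopened files that differ from the depot
p4 diff -se //depot/project/...
If p4 diff -se comes back empty (or nearly so), the have list and the files on disk agree and you're done.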
You can also look at the AltRoots field in the workspace. You could have one root at c:\data and the other at d:\data. As raven mentioned, since the data is living on two separate disks you'll need to make sure that the data is kept in sync on both machines, although I assume you've already figured this part out since you've been running on two machines.
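If you go the AltRoots route, the relevant part of the client spec (p4 client -o) would look roughly like this, with my_ws as a placeholder name; as I understand it, the client uses whichever of Root or AltRoots matches the directory you're running commands from:
Client: my_ws
Root:   c:\Data
AltRoots:
        d:\Data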
Any reason you can't just have one workspace per machine?

Linux: Remove application settings after program is uninstalled?

I'm writing a program for Linux that stores its data and settings in the home directory (e.g. /home/username/.program-name/stuff.xml). The data can take up 100 MB and more.
I've always wondered what should happen with the data and the settings when the system admin removes the program. Should I then delete these files from every (!) home directory, or should I just leave them alone? Leaving hundreds of MB in the home directories seems quite wasteful...
I don't think you should remove user data, since the program could be installed again in the future, or the user might move their data to another machine where the program is installed.
Anyway, this kind of thing is usually handled by some removal script (it can be make uninstall; more often it's an uninstallation script run by your package manager). Different distributions have different policies. Some package managers have an option to specify whether to remove logs, configuration files (from /etc) and so on. None of them touches files in user home directories, as far as I know.
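For what it's worth, if the program is packaged for Debian/Ubuntu, the conventional place for this decision is the postrm maintainer script: clean up generated system-wide configuration only on purge, and never touch anything under /home. A minimal sketch, with program-name as a placeholder:
#!/bin/sh
set -e
case "$1" in
    purge)
        # remove generated system-wide config/state; per-user data in $HOME is deliberately left alone
        rm -rf /etc/program-name
        ;;
    remove|upgrade|failed-upgrade|abort-install|abort-upgrade|disappear)
        # nothing to do
        ;;
esac
Users who want the space back can then delete ~/.program-name themselves.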
What happens if the home directories are shared between multiple workstations (ie. NFS mounted)? If you remove the program from one of those workstations and then go blasting the files out of every home directory, you'll probably really annoy the people who are still using the program on other workstations.

vlad the deployer - deploying with different users?

We're using vlad the deployer for deploying rails apps to production and test servers. All our servers are Ubuntu servers.
We have a problem related with linux permissions.
Vlad uses ssh to put files on any server, be it production or test. My company has several people, and each one has a different account on each server.
On the other hand, the way our Apache server is configured, it uses the "owner" of a website directory to read the files in that directory.
As a result, the user that makes the first deployment becomes the "owner" of the site; other users can't make deployments - Apache will not be able to read the modified files, since the owner has changed.
Normally this isn't much of an issue, but now holidays are approaching and we'd like to solve this as cleanly as possible - for example, we'd like to avoid sharing passwords/ssh keys.
Ideally I would need one vlad task that does something to the permissions of the deployed files so they could be completely modified by other users. I don't know enough about Unix commands to do this.
I would do it with group permissions.
Have the web root be /var/www/your-app/current.
/var/www/your-app/ should be group-writable by the group that everyone doing deploys belongs to.
Set up the deploy scripts so that they write to a directory called /var/www/your-app/<timestamp>, where <timestamp> is the current timestamp.
/var/www/your-app/current is a symlink, and when you have successfully copied all files to the new directory you update the target of the symlink so that it points to the directory you just created.
This way everyone can deploy, and you can see who deployed what version.
This also makes the deploy atomic, so nothing will break if you lose your network connection in the middle of the deploy.
Since you won't delete the old directories, you can easily roll back to a "last good" state if you manage to introduce a bug.
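A minimal sketch of that layout, assuming a deployers group that everyone who deploys belongs to (the group name, paths and rsync source are placeholders):
#!/bin/sh
set -e
APP=/var/www/your-app
RELEASE=$APP/$(date +%Y%m%d%H%M%S)

# one-time setup, done once as root and assumed here:
#   chgrp deployers /var/www/your-app && chmod 2775 /var/www/your-app

mkdir "$RELEASE"
rsync -a ./ "$RELEASE"/             # copy the new build into the timestamped directory
chmod -R g+rwX "$RELEASE"           # keep it writable by the whole deployers group

ln -sfn "$RELEASE" "$APP/current.tmp"     # build the new symlink next to the old one...
mv -Tf "$APP/current.tmp" "$APP/current"  # ...and swap it into place atomically
Rolling back is then just pointing current at the previous timestamped directory again.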
Why don't you make all the files publicly readable? In the ~/.bashrc of each user put the line
umask o=r
http://en.wikipedia.org/wiki/Umask
BTW I have never heard of such an Apache option; are you saying when Apache reads a file from /home/USER it runs with the UID of USER, instead of "nobody" or "apache"? That sounds wonky.
I've been fighting with it for a couple of months now, and I've only found a couple of ways to do it:
Use a single shared account for all the users deploying to the server (boo!)
Use different accounts, but chown to a common user account (www-data, rails, or similar) before performing account-dependent tasks (such as the svn update). This might work, but I haven't tested it.
Use access control lists. Someone has hinted to me that this might be the right solution; however, I don't have the knowledge or time to make it work properly (there's a rough sketch of this below).
For now, we are just continuing using one single user per project, and chowning everything manually when needed. It's a bit of a pain, but it works.
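For the record, the ACL option mentioned above boils down to a couple of setfacl calls, assuming the filesystem is mounted with ACL support and a deployers group (placeholder name) containing everyone who deploys:
# give the group access to everything that is already there...
sudo setfacl -R -m g:deployers:rwX /var/www/your-app
# ...and make that the default for anything created by future deploys
sudo setfacl -R -d -m g:deployers:rwX /var/www/your-app
The second call is needed because default ACLs only apply to newly created files and directories.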

What's the best way to keep multiple Linux servers synced?

I have several different locations in a fairly wide area, each with a Linux server storing company data. This data changes every day in different ways at each different location. I need a way to keep this data up-to-date and synced between all these locations.
For example:
In one location someone places a set of images on their local server. In another location, someone else places a group of documents on their local server. A third location adds a handful of both images and documents to their server. In two other locations, no changes are made to their local servers at all. By the next morning, I need the servers at all five locations to have all those images and documents.
My first instinct is to use rsync and a cron job to do the syncing overnight (1 a.m. to 6 a.m. or so), when none of the bandwidth at our locations is being used. It seems to me that it would work best to have one server be the "central" server, pulling in all the files from the other servers first. Then it would push those changes back out to each remote server? Or is there another, better way to perform this function?
The way I do it (on Debian/Ubuntu boxes):
Use dpkg --get-selections to get your installed packages
Use dpkg --set-selections to install those packages from the list created
Use a source control solution to manage the configuration files. I use git in a centralized fashion, but subversion could be used just as easily.
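As a sketch of the first two steps (the file name is a placeholder):
# on the reference machine
dpkg --get-selections > packages.list

# on the machine being brought into line
sudo dpkg --set-selections < packages.list
sudo apt-get -y dselect-upgrade     # installs/removes packages to match the selections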
An alternative if rsync isn't the best solution for you is Unison. Unison works under Windows and it has some features for handling when there are changes on both sides (not necessarily needing to pick one server as the primary, as you've suggested).
Depending on how complex the task is, either may work.
One thing you could (theoretically) do is create a script using Python or something and the inotify kernel feature (through the pyinotify package, for example).
You can run the script, which registers to receive events on certain trees. Your script could then watch directories, and then update all the other servers as things change on each one.
For example, if someone uploads spreadsheet.doc to the server, the script sees it instantly; if the document doesn't get modified or deleted within, say, 5 minutes, the script could copy it to the other servers (e.g. through rsync).
A system like this could theoretically implement a sort of limited 'filesystem replication' from one machine to another. Kind of a neat idea, but you'd probably have to code it yourself.
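The answer above suggests pyinotify; a rough shell analogue of the same idea using inotify-tools' inotifywait might look like this (host names and paths are placeholders, and this is very much a sketch rather than a robust replication system):
#!/bin/sh
WATCH_DIR=/srv/company-data
PEERS="server2 server3"

# block on filesystem events and push the tree out whenever something changes
inotifywait -m -r -e close_write,create,delete,move "$WATCH_DIR" |
while read -r _event; do
    for peer in $PEERS; do
        rsync -a "$WATCH_DIR"/ "$peer:$WATCH_DIR"/    # no --delete, to avoid clobbering peers
    done
done
You'd still want the kind of settle delay the answer describes so half-written files aren't copied, plus some loop prevention so the servers don't keep re-triggering each other.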
AFAIK, rsync is your best choice; it supports partial file updates among a variety of other features. Once set up, it is very reliable. You can even set up the cron job with timestamped log files to track what is updated in each run.
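A sketch of the pull-then-push approach from the question, run from cron on the central server (host names and paths are placeholders; nothing is deleted, so conflicting edits to the same file simply resolve to whichever copy was pulled last):
#!/bin/sh
REMOTES="site1 site2 site3 site4"
DATA=/srv/company-data
LOG=/var/log/sync-$(date +%F).log

# pull new and changed files from every site into the central copy
for host in $REMOTES; do
    rsync -av "$host:$DATA/" "$DATA/" >> "$LOG" 2>&1
done

# then push the merged result back out to every site
for host in $REMOTES; do
    rsync -av "$DATA/" "$host:$DATA/" >> "$LOG" 2>&1
done
Scheduled with something like 0 1 * * * /usr/local/bin/sync-sites.sh in the central server's crontab, this fits the 1 a.m. to 6 a.m. window mentioned in the question.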
I don't know how practical this is, but a source control system might work here. At some point (perhaps each hour?) during the day, a cron job runs a commit, and overnight each machine runs a checkout. You could run into issues with a long commit not being finished when a checkout needs to run, and essentially the same thing could be done with rsync.
I guess what I'm thinking is that a central server would make your sync operation easier - conflicts can be handled once on the central server, then pushed out to the other machines.
rsync would be your best choice. But you need to carefully consider how you are going to resolve conflicts between updates to the same data on different sites. If site-1 has updated 'customers.doc' and site-2 has a different update to the same file, how are you going to resolve it?
I have to agree with Matt McMinn. Especially since it's company data, I'd use source control and, depending on the rate of change, run it more often.
I think the central clearinghouse is a good idea.
It depends on the following:
* How many servers/computers need to be synced?
** If there are too many servers, rsync becomes a problem.
** Either you use threads and sync to multiple servers at the same time, or you sync them one after the other. So you are looking at either high load on the source machine, or inconsistent data across the servers at any given point in time in the latter case.
* The size of the folders that need to be synced, and how often they change.
** If the data is huge, rsync will take time.
* The number of files.
** If the number of files is large, and especially if they are small files, rsync will again take a lot of time.
So it all depends on the scenario: whether to use rsync, NFS, or version control.
If there are few servers and only a small amount of data, it makes sense to run rsync every hour. You can also package the content into an RPM if the data changes only occasionally.
With the information provided, IMO version control will suit you best. rsync/scp might give problems if two people upload different files with the same name, and NFS over multiple locations needs to be architected with perfection. Why not have one or more repositories that everyone commits to? All you need to do is keep the repositories in sync. If the data is huge and updates are frequent, your repository server will need a good amount of RAM and a good I/O subsystem.
