While creating a workspace in Perforce, I got the error below:
You should define workspace view in more detail. (minimum 2 depth)
This is not a standard Perforce error, and is therefore most likely coming from a custom trigger set up by your Perforce admin. In order to resolve a custom trigger failure you will need to consult with your Perforce administrator (i.e. the person who defined the trigger) to determine what conditions are required to satisfy the trigger.
If you would like to learn more about how triggers are defined, see https://www.perforce.com/manuals/p4sag/Content/P4SAG/chapter.scripting.html (this won't directly help you as an end user hitting a trigger failure, but it may provide additional context on how triggers work from your admin's perspective).
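For illustration only (the trigger name and script path here are invented, and your admin's actual trigger will differ): a trigger that validates client specs as they are saved would typically be registered in the triggers table as a form-save entry, something like

    Triggers:
        workspace-view-check form-save client "python /p4/triggers/check_view.py %formfile%"

The script behind such a line is what decides whether your workspace spec is acceptable, which is why only the admin who wrote it can tell you exactly what it wants.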
The workaround/fix I found is:
Make sure to select a folder as the "Workspace root" whose last two levels are empty folders.
For example, if you choose "C:\Users\stackuser\workspace\project\codes", then make sure "project" and "codes" are both empty folders.
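My guess at what the trigger is actually checking (this is an assumption; only your admin knows the real rule) is the View field of the client spec: it seems to want mappings that go at least two levels below the depot root, for example

    View:
        //depot/project/codes/... //my_workspace/project/codes/...

rather than a catch-all mapping such as //depot/... //my_workspace/..., so adjusting the view (or the root folder as described above) should satisfy it.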
I recently exported and imported a VSTS build process definition to create a similar build for a similar project. However, when I try to save the definition, VSTS displays this error:
"No agent queue was found with identifier x."
Does anyone know of a cause? I looked at some other online posts and they were related to security settings, which are all correct.
There are also a few related to build capabilities, but this is not that error.
When you import a build definition, it pulls across all the values from drop-downs, including the "Default agent queue". For projects in the same VSTS account, this value ("Hosted" in my case) will have the same name in the original definition and the new definition you are creating. However, the IDs may be different or not "hooked up" correctly by the import process.
Select the "Process" header before the beginning of your task list.
Click the "Default agent queue" drop-down.
Select your "other" agent pool that has the same name. In my case, there were 2 items named "Hosted". I picked the one that was not already selected.
Now your build will save and queue.
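For what it's worth, the reason the name looks right but the save still fails seems to be that the exported definition carries the queue's numeric ID as well as its name, and it's the ID that must exist in the target project. In the exports I've looked at, the relevant JSON fragment looks roughly like this (the ID value is just an example, and field names may vary between VSTS versions):

    "queue": {
        "id": 42,
        "name": "Hosted"
    }

Re-selecting the queue in the UI, as above, rewrites that ID to one that actually exists in the target project.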
I am trying to create a workspace in Perforce so that it will be available to another person who would like to add or submit files. I was following this link, which suggests creating a new workspace:
https://www.perforce.com/perforce/doc.current/manuals/p4v-gs/01_p4v-gs.html#1060773
I have one question: why does Perforce look at my local directory, which can be deleted at any time?
The local directory that you mention is the root of the workspace. That's how Perforce knows where to put files on your machine when you tell it to get them from the depot.
You have a couple of options. One is to create the workspace, set its root to /dev/null (or some other non-existent directory), and ask the eventual owner to change it before using the workspace. They should also change the ownership to themselves. It is important that you don't "lock" the workspace when you create it, otherwise they won't be able to modify it, take ownership, or even use it.
Another option would be for you to create a workspace (for yourself), and then ask the other user to create their own workspace using yours as a template. The command-line way is p4 client -t. Doing this via P4V, you would ask the user to find your workspace in the Workspaces view, right-click it, and choose "Create/update Client using Clientname as Template..." from the context menu (the wording might be slightly different, I don't have P4V open right now).
I recommend you go with option two. Doing so will automatically set ownership to the correct user, and the user will also be prompted to choose a local directory for the workspace root.
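For reference, the command-line equivalent (the workspace names are just examples) is:

    p4 client -t alice_project_ws bob_project_ws

This opens the new client spec pre-filled from the template; the other user then sets Root: to a local directory of their choice before saving, and Owner: defaults to whoever runs the command.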
A remote CMIS repository contains many folders/files.
I am writing software that keeps a local copy of these folders/files in sync.
At first run I just download everything recursively.
At later runs, I check what has changed, and download any changes.
What is the most efficient way to check the remote changes?
(additional/removal of files/folders)
Most efficient = Least bandwidth usage.
I can only use the CMIS protocol, and I can not run any custom software on the remote server.
My ideas so far:
Idea 1: Re-download everything every time.
Idea 2: Check the root folder's modification date, hoping modification dates are recursive.
Idea 3: Use CMIS search to find all files that are more recent than the last time I synchronized. Problem: that won't tell me which files have been removed.
Any other ideas?
I don't know the CMIS protocol well; there might be something more convenient.
Using the repository's change log is the right way to go, but realize that not every repository supports this. For example, for Alfresco you must configure the audit sub-system and you must set audit.cmischangelog.enabled=true in alfresco-global.properties.
To find out whether your repo supports change logs, you can look at the results of the repository's getCapabilities response. If you see 'Changes' set to 'None', then your repository doesn't support change logs.
Assuming it does, you need to ask the repository for its latest change log token. You can get that from getRepositoryInfo. Save that before you call getContentChanges. Then, on the next call, pass in the token. You'll get the changes made since the token was issued.
So, your code needs to:
Check getCapabilities for something other than Changes = None
Save the getRepositoryInfo's latestChangeLogToken
The first time you ask, call getContentChanges with no arguments
The next time you ask, call getContentChanges with the last saved token
You can then process the result set. Each change log entry tells you its type (created, updated, deleted, permissions, etc., see spec for exact values) and provides the cmis:objectId of the changed object.
Repeat from step 2.
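A minimal sketch of those steps in Python, using the cmislib client library (the endpoint, credentials, and the exact capability/info key names are assumptions on my part, so check them against your repository):

    from cmislib import CmisClient

    # Hypothetical endpoint and credentials
    client = CmisClient('http://cmis.example.com/cmisatom', 'admin', 'admin')
    repo = client.defaultRepository

    # 1. Check that the repository exposes a change log at all
    changes_capability = repo.getCapabilities().get('Changes')
    if changes_capability is None or str(changes_capability).lower() == 'none':
        raise RuntimeError('This repository does not support change logs')

    # 2. Remember the latest change log token before asking for changes
    token = repo.getRepositoryInfo().get('latestChangeLogToken')

    # 3. First run: no arguments. 4. Later runs: pass the token you saved last time.
    changes = repo.getContentChanges()
    # changes = repo.getContentChanges(changeLogToken=saved_token)

    for change in changes:
        # Each entry says what happened and to which object
        print(change.changeType, change.objectId)

    # Persist `token` (a file or small DB) and reuse it on the next run.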
I have a "cmis-sync" script that does one-way synchronization using this approach implemented in Python. I've tested it against Alfresco as the source and the OpenCMIS InMemory repository as the target. If there is interest I can make it available.
A better version of idea 3 is readily achievable, based on some digging through the CMIS specification you posted.
2.1.11 Change Log
CMIS provides a “change log” mechanism to allow applications to easily discover the set of changes that have occurred to objects stored in the repository since a previous point in time. This change log can then be used by applications such as search services that maintain an external index of the repository to efficiently determine how to synchronize their index to the current state of the repository (rather than having to query for all objects currently in the repository).
Entries recorded in the change log are referred to below as “change events”.
Note that change events in the change log MUST be returned in ascending order from the time when the change event occurred.
Using whatever tools of your choice, you should be able to do an initial pull of the entire repository and save the time the pull was performed. Subsequent queries to the repository (at an interval of your choosing) are done with the following procedure:
Pull down the CMIS changelog from the repository
Parse all changes created after the previous pull
Perform operations based on the ChangeType enum: for example, if the "deleted" enum is present for an objectID, delete that object locally.
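To make that last step concrete, here is a rough sketch of the dispatch (download_object and delete_local are hypothetical placeholders for your own mirroring code):

    def apply_change(change, download_object, delete_local):
        # CMIS change types are 'created', 'updated', 'deleted' and 'security'
        change_type = change.changeType.lower()
        if change_type in ('created', 'updated'):
            download_object(change.objectId)   # (re)fetch content and metadata
        elif change_type == 'deleted':
            delete_local(change.objectId)      # remove the local copy
        # 'security' changes affect permissions only, not local content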
I currently have 3 codelines in my Perforce depot:
Main
Development
Release
The idea is that changes will be integrated into Main from the release and dev branches. But as of now, some of the devs are making changes directly to the Main branch. Is there a way to freeze check-ins on the "Main" codeline and allow only integrations via branch mappings in Perforce?
Or is there any other best practice (restriction) that can be applied to avoid direct check-ins into the Main branch?
Thx
As Adam said, you should use permissions to limit access to the Main branch. You can do this either by using the Admin tool, or by running p4 protect from the command line (as long as you have super user access).
You should limit the permissions for Main to read for most people, and allow write for those you trust to submit to the Main branch. You can also choose to give "normal" users open access instead of read, which will allow them to check files out, but not submit them.
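As a rough illustration (the group name and depot paths are made up, later lines override earlier ones, and you should keep your existing super line), the Protections field could contain something like:

    write user  *                *  //...
    read  user  *                *  //depot/Main/...
    write group main-committers  *  //depot/Main/...

Swap read for open on the second line if you want everyone to be able to check Main files out without being able to submit them.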
Another thing to consider apart from just setting the permissions is the approach we are running for some of our branches:
We use a review tool (ReviewBoard in our case) and have a Perforce trigger in place that checks whether there is a review in ReviewBoard matching the following criteria:
there is a review associated with the current Perforce change number
the latest review has the "ship it" flag set to True
the reviewer in ReviewBoard is NOT the submitter of the change list.
the list of files in the review matches the list of files in the change list
You could be even more restrictive, e.g. require that the reviewer be a specific person (the guard of the branch). The advantage over just setting hard permissions is that you keep control over what gets submitted and at what quality. This also lets you submit important bugfixes to the Main branch without first messing with p4 protect.
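To sketch what that looks like in practice (every name, path and ReviewBoard field below is an assumption from memory, so treat it as a starting point rather than working code): the trigger is a change-submit entry in the triggers table,

    review-gate change-submit //depot/Main/... "python /p4/triggers/check_review.py %change%"

and the script roughly does this:

    import sys
    import requests

    RB_URL = 'https://reviewboard.example.com'   # hypothetical server

    def find_problem(change_number):
        resp = requests.get(RB_URL + '/api/review-requests/',
                            params={'changenum': change_number},   # assumed filter name
                            auth=('trigger-bot', 'secret'))
        resp.raise_for_status()
        reviews = resp.json().get('review_requests', [])
        if not reviews:
            return 'No review is associated with change %s' % change_number
        if reviews[0].get('ship_it_count', 0) < 1:
            return 'The review for change %s has no "Ship It"' % change_number
        # Checks for reviewer != submitter and matching file lists would go here.
        return None

    if __name__ == '__main__':
        problem = find_problem(sys.argv[1])
        if problem:
            print(problem)
            sys.exit(1)   # a non-zero exit rejects the submit
        sys.exit(0)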
I have a team that will be using CruiseControl for continuous integration, and CC will be running on a Linux server. More than one team member may need to access the CC configuration itself: changing how tests are run, changing notification rules, etc.
What is the best practice for this?
My first thought was to set up a cc-users group, then make a shared directory somewhere (such as /usr/local, I suppose, or a new directory like /projects), where the directory has r/w for the group.
Am I missing any complications with this plan? Obviously, I've never been in charge of such a project before, otherwise I wouldn't ask such a question.
FWIW, my intention is to have all the cc configuration files under mercurial so we can roll back in case of breakage.
I have version-controlled the whole CruiseControl configuration, along with the project-specific config files underneath it. This way, write access can be controlled as required using your source control tool's access control (Subversion in our case), which gives you change tracking as well. Whoever needs to make a change can check out config.xml into their own workspace, make their changes, and commit. You may want to consider the same approach.
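For example (the paths and URL are made up), the day-to-day flow is just ordinary Subversion usage:

    svn checkout https://svn.example.com/buildinfra/cruisecontrol cc-config
    cd cc-config
    # edit config.xml or one of the project-specific files it includes
    svn diff config.xml
    svn commit -m "Run the slow test suite nightly instead of per commit"

The working copy that CruiseControl actually reads from is then updated (manually or via a small cron job running svn update); in our setup CruiseControl notices the changed config.xml and reloads it, though you should verify the reload behaviour of your version. The same flow works with Mercurial if you prefer to stick with it.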