How to fix P4V's Clean... failing to clean files that are exclusively locked (Clean... previews via reconcile -n) - perforce

As the title says, I often encounter the situation shown in the screenshot below:
[screenshot: Clean... reports that a file opened with an exclusive lock on another client cannot be cleaned]
I'm using the Clean... command here, not the Reconcile Offline Work... command.

This bug was fixed in the server in release 2016.2:
https://www.perforce.com/perforce/doc.current/user/relnotes.txt
#1382996 (Job #74886, #86396) **
'p4 clean' would fail to sync files when needed when they
are exclusively opened by another client. This has been
fixed.
Double check that you're on release 2016.2+ of the server; if you are, you should at the very least be able to use p4 clean from the command line as a workaround (e.g. by adding it as a custom tool to P4V).
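For example, a custom tool or shell alias could run something like this (the depot path is a placeholder):

p4 clean -n //depot/project/...
p4 clean //depot/project/...

The first command is a preview (-n) showing what would be cleaned; the second actually restores the workspace to match the depot: modified files are re-synced, files missing from the workspace are restored, and files not in the depot are removed.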


Understanding how libzip works

I started working with the libzip library today, but I do not understand how libzip works. My goal is to zip a directory, with all the files and directories within it, into a zip file.
I started with zip_open(), then read the directory contents and added all the directories to the archive with zip_dir_add(). After that, I closed the zip file with zip_close(). Everything was fine.
The next step was to add all the files to the archive with zip_file_add(), but it doesn't work: the last step, closing the archive, fails.
OK, I forgot to create a zip_source to get this done. I added a zip_source_file() call on the line before, but it still doesn't work.
What is wrong in my thinking? Do I have to fopen() and fclose() the file on the filesystem also?
And what is the difference between zip_source_file() and zip_source_filep()?
Do I have to fopen() and fclose() the file on the filesystem also?
No, you can just use zip_source_file().
From your comments I think you have the right general idea, but there is probably some detail that is making it fail. Make sure you perform all the error checking the documentation suggests after each libzip call so you can get more information about what is causing it to fail.
You could also compare your code with https://gist.github.com/clalancette/bb5069a09c609e2d33c9858fcc6e170e
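For reference, here is a minimal sketch of that flow, with error checking after each call (the archive and file names are made up for illustration):

#include <stdio.h>
#include <zip.h>

int main(void)
{
    int err = 0;
    zip_t *archive = zip_open("out.zip", ZIP_CREATE | ZIP_TRUNCATE, &err);
    if (archive == NULL) {
        zip_error_t error;
        zip_error_init_with_code(&error, err);
        fprintf(stderr, "zip_open: %s\n", zip_error_strerror(&error));
        zip_error_fini(&error);
        return 1;
    }

    /* Directory entries: the name must not start with '/'. */
    if (zip_dir_add(archive, "subdir", ZIP_FL_ENC_UTF_8) < 0)
        fprintf(stderr, "zip_dir_add: %s\n", zip_strerror(archive));

    /* No fopen() needed: zip_source_file() takes a path, and libzip
       reads the file when the archive is written out by zip_close(). */
    zip_source_t *src = zip_source_file(archive, "subdir/data.txt", 0, -1);
    if (src == NULL ||
        zip_file_add(archive, "subdir/data.txt", src, ZIP_FL_ENC_UTF_8) < 0) {
        fprintf(stderr, "adding file: %s\n", zip_strerror(archive));
        zip_source_free(src); /* free only on failure; on success libzip owns it */
    }

    /* The files are actually read and compressed here, so errors about
       a missing or unreadable source file surface at zip_close(). */
    if (zip_close(archive) < 0) {
        fprintf(stderr, "zip_close: %s\n", zip_strerror(archive));
        return 1;
    }
    return 0;
}

As for the other question: zip_source_file() takes a filename, while zip_source_filep() takes an already-opened FILE * instead.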

"p4 interchanges" lists a changelist that has already been integrated

I'm running p4 interchanges -b my_branch, and I get a ton of results, the first one being a changelist that we integrated a long time ago.
So I try to integrate again, but p4 integrate -b my_branch //...#changelist,#changelist just returns "All revision(s) already integrated".
The only way to unblock this is to do a forced integration (-f in the integrate command) and then simply accept target (-at when resolving), and that works - p4 interchanges then no longer lists this changelist.
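For concreteness, the unblocking sequence looks something like this (the changelist number 12345 and the target path are hypothetical):

p4 integrate -f -b my_branch //depot/target/...@12345,@12345
p4 resolve -at
p4 submit -d "credit change 12345 as integrated"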
But how can Perforce get into this state to begin with? This happened after we'd done a bunch of integrating across multiple branches, but I can't think of anything that would cause a changelist to become "unintegrated" somehow.
This is on a 2014.1 server.
Thank you for specifying your server version.
When cherry-picking is involved, the 'p4 interchanges' command can report misleading results, with 'p4 integrate' then answering "All revision(s) already integrated".
There is a command line example here:
http://answers.perforce.com/articles/KB_Article/Cherry-Picking-Integrations
You could also be affected by a bug that was patched in 2014.1 listed here in the server release notes:
http://www.perforce.com/perforce/doc.current/user/relnotes.txt
Bugs fixed in 2014.1 PATCH5
#880506 (Bug #71725) **
The istat.mimic.ichanges configurable controls the reporting
of revisions between stream and parent. If set, istat will
not report cherry-picked revisions already present in the target.
The default behavior will report any changes not credited, even
when the content may already be in the target.
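If your server already has this patch, enabling the new behavior is a single command (it requires super access); a sketch:

p4 configure set istat.mimic.ichanges=1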
If you would like, you can pull the most recent build of the server P4D for your OS from our ftp site: http://ftp.perforce.com/perforce/r14.1/
REFERENCE
http://answers.perforce.com/articles/KB_Article/Integration-Changes-Reporting

sync files between two VOBs (via clearfsimport) without checking in the updated files

I am using the following command to sync files into VOB B from VOB A:
clearfsimport -master -follow -nsetevent -comment $2 /vobs/A/xxx/*.h /vobs/B/xxx/
It works fine, but it checks in all the changes automatically. Is there a way to do the same task but leave the updated files in a checked-out state?
I want to update the files in B from A, build my program, and then recover the branch. If the updated files were left checked out, I could do an uncheckout (unco) later. But with my command above, everything is checked in, so I can't recover my branch.
Thanks.
As VonC said, it's impossible to prevent clearfsimport from doing the check-in, and he suggested using a label to recover.
In my case, the branch where I ran clearfsimport was created from a label; let's call it LABEL_01. So I guess I can use that label for recovery. Is there an easy way (one command) to recover the files under /vobs/B/xxx/ to label LABEL_01? I want to do it in a bash script, so the shorter and simpler the command, the better.
Thanks.
After having a look at the man page for clearfsimport, no, it isn't possible to prevent the checkins.
I would set a label before the clearfsimport, and modify the config spec for the new version to be created in a branch (similar to this config spec).
That way, "re-cover" the initial branch would be easy: none of the new version would have been created in it.

svn clear projects with all revisions

Is there a command to delete a project from SVN with all its revisions (a total cleanup)?
Cheers
The answer is in the Subversion FAQ:
There are special cases where you might want to destroy all evidence of a file or commit. (Perhaps somebody accidentally committed a confidential document.) This isn't so easy, because Subversion is deliberately designed to never lose information. Revisions are immutable trees which build upon one another. Removing a revision from history would cause a domino effect, creating chaos in all subsequent revisions and possibly invalidating all working copies.
The project has plans, however, to someday implement an svnadmin obliterate command which would accomplish the task of permanently deleting information. (See issue 516.) In the meantime, your only recourse is to svnadmin dump your repository, then pipe the dumpfile through svndumpfilter (excluding the bad path) into an svnadmin load command. See chapter 5 of the Subversion book for details about this.
No, I don't believe there is.
If you really need to remove files completely from SVN history, I think the only way to do it would be to dump the repository, filter out the files you don't want with svndumpfilter, and then recreate the repository from the dump.
Why do you want to do this?
rm -rf on the repository usually works fine.
This is how to do it on Linux:

svnadmin dump /path/to/repos > proj.dump
svndumpfilter exclude somefolder < proj.dump > cleanproj.dump
service svn stop
# back up all custom configuration for this repository:
#   /path/to/repos/conf and /path/to/repos/hooks
rm -rf /path/to/repos
svnadmin create /path/to/repos
# restore /path/to/repos/conf and /path/to/repos/hooks
svnadmin load /path/to/repos < cleanproj.dump
service svn start

Done.
I'm assuming that you are talking about multiple projects under the same repository:
myrepo/
    project1/
    project2/
If you simply want a project to 'disappear' without screwing with the repository history, you can simply hide this path if you are using an authentication mechanism that utilizes authz. In other words, you are not using 'svn+ssh' to access the repository.
Let's say I already have a group in my authz called 'everyone'. Then in my authz I will set something like:

[/project1]
@everyone =

(Note the @ prefix: that is how authz refers to a group; a leading # would turn the line into a comment.)

CruiseControl.Net Deleted Files

I'm using CC.Net against a SourceSafe database, and I have a problem: someone deleted some files from the database, but the deleted files weren't removed from the build's working directory. I didn't see a config switch or anything that I could set for it to clear the code directory prior to building.
Am I missing something?
As Alex says, there is a CleanCopy flag in the source control block. However, my situation was a little different: I use Subversion, and I found the CleanCopy flag was NOT doing what it says on the box.
To solve the problem I added a task which runs a batch file that clears out the build's working copy prior to checkout, along the lines of the sketch below. It is a bit slower (about 1 min for a code base of 400 MB) but guarantees no old code.
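Something like this in ccnet.config (the path is hypothetical; if I recall the CC.NET lifecycle correctly, prebuild tasks run before the source is fetched):

<prebuild>
  <exec>
    <executable>cmd.exe</executable>
    <buildArgs>/c rmdir /s /q D:\builds\myproject\working</buildArgs>
  </exec>
</prebuild>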
Kindness,
Dan
All you need to do is set CleanCopy to true in your source control block. The documentation is very clear on this. The above answer is the wrong way.
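For reference, in ccnet.config that looks something like this (the other settings of the vss block are elided):

<sourcecontrol type="vss">
  <!-- project, username, etc. -->
  <cleanCopy>true</cleanCopy>
</sourcecontrol>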
