Not sure if this is possible or not, but I figured I'd ask to see if anyone knows. Is it possible to find a file containing a string in a Perforce repository? Specifically, is it possible to do so without syncing the entire repository to a local directory first? (It's quite large - I don't think I'd have room even if I deleted lots of stuff - that's what the archive servers are for anyhow.)
There's any number of tools that can search through files in a local directory (I personally use Agent Ransack, but it's just one of many), but these will not search a remote Perforce directory, unless there's some (preferably free) tool I'm not aware of that has this capability, or maybe some hidden feature within Perforce itself?
p4 grep is your friend. From the Perforce blog:
'p4 grep' allows users to use simple file searches as well as regular expressions to search through file contents of head as well as earlier revisions of files stored on the server. While not every single option of a standard grep is supported, the most important options are available. Here is the syntax of the command according to 'p4 help grep':
p4 grep [ -a -i -n -v -A after -B before -C context -l -L -t -s -F -G ] -e pattern file[revRange]...
See also the manual page.
Update: Note that there is a limitation on the number of files that Perforce will search in a single p4 grep command. Presumably this is to help keep the load on the server down. This manifests as an error:
Grep revision limit exceeded (over 10000).
If you have sufficient Perforce permissions, you can use p4 configure to increase the dm.grep.maxrevs setting from this default of 10K to something larger. For example, to set it to 1 million:
p4 configure set dm.grep.maxrevs=1M
If you do not have permission to change this, you can work around it by splitting the p4 grep up into multiple commands over the subdirectories. You may need to split further into sub-subdirectories, and so on, depending on your depot structure.
For example, this command can be used in a bash shell to search each subdirectory of //depot/trunk one at a time. It makes use of the p4 dirs command to obtain the list of subdirectories from the server.
for dir in $(p4 dirs //depot/trunk/*); do
    p4 grep -s -i -e the_search_string $dir/...
done
Actually, I solved this one myself: p4 grep indeed does the trick. Doc here. You have to narrow the search down carefully before it will work properly; on our server at least, you have to get it down to fewer than 10,000 files. I also had to redirect the output to a file instead of printing it in the console (by adding > output.txt), because the console has a limit of 4096 characters per line and the file paths are quite long.
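For reference, a narrowed-down search with redirected output might look like this (the depot path is just a placeholder; pick whatever subtree gets you under the file limit):
p4 grep -s -i -e the_search_string //depot/trunk/some_project/... > output.txt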
It's not something you can do with the standard Perforce tools. One helpful command might be p4 print, but I don't think it would really be faster than syncing.
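That said, if p4 grep isn't available on your server, you can approximate a server-side search by streaming file contents with p4 print rather than syncing. A rough sketch, with a placeholder depot path (it is slow, since it runs one p4 print per file):
p4 files -e //depot/some/path/... | sed 's/#.*//' | while read -r f; do
    # print each head revision to stdout and test it for the string
    if p4 print -q "$f" | grep -q 'the_search_string'; then
        echo "$f"
    fi
done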
This is a big if, but if you have access to the server, you can run Agent Ransack on the Perforce directory. Perforce stores all versioned files on disk; it's only the metadata that lives in a database.
Related
I want to design a shell script that will make the resolve decision based on the diff chunks shown by 'p4 resolve'.
The current state of my script is something like this:
p4 integ -b #=CL #This will open the files in my default CL.
p4 resolve #It will now resolve the files opened in my default CL one by one
Now my problem is that with p4 resolve, I cannot redirect its output to a file or read the diff chunks, because it shows the diff chunks for one file and then waits for user input on the resolve decision. I want the script to make the very obvious resolve decisions automatically.
So, is there any way to get the diff chunks for the files in my default CL, so that I can read them and make these obvious resolve decisions?
I wanted to do something similar: output the actual conflicts in a Jenkins job so that the information could be sent to the relevant development engineer.
I noticed the handy p4-dump-conflicts script by jwd; based on that, perhaps the following is enough:
echo -e "d\ns\n" | P4EDITOR=cat p4 resolve
(i.e. get p4 resolve to (d)iff and then (s)kip, and use cat to output the results. This works for all 'pending' files in the workspace.)
I recently had a similar need, and hacked together a solution with a shell script.
The problem, for me, was that p4 resolve wants to be interactive, but I just wanted to see the conflicts in a file.
I didn't want to run p4 resolve -f since that would mark the file as resolved in Perforce.
I just wanted to view the conflicts, maybe email them to someone else, without really doing the resolve. You can do this interactively with the (e)dit and (s)kip commands of p4 resolve, but I wanted a hands-off version.
Save this script as p4-dump-conflicts:
#!/usr/bin/env bash
function cat_to_file() {
    # write whatever file p4 hands the "editor" out to $OUTFILE
    cat "$@" > "$OUTFILE"
}
export -f cat_to_file
destFileName=$(dirname "$1")/CONFLICTS_$(basename "$1")
OUTFILE="$destFileName" P4EDITOR=cat_to_file p4 resolve "$1" <<END_INPUT
e
s
END_INPUT
If you run p4-dump-conflicts myfile, it will write out CONFLICTS_myfile, containing the conflict markers. It will leave the file un-resolved in Perforce, though.
Note: Only works for one file, as-is.
For your question in particular, you could process the CONFLICTS_xxx file however you want, then use the results of that for the final merge resolution.
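For instance, assuming the standard Perforce conflict markers (>>>> ORIGINAL, ==== THEIRS, ==== YOURS, <<<<), you could extract just the conflict blocks with something like:
sed -n '/^>>>> ORIGINAL/,/^<<<</p' CONFLICTS_myfile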
Yes, you can use p4 resolve -n to print the same output as p4 resolve, without prompting for input or taking any action. From the p4 resolve docs:
-n List the files that need resolving without actually performing the resolve.
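In a script you could combine this with the auto-accept mode, letting Perforce merge the safe files first and then listing whatever still needs a human decision:
p4 resolve -am    # auto-accept merged results that have no conflicts
p4 resolve -n     # list the files that still need manual resolving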
I need to find out if a file/folder is under a specific source control system.
The easiest way of doing this is to look for certain hidden folders. (This does not guarantee that a particular file is under source control, but with some probability it indicates that this source control system was used.)
It's quite straightforward with SVN and Git, as they have hidden folders.
But I cannot find the same thing for Perforce and ClearCase. Is there a universal way to tell which VCS is used in these particular cases?
Perforce does not litter the drive, but keeps the info on the server. Also, files can be mapped in different structures, and mixed with non-controlled files, so it's not something you can determine by looking at the file itself.
However you can simply ask Perforce. For example, at the CLI:
p4 fstat FILENAME
will give you info about a file if it is under source control.
If you need to script it for Perforce, there is an option (-s) that makes things easier (since the exit code of p4 doesn't indicate success or failure of the Perforce command). So, for bourne-like shells something like this should work:
if p4 -s fstat FILENAME | grep 'exit: 0' >/dev/null 2>&1 ; then
    echo "Perforce knows this file"
else
    echo "Perforce don't care"
fi
For ClearCase, you will find a hidden file named view.dat at the root directory of a (snapshot) view.
If the file is under M:\ (Windows) or /view/vobs (Unix), there is no need to look for a hidden file or directory: you know it is a dynamic view.
Another way is to execute, in the parent directory of a file:
cleartool lsview -cview
If that directory is in a view, that command will return its name.
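Putting those two hints together, a rough detection sketch (the directory layout and exact behavior are assumptions; adjust for your site):
# run from the directory you want to test
if [ -e view.dat ]; then
    echo "looks like the root of a ClearCase snapshot view"
elif cleartool lsview -cview >/dev/null 2>&1; then
    echo "inside a ClearCase view"
else
    echo "not in a ClearCase view"
fi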
Similarly, you can run a command like p4 reconcile or p4 status, and if it doesn't return an error, chances are you are in a Perforce workspace.
I am trying to rsync directory A of server1 with directory B of server2.
Sitting in the directory A of server1, I ran the following commands.
rsync -av * server2::sharename/B
but the interesting thing is that it synchronizes all files and directories except .htaccess and the other hidden files directly in directory A. Hidden files within subdirectories do get synchronized.
I also tried the following command:
rsync -av --include=".htaccess" * server2::sharename/B
but the results are the same.
Any ideas why the hidden files in directory A are not getting synchronized, and how to fix it? I am running as the root user.
Thanks
This is due to the fact that * is by default expanded to all files in the current working directory except the files whose name starts with a dot. Thus, rsync never receives these files as arguments.
You can pass . (denoting the current working directory) to rsync:
rsync -av . server2::sharename/B
This way rsync will look for files to transfer in the current working directory as opposed to looking for them in what * expands to.
Alternatively, you can use the following command to make * expand to all files including those which start with a dot:
shopt -s dotglob
See also the shopt man page.
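With dotglob set, the original command picks up the hidden files too:
shopt -s dotglob                    # make * match dotfiles as well
rsync -av * server2::sharename/B
shopt -u dotglob                    # restore the default globbing behaviour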
For anyone who's just trying to sync directories between servers (including all hidden files) -- e.g., syncing somedirA on source-server to somedirB on a destination server -- try this:
rsync -avz -e ssh --progress user@source-server:/somedirA/ somedirB/
Note the slashes at the end of both paths. Any other syntax may lead to unexpected results!
Also, for me it's easiest to perform rsync commands from the destination server, because it's easier to make sure I've got proper write access (i.e., I might need to add sudo to the command above).
Probably goes without saying, but obviously your remote user also needs read access to somedirA on your source server. :)
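To spell out the slash semantics (user and host are placeholders here): without a trailing slash rsync copies the directory itself, with one it copies the directory's contents:
rsync -av somedirA  user@source-server:/dest/    # creates /dest/somedirA/...
rsync -av somedirA/ user@source-server:/dest/    # copies the contents of somedirA into /dest/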
I had the same issue.
For me, the hidden files did not get rsync'ed when I used the following command:
rsync -av /home/user1 server02:/home/user1
But when I added the slashes at the end of the paths, the hidden files were rsync'ed.
rsync -av /home/user1/ server02:/home/user1/
Note the slashes at the end of the paths; as Brian Lacy said, the slashes are the key. I don't have the reputation to comment on his post or I would have done that.
I think the problem is due to shell wildcard expansion. Use . instead of star.
Consider the following example directory content
$ ls -a .
. .. .htaccess a.html z.js
The shell's wildcard expansion translates the argument list that the rsync program gets from
-av * server2::sharename/B
into
-av a.html z.js server2::sharename/B
before the command starts getting executed.
The * tells rsync not to sync the hidden files. You should not omit it.
On a related note, in case anyone comes in from Google etc. trying to find out why rsync is not copying hidden subfolders, I found one additional reason why this can happen and figured I'd pay it forward for the next person running into the same thing: it happens if you are using the -C option (obviously --exclude would do it too, but I figure that one's a bit easier to spot).
In my case, I had a script that was copying several folders across computers, including a directory with several git projects, and I noticed that I couldn't run any of the normal git commands in the copied repos (yes, normally one should use git clone, but this was part of a larger backup that included other things). After looking at the script, I found that it was calling rsync with 7 or 8 options.
After googling didn't turn up any obvious answers, I started going through the switches one by one. After dropping the -C option, it worked correctly. In the case of the script, the -C flag appears to have been added by mistake, likely because sftp was originally used and -C is a compression-related option under that tool.
Per man rsync, the option is described as:
--cvs-exclude, -C auto-ignore files in the same way CVS does
Since CVS is an older version control system, and given the man page description, it makes perfect sense that it would behave this way.
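As a quick illustration (recent rsync versions include .git/ and the other common VCS directories in that built-in ignore list, so this is easy to reproduce):
rsync -avC project/ backup/    # backup/ ends up without its .git directory
rsync -av  project/ backup/    # hidden directories copy normally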
I have 10k perforce files mentioned in my file.txt.
I need to open them using p4 edit command.
I expect some command like "p4 edit ?????file.txt". Can you help me check these files out?
You can use the -x flag on p4. This is assuming a UNIX shell.
cat file.txt | p4 -x - edit
I assume you have some copy of the directory structure where you have changes, and now you need to add those files to a changelist, which is impossible to do without checking them out. Am I right?
If I needed to change that many files, I would do it like this:
Copy all the files I wanted to check in over the read-only files (Windows Explorer can do that).
In P4V, go to the directory you need to check out files in, and then call "Reconcile Offline Work".
In the dialog that appears, choose all files.
Get a new changelist with the changed files checked out.
I have used this solution a couple of times; it works for added, changed and deleted files.
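On the command line, p4 reconcile (available in newer servers) does the same job as P4V's "Reconcile Offline Work"; the depot path here is a placeholder:
p4 reconcile -n //depot/project/...   # preview the adds/edits/deletes
p4 reconcile //depot/project/...      # actually open the files in a changelist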
Just use the command below to edit all the files listed in file.txt:
p4 -x file.txt edit
I know this is a strange question, but is there a p4 command that is the reverse of 'sync'? That is, I'd like whatever files are in my local workspace directory to be pushed to the depot.
I know your first thought is probably "but WHY?", and the answer is, it's complicated.
Reconciling offline work could help you. It picks up files that are added, changed, or deleted. It's a bit trickier with renamed files.
Out of curiosity, what exactly is "it's complicated"?
Both 'submit' and 'shelve' can send file content from your local workspace to the depot.
In either case, you have to use 'add' or 'edit' first to mark the files to be sent to the depot. (How you do this depends on the client tool you're using.)
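As a minimal sketch with the command-line client (file names are placeholders):
p4 edit src/main.c                   # open an existing file for edit
p4 add src/newfile.c                 # open a new file for add
p4 submit -d "push local changes"    # send the content to the depot
# or, to park the content on the server without submitting it:
p4 shelve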
Here's a script, derived from the O'Reilly Perforce book.
You probably want to start with
p4 sync -k ...
which makes Perforce think it has synchronized to the current head, but in fact makes no changes to the filesystem. The diffs below will then behave (to Perforce) like changes against the current head.
# a reverse 'synchronize' (sync what's on disk with what's in the depot)
# see "Practical Perforce" (O'Reilly), page 46
#
# changed files: open files that differ from the depot for edit
p4 diff -se | p4 -x - edit
#
# deleted files: open files missing from the workspace for delete
p4 diff -sd | p4 -x - delete
#
# added files: offer everything on disk for add (files already in the
# depot are rejected with a warning)
find . -type f -o -type l | p4 -x - add -f
I have not tested the above in isolation (it came from a longer script that has been widely used), but I believe it should work.