How to test whether file at path exists in repo? - perforce

Given a path (on my computer), how can I test whether that file is under version control (ie. a copy exists in the Perforce depot)? I'd like to test this at the command line.

Check p4 help files. In short, you run p4 files <your path here> and it will give you the depot path to that file. If it isn't in the depot, you'll get "no such file(s)".
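For example, the output looks roughly like this (the depot paths and change numbers are hypothetical):
$ p4 files //depot/main/foo.c
//depot/main/foo.c#3 - edit change 12345 (text)
$ p4 files //depot/main/does-not-exist.c
//depot/main/does-not-exist.c - no such file(s).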

For scripting, p4 files FILE is insufficient because it doesn't change its exit code when there is no such file.
Instead, you can pipe it through grep, which looks for the telltale pair of leading slashes in Perforce depot paths:
# silently true when the file exists or false when it does not.
p4_exists() {
  p4 files -e "$1" 2>/dev/null | grep -q '^//'
}
You can get rid of the 2>/dev/null and grep's -q if you want visible output.
Versions of p4 files older than 2012.1 (e.g. 2011.1) don't support -e. With those, drop -e and instead add |grep -v ' - delete [^-]*$' before the grep above to filter out deleted revisions.
Warning: A future p4 release could change the formatting and break this logic.
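For example, a calling script could use the function like this (the depot path is hypothetical):
if p4_exists //depot/main/foo.c; then
  echo "tracked"
else
  echo "not in the depot"
fi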

Similar to Adam Katz's solution, but more likely to keep working with future releases of p4: you can pass the global option -s, which prepends a descriptive field to each output line. The field is one of 'text', 'info', 'error' or 'exit', followed by a colon (and a space, it seems), and is intended to make scripting easier.
For all files passed in to the p4 -s files command, you should get one line back per file. If the file exists in the depot, the line starts with info: whereas if the file does not exist in the depot, the line starts with error:. For example:
info: //depot/<depot-path-to-file>
error: <local-path-to-file>
So, essentially, the status lines are the equivalent of an exit code but on a per-file basis. An exit code alone wouldn't cope neatly with an arbitrary number of files passed in.
Note that if there is a different error (e.g. a connection error) then an error line is still output. So, if you really want the processing to be robust, you may want to combine this with what Adam Katz suggested or perhaps grep for the basename of the file in the output line.
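For scripting, that suggests roughly the same kind of helper as in the earlier answer; a minimal sketch using -s (add -e to the files arguments if deleted revisions should also count as missing):
p4_exists() {
  p4 -s files "$1" 2>/dev/null | grep -q '^info: //'
}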

Related

Perforce: get ONLY the local path of a depot file

I need to get the local path of a depot file, for example:
input: //MyDepot/MyBranch/my file.txt
output (mac): /Volumes/p4/MyDepot/MyBranch/my file.txt
or:
output (windows): F:\p4\MyDepot\MyBranch\my file.txt
Both 'p4 have' and 'p4 where' produce parse-unfriendly output, including mapping info I don't need. Is there a p4 command for that simple task?
On a non-Windows OS, run:
p4 -ztag -F %path% where <depotFilePath>
This will give you a one-liner containing only the local path.
On Windows, the shell expands %path% to the PATH environment variable, so this form only works on non-Windows systems (a comment below shows how to escape the percent signs on Windows).
-F is a client-side option, so if your p4 binary doesn't have it, you may need to update the binary. If you don't have it and can't update, you can also just run p4 -ztag where <depotFilePath>.
That returns easy-to-parse tagged output; the path field holds the local path to the file.
The output will look like:
... depotFile //depot/a
... clientFile //gabe/a
... path /Users/gabe/tmp/a
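If you do end up parsing the plain -ztag output yourself, a small sed sketch along those lines (it assumes the local path fits on a single line):
p4 -ztag where "//MyDepot/MyBranch/my file.txt" | sed -n 's/^\.\.\. path //p'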
Very close, Samwise. I would have made this a comment on the main answer, but not enough rep yet, sorry.
To prevent Windows from expanding the environment variable, both percent signs need to be caret-escaped, and that part of the expression must not be in quotes.
example: p4 -ztag -F ^%change^%,^%time^%,^%path^% changes -m 3
example from 2020/4/2: https://community.perforce.com/s/article/15148

Run additional command when rsync detects a file

I am currently running the following script to make an automatic backup of my Music:
#!/bin/bash
while :; do
  rsync -ruv /mnt/hdd1/Music/ /mnt/hdd2/Music/
done
Whenever a new file is added to my music folder, it is detected by rsync and it is copied to my other disk. This script runs fine, but I would also like to convert the detected file to an ogg opus file for putting on my phone.
My question is: How do I run a command on a new file found by rsync -u?
I will also accept answers which work totally differently, but have the same result.
rsync -ruv /mnt/hdd1/Music /mnt/hdd2/ | sed -n 's|^Music/||p' > ~/filelist.tmp
while IFS= read -r filename
do
  # the listed paths are relative to the Music directory
  [ -f "$filename" ] || continue
  # do something with the file
  echo "Now processing '$filename'"
done < ~/filelist.tmp
With the -v option, rsync prints the names of files it copies to stdout. I use sed to capture just those filenames, excluding the informational messages, to a file. The filenames in that file can be processed later as you like.
The sed approach above depends on rsync printing filenames that start with the final component of the source directory, e.g. "Music/" in my example above, which is then stripped on the assumption that you don't need it. Alternatively, one could take an explicit approach to excluding the informational messages.
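Since the stated goal is to convert each newly detected file to Ogg Opus for a phone, here is a rough follow-up sketch for processing that list (it assumes ffmpeg with libopus is installed; the /mnt/hdd2/ForPhone output directory and the bitrate are made-up examples):
while IFS= read -r filename
do
  src="/mnt/hdd1/Music/$filename"
  [ -f "$src" ] || continue
  out="/mnt/hdd2/ForPhone/${filename%.*}.opus"
  mkdir -p "$(dirname "$out")"
  # -nostdin keeps ffmpeg from consuming the rest of the file list;
  # -n skips files that have already been converted
  ffmpeg -nostdin -n -i "$src" -c:a libopus -b:a 128k "$out"
done < ~/filelist.tmp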

Find the diff chunks without p4 resolve

I want to design a shell script that will take the resolve decision based on the diff chunks shown by 'p4 resolve'.
The current state of my script is something like:
p4 integ -b #=CL #This will open the files in my default CL.
p4 resolve #It will now resolve the files opened in my default CL one by one
Now my problem is that with p4 resolve I cannot redirect its output to a file or read the diff chunks, because it shows the diff chunks for one file and then waits for user input for the resolve decision. I want the script to take the very obvious resolve decisions.
So, is there any way to get the diff chunks for the files in my default CL, so that I can read them and make those obvious resolve decisions?
I wanted to do something similar; output the actual conflicts in a Jenkins job so that the information could be sent to the relevant development engineer.
I noticed the handy p4-dump-conflicts script by jwd; based on that, perhaps the following is enough:
echo -e "d\ns\n" | P4EDITOR=cat p4 resolve
(i.e. get p4 resolve to (d)iff and then (s)kip and use cat to output the results. This works for all 'pending' files in the workspace)
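Since the original complaint was that the output could not be redirected, note that this pipeline can simply be sent to a file, for example (the output path is arbitrary):
echo -e "d\ns\n" | P4EDITOR=cat p4 resolve > /tmp/resolve-diffs.txt 2>&1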
I recently had a similar need, and hacked together a solution with a shell script.
The problem, for me, was that p4 resolve wants to be interactive, but I just wanted to see the conflicts in a file.
I didn't want to run p4 resolve -f since that would mark the file as resolved in Perforce.
I just wanted to view the conflicts, maybe email them to someone else, without really doing the resolve. You can do this interactively with the (e)dit and (s)kip commands of p4 resolve, but I wanted a hands-off version.
Save this script as p4-dump-conflicts:
#!/usr/bin/env bash

function cat_to_file() {
  cat "$@" > "$OUTFILE"
}
export -f cat_to_file

destFileName=$(dirname "$1")/CONFLICTS_$(basename "$1")

OUTFILE="$destFileName" P4EDITOR=cat_to_file p4 resolve "$1" <<END_INPUT
e
s
END_INPUT
If you run p4-dump-conflicts myfile, it will write out CONFLICTS_myfile, containing the conflict markers. It will leave the file un-resolved in Perforce, though.
Note: Only works for one file, as-is.
For your question in particular, you could process the CONFLICTS_xxx file however you want, then use the results of that for the final merge resolution.
Yes, you can use p4 resolve -n to print the same output as p4 resolve, without prompting for input or taking any action. From the p4 resolve docs:
-n List the files that need resolving without actually performing the resolve.
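As a sketch of how the two answers could be combined, the listing from resolve -n can drive the p4-dump-conflicts script above, one file at a time (this assumes the script is on your PATH and that your p4 client supports the -ztag and -F options):
p4 -ztag -F %clientFile% resolve -n | while IFS= read -r f
do
  p4-dump-conflicts "$f"
done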

How to detect if folder is under Perforce/ source control

I need to find out whether a file/folder is under a specific source control system.
The easiest way of doing this is to look for the tool's hidden folders. (This does not guarantee that a particular file is under source control, but it says with some probability that this source control system was used.)
It's quite straightforward with SVN and Git, as they have hidden folders.
But I cannot find the same thing for Perforce and ClearCase. Is there any universal way to work out which VCS is used in those particular cases?
Perforce does not litter the drive, but keeps the info on the server. Also, files can be mapped in different structures, and mixed with non-controlled files, so it's not something you can determine by looking at the file itself.
However, you can simply ask Perforce. For example, at the CLI:
p4 fstat FILENAME
This will give you info about the file if it is under source control.
If you need to script it for Perforce, there is an option (-s) that makes things easier (since the exit code of p4 doesn't indicate success or failure of the Perforce command). So, for bourne-like shells something like this should work:
if p4 -s fstat FILENAME | grep 'exit: 0' >/dev/null 2>&1 ; then
  echo "Perforce knows this file"
else
  echo "Perforce doesn't care"
fi
For ClearCase, you will find a hidden file named view.dat at the root directory of a (snapshot) view.
If the file is under M:\ (Windows) or /view/vobs (Unix), there is no need to look for a hidden file or directory: you know it is a dynamic view.
Another way is to execute, in the parent directory of a file:
cleartool lsview -cview
If that directory is in a view, that command will return its name.
Similarly, if you run a command like p4 reconcile or p4 status and it doesn't return an error, chances are you are in a Perforce workspace.

Find a string in Perforce file without syncing

Not sure if this is possible or not, but I figured I'd ask to see if anyone knows. Is it possible to find a file containing a string in a Perforce repository? Specifically, is it possible to do so without syncing the entire repository to a local directory first? (It's quite large - I don't think I'd have room even if I deleted lots of stuff - that's what the archive servers are for anyhow.)
There's any number of tools that can search through files in a local directory (I personally use Agent Ransack, but it's just one of many), but these will not search a remote Perforce directory, unless there's some (preferably free) tool I'm not aware of that has this capability, or maybe some hidden feature within Perforce itself?
p4 grep is your friend. From the Perforce blog:
'p4 grep' allows users to use simple file searches as well as regular expressions to search through file contents of head as well as earlier revisions of files stored on the server. While not every single option of a standard grep is supported, the most important options are available. Here is the syntax of the command according to 'p4 help grep':
p4 grep [ -a -i -n -v -A after -B before -C context -l -L -t -s -F -G ] -e pattern file[revRange]...
See also, the manual page.
Update: Note that there is a limitation on the number of files that Perforce will search in a single p4 grep command. Presumably this is to help keep the load on the server down. This manifests as an error:
Grep revision limit exceeded (over 10000).
If you have sufficient Perforce permissions, you can use p4 configure to increase the dm.grep.maxrevs setting from this default of 10K to something larger, e.g. to set it to 1 million:
p4 configure set dm.grep.maxrevs=1M
If you do not have permission to change this, you can work around it by splitting the p4 grep up into multiple commands over the subdirectories. You may need to split further into sub-subdirectories and so on, depending on your depot structure.
For example, this command can be used in a bash shell to search each subdirectory of //depot/trunk one at a time. It makes use of the p4 dirs command to obtain the list of subdirectories from the server.
for dir in $(p4 dirs //depot/trunk/*); do
  p4 grep -s -i -e the_search_string $dir/...
done
Actually, I solved this one myself: p4 grep indeed does the trick (see the documentation). You have to narrow the search down carefully before it'll work properly - on our server at least you have to get it down to fewer than 10000 files. I also had to redirect the output to a file instead of printing it in the console, by adding > output.txt, because there's a limit of 4096 characters per line in the console and the file paths are quite long.
It's not something you can do with the standard Perforce tools. One helpful command might be p4 print, but I wouldn't think it's really faster than syncing.
This is a big 'if', but if you have access to the server you can run Agent Ransack on the Perforce depot directory on the server. Perforce stores all versioned files on disk; only the metadata is in a database.
