P4 Server 2010.2/347035
I am trying to list jobs that were created by me in p4. I was hoping the following command
would work
p4 -u xxx jobs -m 50
but it does not. The above command does not list jobs for user xxx; instead it just shows me all jobs.
Questions
How can I list just my jobs (or a particular user's jobs)?
Is there any way to pretty-print the output generated by p4? Most p4 commands produce tabular data, and I would like it formatted as a proper table.
You want something like 'p4 jobs -e ReportedBy=CalmStorm' or maybe 'p4 jobs -e OwnedBy=CalmStorm'.
Since the job spec is customizable, the exact syntax will depend on what fields you have defined for your jobspec.
Have a look at http://www.perforce.com/perforce/doc.current/manuals/cmdref/jobs.html#1040665 for more information about the jobs query language.
Regarding pretty-printing, there isn't anything built into the command line itself. Many users use a scripting tool such as Perl, Python, or Ruby for these purposes; there are Perforce packages for all the major scripting languages so you can pick your favorite one. Or use P4V -- it's got a nice GUI for searching and viewing jobs.
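If a quick-and-dirty table from the shell is enough, one option is to parse the tagged output yourself. A minimal sketch, assuming a jobspec field named ReportedBy and the user xxx from the question:
p4 -ztag jobs -e "ReportedBy=xxx" -m 50 |
awk '$2 == "Job" { job = $3 } $2 == "Status" { printf "%-15s %s\n", job, $3 }'
The -ztag global flag makes p4 print each field on its own "... Field value" line, which is much easier to pick apart than the default one-line-per-job format.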
The command
p4 jobs -r -m 100 -e xxx
kinda achieves this, but it lists jobs assigned to me, not just the ones created by me.
You could grep for your name in the jobs list:
p4 jobs | grep "$P4USER"
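To reduce false matches against job descriptions, you can anchor the pattern to the owner column; this assumes the default jobs output format of "job000123 on 2012/01/01 by user *status*":
p4 jobs | grep "by $P4USER "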
Related
I'm trying to write a script that automatically runs a data analysis program. The data analysis takes a file, analyzes it, and puts all the outputs into a folder. The program can be run on two terminals simultaneously (each analyzing a different subject file).
I wrote a script that can do all the inputs automatically. However, I can only get one instance of it to run safely: if I run the script in two terminals simultaneously, it will analyze the same subject twice (useless).
Currently, my script looks like:
for name in `ls [file_directory]`
do
    [Data analysis commands]
done
If you run this on two terminals, it will start from the top of the directory containing all the data files. This is a problem, so I tried to do checks for duplicates but they weren't very effective.
I tried a name comparison with the if command. It didn't work because all the output files except one had unique names, so it would check the first output folder at the top of the directory and say the name was different, even though an output folder further down had the same name. It looked something like:
for name in `ls <file_directory>`
do
    for output in `ls <output directory>`
    do
        if [ "$name" == "$output" ]
        then
            echo "This file has already been analyzed."
        else
            <Data analysis commands>
        fi
    done
done
I thought this was the right method, but apparently not. I would need to check all the names before any decision is made (rather than one by one, which is what this does).
Then I tried moving completed data files with the mv command. That didn't work because "name" in the for statement had already captured all the file names, so the loop went down the list regardless of what was actually in the folder. I remember reading that the shell expands that list up front rather than re-reading it in "real time", so it makes sense that this didn't work.
My thought was to find some modification to that if statement so that all the name checks happen before any decision is made (how?).
Also, are there any other commands I might be missing that I could try?
One pattern I use often is the split command.
ls <file_directory> > file_list
split -d -l 10 file_list file_list_part
This will create files like file_list_part00 to file_list_partnn
You can then feed these file names to your script.
for file_part in `ls file_list_part*`
do
    for file_name in `cat "$file_part" | tr '\n' ' '`
    do
        data_analysis_command "$file_name"
    done
done
Never use "ls" in a "for" (http://mywiki.wooledge.org/ParsingLs)
I think you should use a fifo (see mkfifo)
As a follow-on from the comments, you can install GNU Parallel with homebrew:
brew install parallel
Then your command becomes:
parallel analyse ::: *.dat
and it will process all your files in parallel using as many CPU cores as you have in your Mac. You can also add in:
parallel --dry-run analyse ::: *.dat
to get it to show you the commands it would run without actually running anything.
You can also add in --eta (Estimated Time of Arrival) for an estimate of when the jobs will be done, and -j 8 if you want to run, say, 8 jobs at a time. Of course, if you specifically want the 2 jobs at a time you asked for, use -j 2.
You can also have GNU Parallel simply distribute jobs and data to any other machines you may have available via ssh access.
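A rough sketch of that, with hypothetical hosts box1 and box2 and assuming analyse writes its result to file.out next to each file.dat:
parallel -S box1,box2 --trc {.}.out analyse ::: *.dat
Here -S lists the ssh logins to use, and --trc transfers each input file to the remote machine, returns the named output file, and cleans up the remote copies afterwards.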
Dear all,
I am using the Perforce Windows x64 GUI 2014.1888424. Per my test, the answer is NO. It will use the assigned revision number.
I found that the "integrate" and "branch" commands also don't keep the revision info.
Are these commands designed to behave like this?
PS: I know that rename/move will keep the revision info.
Would you please give me any hint? Thank you very much!
You didn't describe precisely what test you did, so I'm speculating slightly, but: the Perforce integrate, copy, and merge commands do preserve the overall revision history of the file.
However, many of the commands which display file history do not display the full history by default; you have to pass additional flags to instruct the server to output the history across integrations.
For example, compare the output of
p4 filelog -i
versus
p4 filelog
or
p4 changes -i
versus
p4 changes
for your experiments.
The -i flag tells the server that you want to see the history of the file from prior to the integration or copy from its previous location. Here's how it's described in 'p4 help filelog':
The -i flag includes inherited file history. If a file was created by
branching (using 'p4 integrate'), filelog lists the revisions of the
file's ancestors up to the branch points that led to the specified
revision. File history inherited by renaming (using 'p4 move') is
always displayed regardless of whether -i is specified.
Note that, as you've observed, renaming is indeed handled differently than integ/copy/merge.
As for myself, when studying the history of a file I nearly always use P4V and its powerful Revision Graph and Time Lapse View. These tools know how to navigate all sorts of complex integration history and make studying a file's history much easier.
When submitting changelists in Perforce I need to allocate a job. The jobs which I am supposed to associate with my changelist are not allocated to me and do not show up in the list of available jobs when I invoke "p4 submit". I know the job number which I am going to use, but I can't find a way to specify it. Basically, I want to do something like:
p4 submit -j
But there is no -j option...
You can create a numbered pending changelist, and attach jobs (p4 fix) to that.
I don't think there's a one shot way of submitting the default changelist with an arbitrary job.
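A minimal sketch of that two-step workflow (changelist and job numbers are placeholders):
p4 change                   # create a numbered pending changelist, e.g. 1234, containing your open files
p4 fix -c 1234 job000123    # attach the known job to that changelist
p4 submit -c 1234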
In your change spec (when you do p4 submit or p4 change), simply add a Jobs: section with a newline-separated list of jobs you wish to fix, as in the example below.
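For example, the relevant part of the change form might look like this (job names are placeholders):
Jobs:
	job000123
	job000456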
If you edit your user spec (p4 user) and add a JobView: field, matching jobs will appear in the Jobs: section automatically.
See http://kb.perforce.com/?article=052
Problem: 2 projects shared trunk and were updating some of the same files. Now one project needs to be released, so a new branch was created from a checkpoint before the projects started.
I have a list of just my changelist numbers from the mainline. Using that, I can generate a list of changed files and diff output using a script with a series of 'p4 describe #' commands.
Can I reformat that output and apply it to the new branch somehow?
Response to the title: "Is it possible to create a patch using a set of changelists?"
Yes.
p4 diff2 -u //path_to_your_sources/...@cln_minus_1 //path_to_your_sources/...@cln > /tmp/cln.patch
You can then use /tmp/cln.patch as input to the patch utility. Here, 'cln' is the submitted changelist number that you want to create a patch for (note the @ revision syntax, since cln is a changelist number rather than a per-file revision).
I've just spent two hours struggling with this. Using cygwin patch, I had to munge the paths until they were recognised.
In the end, the magic incantation looked like this (broken across lines):
p4 diff2 -u //depot/foo/main/...@100003 //depot/foo/main/...@100000 |
sed 's#//depot/#E:/Source/#g' |
sed '/^+++\|---/s#/#\\#g' |
patch
That is:
Use p4 diff2 to get a unified diff (-u) of part of the depot between the two revisions that I care about. The second changelist is the one before the first one I want, otherwise it's not included in the diff.
Use sed to change the //depot/ to E:/Source/, which is where my workspace lives.
Change forward slashes to double backslashes (this seems to make it work).
Pipe the results through patch.
Cygwin patch is smart enough to check files out of Perforce, but I'm not sure how to get it to do it silently. It prompts with Get file 'e:\Source\foo\whatever' from Perforce with lock?.
This is with p4 version 2010.1, a fairly recent installation of Cygwin, running on PowerShell.
Oh, and after this, patch wrote out Unix-style line endings, so I used u2d to fix those up.
Perforce will let you cherry-pick changelists for integration, which may be easier than trying to generate and apply a patch. Perforce will keep track of what revisions you've integrated where, which may make future integrations easier.
Let's assume you used to have one trunk:
//depot/mycode/trunk
And you checked in all of your changes there. You branched trunk at some point in the past to:
//depot/mycode/rel
And you have a list of changelists on trunk to merge. From a client spec that maps rel, integrate each changelist:
p4 integrate //depot/mycode/trunk/...@1234,1234 //depot/mycode/rel/...
where 1234 is the changelist number. Resolve after each integration. You may also wish to build, test, and commit your integrations at various checkpoints during your integration, if you can identify good points to do so. (Perforce can handle multiple integrations per commit, but if you make a mistake you'll need to revert to the last version checked in and redo the intermediate integrations and resolves.)
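A rough sketch of one pass through that loop (paths and numbers are placeholders; run a plain 'p4 resolve' interactively for anything the auto-merge cannot handle):
p4 integrate //depot/mycode/trunk/...@1234,1234 //depot/mycode/rel/...
p4 resolve -am
p4 submit -d "Cherry-pick change 1234 from trunk"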
p4 describe -S -du <CL number>
is the shortest and most concise command, in my opinion.
How can I figure out the state of the files in my client? I want to know if a file needs to be updated, or has been patched or modified, etc. In CVS, I used to simply run "cvs -n -q update . > file" and later look for the M, U, P, C attributes to get the current status of each file.
In Perforce, "p4 sync -n" doesn't give output like "cvs -n -q update". How can I see the current status of files in Perforce?
To my knowledge, there isn't a command that will give you exactly what you want. Looking at what the CVS update command does, there is no single equivalent in Perforce. I think the closest you will come is to use the 'p4 fstat' command and parse its output to get the information you need.
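As a rough sketch of that approach (the depot path is a placeholder), you can compare haveRev against headRev in fstat's tagged output to spot files that need syncing:
p4 fstat -T "depotFile,headRev,haveRev" //depot/project/... |
awk '$2 == "depotFile" { file = $3 }
     $2 == "headRev"   { head = $3 }
     $2 == "haveRev"   { have = $3 }
     /^$/              { if (have != head) print "needs update:", file; head = have = "" }'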
You might find this page helpful.
I also found this link to a p4wrapper that claims to wrap some CVS commands (including update) into a script. There might be others like this one around as well.
I also wanted to comment that the answer to this question is like that of many "how do I do..." questions with Perforce: it usually comes down to writing a script that processes the output of Perforce commands to get the results you need. Their philosophy is to provide bare-bones commands and have developers build on the basic functionality. Love it or hate it, that's the basic model. Many good scripts can be found in the Perforce Public Depot here.
Not sure if this is what you're looking for, but the p4 diff command has a few useful options. From the usage:
-sa Opened files that are different from the revision
in the depot, or missing.
-sb Opened for integrate files that have been resolved
but have been modified after being resolved.
-sd Unopened files that are missing on the client.
-se Unopened files that are different from the revision
in the depot.
-sl Every unopened file, along with the status of
'same', 'diff', or 'missing' as compared to its
revision in the depot.
-sr Opened files that are the same as the revision in the
depot.
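For example (the depot path is a placeholder):
p4 diff -se //depot/project/...    # unopened files that differ from the depot, roughly CVS's 'M'
p4 diff -sd //depot/project/...    # unopened files missing on the client
Combined with 'p4 sync -n' to preview what an update would fetch, this covers most of what "cvs -n -q update" reports.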
Full disclosure: I work for Perforce
There will be two new commands, "p4 status" and "p4 reconcile", in the upcoming 2012.1 release. See the following for more details:
http://www.perforce.com/blog/120126/new-20121-p4reconcile-p4status
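Once that release is out, basic usage should look something like this (a minimal sketch):
p4 status       # preview which workspace files would be opened for add, edit, or delete
p4 reconcile    # actually open those files in a changelist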
Not quite sure what you mean. If you are talking about seeing what files need "resolving" (in Perforce language), then you can use:
p4 resolve -n
See the p4 command line manual website here:
http://www.perforce.com/perforce/doc.current/manuals/cmdref/resolve.html#1040665
Also, P4V has a nice feature to highlight unsubmitted and dirty files, if you use that client. Right-click on a folder in the workspace view and select "Reconcile Offline Work." After a bit of processing you'll get a list of files that are out of sync with the depot.
Hope this helps.