I'm trying to find out how to edit a specific job field using the p4.exe command line.
I tried watching the output of P4V when I edit one manually, but it gives no more of a hint than:
p4 job -i
Job job0001 saved.
p4 job -o job0001
I tried many commands along the lines of p4 job -i job0001 field_name=value, and so on.
As usual, the docs explain nothing.
Thanks
Using the '-i' flag with the 'p4 job' command allows a job specification to be read in from a file, if you also use the '<' operator.
For example, I have a standard description that I want to include in all my jobs, so I run:
p4 job -o > jobfile
to create a file that contains a job spec.
I then edit this file so that the 'Description' field contains my description.
I then run the following:
bash-3.2$ p4 job -i < jobfile
Job job000003 saved.
Running the following command shows that 'job000003' contains the description that I wrote in 'jobfile':
bash-3.2$ p4 job -o job000003
# A Perforce Job Specification.
#
# Job: The job name. 'new' generates a sequenced job number.
# Status: Either 'open', 'closed', or 'suspended'. Can be changed.
# User: The user who created the job. Can be changed.
# Date: The date this specification was last modified.
# Description: Comments about the job. Required.
Job: job000003
Status: open
User: jen
Date: 2015/03/31 12:08:16
Description:
My test job.
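If you want to change a single field without hand-editing a file at all, newer versions of the command line client also have a global --field option; I believe something along these lines works for jobs (the description text here is just an example):
p4 --field "Description=My updated description." job -o job000003 | p4 job -i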
Hope this helps,
Jen!
I recently downloaded Speedtest onto my Raspberry Pi and wrote a script to output the results to a CSV file.
I'm trying to do this regularly via a cron job, but for some reason it won't execute the shell script as intended.
Here's the script below. I've commented out or cut a lot of it while trying to find the issue.
#!/bin/bash
# Commented out if statement detects presence of data file and creates one if it doesn't exist. Was going to adjust later to include variables/input options if I wanted to used script on alternate systems, but commented out while working on main issue.
file='/home/User/Documents/speedtestdata.csv'
# have tried this with and without quotes, does not seem to make a difference either way
#HEADERS='/usr/bin/speedtest-cli --csv-header'
SPEEDTEST='/usr/bin/speedtest-cli --csv'
# Used absolute path for the executable
#LOG=/home/User/scripts/testreclog.txt
#DATE=$( date )
# Was using the above to log steps of script running successfully with timestamp, commented out
#if [ ! -f $file ]
#then
# echo "Creating results file">>$LOG
# touch $file
# $HEADERS > $file
#fi
#echo "Running speedtest">>$LOG
$SPEEDTEST >> $file
#echo "Formatting results">>$LOG
#column -s, -t < $file
# this step was used to format the log file neatly
#echo "Time completed ",$DATE>>$LOG
And here's how the crontab currently looks
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
*/5 * * * * /bin/bash /home/User/scripts/testandrec.sh
# 2> /home/User/scripts/testrecerror.txt
# Was attempting to log errors to this file, nothing seen so commented out on a newline.
#* * * * * /home/User/scripts/testscript.sh test to verify cron works (it does)
I've added my scripts folder to the end of my PATH, but for some reason this only shows up when I'm using the Pi directly; when I SSH in, the scripts folder is missing from the end of the PATH.
However, given that I've used absolute paths for everything, I'm not sure why this would be an issue.
First I tested whether a simple cron job would work, so I created testscript.sh, which simply wrote 'Test' and a timestamp to a specific file; it used the same shebang and absolute paths, and it functioned as intended.
I have checked systemctl for cron, restarted cron with sudo service cron restart, and made sure a newline is in place at the end of the crontab.
I have tried with and without /bin/bash in the crontab entry; it seemingly hasn't made a difference.
I tried cd /home/User/scripts && ./testandrec.sh but no luck.
I changed the run interval to every 5 and then every 10 minutes, which has not worked.
I have noticed that when I run the script manually with column -s, -t < $file left in, the results file is formatted as intended when I cat it.
However, the next time the cron job should run, this reverts to CSV with a comma as the delimiter, so clearly something is running.
To confuse matters further, I think the script may be firing once after restarting cron, and then not working when it should run subsequently. When I leave the column line in, this appears to just revert the formatting, but if I comment it out it appears to run a speed test and append the results, but only once. However, I may be wrong about this and haven't been able to reproduce it reliably.
If I instead try 0 * * * * /usr/bin/speedtest-cli --csv >> /home/User/Documents/speedtestdata.csv && column -s, -t < /home/User/Documents/speedtestdata.csv, it appears to perform the speed test and append the results, but it does not action the column command.
I would much rather tie the process up neatly in a shell script, however, than have the above, which isn't very DRY.
I've looked extensively, but none of the solutions I've found on this site or others have fixed the issue.
Any troubleshooting suggestions/help would be greatly appreciated.
Here you go - the solution is simple:
#!/bin/bash
# Commented out if statement detects presence of data file and creates one if it doesn't exist. Was going to adjust later to include variables/input options if I wanted to used script on alternate systems, but commented out while working on main issue.
file='/home/User/Documents/speedtestdata.csv'
# have tried this with and without quotes, does not seem to make a difference either way
#HEADERS='/usr/bin/speedtest-cli --csv-header'
SPEEDTEST='/usr/bin/speedtest-cli --csv'
# Used absolute path for the executable
#LOG=/home/User/scripts/testreclog.txt
#DATE=$( date )
# Was using the above to log steps of script running successfully with timestamp, commented out
#if [ ! -f $file ]
#then
# echo "Creating results file">>$LOG
# touch $file
# $HEADERS > $file
#fi
#echo "Running speedtest">>$LOG
$SPEEDTEST | column -s, -t >> $file
Just check the last line ;)
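A small follow-up thought (not from the original script): if you'd rather keep the file as plain CSV so it stays easy to parse later, you can leave the append as $SPEEDTEST >> $file and only format the data when viewing it, using the path from the question:
column -s, -t < /home/User/Documents/speedtestdata.csv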
Is there any way to do a dry run of xgettext on source files, in order to simply check if there are any differences compared to the current .pot file?
I have set up a GitHub workflow that runs xgettext on the source files any time a change to a source file is pushed to the repository. Often, though, the change to the source file doesn't alter any translation strings, so the only difference in the resulting .pot file is the creation date, which gets updated every time xgettext runs. This makes for unnecessary commits, and triggers unnecessary webhook calls to my Weblate instance, which picks up on an "updated" .pot file and winds up generating its own pull request with an "updated" .pot file.
If there were a way to do a dry run and first check whether there are any actual differences in the strings, I could keep unnecessary commits and PRs from polluting my repo. Any ideas?
I was able to add a filter to my GitHub workflow, checking whether there are any significant changes besides a simple update to the value of POT-Creation-Date in the .pot file. I added an id to the step that takes care of running xgettext; then, after running xgettext, I save the count of significant changed lines in the .pot file to a variable that will be accessible to the next step:
- name: Update source file translation strings
  id: update_pot
  run: |
    sudo apt-get install -y gettext
    xgettext --from-code=UTF-8 --add-comments='translators:' --keyword="pgettext:1c,2" -o i18n/litcal.pot index.php
    echo "::set-output name=POT_LINES_CHANGED::$(git diff -U0 | grep '^[+|-][^+|-]' | grep -Ev '^[+-]"POT-Creation-Date' | wc -l)"
Then I check against this variable before running the commit step:
- name: Push changes # push the output folder to your repo
  if: ${{ steps.update_pot.outputs.POT_LINES_CHANGED > 0 }}
  uses: actions-x/commit@v4
  with:
    # The committer's email address
    email: 41898282+github-actions[bot]@users.noreply.github.com
    # The committer's name
    name: github-actions
    # The commit message
    message: regenerated i18n/litcal.pot from source files
etc.
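Another option, if you'd rather avoid committing at all when only the header changed, is to regenerate the .pot into a temporary file and diff it against the committed one while ignoring the POT-Creation-Date line. A rough sketch using GNU diff's -I option (the xgettext flags and paths are the ones from the workflow above; the temp path is just a placeholder):
xgettext --from-code=UTF-8 --add-comments='translators:' --keyword="pgettext:1c,2" -o /tmp/litcal.pot index.php
if diff -q -I '^"POT-Creation-Date' i18n/litcal.pot /tmp/litcal.pot > /dev/null; then
  echo "No translatable string changes; skipping commit."
else
  mv /tmp/litcal.pot i18n/litcal.pot
fi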
I'm putting together a snakemake slurm workflow and am having trouble with my working directory becoming cluttered with slurm output files. I would like my workflow to, at a minimum, direct these files to a 'slurm' directory inside my working directory. I currently have my workflow set up as follows:
config.yaml:
reads:
  1:
  2:
samples:
  15FL1-2: /datasets/work/AF_CROWN_RUST_WORK/2020-02-28_GWAS/data/15FL1-2
  15Fl1-4: /datasets/work/AF_CROWN_RUST_WORK/2020-02-28_GWAS/data/15Fl1-4
cluster.yaml:
localrules: all

__default__:
  time: 0:5:0
  mem: 1G
  output: _{rule}_{wildcards.sample}_%A.slurm

fastqc_raw:
  job_name: sm_fastqc_raw
  time: 0:10:0
  mem: 1G
  output: slurm/_{rule}_{wildcards.sample}_{wildcards.read}_%A.slurm
Snakefile:
configfile: "config.yaml"
workdir: config["work"]

rule all:
    input:
        expand("analysis/fastqc_raw/{sample}_R{read}_fastqc.html", sample=config["samples"], read=config["reads"])

rule clean:
    shell:
        "rm -rf analysis logs"

rule fastqc_raw:
    input:
        'data/{sample}_R{read}.fastq.gz'
    output:
        'analysis/fastqc_raw/{sample}_R{read}_fastqc.html'
    log:
        err = 'logs/fastqc_raw/{sample}_R{read}.out',
        out = 'logs/fastqc_raw/{sample}_R{read}.err'
    shell:
        """
        fastqc {input} --noextract --outdir 'analysis/fastqc_raw' 2> {log.err} > {log.out}
        """
I then call with:
snakemake --jobs 4 --cluster-config cluster.yaml --cluster "sbatch --mem={cluster.mem} --time={cluster.time} --job-name={cluster.job_name} --output={cluster.output}"
This does not work, as the slurm directory does not already exist. I don't want to have to create it manually before running my snakemake command; that will not scale. Things I've tried, after reading every related question, are:
1) Simply trying to capture all the output via the log within the rule, and setting cluster.output='/dev/null'. This doesn't work; the info in the slurm output isn't captured, as it's not really output of the rule, it's info about the job.
2) forcing the directory to be created by adding a dummy log:
log:
    err = 'logs/fastqc_raw/{sample}_R{read}.out',
    out = 'logs/fastqc_raw/{sample}_R{read}.err',
    jobOut = 'slurm/out.err'
I think this doesn't work because sbatch tries to find the slurm folder before the rule is executed.
3) allowing the files to be made in the working directory, and adding bash code to the end of the rule to move the files into a slurm directory. I believe this doesn't work because it tries to move the files before the job has finished writing to the slurm output.
Any further ideas or tricks?
You should be able to suppress these outputs by calling sbatch with --output=/dev/null --error=/dev/null. Something like this:
snakemake ... --cluster "sbatch --output=/dev/null --error=/dev/null ..."
If you want the files to go to a directory of your choosing you can of course change the call to reflect that:
snakemake ... --cluster "sbatch --output=/home/Ensa/slurmout/%j.out --error=/home/Ensa/slurmout/%j.out ..."
So this is how I solved the issue (there's probably a better way, and if so, I hope someone will correct me). Personally I will go to great lengths to avoid hard-coding anything. I use a snakemake profile and an sbatch script.
First, I make a snakemake profile that contains a line like this:
cluster: "sbatch --output=slurm_out/slurm-%j.out --mem={resources.mem_mb} -c {resources.cpus} -J {rule}_{wildcards} --mail-type=FAIL --mail-user=me@me.edu"
You can see the --output parameter to redirect the slurm output files to a subdirectory called slurm_out in the current working directory. But AFAIK, slurm can't create that directory if it doesn't exist. So...
Next I make a small wrapper script whose only job is to make the subdirectory and then call sbatch to submit the workflow. This "wrapper" looks like:
#!/bin/bash
mkdir -p ./slurm_out
sbatch snake_submit.sbatch
And finally, the snake_submit.sbatch looks like:
#!/bin/bash
ml snakemake
snakemake --profile <myprofile>
In this case, both the wrapper and the sbatch script that it calls will have their slurm output files in the current working directory. I prefer it that way because it's easier for me to locate them. But I think you could easily redirect them by adding an #SBATCH --output directive to the snake_submit.sbatch script (but not to the wrapper; then it's turtles all the way down, you know?).
I hope that makes sense.
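For reference, a minimal profile config.yaml along the lines described above might look like this; the jobs and latency-wait values are just illustrative placeholders, not something from my actual setup:
cluster: "sbatch --output=slurm_out/slurm-%j.out --mem={resources.mem_mb} -c {resources.cpus} -J {rule}_{wildcards}"
jobs: 100
latency-wait: 60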
How to create and submit a new file using p4python?
def create_and_submit_file(full_path_in_depot, new_file_text_content):
    logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
    p4 = get_p4()  # implemented
    try:  # Catch exceptions with try/except
        connect_and_login_p4(p4)  # implemented
        # .. implementation here .. p4.some_call()
        LOGGER.info('done')
    except P4Exception:
        for e in p4.errors:  # Display errors
            LOGGER.error(e)
If the file already exists on the local filesystem within a workspace, all you need to do is p4.run_add(file) and p4.run_submit('-d', 'this is my awesome file').
If the file doesn't exist, you need to create it, and if you don't have a workspace, you need to create that too. For the sake of brevity, here's how you'd do that from the command line completely from scratch (this maps pretty directly to P4Python but I don't know enough about your environment to give you code that'll work out of the box so I won't attempt the translation):
echo "my awesome file content" > my_awesome_file
p4 set P4CLIENT=my_awesome_client
p4 --field "View=//depot/... //my_awesome_client/..." client -o | p4 client -i
p4 add my_awesome_file
p4 submit -d "this is my awesome file"
Check out the example for p4.save_client to see a simple example of how you can create/modify a client spec with P4Python and modify the fields to suit your environment (similar to how I used the --field flag to set the View such that the root of my_awesome_client corresponds to //depot/...):
https://www.perforce.com/perforce/r14.2/manuals/p4script/python.p4.html#python.p4.save_spectype
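That said, a very rough, untested sketch of the same steps in P4Python might look like the following; the client name, view, root path, and file content are placeholders carried over from the shell example, so adapt them to your environment:
import os
from P4 import P4, P4Exception

p4 = P4()
p4.client = "my_awesome_client"  # placeholder client name
p4.connect()
try:
    # Create or update the client spec so the depot maps into this workspace
    client = p4.fetch_client()
    client["Root"] = "/path/to/workspace/root"  # placeholder root
    client["View"] = ["//depot/... //my_awesome_client/..."]
    p4.save_client(client)

    # Write the file under the client root, then add and submit it
    local_path = os.path.join(client["Root"], "my_awesome_file")
    with open(local_path, "w") as f:
        f.write("my awesome file content\n")
    p4.run_add(local_path)
    p4.run_submit("-d", "this is my awesome file")
except P4Exception:
    for e in p4.errors:  # Display errors
        print(e)
finally:
    p4.disconnect()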
In Perforce, changelists get renumbered on submission. So, for example, when a changelist was created it might be numbered 777, but on submission it gets renumbered to, say, 790.
My question is: how do I get the new CL number (790) if I know the old CL number (777), or vice versa?
If you really want the original changelist number, it can be retrieved from Perforce without having to embed it in the description. You can use the -ztag command line option to get at it, and (as far as I know) only through the 'changes' command:
d:\sandbox>p4 submit -c 24510
Submitting change 24510.
Locking 1 files ...
edit //depot/testfile.txt#2
Change 24510 renamed change 24512 and submitted.
d:\sandbox>p4 -ztag changes -m1 //depot/testfile.txt
... change 24512
... time 1294249178
... user test.user
... client client-test.user
... status submitted
... oldChange 24510
... desc <enter description here>
As pointed out, it's probably not that useful. However, I did want to note that it's possible to get at it.
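If you just want to pull the old number out of that output in a script, something like this should do it (same file path as the example above):
p4 -ztag changes -m1 //depot/testfile.txt | awk '/^\.\.\. oldChange/ {print $3}'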
The only way I can think of is adding the original changelist number as part of the changelist description field. First, you'll need a script to store the original changelist number:
#!/bin/env perl
$id = $ARGV[0];
open (CHANGE_IN, "p4 change -o $id|");
open (CHANGE_OUT, "|p4 change -i $id");
while (<CHANGE_IN>)
{
    if (/^Description:/ and not /ORIGID/)
    {
        s/(^Description:)(.*)$/$1 ORIGID $id. $2/;
    }
    print CHANGE_OUT $_;
}
close (CHANGE_IN);
close (CHANGE_OUT);
Save this as origid.pl on the Perforce server with the executable bit set. Then set up a trigger with p4 triggers.
Triggers:
        add_origid change-submit //depot/... /usr/bin/origid.pl %change%
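With that trigger in place, each submitted changelist's Description ends up prefixed with the original number; based on the substitution in the script above, it would look roughly like this (illustrative text only):
Description: ORIGID 777. <original description text>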
Version 2012.1 of Perforce introduced the -O (capital oh) argument to p4 describe, which allows you to query a changelist by its original number (before being renumbered by p4 submit).
I find this very helpful, since I often find myself keeping notes about a changeset before it is submitted, then forgetting to note what it was renumbered to on submission.
So if I have a note talking about change 12300, I can now see what it refers to by typing:
p4 describe -s -O 12300
and having Perforce tell me:
Change 12345 by me@myhost on 2013/10/31 00:00:00
Fix that thing I wrote that note about
Affected files ...
... //Proj/MAIN/foo.c
The ztag method mentioned earlier can be used to find the old changelist number of a submitted change:
> p4 -ztag describe -s 12345 | grep oldChange
... oldChange 12300
Adding to Eric Miller's reply, because I can't comment (not enough points):
To emit just the single number:
p4 -ztag describe $ORIG | sed -e 's/^\.\.\. oldChange //;t-ok;d;:-ok'
e.g.
OLD=$(p4 -ztag describe $ORIG | sed -e 's/^\.\.\. oldChange //;t-ok;d;:-ok')
Or, if you want to look up many numbers, this will output a "new old" pair on each line (which may be useful with the "join" command). If there is no old change, then it re-emits the new one.
p4 -ztag describe 782546 782547 ... | sed -e '${x;p};s/^\.\.\. change //;t-keep;b-next;:-keep;x;/./p;g;G;s/\n/ /;x;d;:-next;s/^\.\.\. oldChange //;t-ok;d;:-ok;H;x;s/ .*\n/ /;x;d;'
I tried to avoid using GNU extensions to sed.
Unless you do something like Tim suggests, the old changelist number will be lost on submission.
Changelist numbers are only temporary until you actually submit. So if you create a new changelist (777, say) and then decide to delete it, the next changelist you create will be 778.
It can be a bit more elegant if you use the P4 Python module, for example:
import P4
p4 = P4.P4()
p4.connect() # having a valid p4 workspace/connection is on you :)
c = p4.run_describe('969696') # describe a Submitted, renumbered changelist, i.e. 969696
old_pending_cl_number = c['oldChange'] # the prior/pending CL number, if it exists
Cheers