I've created two tags in ReadyAPI: "Test" and "Stage". I've tagged two test cases with "Test" and one with "Stage".
When I launch testrunner, I would like to run only the test case(s) tagged with "Stage".
I added the following parameters:
"C:\Program Files\SmartBear\ReadyAPI-3.20.2\bin\testrunner.bat" -r -a -j -f${WORKSPACE} "-RJUnit-Style HTML Report" -FXML "-EDefault environment" "-TTestCase Stage" C:\TestOfTags-readyapi-project.xml -Dreadyapi.skip.endpoints.checks=true
But my tagged test case doesn't run.
ReadyAPI 3.20.2 TestCaseRunner Summary
Time Taken: 753ms
Total TestSuites: 1
Total TestCases: 0 (0 failed)
Total TestSteps: 0
Total Request Assertions: 0
Total Failed Assertions: 0
Total Exported Results: 0
What am I missing here?
So I figured it out myself:
Create a tag, let's say with the name exampleTest.
Tag the test cases that you want to run with "exampleTest".
In testrunner, simply add: "-TTestCase exampleTest"
Done
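For reference, plugging that into the original command from the question gives something like the following (same flags and paths as above; the only assumption is a tag named exampleTest as in the steps above):
"C:\Program Files\SmartBear\ReadyAPI-3.20.2\bin\testrunner.bat" -r -a -j -f${WORKSPACE} "-RJUnit-Style HTML Report" -FXML "-EDefault environment" "-TTestCase exampleTest" C:\TestOfTags-readyapi-project.xml -Dreadyapi.skip.endpoints.checks=true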
We have the following output at the end of our Jest test run:
Test Suites: 273 passed, 273 total
Tests: 1 skipped, 1923 passed, 1924 total
Snapshots: 61 passed, 61 total
Time: 38.885 s, estimated 39 s
You see there is one skipped test.
When I search my test files for it.skip, test.skip, or just skip, I find nothing.
I also tried outputting the test run as JSON via:
jest --json --outputFile=testrun.json
In the top of the file I find this information:
{
  "numFailedTestSuites": 0,
  "numFailedTests": 0,
  "numPassedTestSuites": 273,
  "numPassedTests": 1923,
  "numPendingTestSuites": 0,
  "numPendingTests": 1,
  "numRuntimeErrorTestSuites": 0,
  "numTodoTests": 0,
  "numTotalTestSuites": 273,
  "numTotalTests": 1924,
  ...
}
So it looks like numPendingTests is the one pointing to the skipped test. But when I search the output file, again, there is no trace of a skipped test. In fact, I did a search for "status": "[a-z] and there is no other status to be found than "passed".
Short of looking through 270+ test suites, how else could a skipped test hide from me? Is there any way to find it?
As mentioned by johnrsharp in a comment, another way to skip tests in Jest is to prefix it (or test) with x, so if you want to scan the files for skipped tests, you also need to look out for xit or xtest.
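For example, a single search that covers all of these spellings could look something like this (just a sketch, assuming GNU grep and tests living under src/; adjust the path and pattern to your project):
grep -rnE "\b(xit|xtest|it\.skip|test\.skip)\b" src/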
I am noticing that all my rules request memory twice: once at a lower maximum than what I requested (mem_mb) and then at what I actually requested (mem_gb). If I run the rules as localrules they do run faster. How can I make sure the default settings do not interfere?
resources: mem_mb=100, disk_mb=8620, tmpdir=/tmp/pop071.54835, partition=h24, qos=normal, mem_gb=100, time=120:00:00
The rules are as follows:
rule bwa_mem2_mem:
    input:
        R1 = "data/results/qc/{species}.{population}.{individual}_1.fq.gz",
        R2 = "data/results/qc/{species}.{population}.{individual}_2.fq.gz",
        R1_unp = "data/results/qc/{species}.{population}.{individual}_1_unp.fq.gz",
        R2_unp = "data/results/qc/{species}.{population}.{individual}_2_unp.fq.gz",
        idx = "data/results/genome/genome",
        ref = "data/results/genome/genome.fa"
    output:
        bam = "data/results/mapped_reads/{species}.{population}.{individual}.bam",
    log:
        bwa = "logs/bwa_mem2/{species}.{population}.{individual}.log",
        sam = "logs/samtools_view/{species}.{population}.{individual}.log",
    benchmark:
        "benchmark/bwa_mem2_mem/{species}.{population}.{individual}.tsv"
    resources:
        time = parameters["bwa_mem2"]["time"],
        mem_gb = parameters["bwa_mem2"]["mem_gb"],
    params:
        extra = parameters["bwa_mem2"]["extra"],
        tag = compose_rg_tag,
    threads:
        parameters["bwa_mem2"]["threads"]
    shell:
        "bwa-mem2 mem -t {threads} -R '{params.tag}' {params.extra} {input.idx} {input.R1} {input.R2} | "
        "samtools sort -l 9 -o {output.bam} --reference {input.ref} --output-fmt CRAM -@ {threads} /dev/stdin 2> {log.sam}"
and the config is:
cluster:
  mkdir -p logs/{rule} &&  # change the log file to logs/slurm/{rule}
  sbatch
    --partition={resources.partition}
    --time={resources.time}
    --qos={resources.qos}
    --cpus-per-task={threads}
    --mem={resources.mem_gb}
    --job-name=smk-{rule}-{wildcards}
    --output=logs/{rule}/{rule}-{wildcards}-%j.out
    --parsable  # Required to pass job IDs to scancel
default-resources:
  - partition=h24
  - qos=normal
  - mem_gb=100
  - time="04:00:00"
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 100
keep-going: True
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True  # Required to run with a local conda environment
cluster-status: status-sacct.sh  # Required to monitor the status of the submitted jobs
cluster-cancel: scancel  # Required to cancel the jobs with Ctrl + C
cluster-cancel-nargs: 50
Cheers,
Angel
Right now there are two separate memory resource requirements:
mem_mb
mem_gb
From the perspective of Snakemake these are different resources, so both will be passed to the cluster. A quick fix is to use the same key and units everywhere, e.g. if the resource really requires only 100 MB, then the default resource should be changed to:
default-resources:
  - partition=h24
  - qos=normal
  - mem_mb=100
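For the rule and the cluster submit string, this also means sticking to that same key. A minimal sketch of what that could look like, assuming parameters["bwa_mem2"]["mem_gb"] holds a plain number of gigabytes (names taken from the question):
    resources:
        time = parameters["bwa_mem2"]["time"],
        mem_mb = parameters["bwa_mem2"]["mem_gb"] * 1024,  # assumption: convert the configured GB value to MB
and, in the cluster profile, submit with:
    --mem={resources.mem_mb}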
While trying to set up Nextflow with Azure Batch (nf-core), I am getting the following error. I tried this on multiple workflows (sarek, atacseq, etc.) and I get the same error:
N E X T F L O W ~ version 22.04.0
Pulling nf-core/atacseq ...
downloaded from https://github.com/nf-core/atacseq.git
Launching `https://github.com/nf-core/atacseq` [rhl6d5529] DSL1 - revision: 1b3a832db5 [1.2.1]
Downloading plugin nf-azure@0.13.1
----------------------------------------------------
,--./,-.
___ __ __ __ ___ /,-._.--~'
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/atacseq v1.2.1
----------------------------------------------------
Run Name : rhl6d5529
Data Type : Paired-End
Design File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/design.csv
Genome : Not supplied
Fasta File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genome.fa
GTF File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genes.gtf
Mitochondrial Contig : MT
MACS2 Genome Size : 1.2E+7
Min Consensus Reps : 1
MACS2 Narrow Peaks : No
MACS2 Broad Cutoff : 0.1
Trim R1 : 0 bp
Trim R2 : 0 bp
Trim 3' R1 : 0 bp
Trim 3' R2 : 0 bp
NextSeq Trim : 0 bp
Fingerprint Bins : 100
Save Genome Index : No
Max Resources : 6 GB memory, 2 cpus, 12h time per job
Container : docker - nfcore/atacseq:1.2.1
Output Dir : ./results
Launch Dir : /
Working Dir : /nextflow/atacseq/rhl6d5529
Script Dir : /.nextflow/assets/nf-core/atacseq
User : root
Config Profile : test,azurebatch
Config Description : Minimal test dataset to check pipeline function
Config Contact : Venkat Malladi (@vsmalladi)
Config URL : https://azure.microsoft.com/services/batch/
----------------------------------------------------
Uploading local `bin` scripts folder to az://nextflow/atacseq/rhl6d5529/tmp/66/bd55d79e42999df38ba04a81c3aa04/bin
[- ] process > CHECK_DESIGN -
[- ] process > CHECK_DESIGN [ 0%] 0 of 1
[- ] process > CHECK_DESIGN [ 0%] 0 of 1
Error executing process > 'CHECK_DESIGN (design.csv)'
Caused by:
Cannot find a matching VM image with publisher=microsoft-azure-batch; offer=centos-container; OS type=linux; verification type=verified
[58/55b7f7] process > CHECK_DESIGN (design.csv) [100%] 1 of 1, failed: 1
Error executing process > 'CHECK_DESIGN (design.csv)'
Caused by:
Cannot find a matching VM image with publisher=microsoft-azure-batch; offer=centos-container; OS type=linux; verification type=verified
I tried looking into the source code of Nextflow and found the error originates in AzBatchService.groovy (line linked below).
https://github.com/nextflow-io/nextflow/blob/0e593e6ab82880810d8139a4fe6e3c47ff69a531/plugins/nf-azure/src/main/nextflow/cloud/azure/batch/AzBatchService.groovy#L442
I did some further digging in my Azure Batch account instance. Basically, I wanted to confirm whether the list of supported images returned by the Azure Batch account includes the one required for this pipeline. I could confirm that the server did indeed respond with the required image.
What could be the issue here? I remember running the exact same pipeline a few weeks back and it did work a few times. Am I missing something?
Just had another look through the Azure Cloud docs and think this might be relevant:
By default, Nextflow creates CentOS 8-based pool nodes, but this behavior can be customised in the pool configuration. Below the configurations for image reference/SKU combinations to select two popular systems.
Ubuntu 20.04:
sku = "batch.node.ubuntu 20.04"
offer = "ubuntu-server-container"
publisher = "microsoft-azure-batch"
CentOS 8 (default):
sku = "batch.node.centos 8"
offer = "centos-container"
publisher = "microsoft-azure-batch"
I think the issue here is a mismatched nodeAgentSkuId. Nextflow is expecting a CentOS 8 node agent SKU, but you have a CentOS 7 SKU. If it's not possible to change the nodeAgentSkuId somehow, the node agent SKU that Nextflow uses should be able to be overridden by adding this to your nextflow.config:
azure.batch.pools.<name>.sku = 'batch.node.centos 7'
Where <name> is the pool identifier:
azure.batch.pools.<name>.sku
Specify the ID of the Compute Node agent SKU which the pool identified with <name> supports (default: batch.node.centos 8, requires nf-azure@0.11.0).
https://www.nextflow.io/docs/edge/azure.html#advanced-settings
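Putting the pieces together, a minimal nextflow.config sketch could look like the one below (mypool is a placeholder for your pool identifier; the SKU value is the CentOS 7 one suggested above):
azure {
    batch {
        pools {
            mypool {                         // placeholder: use your pool's identifier
                sku = 'batch.node.centos 7'  // node agent SKU matching a CentOS 7 image
            }
        }
    }
}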
I'm trying to run JMeter load-testing scripts in non-GUI mode to generate an HTML report with the command below:
./jmeter.sh -n -t "/home/dsbloadtest/DSB_New_21_01_2022/apache-jmeter-5.4.3/dsb_test_plans/SERVICE_BOOKING.jmx" -l /home/dsbloadtest/DSB_New_21_01_2022/apache-jmeter-5.4.3/dsb_test_results/testresults.csv -e -o /home/dsbloadtest/DSB_New_21_01_2022/apache-jmeter-5.4.3/dsb_test_results/HTMLReports
It was working fine, but now I am not getting any results; instead I get the output below:
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
Tidying up ... # Fri Apr 01 11:22:40 IST 2022 (1648792360414)
Error generating the report: org.apache.jmeter.report.core.SampleException: Could not read metadata !
... end of run
I have tried to generate the HTML report in JMeter non-GUI mode.
summary = 0 in 00:00:00 = ******/s Avg: 0 Min: 9223372036854775807 Max: -9223372036854775808 Err: 0 (0.00%)
This means that JMeter didn't execute any Sampler: your testresults.csv is empty and you don't have any data to generate the dashboard from.
The reason for the test failure can normally be figured out from the jmeter.log file; the most common mistakes are:
the file referenced in the CSV Data Set Config doesn't exist
the JMeter Plugins used in the test are not installed for this particular JMeter instance
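For example, a quick first check before re-running could be to scan the log for problems (a sketch; adjust the path to wherever your jmeter.log is written):
grep -iE "error|exception" jmeter.log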
I have a Snakemake workflow for a metagenomics project. At one point in the workflow, I map DNA sequencing reads (either single-end or paired-end) to metagenome assemblies made by the same workflow. I made an input function, following the Snakemake manual, to map both single-end and paired-end reads with one rule, like so:
import os.path

def get_binning_reads(wildcards):
    pathpe = ("data/sequencing_binning_signals/" + wildcards.binningsignal + ".trimmed_paired.R1.fastq.gz")
    pathse = ("data/sequencing_binning_signals/" + wildcards.binningsignal + ".trimmed.fastq.gz")
    if os.path.isfile(pathpe) == True:
        return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed_paired.R{PE}.fastq.gz", PE=[1, 2], binningsignal=wildcards.binningsignal)}
    elif os.path.isfile(pathse) == True:
        return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed.fastq.gz", binningsignal=wildcards.binningsignal)}
rule backmap_bwa_mem:
    input:
        unpack(get_binning_reads),
        index=expand("data/assembly_{{assemblytype}}/{{hostcode}}/scaffolds_bwa_index/scaffolds.{ext}", ext=['bwt', 'pac', 'ann', 'sa', 'amb'])
    params:
        lambda w: expand("data/assembly_{assemblytype}/{hostcode}/scaffolds_bwa_index/scaffolds", assemblytype=w.assemblytype, hostcode=w.hostcode)
    output:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam"
    threads: 100
    log:
        stdout="logs/bwa_backmap_samtools_{assemblytype}_{hostcode}.stdout",
        samstderr="logs/bwa_backmap_samtools_{assemblytype}_{hostcode}.stdout",
        stderr="logs/bwa_backmap_{assemblytype}_{hostcode}.stderr"
    shell:
        "bwa mem -t {threads} {params} {input.reads} 2> {log.stderr} | samtools view -@ 12 -b -o {output} 2> {log.samstderr} > {log.stdout}"
When I make an arbitrary 'all-rule' like this, the workflow runs successfully.
rule allbackmapped:
    input:
        expand("data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam", binningsignal=BINNINGSIGNALS, assemblytype=ASSEMBLYTYPES, hostcode=HOSTCODES)
However, when the files created by this rule are required for subsequent rules like so:
rule backmap_samtools_sort:
    input:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam"
    output:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.sorted.bam"
    threads: 6
    resources:
        mem_mb=5000
    shell:
        "samtools sort -@ {threads} -m {resources.mem_mb}M -o {output} {input}"

rule allsorted:
    input:
        expand("data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.sorted.bam", binningsignal=BINNINGSIGNALS, assemblytype=ASSEMBLYTYPES, hostcode=HOSTCODES)
The workflow exits with this error:
WorkflowError in line 416 of /stor/azolla_metagenome/Azolla_genus_metagenome/Snakefile:
Can only use unpack() on list and dict
To me, this error suggests the input function for the former rule is faulty. This, however, seems not to be the case, for it ran successfully when no subsequent processing was queued.
The entire project is hosted on GitHub; see the entire Snakefile and a GitHub issue there.