Creating a security realm on WildFly 8.1.0 - Node path format is wrong around 'x' on issuing a command to create the realm

When issuing the following commands on jboss-cli.bat (an MS-DOS batch file for Windows) to create a security realm on WildFly 8.1.0.Final, as mentioned in this migration guide,
./subsystem=security/security-domain=app:add(cache-type="default")
cd ./subsystem=security/security-domain=app
./authentication=classic:add(
    login-modules=[ {
        code="Database",
        flag="required",
        module-options={
            dsJndiName="java:/jdbc/project_datasource",
            principalsQuery="SELECT password FROM user_role_table WHERE user_id=?",
            rolesQuery="SELECT group_id, 'Roles'
                FROM group_table gt INNER JOIN user_role_table urt ON gt.user_group_id = urt.user_id
                WHERE urt.user_id=?",
            hashAlgorithm="SHA-256",
            hashEncoding="BASE64",
            unauthenticatedIdentity="guest"
        }
    }, {
        code="RoleMapping",
        flag="required",
        module-options={
            rolesProperties="file:${jboss.server.config.dir}/app.properties",
            replaceRole="false"
        }
    }
])
I get the following error at the CLI prompt:
Node path format is wrong around 'cd.' (index 67)
If cd is removed, then the following error is reported.
Failed to perform read-operation-description to validate the request:
java.util.concurrent.ExecutionException: Operation failed
The command is given in a contiguous text format as follows.
./subsystem=security/security-domain=app:add(cache-type="default") cd ./subsystem=security/security-domain=app ./authentication=classic:add(login-modules=[ {code="Database",flag="required",module-options={dsJndiName="java:/jdbc/project_datasource",principalsQuery="SELECT password FROM user_role_table WHERE user_id=?",rolesQuery="SELECT group_id, 'Roles' FROM group_table gt INNER JOIN user_role_table urt ON gt.user_group_id = urt.user_id WHERE urt.user_id=?",hashAlgorithm="SHA-256",hashEncoding="BASE64",unauthenticatedIdentity="guest"}},{code="RoleMapping",flag="required",module-options={rolesProperties="file:${jboss.server.config.dir}/app.properties",replaceRole="false"}}])
What is the fix? I just do not want to copy/paste the XML into the configuration file, as it might differ from version to version.

The problem is caused by combining multiple commands on a single line.
The simplest solution is to store the CLI commands in an external file.
E.g. create a security-domain.cli file in the wildfly-8.1.0.Final/bin folder with the following content (if you want to split a command across multiple lines, put a backslash as the last character):
/subsystem=security/security-domain=app:add(cache-type="default")
/subsystem=security/security-domain=app/authentication=classic:add()
/subsystem=security/security-domain=app/authentication=classic/login-module=Database:add( \
code="Database", \
flag="required", \
module-options=[ \
("dsJndiName"=>"java:/jdbc/project_datasource"), \
("principalsQuery"=>"SELECT password FROM user_role_table WHERE user_id=?"), \
("rolesQuery"=>"SELECT group_id, 'Roles' FROM group_table gt INNER JOIN user_role_table urt ON gt.user_group_id = urt.user_id WHERE urt.user_id=?"), \
("hashAlgorithm"=>"SHA-256"), \
("hashEncoding"=>"BASE64"), \
("unauthenticatedIdentity"=>"guest") \
])
/subsystem=security/security-domain=app/authentication=classic/login-module=RoleMapping:add( \
code="RoleMapping", \
flag="required", \
module-options=[ \
("rolesProperties"=>"file:${jboss.server.config.dir}/app.properties"), \
("replaceRole"=>"false") \
])
(Your sample uses the old style of defining login modules. That style is deprecated now, so this example uses the new way.)
Run the new file with the JBoss CLI tool:
jboss-cli.bat -c --file=security-domain.cli
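To verify the result without opening standalone.xml, you can read the new configuration back in the same CLI session; this is just a sketch using the standard read-resource operation:
/subsystem=security/security-domain=app:read-resource(recursive=true)
It should print the security domain, with both login modules, exactly as the server registered them.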

Related

GATK GnarlyGenotyper limit of alleles

I am joint-calling 167 samples with GATK GenomicsDBImport, but I got this kind of error:
Sample/Callset 45 (TileDB row idx 107) at Chromosome Chr1 position 1320197 (TileDB column 247913574) has too many genotypes in the combined VCF record : 1081 : current limit : 1024 (num_alleles, ploidy) = (46, 2). Fields, such as PL, with length equal to the number of genotypes will NOT be added for this sample for this location.
Following the advice I found at the link below, I decided to use GnarlyGenotyper to call the variants, as it seems to handle more alleles.
https://gatk.broadinstitute.org/hc/en-us/community/posts/360072168712-GenomicsDBImport-Attempting-to-genotype-more-than-50-alleles?page=1#community_comment_360012343671
I ran the following script, with the option that should accept more alleles:
~/gatk-4.2.0.0/gatk GnarlyGenotyper \
-R "$reference" \
-V gendb://GenomicsDBImport_GATK \
--max-alternate-alleles 100 \
-O GenotypeGVCFs_gnarly.vcf
Unfortunately I got the following error as well:
Chromosome Chr1 position 198912 (TileDB column 246792289) has too many alleles in the combined VCF record : 7 : current limit : 6. Fields, such as PL, with length equal to the number of genotypes will NOT be added for this location.
Has anyone already used this tool? Is it possible to input more alleles?

Minting token in cardano mainnet --tx-out error

I am trying to mint a token on the Cardano mainnet. I have built a block producer and staking pool. While working to mint a token, I am running into the error "unexpected '2', expecting space, "+" or end of input".
Here is the Linux command I'm running:
cardano-cli transaction build-raw \
  --shelley-era \
  --fee $fee \
  --tx-in $txhash#$txix \
  --tx-out $address+$output+"$tokenamount $policyid.$tokenname1" \
  --mint="$tokenamount $policyid.$tokenname1" \
  --minting-script-file policy/policy.script \
  --out-file matx.raw
Here is the error:
option --tx-out:
unexpected '2'
expecting space, "+" or end of input
Inputs:
I have tried different outputs of 10000000, 5000000, and 0.
tokenamount="10000000"
address=$(cat payment.addr)
tokenname1="CpoolTest"
https://developers.cardano.org/docs/native-tokens/minting/
Please help.
I guess I found the error.
Check what is inside $policyid. It should contain only one address.
Try echo $policyid. It should display only one address.
If that's not the case, you can try the following.
Delete your policyID:
rm -rf policy/policyID
After deleting it, create a brand new one:
cardano-cli transaction policyid --script-file ./policy/policy.script > policy/policyID
Now set the variable:
policyid=$(cat policy/policyID)
Echo it:
echo $policyid
Exactly one address should be displayed. Your code should work now.
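As a quick sanity check (a sketch that assumes the variables from the question are already set), you can print the expanded --tx-out value before building the transaction:
echo "--tx-out = $address+$output+$tokenamount $policyid.$tokenname1"
If a second address or a stray line break appears in the output, the policyID file still contains more than one line.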

Filter non-equal values in pyspark, using condition.\ where(array_contains())

I have this PySpark code:
from pyspark.sql.functions import array_contains, to_date

condition_no_hypertension = condition.\
    where(array_contains('clinicalStatus.coding.code', 'active')).\
    where(array_contains('verificationStatus.coding.code', 'confirmed')).\
    where(array_contains('code.coding.code', '38341003')).\
    where(condition.onsetDateTime > '1900-01-01').\
    withColumn('condition_status', condition['clinicalStatus.coding.code'].getItem(0)).\
    withColumn('verification_status', condition['verificationStatus.coding.code'].getItem(0)).\
    withColumn('snomed_code', condition['code.coding.code'].getItem(0)).\
    withColumn('snomed_name', condition['code.coding.display'].getItem(0)).\
    select(
        condition['subject.reference'].substr(10, 40).alias('patient_id'),
        'condition_status',
        'verification_status',
        'snomed_code',
        'snomed_name',
        to_date(condition['onsetDateTime']).alias('first_observation_date'))
How do I change this code so it picks up everything except that code ('38341003')?
I tried
where(array_contains('code.coding.code', !='38341003')).\
but it does not work.
You can use ~ (not):
where(~array_contains('code.coding.code', '38341003'))
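A minimal, self-contained sketch of the negation (the DataFrame and column names here are made up for illustration, not taken from the question):
from pyspark.sql import SparkSession
from pyspark.sql.functions import array_contains

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(['38341003'],), (['12345678'],)], ['codes'])

# ~ negates the boolean column returned by array_contains,
# so only rows whose array lacks '38341003' survive
df.where(~array_contains('codes', '38341003')).show()
One caveat: array_contains returns NULL for a NULL array, and ~NULL is still NULL, so rows where the array column is NULL are filtered out as well.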

Snakemake refuses to unpack input function when rule A is a dependency of rule B, but accepts it when rule A is the final rule

I have a Snakemake workflow for a metagenomics project. At one point in the workflow, I map DNA sequencing reads (either single- or paired-end) to metagenome assemblies made by the same workflow. I made an input function, conforming to the Snakemake manual, to map both single-end and paired-end reads with one rule, like so:
import os.path

def get_binning_reads(wildcards):
    pathpe = ("data/sequencing_binning_signals/" + wildcards.binningsignal + ".trimmed_paired.R1.fastq.gz")
    pathse = ("data/sequencing_binning_signals/" + wildcards.binningsignal + ".trimmed.fastq.gz")
    if os.path.isfile(pathpe):
        return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed_paired.R{PE}.fastq.gz", PE=[1, 2], binningsignal=wildcards.binningsignal)}
    elif os.path.isfile(pathse):
        return {'reads': expand("data/sequencing_binning_signals/{binningsignal}.trimmed.fastq.gz", binningsignal=wildcards.binningsignal)}
rule backmap_bwa_mem:
    input:
        unpack(get_binning_reads),
        index=expand("data/assembly_{{assemblytype}}/{{hostcode}}/scaffolds_bwa_index/scaffolds.{ext}", ext=['bwt', 'pac', 'ann', 'sa', 'amb'])
    params:
        lambda w: expand("data/assembly_{assemblytype}/{hostcode}/scaffolds_bwa_index/scaffolds", assemblytype=w.assemblytype, hostcode=w.hostcode)
    output:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam"
    threads: 100
    log:
        stdout="logs/bwa_backmap_samtools_{assemblytype}_{hostcode}.stdout",
        samstderr="logs/bwa_backmap_samtools_{assemblytype}_{hostcode}.stdout",
        stderr="logs/bwa_backmap_{assemblytype}_{hostcode}.stderr"
    shell:
        "bwa mem -t {threads} {params} {input.reads} 2> {log.stderr} | samtools view -@ 12 -b -o {output} 2> {log.samstderr} > {log.stdout}"
When I make an arbitrary 'all-rule' like this, the workflow runs successfully.
rule allbackmapped:
    input:
        expand("data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam", binningsignal=BINNINGSIGNALS, assemblytype=ASSEMBLYTYPES, hostcode=HOSTCODES)
However, when the files created by this rule are required for subsequent rules like so:
rule backmap_samtools_sort:
    input:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.bam"
    output:
        "data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.sorted.bam"
    threads: 6
    resources:
        mem_mb=5000
    shell:
        "samtools sort -@ {threads} -m {resources.mem_mb}M -o {output} {input}"
rule allsorted:
    input:
        expand("data/assembly_{assemblytype}_binningsignals/{hostcode}/{binningsignal}.sorted.bam", binningsignal=BINNINGSIGNALS, assemblytype=ASSEMBLYTYPES, hostcode=HOSTCODES)
The workflow closes with this error:
WorkflowError in line 416 of /stor/azolla_metagenome/Azolla_genus_metagenome/Snakefile: Can only use unpack() on list and dict
To me, this error suggests that the input function for the former rule is faulty. That, however, seems not to be the case, for it ran successfully when no subsequent processing was queued.
The entire project is hosted on GitHub, where the full Snakefile and a related GitHub issue can be found.

What is the importance of "pre" in nodejs version v0.6.21-pre?

Some versions have "-pre" at the end of the version number and some do not.
What is the importance of "-pre"?
It means the binary was built from a development or "preview" commit.
From src/node_version.h:
#if NODE_VERSION_IS_RELEASE
# define NODE_VERSION_STRING NODE_STRINGIFY(NODE_MAJOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_MINOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_PATCH_VERSION)
#else
# define NODE_VERSION_STRING NODE_STRINGIFY(NODE_MAJOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_MINOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_PATCH_VERSION) "-pre"
#endif
The -pre is removed for releases (ex: 2012.08.03 Version 0.6.21 (maintenance)) and added again with the subsequent version bump (ex: now working on 0.6.22).
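As a self-contained sketch of how those macros expand (the stringify helpers below are written out by hand for illustration, not copied from the Node source), compiling with NODE_VERSION_IS_RELEASE set to 0 yields the -pre suffix:
#include <stdio.h>

/* hand-written stand-ins for Node's stringification helpers */
#define NODE_STRINGIFY(n) NODE_STRINGIFY_HELPER(n)
#define NODE_STRINGIFY_HELPER(n) #n

#define NODE_MAJOR_VERSION 0
#define NODE_MINOR_VERSION 6
#define NODE_PATCH_VERSION 21
#define NODE_VERSION_IS_RELEASE 0   /* a development build */

#if NODE_VERSION_IS_RELEASE
# define NODE_VERSION_STRING NODE_STRINGIFY(NODE_MAJOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_MINOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_PATCH_VERSION)
#else
# define NODE_VERSION_STRING NODE_STRINGIFY(NODE_MAJOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_MINOR_VERSION) "." \
                             NODE_STRINGIFY(NODE_PATCH_VERSION) "-pre"
#endif

int main(void) {
    printf("v%s\n", NODE_VERSION_STRING);  /* prints: v0.6.21-pre */
    return 0;
}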
