ddins verb in J ODBC library mangling SQL table name?

I am having trouble doing a bulk insert using ddins from the J ODBC library. The error message is "Invalid object name", and the object name included in the message is indeed invalid, but it is also not what I typed: all the underscores and periods have been removed. Note that other SQL operations, including INSERT, work fine with this same table. Does anyone know what is going on? The database I am connected to is SQL Server, my J version is 802, and my operating system is Windows 7 Professional (64-bit).
Any help appreciated. -Michael
Some sample output for context:
dddbms ch
┌────┬────────┬─────────┬──────────┬─────┬──────────┬─────────────┬──────────┬─┬─┬───┐
│ODBC│t4bwhsql│US\mberry│T4BSQL01AD│MSSQL│12.00.5000│sqlncli11.dll│11.00.2100│3│1│256│
└────┴────────┴─────────┴──────────┴─────┴──────────┴─────────────┴──────────┴─┴─┴───┘
query
select sales_region, country, historic_save_rate, target_save_rate from t4b.sales.r_bl_inside_sales_ccf_country_target
sh=: query ddsel ch
sh
6260928
ddfet sh
┌────────┬─────────┬───┬────┐
│Americas│Argentina│0.8│0.83│
└────────┴─────────┴───┴────┘
ddend sh
0
'truncate table t4b.sales.r_bl_inside_sales_ccf_country_target' ddsql ch
0
'insert into t4b.sales.r_bl_inside_sales_ccf_country_target(sales_region, country, historic_save_rate, target_save_rate) values(''Americas'', ''Argentina'', 0.8, 0.83)' ddsql ch
0
sh=: query ddsel ch
sh
71014208
ddfet sh
┌────────┬─────────┬───┬────┐
│Americas│Argentina│0.8│0.83│
└────────┴─────────┴───┴────┘
ddend sh
0
data
┌────────┬─────────────┬────────┬────┐
│Americas│Argentina │0.820513│0.83│
│Americas│Bolivia │0.923077│ 0.9│
│Americas│Brazil │0.909091│ 0.9│
│Americas│Canada │0.795918│0.81│
│Americas│Chile │ 0.85│0.86│
│Americas│Colombia │0.904762│ 0.9│
│Americas│Costa Rica │0.805556│0.82│
│Americas│Ecuador │0.888889│ 0.9│
│Americas│Mexico │0.840909│0.85│
│Americas│other │ 0.89│ 0.9│
│Americas│Peru │0.666667│0.68│
│Americas│United States│0.837709│0.85│
└────────┴─────────────┴────────┴────┘
(query;data) ddins ch
_1
dderr ''
42S02 208 [Microsoft][SQL Server Native Client 11.0][SQL Server]Invalid object name 't4bsalesrblinsidesalesccfcountrytarget'. - more error info available (1)

For the record, I received an answer to this question through the J forum: the J ODBC library did not support the SQL Server Native Client driver. That has now been fixed.

Related

Why do I keep receiving a "requests.exceptions.InvalidSchema: No connection adapters were found for '0" error?

I'm trying to create a script that returns domain and backlink numbers for each URL held in a dataframe, using the SEMrush API.
The dataframe containing the URLs has some of the following information:
0
0 www.ig.com/jp/trading-strategies/swing-trading...
1 www.ig.com/it/news-e-idee-di-trading/criptoval...
2 www.ig.com/uk/news-and-trade-ideas/the-omicron...
[1468 rows x 1 columns]
When I run my script I get the following error:
requests.exceptions.InvalidSchema: No connection adapters were found for '0 https://api.semrush.com/analytics/v1/?key=1f0e...\nName: 0, dtype: object'
Here is the part of the code that generates the error:
for index, url in gsdf.iterrows():
    rr = requests.request("GET","https://api.semrush.com/analytics/v1/?key="+API_KEY+"&type=backlinks_tld&target="+url+"&target_type=url&export_columns=domains_num,backlinks_num&display_limit=1",headers=headers, data = payload)
    data=json.loads(rr.text.encode('utf8'))
    srdf=srdf.append({domains_num:data, backlinks_num:data}, ignore_index=True)
I'm not sure why this happens as I'm new to Python. Can you help?
Kind thanks
Mark
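The error message itself contains the clue: '0 https://api.semrush.com/analytics/v1/?key=1f0e...\nName: 0, dtype: object' is the printed form of a pandas Series, not a URL string. iterrows() yields (index, row) pairs in which the second element is a whole Series, so "...&target="+url produces another Series, and requests has no connection adapter for its string representation. A minimal sketch of a fix, assuming the URLs live in the column labelled 0 (as the dataframe printout suggests) and that gsdf, API_KEY, headers and payload are defined as in the question:

import requests

# Iterate the column's string values directly, so `url` is a str,
# not the (index, Series) pair produced by iterrows().
for url in gsdf[0]:
    rr = requests.request(
        "GET",
        "https://api.semrush.com/analytics/v1/?key=" + API_KEY
        + "&type=backlinks_tld&target=" + url
        + "&target_type=url&export_columns=domains_num,backlinks_num&display_limit=1",
        headers=headers,
        data=payload,
    )
    # ...then parse rr.text and append to srdf as before

Alternatively, keep iterrows() and take the string out of the Series first, e.g. url[0], before building the request URL.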

Escaping a space in a properties file for WildFly with Oracle DB

I am having a hard time using an environment variable containing a space in a properties file read by WildFly (24) on Linux, with Oracle 19 in RDS. The value is one like:
SELECT 1 FROM DUAL
The issue is that WildFly won't even parse the file if the spaces are in there with the normal quoting methods.
I have it set up so that the variable is in a file called datasource.properties that gets read from standalone.conf, where this variable sits:
JAVA_OPTS="$JAVA_OPTS -DDATABASE_CONNECTION_CHECK=${DATABASE_CONNECTION_CHECK}"
It's read in with the following in standalone.conf:
set -a
. /opt/wildfly_config/datasource.properties
set +a
That in turn gets populated in standalone.xml with:
<connection-url>${env.DATABASE_JDBC_URL}</connection-url>
I tried putting it in quotes, and oddly enough WildFly doesn't start at all; standalone.sh is no longer able to parse it:
Error: Could not find or load main class 1 Caused by: java.lang.ClassNotFoundException: 1
I have tried many things such as:
DATABASE_CONNECTION_CHECK="SELECT{ }1{ }FROM{ }DUAL"
DATABASE_CONNECTION_CHECK="'SELECT 1 FROM DUAL'"
DATABASE_CONNECTION_CHECK='SELECT 1 FROM DUAL'
DATABASE_CONNECTION_CHECK="SELECT+1+FROM+DUAL"
DATABASE_CONNECTION_CHECK="SELECT\ 1\ FROM\ DUAL"
DATABASE_CONNECTION_CHECK="\"SELECT 1 FROM DUAL\""
DATABASE_CONNECTION_CHECK="\"'SELECT 1 FROM DUAL'\""
DATABASE_CONNECTION_CHECK="SELECT%201%20FROM%20DUAL"
DATABASE_CONNECTION_CHECK="SELECT\{ }1\{ }FROM\{ }DUAL"
DATABASE_CONNECTION_CHECK='SELECT{ }1{ }FROM{ }DUAL'
DATABASE_CONNECTION_CHECK="'SELECT{ }1{ }FROM{ }DUAL'"
DATABASE_CONNECTION_CHECK="''SELECT{ }1{ }FROM{ }DUAL''"
DATABASE_CONNECTION_CHECK="SELECT%1%FROM%DUAL"
(I realize some of these don't make sense but I was looking for anything different.)
Startup looks good in the log output with some of these, but then Java doesn't like the value; for some reason the escape characters survive into the SQL:
Caused by: Error : 936, Position : 9, Sql = SELECT+1+FROM+DUAL, OriginalSql = SELECT+1+FROM+DUAL, Error Msg = ORA-00936: missing expression
or
Caused by: Error : 911, Position : 6, Sql = SELECT%1%FROM%DUAL, OriginalSql = SELECT%1%FROM%DUAL, Error Msg = ORA-00911: invalid character
or
WARN [org.jboss.jca.adapters.jdbc.local.LocalManagedConnectionFactory] (ServerService Thread Pool -- 46) IJ030027: Destroying connection that is not valid, due to the following exception: oracle.jdbc.driver.T4CConnection#2c1456f8: java.sql.SQLException: Non supported SQL92 token at position: 7
This last one is the only one that really netted anything different. I got that with:
DATABASE_CONNECTION_CHECK="SELECT{}1{}FROM{}DUAL"
I can use sed to change the value in standalone.xml, but all of the other properties I am setting this way work fine, with the exception of this one. I had a hard time with a semicolon in the JDBC string with MSSQL, and putting the semicolon in braces like "{;}" fixed that. This DB apparently does not follow the same syntax.
Is there an encoding type that will help this with Oracle and keeps wildfly happy?
EDIT: More tests:
DATABASE_CONNECTION_CHECK=\"SELECT' '1' 'FROM' 'DUAL\"
gets
Caused by: Error : 900, Position : 0, Sql = "SELECT 1 FROM DUAL", OriginalSql = "SELECT 1 FROM DUAL", Error Msg = ORA-00900: invalid SQL statement
(doesn't seem to like the quotes)
But without the escaping of the quotes I get:
Caused by: Error : 923, Position : 9, Sql = SELECT' '1' 'FROM' 'DUAL, OriginalSql = SELECT' '1' 'FROM' 'DUAL, Error Msg = ORA-00923: FROM keyword not found where expected
A better solution was to change the sourcing of the file from:
set -a
. /opt/PrimeKey/wildfly_config/datasource.properties
set +a
to
. /opt/PrimeKey/wildfly_config/datasource.properties
and to export each variable explicitly in the file, so that everything brought in is a real environment variable rather than a bare property:
export DATABASE_CONNECTION_CHECK="SELECT 1 FROM DUAL"
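Putting it together, a sketch of the final working setup (paths and variable name as in the question):

# standalone.conf: source the properties file directly, without set -a/+a
. /opt/PrimeKey/wildfly_config/datasource.properties

# /opt/PrimeKey/wildfly_config/datasource.properties: export real
# environment variables instead of bare key=value properties
export DATABASE_CONNECTION_CHECK="SELECT 1 FROM DUAL"

With the value exported as a genuine environment variable, WildFly can resolve it (for example via ${env.…} in standalone.xml) without any escaping of the space characters.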

Nextflow with Azure Batch - Cannot find a matching VM image

While trying to set up Nextflow with Azure Batch (NF-Core), I am getting the following error. I tried this on multiple workflows (sarek, atacseq, etc.) and I get the same error -
N E X T F L O W ~ version 22.04.0
Pulling nf-core/atacseq ...
downloaded from https://github.com/nf-core/atacseq.git
Launching `https://github.com/nf-core/atacseq` [rhl6d5529] DSL1 - revision: 1b3a832db5 [1.2.1]
Downloading plugin nf-azure#0.13.1
----------------------------------------------------
,--./,-.
___ __ __ __ ___ /,-._.--~'
|\ | |__ __ / ` / \ |__) |__ } {
| \| | \__, \__/ | \ |___ \`-._,-`-,
`._,._,'
nf-core/atacseq v1.2.1
----------------------------------------------------
Run Name : rhl6d5529
Data Type : Paired-End
Design File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/design.csv
Genome : Not supplied
Fasta File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genome.fa
GTF File : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genes.gtf
Mitochondrial Contig : MT
MACS2 Genome Size : 1.2E+7
Min Consensus Reps : 1
MACS2 Narrow Peaks : No
MACS2 Broad Cutoff : 0.1
Trim R1 : 0 bp
Trim R2 : 0 bp
Trim 3' R1 : 0 bp
Trim 3' R2 : 0 bp
NextSeq Trim : 0 bp
Fingerprint Bins : 100
Save Genome Index : No
Max Resources : 6 GB memory, 2 cpus, 12h time per job
Container : docker - nfcore/atacseq:1.2.1
Output Dir : ./results
Launch Dir : /
Working Dir : /nextflow/atacseq/rhl6d5529
Script Dir : /.nextflow/assets/nf-core/atacseq
User : root
Config Profile : test,azurebatch
Config Description : Minimal test dataset to check pipeline function
Config Contact : Venkat Malladi (@vsmalladi)
Config URL : https://azure.microsoft.com/services/batch/
----------------------------------------------------
Uploading local `bin` scripts folder to az://nextflow/atacseq/rhl6d5529/tmp/66/bd55d79e42999df38ba04a81c3aa04/bin
[- ] process > CHECK_DESIGN -
[- ] process > CHECK_DESIGN [ 0%] 0 of 1
[- ] process > CHECK_DESIGN [ 0%] 0 of 1
Error executing process > 'CHECK_DESIGN (design.csv)'
Caused by:
Cannot find a matching VM image with publisher=microsoft-azure-batch; offer=centos-container; OS type=linux; verification type=verified
[58/55b7f7] process > CHECK_DESIGN (design.csv) [100%] 1 of 1, failed: 1
Error executing process > 'CHECK_DESIGN (design.csv)'
Caused by:
Cannot find a matching VM image with publisher=microsoft-azure-batch; offer=centos-container; OS type=linux; verification type=verified
I tried looking into the source code of nextflow. I found the error to be in AzBatchService.groovy (line number below).
https://github.com/nextflow-io/nextflow/blob/0e593e6ab82880810d8139a4fe6e3c47ff69a531/plugins/nf-azure/src/main/nextflow/cloud/azure/batch/AzBatchService.groovy#L442
I did some further digging in my Azure Batch account instance. Basically, I wanted to confirm whether the list of supported images returned by the Azure Batch account includes the one this pipeline requires. I could confirm that the server did indeed respond with the required image.
What could be the issue here? I remember running the exact same pipeline a few weeks back and it did work a few times. Am I missing something?
Just had another look through the Azure Cloud docs and think this might be relevant:
By default, Nextflow creates CentOS 8-based pool nodes, but this behavior can be customised in the pool configuration. Below are the configurations for image reference/SKU combinations to select two popular systems.
Ubuntu 20.04:
sku = "batch.node.ubuntu 20.04"
offer = "ubuntu-server-container"
publisher = "microsoft-azure-batch"
CentOS 8 (default):
sku = "batch.node.centos 8"
offer = "centos-container"
publisher = "microsoft-azure-batch"
I think the issue here is a mismatched nodeAgentSkuId. Nextflow is expecting a CentOS 8 node agent SKU, but you have a CentOS 7 SKU. If it's not possible to change the nodeAgentSkuId itself, the node agent SKU that Nextflow uses can be overridden by adding this to your nextflow.config:
azure.batch.pools.<name>.sku = 'batch.node.centos 7'
Where <name> is the pool identifier:
azure.batch.pools.<name>.sku
Specify the ID of the Compute Node agent SKU which the pool identified with <name> supports (default: batch.node.centos 8, requires nf-azure#0.11.0).
https://www.nextflow.io/docs/edge/azure.html#advanced-settings
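Putting those pieces together, a minimal sketch of the override in nextflow.config; the pool name "auto" is a placeholder for your own pool identifier, and pairing the CentOS 7 SKU with the centos-container offer/publisher quoted above is my assumption:

// nextflow.config -- "auto" is a placeholder pool name; use your own pool identifier
azure.batch.pools.auto.sku = 'batch.node.centos 7'
azure.batch.pools.auto.offer = 'centos-container'
azure.batch.pools.auto.publisher = 'microsoft-azure-batch'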

Puzzling "info" message regarding package body requirement using Ada?

I am experiencing a peculiar "info" message from GNAT 7.4.0 (running on an "Ubuntu 19.04" system) while in the early stages of developing a QR-code generator.
I'm using some fairly aggressive compilation switches:
gnatmake -gnata -gnateE -gnateF -gnatf -gnato -gnatv -gnatVa -gnaty -gnatwe -gnatw.e main.adb
My code does build without errors, but this info message suggests that I'm not providing a body for the package "qr_symbol".
qr_symbol.ads
with QR_Versions; use QR_Versions;
generic
   Ver : QR_Version;
package QR_Symbol is
   procedure Export_As_SVG;
private
   type Module_State is (
      Uncommitted,
      One,
      Zero
   );
   type Module_Family is (
      Uncommitted,
      Finder,
      Separator,
      Alignment,
      Timing,
      Format_Spec,
      Version_Spec,
      Data_Codeword,
      EC_Codeword,
      Padding
   );
   type Module is
      record
         State : Module_State := Uncommitted;
         Family : Module_Family := Uncommitted;
      end record;
   type Module_Matrix is array (
      Positive range <>,
      Positive range <>
   ) of Module;
end QR_Symbol;
qr_symbol.adb
with Ada.Text_IO; use Ada.Text_IO;
package body QR_Symbol is
   Version : constant QR_Version := Ver; -- Ver is a formal generic parameter
   Side_Length : constant Positive := 17 + (Positive (Ver) * 4);
   Matrix : Module_Matrix (1 .. Side_Length, 1 .. Side_Length);
   procedure Export_As_SVG is
   begin
      Put_Line ("in Export_As_SVG()...");
      Put_Line ("  Version: " & Version'Image);
      Put_Line ("  Side_Length: " & Side_Length'Image);
      --  Matrix (1, 1).State := One;
      Put_Line ("  Matrix (1, 1).State: " & Matrix (1, 1).State'Image);
   end Export_As_SVG;
end QR_Symbol;
And here's the info output that I do not understand...
GNAT 7.4.0
Copyright 1992-2017, Free Software Foundation, Inc.
Compiling: qr_symbol.adb
Source file time stamp: 2019-12-07 16:29:37
Compiled at: 2019-12-07 16:29:38
==============Error messages for source file: qr_symbol.ads
9. procedure Export_As_SVG;
|
>>> info: "QR_Symbol" requires body ("Export_As_SVG" requires completion)
29 lines: No errors, 1 info message
aarch64-linux-gnu-gnatbind-7 -x main.ali
aarch64-linux-gnu-gnatlink-7 main.ali
Program output (given correct input, it gives correct output)...
$ ./main '' V1
QR Version requested: V 1
in Export_As_SVG()...
Version: 1
Side_Length: 21
Matrix (1, 1).State: UNCOMMITTED
QUESTION:
Why is there an info message suggesting that I need to provide a body for this package when it is clear that I have already done so?
An info message is not used to suggest that you should change your program, only to provide some (useful or not) information. In your case, the information is true: if the requirement it states were not fulfilled, it would become an error.
You may want to check whether this switch is what triggers the message. According to the GNAT User's Guide:
-gnatw.e
`Activate every optional warning.'
This switch activates all optional warnings, including those which are not activated by -gnatwa. The use of this switch is not recommended for normal use. If you turn this switch on, it is almost certain that you will get large numbers of useless warnings. The warnings that are excluded from -gnatwa are typically highly specialized warnings that are suitable for use only in code that has been specifically designed according to specialized coding rules.
And if you don't want to remove that switch, at least you can disable this specific info message:
-gnatw.Y
`Disable information messages for why package spec needs body.'
This switch suppresses the output of information messages showing why a package specification needs a body.
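For instance, keeping the compilation command from the question unchanged and only appending the suppression switch (a sketch):

gnatmake -gnata -gnateE -gnateF -gnatf -gnato -gnatv -gnatVa -gnaty -gnatwe -gnatw.e -gnatw.Y main.adb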

How do I insert a row with a TimeUUIDType column in Cassandra?

In Cassandra, I have the following Column Family:
<ColumnFamily CompareWith="TimeUUIDType" Name="Posts"/>
I'm trying to insert a record into it as follows, using a C++ function generated by Thrift:
ColumnPath new_col;
new_col.__isset.column = true; /* this is required! */
new_col.column_family.assign("Posts");
new_col.super_column.assign("");
new_col.column.assign("1968ec4a-2a73-11df-9aca-00012e27a270");
client.insert("Keyspace1", "somekey", new_col, "Random Value", 1234, ONE);
However, I'm getting the following error: "UUIDs must be exactly 16 bytes"
I've even tried the Cassandra CLI with the following command:
set Keyspace1.Posts['somekey']['1968ec4a-2a73-11df-9aca-00012e27a270'] = 'Random Value'
but I still get the following error:
Exception null
InvalidRequestException(why:UUIDs must be exactly 16 bytes)
at org.apache.cassandra.thrift.Cassandra$insert_result.read(Cassandra.java:11994)
at org.apache.cassandra.thrift.Cassandra$Client.recv_insert(Cassandra.java:659)
at org.apache.cassandra.thrift.Cassandra$Client.insert(Cassandra.java:632)
at org.apache.cassandra.cli.CliClient.executeSet(CliClient.java:420)
at org.apache.cassandra.cli.CliClient.executeCLIStmt(CliClient.java:80)
at org.apache.cassandra.cli.CliMain.processCLIStmt(CliMain.java:132)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:173)
Thrift is a binary protocol; 16 bytes means 16 bytes. "1968ec4a-2a73-11df-9aca-00012e27a270" is 36 bytes. You need to get your library to give you the raw 16-byte form.
I don't use C++ myself, but "version 1 uuid" is the magic string you want to google for when looking for a library that can do this. http://www.google.com/search?q=C%2B%2B+version+1+uuid
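To see the difference between the two forms, here is a small sketch in Python rather than C++ (only because its standard library makes the point in a few lines; the UUID is the one from the question):

import uuid

u = uuid.UUID("1968ec4a-2a73-11df-9aca-00012e27a270")

raw = u.bytes    # the raw 16-byte form that Cassandra expects
text = str(u)    # the 36-character textual form from the question

print(len(raw))   # 16
print(len(text))  # 36

A C++ version 1 UUID library should expose the same raw byte array; that raw form is what you pass over Thrift, not the hyphenated string.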
