SPARQL BIND breaks VALUES? - switch-statement

I was using VALUES in many SPARQL queries recently, only to realize that in one of them I was not getting what I was expecting.
Here is a simple case:
@prefix ns: <http://some/ns> .
<http://some/uri> a ns:Document ;
ns:A5000 "00003381" ;
ns:A5080 "sredniowiecze" .
I am using VALUES to "translate" from ns:A5080 literals to URIs. The simple query:
PREFIX ns: <http://some/ns>
SELECT ?document ?u ?p ?lp
WHERE
{
?document ns:A5080 ?p .
VALUES (?p ?u) {
( "sredniowiecze" ns:MiddleAges )
( "other" ns:Other )
}
}
works as expected:
Document U P LP
<http://some/uri> <http://some/nsMiddleAges> "sredniowiecze"
But if I change it to:
SELECT ?document ?u ?p ?lp
WHERE
{
?document ns:A5080 ?p .
BIND ( LCASE(?p) AS ?lp )
VALUES (?lp ?u) {
( "sredniowiecze" ns:MiddleAges )
( "other" ns:Other )
}
}
I am getting:
Document U P LP
<http://some/uri> <http://some/nsMiddleAges> "sredniowiecze" "sredniowiecze"
<http://some/uri> <http://some/nsOther> "sredniowiecze" "sredniowiecze"
Which does not make any sense to me. Where does the extra tuple come from? In the real query I have ca. 30+ tuples in VALUES and they all land in the results.
What is more interesting, queries that look almost like this one work just fine everywhere else.
Ideas?

I think the problem is that you use both VALUES and BIND to define ?lp at the same time. I suspect no one thought about that combination, but in any case, since the VALUES block is set last it should take precedence in its BGP; however, because BIND closes a BGP, the result is going to be weird. I suspect that a query like the following, where you don't use BIND and VALUES on the same variable, will work better.
SELECT ?document ?u ?p ?lp
WHERE
{
?document ns:A5080 ?p .
VALUES (?lp ?u) {
( "sredniowiecze" ns:MiddleAges )
( "other" ns:Other )
}
FILTER (sameTerm(LCASE(?p), ?lp))
}
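If you would rather keep the BIND, another option (a minimal sketch, not from the original answer; ?key is simply a fresh variable introduced here for illustration) is to give the VALUES block its own variable, so that ?lp is only ever defined by the BIND:
SELECT ?document ?u ?p ?lp
WHERE
{
?document ns:A5080 ?p .
BIND ( LCASE(?p) AS ?lp )
VALUES (?key ?u) {
( "sredniowiecze" ns:MiddleAges )
( "other" ns:Other )
}
FILTER ( ?lp = ?key )
}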

Related

How to insert into a table with select and specify the insert columns in jOOQ?

I'm using jOOQ to generate SQL. Here is the resulting query:
insert into MY_TABLE -- I want INSERT INTO(firstField,secondField)
select
?,
?
where not exists (
select 1
from MY_TABLE
where (
firstField = ?
)
)
returning id
MY_TABLE DDL:
create table IF NOT EXISTS MY_TABLE
(
id SERIAL PRIMARY KEY,
firstField int not null,
secondField int not null
)
I can't make jOOQ add the field names next to insert into MY_TABLE.
My builder:
JooqBuilder.default()
.insertInto(table("MY_TABLE"))
.select(
select(
param(classOf[Int]), // 1
param(classOf[Int]), // 2
)
.whereNotExists(select(inline(1))
.from(table("MY_TABLE"))
.where(
DSL.noCondition()
.and(field("firstField", classOf[Long]).eq(0L))
)
)
).returning(field("id")).getSQL
I've tried
.insertInto(table("MY_TABLE"),field("firstField"), field("secondField"))
UPD:
I was confused by a compiler exception. The right solution is:
JooqBuilder.default()
.insertInto(table("MY_TABLE"),
field("firstField",classOf[Int]),
field("secondField",classOf[Int])
)
.select(
select(
param(classOf[Int]),
param(classOf[Int])
)
.whereNotExists(select(inline(1))
.from(table("MY_TABLE"))
.where(
DSL.noCondition()
.and(field("firstField", classOf[Long]).eq(0L))
)
)
).returning(field("id")).getSQL
The thing is that Jooq takes field types from insertInto and doesn't compile if select field types don't match.
I've tried
.insertInto(table("MY_TABLE"),
field("firstField"),
field("secondField")
)
and it didn't compile since the types didn't match with
.select(
select(
param(classOf[Int]), // 1
param(classOf[Int]) // 2
)
I've added types to the insertInto fields and got a match: two Ints in the insert, two Ints in the select.
jOOQ generated the expected query:
insert into MY_TABLE (firstField, secondField)
select
?,
?
where not exists (
select 1
from MY_TABLE
where (
firstField = ?
)
)
jOOQ just generates exactly the SQL you tell it to generate. You're not listing firstField, secondField in jOOQ, so jOOQ doesn't list them in SQL. To list them in jOOQ, just add:
// ...
.insertInto(table("MY_TABLE"), field("firstField", classOf[Long]), ...)
// ...
Obviously, even without using the code generator, you can reuse expressions by assigning them to local variables:
val t = table("MY_TABLE")
val f1 = field("firstField", classOf[Long])
val f2 = field("secondField", classOf[Long])
And then:
// ...
.insertInto(t, f1, f2)
// ...
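Put together, those reusable expressions could slot into the original builder roughly like this (a sketch only, assuming the same JooqBuilder helper from the question; the param types are kept in line with the declared field types, per the type-matching behaviour described above):
val t = table("MY_TABLE")
val f1 = field("firstField", classOf[Long])
val f2 = field("secondField", classOf[Long])

JooqBuilder.default()
  .insertInto(t, f1, f2)
  .select(
    select(
      param(classOf[Long]), // must match f1's declared type
      param(classOf[Long])  // must match f2's declared type
    )
    .whereNotExists(select(inline(1))
      .from(t)
      .where(f1.eq(0L))
    )
  )
  .returning(field("id"))
  .getSQL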
Using the code generator
Note that if you were using the code generator, which jOOQ recommends, your query would be much simpler:
ctx.insertInto(MY_TABLE, MY_TABLE.FIRST_FIELD, MY_TABLE.SECOND_FIELD)
.values(v1, v2)
.onDuplicateKeyIgnore()
.returningResult(MY_TABLE.ID)
.fetch();

cross apply an array of values recorded every 10 mins from a timestamp and generate their timestamps in stream analytics

I have the following stream analytics input:
{ "ID":"DEV-001-Test",
"TMSMUTC":"2021-10-14T14:00:00.000",
"MSGTYP":"TELEMETRY",
"THING":[
{
"TMSDUTC":"2021-10-14T13:00:00.000",
"DATA":[
{
"TAGID":"TAGB",
"VALUE":30
},
{
"TAGID":"TAGX",
"VALUE":[30.34,245.65,30.34,245.65,245.65,30.34]
}
]
}
]
}
in which the array of values for "TAGX" represents a value recorded by a sensor every 10 minutes, for one hour, starting from the timestamp "TMSDUTC":"2021-10-14T13:00:00.000".
I was wondering how I could make a query that would give me a similar output:
[expected output: one row per value, each with its own generated 10-minute timestamp]
My main doubts are how to create the sequence of 10-minute timestamps from the starting timestamp and cross apply the values to it.
That's a good one! Note that I highly recommend you use VSCode and the ASA extension when working on these queries. The developer experience is much nicer than in the portal thanks to local testing, and you can also unit test your query via the npm package.
I took the following assumptions:
THING is an array of a single record. Let me know if that's not the case
[edited] TMSDUTC needs to be incremented by 10 minutes according to the position of each item in the array when applicable (TAGX)
With that, here is the query. It's split in multiple code blocks to explain the flow, but I also pasted it whole in the last code block.
First we bring all the required fields to the first level. It makes things easier to read, but that's not the only reason: GetArrayElements needs an array to CROSS APPLY, but GetArrayElement (singular) doesn't return the type at compile time. Using an intermediary query step solves that.
WITH things AS (
SELECT
ID,
GetArrayElement(THING,0).TMSDUTC AS TMSDUTC,
MSGTYP AS MessageType,
GetArrayElement(THING,0).DATA AS DATA
FROM [input]
),
Then we expand DATA:
dataAll AS (
SELECT
T.ID,
T.TMSDUTC,
T.MessageType,
D.ArrayValue.TAGID AS Tag,
D.ArrayValue.Value AS [Value]
FROM things T
CROSS APPLY GetArrayElements(T.DATA) AS D
),
Then we create a subset for records that have a VALUE of type array (TAGX in your example). Here I avoid hard-coding per tag by detecting the type at runtime. These records will need another round of array processing in the following step.
dataArrays AS (
SELECT
A.ID,
A.TMSDUTC,
A.MessageType,
A.Tag,
A.[Value]
FROM dataAll A
WHERE GetType(A.[Value]) = 'array'
),
Now we can focus on expanding VALUE for those records. Note that we could not do that in a single pass (filter on arrays above and CROSS APPLY below), as GetArrayElements checks types before filtering is done.
[edited] To increment TMSDUTC, we use DATEADD on the index of each item in its array (ArrayIndex/ArrayValue are both returned from the array expansion, see doc below).
dataArraysExpanded AS (
SELECT
A.ID,
DATEADD(minute,10*V.ArrayIndex,A.TMSDUTC) AS TMSDUTC,
A.MessageType,
A.Tag,
V.ArrayValue AS [Value]
FROM dataArrays A
CROSS APPLY GetArrayElements(A.[Value]) AS V
),
We union back everything together:
newSchema AS (
SELECT ID, TMSDUTC, MessageType, Tag, [Value] FROM dataAll WHERE GetType([Value]) != 'array'
UNION
SELECT ID, TMSDUTC, MessageType, Tag, [Value] FROM dataArraysExpanded
)
And finally insert everything into the destination:
SELECT
*
INTO myOutput
FROM newSchema
[edited] Please note that the only order guaranteed on a result set is the one defined by the timestamp. If multiple records occur on the same timestamp, no order is guaranteed by default. Here, at the end of the query, all of the newly created events are still timestamped on the timestamp of the original event. If you now need to apply time logic on the newly generated TMSDUTC, you will need to output these records to Event Hub, and load them in another job using TIMESTAMP BY TMSDUTC. Currently the timestamp can only be changed directly at the very first step of a query.
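For example, a second job consuming those re-published events could then apply time logic on the generated timestamp like this (a sketch only; [eventHubInput] and [downstreamOutput] are hypothetical input/output aliases for that second job):
SELECT
ID,
Tag,
[Value]
INTO [downstreamOutput]
FROM [eventHubInput] TIMESTAMP BY TMSDUTC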
What is used here:
GetArrayElement (singular): doc
WITH aka Common Table Expression (CTE): doc
CROSS APPLY + GetArrayElements: doc and doc, plus very good ref
GetType: doc
The entire thing for easier copy/pasting:
WITH things AS (
SELECT
ID,
GetArrayElement(THING,0).TMSDUTC AS TMSDUTC,
MSGTYP AS MessageType,
GetArrayElement(THING,0).DATA AS DATA
FROM [input]
),
dataAll AS (
SELECT
T.ID,
T.TMSDUTC,
T.MessageType,
D.ArrayValue.TAGID AS Tag,
D.ArrayValue.Value AS [Value]
FROM things T
CROSS APPLY GetArrayElements(T.DATA) AS D
),
dataArrays AS (
SELECT
A.ID,
A.TMSDUTC,
A.MessageType,
A.Tag,
A.[Value]
FROM dataAll A
WHERE GetType(A.[Value]) = 'array'
),
dataArraysExpanded AS (
SELECT
A.ID,
DATEADD(minute,10*V.ArrayIndex,A.TMSDUTC) AS TMSDUTC,
A.MessageType,
A.Tag,
V.ArrayValue AS [Value]
FROM dataArrays A
CROSS APPLY GetArrayElements(A.[Value]) AS V
),
newSchema AS (
SELECT ID, TMSDUTC, MessageType, Tag, [Value] FROM dataAll WHERE GetType([Value]) != 'array'
UNION
SELECT ID, TMSDUTC, MessageType, Tag, [Value] FROM dataArraysExpanded
)
SELECT
*
INTO myOutput
FROM newSchema

SQL Server 2017 - Dynamically generate a string based on the number of columns in another string

I have the following table & data:
CREATE TABLE dbo.TableMapping
(
[GenericMappingKey] [nvarchar](256) NULL,
[GenericMappingValue] [nvarchar](256) NULL,
[TargetMappingKey] [nvarchar](256) NULL,
[TargetMappingValue] [nvarchar](256) NULL
)
INSERT INTO dbo.TableMapping
(
[GenericMappingKey]
,[GenericMappingValue]
,[TargetMappingKey]
,[TargetMappingValue]
)
VALUES
(
'Generic'
,'Col1Source|Col1Target;Col2Source|Col2Target;Col3Source|Col3Target;Col4Source|Col4Target;Col5Source|Col5Target;Col6Source|Col6Target'
,'Target'
,'Fruit|Apple;Car|Red;House|Bungalo;Gender|Female;Material|Brick;Solution|IT'
)
I would need to be able to automatically generate my GenericMappingValue string dynamically based on the number of column pairs in the TargetMappingValue column.
Currently, there are 6 column mapping pairs. However, if I only had two mapping column pairs in my TargetMapping such as the following...
'Fruit|Apple;Car|Red'
then I would like for the GenericMappingValue to be automatically generated (updated) such as the following since, as a consequence, I would only have 2 column pairs in my string...
'Col1Source|Col1Target;Col2Source|Col2Target'
I've started building the following query logic:
DECLARE @Mapping nvarchar(256)
SELECT @Mapping = [TargetMappingValue] from TableMapping
print @Mapping
SELECT count(*) ColumnPairCount
FROM String_split(@Mapping, ';')
The above query gives me a correct count of 6 for my column pairs.
How would I be able to continue my logic to achieve my automatically generated mapping string?
I think I understand what you are after. This should get you moving in the right direction.
Since you've tagged 2017 you can use STRING_AGG()
You'll want to split your TargetMappingValue using STRING_SPLIT() with ROW_NUMBER() in a sub-query. (NOTE: we aren't guaranteed order using STRING_SPLIT() with ROW_NUMBER() here, but it will work for this situation. There's an example below using OPENJSON if we need to ensure accurate order.)
Then you can use that ROW_NUMBER() as the column indicator/number in a CONCAT().
Then bring it all back together using STRING_AGG()
Have a look at this working example:
DECLARE @TableMapping TABLE
(
[GenericMappingKey] [NVARCHAR](256) NULL
, [GenericMappingValue] [NVARCHAR](256) NULL
, [TargetMappingKey] [NVARCHAR](256) NULL
, [TargetMappingValue] [NVARCHAR](256) NULL
);
INSERT INTO @TableMapping (
[GenericMappingKey]
, [GenericMappingValue]
, [TargetMappingKey]
, [TargetMappingValue]
)
VALUES ( 'Generic'
, 'Col1Source|Col1Target;Col2Source|Col2Target;Col3Source|Col3Target;Col4Source|Col4Target;Col5Source|Col5Target;Col6Source|Col6Target'
, 'Target'
, 'Fruit|Apple;Car|Red;House|Bungalo;Gender|Female;Material|Brick;Solution|IT' );
SELECT [col].[GenericMappingKey]
, STRING_AGG(CONCAT('Col', [col].[ColNumber], 'Source|Col', [col].[ColNumber], 'Target'), ';') AS [GeneratedGenericMappingValue]
, [col].[TargetMappingKey]
, [col].[TargetMappingValue]
FROM (
SELECT *
, ROW_NUMBER() OVER ( ORDER BY (
SELECT 1
)
) AS [ColNumber]
FROM @TableMapping
CROSS APPLY STRING_SPLIT([TargetMappingValue], ';')
) AS [col]
GROUP BY [col].[GenericMappingKey]
, [col].[TargetMappingKey]
, [col].[TargetMappingValue];
Here's an example of what an update would look like assuming your primary key is the GenericMappingKey column:
--This is what an update would look like
--Assuming your primary key is the [GenericMappingKey] column
UPDATE [upd]
SET [upd].[GenericMappingValue] = [g].[GeneratedGenericMappingValue]
FROM (
SELECT [col].[GenericMappingKey]
, STRING_AGG(CONCAT('Col', [col].[ColNumber], 'Source|Col', [col].[ColNumber], 'Target'), ';') AS [GeneratedGenericMappingValue]
, [col].[TargetMappingKey]
, [col].[TargetMappingValue]
FROM (
SELECT *
, ROW_NUMBER() OVER ( ORDER BY (
SELECT 1
)
) AS [ColNumber]
FROM @TableMapping
CROSS APPLY [STRING_SPLIT]([TargetMappingValue], ';')
) AS [col]
GROUP BY [col].[GenericMappingKey]
, [col].[TargetMappingKey]
, [col].[TargetMappingValue]
) AS [g]
INNER JOIN @TableMapping [upd]
ON [upd].[GenericMappingKey] = [g].[GenericMappingKey];
Shnugo brings up a great point in the comments, in that we are not guaranteed sort order with STRING_SPLIT() and ROW_NUMBER(). In this particular situation it wouldn't matter, as the output mapping is generic. But if you needed to use elements from your "TargetMappingValue" column in the final "GenericMappingValue", then you would need to make sure the sort order was accurate.
Here's an example showing how to use OPENJSON() and its "key", which would guarantee that order, using Shnugo's example:
SELECT [col].[GenericMappingKey]
, STRING_AGG(CONCAT('Col', [col].[colNumber], 'Source|Col', [col].[colNumber], 'Target'), ';') AS [GeneratedGenericMappingValue]
, [col].[TargetMappingKey]
, [col].[TargetMappingValue]
FROM (
SELECT [tm].*
, [oj].[Key] + 1 AS [colNumber] --Use the key as our order/column number, adding 1 as it is zero based.
, [oj].[Value] -- and if needed we can bring the split value out.
FROM #TableMapping [tm]
CROSS APPLY OPENJSON('["' + REPLACE([tm].[TargetMappingValue], ';', '","') + '"]') [oj] --Basically turn the column value into JSON string.
) AS [col]
GROUP BY [col].[GenericMappingKey]
, [col].[TargetMappingKey]
, [col].[TargetMappingValue];
If the data is already in the table and you want to break it out into columns, this should work:
select
v.value
,left(v.value, charindex('|',v.value) -1) col1
,reverse(left(reverse(v.value), charindex('|',reverse(v.value)) -1)) col2
from String_split(@Mapping,';') v
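If you would rather run that same split straight off the table instead of the @Mapping variable, a CROSS APPLY variant could look like this (a sketch against the dbo.TableMapping table from the question, reusing the same charindex/reverse expressions):
select
t.GenericMappingKey
,v.value
,left(v.value, charindex('|',v.value) -1) col1
,reverse(left(reverse(v.value), charindex('|',reverse(v.value)) -1)) col2
from dbo.TableMapping t
cross apply string_split(t.TargetMappingValue,';') v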

How to use subquery in jena for pagination?

I want to use Jena for pagination. I use this query:
select distinct (?outEdge) (?inEdge) (?dest) (?source) { select distinct (?p as ?outEdge) (?q as ?inEdge) (?px as ?dest) (?qx as ?source) { { <http://dbpedia.org/resource/Japan> ?p ?px . } union { ?qx ?q <http://dbpedia.org/resource/Japan> . } } order by ?p } offset 0 limit 10000
However, this query works on the online DBpedia endpoint (Virtuoso), but in Jena this error occurs:
com.hp.hpl.jena.query.QueryParseException: Encountered " ")" ") "" at line 1, column 585.
SELECT (?x) ... isn't legal SPARQL 1.1. Try without the ()
The form is (expression AS variable)
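Applied to the query in the question, that means dropping the parentheses around the bare variables in the outer SELECT. A sketch of the corrected query, reformatted across several lines for readability:
SELECT DISTINCT ?outEdge ?inEdge ?dest ?source
{
  SELECT DISTINCT (?p AS ?outEdge) (?q AS ?inEdge) (?px AS ?dest) (?qx AS ?source)
  {
    { <http://dbpedia.org/resource/Japan> ?p ?px . }
    UNION
    { ?qx ?q <http://dbpedia.org/resource/Japan> . }
  }
  ORDER BY ?p
}
OFFSET 0 LIMIT 10000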
Jena accepts it as an extension using SyntaxARQ.
The syntax error would be at line 1 col 26. "column 585" makes no sense. See http://www.sparql.org/query-validator.html
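Alternatively, if you want Jena to accept the original (?x) projection as an ARQ extension, the query can be parsed with the ARQ syntax explicitly. A sketch using the pre-3.x com.hp.hpl.jena API from the question (queryString is assumed to hold the query text):
import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.Syntax;

// Parse with ARQ's extended syntax instead of strict SPARQL 1.1
Query query = QueryFactory.create(queryString, Syntax.syntaxARQ);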

Oracle spatial data operator - SDO_nn - Not getting any results for sdo_num_res = 1

I am using SDO_NN operator to find the nearest hydrant next to a building.
Building:
CREATE TABLE "BUILDINGS"
(
"NAME" VARCHAR2(40),
"SHAPE" "SDO_GEOMETRY")
Hydrant:
CREATE TABLE "HYDRANTS"
( "NAME" VARCHAR2(10),
"POINT" "SDO_POINT_TYPE"
);
I have set up spatial indexes properly for buildings.shape, and I run the query to get the nearest hydrant to the building 'Motel':
select b1.name as name, h.point.x as x, h.point.y as y from buildings b1, hydrants h where b1.name ='Motel' and
SDO_nn( b1.shape, MDSYS.SDO_GEOMETRY(2003,NULL, NULL,SDO_ELEM_INFO_ARRAY(1,1003,1),
SDO_ORDINATE_ARRAY( h.point.x,h.point.y)), 'sdo_num_res=1')= 'TRUE';
Here's the problem:
When I set the parameter sdo_num_res=1, I get zero tuples.
And when I make sdo_num_res=2, I get one tuple.
What is the reason for this weird behavior?
Note: I am getting zero rows only when building.name= 'Motel', for all other tuples I am getting 1 row when sdo_num_res = 1
Edit:
Insert queries
Insert into buildings (NAME,SHAPE) values ('Motel',MDSYS.SDO_GEOMETRY(2003,NULL,NULL,MDSYS.SDO_ELEM_INFO_ARRAY(1,1003,1),MDSYS.SDO_ORDINATE_ARRAY(564,425,585,436,573,458,552,447)));
Insert into hydrants (name,POINT) values ('p57',MDSYS.SDO_POINT_TYPE(589,448,0));
To perform spatial comparisons between a point and a polygon, the point should be defined as an SDO_GEOMETRY with SDO_GTYPE 2001 and its SDO_POINT attribute set to the SDO_POINT_TYPE we want to compare:
MDSYS.SDO_GEOMETRY(2001, NULL, SDO_POINT_TYPE(-79, 37, NULL), NULL, NULL)
First of all, your query does not do what you say it does: it actually returns the nearest building called "Motel" from any of your hydrants. To do what you want (i.e. the opposite) you need to reverse the order of the arguments to SDO_NN: all spatial operators search the first argument, using the value of the second argument.
Then the insert into your HYDRANTS table is wrong:
Insert into hydrants (name,POINT) values ('p57',MDSYS.SDO_POINT_TYPE(589,448,0));
The SDO_POINT_TYPE object is not designed to be used that way: it is only used inside the SDO_GEOMETRY type. The proper way is this:
insert into hydrants (name,POINT) values ('p57',sdo_geometry(2001, null, SDO_POINT_TYPE(589,448,null), null, null));
And of course you need to change your table definition accordingly.
Then your building is also incorrectly created: a polygon must always close, i.e. the last point must be the same as the first point. So the proper shape should be like this:
insert into buildings (NAME,SHAPE) values ('Motel', SDO_GEOMETRY(2003,NULL,NULL,SDO_ELEM_INFO_ARRAY(1,1003,1),SDO_ORDINATE_ARRAY(564,425,585,436,573,458,552,447,564,425)));
Here is the full example:
Create the tables:
create table buildings (
name varchar2(40) primary key,
shape sdo_geometry
);
create table hydrants(
name varchar2(10) primary key,
point sdo_geometry
);
Populate the tables:
insert into buildings (NAME,SHAPE) values ('Motel', SDO_GEOMETRY(2003,NULL,NULL,SDO_ELEM_INFO_ARRAY(1,1003,1),SDO_ORDINATE_ARRAY(564,425,585,436,573,458,552,447,564,425)));
insert into hydrants (name,POINT) values ('p57',sdo_geometry(2001, null, SDO_POINT_TYPE(589,448,null), null, null));
commit;
Confirm that the geometries are all correct:
select name, sdo_geom.validate_geometry_with_context (point, 0.05) from hydrants;
select name, sdo_geom.validate_geometry_with_context (shape, 0.05) from buildings;
Setup spatial metadata and create spatial indexes:
insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
values (
'BUILDINGS',
'SHAPE',
sdo_dim_array (
sdo_dim_element ('X', 0,1000,0.05),
sdo_dim_element ('Y', 0,1000,0.05)
),
null
);
commit;
create index buildings_sx on buildings (shape)
indextype is mdsys.spatial_index;
insert into user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
values (
'HYDRANTS',
'POINT',
sdo_dim_array (
sdo_dim_element ('X', 0,1000,0.05),
sdo_dim_element ('Y', 0,1000,0.05)
),
null
);
commit;
create index hydrants_sx on hydrants (point)
indextype is mdsys.spatial_index;
Now try the properly written query:
select h.name, h.point.sdo_point.x as x, h.point.sdo_point.y as y
from buildings b, hydrants h
where b.name ='Motel'
and sdo_nn(h.point, b.shape, 'sdo_num_res=1')= 'TRUE';
which returns:
NAME X Y
---------------- ---------- ----------
p57 589 448
1 row selected.
