Get DNS suffix with Node.js

With Node.js, the "dns" library doesn't use the DNS suffix to resolve a hostname.
await dns.promises.resolve('foo.ad.toto.fr')
>>> [ '10.1.40.205' ]
await dns.promises.resolve('foo')
>>> Error: queryA ENOTFOUND foo
ipconfig /all
Windows IP Configuration
Host Name . . . . . . . . . . . . : arc-pc-y399539
Primary Dns Suffix  . . . . . . . : ad.toto.fr
Node Type . . . . . . . . . . . . : Hybrid
IP Routing Enabled. . . . . . . . : No
WINS Proxy Enabled. . . . . . . . : No
DNS Suffix Search List. . . . . . : ad.toto.fr
So I need to get the "DNS Suffix Search List", but I don't know how...
And I can't parse the output of the "ipconfig" command, since the code must be compatible with both Windows and Linux.

I found the "systeminformation" package, which can get the primary DNS suffix, but not the "DNS Suffix Search List".
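Two notes that might help, neither of which comes from the question itself. First, dns.promises.lookup() goes through the operating system resolver (getaddrinfo), so the configured suffix search list is normally applied for you, unlike dns.resolve(), which queries the DNS servers directly. Second, the list itself can be read per platform. The sketch below is only an illustration: it assumes a "search" line in /etc/resolv.conf on Linux and the Get-DnsClientGlobalSetting PowerShell cmdlet on Windows, and error handling is omitted.
const dns = require('dns');
const fs = require('fs');
const { execFileSync } = require('child_process');

// dns.lookup() uses the OS resolver, so short names like 'foo' may resolve
// via the suffix search list; dns.resolve() bypasses it.
async function lookupShortName(name) {
  return dns.promises.lookup(name); // e.g. await lookupShortName('foo')
}

// Rough, platform-specific sketch for reading the DNS suffix search list.
function getDnsSuffixes() {
  if (process.platform === 'win32') {
    // Assumes the DnsClient PowerShell module is available.
    const out = execFileSync('powershell.exe', [
      '-NoProfile', '-Command',
      '(Get-DnsClientGlobalSetting).SuffixSearchList -join ","',
    ], { encoding: 'utf8' }).trim();
    return out ? out.split(',') : [];
  }
  // Assumes a "search" (or "domain") line in /etc/resolv.conf.
  const conf = fs.readFileSync('/etc/resolv.conf', 'utf8');
  const line = conf.split('\n').find((l) => /^\s*(search|domain)\s+/.test(l));
  return line ? line.trim().split(/\s+/).slice(1) : [];
}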

How to find the line number with maximum context length with shell script?

I have a file with each line containing a string.
I need to find the largest-N lines.
I am able to find the largest line lengths. For example, the largest-1 could be found by:
cat ./test.txt | awk '{ print length }' | sort -n | tail -1
Or largest-10 could be found by:
cat ./test.txt | awk '{ print length }' | sort -n | tail -10
But I also need to get the line number for those lines, side by side, using a shell script.
Any help is appreciated.
Some lines from the input file:
testTweeVaderOverLedenInNLPeriodeGeboorte ( ) { java . util . List < nl . bzk . brp . model . objecttype . operationeel . BetrokkenheidModel > echtgenoten = java . util . Arrays . asList ( maakBetrokkenheden ( 20110701 , 20120101 , ( ( short ) ( 1 ) ) ) , maakBetrokkenheden ( 20120201 , 20120504 , ( ( short ) ( 1 ) ) ) ) ; org . mockito . Mockito . when ( relatieRepository . haalOpBetrokkenhedenVanPersoon ( org . mockito . Matchers . any ( nl . bzk . brp . model . objecttype . operationeel . PersoonModel . class ) , org . mockito . Matchers . any ( nl . bzk . brp . dataaccess . selectie . RelatieSelectieFilter . class ) ) ) . thenReturn ( echtgenoten ) ; org . mockito . Mockito . when ( persoonRepository . haalPersoonOpMetAdresViaBetrokkenheid ( echtgenoten . get ( 0 ) ) ) . thenReturn ( echtgenoten . get ( 0 ) . getBetrokkene ( ) ) ; org . mockito . Mockito . when ( persoonRepository . haalPersoonOpMetAdresViaBetrokkenheid ( echtgenoten . get ( 1 ) ) ) . thenReturn ( echtgenoten . get ( 1 ) . getBetrokkene ( ) ) ; java . util . List < nl . bzk . brp . model . objecttype . operationeel . PersoonModel > kandidaten = kandidaatVader . bepaalKandidatenVader ( new nl . bzk . brp . model . objecttype . operationeel . PersoonModel ( new nl . bzk . brp . model . objecttype . bericht . PersoonBericht ( ) ) , new nl . bzk . brp . model . attribuuttype . Datum ( 20120506 ) ) ; org . mockito . Mockito . verify ( persoonRepository , org . mockito . Mockito . times ( 2 ) ) . haalPersoonOpMetAdresViaBetrokkenheid ( ( ( nl . bzk . brp . model . objecttype . operationeel . BetrokkenheidModel ) ( org . mockito . Matchers . any ( ) ) ) ) ; "<AssertPlaceHolder>" ; } size ( ) { return elementen . size ( ) ; }
putListeners ( ) { final java . util . concurrent . atomic . AtomicInteger counter = new java . util . concurrent . atomic . AtomicInteger ( ) ; map . addListener ( new LRUMap . ModificationListener < java . lang . String , java . lang . Integer > ( ) { # java . lang . Override public void onPut ( java . lang . String key , java . lang . Integer value ) { counter . incrementAndGet ( ) ; } # java . lang . Override public void onRemove ( java . lang . String key , java . lang . Integer value ) { } } ) ; map . put ( "hello" , 1 ) ; map . put ( "hello2" , 2 ) ; "<AssertPlaceHolder>" ; } put ( java . lang . String , org . codehaus . httpcache4j . List ) { return super . put ( new org . codehaus . httpcache4j . util . CaseInsensitiveKey ( key ) , value ) ; }
testStatelessKieSession ( ) { org . kie . api . runtime . StatelessKieSession ksession = ( ( org . kie . api . runtime . StatelessKieSession ) ( org . kie . spring . tests . KieSpringComponentScanTest . context . getBean ( "ksession1" ) ) ) ; "<AssertPlaceHolder>" ; }
shouldHashSha1 ( ) { java . lang . String [ ] correctHashes = new java . lang . String [ ] { "da39a3ee5e6b4b0d3255bfef95601890afd80709" , "5baa61e4c9b93f3f0682250b6cf8331b7ee68fd8" , "285d0c707f9644b75e1a87a62f25d0efb56800f0" , "a42ef8e61e890af80461ca5dcded25cbfcf407a4" } ; java . util . List < java . lang . String > result = new java . util . ArrayList ( ) ; for ( java . lang . String password : fr . xephi . authme . security . HashUtilsTest . GIVEN_PASSWORDS ) { result . add ( fr . xephi . authme . security . HashUtils . sha1 ( password ) ) ; } "<AssertPlaceHolder>" ; } contains ( java . lang . String ) { return ( getObject ( path ) ) != null ; }
equalsOtherNullReturnsFalse ( ) { com . rackspacecloud . blueflood . types . BluefloodCounterRollup rollup = new com . rackspacecloud . blueflood . types . BluefloodCounterRollup ( ) ; "<AssertPlaceHolder>" ; } equals ( java . lang . Object ) { if ( ! ( obj instanceof com . rackspacecloud . blueflood . rollup . Granularity ) ) return false ; else return obj == ( this ) ; }
testFlatten ( ) { org . teiid . translator . document . Document doc = new org . teiid . translator . document . Document ( ) ; doc . addProperty ( "B" 4 , "AA" ) ; doc . addProperty ( "B" , "B" 2 ) ; org . teiid . translator . document . Document c1 = new org . teiid . translator . document . Document ( "c1" , false , doc ) ; c1 . addProperty ( "B" 1 , "11" ) ; org . teiid . translator . document . Document c2 = new org . teiid . translator . document . Document ( "c1" , false , doc ) ; c2 . addProperty ( "B" 3 , "B" 7 ) ; doc . addChildDocuments ( "c1" , java . util . Arrays . asList ( c1 , c2 ) ) ; org . teiid . translator . document . Document c4 = new org . teiid . translator . document . Document ( "c2" , false , doc ) ; c4 . addProperty ( "4" , "B" 0 ) ; org . teiid . translator . document . Document c5 = new org . teiid . translator . document . Document ( "c2" , false , doc ) ; c5 . addProperty ( "5" , "B" 6 ) ; doc . addChildDocuments ( "c2" , java . util . Arrays . asList ( c4 , c5 ) ) ; java . util . List < java . util . Map < java . lang . String , java . lang . Object > > result = doc . flatten ( ) ; java . util . List < java . util . Map < java . lang . String , java . lang . Object > > expected = java . util . Arrays . asList ( map ( "B" 4 , "AA" , "B" , "B" 2 , "c1/1" , "11" , "B" 5 , "B" 0 ) , map ( "B" 4 , "AA" , "B" , "B" 2 , "c1/2" , "B" 7 , "B" 5 , "B" 0 ) , map ( "B" 4 , "AA" , "B" , "B" 2 , "c1/1" , "11" , "c2/5" , "B" 6 ) , map ( "B" 4 , "AA" , "B" , "B" 2 , "c1/2" , "B" 7 , "c2/5" , "B" 6 ) ) ; "<AssertPlaceHolder>" ; } toArray ( ) { return java . util . Arrays . copyOf ( elementData , size ) ; }
testSetUnread ( ) { contact . setUnread ( 1 ) ; "<AssertPlaceHolder>" ; } getUnread ( ) { return unread ; }
testOracleDatabase ( ) { try { java . lang . String expectedSQL = ( org . pentaho . di . core . database . SelectCountIT . NonHiveSelect ) + ( org . pentaho . di . core . database . SelectCountIT . TableName ) ; org . pentaho . di . core . database . DatabaseMeta databaseMeta = new org . pentaho . di . core . database . DatabaseMeta ( org . pentaho . di . core . database . SelectCountIT . OracleDatabaseXML ) ; java . lang . String sql = databaseMeta . getDatabaseInterface ( ) . getSelectCountStatement ( org . pentaho . di . core . database . SelectCountIT . TableName ) ; "<AssertPlaceHolder>" ; } catch ( java . lang . Exception e ) { e . printStackTrace ( ) ; } } getSelectCountStatement ( java . lang . String ) { if ( ( databaseDialect ) != null ) { return databaseDialect . getSelectCountStatement ( tableName ) ; } return super . getSelectCountStatement ( tableName ) ; }
Expected output:
linenumber,length
5,5000
10,3850
2,2000
...
Using any version of these tools that exist on every Unix box:
$ awk -v OFS=',' '{print NR, length($0)}' file | sort -t, -rnk2 | head -n 5
1,1717
6,1649
8,883
2,762
4,656
Just add echo 'linenumber,length'; at the start to get a header line if you really want it.
The above outputs 5 lines instead of 10 to demonstrate selection of largest-N, since the OP only provided 8 lines of sample input.
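For example, the complete command with the header prepended (a minimal sketch using the same file name as above):
$ { echo 'linenumber,length'; awk -v OFS=',' '{print NR, length($0)}' file | sort -t, -rnk2 | head -n 5; }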
Using GNU awk:
$ gawk '{
a[NR]=length
}
END {
PROCINFO["sorted_in"]="#val_num_desc" # this line is GNU awk only
for(i in a)
print i,a[i]
}' file
Output:
1 1717
6 1649
8 883
2 762
4 656
5 375
3 268
7 107
If you don't have GNU awk but some other awk, pipe the output to sort:
$ awk ... | sort -k2nr
You can try the command below:
cat input.txt|awk 'BEGIN{print "linenumber,length"};{size[NR] = length;}END{for(lineNumber in size) print lineNumber","size[lineNumber]}'|sort -k2 -rn|head -10
UPDATE 2 : BENCHMARKING
After adding a Top-10-only filter to another solution above, here's how it fares for the same input set - 11.896 secs
fgc; ( time ( pvE0 < "${m3t}" |
gawk '{
a[NR]=length
}
END {
PROCINFO["sorted_in"]="#val_num_desc" # this line is GNU awk only
for(i in a) {
print i,a[i]; if (9 < ++_) { break } } }' ) |
pvE9 ) | gcat -b | lgp3 5
in0: 1.85GiB 0:00:08 [ 228MiB/s] [ 228MiB/s] [=>] 100%
out9: 0.00 B 0:00:11 [0.00 B/s] [0.00 B/s] [<=>]
1 6954837 18458
2 11417380 14247
3 6654331 11188
4 7576850 10352
5 12262953 10182
6 12279191 10156
7 12329231 9679
8 11479085 9568
9 12329230 9400
10 12418983 8666
out9: 143 B 0:00:11 [12.0 B/s] [12.0 B/s] [<=> ]
( pvE 0.1 in0 < "${m3t}" | gawk ; )
11.49s user 0.77s system 103% cpu 11.896 total
UPDATE 1 :
If you're willing to take a leap of faith and assume the input is fully valid UTF-8 to begin with, then by adding this small function it can report the line #, the byte count, and also the UTF-8 character count (the gsub() strips UTF-8 continuation bytes and invalid bytes, so length() then counts characters instead of bytes):
function _______(_) { # only applicable for non-unicode aware awks
_=$(_<_)
gsub("[\\\200-\\\301\\\365-\\\377]+","",_)
return length(_)
}
in0: 1.85GiB 0:00:01 [1.63GiB/s] [1.63GiB/s] [========>] 100%
( pvE 0.1 in0 < "${m3t}" | mawk2 ; )
0.93s user 0.42s system 117% cpu 1.152 total
1 index :: 2 #-bytes :: 16024 | Ln:: 12417761 | #-utf8-chars :: 8663
2 index :: 3 #-bytes :: 16033 | Ln:: 12418983 | #-utf8-chars :: 8666
3 index :: 4 #-bytes :: 22261 | Ln:: 11417380 | #-utf8-chars :: 14247
4 index :: 5 #-bytes :: 20574 | Ln:: 6654331 | #-utf8-chars :: 11188
5 index :: 6 #-bytes :: 20714 | Ln:: 12329231 | #-utf8-chars :: 9679
6 index :: 7 #-bytes :: 20077 | Ln:: 12329230 | #-utf8-chars :: 9400
7 index :: 8 #-bytes :: 18870 | Ln:: 3781376 | #-utf8-chars :: 8416
8 index :: 9 #-bytes :: 16801 | Ln:: 9000781 | #-utf8-chars :: 8459
9 index :: 0 #-bytes :: 25891 | Ln :: 6954837 | #-utf8-chars :: 18458
10 index :: 1 #-bytes :: 16051 | Ln :: 11479085 | #-utf8-chars :: 9568
========================
It's muuuuuuuuuuuch faster doing it in awk - not even 1.2 secs to finish a 1.85 GB text file filled with UTF-8:
Instead of storing every single line, it only updates entries in the 10-item array whenever the shortest existing entry would get knocked down to the 11th spot.
Since ties favor existing entries, the array is seldom updated at all.
It also temporarily stores the shortest entry in a variable, which is a lot less overhead than reading from and writing to the array for every one of the 12 million rows in the test file.
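A more readable sketch of that idea in plain awk (hypothetical variable names; this is not the benchmarked code, which follows below in a deliberately compact style):
awk -v N=10 '
{
    len = length($0)
    if (count < N) {                          # array not full yet: just fill a slot
        lenOf[++count] = len; lineOf[count] = NR
        if (minSlot == 0 || len < lenOf[minSlot]) minSlot = count
    } else if (len > lenOf[minSlot]) {        # longer than the shortest kept entry: evict it
        lenOf[minSlot] = len; lineOf[minSlot] = NR
        minSlot = 1                           # rescan the small array for the new minimum
        for (i = 2; i <= N; i++)
            if (lenOf[i] < lenOf[minSlot]) minSlot = i
    }
}
END { for (i = 1; i <= count; i++) print lineOf[i], lenOf[i] }' file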
fgc; ( time ( pvE0 < "${m3t}" |
mawk2 '
function ______(___,_,__,____,_____) {
__=(_=3)^_^_
_____=""
for(____ in ___) {
_____=__==(__=+(_=___[____])<+__\
?_:__) ?_____:____
} return _____
}
BEGIN {
split(sprintf("%0*.f",
(__=10)-!_,_),___,_)
_____=___[+_] = _*= \
FS = "^$"
} _____<(____=length($!__)) {
___[_]=____ "_|_LineNum_::_"NR
_____=+___[_=______(___)]
} END {
for(____=__+_;_<____;_++) {
print "index :: ",_%__,"_length :: ",___[_%__] } } ' ))
sleep 1
( time ( pvE0 < "${m3t}" |
mawk2 '{ print length($0),NR }' OFS== ) |
LC_ALL=C gsort -t= -k 1,1nr -k 2,2nr ) |
gsed -n '1,10p;10q' |
gsort -t= -k 1,1n | gcat -b | rs -t -c$'\n' -C= 0 3 | column -s= -t
in0: 1.85GiB 0:00:01 [1.64GiB/s] [1.64GiB/s] [============>] 100%
0.93s user 0.42s system 117% cpu 1.145 total
1 index :: 2 length :: 16024 | LineNum :: 12417761
2 index :: 3 length :: 16033 | LineNum :: 12418983
3 index :: 4 length :: 22261 | LineNum :: 11417380
4 index :: 5 length :: 20574 | LineNum :: 6654331
5 index :: 6 length :: 20714 | LineNum :: 12329231
6 index :: 7 length :: 20077 | LineNum :: 12329230
7 index :: 8 length :: 18870 | LineNum :: 3781376
8 index :: 9 length :: 16801 | LineNum :: 9000781
9 index :: 0 length :: 25891 | LineNum :: 6954837
10 index :: 1 length :: 16051 | LineNum :: 11479085
in0: 1.85GiB 0:00:07 [ 247MiB/s] [ 247MiB/s] [=========>] 100%
1 16024 12417761 5 18870 3781376 9 22261 11417380
2 16033 12418983 6 20077 12329230 10 25891 6954837
3 16051 11479085 7 20574 6654331
4 16801 9000781 8 20714 12329231
2.63s user 0.42s system 39% cpu 7.681 total
5.85s user 0.51s system 72% cpu 8.808 total

Importing external instances as owl file

I'm loading the ontology from https://users.ugent.be/~hvhaele/stad.gent.mini.ttl and importing the four instances from https://users.ugent.be/~hvhaele/nwd.owl into Protégé.
[screenshot: the four instances]
Then, using this SHACL code, I should have no violations:
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix nwd: <https://users.ugent.be/~hvhaele/nwd.owl#> .
[ rdf:type owl:Ontology ;
owl:imports <https://users.ugent.be/~hvhaele/nwd.owl>
] .
# Validation item 1: Not an opendefinition.org license type
ex:OpenLicenseShape a sh:NodeShape ;
sh:targetClass dcat:Catalog , dcat:Distribution ;
sh:property [
sh:path dct:license ;
sh:class nwd:OpenLicense ;
sh:severity sh:Warning ;
sh:message "Not an opendefinition.org license type."
] .
But instead, I get three violations:
https://data.stad.gent/api/v2/catalog/datasets/api-luftdateninfo-csv http://purl.org/dc/terms/license http://opendatacommons.org/licenses/odbl/
https://data.stad.gent/api/v2/catalog/datasets/api-luftdateninfo-geojson http://purl.org/dc/terms/license http://www.opendatacommons.org/licenses/odbl/1.0/
https://data.stad.gent/api/v2/catalog/datasets/api-luftdateninfo-shp http://purl.org/dc/terms/license http://opendatacommons.org/licenses/odbl/
What am I doing wrong? Any help is appreciated!

Azure: create SAS manually

I'm trying to implement a Perl function for creating SAS tokens. Doing everything according to the documentation (https://learn.microsoft.com/en-us/rest/api/storageservices/create-service-sas), I always get the error "Signature fields not well formed".
My testing resource is "https://ribdkstoragepoc.blob.core.windows.net/project-poc/foo/koteyka.jpg". Variables populated by my code:
signedpermissions = 'r'
signedexpiry = '2020-05-01T07:59:00Z'
canonicalizedresource = '/blob/ribdkstoragepoc/project-poc/foo/koteyka.jpg'
signedresource = 'b'
signedversion = '2018-03-28'
The other variables are just empty strings. I generate the string to sign using the following pattern:
my $stringToSign =
$permissions . "\n" .
$start . "\n" .
$expiry . "\n" .
$canonicalizedResource . "\n" .
$identifier . "\n" .
$IP . "\n" .
$protocol . "\n" .
$version . "\n" .
$resource . "\n" .
$snapshotTime . "\n" .
$rscc . "\n" .
$rscd . "\n" .
$rsce . "\n" .
$rscl . "\n" .
$rsct;
The string to sign:
r
2020-05-01T07:59:00Z
/blob/ribdkstoragepoc/project-poc/foo/koteyka.jpg
2018-03-28
b
Calculation of the signature: my $sig = Digest::SHA::hmac_sha256_base64($stringToSign, $key);
At last the final URL looks like: https://ribdkstoragepoc.blob.core.windows.net/project-poc/foo/koteyka.jpg?sig=YcwWvOT2FtOZGbXQxMAoSxvA2HhRmMAUp%2B6WUY%2Bjbjg&sv=2018-03-28&se=2020-05-01T07%3A59%3A00Z&sr=b&sp=r
As I have already said that does not work. Does anyone have ideas what could be wrong?
Figured out what's wrong with your code. Basically, you're using the 2018-03-28 version of the Storage REST API, so you don't need to include $resource and $snapshotTime in your $stringToSign.
Furthermore, you will need to pad your signature with 1-4 '=' characters (Ref: https://github.com/smarx/waz-storage-perl/blob/master/WindowsAzure/Storage.pm#L42).
Based on these, here's the code:
use strict;
use warnings;
use Digest::SHA qw(hmac_sha256_base64);
use MIME::Base64;
my $permissions = 'r';
my $start = '';
my $expiry = '2020-01-31T00:00:00Z';
my $canonicalizedResource = '/blob/ribdkstoragepoc/project-poc/foo/koteyka.jpg';
my $identifier = '';
#my $resource = 'b';
my $IP = '';
my $version = '2018-03-28';
my $protocol = '';
#my $snapshotTime = '';
my $rscc = '';
my $rscd = '';
my $rsce = '';
my $rscl = '';
my $rsct = '';
my $stringToSign = $permissions . "\n" . $start . "\n" . $expiry . "\n" . $canonicalizedResource . "\n" . $identifier . "\n" . $IP . "\n" . $protocol . "\n" . $version . "\n" . $rscc . "\n" . $rscd . "\n" . $rsce . "\n" . $rscl . "\n" . $rsct;
print $stringToSign;
my $accountKey = 'your-base64-account-key';
my $sig = Digest::SHA::hmac_sha256_base64($stringToSign, decode_base64($accountKey));
$sig .= '=' x (4 - (length($sig) % 4));
print "\n---------------------\n";
print $sig;
print "\n";
Give this a try, it should work.
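If it helps, the final URL from the question could then be assembled from the same variables along the following lines (a sketch only; URI::Escape is assumed to be available, and the blob URL is the one from the question):
use URI::Escape qw(uri_escape);

my $blobUrl = 'https://ribdkstoragepoc.blob.core.windows.net/project-poc/foo/koteyka.jpg';
my $sasUrl  = $blobUrl
            . '?sv=' . uri_escape($version)
            . '&se=' . uri_escape($expiry)
            . '&sr=b'
            . '&sp=' . uri_escape($permissions)
            . '&sig=' . uri_escape($sig);
print "$sasUrl\n";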

Building Geospatial Index when working with JENA FUSEKI

I would like to use the nearby geospatial function, which is described as supported through Jena Fuseki here: https://jena.apache.org/documentation/query/spatial-query.html
I need to build the geospatial Index for the query to work. The instructions are as follows (taken from above link):
Build the TDB dataset:
java -cp $FUSEKI_HOME/fuseki-server.jar tdb.tdbloader --tdb=assembler_file data_file
using the copy of TDB included with Fuseki. Alternatively, use one of the TDB utilities tdbloader or tdbloader2:
$JENA_HOME/bin/tdbloader --loc=directory data_file
then build the spatial index with the jena.spatialindexer:
java -cp jena-spatial.jar jena.spatialindexer --desc=assembler_file
Assuming I knew which file is the assembler file in my Fuseki folder (I don't), I searched for jena-spatial.jar in my latest Jena download. Having found that it is not there, I searched online and found a copy of the jar here - https://jar-download.com/?detail_search=g%3A%22org.apache.jena%22+AND+a%3A%22jena-spatial%22&search_type=av&a=jena-spatial&p=1
I try running it, but I get the error "Could not find or load main class jena.spatialindexer". I did searches for jena.spatialindexer and found a match (I cannot post it here as I am at the link post limit).
At this point I am wondering: would it be possible to make this just a little bit more complicated? You know, I obviously have all the time in the world to search through Google trying to figure out these cryptic clues.
In short, if anyone out there has done this before, please could you point out where I am going wrong?
Kindest regards,
Kris.
Just in case it might help, find my configuration below:
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
@prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
@prefix : <#> .
@prefix spatial: <http://jena.apache.org/spatial#> .
# TDB
[] ja:loadClass "org.apache.jena.tdb.TDB" .
tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .
tdb:GraphTDB rdfs:subClassOf ja:Model .
# Spatial
[] ja:loadClass "org.apache.jena.query.spatial.SpatialQuery" .
spatial:SpatialDataset rdfs:subClassOf ja:RDFDataset .
#spatial:SpatialIndexSolr rdfs:subClassOf spatial:SpatialIndex .
spatial:SpatialIndexLucene rdfs:subClassOf spatial:SpatialIndex .
## ---------------------------------------------------------------
## This URI must be fixed - it's used to assemble the spatial dataset.
:spatial_dataset rdf:type spatial:SpatialDataset ;
spatial:dataset <#tdb_dataset_readwrite> ;
##spatial:index <#indexSolr> ;
spatial:index <#indexLucene> ;
.
<#tdb_dataset_readwrite> rdf:type tdb:DatasetTDB ;
tdb:location "/myfolder" ;
## # Query timeout on this dataset (milliseconds)
## ja:context [ ja:cxtName "arq:queryTimeout" ; ja:cxtValue "1000" ] ;
## # Default graph for query is the (read-only) union of all named graphs.
tdb:unionDefaultGraph true ;
.
<#indexLucene> a spatial:SpatialIndexLucene ;
#spatial:directory <file:Lucene> ;
## spatial:directory "mem" ;
spatial:directory <file:/myfolder/spatial> ;
spatial:definition <#definition> ;
.
<#definition> a spatial:EntityDefinition ;
spatial:entityField "uri" ;
spatial:geoField "geo" ;
# custom geo predicates for 1) Latitude/Longitude Format
spatial:hasSpatialPredicatePairs (
[ spatial:latitude :latitude_1 ; spatial:longitude :longitude_1 ]
[ spatial:latitude :latitude_2 ; spatial:longitude :longitude_2 ]
) ;
# custom geo predicates for 2) Well Known Text (WKT) Literal
spatial:hasWKTPredicates (:wkt_1 :wkt_2) ;
# custom SpatialContextFactory for 2) Well Known Text (WKT) Literal
spatial:spatialContextFactory
"org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
.
# "com.spatial4j.core.context.jts.JtsSpatialContextFactory"
<#service_tdb1> rdf:type fuseki:Service ;
rdfs:label "TDB Service" ;
fuseki:name "tdb_spatial" ;
fuseki:serviceQuery "query" ;
fuseki:serviceQuery "sparql" ;
fuseki:serviceUpdate "update" ;
fuseki:serviceUpload "upload" ;
fuseki:serviceReadWriteGraphStore "data" ;
# A separate read-only graph store endpoint:
fuseki:serviceReadGraphStore "get" ;
fuseki:dataset :spatial_dataset ;
.
As you can see, I have changed the class name to org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory. Besides, when running Fuseki I have to include another jar file in the classpath, as otherwise I get issues with:
EntityDefinitionAssembler WARN Custom SpatialContextFactory lib is not ready in classpath:com/vividsolutions/jts/geom/CoordinateSequenceFactory
Then my command line looks like:
java -cp "fuseki-server.jar:lib/jts-1.13.jar" org.apache.jena.fuseki.cmd.FusekiCmd -debug
I have downloaded jts-1.13.jar from here.
I would also like to thank Kai for pointing me in the right direction.
Note: I still have to fully understand the fields under spatial:EntityDefinition. I will try to edit with more information.
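For reference, the kind of query this index is meant to enable looks roughly like the following (adapted from the linked spatial-query documentation; the coordinates, radius, and variable name are only placeholders, and I have not run it against this setup):
PREFIX spatial: <http://jena.apache.org/spatial#>

SELECT ?object
WHERE {
    # (latitude longitude radius units) - placeholder values
    ?object spatial:nearby (51.05 3.72 10 'km') .
}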

String behind two variables in directory path laravel 5

I want a "/" behind "vandiepen" and "test.txt. Now, Laravel gives an error, because it's not a good path to the file.
The file "C:\xampp\htdocs\systeembeheer\storage/download/vandiepentest.txt" does not exist
I tried to put the "/" between the $folder and the $id variables:
$file = storage_path(). "/download/".$folder "/" .$id;
When I do that, Laravel gives an error:
syntax error, unexpected '"/"' (T_CONSTANT_ENCAPSED_STRING)
The problem is that you missed the . (concatenation operator) after $folder.
It should be:
$file = storage_path() . "/download/" . $folder. "/" . $id;
You can also use DIRECTORY_SEPARATOR:
$file = storage_path() . DIRECTORY_SEPARATOR . "download" . DIRECTORY_SEPARATOR . $folder . DIRECTORY_SEPARATOR . $id;
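Note that storage_path() also accepts a relative path argument, so the separators can be left to the helper (assuming $folder and $id hold plain path segments, as in the question):
// Laravel builds the full path under storage/ for you:
$file = storage_path("download/{$folder}/{$id}");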
