The import org.apache.jena.query cannot be resolved

I want to calculate the distance between sensors deployed in a geographical area, using longitude and latitude, in a SPARQL query issued in Apache Jena 2.11. (Sensor descriptions and observations are stored as RDF triples in sensor.n3; Eclipse is the IDE, the OS is Fedora 19, and TDB is the triple store.)
I found that "Spatial searches with SPARQL" should help in this regard. But when I use the import given at http://jena.apache.org/documentation/query/spatial-query.html (import org.apache.jena.query.spatial.EntityDefinition) in Eclipse, I get the error "The import org.apache.jena.query cannot be resolved". When I browse the ../apache-jena-2.11.1/javadoc-arq/org/apache/jena directory, it contains only atlas, common, web, and riot; there is no query folder, which is why the import is highlighted in red.
I have one more doubt: does Apache Solr need to be installed (I have downloaded Solr 4.10.1), or can I just add the external JAR to the build path?

You need to separately download jena-spatial. (Use Maven to manage your dependencies.) You can use Lucene instead of Solr; again, Maven will pull in the dependencies. – AndyS
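As a sketch, the Maven dependencies would look roughly like this. The apache-jena-libs POM pulls in ARQ (which provides org.apache.jena.query); the jena-spatial version number here is an assumption, so check Maven Central for the release matching your Jena version:

```xml
<dependencies>
  <!-- ARQ (SPARQL engine); provides the org.apache.jena.query packages -->
  <dependency>
    <groupId>org.apache.jena</groupId>
    <artifactId>apache-jena-libs</artifactId>
    <type>pom</type>
    <version>2.11.1</version>
  </dependency>
  <!-- spatial index support; version is an assumption, check Maven Central -->
  <dependency>
    <groupId>org.apache.jena</groupId>
    <artifactId>jena-spatial</artifactId>
    <version>1.0.1</version>
  </dependency>
</dependencies>
```

With Maven managing the classpath, there is no need to hand-edit the Eclipse build path, and the Lucene jars jena-spatial needs come in transitively.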

Related

Any python package that can be used to handle METEOR RADAR data sets?

Is there any existing Python package that can be used to handle METEOR RADAR data sets, which are in the .hwd file format?
I want to work on an atmospheric science project on tide analysis in the MLT region using Python. The source of the data is a METEOR RADAR, which stores data in the .hwd file format (height width depth).
I tried searching the internet for specific packages that could help me handle .hwd files, but ended up finding no packages or libraries that are currently active.
Could you please help me?
Thank you.
I figured this out!
There is no need for external packages to work on hwd files in python.
hwd stands for Horizontal Wind Data. The METEOR radar stores data in the .hwd file format, which can be treated as a normal text (.txt) file for file handling in Python.
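Since a .hwd file is plain text, a minimal sketch of reading one needs nothing beyond the standard library. The exact column layout of METEOR radar files is an assumption here, so adjust the comment-skipping and splitting to your data:

```python
def read_hwd(path, comment_prefix="#"):
    """Read a Horizontal Wind Data (.hwd) file as plain text.

    Returns a list of rows, each row a list of whitespace-separated
    fields. Blank lines and comment lines are skipped. The header
    convention varies between data sets, so adapt as needed.
    """
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(comment_prefix):
                continue  # skip blanks and comment/header lines
            rows.append(line.split())
    return rows
```

From there, the fields can be cast to floats, or the whole file loaded into pandas with read_csv using whitespace as the delimiter.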

Setting up Visual Studio Code to run models from Hugging Face

I am trying to import models from hugging face and use them in Visual Studio Code.
I installed transformers, tensorflow, and torch.
I have tried looking at multiple tutorials online but have found nothing.
I am trying to run the following code:
from transformers import pipeline
classifier = pipeline('sentiment-analysis')
result = classifier("I hate it when I'm sitting under a tree and an apple hits my head.")
print(result)
However, I get the following error:
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).
Using a pipeline without specifying a model name and revision in production is not recommended.
Traceback (most recent call last):
File "c:\Users\user\Desktop\Artificial Intelligence\transformers\Workshops\workshop_3.py", line 4, in <module>
classifier = pipeline('sentiment-analysis')
File "C:\Users\user\Desktop\Artificial Intelligence\transformers\src\transformers\pipelines\__init__.py", line 702, in pipeline
framework, model = infer_framework_load_model(
File "C:\Users\user\Desktop\Artificial Intelligence\transformers\src\transformers\pipelines\base.py", line 266, in infer_framework_load_model
raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model distilbert-base-uncased-finetuned-sst-2-english with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSequenceClassification'>, <class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_distilbert.DistilBertForSequenceClassification'>, <class 'transformers.models.distilbert.modeling_tf_distilbert.TFDistilBertForSequenceClassification'>).
I have already searched online for ways to set up transformers to use in Visual Studio Code but nothing is helping.
Does anyone know how to fix this error, or how to successfully use models from Hugging Face in my code? It would be appreciated.
This question is a little less about Hugging Face itself and more about your installation, the installation steps you took, and potentially your program's access to the cache directory where the models are automatically downloaded.
From what I am seeing either:
1/ your program is unable to access the model
2/ your program is throwing specific value errors in a bit of an edge case
If 1/ Take a look here: https://huggingface.co/docs/transformers/installation#cache-setup
Notice that the docs walk through where the pre-trained models are downloaded. Check that the model was downloaded to C:\Users\username\.cache\huggingface\hub (with your own username on your computer, of course).
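As a quick sanity check, a small stdlib-only snippet can print the default cache location and list any models already downloaded into it. The path layout is the documented default; setting HF_HOME (or TRANSFORMERS_CACHE) overrides it:

```python
import os

def hf_cache_dir():
    """Return the default Hugging Face hub cache directory.

    Honors the HF_HOME override if set; otherwise falls back to the
    documented default ~/.cache/huggingface, which on Windows expands
    under C:\\Users\\<username>.
    """
    hf_home = os.environ.get(
        "HF_HOME",
        os.path.join(os.path.expanduser("~"), ".cache", "huggingface"),
    )
    return os.path.join(hf_home, "hub")

cache = hf_cache_dir()
print("cache dir:", cache)
if os.path.isdir(cache):
    # downloaded models are stored as folders named models--<org>--<name>
    print([d for d in os.listdir(cache) if d.startswith("models--")])
else:
    print("cache directory does not exist yet")
```

If the expected models--distilbert-... folder is missing or empty, the problem is the download step rather than the pipeline call.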
Second, if for some reason there is an issue with downloading, you can try downloading the model manually and running in offline mode (this is more to get it up and running): https://huggingface.co/docs/transformers/installation#offline-mode
Third, if it is downloaded, do you have the right permissions to access the .cache directory? Try running your program (if it is a program that you trust) in Windows Terminal as an administrator. There are various ways to do this - find one you're comfortable with; here are a couple of hints from StackOverflow/StackExchange: Opening up Windows Terminal with elevated privileges, from within Windows Terminal, or this: https://superuser.com/questions/1560049/open-windows-terminal-as-admin-with-winr
If 2/ I have seen people bring up very specific issues about specific classes not being found (not the same as yours, but similar), and the issue was solved by installing PyTorch, because some models only exist as PyTorch models. You can see the full response from @YokoHono here: Transformers model from Hugging-Face throws error that specific classes couldn't be loaded
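For case 2/, the usual fix is to install PyTorch (pip install torch) and to pin the model, revision, and framework explicitly rather than relying on the default. A sketch, reusing the model name and revision from the error message above (the first run still downloads the weights, so it needs network access or a populated cache):

```python
from transformers import pipeline

# Pin the model and revision instead of relying on the default choice;
# framework="pt" forces the PyTorch weights, which this model ships with.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    revision="af0f99b",
    framework="pt",
)

result = classifier("I hate it when I'm sitting under a tree and an apple hits my head.")
print(result)
```

If this still raises the same ValueError, it confirms that neither the PyTorch nor the TensorFlow weights could be loaded, which points back at the installation or the cache rather than the pipeline call.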

How do you project geometries from one EPSG to another with Spark/Geomesa?

I am "translating" some Postgis code to Geomesa and I have some Postgis code like this:
select ST_Transform(ST_SetSRID(ST_Point(longitude, latitude), 4326), 27700)
which converts a point geometry from 4326 to 27700 for example.
On Geomesa-Spark-sql documentation https://www.geomesa.org/documentation/user/spark/sparksql_functions.html I can see ST_Point but I cannot find any equivalent ST_Transform function. Any idea?
I have used the Sedona library for geoprocessing, and it has an st_transform function which I have used and which works fine, so you could use that. Please find the official documentation at this link: https://sedona.apache.org/api/sql/GeoSparkSQL-Function/#st_transform
GeoMesa now supports the function as well:
https://www.geomesa.org/documentation/3.1.2/user/spark/sparksql_functions.html#st-transform
For GeoMesa 1.x, 2.x, and the upcoming 3.0 release, there is no ST_Transform at present. You could write your own UDF using GeoTools (or another library) to do the transformation.
Admittedly, this would require some work.
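If you are working from PySpark, one way to roll your own transform outside of GeoMesa is a small function built on pyproj. This is a sketch under the assumption that pyproj is installed; it is not GeoMesa's API, just the same EPSG:4326 → EPSG:27700 projection the Postgis query performs:

```python
from pyproj import Transformer

# Build the transformer once and reuse it; always_xy=True keeps
# (longitude, latitude) input order and (easting, northing) output order.
_to_bng = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)

def transform_4326_to_27700(lon, lat):
    """Project a WGS84 lon/lat point to British National Grid (EPSG:27700)."""
    return _to_bng.transform(lon, lat)

# In Spark this could be wrapped as a UDF, e.g.:
# from pyspark.sql.functions import udf
# from pyspark.sql.types import ArrayType, DoubleType
# transform_udf = udf(lambda lon, lat: list(transform_4326_to_27700(lon, lat)),
#                     ArrayType(DoubleType()))

x, y = transform_4326_to_27700(-0.1276, 51.5074)  # central London
print(x, y)
```

Note that a plain Python UDF like this is slower than a built-in SQL function, so for large data sets the Sedona or GeoMesa st_transform implementations mentioned above are preferable.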
I recently ran into the same issue on Azure Databricks. I was able to work around it by manually installing the JAR library from here,
and then running the following Scala code.
%scala
import org.locationtech.jts.geom._
import org.locationtech.geomesa.spark.jts._
import org.locationtech.geomesa.spark.geotools._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions._
import spark.implicits._
spark.withJTS
// Scala vals cannot be reassigned, so bind the result to a new name
val dataPointsTransformed = data_points
  .withColumn("geom", st_makePoint(col("LONGITUDE"), col("LATITUDE")))
  .withColumn("geom_5347", st_transform(col("geom"), lit("EPSG:4326"), lit("EPSG:5347")))
display(dataPointsTransformed)
Good luck.

Import Ecoinvent 2.2 Ecospold files into Brightway

I was trying to use the following code (Figure 1) to import ecoinvent v2.2 into Brightway. I followed the code from: https://github.com/PoutineAndRosti/Brightway-Seminar-2017/blob/master/Day%201%20AM/2%20-%20BW%20structure%20and%20first%20LCAs.ipynb
I obtained all the XML (EcoSpold) files exported from SimaPro (which is connected to the ecoinvent database) and saved all the data files in the folder C:\bw2-python\ecoSpold1.
However, when I ran the next step, I got the following errors:
Figure 2
I am not sure what is wrong here. Any suggestion would be very helpful!
I think the ecospold files obtained from SimaPro and from the ecoinvent website are not the same. SimaPro codes things a bit differently, which I think affects the naming of exchanges (that is why you got an invalid exchange). Either download the ecospold files from ecoinvent, or use the tools (see notebooks here and here) to read exported CSV files (the format preferred by SimaPro for exporting datasets).
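For reference, the usual EcoSpold1 import pattern with bw2io looks roughly like this (a sketch assuming a current Brightway2 setup; the folder path is the one from the question, and the database name is just a label). The statistics step is where SimaPro-flavored files typically reveal themselves as unlinked exchanges:

```python
import bw2io

# Point the EcoSpold1 importer at the folder of XML files
importer = bw2io.SingleOutputEcospold1Importer(
    r"C:\bw2-python\ecoSpold1",
    "ecoinvent 2.2",
)
importer.apply_strategies()   # normalize units, link exchanges, etc.
importer.statistics()         # reports how many exchanges remain unlinked
# importer.write_database()   # only once statistics() shows no unlinked exchanges
```

If statistics() reports unlinked exchanges, that is consistent with the SimaPro naming differences described above, and the files should be re-exported from ecoinvent directly or converted via the CSV route.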

[simple issue]: import .net file (word/occurrences) into cytoscape...which attributes are which?

I took a corpus of text and put it into VOSviewer to create a network for me. When I import this .net file into Gephi, it works fine: I get a semantic network. Though I'm a little stuck on which attributes to select when importing into Cytoscape. Here is a CSV file of the network (.net import wouldn't work), and I just need to know which column to select as what.
Each column has the option of being imported as one of the following:
Source Node
Interaction type
Edge Attribute
Source Node Attribute
Target Node Attribute
An option would be to export your network from Gephi in GraphML format and then load it into Cytoscape. This will allow you to skip the gory conversion details.
Just go to File > Export > Graph file and select GraphML under the Files of type: option.
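If you would rather script the conversion than go through the Gephi GUI, networkx can do the same .net → GraphML round trip. A sketch, assuming networkx is installed and the file is valid Pajek format:

```python
import networkx as nx

def pajek_to_graphml(net_path, graphml_path):
    """Convert a Pajek .net file to GraphML, which Cytoscape imports directly."""
    g = nx.read_pajek(net_path)      # returns a MultiGraph with labeled nodes
    nx.write_graphml(g, graphml_path)
    return g
```

This sidesteps the column-mapping question entirely, since GraphML carries the source/target structure and attributes explicitly.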
Is this the .net format you are using?
https://gephi.org/users/supported-graph-formats/pajek-net-format/
If so, then you can see that the format (like any network file format) requires at least two columns of node identifiers, i.e., source and target. Your screenshot clearly shows the first column, "id", along with some node attributes. The column headers for "weight" and "cluster" are not as clear. My naive guess is that the "weight (occurrences)" column is actually the second list of "target" node ids and the "weight (co-occurrences)" is an edge attribute. But this is just a guess!
If you can deduce the meaning of the columns in your file, then it's a simple matter to assign and import them into Cytoscape for .net or any tabular file format.
Hope this helps!
