Moving datasets from brightway2 to openLCA - brightway

So, from the brightway2 documentation I see that I can export datasets as Excel files, SimaPro CSVs, ecospold 1 & 2, or JSON (with the bw2 custom structure, from what I understand).
openLCA allows imports from Excel, SimaPro CSV, and JSON-LD, among others.
So before I go through all the combinations and potentially run into dead ends, I was wondering if there is a preferred (if any?) way to do that?
thanks!

OpenLCA uses a slightly different mental model than Brightway, so I am not sure what the best way to do this is (Brightway makes a strong assumption that there is a graph with fixed edges, while OpenLCA allows more flexibility). There certainly isn't one recommended way.
Cauldron would be willing to fund an openLCA-compatible olca JSON schema exporter, but this doesn't help you right now...
You could also consider asking in the OpenLCA forums.
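If you do try the Excel route first, a minimal sketch of the export side might look like this. The exact helper name and location can differ between bw2io versions, and the project and database names here are placeholders:

import bw2data as bd
from bw2io.export import write_lci_excel  # assumed location; check your bw2io version's docs

bd.projects.set_current("my_project")      # placeholder project name
filepath = write_lci_excel("my_database")  # placeholder database name
print(filepath)  # path of the Excel workbook to try importing into openLCA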

Related

Ontology Populating

Hello everyone,
because of my lack of experience with ontologies and web semantics, I have a conceptual misunderstanding. When we refer to 'ontology population', do we make clones of the ontology with our concrete data, or do we map our concrete data to the ontology? If so, how is it done? My intention is to build a knowledge graph using an ontology (the FIBO ontology for the loans domain), and I also have an Excel file with loans data. Not every entry in my Excel file corresponds to the predefined ontology classes. However, I suppose that is not a major problem. So, to make myself more clear, I want to know how I practically populate the ontology.
Also, I would like to note that I am using Neo4j as a graph database and Python as my implementation language, so the population of the ontology would be done using its libraries.
Thanks in advance for your time!
This video could inform your understanding of modelling and imports for graph database design: https://www.youtube.com/watch?v=oXziS-PPIUA
He steps through importing a CSV into Neo4j and uses Python.
The terms ontology and web semantics (OWL) are probably not what you're asking about (this being the loans/finance domain, rather than the web). Furthermore, web semantics is not taken very seriously by professionals these days.
"Graph database modelling" is probably a useful area of research to solve your problem; a sketch of a CSV import with the official Neo4j Python driver follows below.
I can recommend you use Apache Jena to populate your ontology with the data source; you can use either Java or Python. The process looks roughly like this:
1. Extract triples from the loaded data according to the RDF schema, which is the basis of triple extraction. The parser used in this step depends on the data source; in your case it is the Excel file.
2. Map the extracted triples into an intermediate data model (IDM). The IDM could be in any format that is convenient for mapping, such as JSON.
3. Load the individuals from the intermediate data model into the RDF schema that was previously used. The RDF schema is now updated to contain the individuals too.
4. Review the updated schema to check whether it needs more data, then run the logic reasoner to evaluate and correct possible problems and inconsistencies.
5. If the reasoner runs without any error, the RDF schema now contains all the possible individuals and you can use it for visualisation in Neo4j.
A small Python sketch of this kind of population follows below.
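Since you mentioned Python, here is a minimal sketch of the population step using rdflib and pandas instead of Jena (my substitution, not the answer's); the namespace, class, property and column names are hypothetical and would need to match FIBO and your spreadsheet:

import pandas as pd
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/loans#")  # hypothetical namespace

g = Graph()
g.parse("fibo_subset.ttl", format="turtle")  # hypothetical ontology file

# Each spreadsheet row becomes an individual of a (hypothetical) Loan class.
df = pd.read_excel("loans.xlsx")
for _, row in df.iterrows():
    loan = URIRef(EX[f"loan_{row['loan_id']}"])
    g.add((loan, RDF.type, EX.Loan))
    g.add((loan, EX.principalAmount, Literal(float(row["amount"]))))

g.serialize("populated.ttl", format="turtle")

The resulting Turtle file could then be loaded into Neo4j, for example with the neosemantics (n10s) plugin.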

How can I load my own dataset for person?

How can I load a dataset for person re-identification? In my dataset there are two folders, train and test.
I wish I could provide comments, but I cannot yet. Therefore, I will "answer" your question to the best of my ability.
First, you should provide a general format or example content of the dataset. This would help me provide a less nebulous answer.
Second, from the nature of your question I am assuming that you are fairly new to Python in general. Forgive me if I'm wrong in my assumption. With that assumption, depending on what kind of data you are trying to load (i.e. text, numbers, or a mixture of text and numbers), there are various ways to load the data, some easier than others. If you are strictly loading numbers, I suggest using numpy.loadtxt(<file name>). If you are using text, you could use the pandas package, or if it's in a CSV file you could use the csv package built into Python. Alternatively, if it's in a format that TensorFlow can read, you could use the provided load-data functions.
Once you have loaded your data you will need to separate it into input and output values. Considering that TensorFlow models accept either lists or NumPy arrays, you should be able to use these in your training and testing steps.
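Since person re-identification datasets are usually folders of images, here is a minimal sketch using TensorFlow's directory loader; the folder layout and image size are assumptions based on your description:

import tensorflow as tf

# Assumes a layout like train/<person_id>/*.jpg and test/<person_id>/*.jpg,
# where each identity has its own subfolder (a common re-ID layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "train", image_size=(256, 128), batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "test", image_size=(256, 128), batch_size=32, shuffle=False)

print(train_ds.class_names)  # the inferred person identities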
Check out the csv module (import csv), or load your dataset via open(filename, "r") or similar. It would be easiest if you provided more context/info.

Persisting only part of a data source

I'm using intake to access the catalog catalog.ocean.GFDL_CM2_6.GFDL_CM2_6_control_ocean_surface.
At the moment I only work with small patches of that data, but accessing the data every single time is still quite costly (it's on Google Cloud Storage). So I want to use the persist option of intake to store that data locally. However, as far as I've understood from the docs, it looks like one can only persist the whole dataset. For that specific dataset that would amount to almost 400 dollars if I assume a cost of $0.10 per GB, since the total data is 3976 GB.
Hence my questions:
Is there a way (especially for a zarr file, which in theory should make this quite easy) to persist only parts of the data (for instance only a subset of the variables)?
This is probably more complicated, but can I push things further by persisting regions of data I'm interested in (in terms of coordinate values, for instance)?
There is no direct Intake way to do what you are asking for. Intake was conceived as a way to get your data into a format that you can then manipulate as you normally do, i.e., it deals only with the loading part, so that a persisted dataset is the same as the original.
However, it is not hard to accomplish manually: grab the xarray dataset, filter for the region you need, and call to_zarr to save the new dataset. You can then point a simple catalogue entry like the old one at the new location.
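A minimal sketch of that manual route, assuming the catalogue entry exposes a to_dask() method (as intake-xarray sources typically do) and using made-up variable and coordinate names:

import intake

cat = intake.open_catalog("master.yaml")  # however you currently open the Pangeo catalogue
source = cat.ocean.GFDL_CM2_6.GFDL_CM2_6_control_ocean_surface
ds = source.to_dask()  # lazy xarray Dataset backed by dask

# Keep only the variables and region you actually need (names are hypothetical).
subset = ds[["surface_temp"]].sel(xt_ocean=slice(-60, -30), yt_ocean=slice(-50, -20))

# Write the small subset locally and point a new catalogue entry at this path.
subset.to_zarr("gfdl_cm2_6_subset.zarr")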
You could have done this manipulation in a driver directly if this were a specific pattern that would repeat a lot. In fact, we have mooted the idea of whether/how to implement such processing steps in Intake, but there is no plan yet. In the end, we may build on the pipelines work in Holoviews to describe processing steps.

Py-tables vs Blaze vs S-Frames

I am working on an exploratory data analysis using Python on a huge dataset (~20 million records and 10 columns). I will be segmenting and aggregating the data and creating some visualizations; I might as well create some decision tree and linear regression models using that dataset.
Because of the large dataset I need to use a data frame that allows out-of-core data storage. Since I am relatively new to Python and to working with large datasets, I want to use a method which would allow me to easily use sklearn on my datasets. I'm confused whether to use PyTables, Blaze or SFrame for this exercise. If someone could help me understand their pros and cons, and what factors are important in this kind of decision making, that would be much appreciated.
Good question! One option you may consider is to not use any of the aforementioned libraries, but instead read and process your file chunk by chunk. pandas allows reading data from (large) files chunk-wise via a file iterator, something like this:

import pandas as pd

csv_path = r"\path\to\file.csv"  # placeholder path
# 20,000,000 rows / 10 -> chunks of 2,000,000 rows each (chunksize must be an int)
it = pd.read_csv(csv_path, iterator=True, chunksize=2000000)
for i, chunk in enumerate(it):
    ...  # process each chunk here
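If you go the chunked route, scikit-learn's incremental estimators fit naturally on top of it. A minimal sketch for the linear-regression part, with hypothetical column names:

import pandas as pd
from sklearn.linear_model import SGDRegressor

# SGDRegressor supports partial_fit, so it can learn from one chunk at a time.
model = SGDRegressor()
for chunk in pd.read_csv(r"\path\to\file.csv", chunksize=2000000):
    X = chunk[["feature_1", "feature_2"]]  # hypothetical feature columns
    y = chunk["target"]                    # hypothetical target column
    model.partial_fit(X, y)

Note that plain decision trees in scikit-learn do not support incremental fitting, so for those you would need to train on a sample or on pre-aggregated data.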

How to migrate data from one product to another without revealing database structure?

What is the best way to migrate data from one product to another if both software companies refuse to reveal their database structure to one another?
Define some database structure capable of holding all the relevant information (it doesn't need to be optimized in any way), have the first company dump their data into that, and the second one import it.
Dump the data to a CSV and hand it to the other company. A good DBA can take the data and write a sufficient import script that maps it to the correct datatypes on the other end.
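For illustration, a minimal sketch of the export side in Python, using sqlite3 as a stand-in for whatever database the exporting company actually runs; the table and column names are made up and represent the agreed handover format:

import csv
import sqlite3

# Export only the agreed-upon fields; the internal schema stays hidden.
conn = sqlite3.connect("product_a.db")  # stand-in for the real database
rows = conn.execute("SELECT customer_name, order_date, total FROM orders")  # hypothetical query

with open("handover.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_name", "order_date", "total"])  # agreed column contract
    writer.writerows(rows)

conn.close()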
Morse code? Smoke signals? Braille?
Really, it sounds like an untenable situation between two parties that have no confidence in one another, and it sounds poisonous.
Holy smoke. Sounds like great fun ;-).
Just some ideas:
If you have access to the database... for a good DBA the migration should be no problem at all.
Another way would be for one company to create views to hide the underlying data structures, which the other company can then import.
Otherwise just let them dump the data into flat files and import those files.
