Can't find migration biosphere-2-3-categories - brightway

I import an Excel database:
imp = bw.ExcelImporter(os.path.join("myfile.xls"))
And then apply strategies to it:
imp.apply_strategies()
But this issue arises:
AssertionError: Can't find migration biosphere-2-3-categories
I would like to understand what is happening, and what exactly this "biosphere 2-3" migration is.
The exchanges in my Excel file only concern biosphere-3... but I suppose that doesn't have much to do with it.

Running bw2io.bw2setup calls bw2io.migrations.create_core_migrations, which installs the sets of metadata needed to translate from one nomenclature system to another. If this migration data can't be found, you need to import and call create_core_migrations yourself.
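A minimal sketch of that fix, assuming create_core_migrations is exposed at the package level as bw2io.create_core_migrations (otherwise import it from bw2io.migrations):

import bw2io

# Installs the core migrations, including "biosphere-2-3-categories",
# so that apply_strategies() can find them.
bw2io.create_core_migrations()

# Alternatively, on a fresh project, bw2setup() installs the biosphere
# database, the LCIA methods and the same core migrations in one go:
# bw2io.bw2setup()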
You can see the actual changes in the names of elementary flow categories from ecoinvent version 2 to version 3 here.

How to use generators with kedro?

Thanks to David Beazley's slides on Generators I'm quite taken with using generators for data processing in order to keep memory consumption minimal. Now I'm working on my first kedro project, and my question is how I can use generators in kedro. When I have a node that yields a generator, and then run it with kedro run --node=example_node, I get the following error:
DataSetError: Failed while saving data to data set MemoryDataSet().
can't pickle generator objects
Am I supposed to always load all my data into memory when working with kedro?
Hi @ilja, to do this you may need to change the type of assignment operation that MemoryDataSet applies.
In your catalog, declare your datasets explicitly and change the copy_mode to either copy or assign. I think assign may be your best bet here...
https://kedro.readthedocs.io/en/stable/kedro.io.MemoryDataSet.html
I hope this works, but am not 100% sure.
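As a rough sketch of the programmatic equivalent (assuming MemoryDataSet accepts a copy_mode argument; the dataset and node names here are hypothetical, and in a real project you would set copy_mode on the dataset's entry in catalog.yml instead):

from kedro.io import DataCatalog, MemoryDataSet

# "assign" stores the object by reference, so a generator is handed through
# untouched instead of being deep-copied (which is what fails for generators).
catalog = DataCatalog({
    "example_node_output": MemoryDataSet(copy_mode="assign"),
})

def example_node():
    for i in range(3):
        yield i

catalog.save("example_node_output", example_node())
print(list(catalog.load("example_node_output")))  # [0, 1, 2]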

Writing DRY Spock tests

Assume that I have a Spock specification that, given a city and state, tests for the correct zip code. Assume I have a text file of cities and states that is used to drive the tests in the where clause.
Now assume that I want to split the tests so that I can run them for just "Virginia" or just "Maryland". The approach that I have taken is to create a new VirginiaSpec and a new MarylandSpec, and in each spec I modify the where clause.
This works, but seems inefficient because every other part of the VirginiaSpec and MarylandSpec is exactly the same. In addition, if the logic changes, then I would need to change it in every spec that I have.
So what I am looking for is an approach that allows me to have one StateSpec in which the where clause can be parameterized.
I realize I have not included a code example, however if my question is not clear, then I can provide one. Thanks for your help.
-Dan
You have a couple of options. You could put the basic setup and structure, and even the test itself, in a base test class, then extend that class with your VirginiaSpec and MarylandSpec. Your spec classes would be very small, probably just defining a constant that is the state for the spec.
But that seems needless. If both the cities and states are in the file, you could just read in the file in the where section of your test.
https://snekse.github.io/test-often-and-prosper-slides/#/42
If you cannot get the WHERE section working, you could always read in your file during the setupSpec and store the data in some kind of data structure then loop through it.
Spock: Reading Test Data from CSV File
But in general, using the Where label is going to be the right answer.

Getting FAGLL03H report using pyrfc

This is a mixed question between SAP and the usage of the pyrfc module. I need to use the FAGLL03H transaction code (tcode) to replicate a G/L report into a database on a daily basis. Now, the thing is that FAGLL03H is not a table per se, but a G/L Account Line Item Browser (G/L View), so I need to access that Tcode and pass a series of parameters in order to get the information we need.
1. How can I use the RFC protocol to access that tcode and generate a report?
2. Is it possible to do (1) through pyrfc?
This is the code I currently use to read tables:
import pyrfc
from pprint import PrettyPrinter

conn = pyrfc.Connection(ashost=...)
options = [{'TEXT': "FCURR = 'USD'"}]
pp = PrettyPrinter(indent=4)
ROWS_AT_A_TIME = 10
rowskips = 0
while True:
    print(u"----Begin of Batch---")
    result = conn.call('RFC_READ_TABLE',
                       QUERY_TABLE='TCURR',
                       OPTIONS=options,
                       ROWSKIPS=rowskips, ROWCOUNT=ROWS_AT_A_TIME)
    pp.pprint(result['DATA'])
    rowskips += ROWS_AT_A_TIME
    if len(result['DATA']) < ROWS_AT_A_TIME:
        break
1. No way.
2. No.
The main point you need to understand is the difference between an SAP transaction (tcode) and an SAP RFC module. The difference is huge and makes it impossible to use them in the same manner. You are trying to call the FAGLL03H report like a table via RFC_READ_TABLE, but it is not a table; it is much more: it is a transaction.
An SAP tcode is nothing more than a shortcut that points to some program, usually a GUI program, which can contain hundreds of modules, including RFC-enabled ones. Some of these modules are internal and have no RFC equivalent, so they cannot be called remotely; and last but not least, you would need to know how to call them (in what order) and which parameters to pass.
An SAP RFC module is like a container for ABAP code (RFC is also the protocol for calling that code) which implements some functionality, either a small piece such as converting character case or measurement units, or a huge one such as posting financial documents or creating enterprise hierarchy objects (work centers, cost centers, sales organizations, etc.). RFC modules can be likened to Python modules or Java methods: they are usually implemented for one single task, and are usually used not standalone but in combination with other modules.
The FAGLL03H transaction is huge, is intended for outputting G/L account line items, and cannot be called via PyRFC. PyRFC is limited to calling the individual RFC modules that FAGLL03H is built from.
The only thing you can do here is to find equivalent function modules which return the same items as FAGLL03H. Possible candidates:
BAPI_GLX_GETDOCITEMS
FAGL_GET_OPEN_ITEMS_GL
FAGL_GET_OPEN_ITEMS_KU
FAGL_GET_OPEN_ITEMS_LI
FAGL_GET_OPEN_ITEMS
FKK_GL_LINE_ITEMS_SELECT
BAPI_AP_ACC_GETBALANCEDITEMS
BAPI_AR_ACC_GETBALANCEDITEMS
BAPI_AP_ACC_GETOPENITEMS
BAPI_AR_ACC_GETOPENITEMS
You should try each one and compare its output with the tcode's to check whether it is identical; only then should you call it via PyRFC (a rough inspection sketch follows).
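For instance, a sketch of that comparison step with pyrfc (connection parameters are placeholders, and the .parameters attribute of the returned description is an assumption about pyrfc's FunctionDescription object):

import pyrfc

# Placeholders: fill in your own connection parameters.
conn = pyrfc.Connection(ashost="...", sysnr="...", client="...",
                        user="...", passwd="...")

candidates = [
    "BAPI_GLX_GETDOCITEMS",
    "FAGL_GET_OPEN_ITEMS",
    "BAPI_AR_ACC_GETOPENITEMS",
]

for name in candidates:
    # Fetch the function module's interface to see which importing/table
    # parameters it expects before attempting a real call.
    desc = conn.get_function_description(name)
    print(name)
    for param in desc.parameters:
        print("   ", param)

# Once a module's output matches FAGLL03H, call it with the parameters
# its interface requires, e.g.:
# result = conn.call("FAGL_GET_OPEN_ITEMS", ...)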
Check this to find the relevant underlying tables:
https://www.recercat.cat/bitstream/handle/2072/5419/PFCLopezRuizAnnex3.pdf?sequence=4
You can then either build from there, or create a report (transaction SQ01) and execute it through RSAQ_REMOTE_QUERY_CALL.
Your business requirements should decide your code, not the opposite.

Reading a grib2 message into an Iris cube

I am currently exploring the notion of using iris in a project to read forecast grib2 files using python.
My aim is to load/convert a grib message into an iris cube based on a grib message key having a specific value.
I have experimented with iris-grib, which uses gribapi. Using iris-grib I have not been able to find the key in the grib2 file, although the key is visible with 'grib_ls -w...' via the CLI.
gribapi does the job, but I am not sure how to interface it with iris (which is what, I assume, iris-grib is for).
I was wondering if anyone knew of a way to get a message into an iris cube based on a grib message key having a specific value. Thank you
You can get at anything that the gribapi understands through the low-level grib interface in iris-grib, which is the iris_grib.GribMessage class.
Typically you would use for msg in GribMessage.messages_from_filename(xxx): and then access it like e.g. msg.sections[4]['productDefinitionTemplateNumber']; msg.sections[4]['parameterNumber'] and so on.
You can use this to identify required messages, and then convert to cubes with iris_grib.load_pairs_from_fields().
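As a rough sketch of that filter-then-convert flow (untested; the section number, key name and wanted value are placeholders, and GribMessage is imported from iris_grib.message here; adjust to however your iris-grib version exposes it):

import iris_grib
from iris_grib.message import GribMessage

def cubes_where(filename, section, key, wanted):
    # Keep only the messages whose key has the required value ...
    selected = (msg for msg in GribMessage.messages_from_filename(filename)
                if msg.sections[section][key] == wanted)
    # ... and let iris-grib translate just those messages into cubes.
    for cube, message in iris_grib.load_pairs_from_fields(selected):
        yield cube

# e.g. cubes = list(cubes_where("forecast.grib2", 4, "parameterNumber", 2))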
However, Iris-grib only knows how to translate specific encodings into cubes: it is quite strict about exactly what it recognises, and will fail on anything else. So if your data uses any unrecognised templates or data encodings it will definitely fail to load.
I'm just anticipating that you may have something unusual here, so that might be an issue?
You can possibly check your expected message contents against the translation code at iris_grib:_load_convert.py, starting at the convert() routine.
To get an Iris cube out of something not yet supported, you would either:
(a) extend the translation rules (i.e. a GitHub PR), or
(b) sometimes you can modify the message so that it looks like something that can be recognised.
Failing that, you can
(c) simply build an Iris cube yourself from the data found in your GribMessage: that can be a little simpler than using 'gribapi' directly (possibly not, depending on detail).
If you have a problem like that, you should definitely raise it as an issue on the GitHub project (iris-grib issues) and we will try to help.
P.S. As you have registered an interest in Python 3, you may want to be aware that the newer "ecCodes" replacement for gribapi should shortly be available, making Python 3 support for grib data possible at last.
However, the Python 3 version is still in beta and we are presently experiencing some problems with it (now raised with ECMWF), so it is still almost-but-not-quite achievable.

TFS Query results with a list of linked work item IDs in Excel?

How can I get a list of linked work item IDs for a set of work items?
Excel-hosted queries preferred. API Sample is acceptable.
Direct DB table query is acceptable (read-only and unsupported of course!)
Many thanks in advance! -Zephan
MORE INFORMATION
UPDATE: No answers for my original question, so I'm broadening the scope of acceptable answers as follows:
An answer for TFS2015 (migrating very shortly) or TFS2013 (potentially useful for TFS2015) is preferred over TFS2010.
Coding is acceptable if there are any APIs or PowerShell cmdlets (MS or community).
Connecting directly (read-only!) to TFS DB tables is acceptable (source tables and related relationship link table names). Yes, directly referencing TFS DB tables is VERY unsupported, read-only, and "AT YOUR OWN RISK." Still beats having to manually copy/paste data or reconstruct list of links in Excel.
ORIGINAL QUESTION & DETAILS
My team uses TFS2010 (soon 2013, or hopefully 2015) and VS2010-2015. I need to support traceability reports and analyze/quantify our coverage of ~300 Test Case work items linked to ~400 Requirement work items. Direct Link and Tree queries are close, but don't give me related links on the same row as the parent work item. Many thanks in advance for your suggestions and any related code fragments.
Example:
3 test cases (Test1, Test2, Test3)
4 Requirements (Req1, Req2, Req3, Req4)
For simplicity let's just use TFS work item IDs to represent each TestN and ReqN. In actuality, I have a keyword to identify my validation requirements (separate from the 1,000s of other requirements in this Team Project). The only Test Case work items I care about for this problem are those linked to one or more Validation Requirements (for traceability).
Scenarios:
1:1 (simple) Test1 is linked to Req1
1:2 (1:n) Test2 is linked to Req2 and Req3
2:1 (n:1) Test3 (and Test2) are both linked to Req3
0:1 (Requirement missing Test coverage) Req4 has no test case links
I have a good coverage-gap query already: a Direct Link query for all Requirements with the "linking filters" set to "Only return items that do not have the specified links".
Desired output (all tests with list of related work items):
|Test1 | Req1 |
|Test2 | Req2, Req3 |
|Test3 | Req3 |
For row #2 I am OK with other separators or even entire list using same separator (.CSV or TAB delimited).
Skip right to the answer now if you have a tidy one. If not, I've added considerable RELATED RESEARCH info below to help kick-start an idea that fits the need, especially since this hasn't been discoverably solved in the last 5 years :-).
RELATED RESEARCH (loooong but may be useful)
1. Visual Studio Queries
Flat queries should support a list of linked items out of the box... but they do not. The RelatedLinkCount field is handy for knowing whether there are any links to chase, but that's it for flat queries.
Direct Link queries give a list of all direct links, but the related IDs are on rows below the parent work item. I am seriously considering creating a formula to look at the next X rows to build a list of IDs, but this would be fragile, especially when more than 3 requirements are linked to the same test. Still, it might solve 80% of my tracing needs.
Tree queries also show links, but on different rows. Additionally, they tend to follow just one link type. Ideally I will need a list of User Requirements linked to Functional Requirements linked to Test Case(s).
2. Tools / Plug-ins
SmartExcel4TFS (eDEVTech, http://www.modernrequirements.com/smartexcel4tfs/) supports 3 reports, but none gets me the core data I need in an easily used format. At least it is FREE if you have an MSDN Premium subscription.
The Requirements to Tests Trace Matrix is super-interesting. Alas, right now I need to go the other way (Requirements linked to a given test case). Also, it merges cells and has sub-sections that are hard to manipulate, I think. (I may revisit this option though.)
The Intersection Traceability Matrix report is WAY too wide for a full 300 x 400 grid :-O.
The Work Item Decomposition Matrix also didn't give me the desired contents (though frankly I've forgotten this report's layout since I checked it ~1 month ago).
3. TFS API calls
I have actually avoided this route in favor of a native Excel solution... but if I can get an example of Excel VBA code (or other code with a link showing how to call it from within Excel) I may go this route. At this point I don't have time to dig into rolling my own... but this would be cool, assuming performance is acceptable.
Relevant API/code fragments (a rough REST sketch follows below):
Retrieving TFS Results from a Tree Query (Blogs.msdn.com 2012.02.22) - Looks like this would get me the data I need, but it is not in Excel so I'd need a bridge example of some sort calling this within Excel.
Retrieving work items and their linked work items in a single query using the TFS APIs (stackoverflow.com 2012.01.12) - Also looks very promising, but not connected to Excel. Gives hints for 2 level and 3 level nested links and performance consideration (don't make second call for each item returned!)
Retrieving work items using the Team Foundation Server API (pwee167.github.io 2012.09.18) - Excellently written introductory walkthrough blog posting to learn how to build an (ASP.Net MVC3) app that calls TFS APIs to run Flat or Tree queries. Start here if writing C# (which I could do but don't have time/justification unless easy example to integrate with Excel).
How can I query work items and their linked changesets in TFS? (stackoverflow.com 2011.05.10) - I don't need changesets, but this has code to instantiate a new TfsTeamProjectCollection, which might work directly in Excel VBA (assuming the proper reference is found and added):
var projectCollection = new TfsTeamProjectCollection(
    new Uri("http://localhost:8080/tfs"),
    new UICredentialsProvider());
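If the TFS 2015 migration happens first, my best guess is that the Work Item Tracking REST API's $expand=relations option could return each work item together with its linked IDs in one call. Below is an untested Python sketch; the collection URL, work item IDs and credentials are placeholders, and on-prem TFS may need NTLM or a personal access token rather than basic auth:

import requests

TFS = "http://localhost:8080/tfs/DefaultCollection"  # placeholder collection URL
ids = "101,102,103"                                  # placeholder Test Case IDs

resp = requests.get(
    TFS + "/_apis/wit/workitems",
    params={"ids": ids, "$expand": "relations", "api-version": "1.0"},
    auth=("DOMAIN\\user", "password"),  # placeholder credentials
)
resp.raise_for_status()

for wi in resp.json()["value"]:
    # Each relation's url ends with the linked work item's ID.
    linked = [rel["url"].rsplit("/", 1)[-1] for rel in wi.get("relations", [])]
    print("|{0} | {1} |".format(wi["id"], ", ".join(linked)))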
OK, that's everything I have gathered on this problem. Please help by contributing the missing magic tool/snippet, or follow the info above to build that last bit I have not had time to prototype and debug. Many thanks in advance!! -Zephan
