Why am I getting an empty list from the call to ossie.utils.sb.catalog() in the RedhawkSDR manual's initial example of the Python Sandbox?

I have just installed a Dockerized version of Redhawk from this Git repo: docker-redhawk-ubuntu
I'm attempting to work my way through the first sandbox exercise in the Redhawk manual (Redhawk-Manual), but am encountering the following difficulty. The first two steps in this exercise are:
>>> from ossie.utils import sb
>>> sb.catalog()
['rh.HardLimit', 'rh.SigGen', ...]
However, the response I get from the call to sb.catalog() is:
>>> sb.catalog()
[]
What am I failing to see here? How do I need to set up/initialize things so that I get the correct response from the call to sb.catalog()?

ValueError: Comments are not supported by the python backend

The ijson module has a documented option allow_comments=True, but when I include it,
an error message is produced:
ValueError: Comments are not supported by the python backend
Below is a transcript using the file test.py:
import ijson
for o in ijson.items(open(0), 'item'):
    print(o)
Please note that I have no problem with a similar documented option, multiple_values=True.
Transcript
$ python3 --version
Python 3.10.9
$ python3 test.py <<< [1,2]
1
2
# Now change the call to: ijson.items(open(0), 'item', allow_comments=True)
$ python3 test.py <<< [1,2]
Traceback (most recent call last):
  File "/Users/user/test.py", line 5, in <module>
    for o in ijson.items(open(0), 'item', allow_comments=True):
  File "/usr/local/lib/python3.10/site-packages/ijson/utils.py", line 51, in coros2gen
    f = chain(events, *coro_pipeline)
  File "/usr/local/lib/python3.10/site-packages/ijson/utils.py", line 29, in chain
    f = coro_func(f, *coro_args, **coro_kwargs)
  File "/usr/local/lib/python3.10/site-packages/ijson/backends/python.py", line 284, in basic_parse_basecoro
    raise ValueError("Comments are not supported by the python backend")
ValueError: Comments are not supported by the python backend
$
Take a look at the Backends section of the documentation, which says:
Ijson provides several implementations of the actual parsing in the form of backends located in ijson/backends:
yajl2_c: a C extension using YAJL 2.x. This is the fastest, but might require a compiler and the YAJL development files to be present when installing this package. Binary wheel distributions exist for major platforms/architectures to spare users from having to compile the package.
yajl2_cffi: wrapper around YAJL 2.x using CFFI.
yajl2: wrapper around YAJL 2.x using ctypes, for when you can’t use CFFI for some reason.
yajl: deprecated YAJL 1.x + ctypes wrapper, for even older systems.
python: pure Python parser, good to use with PyPy
And later on in the FAQ it says:
Q: Are there any differences between the backends?
...
The python backend doesn't support allow_comments=True. It also internally works with str objects, not bytes, but this is an internal detail that users shouldn't need to worry about, and might change in the future.
If you want support for allow_comments=True, you need to be using one of the yajl based backends. According to the docs:
Importing the top level library as import ijson uses the first available backend in the same order of the list above, and its name is recorded under ijson.backend. If the IJSON_BACKEND environment variable is set its value takes precedence and is used to select the default backend.
You'll need the necessary libraries, etc., installed on your system in order for this to work.
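As a rough sketch (assuming the yajl2_c backend and its YAJL library are actually installed on your system), either approach below should give you a backend that accepts allow_comments=True:

# Minimal sketch, assuming a YAJL-based backend (e.g. yajl2_c) is installed.
# Option 1: select it via the environment variable before importing ijson.
import os
os.environ['IJSON_BACKEND'] = 'yajl2_c'   # must be set before "import ijson"
import ijson
print(ijson.backend)                      # confirm which backend was picked up

# Option 2: import a specific backend module directly; every backend exposes
# the same functions (items, parse, basic_parse, ...).
import ijson.backends.yajl2_c as ijson_yajl

# Binary input is the safest choice for the C backend.
for o in ijson_yajl.items(open(0, 'rb'), 'item', allow_comments=True):
    print(o)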

How can I use Brightway2 with US LCI database?

Short version:
I am trying to upload the US LCI database to Brightway2 and I am failing miserably. Has anyone succeeded? If so, could you share it with me? :D
Long version:
I am following the IO - Importing the US LCI database notebook and I am having a lot of problems. I am aware that, as the notebook indicates, it is a work in progress. Anyhow, I wanted to give it a try:
I tried uploading every ecospold version database found here, following the method from the notebook. The only one that gave me similar results was version FY20.Q3.02. However, right off the bat I get the following differences/errors:
Just as in the notebook, I get this error: Couldn't apply strategy link_technosphere_by_activity_hash: Object in source database can't be uniquely linked to target database, along with two activities that are linked. When I follow the instructions for ignoring these datasets, it throws that error over and over again.
Trying to move on with the tutorial, I get more errors and at the end I end up with all exchanges unlinked:
633 datasets
37513 exchanges
37505 unlinked exchanges
Finally, after running the code in line [15]:
import functools
f = functools.partial(link_iterable_by_fields,
                      other=Database(config.biosphere),
                      kind='biosphere'
)
sp.apply_strategy(f)
sp.statistics(f)
I end up with:
0 datasets
0 exchanges
0 unlinked exchanges
Which is hilarious and sad at the same time. Since I am new to Python and BW, my troubleshooting is clumsy and probably erroneous (I promise I googled a lot and went through the code). I concluded I am failing and it is time to ask questions:
Has anybody succeeded uploading the US LCI database to Brightway2?
If so, how? Which file did you use?
Thank you!!!!
This is an excellent question. I have added text to the offending notebook to note that it is obsolete.
In general, I think trying to import the ecospold files is a fool's errand: although they are labeled ecospold2, they are actually ecospold1 (which is a totally different format):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ecoSpold xmlns="http://www.EcoInvent.org/EcoSpold01">
The most recent export also raises an error when I try the ecospold1 importer:
AttributeError: no such child: {http://www.EcoInvent.org/EcoSpold01}modellingAndValidation
This is a required attribute in ecospold1.
I think the best way forward would be to consume the JSON-LD directly. Note that it is important not to run bw2setup(), as you would also want to use their list of elementary flows and LCIA methods. Currently the experimental JSON-LD importer fails because the provided datasets need allocation, but don't provide a set of consistent allocation methods. When I use the git checkout of bw2io and do the following:
uslci = JSONLDImporter(
    "/Users/cmutel/Downloads/National_Renewable_Energy_Laboratory-USLCI_Database/",
    "US LCI",
    preferred_allocation="CAUSAL_ALLOCATION"
)
uslci.apply_strategies()
I get the following error:
UnallocatableDataset: We currently only support exchange-specific CAUSAL_ALLOCATION
This is fixable, but someone would need to step through this and fix the allocation procedure, and I don't have the time to do that now.

Jupyter notebook python library testbook not giving any results

Here's my jupyter notebook's cell 1 (notebook is called tested.ipynb)
def func(a, b):
    return a + b
Here's the testbook testing python code (tester.py):
import testbook

@testbook.testbook('tested.ipynb', execute=True)
def test_func(tb):
    func = tb.ref("func")
    assert func(1, 2) == 0
I then run the following command from terminal:
python tester.py
It should fail the unit test. But I'm not getting any output at all. No failures, no messages. How do I make the failure appear?
That's because you still need to use pytest, or another unit testing library, to run your tests. Note under 'Features' it says:
"Works with any unit testing library - unittest, pytest or nose" -SOURCE
Testbook just makes writing the unit tests easier. See here under 'Unit testing with testbook' for an example of the process of using testbook in your toolchain with pytest, although bear in mind a lot of the syntax doesn't match the current documentation. So instead of running python tester.py from the terminal, run the following command from the terminal if you've installed pytest:
pytest tester.py
One thing I note is that your import and decorator lines don't match the current documentation. Nevertheless, your code works when using pytest tester.py. However, it may be best to adopt the current best practices illustrated in the documentation to keep your code more robust as development continues.
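For reference, here is a minimal sketch of tester.py in the currently documented style (the import and decorator below follow the testbook docs; the notebook name is the one from the question). Run it with pytest tester.py so the failure is actually reported:

# tester.py -- sketch in the style of the current testbook documentation
from testbook import testbook

@testbook('tested.ipynb', execute=True)
def test_func(tb):
    func = tb.ref("func")
    assert func(1, 2) == 0   # fails on purpose: func(1, 2) returns 3, and pytest will report it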

Python 3.6.4 - urllib.request.urlopen certificate verify failed error

Trying to teach myself how to use urllib.request in Python 3.6.4; however, I can't seem to get a basic example to work. Below is the code that I am running, copied straight from Python's documentation at this link.
>>> import urllib.request
>>> with urllib.request.urlopen('http://www.python.org/') as f:
...     print(f.read(300))
A picture of the error I get is here. It tells me that the SSL certificate verify failed (I'm unsure of what this means).
I don't think there is anything wrong with the code I am running, but maybe I'm missing a step to setting up the environment. From what I can tell, it should be as simple as running those few lines of code. Any help is greatly appreciated.
As a quick background, I've taken 2 computer science courses at university, so I am by no means an expert, but I do have a pretty solid understanding of basic programming with Python. I'm trying to use this to scrape data in conjunction with BeautifulSoup.

pyldavis Unable to view the graph

I am trying to visually depict my topics in Python using pyLDAvis. However, I am unable to view the graph. Do we have to view the graph in the browser, or will it pop up upon execution? Below is my code:
import pyLDAvis
import pyLDAvis.gensim as gensimvis
print('Pyldavis ....')
vis_data = gensimvis.prepare(ldamodel, doc_term_matrix, dictionary)
pyLDAvis.display(vis_data)
The program stays in execution mode continuously when I run the above commands. Where should I view my graph? Or where will it be stored? Is it integrated only with the IPython notebook? Kindly guide me through this.
P.S My python version is 3.5.
This does not work:
pyLDAvis.display(vis_data)
This will work for you:
pyLDAvis.show(vis_data)
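As a sketch of what that looks like in a plain script (reusing the ldamodel, doc_term_matrix and dictionary objects from the question):

import pyLDAvis
import pyLDAvis.gensim as gensimvis

vis_data = gensimvis.prepare(ldamodel, doc_term_matrix, dictionary)
# show() starts a small local web server and opens the visualization in the
# default browser; it blocks until you interrupt it. display() only returns
# HTML meant for rendering inside an IPython/Jupyter notebook.
pyLDAvis.show(vis_data)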
I'm facing the same problem now.
EDIT:
My script looks as follows:
first part:
import pyLDAvis
import pyLDAvis.sklearn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

print('start script')
tf_vectorizer = CountVectorizer(strip_accents='unicode', stop_words='english',
                                lowercase=True, token_pattern=r'\b[a-zA-Z]{3,}\b',
                                max_df=0.5, min_df=10)
dtm_tf = tf_vectorizer.fit_transform(docs_raw)
lda_tf = LatentDirichletAllocation(n_topics=20, learning_method='online')
print('fit')
lda_tf.fit(dtm_tf)
second part:
print('prepare')
vis_data = pyLDAvis.sklearn.prepare(lda_tf, dtm_tf, tf_vectorizer)
print('display')
pyLDAvis.display(vis_data)
The problem is in the line "vis_data = (...)". If I run the script, it will print 'prepare' and keep on running after that without printing anything else (so it never reaches the line "print('display')").
Funny thing is, when I just run the whole script it gets stuck on that line, but when I run the first part, go to my console, and execute purely the single line "vis_data = pyLDAvis.sklearn.prepare(lda_tf, dtm_tf, tf_vectorizer)", it is executed in a couple of seconds.
As for the graph, I saved it as HTML ("simple") and used the HTML file to view the graph.
I ran into the same problem (I use PyCharm as IDE). The problem is that pyLDAvis is developed for IPython (see the docs, https://media.readthedocs.org/pdf/pyldavis/latest/pyldavis.pdf, page 3).
My fix/workaround:
make a dict of lda_tf, dtm_tf, tf_vectorizer (e.g., pyLDAviz_dict)
dump the dict to a file (e.g., mydata_pyLDAviz.pkl)
read the pkl file into the notebook (I did get some deprecation info from pyLDAvis, but that had no effect on the end result)
play around with pyLDAvis in the notebook
if you're happy with the view, dump it into HTML
The cause is (most likely) that pyLDAvis expects continuous user interaction (including a user-initiated "exit"). However, I'd rather dump data from a smart IDE and read that into Jupyter than develop/code in a Jupyter notebook. That's pretty much like going back to before-emacs times.
From experience, this approach works quite nicely for other plotting routines; a minimal code sketch of the workflow follows below.
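A rough sketch of that workaround (the variable and file names are just the hypothetical ones mentioned above):

# In the IDE script: dump the fitted objects to a pickle file.
import pickle

with open('mydata_pyLDAviz.pkl', 'wb') as f:
    pickle.dump({'lda_tf': lda_tf,
                 'dtm_tf': dtm_tf,
                 'tf_vectorizer': tf_vectorizer}, f)

# In a Jupyter notebook: read the pickle back, prepare the visualization,
# and either display it inline or dump it to a standalone HTML file.
import pickle
import pyLDAvis
import pyLDAvis.sklearn

with open('mydata_pyLDAviz.pkl', 'rb') as f:
    d = pickle.load(f)

vis_data = pyLDAvis.sklearn.prepare(d['lda_tf'], d['dtm_tf'], d['tf_vectorizer'])
pyLDAvis.save_html(vis_data, 'lda_visualization.html')   # view this file in a browser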
If you received a module error for pyLDAvis.gensim, then try this import instead:
import pyLDAvis.gensim_models
You get the error because of a new version update: the module was renamed to gensim_models.
