Error when running meta-analysis of LCIA methods notebook - brightway

I am learning Brightway2 and have been working through the notebooks from the brightway2 GitHub repo. So far all the notebooks I have tried have run smoothly, but I am stuck on the one concerning meta-analysis of LCA methods, specifically when running cell [8].
This cell runs 50,000 LCA calculations and times them. Here is the code:
from time import time
start = time()
lca_scores, methods, activities = get_lots_of_lca_scores()
print(time() - start)
Running this enters a never-ending loop, with the following message repeating:
Traceback (most recent call last):
File "/Users/.../miniconda3/envs/bw2_rosetta/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/Users/.../miniconda3/envs/bw2_rosetta/lib/python3.9/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/.../miniconda3/envs/bw2_rosetta/lib/python3.9/multiprocessing/pool.py", line 114, in worker
task = get()
File "/Users/.../miniconda3/envs/bw2_rosetta/lib/python3.9/multiprocessing/queues.py", line 368, in get
return _ForkingPickler.loads(res)
AttributeError: Can't get attribute 'many_activities_one_method' on <module '__main__' (built-in)>
I tried looking at the functions being called, def many_activities_one_method(activities, method) and def get_lots_of_lca_scores(), but I had no luck, and when I make changes I have the feeling I am making things worse.
Here is my question: has anyone run this notebook successfully? What could I be missing?
*Note: I have done the prerequisite notebook Getting started with Brightway2.
Thank you!

The notebook has been updated to remove this error.
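For context: this AttributeError is the classic symptom of multiprocessing workers failing to unpickle a function that was defined in a notebook's __main__, which happens on platforms that spawn rather than fork worker processes (Windows, and macOS on Python 3.8+). A minimal sketch of the usual workaround, using a hypothetical module name workers.py and stand-in scoring logic, is to move the worker function into an importable file:

# workers.py -- hypothetical module; worker functions must live in an
# importable file, not the notebook's __main__, so spawned children
# can re-import them by name.
def many_activities_one_method(activities, method):
    # stand-in for the notebook's real LCA scoring logic
    return [(activity, method) for activity in activities]

# In the notebook or a driver script:
from multiprocessing import Pool
from workers import many_activities_one_method

if __name__ == "__main__":  # guard required under the spawn start method
    jobs = [(["act1", "act2"], m) for m in ("method_a", "method_b")]
    with Pool() as pool:
        results = pool.starmap(many_activities_one_method, jobs)
    print(results)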

Related

Kernel restarts when training a sklearn regression model in Sagemaker

I have been trying to train a regression model on big data with AWS SageMaker.
The instance I used on my last try was ml.m5.12xlarge, and I was confident it would work this time, but no: I still get the error.
A few minutes into training, I get this error in CloudWatch:
[E 07:00:35.308 NotebookApp] KernelRestarter: restart callback <bound method ZMQChannelsHandler.on_kernel_restarted of ZMQChannelsHandler(f92aff37-be6b-48df-a5f5-522bcc6dd072)> failed
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.7/site-packages/jupyter_client/restarter.py", line 86, in _fire_callbacks
callback()
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.7/site-packages/notebook/services/kernels/handlers.py", line 473, in on_kernel_restarted
self._send_status_message('restarting')
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.7/site-packages/notebook/services/kernels/handlers.py", line 469, in _send_status_message
self.write_message(json.dumps(msg, default=date_default))
File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.7/site-packages/tornado/websocket.py", line 337, in write_message
raise WebSocketClosedError()
tornado.websocket.WebSocketClosedError
Does anyone know what the error could be?

Python Firestore insert return error 503 DNS resolution failed

I have a problem executing my Python script from crontab; the script performs an insert operation into the Firestore database.
db.collection(u'ab').document(str(row["Name"])).collection(str(row["id"])).document(str(row2["id"])).set(self.packStructure(row2))
When I execute it normally with the python3 script.py command it works, but when I execute it from crontab it returns the following error:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/axatel/angel_bridge/esportazione_firebase/main.py", line 23, in <module>
dato.getDati(dato, db, cursor, cursor2, fdb, select, anagrafica)
File "/home/axatel/angel_bridge/esportazione_firebase/dati.py", line 19, in getDati
db.collection(u'ab').document(str(row["Name"])).collection(str(row["id"])).document(str(row2["id"])).set(self.packStructure(row2))
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/document.py", line 234, in set
write_results = batch.commit()
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/batch.py", line 147, in commit
metadata=self._client._rpc_metadata,
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/gapic/firestore_client.py", line 1121, in commit
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 DNS resolution failed for service: firestore.googleapis.com:443
I really don't understand what the problem is, because the database connection works every time the script is started either way.
Is there a fix for this kind of issue?
I found something that might be helpful. There is a nice troubleshooting guide, and one part of it seems to be related:
If your command works by invoking a runtime like python some-command.py perform a few checks to determine that the runtime
version and environment is correct. Each language runtime has quirks
that can cause unexpected behavior under crontab.
For python you might find that your web app is using a virtual
environment you need to invoke in your crontab.
I haven't seen such an error running the Firestore API, but this seems to match your issue.
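Following that advice, a quick diagnostic sketch is to log, from inside the script itself, which interpreter and environment it actually runs under, and compare a manual run against a crontab run. Nothing here is specific to Firestore; it uses only the standard library:

import os
import sys

# crontab typically runs with a minimal environment, so differences
# here often explain manual-vs-cron failures.
print("interpreter:", sys.executable)
print("PATH:", os.environ.get("PATH"))
print("cwd:", os.getcwd())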
I found the solution.
The problem occurred because the timeout sleep() value was lower than expected, so the database connection function started too early during the boot phase of the machine. Increasing this value to 45 or 60 seconds fixed the problem.
import sys
import time

import firebase_admin
from firebase_admin import credentials, firestore

def firebaseConnection():
    # firebase connection
    cred = credentials.Certificate('/database/axatel.json')
    firebase_admin.initialize_app(cred)
    fdb = firestore.client()
    if fdb:
        return fdb
    else:
        print("Error")
        sys.exit()

# time.sleep(10)  # old version
time.sleep(60)  # working version
fdb = firebaseConnection()
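As an alternative to tuning a fixed sleep, a sketch of a retry loop around the same firebaseConnection() helper (this assumes the helper is changed to raise on failure instead of exiting; the attempt count and delay are illustrative):

import sys
import time

def connect_with_retry(max_attempts=10, delay=5):
    # Retry instead of guessing a single boot-time sleep value.
    for attempt in range(1, max_attempts + 1):
        try:
            return firebaseConnection()
        except Exception as exc:
            print(f"connection attempt {attempt} failed: {exc}")
            time.sleep(delay)
    sys.exit("could not connect to Firestore")

fdb = connect_with_retry()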

Airflow scheduler starts up with exception when parallelism is set to a large number

I am new to Airflow and am trying to use it to build a data pipeline, but it keeps raising exceptions. My airflow.cfg looks like this:
executor = LocalExecutor
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@localhost/airflow
sql_alchemy_pool_size = 5
parallelism = 96
dag_concurrency = 96
worker_concurrency = 96
max_threads = 96
broker_url = postgresql+psycopg2://airflow:airflow@localhost/airflow
result_backend = postgresql+psycopg2://airflow:airflow@localhost/airflow
When I start airflow webserver -p 8080 in one terminal and then airflow scheduler in another, the scheduler fails with the following exception. (It fails when I set the parallelism above a certain value and works fine otherwise; the threshold may be machine-specific, but at least we know the failure is caused by the parallelism setting.) I have tried running 1000 Python processes on my computer and it worked fine, and I have configured Postgres to allow a maximum of 500 database connections, but it still gives me the errors.
[2019-11-20 12:15:00,820] {dag_processing.py:556} INFO - Launched DagFileProcessorManager with pid: 85050
Process QueuedLocalWorker-18:
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py", line 811, in _callmethod
conn = self._tls.connection
AttributeError: 'ForkAwareLocal' object has no attribute 'connection'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
self.run()
File "/Users/edward/.local/share/virtualenvs/avat-utils-JpGzQGRW/lib/python3.7/site-packages/airflow/executors/local_executor.py", line 111, in run
key, command = self.task_queue.get()
File "<string>", line 2, in get
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py", line 815, in _callmethod
self._connect()
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/managers.py", line 802, in _connect
conn = self._Client(self._token.address, authkey=self._authkey)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 492, in Client
c = SocketClient(address)
File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py", line 619, in SocketClient
s.connect(address)
ConnectionRefusedError: [Errno 61] Connection refused
Thanks
Update: I tried running it in PyCharm; there it worked fine, but in the terminal it sometimes fails and sometimes doesn't.
I had the same issue. It turns out I had set max_threads = 10 in airflow.cfg in combination with LocalExecutor. Switching to max_threads = 2 solved the issue.
I found out a few days ago that Airflow actually starts all of the parallel processes at startup. I had been thinking of the max_* and parallelism settings as capacity limits, but they are the number of processes it runs at startup. So it looks like this issue is caused by insufficient resources on the computer.
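Putting the two answers together: since these settings control how many worker processes are launched at startup, a cautious approach is to start low and raise values gradually while watching memory and Postgres connections. A sketch of conservative airflow.cfg values (illustrative, not prescriptive):

executor = LocalExecutor
parallelism = 32
dag_concurrency = 16
max_threads = 2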

AttributeError: module 'resource' has no attribute 'getpagesize'

I am trying to use the TensorFlow Object Detection API, following the steps in this link:
https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html#tf-models-install
When I try to open the Object Detection Jupyter notebook with jupyter notebook,
I get the exception below:
Traceback (most recent call last):
File "/usr/local/bin/jupyter-notebook", line 7, in <module>
from notebook.notebookapp import main
File "/home/dinesh/.local/lib/python3.6/site-packages/notebook/notebookapp.py", line 79, in <module>
from .base.handlers import Template404, RedirectWithParams
File "/home/dinesh/.local/lib/python3.6/site-packages/notebook/base/handlers.py", line 32, in <module>
import prometheus_client
File "/home/dinesh/.local/lib/python3.6/site-packages/prometheus_client/__init__.py", line 7, in <module>
from . import process_collector
File "/home/dinesh/.local/lib/python3.6/site-packages/prometheus_client/process_collector.py", line 12, in <module>
_PAGESIZE = resource.getpagesize()
AttributeError: module 'resource' has no attribute 'getpagesize'
I am using
Python - 3.6.3
Jupyter - 1.0.0
How can I overcome this exception?
I got a similar error. My project contained these modules (folders):
model
resource (renamed to resources)
service
So I changed the name of the resource module to resources (rename it to any appropriate module name).
I had the same error. After renaming my resource module on the PYTHONPATH, it worked. Check your PYTHONPATH: is there a resource module on it?
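One way to check for the shadowing both answers describe is to ask Python which resource module it would actually import; a minimal sketch using only the standard library:

import importlib.util

spec = importlib.util.find_spec("resource")
if spec is None:
    print("no 'resource' module found (normal on Windows)")
else:
    # On Linux/macOS this should point into the standard library;
    # a path inside your project means a local folder shadows it.
    print(spec.origin or list(spec.submodule_search_locations))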
I had a similar issue when starting Jupyter Notebook on Windows 10.
When I initially ran the regular startup script, a Windows terminal opened and immediately closed, too fast to see any error messages. So I opened a Windows 10 PowerShell terminal and ran
conda update conda
and
conda update --all
then I ran
jupyter-notebook at the Windows prompt. The results were:
Traceback (most recent call last):
File "E:\Users\Bob\anaconda3\Scripts\jupyter-notebook-script.py", line 6, in <module>
from notebook.notebookapp import main
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\notebookapp.py", line 76, in <module>
from .base.handlers import Template404, RedirectWithParams
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\base\handlers.py", line 24, in <module>
import prometheus_client
File "E:\Users\Bob\anaconda3\lib\site-packages\prometheus_client\__init__.py", line 3, in <module>
from . import (
File "E:\Users\Bob\anaconda3\lib\site-packages\prometheus_client\process_collector.py", line 11, in <module>
_PAGESIZE = resource.getpagesize()
AttributeError: module 'resource' has no attribute 'getpagesize'
I opened process_collector.py in site-packages\prometheus_client in Notepad++ and changed
line 9: import resource to import resources
and
line 11: _PAGESIZE = resource.getpagesize() to _PAGESIZE = resources.getpagesize()
I searched for other instances of resource and found none. I then saved the file and reran jupyter-notebook at the Windows terminal prompt.
This time I got:
Traceback (most recent call last):
File "E:\Users\Bob\anaconda3\Scripts\jupyter-notebook-script.py", line 10, in
sys.exit(main())
File "E:\Users\Bob\anaconda3\lib\site-packages\jupyter_core\application.py", line 254, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "E:\Users\Bob\anaconda3\lib\site-packages\traitlets\config\application.py", line 844, in launch_instance
app.initialize(argv)
File "E:\Users\Bob\anaconda3\lib\site-packages\traitlets\config\application.py", line 87, in inner
return method(app, *args, **kwargs)
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\notebookapp.py", line 2126, in initialize
self.init_resources()
File "E:\Users\Bob\anaconda3\lib\site-packages\notebook\notebookapp.py", line 1697, in init_resources
old_soft, old_hard = resource.getrlimit(resource.RLIMIT_NOFILE)
AttributeError: module 'resource' has no attribute 'getrlimit'
Still having Notepad++ open, I opened notebookapp.py in site-packages\notebook and searched for resource. I found and changed the following lines:
line 37: import resource to import resources
line 40: resource = None to resources = None
line 1036: resource is None to resources is None
line 1040: soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE) to soft, hard = resources.getrlimit(resources.RLIMIT_NOFILE)
line 1693: if resource is None: to if resources is None:
line 1697: old_soft, old_hard = resource.getrlimit(resource.RLIMIT_NOFILE) to old_soft, old_hard = resources.getrlimit(resources.RLIMIT_NOFILE)
line 1706: resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard)) to resources.setrlimit(resources.RLIMIT_NOFILE, (soft, hard))
I searched for other instances of resource and found none. I then saved the notebookapp.py file and reran jupyter-notebook at the Windows terminal prompt. This time Jupyter Notebook opened with the expected Files tab displayed. I quit Jupyter Notebook and restarted using the normal link to the startup script, and it worked as expected.
I am not sure what caused this issue. I did not intentionally update anything before I saw it. Yesterday Jupyter Notebook worked as expected; today, when I tried to run it, I got the flickering window described above.
[Quick look for root cause]
Check whether you have set PYTHONPATH on your system:
rename it momentarily
run "jupyter notebook"
[Fix the issue]
If the issue is resolved by doing this, then:
search for a "resource" folder within all paths mentioned in PYTHONPATH
if such a folder is found, rename/refactor it to another name, e.g. "resources"

CherryPy Autoloader dies with bcrypt

I have a pure CherryPy project and want AuthTool to use the bcrypt module for password processing.
But when I put an import bcrypt line into AuthTool.py, CherryPy tells me:
[17/Nov/2017:17:55:57] ENGINE Error in background task thread function <bound method Autoreloader.run of <cherrypy.process.plugins.Autoreloader object at 0x0000026948C82E80>>.
AttributeError: cffi library '_bcrypt' has no function, constant or global variable named '__loader__'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\process\plugins.py", line 519, in run
self.function(*self.args, **self.kwargs)
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\process\plugins.py", line 651, in run
for filename in self.sysfiles() | self.files:
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\process\plugins.py", line 635, in sysfiles
hasattr(m, '__loader__') and
SystemError: <built-in function hasattr> returned a result with an error set
Exception in thread Autoreloader:
AttributeError: cffi library '_bcrypt' has no function, constant or global variable named '__loader__'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\process\plugins.py", line 519, in run
self.function(*self.args, **self.kwargs)
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\process\plugins.py", line 651, in run
for filename in self.sysfiles() | self.files:
File "C:\Users\Serge\AppData\Local\Programs\Python\Python36\lib\site-packages\cherrypy\process\plugins.py", line 635, in sysfiles
hasattr(m, '__loader__') and
SystemError: <built-in function hasattr> returned a result with an error set
After that everything works fine except the Autoreloader. I have bcrypt (3.1.4) and CherryPy (11.1.0).
Can I do something to fix this autoreloader issue? Merci.
You could disable the autoreloader: set 'engine.autoreload.on' to False. Of course you will lose automatic reloading when monitored files change, so you'll need to restart the server manually on each change. Another way would be to put the server in production mode. See the available modes and what they do at: http://docs.cherrypy.org/en/latest/config.html#id14
BTW: I am seeing a similar issue with cherrypy (13.0.0) and cryptography (2.1.4)
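For reference, a minimal sketch of the first workaround from the answer above; the config key is engine.autoreload.on in the CherryPy docs, but it is worth verifying against your version:

import cherrypy

# Turn off the Autoreloader so it stops scanning loaded modules;
# code changes then require a manual server restart.
cherrypy.config.update({'engine.autoreload.on': False})

class Root:
    @cherrypy.expose
    def index(self):
        return "Hello"

cherrypy.quickstart(Root())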
