Python library "Crypto" conflict - python-3.x

I'm trying to integrate two frameworks and I'm installing the requirements for both of them, but it seems the 'Crypto' library is used by both frameworks, each depending on a different version. If I install the requirements for one of the frameworks, I get the first error:
Traceback (most recent call last):
File "dapp_bdb.py", line 134, in <module>
main()
File "dapp_bdb.py", line 112, in main
blockchain = LevelDBBlockchain(settings.chain_leveldb_path)
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Implementations/Blockchains/LevelDB/LevelDBBlockchain.py", line 190, in __init__
self.Persist(Blockchain.GenesisBlock())
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Implementations/Blockchains/LevelDB/LevelDBBlockchain.py", line 691, in Persist
account = accounts.GetAndChange(output.AddressBytes, AccountState(output.ScriptHash))
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Core/TX/Transaction.py", line 121, in AddressBytes
return bytes(self.Address, encoding='utf-8')
File "/home/ubuntu/.local/lib/python3.6/site-packages/neo/Core/TX/Transaction.py", line 111, in Address
return Crypto.ToAddress(self.ScriptHash)
File "/home/ubuntu/.local/lib/python3.6/site-packages/neocore/Cryptography/Crypto.py", line 103, in ToAddress
return scripthash_to_address(script_hash.Data)
File "/home/ubuntu/.local/lib/python3.6/site-packages/neocore/Cryptography/Helper.py", line 78, in scripthash_to_address
return base58.b58encode(bytes(outb)).decode("utf-8")
AttributeError: 'str' object has no attribute 'decode'
and with the requirements of the second framework, I get a different error:
exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "dapp_bdb.py", line 95, in custom_background_code
put_bdb("Hello world")
File "dapp_bdb.py", line 68, in put_bdb
fulfilled_creation_tx = bdb.transactions.fulfill(prepared_creation_tx, private_keys=private_key)
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/driver.py", line 270, in fulfill
return fulfill_transaction(transaction, private_keys=private_keys)
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/offchain.py", line 346, in fulfill_transaction
signed_transaction = transaction_obj.sign(private_keys)
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/common/transaction.py", line 823, in sign
PrivateKey(private_key) for private_key in private_keys}
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/common/transaction.py", line 823, in <dictcomp>
PrivateKey(private_key) for private_key in private_keys}
File "/home/ubuntu/.local/lib/python3.6/site-packages/bigchaindb_driver/common/transaction.py", line 817, in gen_public_key
public_key = private_key.get_verifying_key().encode()
File "/home/ubuntu/.local/lib/python3.6/site- packages/cryptoconditions/crypto.py", line 62, in get_verifying_key
return Ed25519VerifyingKey(self.verify_key.encode(encoder=Base58Encoder))
File "/home/ubuntu/.local/lib/python3.6/site-packages/nacl/encoding.py", line 90, in encode
return encoder.encode(bytes(self))
File "/home/ubuntu/.local/lib/python3.6/site-packages/cryptoconditions/crypto.py", line 15, in encode
return base58.b58encode(data).encode()
AttributeError: 'bytes' object has no attribute 'encode'
Any ideas how I can avoid this?

Looks like the cryptoconditions library is doing it wrong.
You should file a bug asking it to update the required version of base58 and to review all of its calls into it. The usual behavior in Python 3 is to return bytes from some_encoder_library.encode() and str from some_encoder_library.decode(). New versions of the base58 module follow this rule, although base58-encoded data never contains any special characters, of course. cryptoconditions is still written against the previous version, in which b58encode returned str.
Meanwhile, you can make local modifications to the installed library, or fork it and install your fork instead.
It is likely that everything will work fine with the .encode() call removed from the last line of the traceback (cryptoconditions/crypto.py, line 15).
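As a rough illustration of that local edit (a sketch only, based on the Base58Encoder class that appears in the traceback above, not an official fix), the encoder in cryptoconditions/crypto.py could be made tolerant of both old and new base58 releases:
import base58

class Base58Encoder(object):
    @staticmethod
    def encode(data):
        # Newer base58 releases already return bytes from b58encode();
        # older releases returned str, so only call .encode() in that case.
        encoded = base58.b58encode(data)
        return encoded if isinstance(encoded, bytes) else encoded.encode()
That way the newer base58 needed by neo/neocore can stay installed without breaking cryptoconditions.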

Related

DBT workflow on Databricks fails: AttributeError in object SeedNode

Today our dbt workflow in Databricks failed. The workflow runs as:
dbt run --target workflow --project-dir dbt/projectdir/ --profiles-dir dbt/
Any suggestions on what could be wrong or how to fix it?
Version reported in Databricks logs:
Running with dbt=1.4.1
The error message below:
'SeedNode' object has no attribute 'depends_on'
09:59:17 Traceback (most recent call last):
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/main.py", line 135, in main
results, succeeded = handle_and_check(args)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/main.py", line 198, in handle_and_check
task, res = run_from_args(parsed)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/main.py", line 245, in run_from_args
results = task.run()
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/task/runnable.py", line 454, in run
self._runtime_initialize()
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/task/runnable.py", line 165, in _runtime_initialize
super()._runtime_initialize()
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/task/runnable.py", line 94, in _runtime_initialize
self.load_manifest()
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/task/runnable.py", line 81, in load_manifest
self.manifest = ManifestLoader.get_full_manifest(self.config)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/parser/manifest.py", line 203, in get_full_manifest
manifest = loader.load()
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/parser/manifest.py", line 339, in load
self.parse_project(
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/parser/manifest.py", line 467, in parse_project
parser.parse_file(block)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/parser/base.py", line 425, in parse_file
self.parse_node(file_block)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/parser/base.py", line 386, in parse_node
self.render_update(node, config)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/parser/base.py", line 363, in render_update
self.update_parsed_node_config(node, config, context=context)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/parser/base.py", line 336, in update_parsed_node_config
get_rendered(hook.sql, context, parsed_node, capture_macros=True)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/clients/jinja.py", line 590, in get_rendered
return render_template(template, ctx, node)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/clients/jinja.py", line 545, in render_template
return template.render(ctx)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/jinja2/environment.py", line 1301, in render
self.environment.handle_exception()
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "", line 1, in top-level template code
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/jinja2/sandbox.py", line 393, in call
return __context.call(__obj, *args, **kwargs)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/clients/jinja.py", line 328, in call
with self.track_call():
File "/usr/lib/python3.9/contextlib.py", line 117, in enter
return next(self.gen)
File "/local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.9/site-packages/dbt/clients/jinja.py", line 319, in track_call
self.node.depends_on.add_macro(unique_id)
AttributeError: 'SeedNode' object has no attribute 'depends_on'
Got the same issue, but I am on Snowflake.
Seems this was a version issue. Explicitly setting the task to use an older version seems to have solved it:
dbt-core<=1.3.1
dbt-databricks<=1.3.1
This can be set in the Databricks workflow task settings.
I'm not sure which is the last version that would work, but 1.3.1 at least works in our case.
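For reference, if the task installs its libraries with pip, the same pins can be expressed as follows (the exact place to put them depends on how your workflow installs dbt):
pip install "dbt-core<=1.3.1" "dbt-databricks<=1.3.1"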

Python, pyinstaller raised an error as follows

When I ran the application, it raised the following error.
My Python version is 3.8.10 and my PyInstaller version is 4.4.
How do I deal with it?
Traceback (most recent call last):
File "run.py", line 360, in huaban.run.MainWorkPanel.start_download
File "run.py", line 594, in huaban.run.MainWorkPanel.real_run
File "scrapy\crawler.py", line 280, in __init__
File "scrapy\crawler.py", line 152, in __init__
File "scrapy\crawler.py", line 146, in _get_spider_loader
File "scrapy\spiderloader.py", line 67, in from_settings
File "scrapy\spiderloader.py", line 24, in __init__
File "scrapy\spiderloader.py", line 51, in _load_all_spiders
File "scrapy\utils\misc.py", line 83, in walk_modules
File "C:\soft\VirtualenvFiles\huaban\Lib\site-packages\PyInstaller\hooks\rthooks\pyi_rth_pkgutil.py", line 71, in _pyi_pkgutil_iter_modules
assert pkg_path.startswith(SYS_PREFIX)
TypeError: startswith first arg must be str or a tuple of str, not PureWindowsPath
Fixed in PyInstaller 4.5.
See here.
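Since you are on 4.4, upgrading should resolve it, e.g.:
pip install --upgrade "pyinstaller>=4.5"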

How can I compile the example xml file from the Open62541 tutorial?

I'm on chapter 11 of the official guide to the open62541 library. The HTML version is here. Before trying anything custom, I just want to try this feature in the most basic way by "compiling" their example XML file into C code, which can then be compiled with GCC and run as an OPC UA server. (If you would like to follow along, download the full source code from the main page; the nodeset compiler tool is in there.)
I'm in a Debian-based environment (CLI only). I made a copy of myNS.xml and saved it directly in the path ~/code/open62541-open62541-6249bb2/tools/nodeset_compiler/, which is also my current working directory in this example. I tried to use the nodeset compiler with exactly the same command that they use in the tutorial: python ./nodeset_compiler.py --types-array=UA_TYPES --existing ../../deps/ua-nodeset/Schema/Opc.Ua.NodeSet2.xml --xml myNS.xml myNS
The error message I got is this:
Traceback (most recent call last):
File "./nodeset_compiler.py", line 126, in <module>
ns.addNodeSet(xmlfile, True, typesArray=getTypesArray(nsCount))
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/nodeset.py", line 224, in addNodeSet
nodesets = dom.parseString(fileContent).getElementsByTagName("UANodeSet")
File "/usr/lib/python2.7/xml/dom/minidom.py", line 1928, in parseString
return expatbuilder.parseString(string)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 940, in parseString
return builder.parseString(string)
File "/usr/lib/python2.7/xml/dom/expatbuilder.py", line 223, in parseString
parser.Parse(string, True)
xml.parsers.expat.ExpatError: syntax error: line 1, column 0
Any idea what I might be doing wrong?
UPDATE:
Alright, I found out there was a problem with my Opc.Ua.NodeSet2.xml file, which I corrected. If you are following along and would like to grab the version of the file I have, you can get it here.
But now I have this issue:
INFO:__main__:Preprocessing (existing) ../../deps/ua-nodeset/Schema/Opc.Ua.NodeSet2.xml
INFO:__main__:Preprocessing myNS.xml
Traceback (most recent call last):
File "./nodeset_compiler.py", line 178, in <module>
ns.allocateVariables()
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/nodeset.py", line 322, in allocateVariables
n.allocateValue(self)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/nodes.py", line 291, in allocateValue
self.value.parseXMLEncoding(self.xmlValueDef, dataTypeNode, self)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 161, in parseXMLEncoding
val = self.__parseXMLSingleValue(el, parentDataTypeNode, parent)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 281, in __parseXMLSingleValue
extobj.value.append(extobj.__parseXMLSingleValue(ebodypart, parentDataTypeNode, parent, alias=None, encodingPart=e))
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 223, in __parseXMLSingleValue
alias=alias, encodingPart=enc[1], valueRank=enc[2] if len(enc)>2 else None)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 198, in __parseXMLSingleValue
t.parseXML(xmlvalue)
File "/root/code/open62541-open62541-6249bb2/tools/nodeset_compiler/datatypes.py", line 330, in parseXML
self.value = int(unicode(xmlvalue.firstChild.data))
ValueError: invalid literal for int() with base 10: ''
UPDATE_2:
I tried doing the same thing on my Windows laptop, and here is the error I got:
INFO:__main__:Preprocessing (existing) ../../deps/ua-nodeset/Schema/Opc.Ua.NodeSet2.xml
INFO:__main__:Preprocessing myNS.xml
Traceback (most recent call last):
File "./nodeset_compiler.py", line 178, in <module>
ns.allocateVariables()
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\nodeset.py", line 322, in allocateVariables
n.allocateValue(self)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\nodes.py", line 291, in allocateValue
self.value.parseXMLEncoding(self.xmlValueDef, dataTypeNode, self)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 161, in parseXMLEncoding
val = self.__parseXMLSingleValue(el, parentDataTypeNode, parent)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 281, in __parseXMLSingleValue
extobj.value.append(extobj.__parseXMLSingleValue(ebodypart, parentDataTypeNode, parent, alias=None, encodingPart=e))
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 223, in __parseXMLSingleValue
alias=alias, encodingPart=enc[1], valueRank=enc[2] if len(enc)>2 else None)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 198, in __parseXMLSingleValue
t.parseXML(xmlvalue)
File "C:\Users\ekstraaa\Source\open62541\open62541-open62541-6249bb2\tools\nodeset_compiler\datatypes.py", line 330, in parseXML
self.value = int(unicode(xmlvalue.firstChild.data))
ValueError: invalid literal for int() with base 10: '\n '
The complete documentation for the open62541 nodeset compiler can be found here:
https://open62541.org/doc/current/nodeset_compiler.html
The command you are using also seems to be fine.
The last issue you are describing, invalid literal for int(), is due to a newline inside the value tag of a variable.
This will be fixed with
https://github.com/open62541/open62541/pull/2768
As a workaround, you can change your .xml from
<Value>
<Int32>
</Int32>
</Value>
to (no newline):
<Value>
<Int32></Int32>
</Value>
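If the nodeset contains many such values, the same edit can be applied mechanically with a small throwaway script. A minimal sketch (assuming the file is myNS.xml and that only empty <Int32> values are affected; adjust names and paths to your setup):
import xml.dom.minidom as minidom

doc = minidom.parse("myNS.xml")
for value in doc.getElementsByTagName("Value"):
    for int32 in value.getElementsByTagName("Int32"):
        for child in list(int32.childNodes):
            # Drop whitespace-only text nodes such as a bare newline.
            if child.nodeType == child.TEXT_NODE and not child.data.strip():
                int32.removeChild(child)

with open("myNS.xml", "w") as f:
    doc.writexml(f)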

PyYAML Error: TypeError: can't pickle _thread.RLock objects

I'm trying to dump what is perhaps a somewhat complex class with YAML and am seeing the following error. I don't know what pickle does, but I'm not engaged in any multithreaded programming to my knowledge. This happens while running a pyunit unit test.
Any idea how to find the offending attribute?
ERROR: test_multi_level_needs (test_needs.needs_TestCase)
-----------------------------------------------------------
Traceback (most recent call last):
File "/Users/rsalemi/.../test_needs.py", line 240, in test_multi_level_needs
print(yaml.dump(test2_comp))
File ".../.../yaml/__init__.py", line 200, in dump
<snipped lots of stack trace>
File ".../.../yaml/representer.py", line 91, in represent_sequence
node_item = self.represent_data(item)
File ".../.../yaml/representer.py", line 51, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File ".../.../yaml/representer.py", line 341, in represent_object
'tag:yaml.org,2002:python/object:'+function_name, state)
File ".../.../yaml/representer.py", line 116, in represent_mapping
node_value = self.represent_data(item_value)
File ".../.../yaml/representer.py", line 51, in represent_data
node = self.yaml_multi_representers[data_type](self, data)
File ".../.../yaml/representer.py", line 315, in represent_object
reduce = data.__reduce_ex__(2)
TypeError: can't pickle _thread.RLock objects
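One way to narrow it down (a rough diagnostic sketch, not part of the original post; test2_comp is the object being dumped in the test above) is to dump each attribute separately and see which ones raise:
import yaml

def find_unyamlable(obj, path="test2_comp"):
    # Attributes that fail here contain the offending _thread.RLock
    # somewhere in their object graph.
    for name, value in vars(obj).items():
        try:
            yaml.dump(value)
        except TypeError as exc:
            print("%s.%s -> %s" % (path, name, exc))

find_unyamlable(test2_comp)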

TypeError: can't pickle memoryview objects when running basic add.delay(1,2) test

Trying to run the most basic test of add.delay(1,2) using celery 4.1.0 with Python 3.6.4 and getting the following error:
[2018-02-27 13:58:50,194: INFO/MainProcess] Received task: exb.tasks.test_tasks.add[52c3fb33-ce00-4165-ad18-15026eca55e9]
[2018-02-27 13:58:50,194: CRITICAL/MainProcess] Unrecoverable error: SystemError(' returned a result with an error set',)
Traceback (most recent call last):
File "/opt/myapp/lib/python3.6/site-packages/kombu/messaging.py", line 624, in _receive_callback
return on_m(message) if on_m else self.receive(decoded, message)
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 570, in on_task_received
callbacks,
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/strategy.py", line 145, in task_message_handler
handle(req)
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line 221, in _process_task_sem
return self._quick_acquire(self._process_task, req)
File "/opt/myapp/lib/python3.6/site-packages/kombu/async/semaphore.py", line 62, in acquire
callback(*partial_args, **partial_kwargs)
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line 226, in _process_task
req.execute_using_pool(self.pool)
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/request.py", line 531, in execute_using_pool
correlation_id=task_id,
File "/opt/myapp/lib/python3.6/site-packages/celery/concurrency/base.py", line 155, in apply_async
**options)
File "/opt/myapp/lib/python3.6/site-packages/billiard/pool.py", line 1486, in apply_async
self._quick_put((TASK, (result._job, None, func, args, kwds)))
File "/opt/myapp/lib/python3.6/site-packages/celery/concurrency/asynpool.py", line 813, in send_job
body = dumps(tup, protocol=protocol)
TypeError: can't pickle memoryview objects
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/worker.py", line 203, in start
self.blueprint.start(self)
File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line 370, in start
return self.obj.start()
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 320, in start
blueprint.start(self)
File "/opt/myapp/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 596, in start
c.loop(*c.loop_args())
File "/opt/myapp/lib/python3.6/site-packages/celery/worker/loops.py", line 88, in asynloop
next(loop)
File "/opt/myapp/lib/python3.6/site-packages/kombu/async/hub.py", line 354, in create_loop
cb(*cbargs)
File "/opt/myapp/lib/python3.6/site-packages/kombu/transport/base.py", line 236, in on_readable
reader(loop)
File "/opt/myapp/lib/python3.6/site-packages/kombu/transport/base.py", line 218, in _read
drain_events(timeout=0)
File "/opt/myapp/lib/python3.6/site-packages/librabbitmq-2.0.0-py3.6-linux-x86_64.egg/librabbitmq/__init__.py", line 227, in drain_events
self._basic_recv(timeout)
SystemError: returned a result with an error set
I cannot find any previous evidence of anyone hitting this error. I noticed on the Celery site that only Python 3.5 is mentioned as supported; is that the issue, or is this something I am missing?
Any help would be much appreciated!
UPDATE: Tried with Python 3.5.5 and the problem persists. Tried with Django 2.0.2 and the problem persists.
UPDATE: Uninstalled librabbitmq and the problem stopped. This was seen after migration from Python 2.7.5, Django 1.7.7 to Python 3.6.4, Django 2.0.2.
After uninstalling librabbitmq, the problem was resolved.
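In other words, removing the C client (pip uninstall librabbitmq) makes kombu fall back to the pure-Python py-amqp transport. Alternatively, the pyamqp:// broker scheme forces py-amqp even when librabbitmq is installed; a sketch with a placeholder broker URL:
from celery import Celery

# 'pyamqp://' selects the pure-Python py-amqp transport explicitly;
# the host and credentials here are placeholders.
app = Celery('exb', broker='pyamqp://guest@localhost//')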
