When attempting to build with Pants, I am seeing the following error:
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/tasks/go_fetch.py", line 154, in _transitive_download_remote_libs
all_known_addresses)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/tasks/go_fetch.py", line 105, in _transitive_download_remote_libs
fetcher.fetch(go_remote_lib.import_path, dest=tmp_fetch_root, rev=go_remote_lib.rev)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/subsystems/fetchers.py", line 437, in fetch
github_root, github_rev = self._map_import_path(import_path, rev)
File "/Users/chad/.cache/pants/setup/bootstrap/pants.mbFDa8/install/lib/python2.7/site-packages/pants/util/memo.py", line 95, in memoize
result = func(*args, **kwargs)
File "build/bdist.macosx-10.10-intel/egg/pants/contrib/go/subsystems/fetchers.py", line 454, in _map_import_path
raise self.FetchError('Invalid gopkg.in package and rev in: {}'.format(import_path))
Exception message: Invalid gopkg.in package and rev in: gopkg.in/amz.v1/aws
Here are the contents of my BUILD file:
# Auto-generated by pants!
# To re-generate run: `pants buildgen.go --materialize --remote`
go_remote_library(rev='v1')
Looking into the code, I see that the error comes from a failure to match a regex in fetchers.py, on line 453.
I am running Pants version 0.0.59 on Mac OS X 10.10 (Yosemite)
Noting that @Huckphin stumbled on a bug here in pantsbuild.pants<=0.0.59. He filed an issue, and now things are fixed up for handling gopkg.in remote import paths that point to sub-packages in the remote repo. The fix will be released with the regular Friday release on 11/20/2015, in 0.0.60.
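For the curious, the old pattern only matched a bare gopkg.in/<pkg>.v<N> import path, so a sub-package path like gopkg.in/amz.v1/aws fell through to the FetchError. A minimal illustrative sketch of a pattern that also accepts sub-packages (the actual regex in fetchers.py differs):
import re

# Illustrative only: accept gopkg.in/<pkg>.v<N> plus an optional /sub/package suffix.
GOPKG_RE = re.compile(r'^gopkg\.in/(?P<pkg>[^/]+)\.(?P<rev>v\d+)(?P<sub>(?:/[^/]+)*)$')

m = GOPKG_RE.match('gopkg.in/amz.v1/aws')
print(m.group('pkg'), m.group('rev'), m.group('sub'))  # -> amz v1 /aws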
I recently installed SerpentAI and I'm having an issue when creating a game plugin.
When running the command:
$ serpent generate game
It throws errors like the ones described in "SerpentAI Error When Creating a Game Plugin". My output is the same, and I tried the method from that question, but it didn't work. Can someone help me?
What is the name of the game? (Titleized, No Spaces i.e. AwesomeGame):
THProject
How is the game launched? (One of: 'steam', 'executable', 'web_browser'):
executable
c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\base.py:38: UserWarning: 'offshoot.yml' not found! Using default configuration.
warnings.warn("'offshoot.yml' not found! Using default configuration.")
c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\base.py:38: UserWarning: 'offshoot.yml' not found! Using default configuration.
warnings.warn("'offshoot.yml' not found! Using default configuration.")
OFFSHOOT: Attempting to install SerpentTHProjectGamePlugin...
c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\base.py:38: UserWarning: 'offshoot.yml' not found! Using default configuration.
warnings.warn("'offshoot.yml' not found! Using default configuration.")
OFFSHOOT PLUGIN INSTALL: Verifying that plugin dependencies are installed...
OFFSHOOT PLUGIN INSTALL: Installing files...
There was a problem during installation... Reverting!
Traceback (most recent call last):
File "c:\users\28734\.conda\envs\serpent\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\28734\.conda\envs\serpent\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\28734\.conda\envs\serpent\plugins\SerpentTHProjectGamePlugin\plugin.py", line 28, in <module>
offshoot.executable_hook(SerpentTHProjectGamePlugin)
File "c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\base.py", line 185, in executable_hook
plugin_class.install()
File "c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\plugin.py", line 35, in install
cls.install_files()
File "c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\plugin.py", line 118, in install_files
raise e
File "c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\plugin.py", line 91, in install_files
is_valid, messages = cls._validate_file_for_pluggable(plugin_file_path, file_dict["pluggable"])
File "c:\users\28734\.conda\envs\serpent\lib\site-packages\offshoot\plugin.py", line 235, in _validate_file_for_pluggable
raise PluginError("The Plugin definition specifies an invalid pluggable: %s => %s" % (file_path, pluggable))
offshoot.plugin.PluginError: The Plugin definition specifies an invalid pluggable: plugins\SerpentTHProjectGamePlugin\files\serpent_THProject_game.py => Game
I solved the problem myself by installing 'serpent.game'.
Make sure all the third-party libraries the agent depends on are present; in my case some libraries were not installed.
I have a problem during the execution of my Python script from crontab, which performs an insert operation into the Firestore database:
db.collection(u'ab').document(str(row["Name"])).collection(str(row["id"])).document(str(row2["id"])).set(self.packStructure(row2))
When I execute it normally with the python3 script.py command it works, but when I execute it from crontab it returns the following error:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/axatel/angel_bridge/esportazione_firebase/main.py", line 23, in <module>
dato.getDati(dato, db, cursor, cursor2, fdb, select, anagrafica)
File "/home/axatel/angel_bridge/esportazione_firebase/dati.py", line 19, in getDati
db.collection(u'ab').document(str(row["Name"])).collection(str(row["id"])).document(str(row2["id"])).set(self.packStructure(row2))
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/document.py", line 234, in set
write_results = batch.commit()
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/batch.py", line 147, in commit
metadata=self._client._rpc_metadata,
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/gapic/firestore_client.py", line 1121, in commit
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 DNS resolution failed for service: firestore.googleapis.com:443
I really don't understand what the problem is, because the connection to the database works every time the script is started either way.
Is there a fix for this kind of issue?
I found something that might be helpful. There is a nice troubleshooting guide, and one part of it seems to be related:
If your command works by invoking a runtime like python some-command.py, perform a few checks to determine that the runtime version and environment are correct. Each language runtime has quirks that can cause unexpected behavior under crontab.
For Python you might find that your web app is using a virtual environment you need to invoke in your crontab.
I haven't seen this error when running the Firestore API, but it seems to match your issue.
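For example, if the script runs inside a virtual environment, the crontab entry can invoke that environment's interpreter directly (the venv path below is hypothetical; the script path is taken from your traceback):
# Run the export script every night at 02:00 with the venv's python3
0 2 * * * /home/axatel/venv/bin/python3 /home/axatel/angel_bridge/esportazione_firebase/main.py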
I found the solution.
The problem occurred because the sleep() timeout value was too low, so the database connection function started too early during the machine's boot phase. Increasing this value to 45 or 60 seconds fixed the problem.
import sys
import time

import firebase_admin
from firebase_admin import credentials, firestore

def firebaseConnection():
    # Firebase connection
    cred = credentials.Certificate('/database/axatel.json')
    firebase_admin.initialize_app(cred)
    fdb = firestore.client()
    if fdb:
        return fdb
    else:
        print("Error")
        sys.exit()

# time.sleep(10)  # old version: too short, ran before the network was up
time.sleep(60)  # working version
fdb = firebaseConnection()
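Not part of the answer above, but a more targeted alternative to a fixed sleep is to poll DNS for the Firestore endpoint until the network is actually up, since the underlying error was a DNS resolution failure. A minimal sketch:
import socket
import sys
import time

def wait_for_dns(host="firestore.googleapis.com", port=443, timeout_seconds=120):
    # Poll DNS resolution so the script does not run before the network is up at boot.
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        try:
            socket.getaddrinfo(host, port)
            return True
        except socket.gaierror:
            time.sleep(5)
    return False

if not wait_for_dns():
    print("Network still unavailable after waiting, giving up")
    sys.exit(1)
fdb = firebaseConnection()  # from the snippet above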
I am new to Python and trying to install Airflow on my Mac, following this tutorial.
While these two commands work fine:
$ airflow initdb
$ airflow webserver -p 8080
The scheduler command (airflow scheduler) throws the following error:
[2020-02-18 13:18:09,012] {scheduler_job.py:1382} ERROR - Exception when executing execute_helper
Traceback (most recent call last):
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1380, in _execute
self._execute_helper()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1413, in _execute_helper
self.processor_agent.start()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/dag_processing.py", line 554, in start
self._process.start()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
return Popen(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'SchedulerJob._execute.<locals>.processor_factory'
[2020-02-18 13:18:09,035] {helpers.py:322} INFO - Sending Signals.SIGTERM to GPID None
Traceback (most recent call last):
File "/Users/mac/Workspace/airflow/airflow_venv/bin/airflow", line 37, in <module>
args.func(args)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 75, in wrapper
return f(*args, **kwargs)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/bin/cli.py", line 1040, in scheduler
job.run()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 221, in run
self._execute()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1384, in _execute
self.processor_agent.end()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/dag_processing.py", line 707, in end
reap_process_group(self._process.pid, log=self.log)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/helpers.py", line 324, in reap_process_group
signal_procs(sig)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/helpers.py", line 293, in signal_procs
os.killpg(pgid, sig)
TypeError: an integer is required (got type NoneType)
EDIT: Python 3.8 is supported now (https://github.com/apache/airflow#requirements), so this answer may no longer be relevant.
This is due to the Python version you are using. Airflow doesn't support Python 3.8 yet (https://github.com/apache/airflow#stable-version-1109).
Downgrade your Python to 3.7 and check.
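For context, the underlying failure is generic Python 3.8 behavior rather than anything Airflow-specific: on macOS, Python 3.8 switched the default multiprocessing start method to 'spawn', which pickles the process target, and a locally defined function (like SchedulerJob._execute.<locals>.processor_factory) can't be pickled. A minimal repro, independent of Airflow:
import multiprocessing

def run():
    def processor_factory():  # local function, analogous to the one in scheduler_job.py
        print("hello from child")

    # 'spawn' (the macOS default since Python 3.8) must pickle the target, and a
    # local function is not picklable, so start() raises:
    # AttributeError: Can't pickle local object 'run.<locals>.processor_factory'
    p = multiprocessing.get_context("spawn").Process(target=processor_factory)
    p.start()
    p.join()

if __name__ == "__main__":
    run()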
Maybe there are some compatibility problems?
Using Python 3.6.10 and Airflow v1.10.4, I can get Airflow running. Maybe you could try some other versions?
This worked for me!
1- Make sure you are using a Celery version that supports your other packages, like RabbitMQ (v5 doesn't support AMQP in its usual format); my advice is to use v4.6.x.
2- This has nothing to do with the Python version if you are using Airflow v2.0.
3- Simply make yourself happy with airflow db reset (the command may differ if you are using an Airflow version < 2.0).
4- Avoid deleting any DAG the way you delete a file; use the airflow dags ... commands to do so, as shown below (deleting files by hand makes a mess in your environment that you won't like, trust me on this).
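For reference, the Airflow 2.x spellings of the commands mentioned above (verify against your installed version; the CLI changed between 1.x and 2.x):
$ airflow db reset                # reset the metadata database
$ airflow dags delete <dag_id>    # remove a DAG's metadata instead of deleting its file by hand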
Wish you luck bearing with Python stuff.
https://github.com/zzh8829/yolov3-tf2 is the project. I've installed all the correct versions of things, I think.
Google is telling me that it is probably a low-VRAM issue, but I am still looking around for other reasons. Please help.
I am using :
Windows 10 (don't say "there's your problem" I need it)
cuDNN 7.4.6
CUDA 10.0
tensorflow 2.0.0
python 3.6
I have a GTX 1660 Super (6 GB VRAM) with a Ryzen 7 2700X and 16 GB of RAM. I'm getting a GTX 1080 (8 GB) in a few days that I'm going to add to the second PCIe slot.
The error is as follows:
2019-11-30 06:31:26.167368: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
2019-11-30 06:31:27.843742: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2019-11-30 06:31:27.853725: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
Traceback (most recent call last):
File ".\convert.py", line 34, in <module>
app.run(main)
File "C:\Program Files\Python36\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Program Files\Python36\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File ".\convert.py", line 25, in main
output = yolo(img)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 708, in call
convert_kwargs_to_constants=base_layer_utils.call_context().saving)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 860, in _run_internal_graph
output_tensors = layer(computed_tensors, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 708, in call
convert_kwargs_to_constants=base_layer_utils.call_context().saving)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 860, in _run_internal_graph
output_tensors = layer(computed_tensors, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 891, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\keras\layers\convolutional.py", line 197, in call
outputs = self._convolution_op(inputs, self.kernel)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 1134, in __call__
return self.conv_op(inp, filter)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 639, in __call__
return self.call(inp, filter)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 238, in __call__
name=self.name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\nn_ops.py", line 2010, in conv2d
name=name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py", line 1031, in conv2d
data_format=data_format, dilations=dilations, name=name, ctx=_ctx)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\ops\gen_nn_ops.py", line 1130, in conv2d_eager_fallback
ctx=_ctx, name=name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow_core\python\eager\execute.py", line 67, in quick_execute
six.raise_from(core._status_to_exception(e.code, message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [Op:Conv2D]
I had the same problem in the same repository.
The solution that worked for me and my team was to upgrade cuDNN to version 7.5 or higher (as opposed to your 7.4).
The instructions for updating can be found on Nvidia's site:
https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html
This could happen for a few reasons.
(1) As you mentioned, it may be a memory issue, which you could try to verify by allocating less memory to the GPU and seeing if that error still occurs. You can do this in TF 2.0 like so (https://github.com/tensorflow/tensorflow/issues/25138#issuecomment-484428798):
import tensorflow as tf
tf.config.gpu.set_per_process_memory_fraction(0.75)
tf.config.gpu.set_per_process_memory_growth(True)
# your model creation, etc.
model = MyModel(...)
I see the code you're running sets dynamic memory growth if you have > 1 GPU (https://github.com/zzh8829/yolov3-tf2/blob/master/train.py#L46-L47), but since you only have 1 GPU, it is likely just trying to allocate all memory (>90%) at the start; see the sketch after this list for forcing memory growth on a single GPU.
(2) Some users seem to have experienced this on Windows when there were other TensorFlow or similar processes using the GPU simultaneously, either by you or by other users: https://stackoverflow.com/a/53707323/10993413
(3) As always, make sure your PATH variables are correct. Sometimes if you tried multiple installations and didn't clean things up properly, the PATHs may be finding the wrong version first and cause an issue. If you add new paths to the beginning of PATH, they should be found first: https://www.tensorflow.org/install/gpu#windows_setup
(4) As mentioned by @xenotecc, you could try upgrading to a newer version of CUDNN, though I'm not sure this will help since your config is listed as supported in the TF docs: https://www.tensorflow.org/install/source#gpu. If this does solve it, it may have been a PATH issue after all, since you will likely update the PATHs after installing the newer version.
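As promised above, a minimal sketch for forcing memory growth on your single GPU, using the standard TF 2.0 experimental API (this drops the GPU-count condition from the repo's train.py):
import tensorflow as tf

physical_devices = tf.config.experimental.list_physical_devices('GPU')
if physical_devices:
    # Allocate GPU memory on demand instead of reserving nearly all of it up front.
    tf.config.experimental.set_memory_growth(physical_devices[0], True)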
Got the same error and resolved it with the following:
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_virtual_device_configuration(
    gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=5000)])
(with GTX 1660, 6 GB memory, tensorflow 2.0.1)
Simple fix:
insert these lines under the imports in "convert.py":
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
This will make TensorFlow ignore your GPU while loading the weights.
I am trying to load data to Google bigquery using bq load from a named pipe.
Console Window1:
$ mkfifo /usr/pipe1
$ cat /dev1/item.dat > /usr/pipe1
Console Window2:
$ bq load --source_format=CSV projectid:dataset.itemtbl /usr/pipe1 field1:integer,field2:integer
Got the following error:
BigQuery error in load operation: Source path is not a file: /usr/pipe1
The BigQuery client bq.py does not support named pipes. It explicitly requires files:
https://code.google.com/p/google-bigquery-tools/source/browse/bq/bigquery_client.py?r=30df4638ff2ddb01d3f495af5c131ed3c2cfbd04#617
Allowing named pipes is a good feature suggestion. You can request it here:
https://code.google.com/p/google-bigquery/issues/list
It looks like you could tweak your copy of bigquery_client.py pretty easily to make this work as well. Good luck!
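Until named-pipe support exists, a simple workaround, sketched here with the paths from the question plus a hypothetical temp file, is to buffer the pipe into a regular file first (this loses the streaming behavior, but works):
$ cat /usr/pipe1 > /tmp/item_buffered.dat
$ bq load --source_format=CSV projectid:dataset.itemtbl /tmp/item_buffered.dat field1:integer,field2:integer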
The bq load command doesn't support pipe files.
This is the error you get when you change the code to bypass the pipe-file validation:
== Error trace ==
Traceback (most recent call last):
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bq.py", line 1001, in RunSafely
return_value = self.RunWithArgs(*args, **kwds)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bq.py", line 1355, in RunWithArgs
job = client.Load(table_reference, source, schema=schema, **opts)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 3504, in Load
upload_file=upload_file, **kwds)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 2924, in ExecuteJob
location=location)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 2901, in RunJobSynchronously
location=location)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/bigquery_client.py", line 2755, in StartJob
resumable=resumable)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/_helpers.py", line 134, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/googleapiclient/http.py", line 562, in __init__
resumable=resumable)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/oauth2client_4_0/_helpers.py", line 134, in positional_wrapper
return wrapped(*args, **kwargs)
File "/usr/local/share/google/google-cloud-sdk/platform/bq/third_party/googleapiclient/http.py", line 439, in __init__
self._fd.seek(0, os.SEEK_END)
IOError: [Errno 29] Illegal seek
========================================
Unexpected exception in load operation: You have encountered a bug in the
BigQuery CLI. Please file a bug report in our
public issue tracker:
https://issuetracker.google.com/issues/new?component=187149&template=0
Please include a brief description of the steps that led to this issue, as well
as any rows that can be made public from the following information:
Time : 2018-10-05 10:04:02