I have a linear estimator in TF 2.2 and currently save it in the following way:
linear_est = tf.estimator.LinearClassifier(feature_columns=feature_columns)
...
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
    tf.feature_column.make_parse_example_spec(feature_columns))
export_path = linear_est.export_saved_model("./model/", serving_input_fn)
With this I get a .pb file and a variables folder, but I need to run the prediction in tfjs, because Python TF 2.2 is too big for AWS Lambda.
Is there a possibility to save it directly from Python to the web format?
I already tried to convert it with this tutorial
https://www.tensorflow.org/js/tutorials/conversion/import_saved_model
but it is not working. I'm also not sure what --output_node_names is.
I created the model with Python 3.8, and now I'm using 3.6.8 in the venv for the converter, because the converter does not run with 3.8.
(venv) PS C:\predict\web> tensorflowjs_converter --input_format=tf_saved_model --saved_model_tags=serve sold/model/1588619275 sold_web
2020-05-06 22:17:35.434178: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From c:\predict\web\venv\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py:1786: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:Issue encountered when serializing global_step.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
to_proto not supported in EAGER mode.
WARNING:tensorflow:Issue encountered when serializing variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
to_proto not supported in EAGER mode.
WARNING:tensorflow:Issue encountered when serializing trainable_variables.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
to_proto not supported in EAGER mode.
2020-05-06 22:17:36.264215: I tensorflow/core/grappler/devices.cc:60] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0 (Note: TensorFlow was not compiled with CUDA support)
2020-05-06 22:17:36.283583: I tensorflow/core/grappler/clusters/single_machine.cc:356] Starting new session
2020-05-06 22:17:36.296771: E tensorflow/core/grappler/grappler_item_builder.cc:656] Init node linear/linear_model/linear/linear_model/linear/linear_model/category_id/category_id_lookup/hash_table/table_init/LookupTableImportV2 doesn't exist in graph
WARNING:tensorflow:From c:\predict\web\venv\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py:313: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
WARNING:tensorflow:From c:\predict\web\venv\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py:315: convert_variables_to_constants (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.convert_variables_to_constants`
WARNING:tensorflow:From c:\predict\web\venv\lib\site-packages\tensorflow_core\python\framework\graph_util_impl.py:277: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.compat.v1.graph_util.extract_sub_graph`
Traceback (most recent call last):
File "c:\users\nibur\appdata\local\programs\python\python36\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\nibur\appdata\local\programs\python\python36\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\predict\web\venv\Scripts\tensorflowjs_converter.exe\__main__.py", line 7, in <module>
File "c:\predict\web\venv\lib\site-packages\tensorflowjs\converters\converter.py", line 671, in pip_main
main([' '.join(sys.argv[1:])])
File "c:\predict\web\venv\lib\site-packages\tensorflowjs\converters\converter.py", line 675, in main
convert(argv[0].split(' '))
File "c:\predict\web\venv\lib\site-packages\tensorflowjs\converters\converter.py", line 618, in convert
weight_shard_size_bytes=weight_shard_size_bytes)
File "c:\predict\web\venv\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 462, in convert_tf_saved_model
weight_shard_size_bytes=weight_shard_size_bytes)
File "c:\predict\web\venv\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 142, in optimize_graph
', '.join(unsupported))
ValueError: Unsupported Ops in the model before optimization
SparseFillEmptyRows, Unique, LookupTableFindV2, ParseExampleV2, HashTableV2, SparseSegmentSum, AsString, SparseReshape
Thanks
So it seems you are using ops that are not supported in the browser implementation of TensorFlow.js (this does not apply to Node.js, which can execute SavedModels without conversion).
TensorFlow.js supports a few hundred or so ops from the original TensorFlow implementation in the browser, so the only way for this to run in the browser right now is to:
Implement the missing ops in JS: contribute them to the open-source code on GitHub.
Change the ops you are using to supported ops (a sketch of this follows after the link below).
You can see the supported ops here: https://github.com/tensorflow/tfjs-converter/blob/master/tfjs-converter/docs/supported_ops.md
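For the second option, here is a minimal sketch of what that could look like for a linear classifier: rebuild it as a plain Keras logistic regression over already-numeric inputs (doing any vocabulary lookups in Python outside the graph), then save straight to the TF.js web format with the tensorflowjs Python package. The feature count, training call, and output path are hypothetical placeholders.
import tensorflow as tf
import tensorflowjs as tfjs  # pip install tensorflowjs

NUM_FEATURES = 10  # hypothetical: number of dense features after encoding

# A linear classifier is logistic regression: one Dense unit with a sigmoid.
# Using only dense inputs avoids ParseExampleV2, HashTableV2 and the sparse ops.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(NUM_FEATURES,)),
])
model.compile(optimizer='ftrl', loss='binary_crossentropy')
# ... model.fit(x_train, y_train) ...

# Writes model.json plus weight shards, loadable with tf.loadLayersModel in the browser.
tfjs.converters.save_keras_model(model, './sold_web')
This also addresses the "save directly from Python to web format" part of the question: save_keras_model writes the web format without a separate converter run.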
Related
Seeing the following traceback while doing a NATS client connect with Python 3.10.8.
File "/opt/optima/pce_dispatcher/pce_dispatcher.py", line 4213, in run
await self.nc.connect(
File "/usr/lib/python3.10/site-packages/nats/aio/client.py", line 310, in connect
self._flush_queue = asyncio.Queue(
File "/usr/lib/python3.10/asyncio/queues.py", line 34, in __init__
super().__init__(loop=loop)
File "/usr/lib/python3.10/asyncio/mixins.py", line 17, in __init__
raise TypeError(
TypeError: As of 3.10, the *loop* parameter was removed from Queue() since it is no longer necessary
Any suggestions on how to resolve this? I'm using Alpine 3.16, which is packaged with Python 3.10.8.
It appears I'm on asyncio-nats-client 0.11.5, which was published back in Nov 2021.
I have no idea how to resolve this unless a new version is published for 3.10.8, as asyncio has made some changes related to passing the event loop parameter.
It has already been fixed in the GitHub repo:
Passing explicit loops to many asyncio apis is deprecated, and
it is discouraged in general. [...]
...but they seem to have changed the name of the PyPI package. Try pip install nats-py for the new version.
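A minimal sketch of connecting with the renamed package on Python 3.10 (the server URL is a placeholder):
import asyncio
import nats  # pip install nats-py (successor to asyncio-nats-client)

async def main():
    # nats-py no longer passes an explicit loop to asyncio.Queue,
    # so this works on Python 3.10+.
    nc = await nats.connect("nats://127.0.0.1:4222")  # placeholder URL
    await nc.publish("updates", b"hello")
    await nc.drain()

asyncio.run(main())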
Not always, but occasionally when running my code this error appears.
At first, I doubted it was a connectivity issue and thought it had to do with a caching issue instead, as discussed in an older GitHub issue.
Clearing the cache didn't help at runtime:
$ rm -rf ~/.cache/huggingface/transformers/*
Traceback references:
NLTK also gets Error loading stopwords: <urlopen error [Errno -2] Name or service not known>.
The last 2 lines refer to cached_path and get_from_cache.
Cache (before cleared):
$ cd ~/.cache/huggingface/transformers/
(sdg) me@PF2DCSXD:~/.cache/huggingface/transformers$ ls
16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0
16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.json
16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0.lock
4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5
4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.json
4029f7287fbd5fa400024f6bbfcfeae9c5f7906ea97afcaaa6348ab7c6a9f351.723d8eaff3b27ece543e768287eefb59290362b8ca3b1c18a759ad391dca295a.h5.lock
684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.json
684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f.lock
c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.json
c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b.lock
fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.json
fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51.lock
Code:
from transformers import pipeline, set_seed
generator = pipeline('text-generation', model='gpt2') # Error
set_seed(42)
Traceback:
2022-03-03 10:18:06.803989: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-03-03 10:18:06.804057: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[nltk_data] Error loading stopwords: <urlopen error [Errno -2] Name or
[nltk_data] service not known>
2022-03-03 10:18:09.216627: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-03-03 10:18:09.216700: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-03-03 10:18:09.216751: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (PF2DCSXD): /proc/driver/nvidia/version does not exist
2022-03-03 10:18:09.217158: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-03-03 10:18:09.235409: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
All model checkpoint layers were used when initializing TFGPT2LMHeadModel.
All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
Traceback (most recent call last):
File "/home/me/miniconda3/envs/sdg/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/me/miniconda3/envs/sdg/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/mnt/c/Users/me/Documents/GitHub/project/foo/bar/__main__.py", line 26, in <module>
nlp_setup()
File "/mnt/c/Users/me/Documents/GitHub/project/foo/bar/utils/Modeling.py", line 37, in nlp_setup
generator = pipeline('text-generation', model='gpt2')
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 590, in pipeline
tokenizer = AutoTokenizer.from_pretrained(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 463, in from_pretrained
tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 324, in get_tokenizer_config
resolved_config_file = get_file_from_repo(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/file_utils.py", line 2235, in get_file_from_repo
resolved_file = cached_path(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/file_utils.py", line 1846, in cached_path
output_path = get_from_cache(
File "/home/me/miniconda3/envs/sdg/lib/python3.8/site-packages/transformers/file_utils.py", line 2102, in get_from_cache
raise ValueError(
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
Failed Attempts
I closed my IDE and bash terminal, ran wsl.exe --shutdown in PowerShell, then relaunched the IDE and bash terminal with the same error.
Disconnecting / using a different VPN.
Clearing the cache: $ rm -rf ~/.cache/huggingface/transformers/*
Make sure you are not loading a tokenizer with an empty path. That solved it for me.
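To illustrate (a sketch; the empty name is the hypothetical mistake being guarded against): an empty model name sends transformers down the cache/network lookup path, which can surface as this same connection error, so it is worth asserting the name is non-empty before building the pipeline.
from transformers import pipeline

model_name = ""  # hypothetical: an unset config value ending up as an empty path

# Guard before building the pipeline; the assert fires here instead of a
# confusing connection/cache error deep inside transformers.
assert model_name, "model name/path is empty"
generator = pipeline('text-generation', model=model_name)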
I saw an answer on GitHub which you can try:
pass force_download=True to from_pretrained, which will override the cache and re-download the files.
Link: https://github.com/huggingface/transformers/issues/8690 by patil-suraj
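A sketch of that suggestion applied to the failing call above (bypassing the possibly corrupt cache entries):
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# force_download=True re-fetches the files instead of trusting the cache.
tokenizer = AutoTokenizer.from_pretrained('gpt2', force_download=True)
model = AutoModelForCausalLM.from_pretrained('gpt2', force_download=True)
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)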
Since I am working in a conda venv and using Poetry for handling dependencies, I needed to re-install torch, a dependency for Hugging Face 🤗 Transformers.
First, install torch:
PyTorch's website lets you choose your exact setup/specification for the install. In my case, the command was:
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
Then add to Poetry:
poetry add torch
Both take ages to process. Runtime was back to normal :)
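After the reinstall, a quick sanity check that torch is importable again (just a check, not part of the fix):
import torch

# Confirms the dependency Hugging Face Transformers needs is back in place.
print(torch.__version__)
print(torch.cuda.is_available())  # False is fine on a CPU-only setup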
I am running python3 on a Debian 10 (buster) system.
Up until yesterday, I was able to perform this import:
from metpy.plots import (StationPlot, StationPlotLayout, wx_code_map, current_weather)
After a general package update, I can no longer perform the import and instead get this string of errors:
Traceback (most recent call last):
  File "/home/disk/bob/impacts/bin/ASOS_plot_data_hourly_ISU.py", line 37, in <module>
    from metpy.plots import (StationPlot, StationPlotLayout, wx_code_map, current_weather)
  File "/usr/lib/python3/dist-packages/metpy/__init__.py", line 35, in <module>
    from .xarray import *  # noqa: F401, F403, E402
  File "/usr/lib/python3/dist-packages/metpy/xarray.py", line 27, in <module>
    from .units import DimensionalityError, UndefinedUnitError, units
  File "/usr/lib/python3/dist-packages/metpy/units.py", line 40, in <module>
    lambda string: string.replace('%', 'percent')
  File "/usr/lib/python3/dist-packages/pint/registry.py", line 74, in __call__
    obj = super(_Meta, self).__call__(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'preprocessors'
In fact, I can't even do a simple
import metpy
without getting the same error chain.
Obviously, there must be some sort of version discrepancy with xarray or some other package.
I currently have these versions installed: 1.0.0rc1.po of metpy and 0.12.1-1 of xarray.
Any thoughts about what the required combination of packages should be or who I might ask about this?
It's unclear from your post what versions of Pint and Python you have installed. From the error, it seems like you are having problems with too old a version of Pint, though MetPy 1.0.0rc1 should have had support to deal with that. Really, the whole 1.0.0rc1.po version makes me wonder if MetPy was installed from git at some point after rc1.
Regardless, MetPy 1.0.0rc1 was the first Release Candidate for the 1.0 release of MetPy and is not a version I would rely upon. I would suggest updating to either MetPy 1.0.1 (if you are using Python 3.6) or MetPy 1.2 (for Python >= 3.7).
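If it helps with the diagnosis, here is a quick way to print the installed version combination without importing the broken packages (pkg_resources ships with setuptools and works on the Python 3.7 that Debian 10 carries):
import pkg_resources  # part of setuptools, works on Python 3.6/3.7 too

for pkg in ('MetPy', 'Pint', 'xarray'):
    try:
        print(pkg, pkg_resources.get_distribution(pkg).version)
    except pkg_resources.DistributionNotFound:
        print(pkg, 'not installed')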
I am receiving an OSError (without any other text) from h5py when loading an h5 model created with Keras/TensorFlow after updating my environment, or when working with an up-to-date environment.
I trained some models with Keras and TF in the older versions, and also with keras-tf v1.15, and saved them using model.save('filename.h5'). Afterwards I was able to load them and work with them further, using keras.load_model before and tensorflow.keras.models.load_model now, without any problems, only receiving some warnings that my TF version was not compiled to use the AVX2 instructions and so on.
The version installed is tensorflow 1.15 via pip install tensorflow-cpu, and it seems to work well. My environment is Anaconda3-2020.02-Windows-x86_64, installed from the Anaconda binaries on Windows.
After trying to change the packages to tensorflow-mkl, and needing to update my environment because of environment conflicts (which show even with a fresh install of Anaconda), the OSError raised by h5py appears.
Using the default environment packages from the Anaconda binary with tf-cpu seems to work fine, also when cloning the environment. When updating the environment with conda update --all, it raises the error with either tf-cpu or tf-mkl.
The version of h5py in both cases is '2.10.0', and the error is the following:
Traceback (most recent call last):
File "C:\Users\Oscar\bwSyncAndShare\OPT_PV22WP_intern\pv2wp_control\SIM\Sim_future.py", line 88, in <module>
model = load_model(pathfile_model)
File "C:\Users\Oscar\anaconda3\envs\optimizer2\lib\site-packages\tensorflow_core\python\keras\saving\save.py", line 142, in load_model
isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
File "C:\Users\Oscar\anaconda3\envs\optimizer2\lib\site-packages\h5py\_hl\base.py", line 44, in is_hdf5
return h5f.is_hdf5(filename_encode(fname))
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 156, in h5py.h5f.is_hdf5
OSError
Has anyone had this problem?
I have tried training a model with the updated environment and saving it; when loading, I get the same error.
Updating to tf-cpu v2.3.1 with the base environment and then loading also works.
Creating a new env with conda create -n name python==3.7.x anaconda and then installing tf doesn't work.
I think some other library is causing the problem, but I cannot figure out which one.
I used hd5 instead of h5 as the extension, and that solved the problem.
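A minimal sketch of that workaround (the tiny model is just a stand-in; the point is only the file extension):
import tensorflow as tf

# Stand-in model; the reported workaround is only about the extension.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')

# With an extension TF 2.x does not recognise as HDF5 (.h5/.hdf5/.keras),
# model.save falls back to the SavedModel format, which sidesteps h5py
# entirely; that is plausibly why this workaround avoids the OSError.
model.save('filename.hd5')
restored = tf.keras.models.load_model('filename.hd5')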
I can load my deep model in Colab, but when I want to load that model on my PC, I can't.
I'm using the object detection API to train my own model, but while running the training using this command:
python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v1_coco.config
I get this error:
WARNING:tensorflow:From C:\Users\MHD\Anaconda3\envs\tf15\lib\site-packages\tensorflow\python\platform\app.py:124: main (from __main__) is deprecated and will be removed in a future version.
Instructions for updating:
Use object_detection/model_main.py.
Traceback (most recent call last):
File "train.py", line 179, in <module>
tf.app.run()
File "C:\Users\MHD\Anaconda3\envs\tf15\lib\site-packages\tensorflow\python\platform\app.py", line 124, in run
_sys.exit(main(argv))
File "C:\Users\MHD\Anaconda3\envs\tf15\lib\site-packages\tensorflow\python\util\deprecation.py", line 136, in new_func
return func(*args, **kwargs)
File "train.py", line 175, in main
graph_hook_fn=graph_rewriter_fn)
File "C:\tensorflow1\models\research\object_detection\legacy\trainer.py", line 249, in train
detection_model = create_model_fn()
File "C:\tensorflow1\models\research\object_detection\builders\model_builder.py", line 119, in build
return _build_ssd_model(model_config.ssd, is_training, add_summaries)
File "C:\tensorflow1\models\research\object_detection\builders\model_builder.py", line 237, in _build_ssd_model
is_training=is_training)
File "C:\tensorflow1\models\research\object_detection\builders\model_builder.py", line 187, in _build_ssd_feature_extractor
if feature_extractor_config.HasField('replace_preprocessor_with_placeholder'):
ValueError: Protocol message SsdFeatureExtractor has no field replace_preprocessor_with_placeholder
Please help me, guys.
Tracing down the cause of this error, I found the option replace_preprocessor_with_placeholder was recently added. Here is the commit record. (On that page, if you search for replace_preprocessor_with_placeholder, you will find that it was added on March 7th, 2019.)
So the cause of the error is obviously that your proto files' version is not consistent with the code version. If you compare object_detection/protos/ssd.proto on your local machine and on the GitHub repo, you will probably find this line does not exist in your local machine's file (because this field was also added recently!).
The easiest way to fix this error is to reinstall the object detection API following this guide.
Since you already have all packages installed, there are essentially two steps you need to do: install the COCO API and compile the protobuf. A new protobuf compilation will fix your error.
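For reference, the protobuf compilation step from that guide (run from the models/research directory, assuming protoc is on your PATH) looks like this:
protoc object_detection/protos/*.proto --python_out=.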
Also, I recommend you follow the latest API tutorial. I see in your command that you are using train.py; this file has now been put in the legacy folder and is not recommended, since it may not be up to date.
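A sketch of the equivalent call with the recommended model_main.py (flag names as in the repo at that time; the paths are the ones from your command):
python model_main.py --pipeline_config_path=training/ssd_mobilenet_v1_coco.config --model_dir=training/ --alsologtostderr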