I am new to Deep Learning and my first project is FACIAL EMOTION RECOGNITION.
I am trying to use the DeepFace library but I seem to be stuck at the moment. Can anyone help?
import cv2
from deepface import DeepFace
import matplotlib.pyplot as plt

# use a raw string so the backslashes in the Windows path are not treated as escapes
img = cv2.imread(r'Images\happy\happy_001.jpg')

# plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
# plt.show()

predictions = DeepFace.analyze(img, actions=['age', 'gender', 'race', 'emotion'])
and the error I am getting is:
Traceback (most recent call last):
File "C:\Users\asus\OneDrive - Graphic Era University\Desktop\ML AND AI\FACE RECOG\test.py", line 11, in <module>
predictions=DeepFace.analyze(img, actions = ['age', 'gender', 'race', 'emotion'])
File "C:\Python 3.9\lib\site-packages\deepface\DeepFace.py", line 355, in analyze
models['gender'] = build_model('Gender')
File "C:\Python 3.9\lib\site-packages\deepface\DeepFace.py", line 61, in build_model
model = model()
File "C:\Python 3.9\lib\site-packages\deepface\extendedmodels\Gender.py", line 49, in loadModel
gender_model.load_weights(home+'/.deepface/weights/gender_model_weights.h5')
File "C:\Python 3.9\lib\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Python 3.9\lib\site-packages\h5py\_hl\files.py", line 507, in __init__
fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
File "C:\Python 3.9\lib\site-packages\h5py\_hl\files.py", line 220, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py\_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py\_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py\h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (truncated file: eof = 232972459, sblock->base_addr = 0, stored_eof = 537149760)
I certainly don't know how to solve this. Can anyone help?
I am using VS Code with Python 3.9.6.
Check this frame: File "h5py\h5f.pyx", line 106, in h5py.h5f.open. I guess your issue arises from there. Try to re-install the libraries. Also, your problem was discussed and, probably, solved here: https://github.com/keras-team/keras/issues/6221
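Given the "truncated file" message, another possibility worth mentioning: the pre-trained weights DeepFace caches under ~/.deepface/weights may have been only half-downloaded (for example, an interrupted first run). A minimal sketch, assuming the weights path shown in your traceback; DeepFace re-downloads any missing weights file on the next analyze() call:
import os

# path taken from the traceback above
weights = os.path.expanduser('~/.deepface/weights/gender_model_weights.h5')
if os.path.exists(weights):
    os.remove(weights)  # force DeepFace to download a fresh copy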
I'm a newbie and I'm trying to use Google Cloud Speech-to-Text with Python and multiprocessing. Here is a simple example to reproduce my issue.
I'm running the code on Windows.
When I run the code without multiprocessing, it works fine.
import io
from tqdm import tqdm
from multiprocessing import Pool, freeze_support, cpu_count
from google.cloud import speech
from google.cloud.speech import enums
from google.cloud.speech import types

# Instantiates a client
CLIENT = speech.SpeechClient()

def speech_to_text(file_name, language="en-US"):
    with io.open(file_name, 'rb') as audio_file:
        content = audio_file.read()
    audio = types.RecognitionAudio(content=content)
    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.ENCODING_UNSPECIFIED,
        sample_rate_hertz=16000,
        language_code=language)
    # Detects speech in the audio file
    response = CLIENT.recognize(config, audio)
    transcript = ""
    if len(response.results):
        transcript = response.results[0].alternatives[0].transcript
    return transcript

def worker(ix):
    audio_file_name = "audio.mp3"
    transcript = speech_to_text(audio_file_name)

if __name__ == "__main__":
    n_cores = cpu_count() - 1
    freeze_support()  # for Windows support
    with Pool(n_cores) as p:
        max_ = len(range(2))
        with tqdm(total=max_) as pbar:
            for i, result in enumerate(tqdm(p.imap_unordered(worker, range(2)))):
                pbar.update()
Here is the error message that I get:
Traceback (most recent call last):
File "C:\Users\me\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "C:\Users\me\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "C:\Users\me\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "C:\Users\me\Anaconda3\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\me\Anaconda3\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\me\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\me\Desktop\outCaptcha\multiproc.py", line 10, in <module>
from google.cloud import speech
File "C:\Users\me\.virtualenvs\outCaptcha\lib\site-packages\google\cloud\speech.py", line 20, in <module>
from google.cloud.speech_v1 import SpeechClient
File "C:\Users\me\.virtualenvs\outCaptcha\lib\site-packages\google\cloud\speech_v1\__init__.py", line 17, in <module>
from google.cloud.speech_v1.gapic import speech_client
File "C:\Users\me\.virtualenvs\outCaptcha\lib\site-packages\google\cloud\speech_v1\gapic\speech_client.py", line 23, in <module>
import google.api_core.client_options
File "C:\Users\me\.virtualenvs\outCaptcha\lib\site-packages\google\api_core\__init__.py", line 23, in <module>
__version__ = get_distribution("google-api-core").version
File "C:\Users\me\AppData\Roaming\Python\Python37\site-packages\pkg_resources\__init__.py", line 481, in get_distribution
dist = get_provider(dist)
File "C:\Users\me\AppData\Roaming\Python\Python37\site-packages\pkg_resources\__init__.py", line 357, in get_provider
return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
File "C:\Users\me\AppData\Roaming\Python\Python37\site-packages\pkg_resources\__init__.py", line 900, in require
needed = self.resolve(parse_requirements(requirements))
File "C:\Users\me\AppData\Roaming\Python\Python37\site-packages\pkg_resources\__init__.py", line 786, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'google-api-core' distribution was not found and is required by the application
Thanks a lot for your help. Please let me know if you need any details about the issue.
In my case this solved the problem:
easy_install --upgrade google-api-core
easy_install --upgrade google-cloud-speech
I hope this helps.
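For what it's worth, easy_install has long been deprecated; on a pip-based setup (an assumption about your environment) the equivalent upgrade should be:
pip install --upgrade google-api-core google-cloud-speech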
I am a newbie trying to learn the fastai data block API.
Here is the error; the code is exactly the same as in the tutorial:
from fastai.vision import *  # provides untar_data, URLs, get_annotations, ObjectItemList, ...

coco = untar_data(URLs.COCO_TINY)
path = coco/'train.json'
images, lbl_bbox = get_annotations(coco/'train.json')
img2bbox = dict(zip(images, lbl_bbox))
get_y_func = lambda o: img2bbox[o.name]
data = (ObjectItemList.from_folder(coco)
        .split_by_rand_pct()
        .label_from_func(get_y_func)
        .transform(get_transforms(), tfm_y=True)
        .databunch(bs=1, num_workers=0, collate_fn=bb_pad_collate))
data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6, 6))
Then the error is:
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\IPython\core\interactiveshell.py", line
3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-7-25e60680c0ba>", line 15, in <module>
data.show_batch(rows=2, ds_type=DatasetType.Valid, figsize=(6,6))
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\fastai\basic_data.py", line 185, in show_batch
x,y = self.one_batch(ds_type, True, True)
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\fastai\basic_data.py", line 168, in one_batch
try: x,y = next(iter(dl))
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\fastai\basic_data.py", line 75, in __iter__
for b in self.dl: yield self.proc_batch(b)
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\torch\utils\data\dataloader.py", line 348,__next__
data = _utils.pin_memory.pin_memory(data)
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line
55, in pin_memory
return [pin_memory(sample) for sample in data]
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line
55, in <listcomp>
return [pin_memory(sample) for sample in data]
File "D:\Anaconda3\envs\pytorch-gpu\lib\site-packages\torch\utils\data\_utils\pin_memory.py", line
47, in pin_memory
return data.pin_memory()
RuntimeError: CUDA error: unknown error
Regarding this error, the forum posts I found all suggest changing the DataLoader parameters, but the DataLoader does not seem to be used directly here.
How would I go about this?
I also got this error. For me it was fixed by adding the following two lines at the top of the notebook:
import os
os.environ['CUDA_VISIBLE_DEVICES']='2'
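Note that '2' here exposes only the third GPU on that particular machine; adjust the value to a GPU index that exists on yours (for example '0' on a single-GPU box).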
I am new to pysnmp and nameko. I have been assigned the task of creating a service in the nameko framework that performs an SNMP GET request using the pysnmp library.
Below is the code I have tried:
from pysnmp.hlapi import *
from nameko.rpc import rpc

class GreetingService(object):
    name = "greeting_service"

    @rpc
    def getFunc(self, oid):
        errorIndication, errorStatus, errorIndex, varBinds = next(
            getCmd(SnmpEngine(),
                   CommunityData('public', mpModel=0),
                   UdpTransportTarget(('snmp.live.gambitcommunications.com', 161)),
                   ContextData(),
                   ObjectType(ObjectIdentity('SNMPv2-MIB', oid, 0)))
        )
        if errorIndication:
            print(errorIndication)
        elif errorStatus:
            print('%s at %s' % (errorStatus.prettyPrint(),
                                errorIndex and varBinds[int(errorIndex) - 1][0] or '?'))
        else:
            for varBind in varBinds:
                print(' = '.join([x.prettyPrint() for x in varBind]))

if __name__ == "__main__":
    GreetingService().getFunc('sysName')
When I try to start the service from the terminal with the following command:
$ nameko run helloworld
I get the following error message:
syed#syed-ThinkPad-E480:~/Pysnmp$ nameko run helloworld
Traceback (most recent call last):
File "/home/syed/.local/bin/nameko", line 11, in <module>
sys.exit(main())
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/main.py", line 112, in main
args.main(args)
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/commands.py", line 110, in main
main(args)
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/run.py", line 181, in main
import_service(path)
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/run.py", line 71, in import_service
if inspect.getmembers(potential_service, is_entrypoint):
File "/usr/lib/python3.7/inspect.py", line 354, in getmembers
if not predicate or predicate(value):
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/run.py", line 35, in is_entrypoint
return hasattr(method, ENTRYPOINT_EXTENSIONS_ATTR)
File "/home/syed/.local/lib/python3.7/site-packages/pyasn1/type/base.py", line 221, in __getattr__
raise error.PyAsn1Error('Attempted "%s" operation on ASN.1 schema object' % attr)
pyasn1.error.PyAsn1Error: Attempted "nameko_entrypoints" operation on ASN.1 schema object
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 63, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
from apport.report import Report
File "/usr/lib/python3/dist-packages/apport/report.py", line 30, in <module>
import apport.fileutils
File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 23, in <module>
from apport.packaging_impl import impl as packaging
File "/usr/lib/python3/dist-packages/apport/packaging_impl.py", line 24, in <module>
import apt
File "/usr/lib/python3/dist-packages/apt/__init__.py", line 23, in <module>
import apt_pkg
ModuleNotFoundError: No module named 'apt_pkg'
Original exception was:
Traceback (most recent call last):
File "/home/syed/.local/bin/nameko", line 11, in <module>
sys.exit(main())
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/main.py", line 112, in main
args.main(args)
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/commands.py", line 110, in main
main(args)
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/run.py", line 181, in main
import_service(path)
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/run.py", line 71, in import_service
if inspect.getmembers(potential_service, is_entrypoint):
File "/usr/lib/python3.7/inspect.py", line 354, in getmembers
if not predicate or predicate(value):
File "/home/syed/.local/lib/python3.7/site-packages/nameko/cli/run.py", line 35, in is_entrypoint
return hasattr(method, ENTRYPOINT_EXTENSIONS_ATTR)
File "/home/syed/.local/lib/python3.7/site-packages/pyasn1/type/base.py", line 221, in __getattr__
raise error.PyAsn1Error('Attempted "%s" operation on ASN.1 schema object' % attr)
pyasn1.error.PyAsn1Error: Attempted "nameko_entrypoints" operation on ASN.1 schema object
Please help me understand whether what I have tried is the correct approach, and if it is wrong, how to rectify the mistake.
Any help will be appreciated.
Thanks in advance.
It has something to do with the way nameko hooks up your code...
It seems to try looking up a nameko_entrypoints attribute on every object it can find in your module, eventually running into ASN.1 schema objects (which are sacred and should not be used for anything other than blueprinting purposes).
My suggestion would be to replace from pysnmp.hlapi import * with specific imports of pysnmp classes/functions you are using in your code. That should hopefully hide fragile pieces out of nameko's sight.
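A minimal sketch of that suggestion (the import list below is an assumption based on the names your snippet actually uses):
# Replace the star import with explicit imports of the pysnmp names used,
# so nameko's module scan never touches pysnmp's ASN.1 schema objects.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)
from nameko.rpc import rpc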
I am trying to continue with a Rasa chatbot that I have not used since September, and I am running into a syntax problem with the TensorFlow dependency. I do not know whether this is because my dependencies need updating, because I use Python 3.7 as suggested by people on GitHub, or for some other reason.
(moo_env) C:\Users\antoi\Documents\Programming\projects\moodbot>py train_online.py
Traceback (most recent call last):
File "train_online.py", line 37, in <module>
nlu_interpreter = RasaNLUInterpreter('./models/nlu/default/moodnlu')
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\rasa_core\interpreter.py", line 221, in __init__
self._load_interpreter()
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\rasa_core\interpreter.py", line 237, in _load_interpreter
self.interpreter = Interpreter.load(self.model_directory)
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\rasa_nlu\model.py", line 276, in load
skip_validation)
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\rasa_nlu\model.py", line 298, in create
components.validate_requirements(model_metadata.component_classes)
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\rasa_nlu\components.py", line 49, in validate_requirements
from rasa_nlu import registry
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\rasa_nlu\registry.py", line 23, in <module>
from rasa_nlu.classifiers.embedding_intent_classifier import \
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\rasa_nlu\classifiers\embedding_intent_classifier.py", line 32, in <module>
import tensorflow as tf
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\tensorflow\__init__.py", line 22, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\antoi\Documents\Programming\projects\moodbot\moo_env\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 114
def TFE_ContextOptionsSetAsync(arg1, async):
SyntaxError: invalid syntax
You can find my code in my GitHub repository.
The reason is this line in pywrap_tensorflow_internal.py:
def TFE_ContextOptionsSetAsync(arg1, async):
async and await were introduced in Python 3.5 and became reserved keywords in Python 3.7, so on your Python they can no longer be used as identifiers. I assume you use a very outdated version of TensorFlow.
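A sketch of the usual remedy; the exact version to install depends on what your Rasa release pins, so check its requirements first:
pip show tensorflow               # see which version is currently installed
pip install --upgrade tensorflow  # or pin the version your Rasa release requires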
I'm trying to carry out the tutorial named "Training a classifier" with PyTorch.
When trying to debug this part of the code:
import matplotlib.pyplot as plt
import numpy as np

# functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# show images
imshow(torchvision.utils.make_grid(images))

# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
I get this error message :
Files already downloaded and verified Files already downloaded and verified
Files already downloaded and verified Files already downloaded and verified Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 277, in
_fixup_main_from_path
run_name="__mp_main__")
File "D:\Anaconda\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\Anaconda\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\Anaconda\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
dataiter = iter(trainloader)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 33, in __init__
prep_data = spawn.get_preparation_data(process_obj._name)
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 143, in get_preparation_data
_check_not_importing_main()
File "D:\Anaconda\lib\multiprocessing\spawn.py", line 136, in
_check_not_importing_main
is not going to be frozen to produce an executable.)
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Traceback (most recent call last):
File "d:\Yggdrasil\Programmation\PyTorch\TutorialCIFAR10.py", line 36, in <module>
dataiter = iter(trainloader)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 451, in __iter__
return _DataLoaderIter(self)
File "D:\Anaconda\lib\site-packages\torch\utils\data\dataloader.py", line 239, in __init__
w.start()
File "D:\Anaconda\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "D:\Anaconda\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "D:\Anaconda\lib\multiprocessing\popen_spawn_win32.py", line 65, in
__init__
reduction.dump(process_obj, to_child)
File "D:\Anaconda\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe
All the previous lines in the tutorial are working perfectly.
Does someone know how to solve this, please?
Thanks a lot in advance
This happens because Windows cannot run this DataLoader with 'num_workers' greater than 0.
If you look at where the trainloader comes from, you can see:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
We need to change 'num_workers' to 0, like this:
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)
Every trainloader needs to be changed like this.
Got the same error. The following workaround works for me:
def run():
    ...  # put the data-loading / training code here

if __name__ == '__main__':
    run()
This doesn't look like a PyTorch problem. Try executing the code in a Jupyter notebook and doing some general environment troubleshooting.
You need to add an if-clause protection, as stated in the PyTorch docs:
https://pytorch.org/docs/stable/notes/windows.html#usage-multiprocessing
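A minimal sketch of that guard applied to the tutorial flow (the dataset setup is paraphrased from the CIFAR-10 tutorial; the ./data path is an assumption):
import torch
import torchvision
import torchvision.transforms as transforms

def main():
    # Dataset and loader setup from the tutorial, now inside main() so that
    # spawned worker processes can import this module without re-running it.
    transform = transforms.Compose([transforms.ToTensor()])
    trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                            download=True, transform=transform)
    trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                              shuffle=True, num_workers=2)
    images, labels = next(iter(trainloader))  # works because workers start under the guard

if __name__ == '__main__':
    main()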