Segmentation fault when loading the sample model face-detection-adas-0001.xml with OpenVINO on Raspberry Pi 4 - openvino

When I run the sample, specifying the model and the path to the input image:
./armv7l/Release/object_detection_sample_ssd -m ./open_model_zoo/tools/downloader/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml -d MYRIAD -i test.jpg
I get a segmentation fault:
[ INFO ] InferenceEngine:
IE version ......... 2021.4.2
Build ........... 2021.4.2-3974-e2a469a3450-releases/2021/4
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ] test.jpg
[ INFO ] Loading Inference Engine
[ INFO ] Device info:
MYRIAD
myriadPlugin version ......... 2021.4.2
Build ........... 2021.4.2-3974-e2a469a3450-releases/2021/4
[ INFO ] Loading network files:
[ INFO ] ./open_model_zoo/tools/downloader/intel/face-detection-adas-0001/FP32/face-detection-adas-0001.xml
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the device
Segmentation fault
I have searched everywhere, but found no solution to this problem. Please help me.

This error can be resolved by adding -DCMAKE_CXX_FLAGS="-march=armv7-a" to the cmake command when you build the Object Detection SSD C++ Sample:
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_FLAGS="-march=armv7-a" /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
make -j2 object_detection_sample_ssd
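Putting the whole rebuild together, a minimal sketch might look like the following. The install prefix /opt/intel/openvino_2021 and the build directory are assumptions; adjust them to your setup:

```shell
# Load the OpenVINO environment variables (path is an assumption;
# adjust to where your OpenVINO 2021 toolkit is installed)
source /opt/intel/openvino_2021/bin/setupvars.sh

# Configure the samples out-of-tree with the armv7-a flag,
# then build only the SSD sample
mkdir -p ~/build_samples && cd ~/build_samples
cmake -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_CXX_FLAGS="-march=armv7-a" \
      /opt/intel/openvino_2021/deployment_tools/inference_engine/samples/cpp
make -j2 object_detection_sample_ssd
```

The -march=armv7-a flag matters here because without it the compiler may emit instructions that misbehave on the Raspberry Pi 4's 32-bit userland, which is one known cause of this crash.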

Related

sdl2 - ImportError: DLL load failed while importing _window_sdl2: The specified module could not be found

I am facing this issue while trying to run my Kivy ML application in PowerShell or cmd.
I have installed all the Kivy dependencies, but I still get the same error. I have also tried running it in a virtual environment.
(kivy_venv) PS C:\Users\thiru\Downloads\exe file> python3 main.py
[INFO ] [Logger ] Record log in C:\Users\thiru\.kivy\logs\kivy_22-01-24_7.txt
[INFO ] [deps ] Successfully imported "kivy_deps.gstreamer" 0.3.3
[INFO ] [deps ] Successfully imported "kivy_deps.angle" 0.3.1
[INFO ] [deps ] Successfully imported "kivy_deps.glew" 0.3.0
[INFO ] [deps ] Successfully imported "kivy_deps.sdl2" 0.3.1
[INFO ] [Kivy ] v2.0.0
[INFO ] [Kivy ] Installed at "C:\Users\thiru\AppData\Roaming\Python\Python39\site-packages\kivy\__init__.py"
[INFO ] [Python ] v3.9.9 (main, Jan 15 2022, 01:02:02) [MSC v.1929 64 bit (AMD64)]
[INFO ] [Python ] Interpreter at "c:\users\thiru\appdata\local\activestate\cache\cc3d5c70\python.exe"
2022-01-24 14:14:10.715828: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2022-01-24 14:14:10.723572: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[INFO ] [Factory ] 186 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_pil (img_sdl2, img_ffpyplayer ignored)
[INFO ] [Text ] Provider: pil(['text_sdl2'] ignored)
[CRITICAL] [Window ] Unable to find any valuable Window provider. Please enable debug logging (e.g. add -d if running from the command line, or change the log level in the config) and re-run your app to identify potential causes
sdl2 - ImportError: DLL load failed while importing _window_sdl2: The specified module could not be found.
File "C:\Users\thiru\AppData\Roaming\Python\Python39\site-packages\kivy\core\__init__.py", line 58, in core_select_lib
mod = __import__(name='{2}.{0}.{1}'.format(
File "C:\Users\thiru\AppData\Roaming\Python\Python39\site-packages\kivy\core\window\window_sdl2.py", line 27, in <module>
from kivy.core.window._window_sdl2 import _WindowSDL2Storage
[CRITICAL] [App ] Unable to get a Window, abort.

Why is the device with "CPU" name not registered in the InferenceEngine when using openvino?

Description:
I have completed the installation of the OpenVINO toolkit and built the samples provided in deployment_tools/open_model_zoo/demos. When I run ./segmentation_demo, I get the error below. Has anyone run into the same problem? Thanks for your time.
on my machine:
(base) [root@VM-218-78-centos ~/omz_demos_build/intel64/Release]# ./segmentation_demo -i 0 -m /data/chriisyang/fastseg-small/fastseg-small_t15_108.xml
[ INFO ] InferenceEngine: 0x7efdd1f31090
[ INFO ] Parsing input parameters
[ INFO ] Device info
[ ERROR ] Device with "CPU" name is not registered in the InferenceEngine
I have validated the Segmentation Demo with OpenVINO 2021.3 and everything works fine. Make sure your machine meets the System Requirements.

Kivy 2.0 Installation Difficulties

This is my first time trying to use Kivy, and I haven't been able to run a base code.
from kivy.app import App
from kivy.uix.button import Button

class TestApp(App):
    def build(self):
        return Button(text='Hello World')

TestApp().run()
And my error log is the following:
C:\Users\Owner\AppData\Local\Microsoft\WindowsApps\python3.9.exe C:/Users/Owner/Desktop/python/BeginnerProjects/KivyTest/main.py
[INFO ] [Logger ] Record log in
C:\Users\Owner\.kivy\logs\kivy_21-02-11_57.txt
[INFO ] [deps ] Successfully imported "kivy_deps.gstreamer" 0.3.1
[INFO ] [deps ] Successfully imported "kivy_deps.angle" 0.3.0
[INFO ] [deps ] Successfully imported "kivy_deps.glew" 0.3.0
[INFO ] [deps ] Successfully imported "kivy_deps.sdl2" 0.3.1
[INFO ] [Kivy ] v2.0.0
[INFO ] [Kivy ] Installed at "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\kivy\__init__.py"
[INFO ] [Python ] v3.9.1 (tags/v3.9.1:1e5d33e, Dec 7 2020, 17:08:21) [MSC v.1927 64 bit (AMD64)]
[INFO ] [Python ] Interpreter at "C:\Users\Owner\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\python.exe"
[INFO ] [Factory ] 186 symbols loaded
[INFO ] [Image ] Providers: img_tex, img_dds, img_pil (img_sdl2, img_ffpyplayer ignored)
[INFO ] [Text ] Provider: pil(['text_sdl2'] ignored)
[CRITICAL] [Window ] Unable to find any valuable Window provider. Please enable debug logging (e.g. add -d if running from the command line, or change the log level in the config) and re-run your app to identify potential causes
sdl2 - ImportError: DLL load failed while importing _window_sdl2: The specified module could not be found.
File "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\kivy\core\__init__.py", line 58, in core_select_lib
mod = __import__(name='{2}.{0}.{1}'.format(
File "C:\Users\Owner\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\kivy\core\window\window_sdl2.py", line 27, in <module>
from kivy.core.window._window_sdl2 import _WindowSDL2Storage
[CRITICAL] [App ] Unable to get a Window, abort.
Process finished with exit code 1
I've tried reinstalling Kivy and installing all the dependencies; nothing has changed these results. Does anyone have an idea how I can fix this?
This looks like a duplicate.
I had this error when I didn't name the main file main.py.

Elasticsearch IP is different from the actual IP

Hi guys, I am new to Ubuntu and Elasticsearch, so please bear with me if I sound silly.
I am using Windows 8 and installed Ubuntu in Oracle's VirtualBox. When I ran ifconfig in the terminal, I got the following IP:
> 172.16.49.21
I have installed Elasticsearch 2.3 on my Ubuntu OS and ran ./elasticsearch to start the Elasticsearch service. At startup, I saw the following messages in the terminal:
anand@anand-VirtualBox:~/Downloads/elasticsearch-2.3.1/bin$ ./elasticsearch
[2016-04-12 21:14:25,567][INFO ][node ] [Fenris Wolf] version[2.3.1], pid[3168], build[bd98092/2016-04-04T12:25:05Z]
[2016-04-12 21:14:25,568][INFO ][node ] [Fenris Wolf] initializing ...
[2016-04-12 21:14:26,653][INFO ][plugins ] [Fenris Wolf] modules [lang-groovy, reindex, lang-expression], plugins [], sites []
[2016-04-12 21:14:26,691][INFO ][env ] [Fenris Wolf] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [86gb], net total_space [95gb], spins? [possibly], types [ext4]
[2016-04-12 21:14:26,692][INFO ][env ] [Fenris Wolf] heap size [1007.3mb], compressed ordinary object pointers [true]
[2016-04-12 21:14:26,692][WARN ][env ] [Fenris Wolf] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-04-12 21:14:29,439][INFO ][node ] [Fenris Wolf] initialized
[2016-04-12 21:14:29,439][INFO ][node ] [Fenris Wolf] starting ...
[2016-04-12 21:14:29,613][INFO ][transport ] [Fenris Wolf] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2016-04-12 21:14:29,617][INFO ][discovery ] [Fenris Wolf] elasticsearch/ltT6fvOEQAWS9NDzZXsNig
[2016-04-12 21:14:32,696][INFO ][cluster.service ] [Fenris Wolf] new_master {Fenris Wolf}{ltT6fvOEQAWS9NDzZXsNig}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-04-12 21:14:32,711][INFO ][http ] [Fenris Wolf] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2016-04-12 21:14:32,712][INFO ][node ] [Fenris Wolf] started
[2016-04-12 21:14:32,769][INFO ][gateway ] [Fenris Wolf] recovered [0] indices into cluster_state
From the above terminal output, one can see that the IP and port on which Elasticsearch is running are
> 127.0.0.1:9200
My questions are:
Why is the Elasticsearch service not running on the actual IP 172.16.49.21?
What changes do I need to make for the service to run on the actual IP 172.16.49.21?
If the change is done successfully, will I be able to access Elasticsearch on that IP from my Windows machine?
Elasticsearch binds to localhost by default. If you want anything else, you will need to do some network configuration as described in the Elasticsearch manual. You can be new to Ubuntu and Elasticsearch, but you should do some homework before posting a question such as yours to this site; this came up in the first Google hit.
Whether you can access the VM guest from your Windows host depends on how you set up networking for your VM. I am assuming you're using VirtualBox based on your comments. If you can access Elasticsearch on the guest but not from the host, make sure the network on your guest is bridged, not NAT.
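For reference, a minimal sketch of the relevant setting in config/elasticsearch.yml for Elasticsearch 2.x; the address below is the guest IP from the question, so substitute your own:

```yaml
# Bind and publish on the VM's LAN address instead of loopback.
# 172.16.49.21 is the guest IP from the question; replace with yours.
network.host: 172.16.49.21

# Alternatively, bind on all interfaces (convenient for testing,
# but do not expose an unsecured node to untrusted networks):
# network.host: 0.0.0.0
```

After restarting the node, the publish_address lines in the startup log should show the configured IP instead of 127.0.0.1, and the HTTP API should be reachable on port 9200 from the Windows host when the VM network is bridged.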

crm_mon command in newer Pacemaker does not behave like the old version

I have installed Pacemaker and Corosync on my Red Hat 7 machine.
I do not see some resources (stopped resources) when a node goes down (or a service goes down). Here is my example:
With the old pacemaker rpms, it works normally:
Online: [ node1 ]
OFFLINE: [ node2 ]
Clone Set: test [ping]
Started: [ node1 ]
Stopped: [ node2 ]
Pacemaker version: pacemaker-1.1.10-29.el7.x86_64.rpm
With the new pacemaker rpms:
Online: [ node1 ]
OFFLINE: [ node2 ]
Clone Set: test [ping]
Started: [ node2 ]
Pacemaker version: pacemaker-1.1.13-10.el7.x86_64.rpm
I want to use the new pacemaker rpms and still see all resources as in the old behavior. What should I do?
Use crm_mon -r (or --inactive) to show all resources, including inactive (stopped) ones.
