YOLO error when running offline on CPU: names: Using default 'data/names.list', Couldn't open file: data/names.list

I have implemented a custom object detector using YOLO to run offline on CPU.
When I run this command on the CPU:
!./darknet detector demo data/obj.data cfg/yolov4-obj.cfg yolov4-obj_final.weights -dont_show MVI_1615_VIS.avi -i 0 -out_filename results.avi
I get the following error:
GPU isn't used
OpenCV version: 3.2.0
names: Using default 'data/names.list'
Couldn't open file: data/names.list
Kindly help.

I came across this error recently. For me, it was a simple mistake in the obj.data file.
Instead of the correct version which is:
names = obj.names
I had:
names - obj.names
That's why it couldn't find the obj.names file.
Make sure there aren't any errors in the obj.data file. For more info, check out: https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects
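For reference, a typical obj.data for a single-class detector looks roughly like this (the class count and file paths are placeholders for your own setup):
classes = 1
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/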

Related

How to check if the environment variable "PROJ_LIB" is defined and how to unset it? (PyQGIS Standalone Script Executor)

I just tried the standalone PyQGIS application by running the custom script "Proximity.py"* in a VS Code project, without needing a GUI (such as QGIS).
But when I run the Python program I get the following message:
proj_create_from_database: C:\Program Files\PostgreSQL\14\share\contrib\postgis-3.2\proj\proj.db contains DATABASE.LAYOUT.VERSION.MINOR = 0 whereas a number >= 2 is expected. It comes from another PROJ installation. (see also: Error Message after launching the configuration (launch.json) from VS Code (when pressing F5))
I'm trying this online example with the following installations:
PostgreSQL 14
Python39
.vscode\extensions\ms-python.python-2022.4.1\pythonFiles\lib\python\debugpy\launcher
osgeo4w-setup.exe (including QGIS LTR)
I read that there is a solution that consists of undefining PROJ_LIB before importing pyproj or osgeo: del os.environ['PROJ_LIB'], as described under this link. If this is also supposed to be the correct solution in this case, can someone help me with step-by-step instructions (for dummies)?
. * The "Proximity.py" script is a pyqgis standalone example from "https://github.com/MarByteBeep/pyqgis-standalone"
Finally, I found a solution that lets me run the "standalone PyQGIS"* example "Proximity" (provided by MarByteBeep).
This solution works without needing to launch the configuration file "launch.json" described above, and therefore avoids touching the environment variable "PROJ_LIB" at all to work around the issue.
I simply added the following two code lines (lines 2 and 3 below) to the Python file "main.py" so that the "processing" plugin (originally imported around line 8 of "main.py") can be found, then saved the file and ran it.
Line 1: from qgis.core import QgsApplication
Line 2: import sys
Line 3: sys.path.append(r'C:\Program Files\QGIS 3.24.1\apps\qgis\python\plugins')
Line 4: qgs = QgsApplication([], False)
Line 5: ...
The Proximity example is based on the answer by "Mar Tjin" to the following question: "Looking for manual on how to properly setup standalone PyQGIS without GUI"
* By "standalone PyQGIS" I refer to code/scripts that can be run outside the QGIS GUI (i.e. the QGIS Desktop/Server application), in my case from the external editor VS Code.

Sybase 16 startserver failed due to missing libsapcrypto.so

We've installed Sybase 16 Express on our Linux box, and it was able to start up right after the installation. When we recently tried restarting it with the startserver -f RUN_FILE command, it failed to find the libsapcrypto.so file.
~/sap/ASE-16_0/bin> ../sap/ASE-16_0/bin/dataserver: error while loading shared libraries: libsapcrypto.so: cannot open shared object file: No such file or directory
We searched for this file; multiple matches were found in the following paths:
./DM/OCS-16_0/lib3p/libsapcrypto.so
./DM/OCS-16_0/lib3p64/libsapcrypto.so
./DM/OCS-16_0/devlib3p64/libsapcrypto.so
./DM/OCS-16_0/devlib3p/libsapcrypto.so
./DM/REP-16_0/lib64/libsapcrypto.so
./DataAccess/ODBC/lib/libsapcrypto.so
./DataAccess64/ODBC/lib/libsapcrypto.so
./OCS-16_0/lib3p/libsapcrypto.so
./OCS-16_0/lib3p64/libsapcrypto.so
./OCS-16_0/devlib3p64/libsapcrypto.so
./OCS-16_0/devlib3p/libsapcrypto.so
Since this hasn't been answered yet, running this command worked for me:
. /opt/sap/SYBASE.sh
Note the leading dot: this syntax sources the script so the environment variables are set in the current terminal session, as opposed to executing it like this:
/opt/sap/SYBASE.sh
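In other words, sourcing runs the script in the current shell, so the exported variables persist for the commands that follow; executing it directly only sets them in a subshell that exits immediately. A quick check (assuming SYBASE.sh exports LD_LIBRARY_PATH, as it normally does):
. /opt/sap/SYBASE.sh           # note the leading dot (equivalent to: source /opt/sap/SYBASE.sh)
echo "$LD_LIBRARY_PATH"        # should now include the OCS-16_0 lib directories that contain libsapcrypto.so
startserver -f RUN_FILE        # start the server from this same session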

cookiecutter: what's the easiest way to specify variables for the prompts?

Is there anything that offers replay-type functionality, by pointing at a predefined prompt-answer file?
What works and what I'd like to achieve.
Let's take an example, using a cookiecutter to prep a Python package for PyPI:
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git
You've downloaded /Users/jluc/.cookiecutters/cookiecutter-pypackage before. Is it okay to delete and re-download it? [yes]:
full_name [Audrey Roy Greenfeld]: Spartacus 👈 constant for me/my organization
email [audreyr@example.com]: spartacus@example.com 👈 constant for me/my organization
...
project_name [Python Boilerplate]: GladiatorRevolt 👈 this will vary.
project_slug [q]: gladiator-revolt 👈 this too
...
OK, done.
Now, I can easily redo this, for this project, via:
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --replay
This is great!
What I want:
Say I create another project, UnleashHell.
I want to prep a file somehow that has my developer info and the project-level info for Unleash, and I want to be able to run it multiple times against this template without having to deal with prompts. This particular PyPI template gets regular updates; for example, Python 2.7 support has been dropped.
The problem:
A --replay will just inject the last run for this cookiecutter template. If it was run against a different pypi project, too bad.
I'm good with my developer-level info, but I need to vary all the project level info.
I tried copying the replay file via:
cp ~/.cookiecutter_replay/cookiecutter-pypackage.json unleash.json
Edit unleash.json to reflect necessary changes.
Then specify it via --config-file flag
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --config-file unleash.json
I get an ugly error; it wants YAML, apparently.
cookiecutter.exceptions.InvalidConfiguration: Unable to parse YAML file .../000.packaging/unleash.json. Error: None of the known patterns match for {
"cookiecutter": {
"full_name": "Spartacus",
No problem, json2yaml to the rescue.
That doesn't work either.
cookiecutter.exceptions.InvalidConfiguration: Unable to parse YAML file ./cookie.yaml. Error: Unable to determine type for "
full_name: "Spartacus"
I also tried a < stdin redirect:
cookiecutter.prompts.txt:
yes
Spartacus
...
It doesn't seem to use it and just aborts.
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git < ./cookiecutter.prompts.txt
You've downloaded ~/.cookiecutters/cookiecutter-pypackage before. Is it okay to delete and re-download it? [yes]
: full_name [Audrey Roy Greenfeld]
: email [audreyr@example.com]
: Aborted
I suspect I am missing something obvious, not sure what. To start with, what is the intent and format expected for the --config file?
Debrief: how I got it working from the accepted answer.
I took the accepted answer but adjusted it for ~/.cookiecutterrc usage. It works, but the format is not super clear, especially for the rc file, which has to be YAML even though that's not often the case with rc files.
This ended up working:
file ~/.cookiecutterrc:
Without nesting under default_context I got tons of unhelpful YAML parse errors (on a valid YAML doc).
default_context:
  # ... cut out for privacy
  add_pyup_badge: y
  command_line_interface: "Click"
  create_author_file: "y"
  open_source_license: "MIT license"
  # the names to use here are:
  #   full_name:
  #   email:
  #   github_username:
  #   project_name:
  #   project_slug:
  #   project_short_description:
  #   pypi_username:
  #   version:
  #   use_pytest:
  #   use_pypi_deployment_with_travis:
  #   add_pyup_badge:
  #   command_line_interface:
  #   create_author_file:
  #   open_source_license:
I still could not get a combination of ~/.cookiecutterrc and a project-specific config.yaml to work. Too bad the expected configuration format is so lightly documented.
So I will use the .rc but enter the project name, slug and description each time. Oh well, good enough for now.
You are close.
Try this: cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --no-input --config-file config.yaml
The --no-input parameter suppresses the terminal prompts; it is optional, of course.
The config.yaml file could look like this:
default_context:
  full_name: "Audrey Roy"
  email: "audreyr@example.com"
  github_username: "audreyr"
cookiecutters_dir: "/home/audreyr/my-custom-cookiecutters-dir/"
replay_dir: "/home/audreyr/my-custom-replay-dir/"
abbreviations:
  pp: https://github.com/audreyr/cookiecutter-pypackage.git
  gh: https://github.com/{0}.git
  bb: https://bitbucket.org/{0}
Reference to this example file: https://cookiecutter.readthedocs.io/en/1.7.0/advanced/user_config.html
You probably just need the default_context block since that is where the user input goes.
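For the per-project workflow described in the question, one approach (file name and values here are only illustrative) is to keep a project-specific YAML file, say unleash.yaml, that nests both the constant developer info and the varying project fields under default_context:
default_context:
  full_name: "Spartacus"
  email: "spartacus@example.com"
  project_name: "UnleashHell"
  project_slug: "unleash-hell"
  project_short_description: "Unleash hell."
Then run it non-interactively against the template:
cookiecutter https://github.com/audreyr/cookiecutter-pypackage.git --no-input --config-file unleash.yaml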

Why do OpenGL-based VTK targets in drake executed via `bazel test` sometimes fail on Linux?

While a binary works with bazel run, when I run a test using bazel test, such as:
$ bazel test //systems/sensors:rgbd_camera_test
I encounter a slew of errors from VTK / OpenGL:
ERROR: In /vtk/Rendering/OpenGL2/vtkXOpenGLRenderWindow.cxx, line 820
vtkXOpenGLRenderWindow (0x55880715b760): failed to create offscreen window
ERROR: In /vtk/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 816
vtkXOpenGLRenderWindow (0x55880715b760): GLEW could not be initialized.
ERROR: In /vtk/Rendering/OpenGL2/vtkShaderProgram.cxx, line 453
vtkShaderProgram (0x5588071d5aa0): Shader object was not initialized, cannot attach it.
ERROR: In /vtk/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 1858
vtkXOpenGLRenderWindow (0x55880715b760): Hardware does not support the number of textures defined.
May I ask why this happens?
(Note: This post is a means to migrate from http://drake.mit.edu/faq.html to StackOverflow for user-based questions.)
The best workaround at the moment is to first mark the test as local in the BUILD.bazel file, either with local = 1 or tags = [.., "local"]. Doing so makes the specific target run without sandboxing, so that it has an environment similar to that of bazel run.
As an example, in systems/sensors/BUILD.bazel:
drake_cc_googletest(
    name = "rgbd_camera_test",
    # ...
    local = 1,
    # ...
)
If this does not work, then try running the test in Bazel without sandboxing:
$ bazel test --spawn_strategy=standalone //systems/sensors:rgbd_camera_test
Please note that you can also add --spawn_strategy=standalone to your ~/.bazelrc, but be aware that this means your development testing environment may deviate even more from other developers' testing environments.
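If you do go that route, the entry would look roughly like this (scoping it to the test command is an assumption about how broadly you want it applied):
# ~/.bazelrc
test --spawn_strategy=standalone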

erlang zip:unzip/1 {error, bad_central_directory} and {error, bad_eocd}

I have always used the Erlang stdlib function zip:unzip/1 successfully. Last night I hit a wall with this error:
E:\WimaxStatsParser-1.1>erl
Eshell V5.9.2 (abort with ^G)
1> zip:unzip("e:/WimaxStatsParser-1.1/in/SomeZipFile.zip").
{error,bad_central_directory}
2>
Can someone help explain the cause of this, and how I can get around it?
ADDITIONS
I got another error on a different file: {error,bad_eocd}. Please explain this as well.
I am not able to reproduce your problem with the information you give. There are two functions that may return this error:
get_cd_loop/5 and get_name_extra_comment/4 in stdlib-1.18.2/src/zip.erl.
It should be easy to debug (see the sketch after these steps):
copy the files zip.erl, zip.hrl and file.hrl into a working directory,
compile with the debug_info option; you will get the error message "Can't load module that resides in sticky dir", then leave the VM,
copy zip.beam into the stdlib.../ebin directory,
restart the VM in the working directory; you can now add breakpoints in the zip.erl source.
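For example, the session could look roughly like this (the breakpoint line is just a placeholder; pick the line in get_cd_loop/5 or get_name_extra_comment/4 you want to inspect):
%% in the working directory containing zip.erl, zip.hrl and file.hrl
1> c(zip, [debug_info]).   % compiles zip.beam; loading fails with the "sticky dir" message, which is fine
2> q().                    % leave the VM and copy zip.beam into .../lib/stdlib-1.18.2/ebin

%% after restarting the VM in the working directory
1> debugger:start().       % start the Debugger
2> int:ni(zip).            % interpret the module so breakpoints can be set
3> int:break(zip, 453).    % placeholder line number
4> zip:unzip("e:/WimaxStatsParser-1.1/in/SomeZipFile.zip").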
BR
Pascal.
