I wrote a face recognition script in Python with the LBPH algorithm, using cv2.face.createLBPHFaceRecognizer().
My problem is that the algorithm returns my label for any person it was not trained on. (If it is me it returns 1, but if it's another person it returns the same.) I want to know what I can do about it. I read something about a threshold but I don't know how to use it, and I read about a bug (Link to bug), but I don't know how to rebuild the library. So I want to know what you recommend: the threshold, rebuilding, or anything else.
So I had a wrong indentation in my code: I returned the label from inside the training loop with Python's return statement, so it stopped looping and only trained on one label and one image.
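About the threshold part: predict() returns both a label and a confidence value, which for LBPH is a distance (lower means a closer match), so unknown faces can be rejected by thresholding it. A minimal sketch of that decision logic; the threshold value 70.0 is a placeholder I made up, and you would need to tune it on your own validation data:

```python
def classify(label, confidence, threshold=70.0):
    """Map an LBPH prediction to a final label.

    LBPH 'confidence' is a distance: lower means a closer match.
    Anything above the threshold is treated as an unknown person (-1).
    70.0 is only a placeholder -- tune it on a validation set.
    """
    return label if confidence <= threshold else -1

# With OpenCV this would be used roughly like:
#   label, confidence = recognizer.predict(test_face)
#   label = classify(label, confidence)
print(classify(1, 45.2))   # close match: keeps label 1
print(classify(1, 120.8))  # far away: returns -1 (unknown)
```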
I am trying to determine the lag between seeing a dip (or rise) in a predictor metric and seeing the resulting dip (or rise) in a known response metric. I am not sure where to start; could someone put me on the right path?
For context, I would like to use R or Python and am familiar with statistics and machine learning. I am just searching for what method or modeling technique would be best to use and less about the code.
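One standard starting point is the cross-correlation function: compute the correlation between the predictor and the response at a range of candidate lags, and pick the lag where the correlation peaks. In R this is stats::ccf(); in Python, statsmodels provides a ccf as well. A dependency-free sketch of the idea, with made-up toy data:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a) ** 0.5
    sb = sum((v - mb) ** 2 for v in b) ** 0.5
    return cov / (sa * sb)

def best_lag(x, y, max_lag):
    """Lag k maximizing corr(x[t], y[t + k]): how far y trails x."""
    scores = {k: pearson(x[:len(x) - k] if k else x, y[k:])
              for k in range(max_lag + 1)}
    return max(scores, key=scores.get)

# Toy example: y is x delayed by 3 steps.
x = [1, 3, 2, 5, 4, 7, 6, 9, 8, 11, 10, 13]
y = [0, 0, 0] + x[:-3]
print(best_lag(x, y, 5))  # -> 3
```

One caveat: if both series share a trend, correlations are inflated at every lag, so it is common to detrend or difference the series first and run the cross-correlation on the residuals.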
I am currently trying to make EV-FlowNet work on my computer. EV-FlowNet is an open-source neural network for event-based optical flow estimation (full code here: https://github.com/daniilidis-group/EV-FlowNet). It is based on TensorFlow, but unfortunately I have no experience with this library, so I have a hard time figuring out why things are not working. I have downloaded the trained network, the input data, and the ground truth, and placed them in the folders listed in the README file. When I run 'test.py', it completes without errors; however, it never enters the main loop in which the results are visualized.
The condition for the main loop is this:
while not coord.should_stop():
coord is defined like this:
coord = tf.train.Coordinator()
and the threads are defined like this:
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
I have tried googling it, but all I could find was that the threads stop if any of them calls coord.request_stop(). Since I can't find anything in the code that would make them stop, I don't understand why coord.should_stop() is true from the very beginning. I know this question is quite vague, but since I have no experience with TensorFlow I am not sure what other information might be required, which is why I have included the link to the entire code. Thanks in advance!
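I can't run the repo here, but a common cause of this symptom is that the input reader threads stop immediately, for example because the data files aren't where the script expects them, and a finished (or crashed) queue runner then calls request_stop() for the whole group. A toy illustration of the Coordinator contract in plain Python (not TensorFlow code; all names here are made up):

```python
import threading

class ToyCoordinator:
    """Toy stand-in for tf.train.Coordinator to illustrate its contract:
    should_stop() flips to True once any thread calls request_stop()."""
    def __init__(self):
        self._stop = threading.Event()

    def request_stop(self):
        self._stop.set()

    def should_stop(self):
        return self._stop.is_set()

def runner(coord, items):
    # A reader thread that stops the whole group when it runs out of
    # input (or dies with an exception, e.g. a wrong file path).
    try:
        for _ in items:
            pass  # pretend to enqueue data
    finally:
        coord.request_stop()

coord = ToyCoordinator()
t = threading.Thread(target=runner, args=(coord, []))  # empty input
t.start()
t.join()
print(coord.should_stop())  # -> True: the main loop is never entered
```

So it may be worth double-checking the data paths the script resolves at runtime (e.g. by printing them just before the reader starts), since a reader that finds nothing to read can make should_stop() true before the first loop iteration.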
Here is my problem:
I must match two images: one from the project folder, which contains over 20,000 images, and one from a camera.
What have I done so far?
I can compare images with the basic OpenCV example code that I found in the documentation (OpenCV Doc). I can also compare and find an image using a hash of my image data set. That is very fast, but it only works for two exact copies of the same image: one as the query, the other as the target.
So, I need something as reliable as feature matching and as fast as hash methods, but I can't use machine learning or anything on that level; it should stay basic. Plus, I'm new to this stuff, so my term project is at risk.
Example scenario:
If I take a picture of an image from my data set on my computer's screen, many features of the original image will change. A human would have little trouble saying what is in that picture, but a comparison algorithm will struggle. Such a case rules out lots of basic comparison algorithms. A machine-learning algorithm could solve the problem, but using one is forbidden in my project.
Needs:
It must be fast.
It must be accurate.
It must be easy to understand.
Any help is okay. A piece of code, maybe an article or a tutorial. Even an advice or a topic title might be really helpful to me.
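One classic middle ground between exact hashing and full feature matching is a perceptual hash (aHash/dHash/pHash): shrink the image to a tiny grid, threshold it into a bit string, and compare bit strings by Hamming distance, so near-duplicates get near-identical hashes. A self-contained sketch of the simplest variant, average hash, on a grayscale image represented as a list of rows (in practice you would load images with OpenCV and precompute the hash of all 20,000 images once):

```python
def downsample(img, size=8):
    """Naively average-pool a grayscale image (list of rows) to size x size."""
    h, w = len(img), len(img[0])
    out = []
    for by in range(size):
        y0, y1 = by * h // size, (by + 1) * h // size
        row = []
        for bx in range(size):
            x0, x1 = bx * w // size, (bx + 1) * w // size
            block = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def ahash(img, size=8):
    """Average hash: one bit per cell -- is the cell brighter than the mean?"""
    pixels = [p for row in downsample(img, size) for p in row]
    mean = sum(pixels) / len(pixels)
    return ''.join('1' if p > mean else '0' for p in pixels)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hash strings."""
    return sum(a != b for a, b in zip(h1, h2))

# Demo: a slightly perturbed copy hashes the same; a different image doesn't.
left_right = [[10] * 8 + [200] * 8 for _ in range(16)]
brighter   = [[p + 5 for p in row] for row in left_right]
top_bottom = [[200] * 16 for _ in range(8)] + [[10] * 16 for _ in range(8)]
print(hamming(ahash(left_right), ahash(brighter)))    # -> 0 (near-duplicate)
print(hamming(ahash(left_right), ahash(top_bottom)))  # -> 32 (different)
```

A Hamming distance of only a few bits out of 64 usually means the same picture; the exact cutoff needs tuning on your data. For the screen-photo scenario, dHash or the DCT-based pHash are the usual next steps, and the Python imagehash package implements them if external libraries are allowed.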
I once saw a camera model identification challenge on Kaggle. One notebook there discusses how the noise pattern changes with the device. Maybe you should look into that and the other notebooks in the challenge. Thanks!
I am currently looking to dip my toes into deep learning after a few weeks reading some books and doing some more basic machine learning code. I found the MNIST digit database here http://yann.lecun.com/exdb/mnist/ and am currently trying to determine how to actually use the data.
The data appears to be saved in the IDX3 format, of which I am completely unfamiliar.
I have the training and test data sets saved as text files, but that seems to be fairly useless. For some reason, when I try to load them into Octave using the fopen command, the result is simply '-1'.
Does anyone know of the correct way to load this data into Octave? Any help would be greatly appreciated.
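For what it's worth, the IDX layout itself is tiny: two zero bytes, a type code (0x08 = unsigned byte for MNIST), the number of dimensions, one big-endian 32-bit size per dimension, and then the raw values. Note also that the files on that page are gzipped and must be decompressed first. A sketch of the parse in Python, on a tiny hand-built buffer; in Octave the equivalent would be fread on a file opened with the 'ieee-be' machine format:

```python
import struct

def read_idx(data):
    """Parse a raw (already gunzipped) IDX byte buffer.

    Returns (dims, values): the dimension sizes from the header and the
    flat list of unsigned-byte values (pixels or labels for MNIST).
    """
    zero1, zero2, type_code, ndim = struct.unpack_from('>BBBB', data, 0)
    assert zero1 == 0 and zero2 == 0, 'not an IDX file'
    dims = struct.unpack_from('>' + 'I' * ndim, data, 4)
    values = list(data[4 + 4 * ndim:])
    return dims, values

# Tiny hand-built example: a 2x2x2 "image set" with values 0..7.
fake = (struct.pack('>BBBB', 0, 0, 0x08, 3)
        + struct.pack('>III', 2, 2, 2)
        + bytes(range(8)))
print(read_idx(fake))  # -> ((2, 2, 2), [0, 1, 2, 3, 4, 5, 6, 7])
```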
Does this code work in Octave?
https://github.com/davidstutz/matlab-mnist-two-layer-perceptron/blob/master/loadMNISTImages.m
Note that if fopen returns -1, the file path is probably not correct.
I am new to Processing; I found it by searching for "draw with coding" and tried it. It seems that every time I modify the code, I have to stop and run it again to see the final result.
Is there any way to get the updated graphic without re-running? That would be much more convenient for creating simple figures.
If not, is there an alternative to Processing that can draw a graphic from code?
I've used TikZ in LaTeX, but that is just for LaTeX. I want something that lets me draw a figure by coding. I've suffered enough using software like CorelDRAW; it lacks the fundamental elegance of coding.
Thanks a lot!
Please have a look at the FluidForms libraries:
easy to set up
documentation and video tutorials
as long as you don't run into exceptions, you can live-code comfortably
if you prefix public variables with param, you also get sliders for free :)
Do check out the video tutorials, especially this one:
Also, if using Python isn't a problem I recommend having a look at:
NodeBox
Field
Python is a brilliant scripting language that makes prototyping/'live coding' easy (although it can also be compiled, and it plays nicely with C/C++). It is easy to pick up and a joy to use.
In Processing, you must re-run your program to see the changes graphically, unless you write code that takes user input to dynamically adjust what you are drawing. For creating user interfaces there is, for example, the controlP5 library (http://www.sojamo.de/libraries/controlP5/).
It doesn't support "live coding" (at least not that I know of).
You must re-run the code to see the new result.
If live coding is what you're looking for, check out Fluxus (http://www.pawfal.org/fluxus/) or Impromptu (http://en.wikipedia.org/wiki/Impromptu_(programming_environment)).