I've been working with the Hopper environment in OpenAI Gym and noticed that the observation clips the joint velocities, while the underlying true state is allowed to exceed these limits. Does anyone know why this decision was made, or is it an overlooked bug? Thanks!
I'm in the last year of my master's degree and my final project is about detecting changes in multivariate datasets (changepoint detection). I was looking for interesting datasets but couldn't find any :/ One of my ideas was, for example, the number of certain kinds of fish in the same spot (like predators and herbivores), or changes in the composition of the air. I was also looking for some astronomy datasets (maybe signals from many spectra of light from the same spot?).
Do you have any ideas?
If you don't have a specific domain constraint, I'd suggest focusing on the dataset itself: as soon as you find one that suits you, you're done. It's harder to search for a dataset on one specific topic.
You can look on Kaggle or in MLR.
I have checked five different methods for face detection.
1. Haar cascade
2. Dlib HOG
3. Python face_recognition module
4. DLib_CNN
5. OpenCV CNN
All these methods have some advantages and disadvantages, and I found that OpenCV_CNN works best out of these five algorithms. But for my application I need to detect the faces of people at a far distance, and even OpenCV_CNN is not working well for this (it detects the faces of people closer to the camera but not of the people far away). Is there any other algorithm that detects faces of people at a far distance?
One way is to do instance segmentation in order to get all the classes in the environment, including distant objects.
Once you have all the classes, you can draw a bounding box around the required far-off face region, upsample it, and send it to your face detection NN. Suppose your crop is 54x54x3; it would be upsampled to 224x224x3 and sent to your trained NN, as in the sketch below.
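As a rough illustration of the upsampling step (not an exact pipeline), here is a minimal sketch assuming you already have a bounding box for the distant face and some face detection model to call; `detector`, the box coordinates, and the 224x224 target size are all assumptions you would adjust for your own setup:

```python
import cv2

# Minimal sketch: upsample a small, far-away face crop before running detection.
# `frame` is a BGR image and (x, y, w, h) is a hypothetical bounding box obtained
# from an instance segmentation / region proposal step.
def detect_on_upsampled_crop(frame, box, detector, target_size=(224, 224)):
    x, y, w, h = box
    crop = frame[y:y + h, x:x + w]
    # Upsample the small crop (e.g. 54x54x3 -> 224x224x3) so the face detector
    # sees the face at a scale closer to what it was trained on.
    upsampled = cv2.resize(crop, target_size, interpolation=cv2.INTER_CUBIC)
    # `detector` is whatever face detection model you use (a placeholder here).
    return detector(upsampled)
```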
Face Detection: state-of-the-art practical considerations
Face Detection is often the first stage of a Computer Vision pipeline, so it is important for the algorithm to perform in real time. It is therefore worth knowing how the various face detection algorithms compare, and their pros and cons, so that you can use the right algorithm for your application. Many algorithms have been developed over the years.
Our recent favorite is YuNet because of its balance between speed and accuracy. Apart from that, RetinaFace is also very accurate, but it is a larger model and a little slower. We have compared the top 9 algorithms for Face Detection on some of the features to keep in mind while choosing a face detection algorithm:
Speed
Accuracy
Size of face
Robustness to occlusion
Robustness to lighting variation
Robustness to orientation or pose
You can check out the Face Detection ultimate guide that gives a brief overview of the popular face detection algorithms.
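If you want to try YuNet, a minimal sketch using OpenCV's FaceDetectorYN API (available in OpenCV >= 4.5.4) is below; the model filename and input image are assumptions, and you would download the YuNet ONNX model from the OpenCV model zoo yourself:

```python
import cv2

# Minimal sketch of YuNet face detection via OpenCV's FaceDetectorYN.
# The ONNX file name and the input image are assumptions for illustration.
model_path = "face_detection_yunet_2023mar.onnx"

img = cv2.imread("group_photo.jpg")
h, w = img.shape[:2]

detector = cv2.FaceDetectorYN.create(model_path, "", (w, h), 0.7)  # 0.7 = score threshold
detector.setInputSize((w, h))

_, faces = detector.detect(img)  # faces: Nx15 array (bbox, landmarks, score) or None
if faces is not None:
    for f in faces:
        x, y, bw, bh = map(int, f[:4])
        cv2.rectangle(img, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", img)
```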
I'm working on a face recognition project with Python & OpenCV. I can detect faces, but I have a problem:
I don't know how to make the system differentiate between real and fake faces from a 2D image.
If someone has any ideas, please help me.
Thank you.
There is a really good article (code included) by Adrian from pyimagesearch tackling this exact problem with a liveness detector.
Below is an extract from that article:
There are a number of approaches to liveness detection, including:
- Texture analysis, including computing Local Binary Patterns (LBPs) over face regions and using an SVM to classify the faces as real or spoofed.
- Frequency analysis, such as examining the Fourier domain of the face (see the sketch after this list).
- Variable focusing analysis, such as examining the variation of pixel values between two consecutive frames.
- Heuristic-based algorithms, including eye movement, lip movement, and blink detection. This set of algorithms attempts to track eye movement and blinks to ensure the user is not holding up a photo of another person (since a photo will not blink or move its lips).
- Optical flow algorithms, namely examining the differences and properties of optical flow generated from 3D objects and 2D planes.
- 3D face shape, similar to what is used in Apple's iPhone face recognition system, enabling the face recognition system to distinguish between real faces and printouts/photos/images of another person.
- Combinations of the above, enabling a face recognition system engineer to pick and choose the liveness detection models appropriate for their particular application.
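To make the frequency analysis item a bit more concrete, here is a minimal sketch (my own illustration, not from the article) of one possible feature: the share of spectral energy outside the low-frequency centre of a face crop's 2D FFT. The radius and any decision threshold are assumptions you would tune or learn from data:

```python
import cv2
import numpy as np

# Minimal sketch of a frequency-analysis feature for liveness detection:
# printed or replayed faces often show a different high-frequency energy
# profile than real ones. The radius below is purely illustrative.
def high_frequency_ratio(face_bgr, radius=10):
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    # Mask out the low-frequency centre and compare the remaining energy to the total.
    low_freq = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return spectrum[~low_freq].sum() / spectrum.sum()
```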
You can solve this problem using multiple methods. I'm listing some of them here; you can find a few more by referring to research papers.
Motion Approach: You can ask the user to blink or move, which is a convincing way to confirm they are real (most likely to work on a video dataset or sequential images).
Feature Approach: Extract useful features from an image and use them to make a binary classification decision: real or not.
Frequency Analysis: Examining the Fourier domain of the face.
Optical Flow algorithms: Namely examining the differences and properties of optical flow generated from 3D objects and 2D planes.
Texture Analysis: You can also compute Local Binary Patterns using OpenCV to classify the images as fake or not; refer to this link for details on this approach, and see the sketch below for a starting point.
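As a starting point for the texture-analysis route, here is a minimal sketch of LBP features plus an SVM using scikit-image and scikit-learn (the linked article uses its own pipeline; the function names, crop format, and LBP parameters here are assumptions):

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Minimal sketch of texture-based liveness detection with LBP + SVM.
# `faces` is a hypothetical list of grayscale face crops (2D numpy arrays)
# and `labels` marks each crop as real (1) or spoofed (0).
def lbp_histogram(gray, points=8, radius=1):
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    # Uniform LBP yields values in [0, points + 1], hence points + 2 bins.
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def train_liveness_svm(faces, labels):
    features = np.array([lbp_histogram(f) for f in faces])
    clf = SVC(kernel="linear", probability=True)
    clf.fit(features, labels)
    return clf  # clf.predict(lbp_histogram(new_face).reshape(1, -1)) -> real/spoofed
```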
We see pretty pictures of an error surface with a global minimum and the convergence of a neural network in many books. How can I visualize something similar in Keras, i.e., the error surface and how my model converges toward the global minimum error? Below is an example image of such illustrations, and this link has animated illustrations of different optimizers. I explored the TensorBoard log callback for this purpose but could not find any such thing. A little guidance would be appreciated.
The pictures and animations are made for didactic purposes, but the real error surface is completely unknown (or incredibly complex to understand or visualize). That's the whole idea behind using gradient descent.
We only know, at a single point, the direction in which the function increases, by computing the current gradient.
You could try to plot the path (line) you're following by recording the weight values and the error at each iteration, but then you'd face another problem: it's a massively multidimensional function, not actually a surface. The number of variables is the number of weights in the model (often thousands or even millions). This is impossible to visualize, or even to conceive of as a visual thing.
To plot such a surface, you'd have to manually vary all those thousands of weights to get the error for each arrangement. Besides the "impossible to visualize" problem, this would be excessively time consuming.
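What you can do instead is look at a low-dimensional slice of the loss landscape, for example the loss along the straight line between the initial and the trained weights. Here is a minimal sketch under the assumptions that `build_model` (a hypothetical helper) returns a compiled Keras model with only a loss and no extra metrics, and that you saved `model.get_weights()` before and after training:

```python
import numpy as np

# Minimal sketch: loss along a 1D slice of weight space, i.e. the straight line
# between the initial weights (w_start) and the trained weights (w_end).
# `build_model` is a hypothetical function returning a compiled Keras model;
# x, y are your training data.
def loss_along_interpolation(build_model, w_start, w_end, x, y, steps=25):
    model = build_model()
    alphas = np.linspace(0.0, 1.0, steps)
    losses = []
    for a in alphas:
        # Linearly interpolate every weight tensor between start and end.
        model.set_weights([(1 - a) * ws + a * we for ws, we in zip(w_start, w_end)])
        # Assumes the model was compiled with a loss only, so evaluate() returns a scalar.
        losses.append(model.evaluate(x, y, verbose=0))
    return alphas, losses

# Plot alphas vs. losses with matplotlib to see a 1D "slice" of the error landscape.
```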
I am running shared gamma frailty models (i.e., Coxph survival analysis models with a random effect) and want to know if it is "acceptable" to log transform one of your continuous predictor variables. I found a website (http://www.medcalc.org/manual/cox_proportional_hazards.php) that said "The Cox proportional regression model assumes ... there should be a linear relationship between the endpoint and predictor variables. Predictor variables that have a highly skewed distribution may require logarithmic transformation to reduce the effect of extreme values. Logarithmic transformation of a variable var can be obtained by entering LOG(var) as predictor variable".
I would really appreciate a second opinion from someone with more statistical knowledge on this topic. In a nutshell: is it OK/commonplace/etc. to transform (specifically, log transform) predictor variables in a survival analysis model (e.g., a Coxph model)?
Thanks.
You can log transform any predictor in Cox regression. This is frequently necessary but has some drawbacks.
Why log transform? There are a number of good reasons: you decrease the extent and effect of outliers, the data becomes more normally distributed, etc.
When is it possible? I doubt there are circumstances in which you cannot do it; I find it hard to believe that it would compromise the precision of your estimates.
Why not always do it? Well, it becomes difficult to interpret the results for a predictor that has been log transformed. Say your predictor is blood pressure and, without a log transform, you obtain a hazard ratio of 1.05, meaning a 5% increase in the risk of the event for a 1-unit increase in blood pressure. If you log transform blood pressure, a hazard ratio of 1.05 (it would most likely not land on 1.05 again after the transform, but we'll stick with 1.05 for simplicity) means a 5% increase in risk for each one-log-unit increase in blood pressure. That is more difficult to grasp.
But if you are not interested in the particular variable you are thinking about log transforming (i.e., you just need to adjust for it as a covariate), go ahead and do it.
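If you end up doing this in Python, a minimal sketch with lifelines is below. Note the caveats: the column names and CSV file are assumptions, and lifelines' CoxPHFitter does not include the shared gamma frailty (random effect) term, so this only illustrates the log transform itself (for the frailty model you would typically use R's coxph with a frailty() term):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Minimal sketch: log-transforming a skewed continuous predictor before fitting
# a Cox model with lifelines. Column names and the CSV file are assumptions,
# and the frailty (random effect) term is omitted here.
df = pd.read_csv("survival_data.csv")
df["log_biomarker"] = np.log(df["biomarker"])  # log transform the skewed predictor

cph = CoxPHFitter()
cph.fit(df[["time", "event", "log_biomarker", "age"]],
        duration_col="time", event_col="event")
cph.print_summary()  # the hazard ratio is now per 1-unit increase in log(biomarker)
```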