What is the action_space for? - openai-gym

I'm making a custom environment in OpenAI Gym and really don't understand what action_space is for, or what I should put in it. To be precise, I don't know what action_space is; I have never used it in any code, and I couldn't find anything on the internet that answered my question properly.

The action_space attribute of a gym environment defines the characteristics of the environment's action space. With it, one can state whether the action space is continuous or discrete, define minimum and maximum values of the actions, etc.
For a continuous action space one can use the Box class.
import gym
import numpy as np
from gym import spaces

class MyEnv(gym.Env):
    def __init__(self):
        # 2-dimensional continuous action space:
        # [-1, 2] for the first dimension and [-2, 4] for the second
        self.action_space = spaces.Box(low=np.array([-1, -2]),
                                       high=np.array([2, 4]),
                                       dtype=np.float32)
For a discrete action space one can use the Discrete class.
import gym
from gym import spaces

class MyEnv(gym.Env):
    def __init__(self):
        # discrete action space with 2 possible actions: {0, 1}
        self.action_space = spaces.Discrete(2)
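Once the action space is defined you can, for example, sample a random valid action from it or check whether a given action lies inside it. A minimal sketch using the MyEnv class above:
env = MyEnv()
action = env.action_space.sample()        # draw a random valid action
assert env.action_space.contains(action)  # verify the action is inside the space
print(env.action_space)                   # e.g. Discrete(2)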
If you have any other requirements you can go through this folder in the OpenAI gym repo. You could also go through different environments given in the gym folder to get more examples of the usage of the action_space and observation_space.
Also, go through core.py to get to know what all methods/functions are necessary for an environment to be compatible with gym.
The main OpenAI Gym class. It encapsulates an environment with
arbitrary behind-the-scenes dynamics. An environment can be
partially or fully observed.
The main API methods that users of this class need to know are:
step
reset
render
close
seed
And set the following attributes:
action_space: The Space object corresponding to valid actions
observation_space: The Space object corresponding to valid observations
reward_range: A tuple corresponding to the min and max possible rewards
Note: a default reward range set to [-inf,+inf] already exists. Set it if you want a narrower range.
The methods are accessed publicly as "step", "reset", etc. The non-underscored versions are wrapper methods to which we may add functionality over time.
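Putting these pieces together, a minimal custom environment could look like the sketch below; the observation bounds, dynamics, reward, and termination logic are placeholder assumptions, not prescribed by gym:
import gym
import numpy as np
from gym import spaces

class MyEnv(gym.Env):
    def __init__(self):
        self.action_space = spaces.Discrete(2)
        # placeholder observation space: a single value in [-10, 10]
        self.observation_space = spaces.Box(low=-10.0, high=10.0,
                                            shape=(1,), dtype=np.float32)
        self.reward_range = (-1.0, 1.0)
        self.state = np.zeros(1, dtype=np.float32)

    def reset(self):
        self.state = np.zeros(1, dtype=np.float32)
        return self.state

    def step(self, action):
        # placeholder dynamics: action 1 increments the state, action 0 decrements it
        self.state += 1.0 if action == 1 else -1.0
        reward = 1.0 if abs(self.state[0]) < 5.0 else -1.0
        done = bool(abs(self.state[0]) >= 10.0)
        return self.state, reward, done, {}

    def render(self, mode="human"):
        print(self.state)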

Related

Is there a method in sklearn.ensemble.RandomForestClassifier to treat one hot encoded nominal data as one feature?

My goal is to use sklearn.preprocessing.OneHotEncoder with sklearn.ensemble.RandomForestClassifier. After watching a YouTube video on how Random Forest works, I tested it against my data set, and my model metrics jumped to a level that I could previously only achieve after tuning the model.
The video taught me that Random Forest treats each dummy variable as a separate feature instead of one feature, and the model metrics show just that. I realize that each dummy variable is useless on its own within a Decision Tree.
Is there a way for sklearn.ensemble.RandomForestClassifier to do this:
Randomly select features (numerical, ordinal, nominal)
If it selects one of the dummy variables, it has to select the entire set of dummy variables and count them as one feature.
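For reference, the setup being described is roughly the sketch below; the data, column names, and pipeline wiring are made-up assumptions for illustration:
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# made-up example: one numerical column and one nominal column
X = pd.DataFrame({"num_feature": [1.0, 2.5, 0.3, 4.2],
                  "color": ["red", "blue", "green", "red"]})
y = [0, 1, 0, 1]

# one-hot encode the nominal column; each resulting dummy column is then
# treated by the forest as an independent feature when sampling split candidates
pre = ColumnTransformer([("onehot", OneHotEncoder(), ["color"])], remainder="passthrough")
clf = Pipeline([("pre", pre), ("rf", RandomForestClassifier(random_state=0))])
clf.fit(X, y)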

why do object detection methods have an output value for every class

Most recent object detection methods rely on a convolutional neural network. They create a feature map by running input data through a feature extraction step. They then add more convolutional layers to output a set of values like so (this set is from YOLO, but other architectures like SSD differ slightly):
pobj: probability of being an object
c1, c2 ... cn: indicating which class the object belongs to
x, y, w, h: bounding box of the object
However, one particular box cannot be multiple objects. As in, wouldn't having a high value for, say, c1 mean that the values for all the others c2 ... cn would be low? So why use different values for c1, c2 ... cn? Couldn't they all be represented by a single value, say 0-1, where each object has a certain range within the 0-1, say 0-0.2 is c1, 0.2-0.4 is c2 and so on...
This would reduce the dimension of the output from NxNx(5+C) (5 for the probability and bounding box, +C one for each class) to NxNx(5+1) (5 same as before and 1 for the class)
Thank you
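For reference, the NxNx(5+C) output layout described in the question can be sketched as follows; the grid size and class count are made-up numbers:
import numpy as np

N, C = 7, 3                            # made-up grid size and number of classes
output = np.random.rand(N, N, 5 + C)   # per cell: [pobj, c1..cC, x, y, w, h]

cell = output[0, 0]
pobj = cell[0]                 # probability of being an object
class_scores = cell[1:1 + C]   # one score per class, not a single scalar
x, y, w, h = cell[1 + C:]      # bounding box of the object
print(pobj, class_scores, (x, y, w, h))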
Short answer: NO! That is almost certainly not an acceptable solution. It sounds like your core question is: why is a single value in the range [0,1] not a sufficient, compact output for object classification? As a clarification, this doesn't really have to do with single-shot detectors; the outputs from 2-stage detectors and almost all classification networks follow this same 1D embedding structure. As a secondary clarification, many 1-stage networks also don't output pobj in their original implementations (YOLO is the main one that does, but RetinaNet, and I believe SSD, do not).
An object's class is a categorical attribute. Assumed within a standard classification problem is that the set of possible classes is flat (i.e. no class is a subclass of any other), mutually exclusive (each example falls into only a single class), and unrelated (not quite the right term here but essentially no class is any more or less related to any other class).
This assumed attribute structure is well represented by an orthonormal encoding vector of the same length as the set of possible attributes. A vector [1,0,0,0] is no more similar to [0,1,0,0] than it is to [0,0,0,1] in this space.
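As a quick illustration of that claim, a short sketch showing that every pair of one-hot vectors is equidistant and orthogonal:
import numpy as np

# one-hot ("orthonormal") encodings for a 4-class problem
classes = np.eye(4)

# every pair of distinct class vectors has the same Euclidean distance (sqrt(2))
# and zero dot product, so no class is "closer" to any other
for i in range(4):
    for j in range(i + 1, 4):
        d = np.linalg.norm(classes[i] - classes[j])
        print(i, j, round(d, 3), classes[i] @ classes[j])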
(As an aside, a separate branch of ML problems called multilabel classification removes the mutual exclusivity constraint, so [0,1,1,0] and [0,1,1,1] would both be valid label predictions. In this space, class or label combinations COULD be construed as more or less related, since they share constituent labels or "basis vectors" in the orthonormal categorical attribute space. But enough digression.)
A single, continuous variable output for class destroys the assumption that all classes are unrelated. In fact, it assumes that the relation between any two classes is exact and quantifiable! What an assumption! Consider attempting to arrange the classes of, let's say, the ImageNet classification task along a single dimension. Bus and car should be close, no? Let's say 0.1 and 0.2, respectively, in our 1D embedding range of [0,1]. Zebra must be far away from them, maybe 0.8. But should it be close to zebra fish (0.82)? Is a striped shirt closer to a zebra or a bus? Is the moon more similar to a bicycle or a trumpet? And is a zebra really 5 times more similar to a zebra fish than a bus is to a car? The exercise is immediately, patently absurd. A 1D embedding space for object class is not sufficiently rich to capture the differences between object classes.
Why can't we just place object classes randomly in the continuous range [0,1]? In a theoretical sense nothing is stopping you, but the gradient of the network would become horrendously, unmanageably non-convex and conventional approaches to training the network would fail. Not to mention the network architecture would have to encode extremely non-linear activation functions to predict the extremely hard boundaries between neighboring classes in the 1D space, resulting in a very brittle and non-generalizable model.
From here, the nuanced reader might suggest that in fact, some classes ARE related to one another (i.e. the unrelated assumption of the standard classification problem is not really correct). Bus and car are certainly more related than bus and trumpet, no? Without devolving into a critique on the limited usefulness of strict ontological categorization of the world, I'll simply suggest that in many cases there is an information embedding that strikes a middle ground. A vast field of work has been devoted to finding embedding spaces that are compact (relative to the exhaustive enumeration of "everything is its own class of 1") but still meaningful. This is the work of principal component analysis and object appearance embedding in deep learning.
Depending on the particular problem, you may be able to take advantage of a more nuanced embedding space better suited towards the final task you hope to accomplish. But in general, canonical deep learning tasks such as classification / detection ignore this nuance in the hopes of designing solutions that are "pretty good" generalized over a large range of problem spaces.
For the object classification head, a cross-entropy loss function is usually used; it operates on the probability distribution to compute the difference between the ground truth (a one-hot encoded vector) and the predicted class scores.
On the other hand, you are proposing a different way of encoding the ground-truth class labels, which could be used with a custom loss function such as an L1/L2 loss. That looks theoretically plausible, but it will likely not be as good as cross-entropy in terms of model convergence/optimization.
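For concreteness, a small sketch contrasting the usual cross-entropy setup with an L1 loss on a scalar class encoding; the numbers are made up for illustration:
import numpy as np

# usual setup: one-hot ground truth vs. predicted class probabilities
one_hot = np.array([0.0, 1.0, 0.0, 0.0])
probs = np.array([0.10, 0.70, 0.15, 0.05])
cross_entropy = -np.sum(one_hot * np.log(probs))
print("cross-entropy:", round(cross_entropy, 4))

# proposed setup: classes squeezed into [0, 1] and regressed with an L1 loss;
# this implicitly makes a target of 0.3 "closer" to 0.5 than to 0.1,
# i.e. it imposes an ordering and distance between classes
scalar_target = 0.3
scalar_pred = 0.42
l1 = abs(scalar_pred - scalar_target)
print("L1:", round(l1, 2))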

How define the number of class in Detectron2 bounding box predict Pytorch?

Where should I define the number of classes?
ROI HEAD or RETINANET ?
Or both should have the same value ?
cfg.MODEL.RETINANET.NUM_CLASSES = int(len(Classe_list) - 1)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = int(len(Classe_list) - 1)
It depends on the network architecture you choose to use. If you use "MaskRCNN", then you should set cfg.MODEL.ROI_HEADS.NUM_CLASSES.
The deeper reason is that ROI_HEADS is the component used by MaskRCNN. If you use a different network, you may need to change different settings depending on its implementation.
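As a hedged sketch of what that looks like for a Mask R-CNN model-zoo config (the config file name and class list here are assumptions for illustration):
from detectron2 import model_zoo
from detectron2.config import get_cfg

Classe_list = ["__background__", "cat", "dog"]  # hypothetical class list

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
# Mask R-CNN uses ROI heads, so this is the field that matters here
cfg.MODEL.ROI_HEADS.NUM_CLASSES = int(len(Classe_list) - 1)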

Defining TF-Agents action in py_environment.PyEnvironment Class

So I'm trying to define the action_spec in the py_environment.PyEnvironment class for a DQN network using TF-Agents. Is the action limited to returning integer values? I have read through a few online tutorials, and in every case it was for a simple game, like chess or tic-tac-toe, where you only needed to define a single integer in a small space to make a move/action.
For my use case, the action needs to define sets of coordinates in two-dimensional space. To make it more complicated, I want the agent to not only choose the X, Y coordinates but also the number of [X, Y] sets.
I need action to return a list of lists. Something like this:
action = [[100, 200], [300, 350], [550, 876]]
With this in mind; I'm not entirely sure how to define action_spec.
I saw this thread here: tf_agents custom time_step_spec
It shows the use of a dictionary of ArraySpecs(). Is that what I need to do?
So according to this link: https://github.com/tensorflow/agents/issues/329
DQN for TF-Agents only supports a single action; that is, the dimensions of the action_spec cannot be larger than shape (1,). Keep this in mind before using TF-Agents.
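For reference, the kind of scalar action_spec that the TF-Agents DQN agent does accept looks roughly like the sketch below; the bounds and name are placeholder assumptions:
from tf_agents.specs import array_spec

# a single discrete action with 4 possible values (0..3); DQN cannot handle
# an action_spec with more than a single action like this
action_spec = array_spec.BoundedArraySpec(
    shape=(), dtype='int32', minimum=0, maximum=3, name='action')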

sklearn decision tree classifier: How to control max number of branches of each split

I am trying to code a two-class classification DT problem that I previously did in SAS EM, but now in sklearn. The target variable is a two-class categorical variable, but there are a few continuous independent variables. In SAS I could specify the "Maximum Number of Branches" for each split, so when it is set to 4, some nodes split into 2 branches and some into 4 (especially for continuous variables). I could not find an equivalent parameter in sklearn. I looked at "max_leaf_nodes", but that controls the total number of leaf nodes of the entire tree. I am sure some of you have probably faced the same situation and already found a solution. Please help/share; I will really appreciate it.
I don't think this option is available in sklearn. You will find this post very useful for your classification DT, as it lists all the options you have available.
I would recommend creating bins for your continuous variables; this way you force the branches to be the number of bins you have.
Example: for a continuous variable Col1 with values between 1-100, you can create 4 bins: 1-25, 26-50, 51-75, 76-100. Or you can create the bins based on the median.
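A minimal sketch of that binning idea with pandas; the column name, bin edges, and target are made-up to match the example above:
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

df = pd.DataFrame({"Col1": np.random.randint(1, 101, size=200)})
# bin the continuous variable into 4 fixed-width ranges
df["Col1_bin"] = pd.cut(df["Col1"], bins=[0, 25, 50, 75, 100], labels=False)

y = np.random.randint(0, 2, size=200)  # made-up two-class target
clf = DecisionTreeClassifier().fit(df[["Col1_bin"]], y)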
