Scapy bind_layers using a layer condition outside the binding scope - python-3.x

I'm new to Scapy so maybe this is written down somewhere but I can't find the answer.
I'm trying to create a custom packet dissector, but in order to bind specific layers to each other I need to condition the binding on a value in a deeper layer.
I have a minimum of three layers above the RTP layer, but the number of second- and third-type layers is determined by a value in the first layer.
Cust1 holds the number of Cust2 layers that will follow. Each Cust2 layer will have a corresponding Cust3 layer to match with it at the end of the chain of Cust2 layers. I've shown an example below where I've used 2a/2b/2c and 3a/3b/3c just to indicate that the numbers are the same layer type but chained together.
i.e.
If Cust1 holds a value of one(1) then:
UDP / RTP / Cust1 / Cust2a / Cust3a
if Cust1 holds a value of two(2):
UDP / RTP / Cust1 / Cust2a/Cust2b / Cust3a/Cust3b
if Cust1 holds a value of three(3):
UDP / RTP / Cust1 / Cust2a/Cust2b/Cust2c / Cust3a/Cust3b/Cust3c
etc...
So how do I reference Cust1 for bindings that are further along in the chain?
bind_layers(RTP, Cust1)
bind_layers(Cust1, Cust2a)
bind_layers(Cust2a, Cust2b, {conditional REF_to_Cust1.value})
bind_layers(Cust2b, Cust2c, {conditional REF_to_Cust1.value})
# etc...
Please tell me I don't have to create a custom layer for each scenario and use that to get my desired result.

This is not easily possible if you separate your packet into several layers, but it is easily achievable using some special fields. (You could actually use guess_payload_class, but it's more of a pain...)
Have a look at PacketListField or at the relevant documentation.
This is the general idea:
class Cust2(Packet):
    ...

class Cust3(Packet):
    ...

class Cust1(Packet):
    fields_desc = [
        ByteField("number_of_cust2", 0),
        ...,
        PacketListField("cust2s", [], Cust2, count_from=lambda pkt: pkt.number_of_cust2),
        PacketListField("cust3s", [], Cust3, count_from=lambda pkt: pkt.number_of_cust2),
    ]
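For completeness, here is a hedged, self-contained sketch along these lines; the concrete field layouts of Cust2/Cust3 (a single byte each) and the extract_padding overrides are illustrative assumptions, not part of the original question:

from scapy.all import Packet, ByteField, PacketListField, raw

class Cust2(Packet):
    fields_desc = [ByteField("val2", 0)]
    def extract_padding(self, s):
        # Hand the remaining bytes back so the next list element can be dissected from them.
        return b"", s

class Cust3(Packet):
    fields_desc = [ByteField("val3", 0)]
    def extract_padding(self, s):
        return b"", s

class Cust1(Packet):
    fields_desc = [
        ByteField("number_of_cust2", 0),
        PacketListField("cust2s", [], Cust2,
                        count_from=lambda pkt: pkt.number_of_cust2),
        PacketListField("cust3s", [], Cust3,
                        count_from=lambda pkt: pkt.number_of_cust2),
    ]

pkt = Cust1(number_of_cust2=2,
            cust2s=[Cust2(val2=1), Cust2(val2=2)],
            cust3s=[Cust3(val3=3), Cust3(val3=4)])
Cust1(raw(pkt)).show()  # re-dissects two Cust2 and two Cust3 entries from the count in Cust1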

Related

Coupling of Different Blocks in a UNET

I am starting to work with neural networks using Keras. I am trying to adapt the model (a UNet-like architecture) given by Sim, Oh, Kim, Jung in "Optimal Transport driven CycleGAN for Unsupervised Learning in Inverse Problems" (Fig. 10).
# Imports assumed by this snippet; InstanceNormalization is taken from tensorflow_addons here
# (keras_contrib provides an equivalent layer).
from tensorflow.keras.layers import (Input, Conv2D, Conv2DTranspose, MaxPool2D,
                                     Concatenate, LeakyReLU)
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import RandomNormal
from tensorflow.keras.optimizers import Adam
from tensorflow_addons.layers import InstanceNormalization

def def_generator(image_shape=(256,256,3)):
    init = RandomNormal(stddev=0.02)
    # Start of 1st Block
    in_image = Input(shape=image_shape)
    g1 = Conv2D(64,(3,3))(in_image)
    g1 = InstanceNormalization(axis=-1)(g1)
    g1 = LeakyReLU(alpha=0.2)(g1)
    g1 = Conv2D(64,(3,3))(g1)
    g1 = InstanceNormalization(axis=-1)(g1)
    g1 = LeakyReLU(alpha=0.2)(g1)
    # End of 1st Block
    # Start of 2nd Block
    g2 = MaxPool2D()(g1)
    g2 = Conv2D(128,(3,3))(g2)
    g2 = InstanceNormalization(axis=-1)(g2)
    g2 = LeakyReLU(alpha=0.2)(g2)
    g2 = Conv2D(128,(3,3))(g2)
    g2 = InstanceNormalization(axis=-1)(g2)
    g2 = LeakyReLU(alpha=0.2)(g2)
    # End of 2nd Block
    # Start of 3rd Block
    g3 = MaxPool2D()(g2)
    g3 = Conv2D(256,(3,3))(g3)
    g3 = InstanceNormalization(axis=-1)(g3)
    g3 = LeakyReLU(alpha=0.2)(g3)
    g3 = Conv2D(256,(3,3))(g3)
    g3 = InstanceNormalization(axis=-1)(g3)
    g3 = LeakyReLU(alpha=0.2)(g3)
    # End of 3rd Block
    # Start of 4th Block
    g4 = MaxPool2D()(g3)
    g4 = Conv2D(512,(3,3))(g4)
    g4 = InstanceNormalization(axis=-1)(g4)
    g4 = LeakyReLU(alpha=0.2)(g4)
    g4 = Conv2D(512,(3,3))(g4)
    g4 = InstanceNormalization(axis=-1)(g4)
    g4 = LeakyReLU(alpha=0.2)(g4)
    g4 = Conv2D(256,(3,3))(g4)
    g4 = InstanceNormalization(axis=-1)(g4)
    g4 = LeakyReLU(alpha=0.2)(g4)
    g4 = Conv2DTranspose(256,(2,2),strides=(4,4),output_padding=1)(g4)
    # End of 4th Block
    # Start of 5th Block
    g5input = Concatenate()([g4,g3])
    g5 = Conv2D(256,(3,3))(g5input)
    g5 = InstanceNormalization(axis=-1)(g5)
    g5 = LeakyReLU(alpha=0.2)(g5)
    g5 = Conv2D(256,(3,3))(g5)
    g5 = InstanceNormalization(axis=-1)(g5)
    g5 = LeakyReLU(alpha=0.2)(g5)
    g5 = Conv2DTranspose(128,(2,2),strides=(3,3), padding='same', output_padding=0)(g5)
    # End of 5th Block
    # Start of 6th Block
    g6input = Concatenate()([g5,g2])
    g6 = Conv2D(128,(2,2))(g6input)
    g6 = InstanceNormalization(axis=-1)(g6)
    g6 = LeakyReLU(alpha=0.2)(g6)
    g6 = Conv2D(128,(2,2))(g6)
    g6 = InstanceNormalization(axis=-1)(g6)
    g6 = LeakyReLU(alpha=0.2)(g6)
    g6 = Conv2DTranspose(64,(2,2),strides=(2,2), padding='valid', output_padding=1)(g6)
    # End of 6th Block
    # Start of 7th Block
    g7input = Concatenate()([g6,g1])
    g7 = Conv2D(64,(2,2))(g7input)
    g7 = InstanceNormalization(axis=-1)(g7)
    g7 = LeakyReLU(alpha=0.2)(g7)
    g7 = Conv2D(64,(2,2))(g7)
    g7 = InstanceNormalization(axis=-1)(g7)
    g7 = LeakyReLU(alpha=0.2)(g7)
    g7 = Conv2DTranspose(1,(1,1))(g7)
    model = Model(in_image, g5)
    model.compile(loss='mse', optimizer=Adam(lr=2e-4,beta_1=0.5), loss_weights=[0.5], metrics=['accuracy'])
    return model

g = def_generator((120,120,1))
print(g.summary())
I always run into the problem that the dimensions of the layers that should be concatenated are not compatible.
I understand that this issue results from the MaxPooling+Conv2D steps before.
I am now wondering if there is a trick/strategy to avoid/reduce this issue?
Any help will be appreciated.
Best wishes
Michael
The problem is simple: you are concatenating blocks whose layers have different sizes. This happens because you are running the network on images whose sides are NOT a power of 2. When you max-pool an image with an odd side you lose a pixel (243x243 -> 121x121), and when you double it back with the transposed convolution you get a different size (121x121 -> 242x242), so the concatenation fails because 242 is different from 243; the feature maps are of different sizes (at least this is what I think, you should have shared the error).
This means that whenever an image reaches a max-pooling layer, its sides need to be divisible by 2.
So, the solution:
having 4 blocks means the input sides need to be divisible by at least 16, otherwise it will not work
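As a hedged illustration of that constraint (and assuming the Conv2D layers use padding='same', so that only the pooling/upsampling stages change the spatial size), one simple option is to zero-pad the input up to the next multiple of 16 before feeding it to the network:

import numpy as np

def pad_to_multiple(img, multiple=16):
    # Zero-pad an (H, W, C) image so H and W are divisible by `multiple`.
    h, w = img.shape[:2]
    pad_h = (-h) % multiple
    pad_w = (-w) % multiple
    return np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")

img = np.ones((120, 120, 1))
print(pad_to_multiple(img).shape)  # (128, 128, 1): survives 4 pooling/upsampling stages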

Create a connection coming from a `spike_source_cell` in Arbor?

The docs specify that in order to create a connection, a source and a dest are required (of type cell_global_label and cell_local_label respectively). For connections between cable cells this works fine, because you can place labels on their decor and then use those labels in the cell_global_label, but how do I connect from a spike_source_cell?
Here's what I do for cable cells:
arbor.connection(
    arbor.cell_global_label(gid, "soma_spike_detector"),
    arbor.cell_local_label("soma_synapse"),
    1,
    0.1
)
But since I can't create labels on a spike_source_cell it throws the following error:
RuntimeError: Model building error on cell 26: connection endpoint label "soma_spike_detector": label does not exist.
The docs on spike source cells mention:
has one built-in source, which needs to be given a label to be used when forming connections from the cell;
So you can use the label you gave when constructing the spike_source_cell as the label when constructing the cell_global_label:
# When constructing the source cell
arbor.spike_source_cell(
    "spike_source",
    arbor.explicit_schedule([5, 10, 12])
)

# In the recipe's `connections_on`:
arbor.connection(
    arbor.cell_global_label(gid, "spike_source"),
    arbor.cell_local_label("soma_synapse"),
    1,
    0.1
)
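For context, here is a minimal recipe sketch along those lines; the gid layout, the cable cell and its "soma_synapse" label are assumptions, and probes/global properties are omitted for brevity:

import arbor

class TwoCellRecipe(arbor.recipe):
    # gid 0: spike source driving the synapse "soma_synapse" on cable cell gid 1.
    def __init__(self, cable_cell):
        arbor.recipe.__init__(self)
        self.cable_cell = cable_cell

    def num_cells(self):
        return 2

    def cell_kind(self, gid):
        return arbor.cell_kind.spike_source if gid == 0 else arbor.cell_kind.cable

    def cell_description(self, gid):
        if gid == 0:
            return arbor.spike_source_cell("spike_source",
                                           arbor.explicit_schedule([5, 10, 12]))
        return self.cable_cell

    def connections_on(self, gid):
        if gid == 1:
            return [arbor.connection(
                arbor.cell_global_label(0, "spike_source"),
                arbor.cell_local_label("soma_synapse"),
                1, 0.1)]
        return []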

Confusion About Implementing LeafSystem With Vector Output Port Correctly

I'm a student teaching myself Drake, specifically pydrake, with Dr. Russ Tedrake's excellent Underactuated Robotics course. I am trying to write a combined energy-shaping and LQR controller for keeping a cartpole system balanced upright. I based the diagram on the cartpole example found in Chapter 3 of Underactuated Robotics [http://underactuated.mit.edu/acrobot.html], and the SwingUpAndBalanceController on Chapter 2 [http://underactuated.mit.edu/pend.html].
I have found that, because I use the cart_pole.sdf model, I have to create an abstract input port to receive the FramePoseVector from cart_pole.get_output_port(0). From there I know that I have to create a control-signal output of type BasicVector to feed into a Saturation block before feeding into the cartpole's actuation port.
The problem I'm encountering right now is that I'm not sure how to get the system's current state data in DeclareVectorOutputPort's callback function. I was under the assumption that I would use the LeafContext parameter of the callback function, OutputControlSignal, to obtain the BasicVector continuous state vector. However, the resulting vector, x_bar, is always NaN. Out of desperation (and to test that the rest of my program worked) I set x_bar from the controller's initialization cart_pole_context and found that the simulation runs with a control signal of 0.0 (as expected). I can also set the output to 100 and the cartpole simulation just flies off into endless space (as expected).
TL;DR: What is the proper way to obtain the continuous state vector in a custom controller extending LeafSystem with a DeclareVectorOutputPort?
Thank you for any help! I really appreciate it :) I've been teaching myself so it's been a little arduous haha.
# Combined Energy Shaping (SwingUp) and LQR (Balance) Controller
# with a simple state machine
class SwingUpAndBalanceController(LeafSystem):

    def __init__(self, cart_pole, cart_pole_context, input_i, ouput_i, Q, R, x_star):
        LeafSystem.__init__(self)
        self.DeclareAbstractInputPort("state_input", AbstractValue.Make(FramePoseVector()))
        self.DeclareVectorOutputPort("control_signal", BasicVector(1),
                                     self.OutputControlSignal)
        (self.K, self.S) = BalancingLQRCtrlr(cart_pole, cart_pole_context,
                                             input_i, ouput_i, Q, R, x_star).get_LQR_matrices()
        (self.A, self.B, self.C, self.D) = BalancingLQRCtrlr(cart_pole, cart_pole_context,
                                                             input_i, ouput_i,
                                                             Q, R, x_star).get_lin_matrices()
        self.energy_shaping = EnergyShapingCtrlr(cart_pole, x_star)
        self.energy_shaping_context = self.energy_shaping.CreateDefaultContext()
        self.cart_pole_context = cart_pole_context

    def OutputControlSignal(self, context, output):
        #xbar = copy(self.cart_pole_context.get_continuous_state_vector())
        xbar = copy(context.get_continuous_state_vector())
        xbar_ = np.array([xbar[0], xbar[1], xbar[2], xbar[3]])
        xbar_[1] = wrap_to(xbar_[1], 0, 2.0*np.pi) - np.pi
        # If x'Sx <= 2, then use LQR ctrlr. Cost-to-go J_star = x^T * S * x
        threshold = np.array([2.0])
        if (xbar_.dot(self.S.dot(xbar_)) < 2.0):
            #output[:] = -self.K.dot(xbar_)  # u = -Kx
            output.set_value(-self.K.dot(xbar_))
        else:
            self.energy_shaping.get_input_port(0).FixValue(self.energy_shaping_context,
                                                           self.cart_pole_context.get_continuous_state_vector())
            output_val = self.energy_shaping.get_output_port(0).Eval(self.energy_shaping_context)
            output.set_value(output_val)
        print(output)
Here are two things that might help:
If you want to get the state of the cart-pole from MultibodyPlant, you probably want to be connecting to the continuous_state output port, which gives you a normal vector instead of the abstract-type FramePoseVector. In that case, your call to get_input_port().Eval(context) should work just fine.
If you do really want to read the FramePoseVector, then you have to evaluate the input port slightly differently. You can find an example of that here.
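As a rough, hedged sketch of the first option (the port names, the state size of 4, and the gain matrix K are assumptions for illustration, not the original code):

import numpy as np
from pydrake.systems.framework import BasicVector, LeafSystem

class StateFeedback(LeafSystem):
    def __init__(self, K):
        LeafSystem.__init__(self)
        self.K = K
        # Vector-valued input: the cart-pole state [x, theta, xdot, thetadot].
        self.state_port = self.DeclareVectorInputPort("cart_pole_state", BasicVector(4))
        self.DeclareVectorOutputPort("control_signal", BasicVector(1), self.CalcOutput)

    def CalcOutput(self, context, output):
        x = self.state_port.Eval(context)   # numpy array of the connected port's current value
        output.SetFromVector(-self.K @ x)   # u = -K x

# In the diagram builder, connect the plant's state output port to this input, e.g.:
# builder.Connect(plant.get_state_output_port(), controller.GetInputPort("cart_pole_state"))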

Gradients vanishing despite using Kaiming initialization

I was implementing a conv block in PyTorch with an activation function (PReLU). I used Kaiming initialization to initialize all my weights and set all the biases to zero. However, as I tested these blocks (by stacking 100 such conv and activation blocks on top of each other), I noticed that the output values I am getting are of the order of 10^(-10). Is this normal, considering I am stacking up to 100 layers? Adding a small bias to each layer fixes the problem. But in Kaiming initialization the biases are supposed to be zero.
Here is the conv block code
import numpy as np
import torch
import torch.nn as nn
from collections.abc import Iterable  # `collections.Iterable` was removed in Python 3.10

def convBlock(
    input_channels, output_channels, kernel_size=3, padding=None, activation="prelu"
):
    """
    Initializes a conv block using Kaiming Initialization
    """
    padding_par = 0
    if padding == "same":
        padding_par = same_padding(kernel_size)  # same_padding: helper defined elsewhere
    conv = nn.Conv2d(input_channels, output_channels, kernel_size, padding=padding_par)
    relu_negative_slope = 0.25
    act = None
    if activation == "prelu" or activation == "leaky_relu":
        nn.init.kaiming_normal_(conv.weight, a=relu_negative_slope, mode="fan_in")
        if activation == "prelu":
            act = nn.PReLU(init=relu_negative_slope)
        else:
            act = nn.LeakyReLU(negative_slope=relu_negative_slope)
    if activation == "relu":
        nn.init.kaiming_normal_(conv.weight, nonlinearity="relu")
        act = nn.ReLU()
    nn.init.constant_(conv.bias.data, 0)
    block = nn.Sequential(conv, act)
    return block

def flatten(lis):
    for item in lis:
        if isinstance(item, Iterable) and not isinstance(item, str):
            for x in flatten(item):
                yield x
        else:
            yield item

def Sequential(args):
    flattened_args = list(flatten(args))
    return nn.Sequential(*flattened_args)
This is the test Code
ls = []
for i in range(100):
    ls.append(convBlock(3, 3, 3, "same"))
model = Sequential(ls)
test = np.ones((1, 3, 5, 5))
model(torch.Tensor(test))
And the output I am getting is
tensor([[[[-1.7771e-10, -3.5088e-10, 5.9369e-09, 4.2668e-09, 9.8803e-10],
[ 1.8657e-09, -4.0271e-10, 3.1189e-09, 1.5117e-09, 6.6546e-09],
[ 2.4237e-09, -6.2249e-10, -5.7327e-10, 4.2867e-09, 6.0034e-09],
[-1.8757e-10, 5.5446e-09, 1.7641e-09, 5.7018e-09, 6.4347e-09],
[ 1.2352e-09, -3.4732e-10, 4.1553e-10, -1.2996e-09, 3.8971e-09]],
[[ 2.6607e-09, 1.7756e-09, -1.0923e-09, -1.4272e-09, -1.1840e-09],
[ 2.0668e-10, -1.8130e-09, -2.3864e-09, -1.7061e-09, -1.7147e-10],
[-6.7161e-10, -1.3440e-09, -6.3196e-10, -8.7677e-10, -1.4851e-09],
[ 3.1475e-09, -1.6574e-09, -3.4180e-09, -3.5224e-09, -2.6642e-09],
[-1.9703e-09, -3.2277e-09, -2.4733e-09, -2.3707e-09, -8.7598e-10]],
[[ 3.5573e-09, 7.8113e-09, 6.8232e-09, 1.2285e-09, -9.3973e-10],
[ 6.6368e-09, 8.2877e-09, 9.2108e-10, 9.7531e-10, 7.0011e-10],
[ 6.6954e-09, 9.1019e-09, 1.5128e-08, 3.3151e-09, 2.1899e-10],
[ 1.2152e-08, 7.7002e-09, 1.6406e-08, 1.4948e-08, -6.0882e-10],
[ 6.9930e-09, 7.3222e-09, -7.4308e-10, 5.2505e-09, 3.4365e-09]]]],
grad_fn=<PreluBackward>)
Amazing question (and welcome to StackOverflow)! Research paper for quick reference.
TLDR
Try wider networks (64 channels)
Add Batch Normalization after activation (or even before, shouldn't make much difference)
Add residual connections (shouldn't improve much over batch norm, last resort)
Please check these out in this order and leave a comment about which of them (if any) worked in your case (as I'm also curious).
Things you do differently
Your neural network is very deep, yet very narrow (81 parameters per layer only!)
Because of the above, one cannot reliably create those weights from a normal distribution, as the sample is just too small.
Try wider networks, 64 channels or more
You are trying a much deeper network than they did
Section: Comparison Experiments
We conducted comparisons on a deep but efficient model with 14 weight
layers (actually 22 was also tested in comparison with Xavier)
That was due to the date of release of this paper (2015) and hardware limitations "back in the days" (let's say)
Is this normal?
The approach itself is quite unusual for layers of this depth, at least currently;
each conv block is usually followed by an activation like ReLU and by Batch Normalization (which normalizes the signal and helps with exploding/vanishing signals)
networks of this depth (even half of what you've got) usually also use residual connections (though these are not directly linked to vanishing/small signals; they are more connected to the degradation problem of very deep networks, like 1000 layers)
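A minimal sketch of the first two suggestions (the 64-channel width and the block count are just illustrative choices): widen the blocks, add BatchNorm after the activation, and re-run the same depth-100 test:

import torch
import torch.nn as nn

def conv_bn_block(in_ch, out_ch, kernel_size=3):
    conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
    nn.init.kaiming_normal_(conv.weight, a=0.25, mode="fan_in")
    nn.init.constant_(conv.bias, 0)
    # PReLU followed by BatchNorm, as suggested above.
    return nn.Sequential(conv, nn.PReLU(init=0.25), nn.BatchNorm2d(out_ch))

blocks = [conv_bn_block(3, 64)] + [conv_bn_block(64, 64) for _ in range(98)] + [conv_bn_block(64, 3)]
model = nn.Sequential(*blocks)
out = model(torch.ones(1, 3, 5, 5))
print(out.abs().mean())  # magnitudes stay around O(1) instead of ~1e-10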

How to estimate camera pose according to a projective transformation matrix of two consecutive frames?

I'm working on the KITTI visual odometry dataset. I use a projective transformation to register two consecutive 2D frames (see a projective transformation example here).
I want to know how this 3x3 projective transformation matrix is related to the ground-truth poses provided by the KITTI dataset.
This dataset gives the ground truth poses (trajectory) for the sequences, which is described below:
Folder 'poses':
The folder 'poses' contains the ground truth poses (trajectory) for the
first 11 sequences. This information can be used for training/tuning your
method. Each file xx.txt contains a N x 12 table, where N is the number of
frames of this sequence. Row i represents the i'th pose of the left camera
coordinate system (i.e., z pointing forwards) via a 3x4 transformation
matrix. The matrices are stored in row aligned order (the first entries
correspond to the first row), and take a point in the i'th coordinate
system and project it into the first (=0th) coordinate system. Hence, the
translational part (3x1 vector of column 4) corresponds to the pose of the
left camera coordinate system in the i'th frame with respect to the first
(=0th) frame. Your submission results must be provided using the same data
format.
Some samples of the given ground-truth poses:
1.000000e+00 9.043680e-12 2.326809e-11 5.551115e-17 9.043683e-12 1.000000e+00 2.392370e-10 3.330669e-16 2.326810e-11 2.392370e-10 9.999999e-01 -4.440892e-16
9.999978e-01 5.272628e-04 -2.066935e-03 -4.690294e-02 -5.296506e-04 9.999992e-01 -1.154865e-03 -2.839928e-02 2.066324e-03 1.155958e-03 9.999971e-01 8.586941e-01
9.999910e-01 1.048972e-03 -4.131348e-03 -9.374345e-02 -1.058514e-03 9.999968e-01 -2.308104e-03 -5.676064e-02 4.128913e-03 2.312456e-03 9.999887e-01 1.716275e+00
9.999796e-01 1.566466e-03 -6.198571e-03 -1.406429e-01 -1.587952e-03 9.999927e-01 -3.462706e-03 -8.515762e-02 6.193102e-03 3.472479e-03 9.999747e-01 2.574964e+00
9.999637e-01 2.078471e-03 -8.263498e-03 -1.874858e-01 -2.116664e-03 9.999871e-01 -4.615826e-03 -1.135202e-01 8.253797e-03 4.633149e-03 9.999551e-01 3.432648e+00
9.999433e-01 2.586172e-03 -1.033094e-02 -2.343818e-01 -2.645881e-03 9.999798e-01 -5.770163e-03 -1.419150e-01 1.031581e-02 5.797170e-03 9.999299e-01 4.291335e+00
9.999184e-01 3.088363e-03 -1.239599e-02 -2.812195e-01 -3.174350e-03 9.999710e-01 -6.922975e-03 -1.702743e-01 1.237425e-02 6.961759e-03 9.998991e-01 5.148987e+00
9.998890e-01 3.586305e-03 -1.446384e-02 -3.281178e-01 -3.703403e-03 9.999605e-01 -8.077186e-03 -1.986703e-01 1.443430e-02 8.129853e-03 9.998627e-01 6.007777e+00
9.998551e-01 4.078705e-03 -1.652913e-02 -3.749547e-01 -4.231669e-03 9.999484e-01 -9.229794e-03 -2.270290e-01 1.649063e-02 9.298401e-03 9.998207e-01 6.865477e+00
9.998167e-01 4.566671e-03 -1.859652e-02 -4.218367e-01 -4.760342e-03 9.999347e-01 -1.038342e-02 -2.554151e-01 1.854788e-02 1.047004e-02 9.997731e-01 7.724036e+00
9.997738e-01 5.049868e-03 -2.066463e-02 -4.687329e-01 -5.289072e-03 9.999194e-01 -1.153730e-02 -2.838096e-01 2.060470e-02 1.164399e-02 9.997198e-01 8.582886e+00
9.997264e-01 5.527315e-03 -2.272922e-02 -5.155474e-01 -5.816781e-03 9.999025e-01 -1.268908e-02 -3.121547e-01 2.265686e-02 1.281782e-02 9.996611e-01 9.440275e+00
9.996745e-01 6.000540e-03 -2.479692e-02 -5.624310e-01 -6.345160e-03 9.998840e-01 -1.384246e-02 -3.405416e-01 2.471098e-02 1.399530e-02 9.995966e-01 1.029896e+01
9.996182e-01 6.468772e-03 -2.686440e-02 -6.093087e-01 -6.873365e-03 9.998639e-01 -1.499561e-02 -3.689250e-01 2.676374e-02 1.517453e-02 9.995266e-01 1.115757e+01
9.995562e-01 7.058450e-03 -2.894213e-02 -6.562052e-01 -7.530449e-03 9.998399e-01 -1.623192e-02 -3.973964e-01 2.882292e-02 1.644266e-02 9.994492e-01 1.201541e+01
9.995095e-01 5.595311e-03 -3.081450e-02 -7.018788e-01 -6.093682e-03 9.998517e-01 -1.610315e-02 -4.239119e-01 3.071983e-02 1.628303e-02 9.993953e-01 1.286965e+01
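For clarity, each of the 12-number rows above maps to the 3x4 pose matrix described in the devkit text like this (a small sketch, using the second row as an example):

import numpy as np

line = "9.999978e-01 5.272628e-04 -2.066935e-03 -4.690294e-02 -5.296506e-04 9.999992e-01 -1.154865e-03 -2.839928e-02 2.066324e-03 1.155958e-03 9.999971e-01 8.586941e-01"
T = np.array(line.split(), dtype=float).reshape(3, 4)  # row-major 3x4 [R | t]
R, t = T[:, :3], T[:, 3]  # rotation and translation of frame i w.r.t. frame 0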
The common name for your "projective transformation" is homography. In a calibrated setup (i.e. if you know your camera's field of view or, equivalently, its focal length) a homography can be decomposed into a 3D rotation and a translation, the latter only up to scale. The decomposition algorithm additionally produces the normal of the 3D plane inducing the homography. The algorithm has up to 4 solutions, of which only one is feasible when you apply additional constraints, such as that the matched image points triangulate in front of the camera and that the general direction of the translation matches a known prior.
More information about the method is in a well-known paper by Malis and Vargas. There is an implementation in OpenCV, under the name decomposeHomographyMat.
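A hedged sketch of that decomposition with OpenCV (the intrinsics shown are the commonly quoted values for KITTI odometry sequences 00-02, and pts_prev/pts_cur stand for matched keypoints between the two frames):

import numpy as np
import cv2

K = np.array([[718.856,   0.0,   607.1928],
              [  0.0,   718.856, 185.2157],
              [  0.0,     0.0,     1.0   ]])

def relative_pose_candidates(pts_prev, pts_cur):
    # Homography between the two frames, then its calibrated decomposition.
    H, _ = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC)
    num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    # Up to 4 candidate (R, t, n) solutions; keep the one whose plane normal faces
    # the camera and whose points triangulate in front of both views.
    # The translation is recovered only up to scale.
    return Rs, ts, normals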
