What are static and dynamic payload types in SDP? What is the difference between them?

I am trying to understand the difference between static and dynamic payloads in the SDP protocol but haven't arrived at any conclusion. Can someone please elaborate on what the difference is and why dynamic payload types are required?

You can use this draft as the basis for your search.
The RTP payload type (PT) field is defined in RFC 3550 and can take the values 0 to 127. The binding of an RTP payload format to a payload type number can be static or dynamic.
Section 6 of RFC 3551 enumerates the static payload types.
Quoting section 3 of the above draft:
As described in section 3 of RFC3551 [RFC3551], the payload type number space is relatively small and cannot accommodate assignments for all existing and future encodings. The registry for RTP Payload types (PT) for standard audio and video encodings [...] is closed. New payload formats (e.g., H.264, VP8) MUST use dynamic payload type number assignment.
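For illustration, here is a minimal hypothetical SDP fragment showing both kinds of binding: PCMU has the static payload type 0, so receivers already know what PT 0 means, while H.264 has no static number and must be bound to a payload type from the dynamic range (96-127) with an a=rtpmap attribute:

m=audio 49170 RTP/AVP 0
a=rtpmap:0 PCMU/8000
m=video 51372 RTP/AVP 96
a=rtpmap:96 H264/90000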
I hope this helps.

Related

Is the initialization vector part of the output ESP packet in IPsec?

Encryption Algorithm: AES-CBC
Authentication Algorithm: HMAC-SHA1-96
Is it necessary that, in ESP, the initialization vector is always part of the output packet?
If not, which algorithms are those?
If yes, why do some of the diagrams on Google or in books show an ESP packet with no IV field?
Citing RFC-2406, section 2.3 (emphasis mine):
Payload Data is a variable-length field containing data described by
the Next Header field. The Payload Data field is mandatory and is an
integral number of bytes in length. If the algorithm used to encrypt
the payload requires cryptographic synchronization data, e.g., an
Initialization Vector (IV), then this data MAY be carried explicitly
in the Payload field. Any encryption algorithm that requires such
explicit, per-packet synchronization data MUST indicate the length,
any structure for such data, and the location of this data as part of
an RFC specifying how the algorithm is used with ESP. If such
synchronization data is implicit, the algorithm for deriving the data
MUST be part of the RFC.
So the presence of the initialization vector is conditional and depends on the particular encryption algorithm.
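As a concrete illustration for the AES-CBC case asked about above: RFC 3602 specifies that the 16-byte IV is carried explicitly at the start of the Payload Data field. A rough Python sketch of just that encryption step (a simplification, not a complete ESP implementation, and assuming the third-party cryptography package) might look like this:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def esp_encrypt_aes_cbc(plaintext: bytes, key: bytes) -> bytes:
    iv = os.urandom(16)  # fresh per-packet IV, transmitted in the clear
    pad_len = (-(len(plaintext) + 2)) % 16  # pad so payload + trailer fills whole AES blocks
    trailer = bytes(range(1, pad_len + 1)) + bytes([pad_len, 4])  # Pad Length + Next Header (4 chosen arbitrarily here)
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(plaintext + trailer) + enc.finalize()
    return iv + ciphertext  # the IV is explicitly part of the ESP Payload Data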

Confusion surrounding MongoDB/GridFS BinData

I am using Python Mongoengine for inserting image files into GridFS, with the following method:
product = Product(name='New Product', price=20.0, ...)
with open(<IMAGE_FILE>, 'rb') as product_photo:
    product.image.put(product_photo, content_type='image/jpeg')
product.save()
When I view this data with NoSQLBooster (or anything else) the data is represented like so:
{
    "_id" : ObjectId("5d71263eae9a187374359927"),
    "files_id" : ObjectId("5d71263eae9a187374359926"),
    "n" : 0,
    "data" : BinData(0,"/9j/4AAQSkZJRgABAQEASABIAAD/4V6T... more 261096 bytes - image/jpeg")
},
Knowing that the second part of the tuple in the BinData of the "data" field is base64 encoded, I'm confused about the point at which the raw bytes returned by open(<IMAGE_FILE>, 'rb') become encoded as base64.
Furthermore, since base64 encoding is 33%-37% larger, this is bad when it comes to transferring that data. How can I choose the encoding, or at least stop it from using base64?
I have found this SO question which mentions a HexData data type.
I also found others mentioning subtypes as well, which led me to find this about BSON data types.
Binary
Canonical:
{ "$binary":
    {
        "base64": "<payload>",
        "subtype": "<t>"
    }
}
Relaxed: <Same as Canonical>
Where the values are as follows:
"<payload>" - Base64 encoded (with padding as "=") payload string.
"<t>" - A one- or two-character hex string that corresponds to a BSON binary subtype. See the extended bson documentation http://bsonspec.org/spec.html for subtypes available.
Which clearly tells us the payload will be base64!
So can I change this, or does it have to be that way?
at which point the raw bytes ... become encoded as base64
Direct Answer
Only at the point where you choose to display them on your console or through some other "display" format. The native BSON format that crosses the wire won't have this issue.
If you choose not to display the contents in your terminal or debugger, they will never be encoded to base64 or any other format.
Point of Correction
which led me to find this about BSON data types.
Which clearly tells us the payload will be base64!
The linked page is referring to MongoDB Extended JSON, not the wire BSON format.
It is true that Extended JSON encodes the binary as base64, but that is not true of BSON itself.
As explained below, the only time your driver will pass the data through the Extended JSON conversion is at the moment you ask it to display the contents to you via a print or debug statement.
Details
The BSON spec (MongoDB's internal serialization format) stores binaries in their native byte format.
The relevant portion of the spec:
binary ::= int32 subtype (byte*)
indicates that a binary object is
- the length of the byte*,
- followed by a 1-byte subtype,
- followed by the raw bytes
In the case of the bytes "Hello\x00World", which include a null byte right in the middle,
the "wire format" would be
[11] [0x00] [Hello\x00World]
Notice that Stack Overflow, like virtually every driver or display terminal, struggles with the embedded null byte; it would not be evident that the null byte is actually included in the bytes to be displayed unless the system made that explicit.
Meaning the length (packed as a 32-bit integer), followed by the 1-byte subtype, followed by the literal bytes, is what will actually cross the wire.
As you pointed out, most languages would have immense trouble rendering this onscreen to a user.
Extended JSON is the specification that defines the most appropriate way for drivers to render non-displayable data.
Object IDs aren't just bytes, they're objects that can represent timestamps.
Timestamps aren't just numbers, they can represent timezones and be converted to display against the user timezone.
Binaries aren't always text, may have problematic bytes in there, and the easiest way to not bork up your terminal/gui/debugger is to simply encode them away in some ASCII format like base64.
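You can see this for yourself with a quick, hypothetical sketch using PyMongo's bson package (the document here is made up): encode a document containing a Binary and compare the raw BSON bytes with the Extended JSON rendering.

import bson
from bson.binary import Binary
from bson.json_util import dumps

doc = {"data": Binary(b"Hello\x00World", 0)}

wire = bson.encode(doc)  # BSON wire format: int32 length, subtype byte, then the raw bytes
print(wire)              # contains ...\x0b\x00\x00\x00\x00Hello\x00World... -- no base64 anywhere
print(dumps(doc))        # Extended JSON for display: the same bytes appear as base64 ("SGVsbG8AV29ybGQ=")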
Keep in Mind
bson.Binary and GridFS are not really supposed to be displayed/printed/written in their wire format. The wire format exists for the transfer layer.
To ease debugging and print statements, most drivers implement an easily "displayable" format that yanks the native BSON format through the Extended JSON spec.
If you simply choose not to display/encode as Extended JSON/debug/print, the binary bytes will never actually be base64 encoded by the driver.

How standard is HMAC(SHA-1)

HMAC(SHA-1) is an algorithm for hash computation that also accepts a key as an input value. The algorithm follows certain rules and guarantees a certain level of security and resilience against attacks.
Moving to its implementation: is HMAC(SHA-1) standardized to the point that all "official" and correct implementations of it produce exactly the same result for a given input message and key? Or does the algorithm admit different implementations that might produce different results?
Any given implementation of HMAC-SHA1 will produce the same set of bytes given the same set of bytes as the input message and key.
That said, there can be a lot of variation on how various interfaces work and how they accept those bytes. For example, one library may output the hash as a hex string, and another may output it as an array of bytes. Or one would take a string as input with a UTF-8 encoding, whereas another would take it in as a UTF-16 encoding. You would need to be careful that the same bytes are hitting the algorithm in different libraries to ensure you get the same result.
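For example, in Python (standard library only), the hex-string and raw-bytes views of the same digest can be compared against the well-known "Jefe" test vector from RFC 2202:

import hmac
import hashlib

# RFC 2202 test case 2 for HMAC-SHA1: key "Jefe", data "what do ya want for nothing?"
mac = hmac.new(b"Jefe", b"what do ya want for nothing?", hashlib.sha1)

print(mac.hexdigest())  # 'effcdf6ae5eb2fa2d27416d5f184df9c259a7c79' -- hex string view
print(mac.digest())     # the same 20 bytes, as a raw byte string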
Also, while HMAC-SHA1 is probably okay from a security perspective, you should probably be using HMAC-SHA256 instead.
It's very standard. It's a standard, even!
RFC2104 specifies the actual HMAC algorithm and block sizes.
RFC2202 contains test cases for both HMAC-MD5 and HMAC-SHA1.
For further study, RFC4868 gives more guidance on HMAC for the SHA2 family, with an emphasis on IPSec.

Type-Length-Value vs. defined/structured Length-Value

There's no doubt that a length-value representation of data is useful, but what advantages are there to type-length-value over it?
Of course, using LV requires the representation to be predefined or structured, but that's hardly ever a problem. Actually, I can't think of a good enough case where the representation wouldn't be defined well enough that TLV would be required.
In my case, this is about data interchange/protocols. In every situation, the representation must be known to both parties to be processed, which eliminates the need for the type to be explicitly inserted in the data. Any thoughts on when the type would be useful or necessary?
Edit
I should mention that a generic parser/processor would certainly benefit from the type information, but that's not my case.
The only decent reason I could come up with is for a generic processor of the data, mainly for debugging or direct user presentation. Having the type embedded within the data would allow the processor to handle the data correctly without having to predefine all possible structures.
The point below is mentioned in the Wikipedia article on TLV:
New message elements which are received at an older node can be safely skipped and the rest of the message can be parsed. This is similar to the way that unknown XML tags can be safely skipped;
Example:
Imagine a message to make a telephone call. In a first version of a system this might use two message elements, a "command" and a "phoneNumberToCall":
command_c/4/makeCall_c/phoneNumberToCall_c/8/"722-4246"
Here command_c, makeCall_c and phoneNumberToCall_c are integer constants and 4 and 8 are the lengths of the "value" fields, respectively.
Later (in version 2) a new field containing the calling number could be added:
command_c/4/makeCall_c/callingNumber_c/8/"715-9719"/phoneNumberToCall_c/8/"722-4246"
A version 1 system which received a message from a version 2 system would first read the command_c element and then read an element of type callingNumber_c. The version 1 system does not understand callingNumber_c, so the length field is read (i.e. the first 8) and the system skips forward 8 bytes to read phoneNumberToCall_c, which it understands, and message parsing carries on.
Without the type field, the version 1 parser would not know to skip callingNumber_c; instead it would call the wrong number and perhaps throw an error on the rest of the message. So the type field allows for forward compatibility in a way that omitting it does not.
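To make the skipping behaviour concrete, here is a minimal hypothetical sketch in Python. The encoding (a 1-byte type followed by a 1-byte length, and a made-up makeCall_c value) is invented purely for illustration and is not taken from any particular standard.

def parse_tlv(message: bytes, known_types: set) -> dict:
    # walk the message: every element is <type byte> <length byte> <length bytes of value>
    fields, i = {}, 0
    while i < len(message):
        t, length = message[i], message[i + 1]
        if t in known_types:
            fields[t] = message[i + 2:i + 2 + length]
        # unknown type: the length byte still tells us how far to skip
        i += 2 + length
    return fields

COMMAND_C, PHONE_TO_CALL_C, CALLING_NUMBER_C = 1, 2, 3
MAKE_CALL_C = 7  # arbitrary constant for illustration

# a "version 2" message that includes the newer callingNumber_c element
v2_message = (bytes([COMMAND_C, 1, MAKE_CALL_C]) +
              bytes([CALLING_NUMBER_C, 8]) + b"715-9719" +
              bytes([PHONE_TO_CALL_C, 8]) + b"722-4246")

# a "version 1" parser that has never heard of CALLING_NUMBER_C still
# recovers the command and the number to call
print(parse_tlv(v2_message, {COMMAND_C, PHONE_TO_CALL_C}))
# {1: b'\x07', 2: b'722-4246'}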

Protocol Definition Language

What protocol definition do you recommend?
I evaluated Google's protocol buffers, but it does not allow me to control the placement of fields in the packet being built. I assume the same is true for Thrift. My requirements are:
- specify the location of fields in the packet
- allow for bit fields
- conditionals: a flag (bit field) = true means that data can appear at a later location in the packet
- ability to define a packet structure by referring to another packet definition
Thank you.
("Flavor" on SourceForge, used for defining MPEG-4 might be a candidate, but I am looking for something that seems to have more of a community and preferably works in a .NET environment.)
Have a look at ASN.1: http://es.wikipedia.org/wiki/ASN.1
FooProtocol DEFINITIONS ::= BEGIN
    FooQuestion ::= SEQUENCE {
        trackingNumber INTEGER,
        question       IA5String
    }
    FooAnswer ::= SEQUENCE {
        questionNumber INTEGER,
        answer         BOOLEAN
    }
END
It seems to cover your main requirements:
- bit-level detail
- ordered content
- type references
- not sure about conditionals
It is widely used, and you can find implementations in Java and Python.
I'd be interested in the reasons for your requirements. Why do you need to control the position of the fields? Why are bitfields important? Conditionals?
It sounds like you have a (more or less) fixed wire format for which you need to write a parser, and in that case none of the existing popular protocol/serialization formats (Protobufs, Thrift, JSON, Yaml, etc.) will work for you.
A somewhat unorthodox approach is to use Erlang or Haskell, both of which have good support for parsing binary protocols.
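Whichever language you end up with, hand-parsing a fixed layout is not much code. As a point of comparison, here is a hypothetical sketch using Python's standard struct module; the field layout (version, flags, type, id, optional extra field) is invented purely to illustrate fixed positions, a bit flag, and a conditional field:

import struct

def parse_packet(buf: bytes) -> dict:
    # fixed positions: 1-byte version, 1-byte flags, 2-byte big-endian type, 8-byte id
    version, flags, msg_type, msg_id = struct.unpack_from(">BBHQ", buf, 0)
    packet = {"version": version, "type": msg_type, "id": msg_id}
    # bit field: bit 0 of flags indicates that an extra 4-byte field follows at offset 12
    if flags & 0x01:
        (extra,) = struct.unpack_from(">I", buf, 12)
        packet["extra"] = extra
    return packet

# example: version 1, flags 0x01 (extra present), type 7, id 42, extra 0xDEADBEEF
print(parse_packet(struct.pack(">BBHQI", 1, 0x01, 7, 42, 0xDEADBEEF)))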
How about C# itself?
eg
class MySimplePDLData {
    // format: name (or blank if padding), bit length, value (or blank if data),
    // name of presence flag field (or blank if no presence flag), C# type
    // one packet type per string, fields separated by pipes (|)
    string[] pdl = {
        // MY-SIMPLE-PDL-START
        ",8,0xf8,|version,8,,Int32|type,8,,Int32|id1,64,,Int64",
        ...
        // MY-SIMPLE-PDL-END
    };
}
If the data is already in memory, you don't need to do I/O on a file format. From here you can either dynamically interpret packets or generate the necessary C# source code for packet recognition/pack/unpack, again using C# itself.
