I have the following message:
message Message {
  int64 id = 1;
  google.protobuf.FloatValue weight = 2;
  google.protobuf.FloatValue override_weight = 3;
}
and I wish to change the type of weight and override_weight (optional fields) to google.protobuf.DoubleValue, so what I did was the following:
message Message {
  int64 id = 1;
  oneof weight_oneof {
    google.protobuf.FloatValue weight = 2 [deprecated=true];
    google.protobuf.DoubleValue double_weight = 4;
  }
  oneof override_weight_oneof {
    google.protobuf.FloatValue override_weight = 3 [deprecated=true];
    google.protobuf.DoubleValue double_override_weight = 5;
  }
}
My question is: let's assume I have old messages that were serialized using the code generated from the previous protobuf definition; would I be able to parse them as the new message?
The documentation is very vague about this:
"Move optional fields into or out of a oneof: You may lose some of your information (some fields will be cleared) after the message is serialized and parsed. However, you can safely move a single field into a new oneof and may be able to move multiple fields if it is known that only one is ever set."
Has anyone tried this before? What is the best practice for this situation?
As far as I know, fields in a oneof are just serialized using their tag numbers. The serialized data does not indicate whether a field is part of a oneof; that is all handled by the serializer and deserializer. So as long as the tag numbers do not conflict, it can be assumed that this works in both directions: old messages parsed with the new definition and new messages parsed with the old definition.
You could test this using an online protobuf deserializer.
Verification:
The code does indeed produce the same byte strings. Below you will find the message definitions and Python code I used. The Python code outputs a byte string you can copy into Marc Gravell's online decoder.
syntax = "proto3";
message MessageA {
int64 id = 1;
float weight = 2;
float override_weight = 3;
}
message MessageB {
int64 id = 1;
oneof weight_oneof {
float weight = 2 [deprecated=true];
double double_weight = 4;
}
oneof override_weight_oneof {
float override_weight = 3 [deprecated=true];
double double_override_weight = 5;
}
}
import Example_pb2

# Set some data in the original message
msgA = Example_pb2.MessageA()
msgA.id = 1234
msgA.weight = 3.21
msgA.override_weight = 5.43

# Output the serialized bytes in a pretty format
out = 'msgA = '
for x in msgA.SerializeToString():
    out += "{:02x} ".format(x)
print(out)

# Next set the original fields in the new message
msgB = Example_pb2.MessageB()
msgB.id = 1234
msgB.weight = 3.21
msgB.override_weight = 5.43

# Output the serialized bytes in a pretty format
out = 'msgB 1 = '
for x in msgB.SerializeToString():
    out += "{:02x} ".format(x)
print(out)

# And finally set the new fields in msgB
msgB.double_weight = 3.21
msgB.double_override_weight = 5.43

# Output the serialized bytes in a pretty format
out = 'msgB 2 = '
for x in msgB.SerializeToString():
    out += "{:02x} ".format(x)
print(out)
The output of the python script was:
msgA = 08 d2 09 15 a4 70 4d 40 1d 8f c2 ad 40
msgB 1 = 08 d2 09 15 a4 70 4d 40 1d 8f c2 ad 40
msgB 2 = 08 d2 09 21 ae 47 e1 7a 14 ae 09 40 29 b8 1e 85 eb 51 b8 15 40
As you can see, message A and message B yield the same byte string when you set the original fields. Only when you set the new fields do you get a different byte string.
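As an extra check, old MessageA bytes can be parsed directly into the new MessageB. Here is a minimal sketch, reusing the same Example_pb2 module as above:

# Parse bytes produced by the old definition with the new definition
old_bytes = msgA.SerializeToString()
parsed = Example_pb2.MessageB()
parsed.ParseFromString(old_bytes)
print(parsed.WhichOneof('weight_oneof'))  # 'weight'
print(parsed.weight)                      # ~3.21
print(parsed.override_weight)             # ~5.43

Since the tag numbers are unchanged, the old float fields simply land inside the new oneofs.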
Related
I have the following code snippet in Perl for automating an application script using Win32::OLE:
use Win32::OLE;
use Win32::OLE::Variant;
my $app = new Win32::OLE 'Some.Application';
my $InfoPacket = "78 00 C4 10 95 B4
00 02 31 7F 80 FF";
my @Bytes = split(/[ \n][ \n]*/, $InfoPacket);
my @HexBytes;
foreach(@Bytes)
{
    push @HexBytes, eval "0x$_";
}
my $Packet = pack("C12", @HexBytes);
my $VarPacket = Variant(VT_UI1, $Packet);
my $InfoObj = $app -> ProcessPacket($VarPacket);
print $InfoObj -> Text();
I have converted the entire code to Python 3, except for the [exact] equivalents of the pack() and Variant() functions.
from win32com.client import Dispatch
from struct import pack
app = Dispatch("Some.Application")
InfoPacket = "78 00 C4 10 95 B4 \
00 02 31 7F 80 FF"
Bytes = InfoPacket.split()
HexBytes = [int(b, 16) for b in Bytes]
Packet = pack('B'*12, *HexBytes)  # This, however, is not giving the exact same output as Perl's...
VarPacket = ...  # Need to know the Python equivalent of the above Variant() function...
InfoObj = app.ProcessPacket(VarPacket)
print (InfoObj.Text())
Please suggest the Python equivalents of the pack() and Variant() functions used in the Perl script, so that the final variable VarPacket can be used by Python's Dispatch object to properly generate the InfoObj object.
Thanks !!!
I am not sure about the Python equivalent of the Perl Variant, but for the first question about packing the unsigned char array, the following works for me:
from struct import pack
def gen_list():
    info_packet = "78 00 C4 10 95 B4 00 02 31 7F 80 FF"
    count = 0
    for hex_str in info_packet.split():
        yield int(hex_str, 16)
        count += 1
    for j in range(count, 120):
        yield int(0)

packet = pack("120B", *list(gen_list()))
Edit
From the test file testPyComTest.py it looks like you can generate the variant like this:
import win32com.client
import pythoncom
variant = win32com.client.VARIANT(pythoncom.VT_ARRAY | pythoncom.VT_UI1, packet)
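Putting the two pieces together might look roughly like this. This is only a sketch; "Some.Application" and ProcessPacket are the placeholder names from the question, not a real API:

from struct import pack
from win32com.client import Dispatch, VARIANT
import pythoncom

app = Dispatch("Some.Application")  # placeholder ProgID from the question
hex_bytes = [int(b, 16) for b in "78 00 C4 10 95 B4 00 02 31 7F 80 FF".split()]
packet = pack("12B", *hex_bytes)    # 12 bytes, matching the Perl pack("C12", ...)
variant = VARIANT(pythoncom.VT_ARRAY | pythoncom.VT_UI1, packet)
info_obj = app.ProcessPacket(variant)  # placeholder method from the question
print(info_obj.Text())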
I am trying to convert a length message to ASCII.
My length message looks like this:
var ll = "0170";
In Node.js, is there some kind of function which converts it into ASCII?
Please help?
Here's a simple function (ES6) which converts a string into ASCII character codes using charCodeAt():
const toAscii = (string) => string.split('').map(char=>char.charCodeAt(0)).join(" ")
console.log(toAscii("Hello, World"))
Output:
-> 72 101 108 108 111 44 32 87 111 114 108 100
You could create a prototype function as well. There are many solutions :)
You can't have an ASCII code for a whole string.
An ASCII code is an integer value for a single character, not a string. So for your string "0170" you will get 4 ASCII codes.
You can display these ASCII codes like this:
var str = "0170";
for (var i = 0, len = str.length; i < len; i++) {
console.log(str[i].charCodeAt());
}
Output: 48 49 55 48
Use the charCodeAt() function to convert each character to its ASCII code:
var ll = "0170";
function ascii (a) { return a.charCodeAt(); }
console.log(ascii(ll[0]), ascii(ll[1]), ascii(ll[2]), ascii(ll[3]))
result:
48 49 55 48
I am a bit out of my comfort zone here, so I'm not even sure I'm approaching the problem appropriately. Anyhow, here goes:
I have a problem where I shall hash some info with SHA-1, and that hash will work as the info's id.
When a client wants to signal which info is currently being used, it sends a percent-encoded SHA-1 string.
So one example is, my server hashes some info and gets a hex representation like so:
44 c1 b1 0d 6a de ce 01 09 fd 27 bc 81 7f 0e 90 e3 b7 93 08
and the client sends me
D%c1%b1%0dj%de%ce%01%09%fd%27%bc%81%7f%0e%90%e3%b7%93%08
Removing the % we get
D c1 b1 0dj de ce 01 09 fd 27 bc 81 7f 0e 90 e3 b7 93 08
which matches my hash except for the leading D and the j after the 0d; but replacing those with their ASCII hex values, we have identical hashes.
So, as I have read and understood URL encoding, the standard would allow a client to send the D as either D or %44? So different clients would be able to send different representations of the same hash, and I would not be able to just compare them for equality?
I would prefer to be able to compare the URL-encoded strings as they are when they are sent, but one way to do it would be to decode them, removing all the '%' and getting the ASCII hex value for whatever mismatch I get, much like the D and the j in my example above.
This all seems to be a very annoying way to do things; am I missing something? Please tell me I am :)
I am doing this in node.js but I suppose the solution would be language/platform agnostic.
I made this crude solution for now:
var unreserved = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_.~';

function hexToPercent(hex){
    var index = 0,
        end = hex.length,
        delimiter = '%',
        step = 2,
        result = '',
        tmp = '';
    if(end % step !== 0){
        console.log('\'' + hex + '\' must be divisible by ' + step + '.');
        return result;
    }
    while(index < end){
        tmp = hex.slice(index, index + step);
        if(unreserved.indexOf(String.fromCharCode('0x' + tmp)) !== -1){
            result = result + String.fromCharCode('0x' + tmp);
        }
        else{
            result = result + delimiter + tmp;
        }
        index = index + step;
    }
    return result;
}
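For comparison, the decode-and-compare route described above takes only a few lines. Here is a minimal Python 3 sketch (the approach itself is language-agnostic), using the hash and the client string from the example:

from urllib.parse import unquote_to_bytes

# Raw 20-byte digest from the example above, written as a hex literal
digest = bytes.fromhex("44c1b10d6adece0109fd27bc817f0e90e3b79308")

# Percent-encoded form sent by the client
encoded = "D%c1%b1%0dj%de%ce%01%09%fd%27%bc%81%7f%0e%90%e3%b7%93%08"

# Decoding to raw bytes gives a canonical form that can be compared directly
print(unquote_to_bytes(encoded) == digest)  # True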
I have a char[32] that consists of characters, some of which may be non-printable; however, they are all in the valid range of 0 to 255.
Let's say, it contains these numbers:
{ 9A 2E 5C 66 26 3D 5A 88 76 26 F1 09 B5 32 DE 10 91 3E 68 2F EA 7B C9 52 66 23 9D 77 16 BB C9 42 }
I'd like to print it out or store it in a std::string as "9A2E5C66263D5A887626F109B532DE10913E682FEA7BC95266239D7716BBC942". However, if I print it out using printf or sprintf, it yields the character for each number and instead prints "ö.\f&=Zàv&Ò µ2fië>h/Í{…Rf#ùwª…B", which is the correct character representation, since ö = 9a, . = 2e, ...
How do I reliably get the string representation of the hex numbers? I.e. I'd expect a char[64] which contains "9A2E5C66263D5A887626F109B532DE10913E682FEA7BC95266239D7716BBC942" instead.
Thanks!
void bytesToHex(char* dest, const char* src, int size) {
    for (int i = 0; i < size; i++) {
        // Cast to unsigned char so bytes >= 0x80 don't sign-extend
        sprintf(&dest[i * 2], "%02X", (unsigned char)src[i]);
    }
}
You'd have to allocate your own memory here.
It would be used like this:
char myBuffer[32];
char result[65];
bytesToHex(result, myBuffer, 32);
result[64] = 0;
// print it
printf("%s\n", result);
// or store it in a std::string
std::string str(result);
I tried:
char szStringRep[65];
for ( int i = 0; i < 32; i++ ) {
    sprintf( &szStringRep[i*2], "%x", szHash[i] );
}
It works as intended, but if it encounters a value below 0x10 it prints only one digit, so the string gets null-terminated early (using "%02x" instead zero-pads every byte to two digits).
You can also encode your bytes in another scheme such as Base64, which is widely used with encryption algorithms.
I am having trouble understanding the basic concepts of ASN.1.
If a type is an OID, does the corresponding number actually get encoded in the binary data?
For instance in this definition:
id-ad-ocsp OBJECT IDENTIFIER ::= { id-ad 1 }
Does the corresponding 1.3.6.1.5.5.7.48.1 get encoded in the binary exactly like this?
I am asking this because I am trying to understand a specific value I see in a DER file (a certificate), which is 04020500, and I am not sure how to interpret it.
Yes, the OID is encoded in the binary data. The OID 1.3.6.1.5.5.7.48.1 you mention becomes 2b 06 01 05 05 07 30 01 (the first two numbers are encoded in a single byte, and each remaining number is also encoded in a single byte because they are all smaller than 128).
A nice description of OID encoding is found here.
But the best way to analyze your ASN.1 data is to paste it into an online decoder, e.g. http://lapo.it/asn1js/.
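As a quick sanity check of the bytes above, here is a tiny Python sketch of the rule: the first two arcs are packed as 40*X + Y, and every remaining arc in this OID is below 128, so each fits in one byte.

# First two arcs packed into one byte; the rest are all < 128 here
oid = [1, 3, 6, 1, 5, 5, 7, 48, 1]
encoded = [40 * oid[0] + oid[1]] + oid[2:]
print(" ".join("{:02x}".format(b) for b in encoded))  # 2b 06 01 05 05 07 30 01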
If all your digits are less than or equal to 127 then you are very lucky, because each can be represented with a single octet. The tricky part is when you have larger numbers, which are common, such as 1.2.840.113549.1.1.5 (sha1WithRSAEncryption). These examples focus on decoding, but encoding is just the opposite.
1. First two 'digits' are represented with a single byte
You can decode by reading the first byte into an integer
var firstByteNumber = 42;
var firstDigit = Math.floor(firstByteNumber / 40);
var secondDigit = firstByteNumber % 40;
Produces the values
1.2
2. Subsequent bytes are represented using Variable Length Quantity, also called base 128.
VLQ has two forms,
Short Form - If the octet starts with 0, then it is simply represented using the remaining 7 bits.
Long Form - If the octet starts with a 1 (most significant bit), take its remaining 7 bits and append the low 7 bits of each subsequent octet, up to and including the first octet whose most significant bit is 0 (that octet marks the last one).
The value 840 would be represented with the following two bytes,
10000110
01001000
Combine to 00001101001000 and read as int.
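A minimal Python sketch of that long-form decoding (the helper name decode_vlq is just for illustration):

def decode_vlq(octets):
    # Accumulate 7 bits per octet; an octet whose MSB is 0 ends the value.
    value = 0
    for b in octets:
        value = (value << 7) | (b & 0x7f)
        if not (b & 0x80):
            break
    return value

print(decode_vlq([0b10000110, 0b01001000]))  # 840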
Great resource for BER encoding, http://luca.ntop.org/Teaching/Appunti/asn1.html
The first octet has value 40 * value1 + value2. (This is unambiguous,
since value1 is limited to values 0, 1, and 2; value2 is limited to
the range 0 to 39 when value1 is 0 or 1; and, according to X.208, n is
always at least 2.)
The following octets, if any, encode value3, ...,
valuen. Each value is encoded base 128, most significant digit first,
with as few digits as possible, and the most significant bit of each
octet except the last in the value's encoding set to "1." Example: The
first octet of the BER encoding of RSA Data Security, Inc.'s object
identifier is 40 * 1 + 2 = 42 = 2a16. The encoding of 840 = 6 * 128 +
4816 is 86 48 and the encoding of 113549 = 6 * 1282 + 7716 * 128 + d16
is 86 f7 0d. This leads to the following BER encoding:
06 06 2a 86 48 86 f7 0d
Finally, here is an OID decoder I just wrote in Perl.
sub getOid {
    my $bytes = shift;
    # first 2 nodes are 'special'
    use integer;
    my $firstByte = shift @$bytes;
    my $number = unpack "C", $firstByte;
    my $nodeFirst = $number / 40;
    my $nodeSecond = $number % 40;
    my @oidDigits = ($nodeFirst, $nodeSecond);
    while (@$bytes) {
        my $num = convertFromVLQ($bytes);
        push @oidDigits, $num;
    }
    return join '.', @oidDigits;
}

sub convertFromVLQ {
    my $bytes = shift;
    my $firstByte = shift @$bytes;
    my $bitString = unpack "B*", $firstByte;
    my $firstBit = substr $bitString, 0, 1;
    my $remainingBits = substr $bitString, 1, 7;
    my $remainingByte = pack "B*", '0' . $remainingBits;
    my $remainingInt = unpack "C", $remainingByte;
    if ($firstBit eq '0') {
        return $remainingInt;
    }
    else {
        my $bitBuilder = $remainingBits;
        my $nextFirstBit = "1";
        while ($nextFirstBit eq "1") {
            my $nextByte = shift @$bytes;
            my $nextBits = unpack "B*", $nextByte;
            $nextFirstBit = substr $nextBits, 0, 1;
            my $nextSevenBits = substr $nextBits, 1, 7;
            $bitBuilder .= $nextSevenBits;
        }
        my $MAX_BITS = 32;
        my $missingBits = $MAX_BITS - (length $bitBuilder);
        my $padding = '0' x $missingBits;
        $bitBuilder = $padding . $bitBuilder;
        my $finalByte = pack "B*", $bitBuilder;
        my $finalNumber = unpack "N", $finalByte;
        return $finalNumber;
    }
}
OID encoding for dummies :) :
each OID component is encoded to one or more bytes (octets)
OID encoding is just a concatenation of these OID component encodings
first two components are encoded in a special way (see below)
if the OID component's binary value fits in 7 bits (i.e. it is smaller than 128), the encoding is just a single octet holding the component value (note: the most significant, leftmost, bit will always be 0)
otherwise, if it needs 8 or more bits, the value is "spread" across multiple octets - split the binary representation into 7-bit chunks (from the right), left-pad the first one with zeroes if needed, and form octets from these septets by setting the most significant (left) bit to 1, except in the last chunk, which keeps a 0 there
first two components (X.Y) are encoded as if they were a single component with the value 40*X + Y
This is a rewording of ITU-T recommendation X.690, chapter 8.19
This is a simplistic Python 3 implementation of the above; it converts the string form of an object identifier into its ASN.1 DER or BER encoding.
def encode_variable_length_quantity(v: int) -> list:
    # Break it up into groups of 7 bits starting from the lowest significant bit
    # For all groups of 7 bits other than the lowest one, set the MSB to 1
    m = 0x00
    output = []
    while v >= 0x80:
        output.insert(0, (v & 0x7f) | m)
        v = v >> 7
        m = 0x80
    output.insert(0, v | m)
    return output

def encode_oid_string(oid_str: str) -> tuple:
    a = [int(x) for x in oid_str.split('.')]
    oid = [a[0]*40 + a[1]]  # First two items are coded as a1*40 + a2
    # The rest is a variable-length quantity
    for n in a[2:]:
        oid.extend(encode_variable_length_quantity(n))
    oid.insert(0, len(oid))  # Add a Length
    oid.insert(0, 0x06)      # Add a Type (0x06 for Object Identifier)
    return tuple(oid)

if __name__ == '__main__':
    oid = encode_oid_string("1.2.840.10045.3.1.7")
    print(oid)
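As a quick cross-check, encoding the RSA Data Security prefix 1.2.840.113549 with the routine above reproduces the BER bytes quoted in the earlier answer:

oid = encode_oid_string("1.2.840.113549")
print(" ".join("{:02x}".format(b) for b in oid))  # 06 06 2a 86 48 86 f7 0d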