awk: not clear about a snippet in an example from the GNU awk site - Linux

I'm going through this GNU awk example. The program needs the ord() and chr() library functions.
And that's where I get a little confused: I don't understand what this code snippet does:
BEGIN { _ord_init() }

function _ord_init(    low, high, i, t)
{
    low = sprintf("%c", 7)  # BEL is ascii 7
    if (low == "\a") {      # regular ascii
        low = 0
        high = 127
    } else if (sprintf("%c", 128 + 7) == "\a") {
        # ascii, mark parity
        low = 128
        high = 255
    } else {                # ebcdic(!)
        low = 0
        high = 255
    }
    for (i = low; i <= high; i++) {
        t = sprintf("%c", i)
        _ord_[t] = i
    }
}
If you remove it (BEGIN { _ord_init() }) and run the split.awk example, it works, but in a strange way: a file called "xa?" appears and the file "xab" is missing.
This is the input I have:
1 2 3 hello
1 3 4 world
2 4 5 india
4 2 1 china
I run the split program as such:
awk -f split.awk -2 input.txt
And the content of split.awk is as such: it's the code from the two previous URLs put into one file, "split.awk".
So again, what does the function _ord_init do exactly?

It defines the character set to use at runtime.
It probes how the runtime answers two specific requests (sprintf("%c", 7) and sprintf("%c", 128 + 7) == "\a") so that chr() and ord() operate in whatever character set is actually in use.
Three main character-set families are handled (others are not treated here):
- ASCII (the most widely used at the time awk was developed)
- mark-parity ASCII (ASCII but with the 8th bit always set to 1 instead of 0, historically used as a parity bit in transmission to ensure there is always an odd number of 1 bits)
- EBCDIC (typically mainframes)
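To see the idea in a language-neutral way, here is a minimal Python sketch of what _ord_init builds: a character-to-code lookup table filled once at startup, so that ord() is just a table lookup afterwards. This is only an illustration, not the gawk code itself; Python's built-in chr() stands in for awk's sprintf("%c", i), and plain ASCII is assumed instead of the runtime probing.

_ord_ = {}

def _ord_init(low=0, high=127):
    # gawk probes the runtime to pick the range (ASCII, mark-parity
    # ASCII, or EBCDIC); here we simply assume plain ASCII (0-127).
    for i in range(low, high + 1):
        _ord_[chr(i)] = i

def ord_(s):
    # Only the first character is of interest, like the gawk version.
    return _ord_[s[0]]

_ord_init()
print(ord_("A"))  # 65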

Related

Convert binary (integer and fraction) from VHDL to decimal, negative value in C code

I have 14-bit data fed from an FPGA written in VHDL. The Nios II processor reads the 14-bit data from the FPGA and does some processing tasks; the Nios II system is programmed in C.
The 14-bit data can be positive, zero, or negative. In the Altera compiler I can only define data as 8, 16, or 32 bits, so I define it as 16-bit data.
First, I need to check whether the data is negative; if it is, I need to set the two padded MSBs to '1' so the system treats it as a negative value instead of a positive one.
Second, I need to compute the real value of this binary representation as a decimal value with BOTH integer and fraction parts.
I learned from this link (Correct algorithm to convert binary floating point "1101.11" into decimal (13.75)?) that I can convert a binary number (consisting of both integer and fraction parts) to a decimal value.
Specifically, I am able to use the code quoted from that link, reproduced below:
#include <stdio.h>
#include <math.h>

double convert(const char binary[]){
    int bi, i;
    int len = 0;
    int dot = -1;
    double result = 0;
    for(bi = 0; binary[bi] != '\0'; bi++){
        if(binary[bi] == '.'){
            dot = bi;
        }
        len++;
    }
    if(dot == -1)
        dot = len;
    for(i = dot; i >= 0; i--){
        if (binary[i] == '1'){
            result += (double) pow(2, (dot-i-1));
        }
    }
    for(i = dot; binary[i] != '\0'; i++){
        if (binary[i] == '1'){
            result += 1.0/(double) pow(2.0, (double)(i-dot));
        }
    }
    return result;
}

int main()
{
    char bin[] = "1101.11";
    char bin1[] = "1101";
    char bin2[] = "1101.";
    char bin3[] = ".11";
    printf("%s -> %f\n", bin, convert(bin));
    printf("%s -> %f\n", bin1, convert(bin1));
    printf("%s -> %f\n", bin2, convert(bin2));
    printf("%s -> %f\n", bin3, convert(bin3));
    return 0;
}
I am wondering if this code can be used to check for a negative value. I tried it with a binary string of 11111101.11 and it gives the output 253.75...
I have two questions:
What modifications do I need to make in order to read a negative value?
I know that I can do a bit shift (as below) to check whether the MSB is 1; if it is, I know it is a negative value...
if (14bit_data & 0x2000) //if true, it is negative value
The issue is, since it involves a fraction part (not only an integer), I'm a bit confused about whether the method still works...
If the binary number is originally not in string format, is there any way I can convert it to a string? The binary number is fed from an FPGA block written in VHDL: say, 14 bits, with the MSB as the sign bit, the following 6 bits the magnitude of the integer part, and the last 6 bits the magnitude of the fractional part. I need the decimal value in C code on the Altera Nios II processor.
OK, I'm focusing on the fact that you want to reuse the algorithm you mention at the beginning of your question, and I assume that the binary representation you have for your signed number is two's complement; but I'm not really sure, according to your comments, that your input has the same form as the one used by that algorithm.
First, pad the 2 MSBs to get a 16-bit representation:
16bit_data = (14_bit_data & 0x2000) ? (14_bit_data | 0xC000) : 14_bit_data;
If the value is positive it remains unchanged, and if negative this gives the correct two's complement representation on 16 bits.
For the fractional part everything is the same as in the algorithm you mentioned in your question.
For the integer part everything is the same except the treatment of the MSB.
For an unsigned number the MSB (i.e. bit[15]) represents pow(2, 15-6) (6 is the width of the fractional part), whereas for a signed number in two's complement representation it represents -pow(2, 15-6), meaning the algorithm becomes
/* integer part operation */
while(p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    dec = dec + rem * pow(2, t) * (9 != t ? 1 : -1);
    ++t;
}
or, said differently, if you don't want the * operator:
/* integer part operation */
while(p >= 1)
{
    rem = (int)fmod(p, 10);
    p = (int)(p / 10);
    if( 9 != t)
    {
        dec = dec + rem * pow(2, t);
    }
    else
    {
        dec = dec - rem * pow(2, t);
    }
    ++t;
}
For the second algorithm that you mention, considering your format, if dot == 10 and i == 0 we are at the MSB (10 integer bits followed by the dot), so the code becomes
for(i = dot - 1; i >= 0; i--)
{
    if (binary[i] == '1')
    {
        if(10 != dot || i)
        {
            result += (double) pow(2, (dot-i-1));
        }
        else
        {
            // result -= (double) pow(2,(dot-i-1));
            // Due to your number format i == 0 and dot == 10, so
            result -= 512;
        }
    }
}
WARNING: in brice's algorithm the input is a character string like "11011.101", whereas according to your description you have an integer input, so I'm not sure that this algorithm is suited to your case.
I think this should work:
float convert14BitsToFloat(int16_t in)
{
    /* Sign-extend in, since it is 14 bits */
    if (in & 0x2000) in |= 0xC000;
    /* Convert to float with 6 fractional bits (64 = 2^6) */
    return (float)in / 64.0f;
}
To convert any number to a string, I would use sprintf. Be aware that it may significantly increase the size of your application. If you don't need float formatting and want to keep the application small, you should write your own conversion function.
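If it helps to sanity-check values on a host machine first, here is a minimal Python sketch of the same sign-extend-and-scale idea, assuming the layout used above (a 14-bit two's complement word with 6 fractional bits); the sample inputs are made up for illustration:

def convert_14bit(raw: int) -> float:
    # Interpret a raw 14-bit two's complement word with 6 fractional
    # bits, mirroring convert14BitsToFloat() above.
    raw &= 0x3FFF        # keep only the 14 data bits
    if raw & 0x2000:     # sign bit set -> negative value
        raw -= 0x4000    # arithmetic equivalent of sign extension
    return raw / 64.0    # scale down by the 2^6 fractional bits

print(convert_14bit(0x0DC0))  # 3520/64 = 55.0
print(convert_14bit(0x3FFF))  # all ones -> -1/64 = -0.015625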

How to aid Smaz in further compressing repeating characters?

Smaz is able to compress a short string (< 100 bytes) where other compression tools fail.
But there is a problem with it: it doesn't optimize repeating characters by itself.
For example the string "this is a short string" compresses fine:
\x9b8\xac>\xbb\xf2>\xc3F
It is 9 bytes long. But if you have a short string with repeating characters, you have a problem. For example, the string "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's" compresses into this:
\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe'\n
It is still smaller, but the many "\x04"'s look like a waste of space.
I've been thinking about counting character occurrences and replacing a run with a sort of "bookmark"; for example, "aaaaaaaaaa" with ten "a" occurrences becomes "a//10".
This is a test Python snippet I created off the top of my head; it is very, very ugly as of now:
a = set("this is a string with many aaaaaaaaaaaaaaaaaaaaaa's")
b = "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"

for i in a:
    if i + i in b:  # if the character occurs more than twice in a row
        o = b.count(i) - 2
        s = i * o
        c = b.replace(s, i + '//' + str(o))
        print(c)
It then becomes:
this is a string with many a//22's
Smaz compressed
\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\xc5\xc5\xff\x0222'\n
My worry is: what if the string contains a URL? Is it safe to escape it like "//"? But then you have regex strings. How can it be escaped in that case?
Finally, my clear and concise question is: How do you safely shorten repeating characters that Smaz doesn't compress by itself?
Here's an example of safe compression of repeating bytes. My result for your data example
"this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
is:
"this is a string with many \x16a's"
It's 31 bytes long, a 39% reduction. "\x16" is the one-byte count (hex 16, decimal 22) of repeating "a"'s.
What result do you get if you "Smaz" my result?
My result for your Smaz output example
"\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
is:
"\x9b8\xac>\xc3F\xf3\xe3\xad\x01\tG\x16\x04\xfe"
It's 15 bytes long, a 56% reduction. "\x16" is the one-byte count (hex 16, decimal 22) of repeating compressed "\x04"'s ("a"'s).
Here's my code in Go.
package main

import (
    "fmt"
)

func Compress(src []byte) (dst []byte) {
    for len(src) > 0 {
        c := src[0]
        n := 1
        for ; n < len(src) && src[n] == c; n++ {
        }
        src = src[n:]
        for n > 0 {
            m := (n-1)%31 + 1
            n -= m
            if m == 1 && !(1 <= c && c <= 31) {
                dst = append(dst, c)
            } else {
                dst = append(dst, byte(m), c)
            }
        }
    }
    return dst
}

func Decompress(src []byte) (dst []byte) {
    for i := 0; i < len(src); i++ {
        n, c := byte(1), src[i]
        if i+1 < len(src) && (1 <= c && c <= 31) {
            n, c = c, src[i+1]
            i++
        }
        for j := byte(0); j < n; j++ {
            dst = append(dst, c)
        }
    }
    return dst
}

func test(data string) {
    src := []byte(data)
    fmt.Printf("%d %q\n", len(src), src)
    compress := Compress(src)
    fmt.Printf("%d %q\n", len(compress), compress)
    decompress := Decompress(compress)
    fmt.Printf("%d %q\n", len(decompress), decompress)
    fmt.Println(string(Decompress(Compress(src))) == string(src))
}

func main() {
    data := "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
    test(data)
    fmt.Println()
    smaz := "\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
    test(smaz)
}
Output:
51 "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
31 "this is a string with many \x16a's"
51 "this is a string with many aaaaaaaaaaaaaaaaaaaaaa's"
true
34 "\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
15 "\x9b8\xac>\xc3F\xf3\xe3\xad\x01\tG\x16\x04\xfe"
34 "\x9b8\xac>\xc3F\xf3\xe3\xad\tG\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\x04\xfe"
true
How do you safely shorten repeating characters that Smaz doesn't compress by itself?
You can't without changing the Smaz algorithm and becoming incompatible with Smaz.
Smaz is purpose built to be effective on small strings because its dictionary is universal and pre-computed. Other compression schemes need to build up a dictionary that is data set dependent, and typically takes a few hundred bytes for you to see positive returns. Repeating sequences are rare in short strings.
For your proposed Smaz variant with a run-length encoding scheme to work, you would have to take up one of the 256 precious byte slots Smaz reserves for its codes. You could redefine one of the byte slots to mean "a length byte follows, followed by the byte code to be repeated", i.e., 3 bytes to communicate [REPEAT BYTE] [BYTE indicating 2 - 257 times] [BYTE CODE TO REPEAT]. You could reassign the Smaz byte code 253 from its present meaning of ".com" for the purpose of run-length encoding, but be aware that your compression will then be slightly less effective on general data containing ".com". A sketch of this escape scheme follows below.
Also be aware that searching for repeating sequences in a hypothetical Smaz variant with run-length encoding would necessarily take more CPU time during compression.
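As a rough illustration of that idea, here is a minimal Python sketch of such an escape layer applied to an already-Smaz-compressed byte string. The escape byte 253 follows the suggestion above; the count convention (a raw count of 1-255 rather than the 2-257 mapping) is my simplification for clarity:

ESCAPE = 253  # hypothetical escape byte, reassigned from Smaz's ".com" code

def rle_encode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        c = data[i]
        n = 1
        while i + n < len(data) and data[i + n] == c and n < 255:
            n += 1
        if n >= 4 or c == ESCAPE:
            # long runs (and any literal escape byte) become [ESCAPE][count][byte]
            out += bytes([ESCAPE, n, c])
        else:
            out += bytes([c]) * n
        i += n
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == ESCAPE:
            out += bytes([data[i + 2]]) * data[i + 1]
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out)

smaz_out = b"\x9b8\xac>\xc3F\xf3\xe3\xad\tG" + b"\x04" * 22 + b"\xfe"
packed = rle_encode(smaz_out)
assert rle_decode(packed) == smaz_out
print(len(smaz_out), "->", len(packed))  # 34 -> 15 bytes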

Find non-unique characters in a given string in O(n) time with constant space i.e with no extra auxiliary array

Given a string s containing only lower case alphabets (a - z), find (i.e print) the characters that are repeated.
For ex, if string s = "aabcacdddec"
Output: a c d
Three approaches to this problem exist:
[brute force] Check every char of the string (i.e. compare s[i] with every other char, and print if both are the same)
Time complexity: O(n^2)
Space complexity: O(1)
[sort, then compare adjacent elements] After sorting (in O(n log n) time), traverse the string and check whether s[i] and s[i + 1] are equal
Time complexity: O(n log n) + O(n) = O(n log n)
Space complexity: O(1)
[store the character count in an array] Create an array of size 26 (to keep track of a - z) and, for every s[i], increment the value stored at index s[i] - 'a' in the array. Finally traverse the array and print all characters (i.e. 'a' + i) whose count is greater than 1
Time complexity: O(n)
Space complexity: O(1), but we have a separate array for storing the frequency of each element.
Is there an O(n) approach that does NOT use any array/hash table/map (etc.)?
HINT: Use BIT Vectors
This is the element distinctness problem, so generally speaking - no, there is no way to solve it in O(n) without extra space.
However, if you regard the alphabet as being of constant size (a-z only is pretty constant), you can either create a bitset of these characters, in O(1) space [it is constant!], or check for each character in O(n) whether it repeats more than once; that is O(constant*n), which is still O(n).
Pseudo code for 1st solution:
bit seen[] = new bit[SIZE_OF_ALPHABET]    //constant!
bit printed[] = new bit[SIZE_OF_ALPHABET] //so is this!
for each i in seen.length: //init:
    seen[i] = 0
    printed[i] = 0
for each character c in string: //traverse the string:
    i = intValue(c)
    //already seen it and didn't print it? print it now!
    if seen[i] == 1 and printed[i] == 0:
        print c
        printed[i] = 1
    else:
        seen[i] = 1
Pseudo code for 2nd solution:
for each character c from a-z: //constant number of repeats is O(1)
count = 0
for each character x in the string: //O(n)
if x==c:
count += 1
if count > 1
print count
Implementation in Java
public static void findDuplicate(String str) {
    int checker = 0;
    char c = 'a';
    for (int i = 0; i < str.length(); ++i) {
        int val = str.charAt(i) - c;
        if ((checker & (1 << val)) > 0) {
            System.out.println((char)(c + val));
        } else {
            checker |= (1 << val);
        }
    }
}
It uses an int as storage and performs bitwise operations to find the duplicates.
It is O(n); an explanation follows.
Input: "abddc"
i==0
STEP #1: val = 97 - 97 (0); str.charAt(0) is 'a' and its char-to-int conversion is 97 (ASCII of 'a')
STEP #2: 1 << val equals (1 << 0) equals 1; finally 1 & 0 is 0
STEP #3: checker = 0 | (1 << 0) equals 0 | 1 equals 1; checker is 1
i==1
STEP #1: val = 98 - 97 (1); str.charAt(1) is 'b' and its char-to-int conversion is 98 (ASCII of 'b')
STEP #2: 1 << val equals (1 << 1) equals 2; finally 1 & 2 is 0
STEP #3: checker = 1 | (1 << 1) equals 1 | 2 equals 3; checker is 3
i==2
STEP #1: val = 100 - 97 (3); str.charAt(2) is 'd' and its char-to-int conversion is 100 (ASCII of 'd')
STEP #2: 1 << val equals (1 << 3) equals 8; finally 3 & 8 is 0
STEP #3: checker = 3 | (1 << 3) equals 3 | 8 equals 11; checker is 11
i==3
STEP #1: val = 100 - 97 (3); str.charAt(3) is 'd' and its char-to-int conversion is 100 (ASCII of 'd')
STEP #2: 1 << val equals (1 << 3) equals 8; finally 11 & 8 is 8
Now print 'd' since the value > 0
You can also use a bit vector; depending on the language, it can be space efficient. In Java I would prefer to use an int for this fixed (just 26 characters) constant case.
The size of the character set is a constant, so you can scan the input 26 times. All you need is a counter storing the number of times you've seen the character corresponding to the current iteration. At the end of each iteration, print that character if the counter is greater than 1.
It's O(n) in runtime and O(1) in auxiliary space. A sketch of this approach follows below.
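Here is a minimal Python sketch of that 26-pass idea (an illustration, not part of the original answer); the single counter is the only extra state, so auxiliary space stays O(1):

def print_repeated(s: str) -> None:
    # One pass per alphabet letter: 26 * O(n) = O(n), with O(1) extra space.
    for code in range(ord('a'), ord('z') + 1):
        ch = chr(code)
        count = 0
        for x in s:
            if x == ch:
                count += 1
        if count > 1:
            print(ch, end=' ')
    print()

print_repeated("aabcacdddec")  # a c d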
Implementation in C# (recursive solution)
static void getNonUniqueElements(string s, string nonUnique)
{
    if (s.Length > 0)
    {
        char ch = s[0];
        s = s.Substring(1);
        if (s.LastIndexOf(ch) >= 0)
        {
            if (nonUnique.LastIndexOf(ch) < 0)
                nonUnique += ch;
        }
        getNonUniqueElements(s, nonUnique);
    }
    else
    {
        Console.WriteLine(nonUnique);
        return;
    }
}

static void Main(string[] args)
{
    getNonUniqueElements("aabcacdddec", "");
    Console.ReadKey();
}

Converting string of ASCII characters to string of corresponding decimals

May I introduce you to the problem that destroyed my weekend. I have biological data in 4 columns:
#ID:::12345/1 ACGACTACGA text !"#$%vwxyz
#ID:::12345/2 TATGACGACTA text :;<=>?VWXYZ
I would like to use awk to edit the first column, replacing the characters : and / with -.
I would like to convert the string in the last column into a comma-separated string of decimals corresponding to each individual ASCII character (any character in the ASCII range 33 - 126).
#ID---12345-1 ACGACTACGA text 33,34,35,36,37,118,119,120,121,122
#ID---12345-2 TATGACGACTA text 58,59,60,61,62,63,86,87,88,89,90
The first part is easy, but I'm stuck on the second. I've tried using awk ordinal functions and sprintf; I can only get the former to work on the first char of the string, and I can only get the latter to convert hexadecimal to decimal, and not with spaces. I also tried this from the shell:
$ od -t d1 test3 | awk 'BEGIN{OFS=","}{i = $1; $1 = ""; print $0}'
But I don't know how to call this from within awk.
I would prefer to use awk as I have some downstream manipulations that can also be done in awk.
Many thanks in advance
Using the ordinal functions from the awk manual, you can do it like this:
awk -f ord.awk --source '{
    # replace : with - in the first field
    gsub(/:/,"-",$1)
    # calculate the ordinal by looping over the characters in the fourth field
    res = ord($4)
    for(i = 2; i <= length($4); i++) {
        res = res "," ord(substr($4, i))
    }
    $4 = res
}1' file
Output:
#ID---12345/1 ACGACTACGA text 33,34,35,36,37,118,119,120,121,122
#ID---12345/2 TATGACGACTA text 58,59,60,61,62,63,86,87,88,89,90
Here is ord.awk (taken as is from: http://www.gnu.org/software/gawk/manual/html_node/Ordinal-Functions.html)
# ord.awk --- do ord and chr

# Global identifiers:
#    _ord_:      numerical values indexed by characters
#    _ord_init:  function to initialize _ord_

BEGIN { _ord_init() }

function _ord_init(    low, high, i, t)
{
    low = sprintf("%c", 7)  # BEL is ascii 7
    if (low == "\a") {      # regular ascii
        low = 0
        high = 127
    } else if (sprintf("%c", 128 + 7) == "\a") {
        # ascii, mark parity
        low = 128
        high = 255
    } else {                # ebcdic(!)
        low = 0
        high = 255
    }
    for (i = low; i <= high; i++) {
        t = sprintf("%c", i)
        _ord_[t] = i
    }
}

function ord(str,    c)
{
    # only first character is of interest
    c = substr(str, 1, 1)
    return _ord_[c]
}

function chr(c)
{
    # force c to be numeric by adding 0
    return sprintf("%c", c + 0)
}
If you don't want to include the whole of ord.awk, you can do it like this:
awk 'BEGIN{ _ord_init() }
function _ord_init(    low, high, i, t)
{
    low = sprintf("%c", 7)  # BEL is ascii 7
    if (low == "\a") {      # regular ascii
        low = 0
        high = 127
    } else if (sprintf("%c", 128 + 7) == "\a") {
        # ascii, mark parity
        low = 128
        high = 255
    } else {                # ebcdic(!)
        low = 0
        high = 255
    }
    for (i = low; i <= high; i++) {
        t = sprintf("%c", i)
        _ord_[t] = i
    }
}
{
    # replace : with - in the first field
    gsub(/:/,"-",$1)
    # calculate the ordinal by looping over the characters in the fourth field
    res = _ord_[substr($4, 1, 1)]
    for(i = 2; i <= length($4); i++) {
        res = res "," _ord_[substr($4, i, 1)]
    }
    $4 = res
}1' file
Perl solution:
perl -lnae '$F[0] =~ s%[:/]%-%g; $F[-1] =~ s/(.)/ord($1) . ","/ge; chop $F[-1]; print "@F";' < input
The first substitution replaces : and / in the first field with a dash, the second one replaces each character in the last field with its ord plus a comma, and chop removes the trailing comma.
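If awk is not a hard requirement, the same transformation is a few lines of Python as well; this is just an illustrative alternative, assuming whitespace-separated columns as in the sample input:

import sys

for line in sys.stdin:
    fields = line.split()
    # replace : and / with - in the first column
    fields[0] = fields[0].replace(":", "-").replace("/", "-")
    # turn each character of the last column into its decimal code
    fields[-1] = ",".join(str(ord(c)) for c in fields[-1])
    print(" ".join(fields))

Run it as, e.g., python3 convert.py < input (the script name is arbitrary).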

How does ASN.1 encode an object identifier?

I am having trouble understanding the basic concepts of ASN.1.
If a type is an OID, does the corresponding number get actually encoded in the binary data?
For instance in this definition:
id-ad-ocsp OBJECT IDENTIFIER ::= { id-ad 1 }
Does the corresponding 1.3.6.1.5.5.7.48.1 get encoded in the binary exactly like this?
I am asking this because I am trying to understand a specific value I see in a DER file (a certificate), which is 04020500, and I am not sure how to interpret it.
Yes, the OID is encoded in the binary data. The OID 1.3.6.1.5.5.7.48.1 you mention becomes 2b 06 01 05 05 07 30 01 (the first two numbers are encoded in a single byte, and each remaining number is also encoded in a single byte because they are all smaller than 128).
A nice description of OID encoding is found here.
But the best way to analyze your ASN.1 data is to paste in into an online decoder, e.g. http://lapo.it/asn1js/.
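As a quick illustrative check of the content octets (a sketch; the 40*X + Y rule for the first two numbers is explained below):

# 1.3.6.1.5.5.7.48.1 -> the first two arcs collapse to 40*1 + 3 = 0x2b;
# every remaining arc is < 128, so each fits in a single byte.
oid = [1, 3, 6, 1, 5, 5, 7, 48, 1]
encoded = bytes([40 * oid[0] + oid[1]] + oid[2:])
print(encoded.hex())  # 2b06010505073001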
If all your components are less than or equal to 127, then you are very lucky, because each can be represented with a single octet. The tricky part is when you have larger numbers, which are common, such as 1.2.840.113549.1.1.5 (sha1WithRSAEncryption). These examples focus on decoding, but encoding is just the opposite.
1. The first two 'digits' are represented with a single byte
You can decode it by reading the first byte into an integer:
var firstByteNumber = 42;
var firstDigit = firstByteNumber / 40;
var secondDigit = firstByteNumber % 40;
Produces the values
1.2
2. Subsequent bytes are represented using Variable Length Quantity, also called base 128.
VLQ has two forms:
Short Form - If the octet starts with 0, then the value is simply the remaining 7 bits.
Long Form - If the octet starts with 1 (most significant bit), combine the next 7 bits of that octet with the 7 bits of each subsequent octet, until you come across an octet with 0 as the most significant bit (this marks the last octet).
The value 840 would be represented with the following two bytes,
10000110
01001000
Concatenate the 7-bit payloads to get 00001101001000 and read it as an int (840).
Great resource for BER encoding, http://luca.ntop.org/Teaching/Appunti/asn1.html
The first octet has value 40 * value1 + value2. (This is unambiguous,
since value1 is limited to values 0, 1, and 2; value2 is limited to
the range 0 to 39 when value1 is 0 or 1; and, according to X.208, n is
always at least 2.)
The following octets, if any, encode value3, ...,
valuen. Each value is encoded base 128, most significant digit first,
with as few digits as possible, and the most significant bit of each
octet except the last in the value's encoding set to "1." Example: The
first octet of the BER encoding of RSA Data Security, Inc.'s object
identifier is 40 * 1 + 2 = 42 = 0x2a. The encoding of 840 = 6 * 128 +
0x48 is 86 48, and the encoding of 113549 = 6 * 128^2 + 0x77 * 128 + 0x0d
is 86 f7 0d. This leads to the following BER encoding:
06 06 2a 86 48 86 f7 0d
Finally, here is an OID decoder I just wrote in Perl.
sub getOid {
    my $bytes = shift;
    # first 2 nodes are 'special';
    use integer;
    my $firstByte = shift @$bytes;
    my $number = unpack "C", $firstByte;
    my $nodeFirst = $number / 40;
    my $nodeSecond = $number % 40;
    my @oidDigits = ($nodeFirst, $nodeSecond);
    while (@$bytes) {
        my $num = convertFromVLQ($bytes);
        push @oidDigits, $num;
    }
    return join '.', @oidDigits;
}

sub convertFromVLQ {
    my $bytes = shift;
    my $firstByte = shift @$bytes;
    my $bitString = unpack "B*", $firstByte;
    my $firstBit = substr $bitString, 0, 1;
    my $remainingBits = substr $bitString, 1, 7;
    my $remainingByte = pack "B*", '0' . $remainingBits;
    my $remainingInt = unpack "C", $remainingByte;
    if ($firstBit eq '0') {
        return $remainingInt;
    }
    else {
        my $bitBuilder = $remainingBits;
        my $nextFirstBit = "1";
        while ($nextFirstBit eq "1") {
            my $nextByte = shift @$bytes;
            my $nextBits = unpack "B*", $nextByte;
            $nextFirstBit = substr $nextBits, 0, 1;
            my $nextSevenBits = substr $nextBits, 1, 7;
            $bitBuilder .= $nextSevenBits;
        }
        my $MAX_BITS = 32;
        my $missingBits = $MAX_BITS - (length $bitBuilder);
        my $padding = '0' x $missingBits;
        $bitBuilder = $padding . $bitBuilder;
        my $finalByte = pack "B*", $bitBuilder;
        my $finalNumber = unpack "N", $finalByte;
        return $finalNumber;
    }
}
OID encoding for dummies :) :
- each OID component is encoded into one or more bytes (octets)
- the OID encoding is simply the concatenation of these component encodings
- the first two components are encoded in a special way (see below)
- if the OID component value fits in 7 bits, its encoding is a single octet holding the component value (note that the most significant, leftmost, bit will always be 0)
- otherwise, if it needs 8 or more bits, the value is "spread" over multiple octets: split the binary representation into 7-bit chunks (from the right), left-pad the first chunk with zeroes if needed, and form octets from these septets by adding a most significant (left) bit of 1, except for the last chunk, which gets bit 0 there
- the first two components (X.Y) are encoded as if they were a single component with value 40*X + Y
This is a rewording of ITU-T recommendation X.690, chapter 8.19.
Here is a simplistic Python 3 implementation of the above, converting the string form of an object identifier into its ASN.1 DER or BER form.
def encode_variable_length_quantity(v: int) -> list:
    # Break it up into groups of 7 bits starting from the lowest significant bit
    # For all the groups of 7 bits other than the lowest one, set the MSB to 1
    m = 0x00
    output = []
    while v >= 0x80:
        output.insert(0, (v & 0x7f) | m)
        v = v >> 7
        m = 0x80
    output.insert(0, v | m)
    return output

def encode_oid_string(oid_str: str) -> tuple:
    a = [int(x) for x in oid_str.split('.')]
    oid = [a[0]*40 + a[1]]  # First two items are coded as a1*40 + a2
    # The rest is a variable-length quantity
    for n in a[2:]:
        oid.extend(encode_variable_length_quantity(n))
    oid.insert(0, len(oid))  # Add a Length
    oid.insert(0, 0x06)      # Add a Type (0x06 for Object Identifier)
    return tuple(oid)

if __name__ == '__main__':
    oid = encode_oid_string("1.2.840.10045.3.1.7")
    print(oid)
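For completeness, here is a matching decoder sketch; it is my own addition rather than part of the answer above, reverses encode_oid_string, and assumes a well-formed DER OID:

def decode_oid(der: bytes) -> str:
    # Expect Type 0x06, then Length, then the content octets.
    assert der[0] == 0x06 and der[1] == len(der) - 2
    body = der[2:]
    first = body[0]
    # Undo the 40*X + Y packing of the first two components.
    if first < 80:
        arcs = [first // 40, first % 40]
    else:
        arcs = [2, first - 80]
    value = 0
    for b in body[1:]:
        value = (value << 7) | (b & 0x7f)
        if not (b & 0x80):  # MSB 0 marks the last octet of an arc
            arcs.append(value)
            value = 0
    return '.'.join(map(str, arcs))

der = bytes(encode_oid_string("1.2.840.10045.3.1.7"))
print(decode_oid(der))  # 1.2.840.10045.3.1.7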
