CRijndael only encrypting first 32 bytes of longer string - visual-c++

I'm using CRijndael ( http://www.codeproject.com/Articles/1380/A-C-Implementation-of-the-Rijndael-Encryption-Decr ) for encryption with a null IV (I know that's an issue, but for certain reasons I'm stuck with having to use it).
For longer strings (or ones containing a few ampersands) I only ever get the first 32 bytes encrypted. Shorter strings are encrypted without any issues. Code is below; any ideas?
char dataIn[] = "LONG STRING HERE";
string preInput = dataIn;
CRijndael aRijndael;
aRijndael.MakeKey("32-BIT-KEY-HERE", CRijndael::sm_chain0, 32, 16);
while (preInput.length() % 16 != 0) {
    preInput += '\0';
}
const char *encInput = preInput.c_str();
char szReq[1000];
aRijndael.Encrypt(preInput.c_str(), szReq, preInput.size(), CRijndael::CBC);
const std::string preBase64 = szReq;
std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(preBase64.c_str()), preBase64.length());

Related

Decoding a base64 string in different languages gives different output

I have a base64 string like this
String value = "fefWUeQvPgBe/9QaG/RdPnn9PrzQK2VhVwBzAIr7eei9PQrZA2/sXTA/2SCodnTSJn4Lk+ve5kuPGjco4ljYrjNTsrKBAjN6APSHn0BqBce2lOZbm/z29U6j7j79niPbYl/UIc0VTjc0IgRhmNLn1eVvMTuoaGhlwlxUf/+xenC4NmEM2A6y5/DNRheNw6OrmHik/kowpWGQsRNFyXJ2VtzE54nqs9naePBkRlWna/oqBxzA/txtHXn8h/9xTT2caozcU5/R9JayFZq7ZeclzGs2DAACr1TyQwEb9JJpBXr04Zu4rlWLtnSbyflyK3lnSAocma0L6ENnCZoMiN8gUg=="
I used this method to decode the string in Java:
Base64Utils.decode(value.getBytes())
output:125,-25,-42,81,-28,47,62,0,94,-1,-44,26,27,-12,93,62,121,-3,62,-68,-48,43,101,97,87,0,115,0,-118,-5,121,-24,-67,61,10,-39,3,111,-20,93,48,63,-39,32,-88,118,116,-46,38,126,11,-109,-21,-34,-26,75,-113,26,55,40,-30,88,-40,-82,51,83,-78,-78,-127,2,51,122,0,-12,-121,-97,64,106,5,-57,-74,-108,-26,91,-101,-4,-10,-11,78,-93,-18,62,-3,-98,35,-37,98,95,-44,33,-51,21,78,55,52,34,4,97,-104,-46,-25,-43,-27,111,49,59,-88,104,104,101,-62,92,84,127,-1,-79,122,112,-72,54,97,12,-40,14,-78,-25,-16,-51,70,23,-115,-61,-93,-85,-104,120,-92,-2,74,48,-91,97,-112,-79,19,69,-55,114,118,86,-36,-60,-25,-119,-22,-77,-39,-38,120,-16,100,70,85,-89,107,-6,42,7,28,-64,-2,-36,109,29,121,-4,-121,-1,113,77,61,-100,106,-116,-36,83,-97,-47,-12,-106,-78,21,-102,-69,101,-25,37,-52,107,54,12,0,2,-81,84,-14,67,1,27,-12,-110,105,5,122,-12,-31,-101,-72,-82,85,-117,-74,116,-101,-55,-7,114,43,121,103,72,10,28,-103,-83,11,-24,67,103,9,-102,12,-120,-33,32,82,
Then I used this method to decode the string in Node.js:
Buffer.from(value, 'base64')
output:125,231,214,81,228,47,62,0,94,255,212,26,27,244,93,62,121,253,62,188,208,43,101,97,87,0,115,0,138,251,121,232,189,61,10,217,3,111,236,93,48,63,217,32,168,118,116,210,38,126,11,147,235,222,230,75,143,26,55,40,226,88,216,174,51,83,178,178,129,2,51,122,0,244,135,159,64,106,5,199,182,148,230,91,155,252,246,245,78,163,238,62,253,158,35,219,98,95,212,33,205,21,78,55,52,34,4,97,152,210,231,213,229,111,49,59,168,104,104,101,194,92,84,127,255,177,122,112,184,54,97,12,216,14,178,231,240,205,70,23,141,195,163,171,152,120,164,254,74,48,165,97,144,177,19,69,201,114,118,86,220,196,231,137,234,179,217,218,120,240,100,70,85,167,107,250,42,7,28,192,254,220,109,29,121,252,135,255,113,77,61,156,106,140,220,83,159,209,244,150,178,21,154,187,101,231,37,204,107,54,12,0,2,175,84,242,67,1,27,244,146,105,5,122,244,225,155,184,174,85,139,182,116,155,201,249,114,43,121,103,72,10,28,153,173,11,232,67,103,9,154,12,136,223,32,82
The Java output is what I really want to get. Why is it different?
How can I correctly get the decoded value in Node.js?
Base64Utils.decode returns signed 8-bit values in Java, while Buffer.from returns unsigned 8-bit values in Node.js. Both produce the same underlying bytes; the Java method just interprets the high-order bit as a sign bit, so values above 127 show up as negative, whereas Node.js treats them as unsigned.
const value = 'fefWUeQvPgBe/9QaG/RdPnn9PrzQK2VhVwBzAIr\
7eei9PQrZA2/sXTA/2SCodnTSJn4Lk+ve5kuPGj\
co4ljYrjNTsrKBAjN6APSHn0BqBce2lOZbm/z29\
U6j7j79niPbYl/UIc0VTjc0IgRhmNLn1eVvMTuo\
aGhlwlxUf/+xenC4NmEM2A6y5/DNRheNw6OrmHi\
k/kowpWGQsRNFyXJ2VtzE54nqs9naePBkRlWna/\
oqBxzA/txtHXn8h/9xTT2caozcU5/R9JayFZq7Z\
eclzGs2DAACr1TyQwEb9JJpBXr04Zu4rlWLtnSb\
yflyK3lnSAocma0L6ENnCZoMiN8gUg==';
const bufferValue = Buffer.from(value, 'base64');
for (let i = 0; i < bufferValue.length; i++) {
    let y = bufferValue[i];
    // Values above 127 have the high bit set; shift them into the negative
    // range to reproduce Java's signed bytes.
    if (y > 127) {
        y = -(256 - y);
    }
    console.log(y);
}
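If you just want the signed values without the manual adjustment, a typed-array view gives the same result. This is a small additional sketch, not part of the original answer:
// Int8Array re-interprets each byte as signed, matching Java's byte type.
const signed = Int8Array.from(Buffer.from(value, 'base64'));
console.log(Array.from(signed).join(',')); // 125,-25,-42,81,...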

Which delimiter can I safely use to separate zlib-deflated strings in Node?

I need to send content from a client to a remote server using node.js.
The content can be anything (a user can upload any file).
Each piece of content is compressed by zlib.deflate before sending it to the remote.
I'd prefer not to make multiple round trips, and instead send the entire content at once.
To separate the pieces of content, I need a character that can't appear in the compressed data, so I can split on it safely on the remote.
There is no such character or sequence of characters. zlib compressed data can contain any sequence of bytes.
You could encode the zlib compressed data to avoid one byte value, expanding compressed data slightly. Then you could use that one byte value as a delimiter.
Example code:
// Example of encoding binary data to a sequence of bytes with no zero values.
// The result is expanded slightly. On average, assuming random input, the
// expansion is less than 0.1%. The maximum expansion is less than 14.3%, which
// is reached only if the input is a sequence of bytes all with value 255.
#include <stdio.h>
// Encode binary data read from in, to a sequence of byte values in 1..255
// written to out. There will be no zero byte values in the output. The
// encoding is decoding a flat (equiprobable) Huffman code of 255 symbols.
void no_zeros_encode(FILE *in, FILE *out) {
    unsigned buf = 0;
    int bits = 0, ch;
    do {
        if (bits < 8) {
            ch = getc(in);
            if (ch != EOF) {
                buf += (unsigned)ch << bits;
                bits += 8;
            }
            else if (bits == 0)
                break;
        }
        if ((buf & 127) == 127) {
            putc(255, out);
            buf >>= 7;
            bits -= 7;
        }
        else {
            unsigned val = buf & 255;
            buf >>= 8;
            bits -= 8;
            if (val < 127)
                val++;
            putc(val, out);
        }
    } while (ch != EOF);
}
// Decode a sequence of byte values made by no_zeros_encode() read from in, to
// the original binary data written to out. The decoding is encoding a flat
// Huffman code of 255 symbols. no_zeros_encode() will not generate any zero
// byte values in its output (that's the whole point), but if there are any
// zeros in the input to no_zeros_decode(), they are ignored.
void no_zeros_decode(FILE *in, FILE *out) {
    unsigned buf = 0;
    int bits = 0, ch;
    while ((ch = getc(in)) != EOF)
        if (ch != 0) {          // could flag any zeros as an error
            if (ch == 255) {
                buf += 127 << bits;
                bits += 7;
            }
            else {
                if (ch <= 127)
                    ch--;
                buf += (unsigned)ch << bits;
                bits += 8;
            }
            if (bits >= 8) {
                putc(buf, out);
                buf >>= 8;
                bits -= 8;
            }
        }
}
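On the Node side, once each deflated chunk has been run through an encoding like the one above, the reserved byte becomes a safe separator. Here is a minimal sketch; noZerosEncode and noZerosDecode are hypothetical JavaScript ports of the C routines above, not existing library functions:
const zlib = require('zlib');

// Assumes noZerosEncode(buf) / noZerosDecode(buf) are JS ports of the C code
// above, so encoded chunks never contain a 0x00 byte.
function pack(chunks) {
  const encoded = chunks.map(c => noZerosEncode(zlib.deflateSync(c)));
  // 0x00 can now be used as a delimiter between chunks.
  return Buffer.concat(encoded.flatMap((b, i) => i ? [Buffer.from([0]), b] : [b]));
}

function unpack(payload) {
  const parts = [];
  let start = 0, idx;
  while ((idx = payload.indexOf(0, start)) !== -1) {
    parts.push(payload.subarray(start, idx));
    start = idx + 1;
  }
  parts.push(payload.subarray(start));
  return parts.map(p => zlib.inflateSync(noZerosDecode(p)));
}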

How to shorten a UUID v4 without making it non-unique or guessable

I have to generate a unique URL part which will be "unguessable" and "resistant" to brute-force attacks. It also has to be as short as possible :) and all generated values have to be of the same length. I was thinking about using UUID v4, which can be represented by 32 hex chars (without hyphens), e.g. de305d5475b4431badb2eb6b9e546014, but it's a bit too long. So my question is: how can I generate something unique that can be represented with URL characters, has the same length for each generated value, and is shorter than 32 chars? (In Node.js or PostgreSQL.)
v4() generates a large random number represented as a hexadecimal string. In Node.js you can use Buffer to convert that hex string into a shorter base64 encoding:
import { v4 } from 'uuid';
function getRandomName() {
    let hexString = v4();
    console.log("hex: ", hexString);
    // remove decoration
    hexString = hexString.replace(/-/g, "");
    let base64String = Buffer.from(hexString, 'hex').toString('base64');
    console.log("base64:", base64String);
    return base64String;
}
Which produces:
hex: 6fa1ca99-a92b-4d2a-aac2-7c7977119ebc
base64: b6HKmakr
hex: bd23c8fd-0f62-49f4-9e51-8b5c97601a16
base64: vSPI/Q9i
UUID v4 itself does not actually guarantee uniqueness. It's just very, very unlikely that two randomly generated UUIDs will clash, and that's why they need to be so long - the length is what keeps the collision chance negligible.
So you can make it shorter, but the shorter you make it, the more likely a collision becomes. UUID v4 is 128 bits long because that is commonly considered "unique enough".
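To put rough numbers on that, the birthday bound gives a quick estimate of the collision risk. This is a back-of-the-envelope sketch, not part of the original answer:
// A v4 UUID has 122 random bits, so among n UUIDs the chance of any
// collision is roughly n^2 / 2^123 (birthday bound).
const collisionProbability = n => (n * n) / 2 ** 123;
console.log(collisionProbability(1e9)); // ≈ 9.4e-20 even after a billion UUIDs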
The short-uuid module is built for exactly this kind of shortening:
"Generate and translate standard UUIDs into shorter - or just different - formats and back."
It accepts custom character sets (and offers a few) to translate UUIDs back and forth.
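For reference, usage looks roughly like this; the method names (new, fromUUID, toUUID) are recalled from the module's README rather than taken from this thread, so verify them against the current short-uuid docs:
const short = require('short-uuid');

// A translator using the module's default flickrBase58 alphabet.
const translator = short();

const shortId = translator.new();                // e.g. "mhvXdrZT4jP5T8vBxuvm75"
const backToUuid = translator.toUUID(shortId);   // standard 36-character UUID
const shortened = translator.fromUUID('de305d54-75b4-431b-adb2-eb6b9e546014');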
You can also base64-encode the UUID, which shortens it a bit, to 22 characters. Here's a playground.
It all depends on how guessable/unique it has to be.
My suggestion would be to generate 128 random bits and then encode them using base36. That would give you a "nice" URL and it would be unique and probably unguessable enough.
If you want it even shorter you can use base64, but base64 has to include two non-alphanumeric characters.
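A quick sketch of that suggestion in Node.js, using the built-in crypto module; the padStart keeps every token the same length, which the question asked for (128 bits never needs more than 25 base36 digits):
const crypto = require('crypto');

function randomToken() {
  // 128 random bits -> hex -> BigInt -> base36, padded to a fixed length.
  const bits = crypto.randomBytes(16);
  return BigInt('0x' + bits.toString('hex')).toString(36).padStart(25, '0');
}

console.log(randomToken()); // always 25 lowercase alphanumeric characters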
This is a fairly old thread, but I'd like to point out that the top answer does not produce the results it claims. Run as written, it produces base64 strings of 24 characters, while the examples show 8 characters. If you want more compression, convert the UUID to base 90 using the function below.
Base64 takes 4 characters for every 3 bytes, and hex (base16) takes 2 characters for each byte, so base64 packs about 50% more data per character than hex. If we can increase that bytes-per-character ratio we can get even better compression, and base90 gives slightly more again: for the 16 bytes of a UUID that works out to 32 hex characters, 24 base64 characters (22 without padding), or at most 20 base90 characters.
const hex = "0123456789abcdef";
const base90 = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!#$%&'()*+-./:<=>?@[]^_`{|}~"; // 90 distinct printable ASCII characters
/**
 * Converts a Base16 (hex) represented string to a Base90 string.
 * @param {String} number Hex represented string
 * @returns A Base90 representation of the hex string
 */
function convertToBase90(number) {
    var i,
        divide,
        newlen,
        numberMap = {},
        fromBase = hex.length,
        toBase = base90.length,
        length = number.length,
        result = typeof number === "string" ? "" : [];
    for (i = 0; i < length; i++) {
        numberMap[i] = hex.indexOf(number[i]);
    }
    do {
        divide = 0;
        newlen = 0;
        for (i = 0; i < length; i++) {
            divide = divide * fromBase + numberMap[i];
            if (divide >= toBase) {
                numberMap[newlen++] = parseInt(divide / toBase, 10);
                divide = divide % toBase;
            } else if (newlen > 0) {
                numberMap[newlen++] = 0;
            }
        }
        length = newlen;
        result = base90.slice(divide, divide + 1).concat(result);
    } while (newlen !== 0);
    return result;
}
/**
 * Compresses a UUID String to base 90 resulting in a shorter UUID String
 * @param {String} uuid The UUID string to compress
 * @returns A compressed UUID String.
 */
function compressUUID(uuid) {
    uuid = uuid.replace(/-/g, "");
    return convertToBase90(uuid);
}
Over a few million random uuids this generates no duplicates and the following output:
Lengths:
Avg: 19.959995 Max: 20 Min: 17
Examples:
Hex: 68f75ee7-deb6-4c5c-b315-3cc6bd7ca0fd
Base 90: atP!.AcGJh1(eW]1LfAh
Hex: 91fb8316-f033-40d1-974d-20751b831c4e
Base 90: ew-}Kv&nK?y#~xip5/0e
Hex: 4cb167ee-eb4b-4a76-90f2-6ced439d5ca5
Base 90: 7Ng]V/:0$PeS-K?!uTed
A UUID is 36 characters long; you can shorten it to 22 characters (about 40% shorter) while keeping the ability to convert it back and keeping it URL-safe.
Here is a pure Node solution for a base64url string:
type UUID = string;
type Base64UUID = string;
/**
 * Convert uuid to base64url
 *
 * @example in: `f32a91da-c799-4e13-aa17-8c4d9e0323c9` out: `8yqR2seZThOqF4xNngMjyQ`
 */
export function uuidToBase64(uuid: UUID): Base64UUID {
    return Buffer.from(uuid.replace(/-/g, ''), 'hex').toString('base64url');
}
/**
 * Convert base64url to uuid
 *
 * @example in: `8yqR2seZThOqF4xNngMjyQ` out: `f32a91da-c799-4e13-aa17-8c4d9e0323c9`
 */
export function base64toUUID(base64: Base64UUID): UUID {
    const hex = Buffer.from(base64, 'base64url').toString('hex');
    return `${hex.substring(0, 8)}-${hex.substring(8, 12)}-${hex.substring(
        12,
        16,
    )}-${hex.substring(16, 20)}-${hex.substring(20)}`;
}
Test:
import { randomUUID } from "crypto";
// f32a91da-c799-4e13-aa17-8c4d9e0323c9
const uuid = randomUUID();
// 8yqR2seZThOqF4xNngMjyQ
const base64 = uuidToBase64(uuid);
// f32a91da-c799-4e13-aa17-8c4d9e0323c9
const uuidFromBase64 = base64toUUID(base64);

Converting Byte Array to String (NXC)

Is there a way to show a byte array on the NXT screen (using NXC)?
I've tried it like this:
unsigned char Data[];
string Result = ByteArrayToStr(Data[0]);
TextOut(0, 0, Result);
But it gives me a File Error! -1.
If this isn't possible, how can I watch the value of Data[0] during the program?
If you want to show the byte array in hexadecimal format, you can do this:
byte buf[];
unsigned int buf_len = ArrayLen(buf);
string szOut = "";
string szTmp = "00";
// Convert to hexadecimal string.
for (unsigned int i = 0; i < buf_len; ++i)
{
    sprintf(szTmp, "%02X", buf[i]);
    szOut += szTmp;
}
// Display on screen.
WordWrapOut(szOut,
            0, 63,
            NULL, WORD_WRAP_WRAP_BY_CHAR,
            DRAW_OPT_CLEAR_WHOLE_SCREEN);
You can find WordWrapOut() here.
If you simply want to convert it to ASCII:
unsigned char Data[];
string Result = ByteArrayToStr(Data);
TextOut(0, 0, Result);
If you only wish to display one character:
unsigned char Data[];
string Result = FlattenVar(Data[0]);
TextOut(0, 0, Result);
Try byte. byte is an unsigned char in NXC.
P.S. There is a heavily-under-development debugger in BricxCC (I assume you're on Windows). Look here.
EDIT: The code compiles and runs, but does not do anything.

Appending an integer to a string and a char*

How can I append an integer variable to a string and to a char* variable? For example:
int a = 5;
string St1 = "Book", St2;
char *Ch1 = "Note", Ch2;
St2 = St1 + a --> Book5
Ch2 = Ch1 + a --> Note5
Thanks
The C++ way of doing this is:
std::stringstream temp;
temp << St1 << a;
std::string St2 = temp.str();
You can also do the same thing with Ch1:
std::stringstream temp;
temp << Ch1 << a;
char* Ch2 = new char[temp.str().length() + 1];
strcpy(Ch2, temp.str().c_str());
For char* you need to create another buffer that is long enough to hold both parts. You can fix the length of the output string to remove the chance of overrunning the end of the buffer; if you do that, be careful to make it large enough to hold the whole number, otherwise you might find that Book50 and Book502 both come out as Book50 (truncation).
Here's how to manually calculate the amount of memory required. This is the most efficient approach, but error-prone.
int a = 5;
char* ch1 = "Book";
int intVarSize = 11; // assumes a 32-bit integer, in decimal, with a possible leading -
int newStringLen = strlen(ch1) + intVarSize + 1; // 1 for the null terminator
char* ch2 = malloc(newStringLen);
if (ch2 == 0) { exit(1); }
snprintf(ch2, newStringLen, "%s%i", ch1, a);
ch2 now contains the combined text.
Alternatively, slightly less tricky and also prettier (but less efficient), you can do a 'trial run' of snprintf to get the required length:
int a = 5;
char* ch1 = "Book";
// do a trial run of snprintf with max length set to zero - this returns the number of bytes printed, but does not include the one byte null terminator (so add 1)
int newStringLen = 1 + snprintf(0, 0, "%s%i", ch1, a);
char* ch2 = malloc(newStringLen);
if (ch2 == 0) { exit(1); }
// do the actual printf with real parameters.
snprintf(ch2, newStringLen, "%s%i", ch1, a);
If your platform includes asprintf, this is a lot easier, since asprintf automatically allocates the correct amount of memory for your new string.
int a = 5;
char* ch1 = "Book";
char* ch2;
asprintf(&ch2, "%s%i", ch1, a);
ch2 now contains the combined text.
C++ is much less fiddly, but I'll leave that to others to describe.
You need to create another string large enough to hold the original string followed by the number (i.e. append the character corresponding to each digit of the number to this new string).
Try this out:
char intString[12]; // big enough for a 32-bit int, sign, and null terminator
itoa(integer, intString, 10); // itoa is non-standard; sprintf(intString, "%d", integer) is the portable choice
char *tmp = new char[strlen(original) + strlen(intString) + 1];
strcpy(tmp, original);
char *output = strcat(tmp, intString);
// use output string
delete [] tmp;
