InnoSetup (SHA1 + Base64 encoding a string)

Is there a way I can hash a password with SHA1 and then Base64-encode the output in Inno Setup?
I see Inno Setup already has GetSHA1OfString(), but its output is a hexadecimal encoding, not Base64.
I found a function someone posted here:
http://www.vincenzo.net/isxkb/index.php?title=Encode/Decode_Base64
But it just takes an ANSI string as input, so I can't pass the output of GetSHA1OfString() to it.
I want to hash an input such as admin with SHA1 and Base64-encode the result, so the output should be
0DPiKuNIrrVmD8IUCuw1hQxNqZc=
Any help is appreciated; thanks in advance!

FPC comes with sha1 and base64 units, so you can write your own function (I've split it up and added comments):
uses
  sha1,
  base64;

function GetBase64SHA1OfString(const S: AnsiString): AnsiString;
var
  Digest: TSHA1Digest;
  RawDigest: AnsiString;
  I: Integer;
begin
  // Compute the SHA1 digest (20 raw bytes)
  Digest := SHA1String(S);
  // Copy the raw digest bytes into an AnsiString; encoding
  // SHA1Print(Digest) instead would Base64-encode the hexadecimal
  // text rather than the digest itself, which is not what you want
  SetLength(RawDigest, SizeOf(Digest));
  for I := 0 to SizeOf(Digest) - 1 do
    RawDigest[I + 1] := AnsiChar(Digest[I]);
  // Base64-encode the raw digest and return it
  Exit(EncodeStringBase64(RawDigest));
end;

begin
  Writeln(GetBase64SHA1OfString('admin')); // 0DPiKuNIrrVmD8IUCuw1hQxNqZc=
end.

Related

Encoding md5 in a node-compatible way

I'm converting a node service to Go. For this I need a compatible MD5 hash generator (not for storing passwords!!). However, in this example, I keep getting different results:
Node's crypto takes an encoding parameter when creating MD5 hashes.
> crypto.createHash("md5").update("1Editor’s notebook: Escaping temptation for turf145468066").digest("hex")
'c7c3210bd977b049f42c487b8c6d0463'
In Go (encode_test.go):
package main

import (
    "crypto/md5"
    "encoding/hex"
    "testing"
)

func TestFoo(t *testing.T) {
    const result = "c7c3210bd977b049f42c487b8c6d0463"
    stringToEncode := "1Editor’s notebook: Escaping temptation for turf145468066"
    hash := md5.Sum([]byte(stringToEncode))
    hashStr := hex.EncodeToString(hash[:])
    if hashStr != result {
        t.Error("Got", hashStr, "expected", result)
    }
}
Then go test encode_test.go results in:
--- FAIL: TestFoo (0.00s)
encode_test.go:17: Got c3804ddcc59fabc09f0ce2418b3a8335 expected c7c3210bd977b049f42c487b8c6d0463
FAIL
FAIL command-line-arguments 0.006s
I've tracked it down to the encoding parameter of crypto.update in the node code, and the fact that the string has a ’ quote character in it. If I specify "utf8", it works:
crypto.createHash("md5").update("1Editor’s notebook: Escaping temptation for turf145468066", "utf8").digest("hex")
BUT: I can't change the node code, so the Go code has to be compatible. Any ideas on what to do?
As you've already noted: you must convert the UTF-8 string to whatever encoding is used in your node application. This can be done with encoding packages such as:
golang.org/x/text/encoding/charmap
isoString, err := charmap.ISO8859_1.NewEncoder().Bytes([]byte(stringToEncode))
Considering that the character ’ cannot be represented in ISO 8859-1, we can assume your data uses a different encoding. Now you just need to figure out which one!
And in the worst case, you might have to use a package other than charmap.
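For illustration, here is a sketch of the full round trip. Windows-1252 is purely an assumption here (unlike ISO 8859-1, it can represent ’); substitute whichever codepage your node service actually uses:

package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "log"

    "golang.org/x/text/encoding/charmap"
)

func main() {
    stringToEncode := "1Editor’s notebook: Escaping temptation for turf145468066"
    // Re-encode the UTF-8 Go string into the assumed legacy codepage.
    encoded, err := charmap.Windows1252.NewEncoder().Bytes([]byte(stringToEncode))
    if err != nil {
        log.Fatal(err) // a rune was not representable in the target codepage
    }
    // Hash the re-encoded bytes, as the node side would have hashed its own encoding.
    hash := md5.Sum(encoded)
    fmt.Println(hex.EncodeToString(hash[:]))
}

Note this only reproduces node's digest if Windows-1252 really is the encoding the node side used; otherwise try the other tables charmap offers.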
After a lot of digging in node and V8 I was able to conclude the following:
require("crypto").createHash("md5").update(inputString).digest("hex");
is pretty dangerous: not specifying an encoding causes the input string to be encoded as "ASCII", which, after a lot of digging, turns out to be equivalent to the following (verified on a large input set on my end):
// toNodeASCIIString converts a string to a node-compatible "ASCII" byte slice
// (requires "unicode/utf8")
func toNodeASCIIString(inputString string) []byte {
    lengthOfString := utf8.RuneCountInString(inputString)
    stringAsRunes := []rune(inputString)
    bytes := make([]byte, lengthOfString)
    for i, r := range stringAsRunes {
        bytes[i] = byte(r % 256)
    }
    return bytes
}
What it basically does is take each rune mod 256, throwing away a large part of the information in the input string.
The node example above is pretty much the standard, copy-pasted-everywhere way to create MD5 hashes in node. I haven't checked, but I'm assuming this works the same for all the other hashes (SHA1, SHA256, etc.).
I would love to hear someone's thoughts on why this is not a huge security hole.
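For completeness, a sketch of how the original failing test passes once the helper above is applied (TestNodeCompatible is my name for it; it assumes toNodeASCIIString lives in the same test file, with the same imports as above plus "unicode/utf8"):

func TestNodeCompatible(t *testing.T) {
    const result = "c7c3210bd977b049f42c487b8c6d0463"
    stringToEncode := "1Editor’s notebook: Escaping temptation for turf145468066"
    // Apply the node-compatible conversion before hashing.
    hash := md5.Sum(toNodeASCIIString(stringToEncode))
    hashStr := hex.EncodeToString(hash[:])
    if hashStr != result {
        t.Error("Got", hashStr, "expected", result)
    }
}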

Golang Random Sha256

I am having trouble getting a random sha256 hash using a timestamp seed:
https://play.golang.org/p/2-_VPe3oFr (don't run it on the playground: the time is always the same there)
Does anyone understand why it always returns the same result in non-playground runs?
Because you do this:
timestamp := time.Now().Unix()
log.Print(fmt.Sprintf("%x", sha256.Sum256([]byte(string(timestamp))))[:45])
You print the hex form of the SHA-256 digest of the data:
[]byte(string(timestamp))
What is it exactly?
timestamp is of type int64, and converting it to string is specified as:
Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".
But its value is not a valid Unicode code point, so it will always be "\uFFFD", which is efbfbd when UTF-8-encoded, and so your code always prints the SHA-256 of the bytes []byte{0xef, 0xbf, 0xbd}, which is (or rather its first 45 hex digits, because you slice the result):
83d544ccc223c057d2bf80d3f2a32982c32c3c0db8e26
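You can verify this directly; "\uFFFD" is exactly those three bytes in UTF-8:

package main

import (
    "crypto/sha256"
    "fmt"
)

func main() {
    data := []byte("\uFFFD") // the replacement character: 0xef 0xbf 0xbd
    fmt.Printf("%x\n", sha256.Sum256(data)) // starts with 83d544ccc223c057...
}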
I guess you wanted to generate some random bytes and calculate the SHA-256 of that, something like this:
rand.Seed(time.Now().UnixNano()) // seed math/rand so results differ between runs
data := make([]byte, 10)
for i := range data {
    data[i] = byte(rand.Intn(256))
}
fmt.Printf("%x", sha256.Sum256(data))
Note that if you use the crypto/rand package instead of math/rand, you can fill a slice of bytes with random values using the rand.Read() function, and you don't even have to set a seed (so you don't even need the time package):
data := make([]byte, 10)
if _, err := rand.Read(data); err == nil {
    fmt.Printf("%x", sha256.Sum256(data))
}
Yes. This:
string(timestamp)
does not do what you think it does; see the spec. Long story short: the timestamp is not a valid Unicode code point, so the result is always "\uFFFD".
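If the goal was to hash the timestamp itself, the int64 has to be turned into bytes explicitly first. A minimal sketch using strconv (one of several reasonable conversions; encoding/binary would work just as well):

package main

import (
    "crypto/sha256"
    "fmt"
    "strconv"
    "time"
)

func main() {
    timestamp := time.Now().Unix()
    // Format the int64 as decimal text, then hash those bytes.
    sum := sha256.Sum256([]byte(strconv.FormatInt(timestamp, 10)))
    fmt.Printf("%x\n", sum)
}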

How to convert strings to array of byte and back

I must write strings to a binary MIDI file. The standard requires one to know the length of a string in bytes. As I want to write for mobile as well, I cannot use AnsiString, which used to be a good way to ensure that the string was a one-byte-per-character string; that simplified things. I tested the following code:
type
  TByte = array of Byte;

function TForm3.convertSB(arg: string): TByte;
var
  i: Int32;
begin
  Label1.Text := IntToStr(SizeOf(Char));
  for i := Low(arg) to High(arg) do
    Label1.Text := Label1.Text + ' ' + IntToStr(Ord(arg[i]));
end; // convertSB //

convertSB('MThd');
It returns 2 77 84 104 100 (as the label text) on Windows as well as on Android. Does this mean that Delphi treats strings as UTF-8 by default? That would greatly simplify things, but I couldn't find it in the help. And what is the best way to convert this to an array of bytes? Read each character and test whether it takes 1, 2 or 4 bytes, and allocate that much space in the array? And for converting back to a character: just read the array of bytes until a byte < 128 is encountered?
Delphi strings are encoded internally as UTF-16. There was a big clue in the fact that SizeOf(Char) is 2.
The reason that all your characters have ordinals in the ASCII range is that UTF-16 extends ASCII, in the sense that characters 0 to 127 have the same ordinal values in UTF-16 as in ASCII. And all your characters are ASCII characters.
That said, you do not need to worry about the internal storage. You simply convert between string and byte array using the TEncoding class. For instance, to convert to UTF-8 you write:
bytes := TEncoding.UTF8.GetBytes(str);
And in the opposite direction:
str := TEncoding.UTF8.GetString(bytes);
The class supports many other encodings, as described in the documentation. It's not clear from the question which encoding you need to use, but hopefully you can work out the rest from here.

How can I convert a string encoded with Windows codepage 1251 to a Unicode string

The Cyrillic string my app receives uses (I believe) the Windows-1251 code page table. I say "I believe" because all the characters I tested fit that table.
Question: how do I convert such a thing to a string, which is Unicode by default in my Delphi?
Or better yet: is there a ready-to-use converter in Delphi, or should I write one?
If you are using Delphi 2009 or later, this is done automatically:
type
  CyrillicString = type AnsiString(1251);

procedure TForm1.FormCreate(Sender: TObject);
var
  UnicodeStr: string;
  CyrillicStr: CyrillicString;
begin
  UnicodeStr := 'This is a test.';  // Unicode string...
  CyrillicStr := UnicodeStr;        // ...converted to 1251

  CyrillicStr := 'This is a test.'; // Cyrillic string...
  UnicodeStr := CyrillicStr;        // ...converted to Unicode
end;
First of all I recommend you read Marco Cantù's whitepaper on Unicode in Delphi. I am also assuming from your question (and previous questions), that you are using a Unicode version of Delphi, i.e. D2009 or later.
You can first of all define an AnsiString with codepage 1251 to match your input data.
type
  CyrillicString = type AnsiString(1251);
This is an important step. It says that any data contained inside a variable of this type is to be interpreted as having been encoded using the 1251 codepage. This allows Delphi to perform correct conversions to other string types, as we will see later.
Next, copy your input data into a string of this type.
function GetCyrillicString(const Input: array of Byte): CyrillicString;
begin
  SetLength(Result, Length(Input));
  if Length(Result) > 0 then
    Move(Input[0], Result[1], Length(Input));
end;
Of course, there may be other, more convenient ways to get the data in. Perhaps it comes from a stream. Whatever the case, make sure you do it with something equivalent to a memory copy so that you don't invoke code page conversions and thus lose the 1251 encoding.
Finally, you can simply assign a CyrillicString to a plain Unicode string variable, and the Delphi runtime performs the necessary conversion automatically.
function ConvertCyrillicToUnicode(const Input: array of Byte): string;
begin
  Result := GetCyrillicString(Input);
end;
The runtime is able to perform this conversion because you specified the code page when defining CyrillicString, and because string maps to UnicodeString, which is encoded as UTF-16.
The Windows API functions MultiByteToWideChar() and WideCharToMultiByte() can be used to convert to and from any code page supported by Windows. Of course, if you use Delphi >= 2009, it is easier to use the native Unicode support.

Delphi 2010: how do I convert a UTF8-encoded PAnsiChar to a UnicodeString?

The situation: I’ve an external DLL that uses UTF-8 as its internal string format. The interface functions all use PAnsiChar to pass strings along.
The rest of my application uses Delphi’s native string type; since I’m working with Delphi 2010, that will map to a UnicodeString.
How can I reliably cast those PAnsiChar arguments (which are pointing to UTF-8 encoded strings) to a UnicodeString?
I had this function, which I thought worked fine:
function PUTF8CharToString(Text: PAnsiChar): string;
var
  UText: UTF8String;
begin
  UText := UTF8String(Text);
  Result := string(UText);
end;
...but now I've run into a case where the resulting string is corrupted; when I save the PAnsiChar to a file, it's fine, but when I save the resulting string after conversion with the above function, it's corrupted.
Or should this work correctly, and is this indicative of some other memory (de)allocation problem?
Edit: I finally managed to get rid of the memory corruption by assigning the converted string to a local string variable instead of passing it directly to another function.
From System:
function UTF8ToUnicodeString(const S: PAnsiChar): UnicodeString; overload;
UnicodeStr := System.Utf8ToUnicodeString(Text);
Try using SetString() instead of casting:
function PUTF8CharToString(Text: PAnsiChar): string;
var
  UText: UTF8String;
begin
  SetString(UText, Text, StrLen(Text));
  Result := UText;
end;
