Why does cookie-signature in nodejs/express compare signatures using SHA-1 hashing?

I was just looking into the implementation of the cryptographic signing extension for Express which allows the creation of signed cookies.
The MAC in the signing function is calculated as follows:
create an HMAC instance using SHA-256, keyed with the secret
hash the data value
create a Base64-encoded digest
remove trailing '=' padding characters
The result is the original value and the calculated MAC, joined by a '.'.
On verification of the signature, the value is signed again. But it is not the signatures that are then tested for equality; instead, the overall strings, each consisting of the original value with the appended MAC, are compared:
return sha1(mac) == sha1(val) ? str : false;
Here "mac" contains the original value concatenated with a freshly calculated mac, "val" contains the input string as passed to the verification method (consisting of the original value concatenated with a former mac) and "str" is the signed value itself.
See: https://github.com/tj/node-cookie-signature/blob/master/index.js
I would have expected that only the MACs would be compared, but this is not the case. Why have the authors chosen this way of implementing the verification? What is the reason for it? And in particular: why do they compare SHA-1 hashes rather than comparing character by character?

The implementation's sign function returns the value concatenated with '.' and the HMAC of the value, converted to Base64 with any trailing '=' padding removed.
The implementation's unsign function does the same with the value part of the given input (up to the last '.') and checks whether the whole input equals the output of the sign function.
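For illustration, here is a condensed sketch of that logic in Node.js, based on the description above and on the linked source (see the repository for the actual implementation):

const crypto = require('crypto');

function sign(val, secret) {
  const mac = crypto.createHmac('sha256', secret)
    .update(val)
    .digest('base64')
    .replace(/=+$/, ''); // strip the trailing Base64 padding
  return val + '.' + mac;
}

function sha1(str) {
  return crypto.createHash('sha1').update(str).digest('hex');
}

function unsign(input, secret) {
  const str = input.slice(0, input.lastIndexOf('.')); // value part, up to the last '.'
  const mac = sign(str, secret);                      // re-sign the value part
  // compare SHA-1 digests of the whole strings, not the strings themselves
  return sha1(mac) === sha1(input) ? str : false;
}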
As to comparing via hash values: I think the authors were trying to fend off a timing attack, whereby an attacker observes how long a character-by-character equality check takes, determines from minute differences between two tries at which character the check failed, and thereafter guesses the MAC for an arbitrary value part character by character. By comparing SHA-1 digests instead, the code takes a constant time that depends only on the length of the whole input.
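As a side note, newer Node.js versions provide crypto.timingSafeEqual, which allows a more direct constant-time check. A minimal sketch of that alternative (my own, not the library's code) hashes both inputs first, because timingSafeEqual requires buffers of equal length:

const crypto = require('crypto');

function safeCompare(a, b) {
  const ha = crypto.createHash('sha256').update(a).digest();
  const hb = crypto.createHash('sha256').update(b).digest();
  // runs in time independent of where (or whether) the digests differ
  return crypto.timingSafeEqual(ha, hb);
}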
A more interesting side note is the removal of the padding '=' from the Base64-encoded MAC; I have no idea why they would do that, given that there is a URL-safe variant of Base64.

Why is a JWT signature not unique for a specific payload

My application is using JWT and should prevent replay attacks. While testing this, I ran into the following.
When I have a valid JWT and change the last character of the token/signature, the JWT is still valid. E.g. the following tokens all validate correctly:
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTb21lIFRlc3QiLCJjbGFpbSI6IlNvbWUgQ2xhaW0ifQ.UkFYSK7hSSeiqUOSMdbXgbOErMFnuK0Emk1722ny-r4
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTb21lIFRlc3QiLCJjbGFpbSI6IlNvbWUgQ2xhaW0ifQ.UkFYSK7hSSeiqUOSMdbXgbOErMFnuK0Emk1722ny-r5
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTb21lIFRlc3QiLCJjbGFpbSI6IlNvbWUgQ2xhaW0ifQ.UkFYSK7hSSeiqUOSMdbXgbOErMFnuK0Emk1722ny-r6
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJTb21lIFRlc3QiLCJjbGFpbSI6IlNvbWUgQ2xhaW0ifQ.UkFYSK7hSSeiqUOSMdbXgbOErMFnuK0Emk1722ny-r7
I have checked this on http://jwt.io/ and it can be reproduced in my .NET application as well.
Can someone explain how it is possible that the signature is not unique for a given payload? I understand that collisions can occur, but I cannot explain why they would occur for consecutive sequences.
In this special case you are changing the Base64url encoding of the signature, not the signature itself.
The four Base64 values encode the same binary value. Try converting to hexadecimal at http://kjur.github.io/jsjws/tool_b64udec.html
The value you will see is
52415848aee14927a2a9439231d6d781b384acc167b8ad049a4d7bdb69f2fabe
If you change the suffix to -r1 or -r8, the binary value changes and signature validation will fail.
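This is easy to verify in Node.js (assuming Node >= 15.7 for the built-in 'base64url' encoding): an HS256 signature is 32 bytes (256 bits), while 43 Base64 characters carry 258 bits, so the decoder discards the low 2 bits of the final character, and the characters '4' through '7' differ only in those two bits:

const base = 'UkFYSK7hSSeiqUOSMdbXgbOErMFnuK0Emk1722ny-r';
for (const c of '45678') {
  // '4' through '7' print the same 32-byte value ending in ...be; '8' differs
  console.log(c, Buffer.from(base + c, 'base64url').toString('hex'));
}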
Can two different Base64-encoded strings result in the same string if decoded?
When you change the signature (the last part) you can still decode the JWT to see the header and payload. However, if you attempt to validate the JWT with the changed signature, that validation will fail.

What are the restrictions on user-defined key values?

In ArangoDB, when a collection is defined to allow user-defined keys, what are the restrictions on the value of the key? For example, it appears that a key of "Name-2" works, but a key of "Name,2" gives an "ArangoError 1221: invalid document key" error.
Quoting from the manual:
The key must be a string value. Numeric keys are not allowed, but any numeric value can be put into a string and can then be used as document key.
The key must be at least 1 byte and at most 254 bytes long. Empty keys are disallowed when specified (though it may be valid to completely omit the _key attribute from a document)
It must consist of the letters a-z (lower or upper case), the digits 0-9 or any of the following punctuation characters: _ - : . # ( ) + , = ; $ ! * ' %
Any other characters, especially multi-byte UTF-8 sequences, whitespace or punctuation characters cannot be used inside key values
The key must be unique within the collection it is used in
Keys are case-sensitive, i.e. myKey and MyKEY are considered to be different keys.
Restrictions (or naming conventions) for user-defined keys can be found in the docs here.
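As a convenience, the documented rules can be mirrored in a client-side pre-check. A minimal JavaScript sketch (the regex is derived from the quoted list and is my own; the server-side validation remains authoritative):

// allowed: a-z, A-Z, 0-9 and _ - : . # ( ) + , = ; $ ! * ' %
// (note: {1,254} counts characters, while the documented limit is bytes)
const VALID_KEY = /^[a-zA-Z0-9_:.#()+,=;$!*'%-]{1,254}$/;

console.log(VALID_KEY.test('Name-2')); // true
console.log(VALID_KEY.test('Name 2')); // false: whitespace is not allowed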

Unable to send a combination of keyboard commands in Watir

I have a step definition:
Then(/^I send '(.*?)' keys$/) do |key|
  $browser.send_keys key
end
I pass :shift,:tab in my feature file. This doesn't work for me.
But when I have a step definition
Then(/^I send keys$/) do
  key = :shift, :tab
  $browser.send_keys key
end
and hard code the value, it works fine. What might be the issue?
Problem
The problem is that when Cucumber gives you the key (in the first step definition), it is a string with value ':shift,:tab'. Watir just sees this as text and therefore types each of those characters (rather than interpreting the special keys).
In contrast, in the second step definition the key is an array containing 2 symbols.
You need to take the string from the Cucumber step and manipulate it to be symbols.
Solution
Depending on the different sequence of keys you need to send, the following step definition might be sufficient:
Then(/^I send '(.*?)' keys$/) do |keys|
  key_sequence = keys.split(',').map do |key|
    if key[0] == ':'
      key[1..-1].to_sym  # e.g. ':shift' becomes :shift
    else
      key                # plain text is sent as-is
    end
  end
  $browser.send_keys key_sequence
end
This step will:
Take the string ':shift,:tab' (from the step)
Split the string on commas, which are assumed to separate the keys
If the key starts with a colon, assume it is a special character and convert it to a symbol.
If the key does not start with a colon, assume it is plain text and leave it as-is.
Send the mapped keys to the send_keys method.
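With that step definition in place, the original example can be written in the feature file as:
Then I send ':shift,:tab' keys
which is mapped to the array [:shift, :tab], the same value that worked when hard-coded.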

Is an empty string valid base64 encoded data of zero bytes length?

One of my colleagues was telling me that the empty string is not valid Base64-encoded data. I don't think this is true (he is just too lazy to parse it), but after googling around a bit, and even checking the RFC, I have not found any documentation that explicitly states how to properly encode a blob of zero length in Base64.
So, the question is: Do you have a link to some official documentation that explicitly states how zero bytes should be encoded in base64?
According to RFC 4648 Section 10, Test Vectors,
BASE64("") = ""
I would assume the inverse must hold as well.
My thought on this is that there are two possible Base64 values that an empty string could produce: either an empty string, or a string consisting entirely of pad characters ('==='). Any other valid Base64 string contains information. For the second case, we can apply the following rule from the RFC:
If more than the allowed number of pad characters are found at the end
of the string, e.g., a base 64 string terminated with "===", the
excess pad characters could be ignored.
As they can be ignored, they can be dropped from the resulting encoded string without consequence, once again leaving us with an empty string as the Base64 representation of an empty string.
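Common implementations behave accordingly; for example, a quick check in Node.js:

console.log(Buffer.alloc(0).toString('base64') === ''); // true: zero bytes encode to ''
console.log(Buffer.from('', 'base64').length === 0);    // true: '' decodes to zero bytes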

Azure Table Storage RowKey restricted Character Patterns?

Are there restricted character patterns within Azure TableStorage RowKeys? I've not been able to find any documented, despite numerous searches. However, in some performance testing I'm getting behavior that implies there are.
I've got some odd behavior with RowKeys consisting of random characters (the test driver does prevent the restricted characters (/ \ # ?), as well as single quotes, from occurring in the RowKey). The result is a RowKey that will insert fine into the table, but cannot be queried (the result is InvalidInput). For example:
RowKey: 9}5O0J=5Z,4,D,{!IKPE,~M]%54+9G0ZQ&G34!G+
Attempting to query by this RowKey (equality) results in an error (within our app, in Azure Storage Explorer, and in Cloud Storage Studio 2). I took a look at the request being sent via Fiddler:
GET /foo()?$filter=RowKey%20eq%20'9%7D5O0J=5Z,4,D,%7B!IKPE,~M%5D%54+9G0ZQ&G34!G+' HTTP/1.1
It appears the %54 in the RowKey is not escaped in the filter. Interestingly, I get similar behavior for batch requests to table storage with URIs in the batch XML that include this RowKey. I've also seen similar behavior for RowKeys with embedded double quotes, though I have not isolated that pattern yet.
Has anyone come across this behavior? I can easily restrict additional characters from occurring in RowKeys, but would really like to know the 'rules'.
The following characters are not allowed in PartitionKey and RowKey fields:
The forward slash (/) character
The backslash (\) character
The number sign (#) character
The question mark (?) character
Further Reading: Azure Docs > Understanding the Table service data model
public static readonly Regex DisallowedCharsInTableKeys = new Regex(@"[\\#%+/?\u0000-\u001F\u007F-\u009F]");
Detection of Invalid Table Partition and Row Keys:
bool invalidKey = DisallowedCharsInTableKeys.IsMatch(tableKey);
Sanitizing the Invalid Partition or Row Key:
string sanitizedKey = DisallowedCharsInTableKeys.Replace(tableKey, disallowedCharReplacement);
At this stage you may also want to prefix the sanitized key (Partition Key or Row Key) with a hash of the original key, to avoid collisions where different invalid keys end up with the same sanitized value.
Do not use string.GetHashCode() for this though, since it may produce different hash codes for the same string across runs and platforms; it must not be used to identify uniqueness and must not be persisted.
I use SHA256 (https://msdn.microsoft.com/en-us/library/s02tk69a(v=vs.110).aspx) to create a byte-array hash of the invalid key, convert the byte array to a hex string, and prefix the sanitized table key with that.
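For completeness, the same sanitize-and-prefix idea can be sketched in JavaScript (another answer below mentions the Azure Node.js SDK); the regex mirrors the pattern above, and the function name and replacement character are illustrative only:

const crypto = require('crypto');

// characters disallowed per the pattern above (backslash, #, %, +, /, ?,
// and the two control-character ranges)
const DISALLOWED = /[\\#%+\/?\u0000-\u001F\u007F-\u009F]/g;

function sanitizeKey(key, replacement = '-') {
  const sanitized = key.replace(DISALLOWED, replacement);
  // prefix with a hash of the original key so that different invalid keys
  // cannot collide on the same sanitized value
  const hash = crypto.createHash('sha256').update(key).digest('hex');
  return hash + '_' + sanitized;
}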
Also see related MSDN Documentation:
https://msdn.microsoft.com/en-us/library/azure/dd179338.aspx
Related Section from the link:
Characters Disallowed in Key Fields
The following characters are not allowed in values for the PartitionKey and RowKey properties:
The forward slash (/) character
The backslash (\) character
The number sign (#) character
The question mark (?) character
Control characters from U+0000 to U+001F, including:
The horizontal tab (\t) character
The linefeed (\n) character
The carriage return (\r) character
Control characters from U+007F to U+009F
Note that in addition to the chars mentioned in the MSDN article, I also added the % char to the pattern, since I have seen a few places where people mention it being problematic. I guess some of this also depends on the language and the technology you are using to access Table Storage.
If you detect additional problematic chars in your case, then you can add those to the regex pattern, nothing else needs to change.
I just found out (the hard way) that the '+' sign is allowed, but such a PartitionKey is then impossible to query.
I found that in addition to the characters listed in Igorek's answer, these also can cause problems (e.g. inserts will fail):
|
[]
{}
<>
$^&
Tested with the Azure Node.js SDK.
I transform the key using this function:
private static string EncodeKey(string key)
{
    // requires System.Web; use HttpUtility.UrlDecode to reverse this on retrieval
    return HttpUtility.UrlEncode(key);
}
This needs to be done for the insert and for the retrieve of course.