RFC 6488 specifies the following contents for the SignedData content type:
SignedData ::= SEQUENCE {
    version CMSVersion,
    digestAlgorithms DigestAlgorithmIdentifiers,
    encapContentInfo EncapsulatedContentInfo,
    certificates [0] IMPLICIT CertificateSet OPTIONAL,
    crls [1] IMPLICIT RevocationInfoChoices OPTIONAL,
    signerInfos SignerInfos }
The text explains:
The digestAlgorithms set contains the OIDs of the digest algorithm(s)
used in signing the encapsulated content. This set MUST contain
exactly one digest algorithm OID [...]
Then, inside SignerInfo, there is another DigestAlgorithmIdentifier:
SignerInfo ::= SEQUENCE {
    version CMSVersion,
    sid SignerIdentifier,
    digestAlgorithm DigestAlgorithmIdentifier,
    signedAttrs [0] IMPLICIT SignedAttributes OPTIONAL,
    signatureAlgorithm SignatureAlgorithmIdentifier,
    signature SignatureValue,
    unsignedAttrs [1] IMPLICIT UnsignedAttributes OPTIONAL }
And the explanation is:
The digestAlgorithm MUST consist of the OID of a digest algorithm
that conforms to the RPKI Algorithms and Key Size Profile
specification [RFC6485].
In a couple of PKCS#7 files I peeked into, these two elements had the same value.
Is this a duplication of the same attribute? If not, what is the meaning of either?
Is this a duplication of the same attribute? If not, what is the meaning of either?
In the case of RPKI, yes.
The reason is that this standard does not introduce a new, specialized structure but merely profiles an existing one:
The RPKI signed object is a profile of the CMS [RFC5652] signed-data object
A CMS signed-data object, in contrast to your RPKI one, may contain multiple SignerInfo objects in its signerInfos set, and each of them might use a different digestAlgorithm. Thus, in a general CMS signed-data object the outer digestAlgorithms set may sensibly contain multiple entries.
Actually, that field is specified even more liberally:
digestAlgorithms is a collection of message digest algorithm
identifiers. There MAY be any number of elements in the
collection, including zero. Each element identifies the message
digest algorithm, along with any associated parameters, used by
one or more signer. The collection is intended to list the
message digest algorithms employed by all of the signers, in any
order, to facilitate one-pass signature verification.
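To make the RPKI constraint concrete, here is a rough validation sketch (my own illustration, not text from either RFC). It operates on a SignedData that has already been decoded into plain objects whose field names mirror the ASN.1 above; the decoder itself is assumed:
// Hedged sketch: assumes signedData.digestAlgorithms is an array of OID strings
// and each signerInfo carries its digestAlgorithm as an OID string.
function checkRpkiDigestAlgorithms(signedData) {
    if (signedData.signerInfos.length !== 1) {
        return false; // RPKI profile: exactly one SignerInfo
    }
    if (signedData.digestAlgorithms.length !== 1) {
        return false; // RPKI profile: exactly one digest algorithm OID
    }
    // In plain CMS the digestAlgorithms set lists the algorithms of all signers,
    // so it may hold several entries; under the RPKI profile the single entry
    // must be the one used by the single signer.
    return signedData.digestAlgorithms[0] === signedData.signerInfos[0].digestAlgorithm;
}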
Related
I want to create an XML digital signature using Node.js, so I am using the xml-crypto package:
npm install xml-crypto
The xml-crypto method addReference takes three parameters: (xpath, transforms, digestAlgorithm).
My requirement is to create a reference where the xpath is empty:
var SignedXml = require('xml-crypto').SignedXml;
var sig = new SignedXml();
sig.addReference('', ["http://mytransformation"], 'http://mydigestalgorithm');
This throws an error:
Error: xpath parse error
Please also give a complete code example for creating an XML digital signature using Node.js.
The empty '' reference URI is used to dereference the current document, i.e. the one containing the signature. It is commonly used for enveloped signatures. For this type of signature you may want to use an enveloped-signature transform (i.e. "http://www.w3.org/2000/09/xmldsig#enveloped-signature"). This transform removes the current signature from the reference scope before the remaining transformations are applied to the original XML content. Usually, when signing an XML document, a canonicalization algorithm should be applied after the enveloped-signature transform.
See XMLDSig for more details.
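As for a complete example, below is a minimal sketch based on the classic xml-crypto API (newer releases pass an options object to addReference and use privateKey instead of signingKey, and how the empty reference URI is emitted differs between versions, so check the README of the version you use). The XML, XPath and key file are placeholders:
var SignedXml = require('xml-crypto').SignedXml;
var fs = require('fs');

var xml = '<root><data>hello</data></root>';

var sig = new SignedXml();
// xml-crypto expects an XPath expression here (which is why '' fails to parse);
// select the element to sign and add the enveloped-signature transform plus
// canonicalization as reference transforms.
sig.addReference(
    "//*[local-name(.)='root']",
    [
        'http://www.w3.org/2000/09/xmldsig#enveloped-signature',
        'http://www.w3.org/2001/10/xml-exc-c14n#'
    ],
    'http://www.w3.org/2000/09/xmldsig#sha1'
);
sig.signingKey = fs.readFileSync('private.pem'); // placeholder private key
sig.computeSignature(xml);
fs.writeFileSync('signed.xml', sig.getSignedXml());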
I am implementing SCIM provisioning for my current project, and I am trying to implement the PATCH method, which turns out not to be that easy.
What I read in the RFC is that SCIM PATCH is almost like JSON PATCH, but when I look deeper, the way the path is described is a bit different, which prevents me from using json-patch libraries.
example:
"path":"addresses[type eq \"work\"]"
"path":"members[value eq
\"2819c223-7f76-453a-919d-413861904646\"]"
Do you know any library that is doing SCIM PATCH out of the box?
My project is currently a Node project, but I don't care about the language; I can rewrite it in JavaScript if needed.
Edit
I have finally created my own library for that; it is called scim-patch and it is available on npm: https://www.npmjs.com/package/scim-patch
I implemented the SCIM PATCH operation in my own library. Please take a look here and here. It is currently a work in progress for v2, but the CRUD capability required by patch operations has matured.
First of all, you need a way to parse the SCIM path, which can optionally include a filter. I implemented a finite state machine to parse the path and filter. A scanner goes through each byte of the text and points out interesting events, and a parser uses the scanner to break the text into meaningful tokens. For instance, emails[value eq "foo@bar.com"].type can be broken down into emails, [, value, eq, "foo@bar.com", ], . and type. Finally, a compiler takes these tokens and assembles them into an abstract syntax tree. On paper, it will look something like the following:
emails -> eq -> type
         /    \
     value    "foo@bar.com"
Next, you need a way to traverse the resource data structure according to the abstract syntax tree. I designed my property model to carry a reference to the SCIM attribute. Consider the following resource:
{
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "imulab",
    "emails": [
        {
            "value": "foo@bar.com",
            "type": "work"
        },
        {
            "value": "bar@foo.com",
            "type": "home"
        }
    ]
}
I start traversing from the root of the resource and find the child called emails, which will return a multiValued property of complex type. I see my next token (eq) is the root of a filter, so I perform the filter operations on the two elements of emails. For each element, I go down the value child and evaluate its value. Since only the first element matches the filter, I finally go down the type child of that complex property and arrive at the target property. From there, you are free to perform Add, Replace and Remove operations.
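To make the traversal concrete, here is a toy sketch (my own simplification for this answer; it ignores schema metadata, supports only a single eq filter, and assumes the multiValued attribute is an array of plain objects):
// emails[value eq "foo@bar.com"].type  ->  collect the matching sub-attribute values
function navigate(resource, attribute, filterAttr, filterValue, subAttr) {
    var elements = resource[attribute] || [];
    var matches = elements.filter(function (el) {
        return el[filterAttr] === filterValue; // the "eq" operator only
    });
    return matches.map(function (el) {
        return el[subAttr];
    });
}

var user = {
    userName: 'imulab',
    emails: [
        { value: 'foo@bar.com', type: 'work' },
        { value: 'bar@foo.com', type: 'home' }
    ]
};

console.log(navigate(user, 'emails', 'value', 'foo@bar.com', 'type')); // [ 'work' ]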
There are two things I recommend watching out for.
One is that your traversal path will split when you hit a multiValued property. In the above example, only one element matched the filter. In reality, there may be many matches, or there could be no filter at all, forcing you to traverse all elements.
The other is the syntax of the SCIM path. The specification mandates that it is possible to prefix the actual path with the schema URN, delimited by a :. In that representation, emails.type and urn:ietf:params:scim:schemas:core:2.0:User:emails.type are equivalent. Note that the schema URN contains dots (.) in the 2.0 part. This creates a further complication: you cannot simply split the text on . and hope to get the correct tokens. I use a Trie data structure to record all schema URNs as reserved words. Whenever I start a new segment in the path, I first try to match it in the Trie and do not rely solely on the . to terminate the segment.
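A simplified illustration of that prefix handling (a plain list of known schema URNs instead of a Trie, which is enough to show the idea):
var KNOWN_SCHEMAS = [
    'urn:ietf:params:scim:schemas:core:2.0:User',
    'urn:ietf:params:scim:schemas:core:2.0:Group'
];

// Strip an optional schema URN prefix before tokenizing the rest of the path.
function stripSchemaUrn(path) {
    for (var i = 0; i < KNOWN_SCHEMAS.length; i++) {
        var prefix = KNOWN_SCHEMAS[i] + ':';
        if (path.indexOf(prefix) === 0) {
            return path.slice(prefix.length);
        }
    }
    return path;
}

// Both print "emails.type"
console.log(stripSchemaUrn('emails.type'));
console.log(stripSchemaUrn('urn:ietf:params:scim:schemas:core:2.0:User:emails.type'));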
Hope this helps your work.
Have a look at scim2-filter-parser: https://github.com/15five/scim2-filter-parser
It is a library mainly used by the authors' django-scim2 library: https://github.com/15five/django-scim2
It relies on python AST objects, but I think you should get some takeaways from there.
Since I did not find any TypeScript library implementing SCIM patch operations, I have implemented my own.
You can find it here: https://www.npmjs.com/package/scim-patch
I am currently trying to understand Puppet manifests written by another person and came across the following construction in a class:
postgres_helper::tablespace_grant { $tablespace_grants:
    privilege => 'all',
    require   => [Postgresql::Server::Role[$rolename]]
}
What does $tablespace_grants: mean in this case? At first I assumed it is some kind of title; however, when I used notice to print its value, it turned out to be a hash:
Tablespace_grants value is [{name => TS_INDEX_01, role => developer},
{name => TS_DATA01_01, role => developer}]
What does $tablespace_grants: mean in this case? At first I assumed it is
some kind of title,
It is a variable reference, used, yes, as the title of a postgres_helper::tablespace_grant resource declaration.
however, when I used notice to print
its value, it turned out to be a hash:
Tablespace_grants value is [{name => TS_INDEX_01, role => developer},
{name => TS_DATA01_01, role => developer}]
Actually, it appears to be an array of hashes. An array may be used as the title of a resource declaration to compactly declare multiple resources, one for each array element. In Puppet 4, however, the elements are required to be strings. Earlier versions of Puppet would stringify hashes presented as resource titles; I am uncertain offhand whether Puppet 4 still falls back on this.
In any case, it is unlikely that the overall declaration means what its original author intended, in any version of Puppet. It looks like the intent is to declare multiple resources, each with properties specified by one of the hashes, but the given code doesn't accomplish that, and it's unclear exactly what the wanted code would be.
In our application we are creating XML files with an attribute that has a Guid value. This value needs to be consistent across file upgrades: even if everything else in the file changes, the Guid value for the attribute should remain the same.
One obvious solution was to create a static dictionary mapping file names to the Guids used for them. Then whenever we generate a file, we look up the dictionary for the file name and use the corresponding Guid. But this is not feasible because we might scale to hundreds of files, and we didn't want to maintain a big list of Guids.
So another approach was to derive the Guid from the path of the file. Since our file paths and application directory structure are unique, the Guid should be unique for that path, and each time we run an upgrade the file gets the same Guid based on its path. I found one cool way to generate such 'deterministic Guids' (thanks Elton Stoneman). It basically does this:
private Guid GetDeterministicGuid(string input)
{
    // use MD5 hash to get a 16-byte hash of the string
    MD5CryptoServiceProvider provider = new MD5CryptoServiceProvider();
    byte[] inputBytes = Encoding.Default.GetBytes(input);
    byte[] hashBytes = provider.ComputeHash(inputBytes);

    // generate a guid from the hash
    Guid hashGuid = new Guid(hashBytes);
    return hashGuid;
}
So given a string, the Guid will always be the same.
Are there any other approaches or recommended ways to doing this? What are the pros or cons of that method?
As mentioned by @bacar, RFC 4122 §4.3 defines a way to create a name-based UUID. The advantage of doing this (over just using an MD5 hash) is that these are guaranteed not to collide with non-name-based UUIDs, and have a very (very) small possibility of collision with other name-based UUIDs.
There's no native support in the .NET Framework for creating these, but I posted code on GitHub that implements the algorithm. It can be used as follows:
Guid guid = GuidUtility.Create(GuidUtility.UrlNamespace, filePath);
To reduce the risk of collisions with other GUIDs even further, you could create a private GUID to use as the namespace ID (instead of using the URL namespace ID defined in the RFC).
This will convert any string into a Guid without having to import an outside assembly.
public static Guid ToGuid(string src)
{
    byte[] stringbytes = Encoding.UTF8.GetBytes(src);
    byte[] hashedBytes = new System.Security.Cryptography.SHA1CryptoServiceProvider().ComputeHash(stringbytes);
    Array.Resize(ref hashedBytes, 16);
    return new Guid(hashedBytes);
}
There are much better ways to generate a unique Guid, but this is a way to consistently upgrade a string data key to a Guid data key.
As Rob mentions, your method doesn't generate a UUID, it generates a hash that looks like a UUID.
RFC 4122 on UUIDs specifically allows for deterministic (name-based) UUIDs: versions 3 and 5 use MD5 and SHA-1, respectively. Most people are probably familiar with version 4, which is random. Wikipedia gives a good overview of the versions. (Note that the use of the word 'version' here seems to describe a 'type' of UUID; version 5 doesn't supersede version 4.)
There seem to be a few libraries out there for generating version 3/5 UUIDs, including the Python uuid module, boost.uuid (C++) and OSSP UUID. (I haven't looked for any .NET ones.)
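For illustration, here is a rough sketch of the version-5 (name-based, SHA-1) algorithm in Node.js; the namespace and file-path inputs are just examples, and a vetted library implementation should be preferred over hand-rolled code:
var crypto = require('crypto');

// RFC 4122 section 4.3: SHA-1 over (namespace bytes || name bytes),
// truncated to 16 bytes, with the version and variant bits set.
function uuidV5(namespaceUuid, name) {
    var ns = Buffer.from(namespaceUuid.replace(/-/g, ''), 'hex'); // 16 bytes, network order
    var hash = crypto.createHash('sha1')
        .update(Buffer.concat([ns, Buffer.from(name, 'utf8')]))
        .digest();
    var bytes = hash.slice(0, 16);       // truncate the 20-byte SHA-1 digest
    bytes[6] = (bytes[6] & 0x0f) | 0x50; // version 5
    bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC 4122 variant
    var hex = bytes.toString('hex');
    return hex.slice(0, 8) + '-' + hex.slice(8, 12) + '-' + hex.slice(12, 16) +
        '-' + hex.slice(16, 20) + '-' + hex.slice(20);
}

// URL namespace from RFC 4122, applied to an example file path
console.log(uuidV5('6ba7b811-9dad-11d1-80b4-00c04fd430c8', '/app/config/settings.xml'));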
You need to make a distinction between instances of the class Guid, and identifiers that are globally unique. A "deterministic guid" is actually a hash (as evidenced by your call to provider.ComputeHash). Hashes have a much higher chance of collisions (two different strings happening to produce the same hash) than Guids created via Guid.NewGuid.
So the problem with your approach is that you will have to be ok with the possibility that two different paths will produce the same GUID. If you need an identifier that's unique for any given path string, then the easiest thing to do is just use the string. If you need the string to be obscured from your users, encrypt it - you can use ROT13 or something more powerful...
Attempting to shoehorn something that isn't a pure GUID into the GUID datatype could lead to maintenance problems in future...
MD5 is weak; I believe you can do the same thing with SHA-1 and get better results.
BTW, just a personal opinion: dressing an MD5 hash up as a GUID does not make it a good GUID. GUIDs by their very nature are non-deterministic, so this feels like a cheat. Why not call a spade a spade and just say it's a string-rendered hash of the input? You could do that by using this line, rather than the new Guid line:
string stringHash = BitConverter.ToString(hashBytes);
Here's a very simple solution that should be good enough for things like unit/integration tests:
var rnd = new Random(1234); // Seeded random number (deterministic).
Console.WriteLine($"{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}-{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}-{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}-{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}-{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}{rnd.Next(0, 255):x2}");
I am working with the classes in the System.Windows.Documents namespace, trying to write some generic code that will conditionally set the value of certain dependency properties, depending on whether these properties exist on a given class.
For example, the following method assigns an arbitrary value to the Padding property of the passed FrameworkContentElement:
void SetElementPadding(FrameworkContentElement element)
{
    element.SetValue(Block.PaddingProperty, new Thickness(155d));
}
However, not all concrete implementations of FrameworkContentElement have a Padding property (Paragraph does but Span does not) so I would expect the property assignment to succeed for types that implement this property and to be silently ignored for types that do not.
But it seems that the above property assignment succeeds for instances of all derivatives of FrameworkContentElement, regardless of whether they implement the Padding property. I make this assumption because I have always been able to read back the assigned value.
I assume there is some flaw in the way I am assigning property values. What should I do to ensure that a given dependency property assignment is ignored by classes that do not implement that property?
Many thanks for your advice.
Tim
All classes that derive from Block have the Padding property. You may use the following modification:
void SetElementPadding(FrameworkContentElement element)
{
    var block = element as Block;
    if (block == null) return;
    block.Padding = new Thickness(155d);
}
Even without this modification everything would still work for you because all you want is for Padding to be ignored by classes that do not support it. This is exactly what would happen. The fact that you can read out the value of a Padding dependency property on an instance that does not support it is probably by design but you shouldn't care. Block and derivatives would honor the value and all others would ignore it.