Microsoft is moving away from SHA1. As a result, many executables now have two or more signatures: one using a SHA1 digest for backward compatibility and one using SHA256.
For example, if you look at the properties of vstest.executionengine.exe from Visual Studio 2013 (view the properties on Windows 8 or Server 2012), you'll see it has three different signatures from three different certificates.
I already have code that uses a combination of CryptQueryObject, CryptMsgGetParam, and .NET SignedCms, but it only sees 1 of the 3 signatures. There appears to be only one message with one signer.
I need to get the certificate information for all signatures. How are multiple signatures modeled - is it multiple messages, or multiple signers in a single message? Did Microsoft add new APIs or new flags to access multiple signatures?
It turns out that Microsoft (sort of) hides subsequent signatures. When adding another signature, the entire CMS structure is added as an unsigned attribute.
So, for example, a dual-signed Authenticode signature decoded as a .NET SignedCms will have one signer, and that SignerInfo will have a value in UnsignedAttributes. If you take that attribute's raw ASN.1 value (RawData) and pass it to SignedCms.Decode, you get the second signature.
// decode inner signature
signedCms2.Decode(signedCms1.SignerInfos[0].UnsignedAttributes[0].Values[0].RawData);
It also appears that when a third signature is present, instead of another attribute being added to the root signature's signer, the attribute is added to the innermost signer's unsigned attributes.
Also note that not all unsigned attributes are nested signatures; you need to check for the proper OID on the attribute.
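Here is a rough sketch of how that enumeration might look. The OID checked is szOID_NESTED_SIGNATURE (1.3.6.1.4.1.311.2.4.1), the Microsoft nested-signature attribute; the helper itself and its names are illustrative, and it expects the SignedCms you already obtained via CryptQueryObject/SignedCms.Decode.

using System.Collections.Generic;
using System.Security.Cryptography;
using System.Security.Cryptography.Pkcs;

// Recursively collect the outer signature and every nested Authenticode signature.
static void CollectSignatures(SignedCms cms, List<SignedCms> results)
{
    results.Add(cms);
    foreach (SignerInfo signer in cms.SignerInfos)
    {
        foreach (CryptographicAttributeObject attribute in signer.UnsignedAttributes)
        {
            // Skip timestamps/countersignatures; only 1.3.6.1.4.1.311.2.4.1 holds a nested signature.
            if (attribute.Oid.Value != "1.3.6.1.4.1.311.2.4.1")
                continue;

            foreach (AsnEncodedData value in attribute.Values)
            {
                var nested = new SignedCms();
                nested.Decode(value.RawData);
                CollectSignatures(nested, results); // recurse to handle triple-signed files
            }
        }
    }
}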
I suspect that this was the best way for Microsoft to keep backward compatibility.
I have an application that uses unique AnchorTabStrings for locating signature blocks. When we tested the application using the DocuSign Sandbox the signing locations worked flawlessly. However, now in production we are experiencing erroneous behavior. Sometimes the location of the signature is completely off (meaning that it appears in a random location in the document) and other times the signature block is completely omitted (not used). The application is a Windows C# MVVM WPF Desktop application and we are using the DocuSign SOAP API. The number of signature blocks varies depending on how many pages are produced by the user in the application. The names of the AnchorTabStrings I'm using are "AuditorSignatureBlock", "OwnerManagerSignatureBlock", and "TechnicianSignatureBlock". We are not using document templates. Here's a snippet of code where we create a new signing tab within the document.
new Tab
{
    DocumentID = this.dataContext._inspReport.DocumentGuid.ToString(),
    RecipientID = idRoutingNumber.ToString(),
    Type = TabTypeCode.SignHere,
    AnchorTabItem = new AnchorTab
    {
        AnchorTabString = "AuditorSignatureBlock",
        XOffset = 0,
        YOffset = 0
    }
}
Update:
I have narrowed the issue down to the number of pages: when there are three or more pages in the envelope, the feature no longer functions. I produced the same document in the DocuSign sandbox environment and the issue does not occur there. I have submitted a case with DocuSign support.
Since your app is working on the developer sandbox (demo) system but not on the production system, either the document may be different or anchor string placement may not be enabled on your production account (see below).
First try with the exact same document that worked on the developer sandbox system. If that works but the new document does not, then check carefully that the anchor string in the new document does not include any spaces or other white space, and that it is not being wrapped across two lines.
For example, you can make a test document where the anchor text is not white, then, once it works, change the text color to white.
I've had cases where the transformation from the document creation system to PDF introduced unexpected changes.
For these reasons, one common technique is to use strings such as /auditsig1/ for the anchors.
Another diagnostic technique to try is to create an anchor field (tab) by using the DocuSign web tool, and then check that DocuSign can locate the anchor text in your document. In other words, try it on production without your application at all.
Re: anchor strings being an optional feature
Yes, anchor strings (also known as Auto-place) may not be enabled for your production account. To check, use the DocuSign web tool as described above. If anchor strings are not enabled there, then contact DocuSign to have the feature added to your account.
It turned out the problem had nothing to do with AnchorTabStrings. In production there was an additional carbon-copy recipient, and I did not properly handle the recipient IDs (I had two recipients with the same ID). Sorry, my fault :(
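For anyone who hits the same thing, a rough sketch of the fix, reusing the Tab shape from the snippet above (the ID bookkeeping and values are illustrative):

// Give every recipient in the envelope its own ID before building the tabs.
int nextRecipientId = 1;
string auditorRecipientId = (nextRecipientId++).ToString();    // the signer
string carbonCopyRecipientId = (nextRecipientId++).ToString(); // the CC recipient must NOT reuse the signer's ID

var auditorSignTab = new Tab
{
    DocumentID = this.dataContext._inspReport.DocumentGuid.ToString(),
    RecipientID = auditorRecipientId, // the tab points at the signer's unique ID
    Type = TabTypeCode.SignHere,
    AnchorTabItem = new AnchorTab
    {
        AnchorTabString = "AuditorSignatureBlock",
        XOffset = 0,
        YOffset = 0
    }
};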
I am trying to build a logic app that receives incoming X12 EDI messages. I have an integration account set up and an agreement created. However, I don't see a way to associate a map with an agreement. It seems I have to hard code this in the logic app which would mean one logic app for each trading partner document type. Surely there is a way to select maps based on sender, receiver and document type. This is basic functionality for other EDI packages but I don't have a clue with logic apps.
Not sure whether there is a better way, but you can specify the map name at run time.
Then, as you are already using an integration account, you can externalise the business rules using Liquid templates, as explained here: https://blog.mexia.com.au/business-rules-on-azure-logic-apps-with-liquid-templates
You could pass some metadata about the trading partner to the liquid template, and then return the name of the map and use it in the mapping action.
HTH
Actually, this is almost as easy as BizTalk Server* ;)
Since the Map Name can be composed at runtime, you can upload your Maps using a set pattern, for example [SenderID] + "_" + [MessageType] -> "CONTOSOSID_810"
Then Initialize a Variable set to [SenderID] + "_" + [MessageType] and use that as the Map parameter to the Liquid Action.
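Just to spell out the convention, a trivial sketch of deriving the map name (the helper name and sample values are illustrative):

// Maps are uploaded to the integration account as "<SenderId>_<MessageType>".
static string ResolveMapName(string senderId, string messageType)
{
    return senderId + "_" + messageType; // e.g. ResolveMapName("CONTOSOSID", "810") -> "CONTOSOSID_810"
}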
Important Point: Every output of the X12 Decoder can loop so you may have to manually index the paths in the Initialize Variable depending on if/how you're debatching the transactions.
*BizTalk Server automatically resolved the Map based on the Party's message Type, which is the same idea you can apply here; it's just that the names are a bit messy.
There are various places in my SharePoint-generated web pages where element IDs are assigned values by SharePoint with the prefix "ctl00".
How safe is it to refer to these IDs in my javascript?
How static are they?
How likely are they to change for any reason?
Is there any documentation that discusses how they are generated and answers the above questions?
Thanks,
George
"Safe" is relative here and depends on the specific page and controls involved.
The individual IDs that look like ctlXX are generated automatically by the parent ASP.NET naming container when the control has not been explicitly assigned an ID; that is, the ID is auto-generated as ctl00, ctl01, etc., making sure not to create duplicates.
The final Client ID, which is assigned as the id attribute in the HTML, is made up by joining the IDs of the control and all of its ancestor naming containers with underscores; thus the stability of a single Client ID is affected by multiple controls.
The generated Client IDs are generally guaranteed to remain stable only if the Control Tree is always re-created in the same way; this holds true in many cases, but it is not absolute. Dynamically adding or removing controls can easily break this assumption.
ASP.NET 4 introduced ClientIDMode, most notably the "Predictable" mode (see the naming container link), but I do not believe that SharePoint uses this feature anywhere, and it is definitely not used for IDs that contain ctl00-style components.
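To make the composition concrete, a minimal illustrative code-behind fragment (the page, placeholder, and control names are assumptions, not anything SharePoint-specific):

using System;
using System.Web.UI;

public partial class ReportPage : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // "StatusBox" is assumed to be a TextBox declared in the markup inside the master
        // page's "PlaceHolderMain" content placeholder. The rendered id is the chain of
        // naming-container IDs joined with underscores; the master page has no explicit ID,
        // so it contributes an auto-generated segment, typically "ctl00", giving:
        //   ctl00_PlaceHolderMain_StatusBox
        System.Diagnostics.Debug.WriteLine(StatusBox.ClientID);
    }
}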
Based on my research, the safest approach is to use CSS and jQuery selectors to identify and manipulate the "ctl00"-prefixed elements. I have tried this approach and it appears to be working fine.
I was wondering if anyone knows if the following scenario can happen.
Suppose I have dynamically generated a form that has checkboxes for products specific to that customer. If the customer checks the boxes, the products will be deleted when the form is posted. The checkboxes are named after the product ID.
Now my handler will check Request.Form and parse the product IDs out, then delete the products from the database based on those IDs.
Could someone potentially amend the POST to delete other product IDs, potentially everything in the product table, by adding fake checkbox names to the POST?
If so, it would be easy to check, prior to deleting, that the product ID is related to the authenticated user and that they have sufficient roles to delete it, or to generate a nonce and label the checkboxes with that rather than the product ID; however, I am not doing either at the moment. Any pointers to best practices for this would be good.
I have never considered this before, and I wonder just how many people actually do this by default, or are there a million web sites out there that are vulnerable?
Thanks
It is absolutely possible that someone could build a custom POST request with any key/value pairs (including product ID values) and submit it to your application. The fact that the checkboxes are not on the form that the POST is supposed to come from is irrelevant from a security perspective.
When thinking about web application security, the client is a completely untrusted entity. You have to assume that your JavaScript validation will be bypassed, your SELECT elements can be altered to contain any value an attacker wants, and so forth.
So yes, you should validate that the current user is authorized to delete any product ID submitted to this handler.
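For example, a minimal sketch of that check (table and column names are assumptions):

using System.Collections.Generic;
using System.Data.SqlClient;

// Only delete rows that actually belong to the authenticated user.
static void DeleteProducts(IEnumerable<int> postedProductIds, int currentUserId, SqlConnection connection)
{
    foreach (int productId in postedProductIds)
    {
        using (var command = new SqlCommand(
            "DELETE FROM Products WHERE ProductId = @id AND OwnerUserId = @user", connection))
        {
            command.Parameters.AddWithValue("@id", productId);
            command.Parameters.AddWithValue("@user", currentUserId); // ownership enforced in the WHERE clause
            command.ExecuteNonQuery(); // affects zero rows for IDs the user does not own
        }
    }
}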
I'm not necessarily convinced that you need to go the nonce-obfuscation route. It is an additional layer of security, which is good, but if you are performing proper authorization I don't think it's necessary.
My $0.02
Yes this is a problem. What you are describing is an example of the "Insecure Direct Object References" risk as defined by the Open Web Application Security Project (OWASP).
As to how common it is, it currently (2011) ranks 4th in the OWASP's list of top 10 most severe web application security risks. Details of how to prevent this can be found on the OWASP page.
How Do I Prevent Insecure Direct Object References?
Preventing insecure direct object references requires selecting an approach for protecting each user-accessible object (e.g., object number, filename):
1. Use per-user or session indirect object references. This prevents attackers from directly targeting unauthorized resources. For example, instead of using the resource's database key, a drop-down list of six resources authorized for the current user could use the numbers 1 to 6 to indicate which value the user selected. The application has to map the per-user indirect reference back to the actual database key on the server. OWASP's ESAPI includes both sequential and random access reference maps that developers can use to eliminate direct object references.
2. Check access. Each use of a direct object reference from an untrusted source must include an access control check to ensure the user is authorized for the requested object.
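A minimal sketch of the first approach, a per-user indirect reference map kept in session (all names are illustrative):

using System.Collections.Generic;

class ProductReferenceMap
{
    private readonly Dictionary<int, int> indexToKey = new Dictionary<int, int>();

    // Called while rendering the form: the page shows indexes 1..n instead of database keys.
    public IReadOnlyDictionary<int, int> Build(IList<int> authorizedProductKeys)
    {
        indexToKey.Clear();
        for (int i = 0; i < authorizedProductKeys.Count; i++)
        {
            indexToKey[i + 1] = authorizedProductKeys[i]; // checkbox "3" maps back to a real key
        }
        return indexToKey;
    }

    // Called on POST: anything not in the map was never offered to this user.
    public bool TryResolve(int postedIndex, out int productKey)
    {
        return indexToKey.TryGetValue(postedIndex, out productKey);
    }
}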
Why not simply validate the values you get against the values you provided? Example: you have provided checkboxes for items 1, 2, 3, and 9. The user posts 1, 2, 3, 4, 5, 6. You can find the intersection of the two lists and delete only that (in this case 1, 2, and 3 appear in both lists).
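A quick sketch of that intersection in LINQ, using the example values above (where the offered list comes from server-side state such as session):

using System.Linq;

var offeredIds = new[] { 1, 2, 3, 9 };      // what this user was actually shown
var postedIds = new[] { 1, 2, 3, 4, 5, 6 }; // what came back in the POST

var idsToDelete = postedIds.Intersect(offeredIds).ToList(); // 1, 2, 3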
I am essentially storing a private key (a hash) in one of the OctetString attributes within Active Directory.
My question is, what attribute is secure by default and makes sense to keep private data there? This value should be considered similar to a password, where even administrators shouldn't have access (if possible), just like the current AD Password.
Here is a start of a list of attributes that are enabled by default on a Windows 2008R2 + Exchange 2010 domain.
Update:
Does anyone know of an Octet String attribute that does not expose "read" permissions to all users in the domain by default? I don't want to store my hash publicly and allow someone to build a rainbow table based on the hashes.
Here is the answer for the fella who upvoted my question... it's pretty interesting:
The default permissions in Active Directory are such that Authenticated Users have blanket read access to all attributes. This makes it difficult to introduce a new attribute that should be protected from being read by everyone.
To mitigate this, Windows 2003 SP1 introduces a way to mark an attribute as CONFIDENTIAL. This is achieved by modifying the searchFlags value on the attribute in the schema. searchFlags contains multiple bits representing various properties of an attribute; e.g., bit 1 means that the attribute is indexed. The new bit 128 (the 7th bit) designates the attribute as confidential.
Note: you cannot set this flag on base-schema attributes (those derived from "top", such as common-name). You can determine whether an object is a base-schema object by using LDP to view the object and checking its systemFlags attribute: if the 10th bit is set, it is a base-schema object.
When the Directory Service performs a read access check, it checks for confidential attributes. If there are any, then in addition to READ_PROPERTY access, the Directory Service will also require CONTROL_ACCESS access on the attribute or its property set.
By default, only Administrators have CONTROL_ACCESS access to all objects, so only Administrators will be able to read confidential attributes. Users are free to delegate this right to any specific group they want. This can be done with the DSACLS tool, scripting, or the R2 ADAM version of LDP. As of this writing, it is not possible to use the ACL UI editor to assign these permissions.
The process of marking an attribute confidential and adding the users that need to view it has three steps:
1. Determining which attribute to mark confidential, or adding a new attribute to mark confidential.
2. Marking it confidential.
3. Granting the correct users the CONTROL_ACCESS right so they can view the attribute.
For more details and step-by-step instructions, please refer to the following article:
922836 How to mark an attribute as confidential in Windows Server 2003 Service Pack 1
http://support.microsoft.com/default.aspx?scid=kb;EN-US;922836
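As a rough sketch (the attribute DN is an assumption; this requires Schema Admin rights and cannot be done to base-schema attributes), setting the confidential bit from C# with System.DirectoryServices could look like this:

using System;
using System.DirectoryServices;

class MarkAttributeConfidential
{
    const int CONFIDENTIAL = 128; // searchFlags bit 7

    static void Main()
    {
        using (var attribute = new DirectoryEntry(
            "LDAP://CN=contoso-PrivateHash,CN=Schema,CN=Configuration,DC=contoso,DC=com"))
        {
            int searchFlags = (attribute.Properties["searchFlags"].Value as int?) ?? 0;
            attribute.Properties["searchFlags"].Value = searchFlags | CONFIDENTIAL;
            attribute.CommitChanges();
        }
    }
}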
It isn't actually important whether you use an attribute with OctetString syntax or something else such as DirectoryString. What is important from the security point of view is the security descriptor assigned to the entry, or branch of entries, that holds your attributes. In other words, a binary attribute value hardly makes your system more secure unless proper security is applied to the directory tree.
You cannot get security similar to what the unicodePwd attribute has, because that attribute is special-cased. While you can assign a security descriptor that prohibits access to your attribute values even by an administrator, you cannot prevent an administrator from changing the security descriptor and ultimately acquiring access to the value.
Unless you plan to lock yourself into AD entirely, I would suggest just adding an auxiliary class with your Octet String attribute and using that. (Not all other directories' schemas will have the same attribute with the same syntax; I just ran into that with destinationIndicator, where SunOne and eDirectory use different schema syntaxes.)
Then I would encrypt the contents of the attribute, since it is too hard to guarantee privacy of the data otherwise.
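For example, a rough sketch of encrypting the value before it is written to the attribute (key management is out of scope here, and the helper is an illustration rather than a recommendation of a specific scheme):

using System;
using System.Security.Cryptography;

static byte[] EncryptForDirectory(byte[] plaintext, byte[] key)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key;   // the key must be obtained and protected elsewhere
        aes.GenerateIV();

        using (var encryptor = aes.CreateEncryptor())
        {
            byte[] cipher = encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);

            // Prepend the IV so the stored value is self-describing when read back.
            byte[] stored = new byte[aes.IV.Length + cipher.Length];
            Buffer.BlockCopy(aes.IV, 0, stored, 0, aes.IV.Length);
            Buffer.BlockCopy(cipher, 0, stored, aes.IV.Length, cipher.Length);
            return stored; // write this byte array into the Octet String attribute
        }
    }
}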