TimeStamp service for CoSign - digital-signature

I'm making progress on writing a PHP script that uses SAPI to sign PDFs. The example code was quite helpful, but I tried to take advantage of a few other SAPI features and have a few questions:
I tried to get the signature to be timestamped. So essentially I added the following code to the PHP example:
define( 'AR_SAPI_SIG_ENABLE_STS' , 0x100 );
define( 'AR_SAPI_SIG_PDF_REVOCATION' , 0x1000 );
$req->OptionalInputs->Flags =
    AR_SAPI_SIG_ENABLE_STS | AR_SAPI_SIG_PDF_REVOCATION;
Unfortunately, with this change the code no longer works. I think I added the flags in the right place. Can anyone shed any light on this? The error that I am getting back is:
result is: urn:oasis:names:tc:dss:1.0:resultmajor:ResponderError
urn:oasis:names:tc:dss:1.0:resultminor:GeneralError
Failed create and sign err 90030373

By definition, a trusted timestamp is a timestamp issued by a trusted third party acting as a Time Stamping Authority (TSA).
CoSign itself provides the signing service, not a timestamping service.
If you are also interested in a secure timestamp, you should configure SAPI to communicate with a TSA server. Since you're using Web Services, you have to specify the TSA server hostname or IP address in your code.

Related

How to validate domain EPP codes?

I'm trying to find a way to automate our company's domain transfer process, and one of the first steps requires validating the domain's EPP code before we initiate the actual transfer.
Currently, we have to log in to our domain registrar and manually use their domain EPP validation tool. They don't provide any API access for this, and setting up what would essentially be a macro to automatically log in and run the tool is too fragile for our requirements. The code for their tool is closed source, so I'm unable to see how they're validating the EPP codes.
Is there any other method for validating domain EPP codes? I've searched StackOverflow and Google but have been unable to find any information on how to do this.
By "EPP code," do you mean the temporary transfer auth codes that registrars usually send via email (or show in their web UIs)?
For reference: https://en.wikipedia.org/wiki/Auth-Code
(EPP codes can also be these status codes, fwiw: https://www.icann.org/resources/pages/epp-status-codes-2014-06-16-en)
I'm also not quite following what you mean by "validating" the codes?
An EPP (Extensible Provisioning Protocol) code, also known as an 'Authorization Code', 'Auth-Info Code', or 'transfer code', is typically generated by the domain registry or registrar and can expire. It will normally be between 1 and 32 characters long and will contain at least one number, one letter, and one special character.
Depending on the registrar, the EPP code might expire after a certain period of time to limit security risks. If the domain owner needs the EPP code after it expires, they must request or regenerate it via their domain registrar.
Because the EPP code is used like a password to verify ownership of a domain, it should be secure and inaccessible to third parties, therefore a third party cannot explicitly validate it.
You can validate its syntax, because that is covered by the EPP specifications, but the specification is quite lax, so it won't help you much: the authInfo is just an XML normalizedString, with essentially no restrictions on content or length. A specific registry may document further constraints, so you would need to read its documentation or ask it directly (access is often restricted to registrars), and those constraints will in any case differ from one registry to another.
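If you still want a superficial syntax check, here is a minimal JavaScript sketch based on the rough format described earlier (1 to 32 characters with at least one letter, one digit, and one special character). Registry and registrar policies vary, and passing this check says nothing about whether the code will actually authorize a transfer:
// Superficial syntax check only; the EPP spec itself barely constrains authInfo.
function looksLikeAuthCode(code) {
    return typeof code === 'string' &&
        code.length >= 1 && code.length <= 32 &&
        /[A-Za-z]/.test(code) &&       // at least one letter
        /[0-9]/.test(code) &&          // at least one digit
        /[^A-Za-z0-9]/.test(code);     // at least one special character
}

console.log(looksLikeAuthCode('x9#Tq2!LmP')); // true
console.log(looksLikeAuthCode('password'));   // false (no digit or special character)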
I guess, even if you don't want this as an answer: since you say
They don't provide any API access
and this is a problem for your business, you either need to find another provider better suited to your needs, or pressure your current one to give you the tools you need.
If you were a registrar with a direct connection to the registry, you could, for some registries, check whether a given authInfo (the official name in the specification; it is often called an 'auth code' or 'EPP code', but there is no point in inventing new terms) supplied by a customer actually works:
a domain:info EPP command can carry an authInfo, even for a domain you don't sponsor yet, and the registry's response will differ depending on whether the authInfo is correct.
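For reference, the kind of command meant here is a domain:info request carrying an authInfo, roughly as defined in RFC 5731 (the domain name and password below are placeholders):
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<epp xmlns="urn:ietf:params:xml:ns:epp-1.0">
  <command>
    <info>
      <domain:info xmlns:domain="urn:ietf:params:xml:ns:domain-1.0">
        <domain:name hosts="all">example.com</domain:name>
        <domain:authInfo>
          <domain:pw>2fooBAR</domain:pw>
        </domain:authInfo>
      </domain:info>
    </info>
    <clTRID>ABC-12345</clTRID>
  </command>
</epp>
For registries that check it, a correct authInfo typically returns the full domain:infData, while a wrong one typically produces an authorization error.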
Anyway, even as a reseller, if you start a transfer you have to provide the authInfo. If it is wrong, you should get an error immediately, because the registrar sending the transfer command to the registry will itself get an error immediately. And if it succeeds instead, the transfer has started, which proves the authInfo was correct.

How to make Fiddler generate certificate with wrong CN

I need it for security testing. My goal is to check how the application would behave if an adversary presented a certificate with a wrong Common Name (CN) and/or SubjectAltName but signed by the correct CA.
I believe that the application under test uses HostnameVerifier incorrectly, and I need to prove it.
Here is an official answer from Telerik (Eric Lawrence):
Click Rules > Customize Rules. Scroll to OnBeforeRequest.
Inside that function, add the following:
if (oSession.HTTPMethodIs("CONNECT") &&
    oSession.HostnameIs("siteIcareabout.com"))
{
    oSession["X-OverrideCertCN"] = "badhostname.net";
}
Save the file and restart the browser if it had previously established any connections to https://siteIcareabout.com.
==========================
I checked it and it works.
Vanilla Fiddler lacks a mechanism to do this, so you would need to use an external tool or plugin. Some examples are cataloged here.
Of course, any certificate you generate will be signed by the Fiddler root certificate, so the platform you're running the application on will need to trust that certificate.

How to check NTLM type3 message? (node.js)

I want to write an HTTP server with node.js that supports NTLMv2 authentication.
Everything works fine with the handshake (Type 1, Type 2, Type 3 messages), and I get my Type 3 message from the client (Chrome browser). The message sent to the server contains an NTLMv2 response that I can read within my node.js server. How can I check whether this response is valid?
Following [1], I understood the Type 3 message and was able to write my own node.js routine to generate these hashes. So when I have the password, I can create a hash that is equal to the one I get from the browser. But how can I validate this hash/response without knowing the password? How can I validate it against a domain controller / Active Directory in my network?
If you have a look at [2], there is a picture that describes my question perfectly. How can I perform steps 4 and 5 of that picture?
Thanks,
Laryllan
[1] http://davenport.sourceforge.net/ntlm.html#theType3Message
[2] http://msdn.microsoft.com/en-us/library/cc239685.aspx
A quick web search affirms that everyone seems to get stuck at about the same point.
The best response to this topic I've seen so far is here:
Windows Authentication Headers without .NET. Possible?
To validate NTLMv2 credentials you would need to perform Secure Channel encrypted RPCs against the NETLOGON service of an Active Directory domain controller, which is to say, this is a difficult thing to do. If your server supports Java Servlet Filters, there's Jespa.
Otherwise, there are modules that can do the authentication at the web server level, such as an Apache module, or by turning on IWA in IIS. But of course these types of solutions are limited in a number of ways.
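For context, here is a minimal node.js sketch of the NTLMv2 proof computation described in [1]. It assumes you already have the user's 16-byte NT hash, which is exactly what a standalone server does not have, and why the Type 3 response has to be handed off to a domain controller (steps 4 and 5 in [2]) for verification:
const crypto = require('crypto');

// ntHashHex is the MD4 hash of the UTF-16LE password; serverChallenge and
// clientBlob are Buffers taken from the Type 2 and Type 3 messages.
function ntlmv2Proof(ntHashHex, username, domain, serverChallenge, clientBlob) {
    const ntHash = Buffer.from(ntHashHex, 'hex');
    // NTLMv2 hash = HMAC-MD5(NT hash, UTF-16LE(uppercase(user) + target))
    const identity = Buffer.from(username.toUpperCase() + domain, 'utf16le');
    const ntlmv2Hash = crypto.createHmac('md5', ntHash).update(identity).digest();
    // NTProofStr = HMAC-MD5(NTLMv2 hash, server challenge || blob)
    return crypto.createHmac('md5', ntlmv2Hash)
        .update(Buffer.concat([serverChallenge, clientBlob]))
        .digest();
}

// The first 16 bytes of the NTLMv2 response in the Type 3 message are the
// NTProofStr; the rest is the blob. Compare with crypto.timingSafeEqual().
Without the NT hash there is nothing to key the HMAC with, so the practical options are the NETLOGON route above or delegating the whole exchange to IIS/Apache.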

Keygen tag in HTML5

So I came across this new tag in HTML5, <keygen>. I can't quite figure out what it is for, how it is applied, and how it might affect browser behavior.
I understand that this tag is for form encryption, but what is the difference between <keygen> and having an SSL certificate for your domain? Also, what is the challenge attribute?
I'm not planning on using it as it is far from implemented in an acceptable range of browsers, but I am curious as to what EXACTLY this tag does. All I can find is vague cookie-cutter documentation with no real examples of usage.
Edit:
I have found a VERY informative document, here. This runs through both client-side and server-side implementation of the keygen tag.
I am still curious as to what the benefit of this over a domain SSL certificate would be.
SSL is about "server identification" or "server AND client authentication (mutual authentication)".
In most cases only the server presents its certificate during the SSL handshake, so that you can make sure this really is the server you expect to connect to. In some cases the server also wants to verify that you really are the person you claim to be. For this you need a client certificate.
The <keygen> tag generates a public/private key pair and then creates a certificate request. This certificate request will be sent to a Certificate Authority (CA). The CA creates a certificate and sends it back to the browser. Now you are able to use this certificate for user authentication.
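As a rough illustration of how it is applied (the /request-cert endpoint is a made-up example), the element just sits inside a normal form:
<form action="/request-cert" method="post">
  <!-- The browser generates a key pair, keeps the private key in its own
       key store, and submits a SignedPublicKeyAndChallenge (SPKAC) value
       containing the public key and the signed challenge string. -->
  <keygen name="spkac" challenge="random-server-generated-nonce" keytype="rsa">
  <input type="submit" value="Request client certificate">
</form>
The server (or the CA behind it) turns the submitted SPKAC into a client certificate and returns it for the browser to install alongside the private key it kept.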
You're missing some history. keygen was first supported by Netscape when it was still a relevant browser. IE, OTOH, supported the same use cases through its ActiveX APIs. Opera and WebKit (or even KHTML), unwilling to reverse-engineer the entire Win32 API, reverse-engineered keygen instead.
It was specified in Web Forms 2.0 (which has now been merged into the HTML specification), in order to improve interoperability between the browsers that implemented it.
Since then, the IE team has reiterated their refusal to implement keygen, and the specification (in order to avoid turning into dry science fiction) has been changed to not require an actual implementation:
Note: This specification does not specify what key types user agents are to support — it is possible for a user agent to not support any key types at all.
In short, this is not a new element, and unless you can ignore IE, it's probably not what you want.
If you're looking for "exactly" then I'd recommend reading the specification.
The keygen element is for creating a key for authentication of the user, while SSL is concerned with privacy of communication and authentication of the server. Quoting from the specification:
This specification does not specify how the private key generated is to be used. It is expected that after receiving the SignedPublicKeyAndChallenge (SPKAC) structure, the server will generate a client certificate and offer it back to the user for download; this certificate, once downloaded and stored in the key store along with the private key, can then be used to authenticate to services that use TLS and certificate authentication.
Deprecated
This feature has been removed from the Web standards. Though some browsers may still support it, it is in the process of being dropped. Avoid using it and update existing code if possible. Be aware that this feature may cease to work at any time.
Source
The doc is useful for elaborating on what the keygen element is. The need for it arises in WebID, which may be understood as part of the Semantic Web of Linked Data; see section 2.1.1 of https://dvcs.w3.org/hg/WebID/raw-file/tip/spec/index-respec.html#creating-a-certificate
This might be useful for websites that provide paid services, like video on demand, or news sites for professionals such as Bloomberg. With these keys, people can only watch the content on the computer that holds the key, not on several computers at once. You decide how the data is stored and processed: you can specify an .asp or .php file that receives the variables and stores the key in the user's profile. This way your users will not be able to log in from a different computer unless you allow it; you may force them to check their email to authorize the new computer, just like Steam does. Basically, it lets you individualize service access when your licensing model is per machine, like an operating system's.
You can check the specs here:
http://www.w3.org/TR/html-markup/keygen.html

I need resources for API security basics. Any suggestions?

I've done a little googling but have been a bit overwhelmed by the amount of information. Until now, I've been considering requiring a valid MD5 hash for every API call, but I realized that it wouldn't be a difficult task to hijack such a system. Would you guys be kind enough to provide me with a few links that might help me in my search? Thanks.
First, consider OAuth. It's somewhat of a standard for web-based APIs nowadays.
Second, some other potential resources -
A couple of decent blog entries:
http://blog.sonoasystems.com/detail/dont_roll_your_own_api_security_recommendations1/
http://blog.sonoasystems.com/detail/more_api_security_choices_oauth_ssl_saml_and_rolling_your_own/
A previous question:
Good approach for a web API token scheme?
I'd like to add some clarifying information to this question. The "use OAuth" answer is correct, but also loaded (given the spec is quite long and people who aren't familiar with it typically want to kill themselves after seeing it).
I wrote up a story-style tutorial on how to go from no security to HMAC-based security when designing a secure REST API here:
http://www.thebuzzmedia.com/designing-a-secure-rest-api-without-oauth-authentication/
This ends up being basically what is known as "2-legged OAuth": because OAuth was originally intended for verifying client applications, the standard flow has three parts, involving the authenticating service, the user staring at the screen, and the service that wants to use the client's credentials.
2-legged OAuth (and what I outline in depth in that article) is intended for service APIs to authenticate between each other. For example, this is the approach Amazon Web Services uses for all their API calls.
The gist is that with any request over HTTP you have to consider the attack vector where some malicious man-in-the-middle is recording and replaying or changing your requests.
For example, if you issue a POST to /user/create with name 'bob', the man-in-the-middle can issue a POST to /user/delete with name 'bob' just to be nasty.
The client and server need some way to trust each other and the only way that can happen is via public/private keys.
You can't just pass the public/private keys back and forth, nor can you simply provide a unique token signed with the private key (which is typically what most people do and think makes them safe): while that will identify the original request as coming from the real client, it still leaves the arguments to the call open to change.
For example, if I send:
/chargeCC?user=bob&amt=100.00&key=kjDSLKjdasdmiUDSkjh
where the key is my public key signed by my private key, a man-in-the-middle can still intercept this call and re-submit it to the server with an "amt" value of "10000.00" instead.
The key is that you have to include ALL the parameters you send in the hash calculation, so when the server gets it, it re-vets all the values by recalculating the same hash on its side.
REMINDER: Only the client and server know the private key.
This style of verification is called an "HMAC"; it is a checksum verifying the contents of the request.
Because hash generation is SO touchy and must be done EXACTLY the same on both the client and server in order to get the same hash, there are super-strict rules on exactly how all the values should be combined.
For example, these two lines produce VERY different hashes when you try to sign them with SHA-1:
/chargeCC&user=bob&amt=100
/chargeCC&amt=100&user=bob
A lot of the OAuth spec is spent describing that exact method of combination in excruciating detail, using terminology like "natural byte ordering" and other non-human-readable garbage.
It is important though, because if you get that combination of values wrong, the client and server cannot correctly vet each other's requests.
You also can't take shortcuts and just concatenate everything into one huge String; Amazon tried this with AWS Signature Version 1 and it turned out to be wrong.
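To make this concrete, here is a minimal node.js sketch of this style of request signing. The parameter names, the newline-joined canonical string, and the choice of SHA-256 are illustrative assumptions, not the exact scheme from the article above or from OAuth:
const crypto = require('crypto');

// Both sides share apiSecret; it is never sent over the wire.
function canonicalize(params) {
    // Sort the keys so client and server combine the values in exactly the same order.
    return Object.keys(params).sort().map(k => k + '=' + params[k]).join('&');
}

function sign(method, path, params, apiSecret) {
    const payload = method + '\n' + path + '\n' + canonicalize(params);
    return crypto.createHmac('sha256', apiSecret).update(payload).digest('hex');
}

// Client: include EVERY argument (plus a timestamp to hinder replay) in the signature.
const params = { user: 'bob', amt: '100.00', ts: Date.now().toString() };
const signature = sign('POST', '/chargeCC', params, 'shared-secret');
// ...send params, the public API key, and signature to the server.

// Server: look up the secret for that API key, recompute the HMAC over the
// received values, and reject the request if it doesn't match what was sent.
function verify(method, path, params, apiSecret, receivedSig) {
    const expected = sign(method, path, params, apiSecret);
    return expected.length === receivedSig.length &&
        crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(receivedSig));
}
Because the server recomputes the hash over the parameters it actually received, any man-in-the-middle change to "amt" (or any other value) makes the signatures disagree and the request gets rejected.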
I hope all of that helps, feel free to ask questions if you are stuck.
