ASP.NET MVC5 OWIN rejects long URLs - IIS

I am creating an ASP.NET MVC5 action method that implements a password reset endpoint and accepts a click-through from an email message containing a token. My implementation uses OWIN middleware and closely resembles the ASP.NET Identity 2.1 samples application.
As per the samples application, the token is generated by UserManager and embedded into a URL that is sent to the user by email:
var token = await UserManager.GeneratePasswordResetTokenAsync(user.Id);
var encoded = HttpServerUtility.UrlTokenEncode(Encoding.UTF8.GetBytes(token));
var uri = new Uri(Url.Link("ResetPasswordRoute", new { id = user.Id, token = encoded }));
The link in the email message targets an MVC endpoint that accepts the token parameter as one of its route segments:
[Route("reset-password/{id}/{token}")]
public async Task<ActionResult> PasswordResetAsync(int id, string token)
{
    token = Encoding.UTF8.GetString(HttpServerUtility.UrlTokenDecode(token));
    // Implementation here
}
However, requests to this endpoint (using a URL generated in the above manner) fail with Bad Request - Invalid URL.
It appears that this failure occurs because the URL is too long. If I truncate the token segment, the request reaches the MVC endpoint correctly (although, of course, the token parameter is no longer valid). For example, the following truncated URL works ...
http://localhost:53717/account/reset-password/5/QVFBQUFOQ01uZDhCRmRFUmpIb0F3RS9DbCtzQkFBQUFzcko5MEJnYWlrR1RydnVoY2ZwNEpnQUFBQUFDQUFBQUFBQVFaZ0FBQUFFQUFDQUFBQUNVeGZZMzd4OTQ3cE03WWxCakIwRTl4NkVSem1Za2ZUc1JxR2pwYnJSbmJ3QUFBQUFPZ0FBQUFBSUFBQ0FBQUFEcEpnVXFXS0dyM2ZPL2dQcWR1K2x6SkgxN25UVjdMYlE2UCtVRG4rcXBjU0FBQUFE
... but it will fail if one additional character is added ...
http://localhost:53717/account/reset-password/5/QVFBQUFOQ01uZDhCRmRFUmpIb0F3RS9DbCtzQkFBQUFzcko5MEJnYWlrR1RydnVoY2ZwNEpnQUFBQUFDQUFBQUFBQVFaZ0FBQUFFQUFDQUFBQUNVeGZZMzd4OTQ3cE03WWxCakIwRTl4NkVSem1Za2ZUc1JxR2pwYnJSbmJ3QUFBQUFPZ0FBQUFBSUFBQ0FBQUFEcEpnVXFXS0dyM2ZPL2dQcWR1K2x6SkgxN25UVjdMYlE2UCtVRG4rcXBjU0FBQUFEf
I believe that the default IIS configuration setting for maxUrlLength should be compatible with what I am trying to do, but I have also tried explicitly setting it to a larger value, which did not solve the problem.
However, using Fiddler to examine the server response, I can see that the working URL generates a server response with the following header ...
Server: Microsoft-IIS/8.0
... whereas the longer URL is rejected with a response containing the following header ...
Server: Microsoft-HTTPAPI/2.0
This seems to imply that the URL is not being rejected by IIS, but by a middleware component.
So, I am wondering what that component might be and how I might work around its effect.
Any suggestions please?
Many thanks,
Tim
Note: Although my implementation above Base64 encodes the token before using it in the URL, I have also experimented with the simpler approach used in the sample code, which relies on the URL encoding provided by UrlHelper.RouteUrl. Both techniques suffer from the same issue.

You should not be passing such long values in the path portion of the URL, as individual path segments are limited in length to something like 255 characters.
A slightly better alternative is to use a query string parameter instead:
http://localhost:53717/account/reset-password/5?token=QVFBQUFOQ01uZDhCRmRFUmpIb0F3RS9DbCtzQkFBQUFzcko5MEJnYWlrR1RydnVoY2ZwNEpnQUFBQUFDQUFBQUFBQVFaZ0FBQUFFQUFDQUFBQUNVeGZZMzd4OTQ3cE03WWxCakIwRTl4NkVSem1Za2ZUc1JxR2pwYnJSbmJ3QUFBQUFPZ0FBQUFBSUFBQ0FBQUFEcEpnVXFXS0dyM2ZPL2dQcWR1K2x6SkgxN25UVjdMYlE2UCtVRG4rcXBjU0FBQUFEf
That should be safe for at least 2000 characters (full URL) depending on the browser and IIS settings.
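For illustration, a minimal sketch of the query-string variant, adapting the route and link generation from your question (the route name and the exact behaviour of Url.Link here are assumptions on my part):
// The route contains only the id; the token travels in the query string.
[Route("reset-password/{id}", Name = "ResetPasswordRoute")]
public async Task<ActionResult> PasswordResetAsync(int id, string token)
{
    // token is bound from ?token=... rather than from a path segment
    token = Encoding.UTF8.GetString(HttpServerUtility.UrlTokenDecode(token));
    // Implementation here
    return View();
}
// Values that are not part of the route template (token here) should be
// appended to the query string automatically when the link is generated:
var uri = new Uri(Url.Link("ResetPasswordRoute", new { id = user.Id, token = encoded }));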
A more secure and scalable approach is to pass a token inside an HTTP header.

Related

OTP validation response shows MISSING_PARAMETER

Issue
I am trying to use a YubiKey for second-factor authentication via their OTP Validation Protocol Version 2.0. Despite following all the documentation meticulously (https://developers.yubico.com/OTP/OTP_Walk-Through.html and https://developers.yubico.com/OTP/Specifications/OTP_validation_protocol.html), I get either a BAD_SIGNATURE or a MISSING_PARAMETER status response. I only tried attaching an HMAC-SHA-1 signature because of the MISSING_PARAMETER response (the signature is not required). When I do attach an HMAC-SHA-1 signature, I get the BAD_SIGNATURE status response.
Environment
I am using Visual Studio 2019 and ASP.NET via a custom login form for authentication. Because of the deadline I am attempting to use SharePoint 2019 as the platform, since it meets all the requirements OOTB except the 2FA requirement. Since there were issues in the past with AD-based authentication, I am using Forms auth with the ASP.NET SQL Membership provider.
Login Code Process
The custom login form first checks the membership provider without setting any auth cookies. If the username and password are valid, it proceeds to check whether the user requires 2FA via YubiKey. If they do, I use an HttpWebRequest to send the GET and then read the response from it (for debugging I am currently printing the response on a label):
string getUrl = "https://api2.yubico.com/wsapi/2.0/verify?id=" + YUBICOID + "&otp=" + otp + "&nonce=" + nonce;
HttpWebRequest yubiGet = (HttpWebRequest)WebRequest.Create(getUrl);
HttpWebResponse response = (HttpWebResponse)yubiGet.GetResponse();
Stream respstr = response.GetResponseStream();
StringBuilder sb = new StringBuilder();
string temp = null;
int ct = 0;
byte[] buffer = new byte[8192];
do
{
    ct = respstr.Read(buffer, 0, buffer.Length);
    if (ct != 0)
    {
        temp = Encoding.ASCII.GetString(buffer, 0, ct);
        sb.Append(temp);
    }
}
while (ct > 0);
string responseStr = sb.ToString();
ErrorLabel.Text = "get: " + getUrl + "<br /><br />response: " + responseStr;
Per the documentation (see the validation protocol link above), the HMAC signature is not required; however, when I leave it off I get the MISSING_PARAMETER response. To build the signature I followed the documentation and reviewed the code in the old, deprecated YubicoClient class, using the same function calls and process to generate the signature.
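For reference, this is roughly how I build the h value, following my reading of the validation protocol document and the old YubicoClient code: sort the key/value pairs alphabetically, join them as key=value with &, HMAC-SHA-1 the line using the base64-decoded API key, then base64-encode the hash (treat the URL-encoding of h and the exact parameter handling here as my assumptions):
// Requires System.Linq, System.Security.Cryptography and System.Text.
string BuildSignature(IDictionary<string, string> parameters, string base64ApiKey)
{
    // 1. Sort the key/value pairs alphabetically and join them as key=value with '&'.
    string line = string.Join("&",
        parameters.OrderBy(p => p.Key, StringComparer.Ordinal)
                  .Select(p => p.Key + "=" + p.Value));

    // 2. HMAC-SHA-1 the line with the base64-decoded API key, then base64-encode the hash.
    using (var hmac = new HMACSHA1(Convert.FromBase64String(base64ApiKey)))
    {
        return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(line)));
    }
}
// Usage (YUBICOID, otp and nonce as in the request above; SECRETKEY is a placeholder
// for the API key from the signup page; h is URL-encoded before being appended):
// var p = new Dictionary<string, string> { { "id", YUBICOID }, { "otp", otp }, { "nonce", nonce } };
// string h = HttpUtility.UrlEncode(BuildSignature(p, SECRETKEY));
// string getUrl = "https://api2.yubico.com/wsapi/2.0/verify?id=" + YUBICOID + "&otp=" + otp + "&nonce=" + nonce + "&h=" + h;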
If the username and the creds were verified (via membership provider) and 2FA (yubikey) passed then I set the auth cookie and redirect appropriately.
What I have tried
Everything works except the yubikey response part. I have tried including all of the parameters indicated in the validation protocol documentation, both with and without the signature. With the signature I receive a BAD_SIGNATURE response and without it I receive a MISSING_PARAMETER response.
I am using the Client ID from the Yubico API key signup site (https://upgrade.yubico.com/getapikey/) for the id URL parameter. When I tried with the signature, I used the secret key that was generated there and followed the same signature-generation process as the deprecated YubicoClient class.
A separate page is used to link the users to the key and is outside the scope of this issue.
I did use the YubiKey Manager application to reset the slot and re-registered the key with the API key signup multiple times, using both slot 1 and slot 2 on the key.
It is a YubiKey 4C FIPS. If it works, we will be getting a lot of YubiKey 5s. The documentation does not differentiate between key generations, YubiKey Manager indicates that OTP is supported, and there are no issues generating the OTP.
Any guidance is greatly appreciated!
OK, figured it out. I reduced the nonce from 40 bytes to 20 and it worked. The documentation states the nonce must be a "from 16 to 40 character long string"; I was using a random number generator to get bytes and then converting them to a string. I am not sure whether it was the URL length or whether the generated nonce ended up longer than 40 characters, but when I reduced it from 40 to 20 bytes it worked. I am happy now!
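For anyone curious, the nonce generation now looks roughly like this (a sketch; assuming the bytes are hex-encoded as below, 20 bytes give exactly 40 characters, whereas 40 bytes would have produced 80 characters and blown past the 40-character limit):
// Requires System.Security.Cryptography.
// 20 random bytes hex-encoded = 40 characters, the top of the 16-40 character range.
string GenerateNonce()
{
    byte[] bytes = new byte[20];
    using (var rng = new RNGCryptoServiceProvider())
    {
        rng.GetBytes(bytes);
    }
    return BitConverter.ToString(bytes).Replace("-", "").ToLowerInvariant();
}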

DocuSign Authorization Code Grant flow returns invalid_grant error

The DocuSign documentation goes through an easy-to-follow Authorization Code Grant flow. I'm able to get the "code" from the initial GET request to /oauth/auth, but exchanging it for tokens gives me an "invalid_grant" error when I try in Postman. I've followed the steps and have a request that looks like this, using account-d.docusign.com as the host:
POST /oauth/token
Content-Type: application/x-www-form-urlencoded
Authorization: Basic MjMwNTQ2YTctOWM1NS00MGFkLThmYmYtYWYyMDVkNTQ5NGFkOjMwODc1NTVlLTBhMWMtNGFhOC1iMzI2LTY4MmM3YmYyNzZlOQ==
grant_type=authorization_code&code=ey2dj3nd.AAAA39djasd3.dkn4449d21d
Two other members of my team have also tried with their developer accounts and all of them get invalid_grant errors. Is this no longer supported, or are there common causes of this error that we could investigate?
Re-check all of your values.
I was also getting the same invalid_grant response and could not figure out why at first. It turns out that I had a typo in the Content-Type header. I was using application/x-www-form-urlencode instead of application/x-www-form-urlencoded.
You may not be, but if you are submitting the exact Authorization Header as you've posted it here in your question (MjMwNTQ2YTctOWM1NS00MGFkLThmYmYtYWYyMDVkNTQ5NGFkOjMwODc1NTVlLTBhMWMtNGFhOC1iMzI2LTY4MmM3YmYyNzZlOQ==) it will fail with that message.
That is the base64 value for the sample integration key and sample secret key provided in their documentation. If you decode that string with an online base64decoder it will result in 230546a7-9c55-40ad-8fbf-af205d5494ad:3087555e-0a1c-4aa8-b326-682c7bf276e9. This is the same sample integration key and secret in the documentation.
Check the Authorization header you are submitting by encoding your integration key and secret (integrationKey:secret) with an online base64 encoder. This will make sure the issue isn't with your base64 encoding of your integration key and secret. Once you have that value, make sure your Authorization header uses the word Basic before it (Basic base64stringFromOnlineEncoder).
Check that the code you are submitting in the body of the POST is not the sample code from their documentation. ey2dj3nd.AAAA39djasd3.dkn4449d21d is the sample code from their documentation. You may just be using that in your question as a placeholder, but if you are submitting any of those values it will return invalid_grant. Make sure that the body of your POST does not have any leading or trailing spaces.
Have the correct Content-Type set: application/x-www-form-urlencoded
Have the correct Authorization header set: Basic base64EncodedIntegrationKey:Secret
Have the correct body using the valid code received from the GET request to /oauth/auth, with no leading or trailing spaces, making sure you're not using the values from your question (a minimal sketch putting these together follows).
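Something like this passes those three checks (a C#/HttpClient sketch only; every value is a placeholder, and it must run inside an async method):
// Requires System.Net.Http, System.Net.Http.Headers, System.Text and System.Collections.Generic.
using (var client = new HttpClient())
{
    string integrationKey = "your-integration-key";   // placeholder
    string secret = "your-secret";                     // placeholder
    string code = "code-from-GET-/oauth/auth";         // fresh code; it expires quickly

    client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
        "Basic",
        Convert.ToBase64String(Encoding.UTF8.GetBytes(integrationKey + ":" + secret)));

    // FormUrlEncodedContent sets Content-Type: application/x-www-form-urlencoded for you.
    var body = new FormUrlEncodedContent(new Dictionary<string, string>
    {
        { "grant_type", "authorization_code" },
        { "code", code.Trim() }   // no leading or trailing spaces
    });

    HttpResponseMessage response = await client.PostAsync("https://account-d.docusign.com/oauth/token", body);
    string json = await response.Content.ReadAsStringAsync();
}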
If you are still having trouble and you are not building a user application but a service integration, you can use Legacy Authentication to get your OAuth2 token.
Alternative Method using Legacy Authentication for Service Integrations
This method does not use a grant code. You pass in the integration key, username and password into the X-DocuSign-Authentication header in JSON format.
Demo Server: demo.docusign.net
Production Server: www.docusign.net
API Version: v2
POST https://{server}/restapi/{apiVersion}/oauth2/token
Content-Type: application/x-www-form-urlencoded
X-DocuSign-Authentication: {"IntegratorKey":"your_integrator_key","Password":"docusign_account_password","Username":"docusign_account_username"}
grant_type=password&client_id=your_integrator_key&username=docusign_account_username&password=docusign_account_password&scope=api
If you are building a user application that requires the user enter their docusign credentials to generate the token, this alternative will not work for you.
For anyone who is facing this error, I'd like to point out this note in the documentation:
Note: The obtained authorization code is only viable for 2 minutes. If more than two minutes pass between obtaining the authorization code and attempting to exchange it for an access token, the operation will fail.
I was struggling with the same error until I spotted the note and sped up my typing to meet the 2 minutes.
Hope it helps someone else.
In my case the problem was a wrong value for the Content-Type header, namely "application/x-www-form-URIencoded" instead of the correct "application/x-www-form-urlencoded". Note, though, that in my case the problem was not a "typo" but excessive trust in the DocuSign documentation.
Indeed the wrong Content-Type is, at the time of writing, suggested directly into the documentation page where they describe the Authorization Code Grant workflow, see the image below for the relevant part.
Hopefully they will fix the documentation soon but for the time being be careful not to blindly copy & paste the code from their examples without thinking, as I initially did.
Does anyone have an idea what is wrong here? I am getting a BadRequest with the following:
{"error":"invalid_grant","error_description":"unauthorized_client"}
var client = new RestClient(ESIGNURL);
var request = new RestRequest("/oauth/token");
request.Method = Method.POST;
request.AddHeader("Content-Type", "application/x-www-form-urlencoded");
request.AddHeader("Authorization", "Basic " + Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(integrationkey+ ":" + secret)));
string body = "grant_type=authorization_code&code=" + code;
request.Parameters.Clear();
request.AddParameter("application/x-www-form-urlencoded", body, ParameterType.RequestBody);
var response = client.Execute(request);
I was getting this error as well. What I realized is that I was appending the state to the end of the code before passing it to the OAuth token endpoint.
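In other words, when the redirect comes back as something like https://your-app/callback?code=XXXX&state=YYYY, only the code value should go into the token request. A quick sketch of pulling it apart (HttpUtility lives in System.Web; the URL is a placeholder):
// Extract only the "code" parameter from the redirect URL; leave "state" out.
var redirect = new Uri("https://your-app/callback?code=XXXX&state=YYYY"); // placeholder
var query = HttpUtility.ParseQueryString(redirect.Query);
string code = query["code"];   // pass this value alone as the authorization code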
This snippet from DocuSign explains some other reasons for getting that error (screenshot: "Invalid-error explanation").
I just spent a day doing this (in NodeJS). I'll add a couple of things to the answers from before. First, I had to put:
"Content-Type": "application/x-www-form-urlencoded"
in the header. Otherwise it gave me the message:
{
"error": "invalid_grant",
"error_description": "unsupported_grant_type"
}
Second, the base64 encoding:
I used this in NodeJS and it worked
const integration_key = process.env.INTEGRATION_KEY;
const secret_key = process.env.SECRET_KEY;
const authinfo = integration_key + ":" + secret_key;
const buff2 = Buffer.from(authinfo, "utf8").toString("base64");
If you use "base64url" it dosen't work because it strips the == off of the end of the string. The = symbol is used as padding and apparently it's needed. You see a similar difference on this site https://www.base64encode.org/ when you toggle the url safe encoding option. If you don't have the padding on the end of your base64 encoded string (or if it's generally incorrect) you get this message:
{
"error": "invalid_grant",
"error_description": "unauthorized_client"
}
Finally, if you're using Postman (I'm using DocuSign's Postman Collection) remember to reset and save the codeFromUrl variable after you update it. Otherwise it doesn't update and you get the message:
{
"error": "invalid_grant",
"error_description": "expired_client_token"
}
This means the old URL code has expired and your new one didn't save.

Verifying JWT generated by Node in Laravel

I'm generating a token on our auth server (Node.js) in node-jsonwebtoken that will be passed to an API (PHP Laravel) and verified by tymondesigns/jwt-auth.
A token generated by tymondesigns/jwt-auth will be verified successfully by
its own verify function, node-jsonwebtoken and jwt.io.
A token generated by node-jsonwebtoken will be verified successfully by its own verify function, jwt.io, but not tymondesigns/jwt-auth.
On the Laravel server, I get the following error when I try to verify a token generated by node-jsonwebtoken:
TokenInvalidException in NamshiAdapter.php line 71:
Token Signature could not be verified.
The payloads look identical when I inspect them at jwt.io. I have even tried to generate the exact same token on the Node server by passing the same iat, sub, iss, exp, nbf and jti as a working token, but tymondesigns/jwt-auth still won't accept it.
Is there anything else that could be causing this but isn't visible in the decoded information? I'm also not 100% sure how jti works. Maybe something about that is preventing this from working?
node-jsonwebtoken (7.1.9), tymon/jwt-auth (0.5.9), namshi/jose (5.0.2)
The latest version of the namshi/jose library is 7.0.
There are also known bugs for all ESxxx algorithms.
If you cannot verify signatures using that library, you could try another one.
I developed a library that supports all the features described in the JWT-related RFCs, including encryption support.
The reason is, as Spomky mentioned as well, a bug in namshi/jose related to the iss claim. It is resolved in 7.0, which is used by tymon/jwt-auth 1.0.0-alpha.2. However, since there currently isn't a documented way to install 1.0.0-alpha.2, we probably have to wait for a stable release.
Until then, since the problem and the bug is related to the iss claim, removing the iss requirement from required_claims and generating the tokens without it solves the problem temporarily.
In my case I had a URL inside the payload. PHP escapes slashes by default when encoding to JSON, while Node.js doesn't. When the verification JWT gets generated in PHP (with those extra backslashes), the final hashes won't match, because the payload is simply different. The solution is to use the JSON_UNESCAPED_SLASHES flag when converting to JSON inside your JWT library. I was using https://github.com/namshi/jose, so I created a simple class like this one:
use Namshi\JOSE\SimpleJWS;

class SimpleJWSWithEncodeOptions extends SimpleJWS
{
    protected static $encodeOptions = 0;

    public static function setEncodeOptions($options)
    {
        self::$encodeOptions = $options;
    }

    /**
     * Generates the signed input for the current JWT.
     *
     * @return string
     */
    public function generateSigninInput()
    {
        $base64payload = $this->encoder->encode(json_encode($this->getPayload(), self::$encodeOptions));
        $base64header = $this->encoder->encode(json_encode($this->getHeader(), self::$encodeOptions));

        return sprintf("%s.%s", $base64header, $base64payload);
    }
}
Then it could be used like:
SimpleJWSWithEncodeOptions::setEncodeOptions(JSON_UNESCAPED_SLASHES);
$jws = SimpleJWSWithEncodeOptions::load($token);
$jws->verify($key);
$data = $jws->getPayload();
This problem was very specific to my payload content, but it could help someone.

Azure ARR Affinity response cookie

Once you consume and set the Azure ARRAffinity response cookie and send it back to Azure, are you supposed to get it back with the next response?
I just completed a bit of code that brings the Azure response cookie all the way to the browser, sets it as a session cookie, and then passes it back to Azure as a request cookie. To my surprise I am not getting this cookie back; I see it only the first time. However, I have a feeling this might be expected behaviour - I couldn't find anything in the documentation. When I change the cookie to some made-up value, the correct cookie is returned with the next response.
public class RestRequestWithAffinity : RestRequest
{
public RestRequestWithAffinity(string resource, IRequestWithAffinity request)
: base(resource)
{
if (!string.IsNullOrEmpty(request.AffinityValue))
{
AddCookie("ARRAffinity", request.AffinityValue);
}
}
}
var request = new RestRequestWithAffinity(url, feedRequest)
{
Method = Method.GET
};
// cookie doesn't come back when already in request
IRestResponse response = await _client.ExecuteTaskAsync(request);
Yes, you are supposed to get it back with the next response. You can take a look at the following link:
http://azure.microsoft.com/blog/2013/11/18/disabling-arrs-instance-affinity-in-windows-azure-web-sites/
If you create the cookie yourself, then choose a different name and everything will be fine! ARRAffinity is a name reserved by the IIS ARR module, and that's why you may see this misbehaviour.
Also pay attention that if you use the public Microsoft-provided domains (i.e. yourdomain.cloudapp.net or yourdomain.azurewebsites.net), you cannot set the cookie at the top domain level - i.e. you cannot set a cookie for the cloudapp.net domain or for the azurewebsites.net domain. You should always use the full domain, including any subdomains, to set the cookie - i.e. yourdomain.azurewebsites.net.
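For example, something like this on the web app side sets your own cookie under a different name, scoped to the full host (an ASP.NET sketch; the cookie name "MyAffinity", the domain and the affinityValue variable are all placeholders):
// Use your own cookie name (not ARRAffinity) and the full host as the domain.
var cookie = new HttpCookie("MyAffinity", affinityValue)   // affinityValue: your own value
{
    Domain = "yourdomain.azurewebsites.net",
    HttpOnly = true
};
Response.Cookies.Add(cookie);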
Take a read here for more information about that issue: https://publicsuffix.org/learn/

getHeaderField("WWW-Authenticate") giving improper value in J2ME

I'm building a client for an API that uses HTTP Digest access authentication. I have studied the RFC to learn how to set up the required headers, and this works well on my emulator. The problem, however, is that when I test on my phone (Nokia E5), getting the WWW-Authenticate header from the returned headers doesn't give the full value:
// c = (HttpConnection) Connector.open(url) and other declarations
String digest = c.getHeaderField("WWW-Authenticate");
System.out.println(digest); // gives only: Digest
//no realm, qop and others
Am I doing something wrong, or is it the phone? What are my other options?
I have faced this problem on some Nokias, and yes, it is a buggy HttpConnection implementation ... I suggest you try creating a new header on the server side containing a base64-encoded copy of the WWW-Authenticate header and using that instead, or you can do it the hard way and implement the whole HttpConnection handling from scratch ...
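If the API is under your control, the server-side half of that workaround could look roughly like this (a C#/ASP.NET sketch purely for illustration; the header name X-WWW-Authenticate-B64 is made up and your server stack may be different). On the handset you would then read that custom header with getHeaderField and base64-decode it yourself.
// Mirror the WWW-Authenticate challenge into a custom, base64-encoded header
// so clients with broken header parsing can still recover the full value.
// "X-WWW-Authenticate-B64" is an invented name for this sketch.
string challenge = "Digest realm=\"api\", qop=\"auth\", nonce=\"<server-generated>\"";
Response.AddHeader("WWW-Authenticate", challenge);
Response.AddHeader("X-WWW-Authenticate-B64",
    Convert.ToBase64String(Encoding.UTF8.GetBytes(challenge)));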
