Azure Active Directory OAuth 2.0 Authorization gives Bad Request

When requesting an authorization code, if the state URL parameter has the following value, https://login.microsoftonline.com/oauth2/authorize returns a Bad Request:
state=%3C%3CMULE_EVENT_ID%3D0-6cadfe22-e9ea-11e6-99ff-205120524153%3E%3E
If I remove the encoded << and >> characters, it works fine. Unfortunately, due to constraints on my side, I cannot remove those values.
The documentation says that "state" is a value included in the request that will also be returned in the token response, and that it can be a string of any content that you wish.

The double << >> appears to be semantically incorrect, although those characters are allowed by the ABNF for the state field in https://www.rfc-editor.org/rfc/rfc6749#appendix-A.5, which permits essentially all printable characters including space (VSCHAR, per https://www.rfc-editor.org/rfc/rfc5234).
However, when we look at the intended use of the state field, it is meant to carry a token from your application through the service and back, so that your application can validate its local state and defend against CSRF attacks.
In most cases, a short string should suffice, and you will probably do yourself a favor if you keep the string short, saving bytes on the wire and additional parsing overhead.
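As an aside, the intended pattern is straightforward. Here is a minimal sketch, assuming a Node-style server with a session store; the session shape, tenant, and query parameters are illustrative, not anything Azure AD mandates:

import { randomBytes } from "crypto";

// Generate a short, unguessable state, remember it in the session,
// and send the user off to the authorize endpoint.
function beginAuth(session: { oauthState?: string }): string {
  const state = randomBytes(16).toString("hex");
  session.oauthState = state;
  return "https://login.microsoftonline.com/common/oauth2/authorize" +
    "?response_type=code&client_id=YOUR_CLIENT_ID&state=" + state;
}

// On the redirect back, refuse to continue unless the returned state
// matches the one we stored; that is the CSRF check.
function handleCallback(session: { oauthState?: string }, returnedState: string): void {
  if (!session.oauthState || returnedState !== session.oauthState) {
    throw new Error("state mismatch: possible CSRF");
  }
  delete session.oauthState; // one-time use
}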
There is a good overview of using the OAuth2 endpoint here (admittedly for Bing Ads, but the principles and advice apply to this case):
https://msdn.microsoft.com/en-us/library/bing-ads-user-authentication-oauth-guide.aspx
If I can find the exact restrictions on the state field, I shall update my answer.

Well, the documentation seems a bit wrong then. I tested various state strings, and what makes it fail consistently is starting the state string with %3C. So a less-than sign is fine in some places in the string.
EDIT: There is something really odd going on.
This fails:
state=MUL%3CE_EVENT_ID%3D0-6cadfe22-e9ea-11e6-99ff-205120524153%3E%3E
But this works:
state=MULE%3C_EVENT_ID%3D0-6cadfe22-e9ea-11e6-99ff-205120524153%3E%3E
But this also fails:
state=MULE_%3CEVENT_ID%3D0-6cadfe22-e9ea-11e6-99ff-205120524153%3E%3E
My theory is that it rejects anything that looks like a valid HTML tag: that's why %3C followed by an underscore (%3C_) is allowed, but %3C followed by a letter (%3Ca...%3E) is not. You can replace a with any character a-z. So HTML elements are a no-no :)
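If you cannot change the MULE event ID itself, one workaround is to wrap the state value in an encoding whose alphabet can never resemble an HTML tag, such as base64url, and decode it when the redirect comes back. A sketch, assuming Node 15.7+ for the built-in base64url support:

// base64url output is limited to [A-Za-z0-9_-], so it cannot look
// like an HTML tag to an over-eager request filter.
function encodeState(raw: string): string {
  return Buffer.from(raw, "utf8").toString("base64url");
}

function decodeState(encoded: string): string {
  return Buffer.from(encoded, "base64url").toString("utf8");
}

// encodeState("<<MULE_EVENT_ID=0-6cadfe22-e9ea-11e6-99ff-205120524153>>")
// yields a plain token with no angle brackets for the authorize endpoint.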

Related

security for http headers

We'd like to double-check our http headers for security before we send them out. Obviously we can't allow '\r' or '\n' to appear, as that would allow content injection.
I see just two options here:
Truncate the value at the newline character.
Strip the invalid character from the header value.
Also, from reading RFC 2616, it seems that only ASCII-printable characters are valid in HTTP header values. Should I also follow the same policy for the other 154 possible invalid bytes?
Or, is there any authoritative prior art on this subject?
This attack is called "header splitting" or "response splitting".
The OWASP page on this attack points out that removing CRLF is not sufficient; \n alone can be just as dangerous.
To mount a successful exploit, the application must allow input that contains CR (carriage return, also given by 0x0D or \r) and LF (line feed, also given by 0x0A or \n) characters into the header.
(I do not know why OWASP (and other pages) list \n as a vulnerability or whether that only applies to query fragments pre-decode.)
Serving a 500 on any attempt to set a header that contains a character not allowed by the spec in a header key or value is perfectly reasonable, and will allow you to identify offensive requests in your logs. Failing fast when you know your filters are failing is a fine policy.
If the language you're working in allows it, you could wrap your HTTP response object in one that raises an exception when a bad header is seen, or you could change the response object to enter an invalid state, set the response code to 500, and close the response body stream.
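As a sketch of that idea (the wrapper shape here is my own, not any particular framework's API):

// Reject any header name or value containing characters outside the
// printable ASCII range. CR and LF are the dangerous ones, but failing
// on everything out of spec surfaces bad inputs in your logs early.
const SAFE_HEADER = /^[\x20-\x7e]*$/;

class SafeResponse {
  constructor(private inner: { setHeader(name: string, value: string): void }) {}

  setHeader(name: string, value: string): void {
    if (!SAFE_HEADER.test(name) || !SAFE_HEADER.test(value)) {
      // Fail fast rather than emit a splittable header.
      throw new Error("invalid character in header: " + name);
    }
    this.inner.setHeader(name, value);
  }
}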
EDIT:
Should I strip non-ASCII inputs?
I prefer to do that kind of normalization in the layer that receives trusted input unless, as in the case of entity-escaping to convert plain-text to HTML escaping, there is a clear type conversion. If it's a type conversion, I do it when the output type is required, but if it is not a type-conversion, I do it as early as possible so that all consumers of data of that type see a consistent value. I find this approach makes debugging and documentation easier since layers below input handling never have to worry about unnormalized inputs.
When implementing the HTTP response wrapper, I would make it fail on all non-ASCII characters (including non-ASCII newlines like U+85, U+2028, U+2029), and then make sure my application tests include a test for each third-party URL input, to make sure that any Location header is properly %-encoded before it reaches setHeader, and similarly for other inputs that might reach the response headers.
If your cookies include things like a user-id or email address, I would make sure the dummy accounts for tests include a dummy account with a user-id or email address containing a non-ASCII letter.
The simple removal of new lines \n will prevent HTTP Response Splitting. Even though a CRLF is used as a delimiter in the RFC, the new line alone is recognized by all browsers.
You still have to worry about user content within a Set-Cookie or Content-Type header. Attributes within these headers are delimited using a ;, so it may be possible for an attacker to change the content type to UTF-7 and bypass your XSS protection for IE users (and only IE users). It may also be possible for an attacker to create a new cookie, which introduces the possibility of session fixation.
Non-ASCII characters are allowed in header fields, although the spec doesn't really clearly say what they mean; so it's up to sender and recipient to agree on their semantics.
What made you think otherwise?

Is my site safe from XSS if I replace all '<' with '&lt;'?

I'm wondering what the bare minimum to make a site safe from XSS is.
If I simply replace < with &lt; in all user-submitted content, will my site be safe from XSS?
Depends hugely on context.
Also, only encoding the less-than sign isn't that flash an idea. You should encode all characters which have special meaning and could be used for XSS...
<
>
"
'
&
For a trivial example of where encoding the less-than sign alone won't matter, consider something like this...
Welcome to Dodgy Site. Please link to your homepage.
Malicious user enters...
http://www.example.com" onclick="window.location = 'http://nasty.com'; return false;
Which obviously becomes...
<a href="http://www.example.com" onclick="window.location = 'http://nasty.com'; return false;">View user's website</a>
Had you encoded double quotes, that attack would not be valid.
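A minimal escaping routine covering all five characters might look like this (a sketch; in practice, prefer your framework's built-in encoder):

// Escape every character with special meaning in HTML text and
// attribute contexts. The ampersand must be replaced first.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}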
There are also cases where the encoding of the page counts. I.e., if your page character set is not correct or does not match in all applicable spots, then there are potential vulnerabilities. See http://openmya.hacker.jp/hasegawa/security/utf7cs.html for details.
No. You have to escape all user input, regardless of what it contains.
Depending on the framework you are using, many now have an input validation module. A key piece I tell software students when I do lectures is USE THE INPUT VALIDATION MODULES WHICH ALREADY EXIST!
Reinventing the wheel is often less effective than using the tried-and-tested modules which already exist. .NET has most of what you might need built in: it's really easy to whitelist (where you know the only input allowed) or blacklist (a bit less effective, as known 'bad' things always change, but still valuable).
If you escape all user input you should be safe.
That means EVERYTHING, EVERYWHERE it shows up. Even a username on a profile.

Eliminate < > as accepted characters in a wordpress password?

Is it possible to eliminate these characters from a WordPress password? I have heard that allowing them can open the door to scripts that hackers can use to get in. Thank you.
Simple answer:
Your friend has misinformed you. Restricting these characters in a WordPress password is not something you need to worry about. But, as they say, "there is no smoke without fire".
More background information:
In your own web-application code, you should always be especially careful whenever you take any data from a user (whether from a form, a cookie, or a URL) or from another external computer system or application. The reason is that you want to avoid those values being interpreted as code rather than just used as data.
The issue that has led your friend to worry about the <> characters is called Cross-Site Scripting and is a kind of attack that malicious users can perform to "inject" html or javascript content into your pages. If you accept information from the user that contains these html mark-up characters and re-display it on the same, or another page, then you can cause their html or javascript content to become part of your page. Any javascript content will run with access to the same data as the user that views the page.
Whenever outside data is read, it should always be:
validated: i.e. checked that it looks like the kind of thing you are expecting, and rejected if it does not;
and encoded: i.e. when the data is displayed back to the user or sent to another part of the system, it is converted to be safe. The right conversion always depends on how and where the data is being used.
Please note that the angle-bracket characters are not the only things to worry about. Note also that it is well proven that disallowing certain characters (also called "blacklisting") is never the best way to secure code; it is always safer to state what is allowed (also called "whitelisting").
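As a sketch of the whitelisting idea (the field and pattern are purely illustrative):

// State exactly what a display name may contain and reject everything
// else, rather than trying to enumerate every dangerous character.
function isValidDisplayName(input: string): boolean {
  return /^[A-Za-z0-9 _.'-]{1,50}$/.test(input);
}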

Always escape output in view? Why?

The Zend Framework Manual says the following:
60.3.1. Escaping Output
One of the most important tasks to perform in a view script is to make sure that output is escaped properly; among other things, this helps to avoid cross-site scripting attacks. Unless you are using a function, method, or helper that does escaping on its own, you should always escape variables when you output them.
Why 'always'? Why do I have to escape variables that have not been created or altered by user input?
Users aren't the only source of dodgy strings in output. Consider, for example, the apparently safe string "Romeo & Juliet" coming out of a database. No cross-site scripting there, you say? True enough. Stick it in a web page, however, and the raw ampersand could cause some interesting problems with validation, parsing, etc.
Output escaping isn't just to guard against malicious or accidentally borked input, it ensures that the output is thoroughly sanitised and treated as having no special meaning in the surrounding output format, whether that's HTML, XML, JSON or whatever.
As a rule, I would escape anything coming from user input, a data source or even calculations. You want the output to be predictable, escaping ensures that it is. If the value when converted to a string contains characters that break your desired markup, things would get messy.
If you're using a view, $this->escape($variableToEscape) should suffice.
Another thing is that, many times, things that are hard-coded one day become user-generated (or at least database-generated) the next. It's just better practice to manage the output of variables in your code.
You could look at it this way: you should always HTML-encode variables, unless you know that they've already been encoded.
Say you have a variable that contains:
foo <b>bar</b>
If you know that it contains HTML tags, and you're okay with that, then you can say that this variable has already been properly HTML-encoded. You could even assign it to a different variable type to make the compiler aware of the distinction (Joel's idea), and have your output functions handle these types without escaping them.
Of course, this means that
foo & <b>bar</b>
is an incorrect value; you would need to ensure that it's:
foo &amp; <b>bar</b>
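A sketch of that typed approach in TypeScript (the names here are mine, not a standard API):

// A "branded" string type marking values that are already safe HTML.
type SafeHtml = string & { readonly __brand: "SafeHtml" };

function escapeToHtml(plain: string): SafeHtml {
  return plain
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;") as SafeHtml;
}

// Output sinks accept only SafeHtml, so passing a raw string is a
// compile-time error instead of a latent XSS bug.
function render(out: SafeHtml): void {
  process.stdout.write(out);
}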
I think the best practice here is to always escape output unless you intend to output a raw HTML fragment. Even "safe" data can contain characters which need to be escaped. For example, consider the e-mail address '"Bob" <bob@bob.com>'. If you don't escape it, the browser will think <bob@bob.com> is a tag.
Obviously you want to escape anything that comes from user data, to prevent XSS attacks. But since you're often changing what you republish and what you don't, you probably can't remember all the places that need to be changed. So even if you get all the nuances correct now, and your site is secure from XSS today, you may at some point add user input to some variable you're not escaping (or, more likely, some variable to some variable to some variable which you're not escaping), which would open you up to XSS attacks.
Escaping by default would prevent that attack.
The other reason is more conceptual: with MVC, all of your markup (which is, by definition, the "view") should be in your view templates. So if your controller is determining the view, and the view contains all the markup, why not escape your variables?
Well, if you have hard-coded values (let's say language translations, which you read from a database or XML file), you don't have to escape them.
But if there is a value that has been created or modified by a user, even, say, in an admin panel, you have to escape it, because you don't know what kind of data a user (or, to be more radical, even an administrator) will send.

Will HTML Encoding prevent all kinds of XSS attacks?

I am not concerned about other kinds of attacks. Just want to know whether HTML Encode can prevent all kinds of XSS attacks.
Is there some way to do an XSS attack even if HTML Encode is used?
No.
Putting aside the subject of allowing some tags (not really the point of the question), HtmlEncode simply does NOT cover all XSS attacks.
For instance, consider server-generated client-side JavaScript: if the server dynamically outputs HtmlEncoded values directly into client-side JavaScript, HtmlEncode will not stop the injected script from executing.
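For example (pseudocode in the same style as the snippet below; note that classic HtmlEncode implementations leave the single quote untouched):

<script>var name = '<%= HtmlEncode(somevar) %>';</script>

A somevar of '; alert(document.cookie); // closes the string literal and runs the injected code, because none of those characters are HTML-encoded.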
Next, consider the following pseudocode:
<input value=<%= HtmlEncode(somevar) %> id=textbox>
Now, in case it's not immediately obvious, if somevar (sent by the user, of course) is set, for example, to
a onclick=alert(document.cookie)
the resulting output is
<input value=a onclick=alert(document.cookie) id=textbox>
which would clearly work. Obviously, this can be (almost) any other script... and HtmlEncode would not help much.
There are a few additional vectors to be considered... including the third flavor of XSS, called DOM-based XSS (wherein the malicious script is generated dynamically on the client, e.g. based on # values).
Also don't forget about UTF-7 type attacks - where the attack looks like
+ADw-script+AD4-alert(document.cookie)+ADw-/script+AD4-
Nothing much to encode there...
The solution, of course (in addition to proper, restrictive whitelist input validation), is to perform context-sensitive encoding: HtmlEncoding is great IF your output context IS HTML; otherwise, you may need JavaScriptEncoding, or VBScriptEncoding, or AttributeValueEncoding, or... etc.
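To make that concrete, here is a sketch of what context-sensitive encoders look like (the names and exact escape sets are illustrative, not a specific library's API):

const HTML_MAP: Record<string, string> = {
  "&": "&amp;", "<": "&lt;", ">": "&gt;", '"': "&quot;", "'": "&#39;",
};

// For HTML text content between tags.
function encodeForHtmlText(s: string): string {
  return s.replace(/[&<>]/g, c => HTML_MAP[c]);
}

// For attribute values: also encode both quote characters, and always
// emit the attribute value inside quotes in the template.
function encodeForHtmlAttribute(s: string): string {
  return s.replace(/[&<>"']/g, c => HTML_MAP[c]);
}

// For JavaScript string literals: hex-escape anything that could close
// the literal or the enclosing <script> block.
function encodeForJsString(s: string): string {
  return s.replace(/[\\'"<>\u2028\u2029]/g,
    c => "\\u" + c.charCodeAt(0).toString(16).padStart(4, "0"));
}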
If you're using MS ASP.NET, you can use their Anti-XSS Library, which provides all of the necessary context-encoding methods.
Note that encoding should not be restricted to user input; it applies equally to stored values from the database, text files, etc.
Oh, and don't forget to explicitly set the charset, both in the HTTP header AND the META tag, otherwise you'll still have UTF-7 vulnerabilities...
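Concretely, that means declaring the charset in both places (values illustrative; the shorter HTML5 <meta charset="utf-8"> form also works):

Content-Type: text/html; charset=utf-8
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">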
For some more information, and a pretty definitive (constantly updated) list, check out RSnake's XSS Cheat Sheet: http://ha.ckers.org/xss.html
If you systematically encode all user input before displaying it then yes, you are mostly safe, but you are still not 100% safe.
(See @Avid's post for more details.)
In addition, problems arise when you need to let some tags go unencoded so that users can post images, bold text, or any feature that requires the input to be processed as (or converted to) un-encoded markup.
You will have to set up a decision-making system to decide which tags are allowed and which are not, and it is always possible that someone will figure out a way to let a disallowed tag pass through.
It helps if you follow Joel's advice of Making Wrong Code Look Wrong, or if your language helps you by warning or refusing to compile when you are outputting unprocessed user data (static typing).
If you encode everything, it will (depending on your platform and the implementation of HtmlEncode). But any useful web application is so complex that it's easy to forget to check every part of it. Or maybe a third-party component isn't safe. Or maybe some code path that you thought did encoding didn't, so you forgot it somewhere else.
So you might want to check things on the input side too. And you might want to check stuff you read from the database.
As mentioned by everyone else, you're safe as long as you encode all user input before displaying it. This includes all request parameters and data retrieved from the database that can be changed by user input.
As mentioned by Pat, you'll sometimes want to display some tags, just not all of them. One common way to do this is to use a markup language like Textile, Markdown, or BBCode. However, even markup languages can be vulnerable to XSS. For example, in Markdown:
[foo](javascript:alert\('bar'\);)
If you do decide to let "safe" tags through I would recommend finding some existing library to parse & sanitize your code before output. There are a lot of XSS vectors out there that you would have to detect before your sanitizer is fairly safe.
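For example, DOMPurify is one widely used sanitizer; a sketch of its documented whitelist options, run in a browser context:

import DOMPurify from "dompurify";

// Keep only a known-safe set of tags and attributes; DOMPurify also
// drops javascript: URLs in href attributes by default.
const userHtml = '<a href="javascript:alert(1)">foo</a><p onclick="x()">hi</p>';
const clean = DOMPurify.sanitize(userHtml, {
  ALLOWED_TAGS: ["b", "i", "em", "strong", "a", "p"],
  ALLOWED_ATTR: ["href"],
});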
I second metavida's advice to find a third-party library to handle output filtering. Neutralizing HTML characters is a good approach to stopping XSS attacks. However, the code you use to transform metacharacters can be vulnerable to evasion attacks; for instance, if it doesn't properly handle Unicode and internationalization.
A classic simple mistake homebrew output filters make is to catch only < and >, but miss things like ", which can break user-controlled output out into the attribute space of an HTML tag, where JavaScript can be attached to the DOM.
No, just encoding common HTML tokens DOES NOT completely protect your site from XSS attacks. See, for example, this XSS vulnerability found in google.com:
http://www.securiteam.com/securitynews/6Z00L0AEUE.html
The important thing about this type of vulnerability is that the attacker is able to encode his XSS payload using UTF-7, and if you haven't specified a different character encoding on your page, a user's browser could interpret the UTF-7 payload and execute the attack script.
One other thing you need to check is where your input comes from. You can use the referrer string (most of the time) to check that the request came from your own page, but putting a hidden random number or something into your form and then checking it (against a session variable, perhaps) also helps to ensure that the input is coming from your own site and not some phishing site.
I'd like to suggest HTML Purifier (http://htmlpurifier.org/). It doesn't just filter the HTML; it basically tokenizes and re-compiles it. It is truly industrial-strength.
It has the additional benefit of allowing you to ensure valid HTML/XHTML output.
Also, seconding Textile: it's a great tool and I use it all the time, but I'd run its output through HTML Purifier too.
I don't think you understood what I meant re: tokens. HTML Purifier doesn't just 'filter'; it actually reconstructs the HTML. http://htmlpurifier.org/comparison.html
I don't believe so. HtmlEncode converts all functional characters (characters which could be interpreted by the browser as code) into entity references, which cannot be parsed by the browser as code and thus cannot be executed.
&lt;script/&gt;
There is no way that the above can be executed by the browser.
*Unless there is a bug in the browser, of course.*
// Strip anything that looks like an HTML tag (including an unterminated trailing one).
myString.replace(/<[^>]*>?/gm, '');
I use it successfully. See: Strip HTML from Text JavaScript
