Creating A Faux Domain - .htaccess

I'm using a web platform for my real estate business that "due to technical reasons" cannot offer subdomains. Instead, if an individual in my company wants credit for the leads that come in from their own marketing efforts, they are required to manually add a URL parameter (i.e. ?agent=xxxxx) to every link they share. This is clearly absurd.
I could write a Chrome plugin or bookmarklet that adds the agent= parameter for them, but this isn't a universal solution.
Is it possible to host a "faux" domain which would function like its own website, but pull all its resources from my main website (while adding in the URL parameter that triggers the tracking cookie)?
Hope this makes sense.
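For illustration, the effect described above could be approximated with a small reverse proxy in front of the main site rather than with .htaccess alone. The sketch below is a minimal Node.js/TypeScript illustration, not a production setup; the upstream hostname and the agent value are placeholder assumptions.

```typescript
// faux-domain-proxy.ts — a minimal sketch, NOT production code.
// Assumption: the real site is https://www.example-realestate.com and the
// tracking parameter is ?agent=12345; both are placeholders.
import * as http from "http";
import * as https from "https";

const UPSTREAM_HOST = "www.example-realestate.com"; // hypothetical main site
const AGENT_ID = "12345";                           // hypothetical agent code

const server = http.createServer((req, res) => {
  // Append the tracking parameter unless the request already carries one.
  const url = new URL(req.url ?? "/", `https://${UPSTREAM_HOST}`);
  if (!url.searchParams.has("agent")) {
    url.searchParams.set("agent", AGENT_ID);
  }

  // Forward the request to the main site, preserving method and headers
  // (Host is rewritten so the upstream virtual host matches).
  const upstream = https.request(
    {
      hostname: UPSTREAM_HOST,
      path: url.pathname + url.search,
      method: req.method,
      headers: { ...req.headers, host: UPSTREAM_HOST },
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res); // stream the body back unchanged
    }
  );
  req.pipe(upstream);
});

server.listen(8080); // the "faux" domain would point at this listener
```

One known wrinkle with this approach, and presumably part of why the question asks about .htaccess in the first place, is that absolute links in the returned HTML still point at the main domain and would bypass the proxy after the first click.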

Related

How to check if a URL is malicious within the code?

I would like to take a URL as input from a user and serve that URL to all other users in some context. I will show this URL to users within my website as a link with the label "More Details". I am aware that when you hover the cursor over the link it shows the URL, and more or less it can be understood whether it is real or malicious, but 99.9% of people won't think about such a thing and will just click it right away.
So my question is: can I detect whether the submitted link is real or malicious, and if so, how? And if not, what can I do to at least improve security to some extent? I am using React on the frontend, Node.js on the backend, and multiple AWS resources for data and API management.
As far as I am aware, there's nothing specific that can be done on the AWS side to achieve this since this is specific to your backend implementation.
I am no expert on security, but maybe use the VirusTotal API to check whether a given URL is malicious? There are limits on the allowed number of requests. Also, as stated:
The public API is a free service, available for any website or application that is free to consumers. The API must not be used in commercial products or services
If you want to commercialize your service, you may get banned from using VirusTotal if you do not go with the paid route.
Maybe there are alternative solutions that are free for commercial use. But using such a service is your only route if you want to delegate URL security checks to a third-party service since AWS does not offer anything similar.
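As a rough sketch of delegating the check to VirusTotal from a Node.js backend (the v3 endpoint, header name, and response fields shown here are assumptions based on VirusTotal's public API documentation and should be verified against the current docs):

```typescript
// check-url.ts — a rough sketch of delegating a URL check to VirusTotal.
// Assumption: the v3 endpoint GET /api/v3/urls/{id} and the
// last_analysis_stats response field; verify against the current docs.
const VT_API_KEY = process.env.VT_API_KEY ?? ""; // your (rate-limited) API key

export async function looksMalicious(url: string): Promise<boolean> {
  // VirusTotal identifies a URL by its unpadded base64url encoding.
  const id = Buffer.from(url).toString("base64url").replace(/=+$/, "");

  const res = await fetch(`https://www.virustotal.com/api/v3/urls/${id}`, {
    headers: { "x-apikey": VT_API_KEY },
  });
  if (!res.ok) {
    // 404 just means VirusTotal has never seen the URL; treat as "unknown".
    return false;
  }

  const body: any = await res.json();
  const stats = body?.data?.attributes?.last_analysis_stats ?? {};
  // Flag the link if any engine marked it malicious or suspicious.
  return (stats.malicious ?? 0) + (stats.suspicious ?? 0) > 0;
}
```

The backend could run a check like this once when the user submits the URL and again periodically, since a link that was clean at submission time can turn malicious later.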

Web security: What prevents a hacker from spoofing a bank's site and grabbing data before form submission? (with example)

1) Assume http://chaseonline.chase.com is a real URL with a web server sitting behind it, i.e., this URL resolves to an IP address, or probably several, so that there can be many identical servers allowing load balancing of client requests.
2) I guess that Chase probably buys up URLs that are "close" in the URL namespace (how do you define "close"? Lexicographically? I think that is not trivial, because it depends on an ordering one defines on top of URL strings ... never mind this comment).
3) Suppose that one of the URLs (http://mychaseonline.chase.com, http://chaseonline.chase.ua, http://chaseonline.chase.ru, etc.) is "free" (not bought). I buy one of these free URLs and write my phishing/spoofing server that sits behind my URL and renders the same screen as https://chaseonline.chase.com/.
4) I work to get my URL indexed (hopefully) at least as high as or higher than the real one (http://chaseonline.chase.com). Chances are (hopefully) most bank clients/users won't notice my bogus URL, and I start collecting credentials. I then use my server as a client in relation to the real bank server (http://chaseonline.chase.com) and use my collection of <user id, password> tuples to log in as each user and create mischief.
Is this a cross-site request forgery? How would one prevent this from occurring?
What I'm hearing in your description is a phishing attack, albeit with slightly more complexity. Let's address some of these points:
2) It's really hard to buy up all the URLs, especially when you take into consideration variations such as Unicode lookalikes, or even just simple kerning tricks. For example, an r followed by an n looks a lot like an m when you glance quickly. Welcome to chаse.rnobile.com! So with that said, I'd guess that most companies just buy the obvious domains.
4) Getting your URL indexed higher than the real one is, I'll posit, impossible. Google et al. are likely sophisticated enough to keep that type of thing from happening. One approach to getting above Chase in the SERPs would be to buy AdWords for something like "Bank Online With Chase," but there again, I'd assume that the search engines have a decent filtering/fraud-prevention mechanism to catch this type of thing.
Mostly you'd be better off keeping your server from being indexed, since being indexed would simply attract attention. Because this type of thing will get shut down, I presume most phishing attacks go for large numbers of small 'fish' (larger ROI) or small numbers of large 'fish' (think targeted phishing attacks on execs, bank employees, etc.).
I think you offer up an interesting idea in point 4: there's nothing to stop a man-in-the-middle attack from occurring wherein your site delegates out to the target site for each request. The difficulty in that approach is that you'd spend a ton of resources on creating a replica website. When you think of most hacking as a business trying to maximize its ROI, a lot of the "this is what I'd do if I were a hacker" ideas go away.
If I were to do this type of thing, I'd provide a login facade, have the user provide me their credentials, and then redirect to the main site on POST to my server. This way I get your credentials and you think there's just been an error on the form. I'm then free to pull all the information off of your banking site at my leisure.
There's nothing cross-site about this. It's a simple forgery.
It fails for a number of reasons: lack of transport security (your site isn't HTTPS), malware-protection vendors explicitly check for this kind of abuse, Google won't rank your forgery above highly popular sites, and finally, banks with a real sense of security use two-factor authentication. The login token you'd get for my bank account is literally valid for only a few seconds and can't be used for anything but logging in.
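To make that last point concrete, here is a rough sketch of the kind of short-lived, single-use login token the answer alludes to; the 30-second window and in-memory storage are invented purely for illustration.

```typescript
// login-token.ts — illustrative only; real banks use hardened, audited flows.
import { randomBytes } from "crypto";

const TOKEN_TTL_MS = 30_000; // assumed "few seconds" window, e.g. 30s
const issued = new Map<string, number>(); // token -> expiry timestamp

// Issued only after the user passes the second authentication factor.
export function issueLoginToken(): string {
  const token = randomBytes(32).toString("hex");
  issued.set(token, Date.now() + TOKEN_TTL_MS);
  return token;
}

// A phished token is useless once it has expired or been redeemed,
// and it grants nothing beyond completing this one login.
export function redeemLoginToken(token: string): boolean {
  const expiry = issued.get(token);
  issued.delete(token); // single use: redeemed or not, it is gone
  return expiry !== undefined && Date.now() <= expiry;
}
```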

Cross-site scripting vulnerability because of a CNAME entry

One of our advertising networks for a site I administer and develop is requesting the following:
We have been working on increasing performance on XXXX.com and our team feels that if we can set up the following CNAME on that domain it will help increase rates:
srv.XXXX.com d2xf3n3fltc6dl.XXXX.net
Could you create this record with your domain registrar? The reason we need you to create this CNAME is to preserve domain transparency within our RTB. Once we get this set up I will make some modifications in your account that should have some great results.
Would this not open up our site to cross-site scripting vulnerabilities? Wouldn't malicious code be able to masquerade as coming from our site to bypass same-origin policy protection in browsers? I questioned him on this and this was his response:
First off let me address the benefits. The reason we would like you to create this CNAME is to increase domain transparency within our RTB. Many times when ads are fired, JS is used to scrape the URL and pass it to the buyer. We have found this method to be inefficient because sometimes the domain information does not reach the marketplace. This causes an impression (or hit) to show up as “uncategorized” rather than as “XXXX.com”, and this results in lower rates because buyers pay up to 80% less for uncategorized inventory. By creating the CNAME we are ensuring that your domain shows up 100% of the time, and we usually see CPM and revenue increases of 15-40% as a result.
I am sure you are asking yourself why other ad networks don’t do this. The reason is that this is not a very scalable solution, because as you can see, we have to work with each publisher to get this setup. Unlike big box providers like Adsense and Lijit, OURCOMPANY is focused on maximizing revenue for a smaller amount of quality publishers, rather than just getting our tags live on as many sites as possible. We take the time and effort to offer these kinds of solutions to maximize revenue for all parties.
In terms of security risks, they are minimal to none. You will simply be pointing a subdomain of XXXX.com to our ad creative server. We can’t use this to run scripts on your site, or access your site in any way.
Adding the CNAME is entirely up to you. We will still work our hardest to get the best rates possible, with or without that. We have just seen great results with this for other publishers, so I thought that I would reach out and see if it was something you were interested in.
This whole situation raised red flags with me but is really outside of my knowledge of security. Can anyone offer any insight to this please?
This would enable cookies set at the XXXX.com level to be read by each site, but it would not allow other Same Origin Policy actions unless both sites opt in. Both sites would have to set document.domain = 'XXXX.com'; in client-side script to allow access to both domains.
From MDN:
Mozilla distinguishes a document.domain property that has never been set from one explicitly set to the same domain as the document's URL, even though the property returns the same value in both cases. One document is allowed to access another if they have both set document.domain to the same value, indicating their intent to cooperate, or neither has set document.domain and the domains in the URLs are the same (implementation). Were it not for this special policy, every site would be subject to XSS from its subdomains (for example, https://bugzilla.mozilla.org could be attacked by bug attachments on https://bug*.bugzilla.mozilla.org).
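To illustrate the opt-in the MDN passage describes, here is a hypothetical browser-side sketch for a page on www.XXXX.com that embeds an iframe from srv.XXXX.com (the domain names are the question's placeholders):

```typescript
// Runs in the page at https://www.XXXX.com that embeds an iframe from
// https://srv.XXXX.com. Hypothetical illustration of the MDN rule above.
const frame = document.querySelector("iframe")!;

try {
  // Fails: the parent has NOT set document.domain, so even if the ad
  // subdomain sets it, the two origins remain distinct.
  void (frame.contentWindow as Window).document.title;
} catch {
  console.log("Blocked by the same-origin policy, as expected");
}

// Only if this page opts in...
document.domain = "XXXX.com";
// ...AND the framed document at srv.XXXX.com also runs
//      document.domain = "XXXX.com";
// do the two documents gain script access to each other.
```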

How to prevent my web proxy site from being reported as a phishing site

I have a web proxy site to help Chinese users; they can view a webpage blocked by the government via a modified URL, like http://www.google.com.mywebproxy.domain/.
But some websites have been reported as phishing sites by Google and others. I need to know the rules by which search engines detect a phishing site, so that I can block pages that I should never proxy.
For example, how can I detect that a website has a form for entering credit card information?
How can I detect that a website has a form for entering credit card information?
I know it's somewhat simplistic, but what about a simple rule that looks for credit-card-related terms on the page or in the form?
Think about it: for a phishing attack to work, the attacker needs to convince the visitor, in one way or another, to provide his or her credit card info. So you can make a list of payment-related terms like "credit card", "payment", "CC", "billing" and so on, and use them to determine page intent.
Having said that:
a: images/Flash will provide a loophole
b: you'll need to cover different translations of all terms
c: as in your case, some "legit" sites will be blocked
This of course does not describe the workings of Google's (or others') filtering algorithms, which use a more complex set of rules based on multiple verification vectors and existing data pools for cross-reference.
The exact mix of those is a closely guarded secret, and I agree with Rob: contacting someone for a manual check is probably the best solution.
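As a rough sketch of the keyword rule suggested above, with the caveats already noted (images/Flash, other languages, false positives on legitimate shops), something like the following could run on each page before the proxy serves it; the terms, patterns, and threshold are made up for illustration:

```typescript
// cc-form-heuristic.ts — naive heuristic with the loopholes noted above.
const PAYMENT_TERMS = [
  "credit card", "card number", "cvv", "cvc", "billing", "payment", "支付",
];

export function looksLikePaymentForm(html: string): boolean {
  const text = html.toLowerCase();

  // Signal 1: payment-related vocabulary anywhere on the page.
  const hasTerms = PAYMENT_TERMS.some((t) => text.includes(t));

  // Signal 2: form fields that browsers autofill with card data.
  const hasCardField =
    /autocomplete\s*=\s*["']?cc-(number|exp|csc)/i.test(html) ||
    /name\s*=\s*["']?(card|pan|cc)[-_]?(number|no)?/i.test(html);

  return hasTerms && hasCardField;
}

// Example use: refuse to proxy pages that match the heuristic.
// if (looksLikePaymentForm(fetchedHtml)) { blockPage(); }
```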

Why do links in gmail redirect?

I've noticed that some email services (like Gmail or my school's webmail) will redirect links (or used to) in the email body. So when I put "www.google.com" in the body of my email and check that email in Gmail or something, the link says something like "gmail.com/redirect?www.google.com".
This was very confusing for me and the people I emailed (like my parents, who are not familiar with computers). I always clicked on the link anyway, but why is this service used? (I'm also worried that maybe my information was being sent somewhere... Do I have anything to worry about? Is something being stored before the redirect?)
Sorry if this is unwarranted paranoia. I am just curious about why some things work the way they do.
Wikipedia has a good article on URL redirection. From the article:
Logging outgoing links

The access logs of most web servers keep detailed information about where visitors came from and how they browsed the hosted site. They do not, however, log which links visitors left by. This is because the visitor's browser has no need to communicate with the original server when the visitor clicks on an outgoing link. This information can be captured in several ways. One way involves URL redirection. Instead of sending the visitor straight to the other site, links on the site can direct to a URL on the original website's domain that automatically redirects to the real target. This technique bears the downside of the delay caused by the additional request to the original website's server. As this added request will leave a trace in the server log, revealing exactly which link was followed, it can also be a privacy issue. The same technique is also used by some corporate websites to implement a statement that the subsequent content is at another site, and therefore not necessarily affiliated with the corporation. In such scenarios, displaying the warning causes an additional delay.
So, yes, Google (and Facebook and Twitter do this too) is logging where these links are taking you. This is important for a variety of reasons: it lets them know how their service is being used, shows trends in data, allows links to be monetized, etc.
As for your concerns, my personal opinion is that if you're on the internet, you're being tracked. All the time. If this concerns you, I would recommend communicating differently. However, for the most part, I think it's not worth worrying about.
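For illustration, a logging redirect endpoint of the kind the Wikipedia excerpt describes can be sketched in a few lines of Node.js; the path, query parameter, and log format here are invented and are not how Gmail actually implements it:

```typescript
// redirect-logger.ts — illustrative Node.js endpoint, not Gmail's real one.
import * as http from "http";

http
  .createServer((req, res) => {
    const url = new URL(req.url ?? "/", "http://localhost");
    if (url.pathname !== "/redirect") {
      res.writeHead(404).end();
      return;
    }

    // The outgoing destination arrives as a query parameter.
    const target = url.searchParams.get("to");
    if (!target) {
      res.writeHead(400).end("missing ?to=");
      return;
    }

    // This is the whole point: the click leaves a trace in our own log
    // before the browser ever contacts the destination site.
    console.log(`${new Date().toISOString()} outgoing click -> ${target}`);

    res.writeHead(302, { Location: target }).end();
  })
  .listen(8080);
// A mail client would then rewrite links as
//   http://localhost:8080/redirect?to=https%3A%2F%2Fwww.google.com
```

Real services also validate or sign the target parameter so the endpoint cannot be abused as an open redirect.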
This redirection is a dereferrer to avoid disclosure of the URL in the HTTP Referer field to third-party sites, as that URL can contain sensitive data like a session ID.
