How to block rawgit.com from accessing my website server - .htaccess

I think my website has been injected with a script that uses rawgit.com. Recently my website has been running very slowly, with the browser's status bar showing "Transferring data from rawgit.com..." or "Read rawgit.com...". I have never used RawGit to serve raw files directly from GitHub, yet I can see they are using the https://cdn.rawgit.com/ domain to serve files.
I would like my website to block everything related to this domain. How can I achieve that?

As I said in the comments, you are going about this problem in the wrong way. If your site already includes sources you do not recognise or allow, you are already compromised and your main focus should be on figuring out how you got compromised, and how much access an attacker may have gotten. Based on how much access they have gotten, you may need to scrap everything and restore a backup.
The safest thing to do is to bring the server offline while you investigate. Make sure that you still have access to the systems you need to access (e.g. SSH), but block every other remote IP. Just "blocking rawgit.com" blocks one of the symptoms you can see and allows an attacker to change their attack while you are fumbling with that.
I do not recommend only blocking rawgit.com, not even as a first move to counter this problem, but if you want to, you can use the Content-Security-Policy header. You can whitelist the URLs you do expect and thus block the ones you do not. See MDN for more information.
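For example, a minimal Content-Security-Policy set from .htaccess could look like this (a sketch only, assuming Apache with mod_headers enabled; the allowed host is a placeholder you would replace with the sources you actually load from):

<IfModule mod_headers.c>
Header always set Content-Security-Policy "default-src 'self'; script-src 'self' https://cdn.example.com"
</IfModule>

Anything not on that whitelist, including cdn.rawgit.com, would then be refused by the browser - but again, this hides the symptom rather than removing the injected code.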

Related

How to prevent snooping by user of Mac app?

I am creating a Chromium/Electron based Mac app. The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing. Normally it is not hard to MITM yourself, or attach a debugger to an app and dump memory to see the URLs and cookies.
How can I prevent these types of leaks to the user? If it's impossible, it may be acceptable to make it very hard so that a very high level of sophistication is needed.
Your users have full control of their devices; it is not possible to securely prevent them from proxying or exploring what your client-side app does. Obfuscation would seem like an option, but in the end the HTTP request that leaves your app traverses the whole OS through different layers, and your user can easily observe that, if nothing else then in the network packets (but usually much more easily).
The only way it is possible to prevent the user from knowing what's happening is if you have your own backend. The frontend app (Electron) would make a request to your backend, which in turn could make any request with any parameters without the user being aware.
Note though that your backend could still be used as a proxy or oracle just like if the user was connecting to the real service. This might or might not be a problem in your case, depending on what you actually want to achieve and why.
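As a rough illustration of the backend relay described above, a minimal Node.js/Express sketch might look like this (the route, the third-party URL and the cookie handling are placeholders of mine, not anything from the original question; it assumes Node 18+ for the built-in fetch):

const express = require('express');
const app = express();

// The Electron app only ever talks to this backend; the real service URL and
// any session cookies live here and never reach the user's machine.
app.get('/api/lookup/:id', async (req, res) => {
  const upstream = await fetch(
    'https://third-party.example.com/resource/' + encodeURIComponent(req.params.id),
    { headers: { Cookie: process.env.UPSTREAM_SESSION_COOKIE || '' } }
  );
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);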
The app is essentially a browser for my customers to use a web service that I have no control over. My requirement is that users of my app (who may have root access on their Mac) should not be able to view the URLs the app is visiting, and should be unable to gain access to the cookies the app is storing
Basically, you cannot (you could with the appropriate infrastructure, but you lack that infrastructure).
Network communications can be secured, to a point, using HTTPS (if you can't even use that, then you're completely out of luck - users wouldn't even need root access to the Mac to sniff traffic). You need to verify the server certificate to be sure you're connecting to the correct server.
One thing you might do - effective only against wannabes, I'm afraid - is first run a test API call against some random server and verify that the connection either fully succeeds, with the proper server identification and matching IP, if the server exists, or that it properly fails if the server never existed. Anything else would be a telltale that someone has taken over the network layer, and at that point you could connect to a different server, making different calls, and lament that the server isn't answering properly.
Strings in memory can be (air quote) protected (end air quote) by having them available only for the shortest time, and otherwise stored in a different form - you can, for example, take a URL and a random byte sequence of the same length, then store the sequence and the XOR of the URL and the sequence. You can then reconstruct the URL every time you need it, remembering to clear it out of any app caches it might find its way into. Also, just for the lols, you can keep a baker's dozen of different URLs sprinkled in the clear throughout the code. A memory dump at that point will turn up nothing useful.
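A minimal sketch of that XOR trick in JavaScript (the function names and the example URL are mine, purely for illustration):

const crypto = require('crypto');

// Keep only (random key, key XOR string) in memory; rebuild the string on demand.
function maskString(s) {
  const data = Buffer.from(s, 'utf8');
  const key = crypto.randomBytes(data.length);
  const masked = Buffer.alloc(data.length);
  for (let i = 0; i < data.length; i++) masked[i] = data[i] ^ key[i];
  return { key, masked };
}

function unmaskString({ key, masked }) {
  const out = Buffer.alloc(masked.length);
  for (let i = 0; i < masked.length; i++) out[i] = masked[i] ^ key[i];
  return out.toString('utf8'); // use it briefly, then let it fall out of scope
}

const hidden = maskString('https://api.example.com/v1/quotes');
// ...later, just before making the request:
const url = unmaskString(hidden);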
Files, of course, can be encrypted with any one of several schemes - the files residing on the same machine that has to know how to decode them makes all such schemes ultimately vulnerable, but there again, you can try and obfuscate things. I once stored some information in a ZIP file - but it was just the header of an encrypted ZIP file, with the appropriate directory entry block glued at the end. The data were actually just gzipped in the clear, there was no password whatsoever. The guys that tried to decode the file thought it was a plain encrypted Zip file with the extension changed, wasted a significant amount of time with several Zip cracking tools, and ended up owing me a beer.
More than that, there is not much that can realistically be done.
A big advantage would be in outsourcing the API calls and "cookie" maintenance to an external service that you control, e.g. on Amazon AWS or Azure or similar. Then you could employ all kinds of protection schemes (for example: all outbound API calls could be stored in an opaque object, timestamped, nonced, and encrypted with your server's public key, and the responses sent encrypted with your client's unique key). Since this is relatively simple and cost-effective, it would also be my recommendation.
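A rough sketch of that "opaque, timestamped, nonced, encrypted with the server's public key" envelope, using Node's built-in crypto module (the field names and key handling are my own assumptions for illustration):

const crypto = require('crypto');

// serverPublicKeyPem would be baked into the client build.
function sealApiCall(serverPublicKeyPem, endpoint, params) {
  const envelope = JSON.stringify({
    endpoint,
    params,
    ts: Date.now(),                                // timestamp
    nonce: crypto.randomBytes(16).toString('hex'), // makes every envelope unique
  });
  // RSA-OAEP: only the holder of the server's private key can open this.
  // (For payloads larger than the RSA limit you would encrypt a symmetric key this way instead.)
  return crypto.publicEncrypt(
    { key: serverPublicKeyPem, padding: crypto.constants.RSA_PKCS1_OAEP_PADDING },
    Buffer.from(envelope)
  ).toString('base64');
}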

Identifying requests made by Chrome extensions?

I have a web application that has some pretty intuitive URLs, so people have written some Chrome extensions that use these URLs to make requests to our servers. Unfortunately, these extensions cause problems for us, hammering our servers, issuing malformed requests, etc., so we are trying to figure out how to block them, or at least make it difficult to craft requests to our servers to dissuade these extensions from being used (we provide an API they should use instead).
We've tried adding some custom headers to requests and junk-json-preamble to responses, but the extension authors have updated their code to match.
I'm not familiar with chrome extensions, so what sort of access to the host page do they have? Can they call JavaScript functions on the host page? Is there a special header the browser includes to distinguish between host-page requests and extension requests? Can the host page inspect the list of extensions and deny certain ones?
Some options we've considered are:
Rate-limiting QPS by user, but the problem is not all queries are equal, and extensions typically kick off several expensive queries that look like user-entered queries.
Restricting the amount of server time a user can use, but the problem is that users might hit this limit by just navigating around or running expensive queries several times.
Adding static custom headers/response text, but they've updated their code to mimic our code.
Figuring out some sort of token (probably cryptographic in some way) we include in our requests that the extension can't easily guess. We minify/obfuscate our JS, so are ok with embedding it in the JS source code (since the variable name it would have would be hard to guess).
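To make that last option concrete, here is one possible shape of such a token: the obfuscated page JavaScript computes an HMAC over the query and a timestamp using an embedded secret, and the server recomputes it. All names are illustrative, and since the secret ships with the client this only raises the bar rather than closing the hole. The server-side check might look like:

const crypto = require('crypto');

const SHARED_SECRET = 'value-embedded-in-the-obfuscated-frontend-js'; // illustrative only

function isValidRequest(query, ts, tokenFromHeader) {
  // Reject stale timestamps so a captured token cannot be replayed forever.
  if (Math.abs(Date.now() - Number(ts)) > 5 * 60 * 1000) return false;
  const expected = crypto.createHmac('sha256', SHARED_SECRET)
    .update(query + '|' + ts)
    .digest('hex');
  if (!tokenFromHeader || tokenFromHeader.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(expected), Buffer.from(tokenFromHeader));
}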
I realize this may not be a 100% solvable problem, but we hope either to gain an upper hand in combating it or to make it sufficiently hard to scrape our UI that fewer people do it.
Welp, guess nobody knows. In the end we just sent a custom header and started tracking who wasn't sending it.
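For what it's worth, the tracking half of that can be as small as a middleware that logs requests missing the header (an Express-style sketch; the header name is made up):

const express = require('express');
const app = express();

// Flag requests that do not carry the header our own frontend always sends.
app.use((req, res, next) => {
  if (req.get('X-Our-Frontend') !== '1') {
    console.log('request without frontend header:', req.ip, req.method, req.originalUrl);
  }
  next();
});

app.listen(3000);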

Multi-Domain Login

I'm working on a little Node.js project, and while googling a lot I got a bit confused, but maybe some of you are able to point me towards the road again.
Several websites are generated by DocPad (excellent piece of software), and hosted on different domains.
All these websites shall now get a "login module" (which is also written in Node.js, using Passport). Visually, it will look similar to the excellent login slider from Web-Kreation (here's a demo). My plan was to use nginx and route all the /login requests to the login app, which is working fine.
The problem is rather related to the multiple domains, and the client-side implementation of it all. All logins use the same database.
Can I somehow use both together, and create the session-cookies from the Login-Module (which could use the same domain all the time)?
I'm answering my own question for reference, in case someone else comes across the same problem.
In the end, I solved my problem with a somewhat different setup. Instead of a module running under the DNS of each page, I use a central login application for all sites. The sites themselves do not need access to any personal information, so that's not a problem.
DocPad is still being used to generate the different websites statically (it works excellently - I know I say this very often, but if there's a brilliant piece of software out there, there's no reason not to mention it once in a while), and all static content is delivered to the user via a CDN.
The login-system is a node.js-application using Redis as the only database. It is integrated via a simple iframe on all pages rendered by DocPad on login.example.com.
After a successful login in the 'login-app' you can create an encrypted string with information about the current user. You can pass this string back in a GET/POST parameter with a redirect to the necessary domain. The encryption key is known only to the 'login-app' and your websites, so you can trust this encrypted data. It is necessary to make sure that the encrypted string is different every time for the same user; for example, you can include the time of login or some random value. After decrypting the data you can set an authorization cookie for that particular domain.
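A minimal sketch of that hand-off in Node.js (AES-256-GCM is my choice of cipher for illustration; the answer only says the string is encrypted with a key known to the login app and the sites, and every name here is made up):

const crypto = require('crypto');

const SHARED_KEY = crypto.randomBytes(32); // in practice, a key shared by the login app and the sites

// Login-app side: encrypt user info plus the login time so the token differs on every login.
function makeLoginToken(user) {
  const iv = crypto.randomBytes(12);
  const cipher = crypto.createCipheriv('aes-256-gcm', SHARED_KEY, iv);
  const payload = JSON.stringify({ user, loginAt: Date.now() });
  const enc = Buffer.concat([cipher.update(payload, 'utf8'), cipher.final()]);
  // The result would be passed along in the redirect, e.g. ?token=<string>.
  return Buffer.concat([iv, cipher.getAuthTag(), enc]).toString('base64url');
}

// Site side: decrypt the token and set the authorization cookie for this domain.
function readLoginToken(token) {
  const raw = Buffer.from(token, 'base64url');
  const iv = raw.subarray(0, 12);
  const tag = raw.subarray(12, 28);
  const enc = raw.subarray(28);
  const decipher = crypto.createDecipheriv('aes-256-gcm', SHARED_KEY, iv);
  decipher.setAuthTag(tag);
  return JSON.parse(Buffer.concat([decipher.update(enc), decipher.final()]).toString('utf8'));
}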

"Referral Denied" Implementation

I was looking for a way to protect a web service from "Synthetic queries". Read this security stack question for more details.
It seemed that I had little alternative, until I came across NSE India's website which implements a certain kind of measure against such synthetic queries.
I would like to know how they could have implemented a protection that works somewhat like this: you go to their website and search for a quote, let's say RELIANCE, and you get a page displaying the latest quote.
On analysis we find that the query being sent across is something like:
http://www.nseindia.com/marketinfo/equities/ajaxGetQuote.jsp?symbol=RELIANCE&series=EQ
But when we directly copy-paste the query into the browser, it returns "referral denied".
I guess such a procedure may also help me. Any ideas how I may implement something similar?
It won't help you. Faking a referrer is trivial. The only protection against queries the attacker constructs is server-side validation.
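To illustrate how trivial it is, here is a one-off request with a forged Referer header using Node's built-in http module (the URL is the one from the question; whether the site still answers this way doesn't matter for the point):

const http = require('http');

// Any HTTP client can simply claim to come from the site's own pages.
http.get(
  'http://www.nseindia.com/marketinfo/equities/ajaxGetQuote.jsp?symbol=RELIANCE&series=EQ',
  { headers: { Referer: 'http://www.nseindia.com/' } },
  (res) => res.pipe(process.stdout)
);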
Referrers can sometimes be used to prevent hotlinking from other websites, but even that is rather hard to do, since certain programs create fake referrers and you don't want to block those users.
The problem referrer validation could help with is other websites trying to manipulate the user's browser into accessing your site, like some kinds of cross-site request forgery. But it never protects against malicious users. Against those, the only thing that helps is server-side validation. You can never trust the client if you don't trust the user of that client.

Remote activation/deactivation and protecting against out of business

I'm in charge of an app that uses the internet to transfer data between sites, and some customers are being awkward about paying, so we need a mechanism that will allow us to cut off the service of non-payers. I'd like to protect against the admin people using firewalls to block off our checks, but conversely I'd like to give some allowance for our company web site disappearing for some reason and not being accessible.
The scheme I'm imagining is:
The server makes a twice-daily check to a web page using a URL like:
http://www.ourcompany.com/check.php?myID=GUID&Code=MyCode
This then returns a response that contains either nothing of interest, or the GUID and a value.
GUID=0
That zero indicates that the server should stop operation. To make it work again, the server will check every 5 mins for the same info, until the value matches what it thinks the code that it passed in should be transformed to.
This scheme makes sense to me, but the question really is how to protect against blocking. Given we know we must have internet access, how long should we continue to operate without being able to get the response from our web server? Is something like 14 days and then we just shut it off anyway the best way?
The solution I used in the end was pretty much as I suggested. Yes, it is defeatable using tools outlined here, but it is better than nothing.
The app checks daily to access a web site that contains a control file encrypted using public key encryption. It decrypts in memory, and if it finds its GUID, then it must match a code. To disable the operation, the code is set to 0 (zero) which will always fail. When disabled, it checks every two minutes to allow rapid restoration. There is also a manual mechanism to generate a code that will work for a week in case of server trouble.
The code will allow up to 14 days without connecting to the server before it takes this as a deliberate attempt to block it. After 10 days, it shows an error message which asks them to contact support.
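A rough sketch of what the client-side check could look like. The answer says the control file is "encrypted using public key encryption"; the closest standard construction I can show is the vendor signing the file and the app verifying it with an embedded public key, so that is what the sketch below does, and every name in it is illustrative:

const crypto = require('crypto');

// vendorPublicKeyPem is compiled into the app; myGuid identifies this installation.
function isStillEnabled(controlFileJson, signatureBase64, vendorPublicKeyPem, myGuid, expectedCode) {
  // Ignore any control file the vendor did not sign; the 14-day offline limit still applies.
  const genuine = crypto.verify(
    'sha256',
    Buffer.from(controlFileJson),
    vendorPublicKeyPem,
    Buffer.from(signatureBase64, 'base64')
  );
  if (!genuine) return true;

  const entries = JSON.parse(controlFileJson); // e.g. { "<guid>": "<code>", ... }
  if (!(myGuid in entries)) return true;       // no entry for us: keep operating
  return entries[myGuid] === expectedCode;     // a code of 0 (or any mismatch) disables the app
}

When this returns false, the app would switch to the two-minute re-check interval described above so that service can be restored quickly.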
This method is really easy to circumvent: just use a local DNS server to point www.ourcompany.com to the local machine, or use an HTTP proxy. Then the user can return whatever response they want to the program.
Assuming the user hasn't circumvented the check, how long you are to continue to operate without confirmation is a business decision and not a programming decision.
A user can use a tool such as OWASP WebScarab to change values on the fly to subvert your security model. You need to include something more difficult, such as requiring a secure channel, comparing public keys, and so on.
