I'm considering developing a web-based application that lets users create individual pages. I would like users to be able to use their own domains/subdomains to access their pages.
So far I've considered:
A) Having users forward with masking to their pages. Probably the least efficient option; having used this myself, I'm fairly sure it just iframes the page (not entirely certain, though).
B) Having users download certain files, which then call the server for their specific account settings via a user key of some sort. The most efficient in my mind at the moment; however, it requires letting users see a fair amount of source code, something I'd rather avoid if possible.
C) Getting users to add a CNAME record to their DNS settings, which is somewhat less convenient (most of these users will be used to uploading files via FTP, which is why B strikes me as the most practical), but it also means no source code is exposed to them.
The downside is that I have no idea how to implement C or what would be needed.
I got the idea from: http://unbounce.com/features/custom-urls/.
I'm wondering which of the three methods I should use to allow custom URLs for users. I would prefer C, but I have no idea how to implement it (so I'm partly asking how), and whether the time spent learning and setting up that kind of functionality would even be worth it.
Any answers/opinions/comments would be very much appreciated :)!
Option C is called wildcard DNS: I've linked to a writeup that gives an example of how to do it using Apache. Other web server setups should be able to do this as well; for what you want, it is well worth it.
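On the application side, once a user's CNAME (or your wildcard DNS) points at your server, you just look at the Host header of each request and serve that user's page. A minimal Node.js sketch, with the host-to-page lookup stubbed out as an in-memory table:

    // Minimal sketch: serve a user's page based on the Host header.
    // Assumes DNS already points these hosts at this server; the lookup
    // table is a stand-in for your real user/domain storage.
    const http = require('http');

    const pagesByHost = {
      'alice.example.com': "<h1>Alice's page</h1>",
      'customdomain.org': "<h1>Bob's page</h1>"
    };

    http.createServer((req, res) => {
      const host = (req.headers.host || '').split(':')[0]; // strip any port
      const page = pagesByHost[host];

      if (page) {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(page);
      } else {
        res.writeHead(404);
        res.end('Unknown domain');
      }
    }).listen(8080);

The same idea works behind Apache or nginx; the important part is that whatever handles the request can see which hostname was asked for.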
Does anyone know if this exact thing is possible? I've been looking around everywhere but found no help. The idea is that we have two sites (we'll call them Websites A and B) residing on two different ports on one server in a network. Website A (designed for the user to go to first) has dual authentication upon the initial login, then has a link to Website B. We want it to work so that a user cannot get to Website B without visiting Website A beforehand and logging in through Duo. I would think it's somehow possible, given that it's our domain, server, and sites.
Thanks so much! All help is appreciated.
I've tried a lot. However, it's a bit different now, because one of the sites used to be on a different server, but now it's on the same one. I haven't tried anything since they've been on the same server, and I'm not really sure where to start. I've looked at a lot of forums, but no one had my exact problem, so I thought I'd ask.
You can use a single sign-on (SSO) solution that supports two-step authentication. This would allow users to log in to Website A using their two-step authentication credentials and then be automatically logged in to Website B without having to enter their credentials again.
The best one for your use case will depend on a number of factors, such as the technology stack you are using, your budget, and the specific requirements of your implementation.
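Since both sites now sit on the same hostname (cookies ignore the port number), one simple home-grown version of this idea is a signed token: Website A sets it as a cookie once the Duo login succeeds, and Website B refuses to serve anything without a valid one. A rough Node.js sketch; the secret, cookie name, and ports are placeholders, and it uses the jsonwebtoken npm package:

    // Sketch of a shared login token between two sites on one host.
    const http = require('http');
    const jwt = require('jsonwebtoken');

    const SECRET = 'replace-with-a-long-random-secret';

    // Website A (port 8081): issue the token after the two-factor login succeeds
    http.createServer((req, res) => {
      // ...the Duo / two-factor login flow would happen here; on success:
      const token = jwt.sign({ user: 'alice' }, SECRET, { expiresIn: '1h' });
      res.writeHead(200, {
        'Set-Cookie': `sso_token=${token}; Path=/; HttpOnly`,
        'Content-Type': 'text/html'
      });
      res.end('Logged in. <a href="http://localhost:8082/">Go to Website B</a>');
    }).listen(8081);

    // Website B (port 8082): only serve users who carry a valid token
    http.createServer((req, res) => {
      const match = /sso_token=([^;]+)/.exec(req.headers.cookie || '');
      try {
        jwt.verify(match ? match[1] : '', SECRET);
        res.end('Welcome to Website B');
      } catch (err) {
        // No valid token: send them back to Website A to log in first
        res.writeHead(302, { Location: 'http://localhost:8081/' });
        res.end();
      }
    }).listen(8082);

A dedicated SSO product does essentially the same thing, with much more care around token issuance, expiry, and revocation.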
I have a hapi.js application, and while checking some logs I have found entries from automated site scanners hitting paths like /admin.php and similar.
I found the article How to Block Automated Scanners from Scanning your Site and thought it was great.
I am looking for guidance on the best strategy for creating honeypots in a hapi.js / Node.js app to identify suspicious requests, log them, and possibly ban the offending IPs temporarily.
Do you have any general or specific (to Node and hapi) recommendations on how to implement this?
My thoughts include:
Create the honeypot route with a non-obvious name
Add a robots.txt to disallow search engines on that route
Create the content of the route (see the article and discussions for some of the recommendations)
Write to a special log or tag the log entries for easy tracking and later analysis
Possibly add some logic so that if an IP address exceeds a certain threshold (say, 5 hits on the honeypot route), it gets banned for X hours or permanently; a rough sketch of the route and logging part follows this list.
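For the route and logging part, I'm picturing something like this (hapi v17+ style; the route path and log tag are just placeholders):

    // Rough idea: honeypot route with a non-obvious name that only logs
    // a tagged entry for later analysis and pretends the page doesn't exist.
    const Hapi = require('@hapi/hapi');

    const server = Hapi.server({ port: 3000 });

    server.route({
      method: 'GET',
      path: '/old-admin-backup',
      handler: (request, h) => {
        request.log(['honeypot'], {
          ip: request.info.remoteAddress,
          ua: request.headers['user-agent']
        });
        return h.response('Not found').code(404);
      }
    });

    server.start();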
A few questions I have:
How can you ban an IP address using hapi.js?
Are there any other recommendations to identify automated scanners?
Do you have specific suggestions for implementing a honeypot?
Thanks!
Let me start by saying that this idea sounds really cool, but I'm not sure it is very practical.
First, the chance of blocking legitimate bots/users is small, but it still exists.
Even if you ignore honest mistakes, the potential for abuse and denial of service is quite big. Once I know you're banning users who hit this route, I can try to make legitimate users touch it (with an iframe / img / redirect) and get them banned from the site.
Then there's its effectiveness, which is small. Sure, you're going to stop automated bots that scan your site (I'm sure the first thing they do is check the Disallow info, and it's the first thing you do in a pentest), but only unsophisticated attacks are going to be blocked, because anyone actively targeting you will blacklist the endpoint and get a different IP.
So I'm not saying you shouldn't do it, but I am saying you should think about whether the pros outweigh the cons here.
Actually getting it done is quite simple, and it seems like you're looking at a very specific case of rate limiting. I wouldn't do it directly in your hapi app, since you want the ban to be shared between instances and you probably want bans to persist across restarts (you can do it from your app, but that's too much logic for something that is already solved).
The article you mentioned actually suggests using fail2ban, which is a great solution for rate limiting. You'll need to make sure your app logs to a file fail2ban can read, and write a filter and jail configuration specifically for your app, but it should work with hapi with no issues.
Specifically for hapi, I maintain an npm module for rate limiting called ralphi, which has a hapi plugin, but unless you need proper rate limiting (which you should have for logins, sessions, and other tokens), fail2ban might be the better option in this case.
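That said, if you do want a purely in-app ban despite the caveats above (nothing here is shared between instances or persisted; you would swap the Maps for Redis or similar), the hapi side is just an onRequest extension. A sketch with placeholder threshold and ban duration:

    // In-process IP ban for hapi v17+ (illustration only).
    const Hapi = require('@hapi/hapi');

    const server = Hapi.server({ port: 3000 });

    const banned = new Map();        // ip -> ban expiry timestamp (ms)
    const hits = new Map();          // ip -> honeypot hit count
    const THRESHOLD = 5;             // placeholder
    const BAN_MS = 60 * 60 * 1000;   // placeholder: 1 hour

    // Reject banned IPs before any routing happens
    server.ext('onRequest', (request, h) => {
      const ip = request.info.remoteAddress;
      const expiry = banned.get(ip);
      if (expiry && expiry > Date.now()) {
        return h.response('Forbidden').code(403).takeover();
      }
      return h.continue;
    });

    // Call this from the honeypot route handler on every hit
    function recordHoneypotHit(ip) {
      const count = (hits.get(ip) || 0) + 1;
      hits.set(ip, count);
      if (count >= THRESHOLD) {
        banned.set(ip, Date.now() + BAN_MS);
      }
    }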
In general, honeypots are not hard to implement, but as with any security-related solution, you should consider who your potential attacker is and what you are trying to protect.
Also, honeypots are mostly used to notify you about an existing or imminent breach. Though they can also be used to trigger a lockdown, your main takeaway from them is visibility once a breach happens, but before the attacker has had too much time to abuse the system (you don't want to discover the breach two months later, when your site has been defaced and all the valuable data has already been taken).
A few ideas for honeypots:
Have an 'admin' user with a relatively average password (random 8 chars) but no privileges at all; when this user successfully logs in, notify the real admin.
Notice that you're not locking the attacker out on the first login attempt, even if you know he is doing something wrong (he will just get a different IP and use another account). But if he actually managed to log in, maybe there's an error in your login logic? Maybe password reset is broken? Maybe rate limiting isn't working? There's so much more information to follow up on.
Now that you know you have a semi-competent attacker, maybe try to see what he is trying to do; maybe you'll learn who he is or what his end goal is (highly valuable, since he is probably going to try again).
Find sensitive places you don't want users to play with and plant some canary tokens there. This can be just a file that sits with all your other uploads on the system, AWS credentials on your dev machine, or a link in your admin panel that says "technical documentation". The idea is that regular users should not care about or have any access to these files, but attackers will find them too tempting to ignore. The moment they touch one, you know that area has been compromised and you need to start blocking and investigating.
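As a small illustration of the decoy-admin idea (the decoy username and the notification transport are placeholders for whatever fits your app):

    // Sketch: after any successful login, check whether the decoy account was used.
    const DECOY_USERNAME = 'admin';   // the unprivileged decoy account

    // Placeholder notification: replace with email/Slack/pager integration
    async function notifyAdmin(details) {
      console.error('[HONEYPOT ALERT]', JSON.stringify(details));
    }

    async function onSuccessfulLogin(user, request) {
      if (user.username === DECOY_USERNAME) {
        // The decoy has no privileges, so a successful login means something
        // in the auth flow (password reset, rate limiting, ...) may be broken.
        await notifyAdmin({
          event: 'decoy-admin-login',
          ip: request.info.remoteAddress,
          ua: request.headers['user-agent'],
          at: new Date().toISOString()
        });
      }
    }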
Just remember, before implementing any security measure, to think about who you expect to attack you. Honeypots are probably one of the last security measures you should consider, and there are a lot more common and basic security issues that need to be addressed first (there are endless lists of Node.js security best practices, and the OWASP Top 10 is the de facto standard for general web app security).
I am working on a personal project and I have been considering the security of sensitive data. I want to use an API for accessing the backend, and I want to keep the backend on a different server from the one the user will log on to. This then requires cross-domain access to data.
Considering that a lot of access and transactions will take place, I have the following questions, hoping to be guided down the right path by those who have tried and tested cross-domain access. I don't want to make assumptions, implement, and then run into trouble and a redesign after I have launched the service, thereby losing sleep. I know there is no single right way to do many things in programming, but there are so many wrong ways.
How safe is it for handling sensitive data (even with HTTPS)?
Does it have issues handling a lot of user transactions?
Does it have any downsides I have not mentioned?
I'm asking these questions because some posts I read this evening discouraged the use of cross-domain access, while others encouraged it. I decided to hear from professionals who have actually used it at a larger scale.
I am actually building a mobile app, using Laravel as the backend.
Thanks..
How safe is it for handling sensitive data (even with HTTPS)?
SSL is generally considered safe (it's used everywhere and is considered the standard). However, it's not any less safe just because you're hitting a different server. The data still has to traverse the pipes and reach its destination, which carries the same risks regardless of the server.
Does it have issues handling a lot of user transactions?
I don't see why it would. A server is a server. Ultimately, your server's ability to handle a high volume of transactions is going to be based on its power, the efficiency of your code, and your application's ability to scale.
Does it have any downsides I have not mentioned?
Authentication is the only thing that comes to mind. I'm confused by your question as to how they would log into one but access data from another. It seems that would all just be one application. If you want to revise your question, I'll update my answer.
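If the pieces do end up on separate domains, the usual pattern is token-based authentication plus CORS on the API: the backend issues a token at login, and the client sends it with every request. A minimal client-side sketch; the URLs and field names are placeholders, and the Laravel side would still need to allow the app's origin and validate the token:

    // Placeholder base URL for the separate API backend
    const API = 'https://api.example.com';

    // 1. Log in: exchange credentials for a token over HTTPS
    async function login(email, password) {
      const res = await fetch(`${API}/login`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ email, password })
      });
      if (!res.ok) throw new Error('Login failed');
      const { token } = await res.json();   // field name is an assumption
      return token;
    }

    // 2. Send the token on every subsequent cross-domain request
    async function getAccount(token) {
      const res = await fetch(`${API}/account`, {
        headers: { Authorization: `Bearer ${token}` }
      });
      return res.json();
    }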
By including Google Analytics in a website (specifically the JavaScript version), isn't it true that you are giving Google complete access to all your cookies and site information (i.e. it could be a security hole)?
Can this be mitigated by putting Google in an iframe that is sandboxed? Or maybe only passing Google the necessary information (e.g. browser type, screen resolution, etc.)?
How can someone get the most out of Google Analytics without leaving the entire site open?
Or perhaps passing the data through my own server and then uploading it to Google?
You can create a scriptless implementation via the Measurement Protocol (for Universal Analytics enabled properties). This not only avoids any security issues with the script (although I'd rather trust Google on that), it also means you have more control over what data is submitted to the Google servers.
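A minimal server-side sketch of such a Measurement Protocol hit (Node.js; the tracking ID is a placeholder, and the client ID would normally be a stable per-visitor value rather than a hard-coded one):

    // Send a pageview hit to the Universal Analytics Measurement Protocol endpoint
    const https = require('https');
    const querystring = require('querystring');

    const payload = querystring.stringify({
      v: 1,                     // protocol version
      tid: 'UA-XXXXXXX-1',      // placeholder tracking ID
      cid: '555',               // placeholder anonymous client ID
      t: 'pageview',            // hit type
      dp: '/home',              // document path
      dt: 'Homepage'            // document title
    });

    const req = https.request({
      hostname: 'www.google-analytics.com',
      path: '/collect',
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
    }, (res) => {
      console.log('GA responded with status', res.statusCode);
    });

    req.write(payload);
    req.end();

Because the hit is built on your server, only the fields you explicitly include ever reach Google.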
A script run on your site can read cookies on your site, yes. And that data can be sent back to Google, yes. That is why you shouldn't store sensitive information in cookies. You shouldn't do this even if you don't use Google Analytics, and even if you don't use ANY other code except your own: browsers and browser add-ons can also read that stuff, and you definitely cannot control that. Again, never store sensitive information in cookies.
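To make that concrete: any script included on the page, yours or a third party's, can do this, which is why session identifiers and the like are usually set with the HttpOnly flag so scripts cannot read them at all:

    // Any script running on the page sees every cookie that is not HttpOnly
    console.log(document.cookie);   // e.g. "theme=dark; lang=en"

    // Cookies the server sets with HttpOnly (e.g.
    // "Set-Cookie: session=abc123; HttpOnly; Secure") never show up here.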
As far as access to "site information" goes: JavaScript can be used to read the content on your pages, know the URLs of pages, etc.; in other words, anything you serve up on a web page. Anything that is not behind a wall (e.g. a login barrier) is surely up for grabs, but crawlers will look at that stuff anyway. Stuff behind walls can still be grabbed automatically, depending on what they actually have to do to get past those walls (e.g. simple registration/login barriers are pretty easy to get past).
This is also why you should never display sensitive information even in the content of your site, e.g. credit card numbers, passwords, etc. That's why virtually every site you go to that has even remotely sensitive information always shows a mask (e.g. ****) instead of the actual values.
Google Analytics does not actively do these things, but you're right: there's nothing stopping them from doing it, and you've already given them the right to do it by using their script.
And you are right: the safest way to control what Google can actually see is to send server-side requests to them, and also to put all your content behind barriers that cannot be easily crawled or scraped, the strongest barrier being one that requires paying for access. People are ingenious about making crawlers and bots that get past all sorts of forms and "human" checks, etc., and you're fighting a losing battle on that count, but nothing stops a bot faster than requiring someone to give you money to access your stuff. Of course, this also means you'd have to make everybody pay for access...
Anyway, if you're that paranoid about this stuff, why use GA at all? Use something you host yourself (e.g. Piwik). This won't solve the crawler/bot problem, obviously, but it will address worries about GA grabbing more than you want it to.
I have a web application with one input that lets users post links to their websites.
However, I want to protect users as much as I can.
Is there any free service that can check the links for you?
What I'm looking for is passing the links to the security site; if a link is OK, it will automatically pass through to the destination, and if not, it will stop and let the user know the site is not safe.
link (my site) -> security check website -> destination
The Web of Trust API allows you to check the user-provided "reputation" of a website.
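As a sketch of how the "check, then redirect" flow could look (the checkReputation function below is just a stub standing in for whatever lookup you choose, WOT or otherwise, and the route and port are placeholders):

    // "Check, then redirect" endpoint in plain Node.js.
    const http = require('http');

    // Stub: call your chosen reputation service here and
    // return true only when the link looks safe.
    async function checkReputation(targetUrl) {
      return true;
    }

    http.createServer(async (req, res) => {
      const url = new URL(req.url, 'http://localhost');
      const target = url.searchParams.get('to');   // e.g. /out?to=https://example.com

      if (!target) {
        res.writeHead(400);
        res.end('Missing link');
        return;
      }

      if (await checkReputation(target)) {
        // Link passed the check: send the visitor on to the destination
        res.writeHead(302, { Location: target });
        res.end();
      } else {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('This link did not pass the safety check.');
      }
    }).listen(3000);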
WOT is a good suggestion. However, keep in mind that the crowdsourced nature of this project can be an issue, security-wise.
For example, using a few fake accounts, one can boost the trust rating of a malicious domain. This is especially true for "under the radar" domains that would otherwise generate no organic ratings.
Admittedly, this would probably be resolved over time (as users become exposed to the threat), but the process could be repeated again and again...
I would suggest using WOT as a cross-reference, combining it with other factors (e.g. Alexa) to filter out small domains, which are not likely to be rated, except by their owners.
The best way is to manage your own list...