Achieve site-specific Tor exit nodes

Anyone who has enough experience using Tor (Browser), and is persistent and adamant enough to keep using it to regain a little privacy and anonymity on the internet, knows how challenging this can become on the many sites that automatically play the white knight in shining armor to "protect" your account from being compromised.
The technically aware cannot choose to take responsibility for their own security, and must be baby-sat along with the masses.
That being said, in order to avoid tripping the security alarms and algorithms employed by such sites and services, it may be necessary to consistently force an exit node from a particular region, thereby satisfying one of the major triggers those algorithms rely on.
Given, however, that one accesses a varied number of sites in a session, this poses a problem: some services are better accessed from one region, and others from another.
Thus the question: is there a way to configure Tor Browser to choose site-specific exit nodes?
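For reference, pinning the exit region (browser-wide, not per-site) is possible with stock torrc options; a minimal sketch, assuming a US exit is wanted:

```
# torrc sketch: force exit relays from one region for all circuits.
# Note these options are global; stock Tor has no per-site equivalent,
# which is exactly the gap the question asks about.
ExitNodes {us}
StrictNodes 1
```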

Related

Client Server Security Architecture

I would like to get my head around how best to set up a client-server architecture where security is of utmost importance.
So far I have the following, which I hope someone can tell me is good enough, or whether there are other things I need to think about. Or if I have the wrong end of the stick and need to rethink things.
Use SSL certificate on the server to ensure the traffic is secure.
Have a firewall set up between the server and client.
Have a separate sql db server.
Have a separate db for my security model data.
Store my passwords in the database using a secure hashing function such as PBKDF2.
Passwords generated using a salt which is stored in a different db to the passwords.
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
I would really like to know whether there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
I have tried searching for some diagrams which could help me understand but I cannot find any which seem to be appropriate.
Thanks in advance
Hardening your architecture can be a challenging task, and sharding your services across multiple servers and over-engineering your architecture for the semblance of security could prove to be your largest security weakness.
However, a number of questions arise when you come to design your IT infrastructure which can't be answered in a single SO answer (I will try to find some good white papers and append them).
There are a few things I would advise; they are somewhat opinionated, backed up with my own thinking around them.
Your Questions
I would really like to know whether there are any other steps or layers I need to make this secure. Is storing everything in the cloud wise, or should I have some physical servers as well?
Settle for the cloud. You do not need to store things on physical servers anymore unless you have current business processes running core business functions that are already working on local physical machines.
Running physical servers increases your system administration requirements for things such as HDD encryption and physical security requirements which can be misconfigured or completely ignored.
Use SSL certificate on the server to ensure the traffic is secure.
This is normally a no-brainer and I would go with a straight "Yes"; however, you must take the context into consideration. If you are running something such as a blog or documentation-related website that never transfers sensitive information over HTTP, then why use HTTPS? HTTPS has its own overhead; it's minimal, but it's still there. That said, if in doubt, enable HTTPS.
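As a minimal sketch of what enabling HTTPS looks like in practice (assuming nginx; the domain and certificate paths are illustrative placeholders):

```nginx
# Serve the site over TLS; certificate paths are placeholders.
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.crt;
    ssl_certificate_key /etc/ssl/private/example.com.key;
}

# Redirect plain HTTP to HTTPS so credentials never travel unencrypted.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```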
Have a firewall set up between the server and client.
That is suggested. You may also want to opt for a service such as the CloudFlare WAF, though I haven't personally used it.
Have a separate sql db server.
Yes, however not necessarily for security purposes. Database servers and web application servers have different hardware requirements, and optimizing both simultaneously on the same box is not very feasible. Additionally, having them on separate boxes increases your scalability quite a bit, which will be beneficial in the long run.
From a security perspective, it's mostly another illusion of "If I have two boxes and the attacker compromises one [the web application server], he won't have access to the database server".
At first sight this might seem to be the case, but it rarely is. Compromising the web application server is still almost a guaranteed game over. I will not go into much detail on this (unless you specifically ask me to); however, it's still a good idea to keep both services separate from each other in their own boxes.
Have a separate db for my security model data.
I'm not sure I understood this; what security model are you referring to exactly? Care to share a diagram or two (maybe an ERD) so we can get a better understanding?
Store my passwords in the database using a secure hashing function such as PBKDF2.
Obvious yes. What I am about to say, however, is controversial and may be flagged by some people (it's a bit of a hot debate): I recommend using bcrypt instead of PBKDF2, because bcrypt is slower to compute (and therefore slower to crack).
See - https://security.stackexchange.com/questions/4781/do-any-security-experts-recommend-bcrypt-for-password-storage
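A minimal sketch of what this looks like, assuming Python and the third-party bcrypt package (pip install bcrypt):

```python
import bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() generates a per-password salt and embeds it, together
    # with the work factor, in the returned hash itself.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

def verify_password(password: str, stored_hash: bytes) -> bool:
    # checkpw() extracts the salt from stored_hash and re-derives the hash.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash)
```

Note that the salt travels inside the stored hash, which is relevant to the next point.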
Passwords generated using a salt which is stored in a different db to the passwords.
If you use bcrypt I would not see why this is required (I may be wrong), since the salt is stored as part of the bcrypt hash itself. I go into more detail regarding the whole username and password hashing topic in the following StackOverflow answer, which I would recommend you read - Back end password encryption vs hashing
Use cloud based infrastructure such as AWS to ensure that the system is easily scalable.
This purely depends on your goals, budget and requirements. I would personally go for AWS, however you should read some more on alternative platforms such as Google Cloud Platform before making your decision.
Last Remarks
All of the things you mentioned are important, and it's good that you are even considering them (most people just ignore such questions or go with the most popular answer); however, there are a few additional things I want to point out:
Internal Services - Make sure that no unrequired services and processes are running on the server, especially in production. Such services will normally be running old versions of their software (since you won't be administering them) that could be used as an entry point for your server to be compromised.
Code Securely - This may seem like another no-brainer yet it is still overlooked or not done properly. Investigate what frameworks you are using, how they handle security and whether they are actually secure. As a developer (and not a pen-tester) you should at least use an automated web application scanner (such as Acunetix) to run security tests after each build that is pushed to make sure you haven't introduced any obvious, critical vulnerabilities.
Limit Exposure - Goes somewhat hand-in-hand with my first point. Make sure that services are only exposed to other services that depend on them and nothing else. As a rule of thumb, keep everything entirely closed and open up gradually when strictly required.
My last few points may come off as broad. The intention is to keep a certain philosophy when developing your software and infrastructure, rather than a permanent rule to tick off on a checklist.
There are probably a few things I have missed out. I will update the answer accordingly over time if need be. :-)

How to securely distinguish traffic from my app and browser traffic

I'm designing a game that makes queries to a database on the web. The database is fronted by a web service. For example, a request could look like this:
Endpoint: "server.com/user/UID/buygold"
POST:
amount: 100
The web service would make sure that userid has enough funds to purchase 100 gold, then would return a Boolean answer based on the success of the transaction.
However, I want to limit the amount of scripting someone could possibly do to automate gameplay. For example, they could figure out their userid and have automated tasks that buy gold for them while they are at work.
On the web-services side, what are some sound security measures that I can put in place to decline all but real app traffic? Is there also a way to trump reverse engineers who will take the app apart and look for keys/certs?
I hope that this is not for a production environment; the security implications alone are mind-boggling and certainly go beyond the scope of a single answer, as they would require a rather lengthy and in-depth list of requirements and recommendations that are contingent on many other factors making up your web-service environment. For example, the network topology, authentication, session control and management, and various other variables all play important roles in implementing a sound cyber-security countermeasure.
However, assuming that you have all that taken care of, I will answer your main questions as follows:
For question:
"what are some sound security measures that I can put in place to decline all but real app traffic"
Answer:
This is one of many options out there that would address your particular concern: check the client's User-Agent header in the request, which may appear something like this:
"Mozilla/5.0 (Macintosh; Intel Mac OS X 32.11; rv:49.0) Gecko/20122101 Firefox/42.0"
It depends on how the script automating gameplay is run: if it is in the form of a browser extension, then the User-Agent check performs very poorly as a countermeasure; if, on the other hand, the script runs directly from the client against your server (web service), then you can detect it right away, and there are ways to detect whether someone spoofed a User-Agent just to bypass this countermeasure.
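A minimal sketch of such a check, assuming Python/Flask and a hypothetical token the real client embeds in its User-Agent (neither framework nor token is specified in the original question):

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Prefix the real game client sends in its User-Agent; purely illustrative.
EXPECTED_AGENT_PREFIX = "MyGameClient/"

@app.route("/user/<uid>/buygold", methods=["POST"])
def buy_gold(uid):
    agent = request.headers.get("User-Agent", "")
    if not agent.startswith(EXPECTED_AGENT_PREFIX):
        # Browsers and naive scripts present their own User-Agent.
        # This is trivially spoofable, so treat it as a first-line filter only.
        abort(403)
    # ... fund check and purchase logic would go here ...
    return jsonify(success=True)
```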
Another counter measure you can utilize is session management at the client level. So, this would require an architectural overview of how you implemented your particular project, but a general summary would follow a pattern like this:
Customer/GamePlayer would naturally be required to login (authentication of some sort)
The client system (which is the user interface) that the game player is using will have countermeasures implemented in a front-end scripting language, i.e. JavaScript or any framework that makes use of JS, such as jQuery, Dojo, etc.
Register event handlers that monitor actions, such as the type of input, with some Boolean logic that follows something like this:
"if input is not from keyboard or mouse, then send flag with request"
The server/web-service will have logic to handle this request appropriately. This would be a way to catch/detect the game player after they commit the violation, useful for legal reasons such as profiling evidence. If, on the other hand, you want to prevent it from happening, then you could have some Boolean logic that goes something like this: "if input is not from the keyboard or mouse (or whatever permitted input device), then do not allow the action (gameplay), and still report back to the web server".
There are a dozen other ways, but this one generally addresses your question, provided that you take into consideration that there are hundreds of other factors to think about, from the networking level all the way up to the application layer, down to which pattern you are using for your web service: whether it's a REST/API type of environment, whether it follows an MVC pattern, and so on. There is no silver bullet when it comes to cyber security; it's really a proactive and constant initiative on your end to ensure that all stakeholders' assets are protected. In this case the asset is the web service and gameplay, and the threat is the risk of gameplay tampering, which would affect the integrity of your game.
Now, regarding your second question:
" Is there also a way to trump reverse engineers who will take the app apart and look for keys/certs?"
When reverse engineers really put their mind to it, there is nothing you can really do: whatever countermeasure you implement, they will find a workaround. That's why it's called reverse engineering - they will reverse-engineer your "countermeasure". So, not to be all cynical about it, you have to accept the reality that there is no such thing as a "trump" countermeasure in cyber security. You can, however, employ various mechanisms from the network layer all the way up to the application level, combined with proactive initiatives, intrusion detection, and monitoring for abnormal behavioral characteristics in gameplay patterns; all of these will mitigate your risk. With all that said, your final frontier is to ensure you have a good legal policy in place in your ToS (terms of service); depending on where you're hosting your web service (geographically), you will be protected when users violate those terms, especially when you have verbiage that precludes users from attempting to reverse-engineer, tamper with gameplay scoreboards or currency, and so on.
Another good approach is to really connect with users. Users are people, and people sometimes forget that their actions also affect others. Once a user is aware of how their actions may negatively affect others - for example, how scripting up their 100 Gold may financially and emotionally affect those who put real time and effort into making the service possible - they may reconsider; a simple introductory welcome video upon signing up can do wonders, for example. However, sometimes a user may not really know that they were prohibited from using auto-scripts for gameplay, or can at least raise an affirmative defense to that effect, so having well-published policies can mitigate and potentially reduce these types of risks altogether. Despite this optimistic outlook on users, you still have to exercise good programming practices and keep security countermeasures in place.
I hope that I have given you some insight and direction to assist you with this matter; as you can see, it is really an involved and potentially very complicated process.
Good luck with your initiatives.

Is basic HTTP auth in CouchDB safe enough for replication across EC2 regions?

I can appreciate that seeing "basic auth" and "safe enough" in the same sentence is a lot like reading "Is parachuting without a parachute still safe?", so I'll do my best to clarify what I am getting at.
From what I've seen online, people typically describe basic HTTP auth as being unsecured due to the credentials being passed in plain text from the client to the server; this leaves you open to having your credentials sniffed by a nefarious person or man-in-the-middle in a network configuration where your traffic may be passing through an untrusted point of access (e.g. an open AP at a coffee shop).
To keep the conversation between you and the server secure, the solution is typically to use an SSL-based connection, where your credentials might still be sent in plain text, but the communication channel between you and the server is itself secured.
So, onto my question...
In the situation of replicating one CouchDB instance from an EC2 instance in one region (e.g. us-west) to another CouchDB instance in another region (e.g. singapore) the network traffic will be traveling across a path of what I would consider "trusted" backbone servers.
Given that (assuming I am not replicating highly sensitive data) would anyone/everyone consider basic HTTP auth for CouchDB replication sufficiently secure?
If not, please clarify what scenarios I am missing here that would make this setup unacceptable. I do understand for sensitive data this is not appropriate, I just want to better understand the ins and outs for non-sensitive data replicated over a relatively-trusted network.
Bob is right that it is better to err on the side of caution, but I disagree. Bob could be right in this case (see details below), but the problem with his general approach is that it ignores the cost of paranoia. It leaves "peace dividend" money on the table. I prefer Bruce Schneier's assessment that it is a trade-off.
Short answer
Start replicating now! Do not worry about HTTPS.
The greatest risk is not wire sniffing, but your own human error, followed by software bugs, either of which could destroy or corrupt your data. Make a replica! If you will replicate regularly, plan to move to HTTPS or something equivalent (SSH tunnel, stunnel, VPN).
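A minimal sketch of kicking off a one-shot replication over basic auth, assuming Python with the requests package (hosts, database names, and credentials are all illustrative):

```python
import requests

# POST to the source server's _replicate endpoint; CouchDB pushes the
# database to the target. Credentials ride in the URLs (basic auth).
resp = requests.post(
    "http://admin:secret@us-west.example.com:5984/_replicate",
    json={
        "source": "mydb",
        "target": "http://admin:secret@singapore.example.com:5984/mydb",
        "create_target": True,  # create the target DB if it doesn't exist
    },
)
resp.raise_for_status()
print(resp.json())
```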
Rationale
Is HTTPS easy with CouchDB 1.1? It is as easy as HTTPS can possibly be, or in other words, no, it is not easy.
You have to make an SSL key pair, purchase a certificate or run your own certificate authority—you're not foolish enough to self-sign, of course! The user's hashed password is plainly visible from your remote couch! To protect against cracking, will you implement bi-directional SSL authentication? Does CouchDB support that? Maybe you need a VPN instead? What about the security of your key files? Don't check them into Subversion! And don't bundle them into your EC2 AMI! That defeats the purpose. You have to keep them separate and safe. When you deploy or restore from backup, copy them manually. Also, password-protect them so if somebody gets the files, they can't steal (or worse, modify!) your data. When you start CouchDB or replicate, you must manually input the password before replication will work.
In a nutshell, every security decision has a cost.
A similar question is, "Should I lock my house at night?" It depends. Your profile says you are in Tucson, so you know that some neighborhoods are safe, while others are not. Yes, it is always safer to lock all of your doors all of the time. But what is the cost to your time and mental health? The analogy breaks down a bit because the time invested in worst-case security preparedness is much greater than that spent twisting a bolt lock.
Amazon EC2 is a moderately safe neighborhood. The major risks are opportunistic, broad-spectrum scans for common errors. Basically, organized crime is scanning for common SSH accounts and web apps like Wordpress, so they can steal a credit-card or other database.
You are a small fish in a gigantic ocean. Nobody cares about you specifically. Unless you are specifically targeted by a government or organized crime, or somebody with resources and motivation (hey, it's CouchDB—that happens!), then it's inefficient to worry about the boogeyman. Your adversaries are casting broad nets to get the biggest catch. Nobody is trying to spear-fish you.
I look at it like high-school integral calculus: measuring the area under the curve. Time goes to the right (x-axis). Risky behavior goes up (y-axis). When you do something risky, you save time and effort, but the graph spikes upward. When you do something the safe way, it costs time and effort, but the graph moves down. Your goal is to minimize the long-term area under the curve, but each decision is case-by-case. Every day, most Americans ride in automobiles: the single most risky behavior in American life. We intuitively understand the risk-benefit trade-off. Activity on the Internet is the same.
As you imply, basic authentication without transport layer security is 100% insecure. Anyone on EC2 that can sniff your packets can see your password. Assuming that no one can is a mistake.
In CouchDB 1.1, you can enable native SSL. In earlier versions, use stunnel. Adding SSL/TLS protection is so simple that there's really no excuse not to.
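A sketch of what enabling native SSL looks like in CouchDB 1.1's local.ini (the certificate paths are placeholders; generating the key pair is a separate step):

```ini
[daemons]
; Start the HTTPS listener alongside the regular HTTP one.
httpsd = {couch_httpd, start_link, [https]}

[ssl]
cert_file = /etc/couchdb/certs/server_cert.pem
key_file = /etc/couchdb/certs/server_key.pem
```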
I just found this statement from Amazon which may help anyone trying to understand the risk of packet sniffing on EC2.
Packet sniffing by other tenants: It is not possible for a virtual instance running in promiscuous mode to receive or "sniff" traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. This includes two virtual instances that are owned by the same customer, even if they are located on the same physical host. Attacks such as ARP cache poisoning do not work within EC2. While Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, as a standard practice customers should encrypt sensitive traffic.
http://aws.amazon.com/articles/1697

Software and Security - do you follow specific guidelines?

As part of a PCI-DSS audit, we are looking into improving our coding standards in the area of security, with a view to ensuring that all developers understand the importance of this area.
How do you approach this topic within your organisation?
As an aside we are writing public-facing web apps in .NET 3.5 that accept payment by credit/debit card.
There are so many different ways to break security. You can expect infinite attackers. You have to stop them all - even attacks that haven't been invented yet. It's hard. Some ideas:
Developers need to understand well known secure software development guidelines. Howard & Le Blanc "Writing Secure Code" is a good start.
But being good rule-followers is only half the point. It's just as important to be able to think like an attacker. In any situation (not only software-related), think about what the vulnerabilities are. You need to understand some of those weird ways that people can attack systems - monitoring power consumption, speed of calculation, random number weaknesses, protocol weaknesses, human system weaknesses, etc. Giving developers freedom and creative opportunities to explore these is important.
Use checklist approaches such as OWASP (http://www.owasp.org/index.php/Main_Page).
Use independent evaluation (eg. http://www.commoncriteriaportal.org/thecc.html). Even if such evaluation is too expensive, design & document as though you were going to use it.
Make sure your security argument is expressed clearly. The common criteria Security Target is a good format. For serious systems, a formal description can also be useful. Be clear about any assumptions or secrets you rely on. Monitor security trends, and frequently re-examine threats and countermeasures to make sure that they're up to date.
Examine the incentives around your software development people and processes. Make sure that the rewards are in the right place. Don't make it tempting for developers to hide problems.
Consider asking your QSA or ASV to provide some training to your developers.
Security basically falls into one or more of three domains:
1) Inside users
2) Network infrastructure
3) Client side scripting
That list is written in order of severity, which is the opposite of the order of violation probability. Here are the proper management solutions from a very broad perspective:
The only solution to prevent violations from the inside user is to educate the user, enforce awareness of company policies, limit user freedoms, and monitor user activities. This is extremely important, as this is where the most severe security violations always occur, whether malicious or unintentional.
Network infrastructure is the traditional domain of information security. Two years ago, security experts would not have considered looking anywhere else for security management. Some basic strategies are to use NAT for all internal IP addresses, enable port security in your network switches, physically separate services onto separate hardware, and carefully protect access to those services even after everything is buried behind the firewall. Protect your database from code injection. Use IPsec to reach all automation services behind the firewall, and limit points of access to known points behind an IDS or IPS. Basically, limit access to everything, encrypt that access, and assume every access request is potentially malicious.
Over 95% of reported security vulnerabilities are related to client-side scripting from the web, and about 70% of those target memory corruption, such as buffer overflows. Disable ActiveX and require administrator privileges to activate ActiveX. Patch all software that executes any sort of client-side scripting in a test lab no later than 48 hours after the patches are released by the vendor. If the tests do not show interference with the company's authorized software configuration, then deploy the patches immediately. The only solution for memory-corruption vulnerabilities is to patch your software. This software may include: Java client software, Flash, Acrobat, all web browsers, all email clients, and so forth.
As far as ensuring your developers are compliant with PCI accreditation, ensure they and their management are educated to understand the importance of security. Most web servers, even large corporate client-facing web servers, are never patched. Those that are patched may take months to be patched after they are discovered to be vulnerable. That is a technology problem, but even more important, it is a gross management failure. Web developers must be made to understand that client-side scripting is inherently open to exploitation, even JavaScript. This problem is easily realized with the advance of AJAX, since information can be dynamically injected to an anonymous third party in violation of the same-origin policy, completely bypassing the encryption provided by SSL. The bottom line is that Web 2.0 technologies are inherently insecure, and those fundamental problems cannot be solved without defeating the benefits of the technology.
When all else fails, hire some CISSP-certified security managers who have the management experience - and the balls - to speak directly to your company executives. If your leadership is not willing to take security seriously, then your company will never meet PCI compliance.

Detecting login credentials abuse

I am the webmaster for a small, growing industrial association. Soon, I will have to implement a restricted, members-only section for the website.
The problem is that our organization membership both includes big companies as well as amateur “clubs” (it's a relatively new industry…).
It is clear that those clubs will share the login ID they use to log onto our website. The problem is to detect whether one of their members shares the login credentials with people who are not normally supposed to be accessing the website (there is no objection to such a club having all of its members get on the website).
I have thought about logging, along with each sign-on, the IP address as well as the OS and browser used; if the OS/browser stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers.
But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated far, and there would then be cause for action, such as changing the password.
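A minimal sketch of that heuristic, assuming Python (names and thresholds are illustrative, taken loosely from the numbers above):

```python
from collections import defaultdict

MAX_IPS = 10            # distinct IP addresses tolerated per account
MAX_FINGERPRINTS = 10   # distinct OS/browser combinations tolerated

# account id -> sets of observed IPs and OS/browser fingerprints
observations = defaultdict(lambda: {"ips": set(), "fps": set()})

def record_login(account: str, ip: str, os_name: str, browser: str) -> None:
    entry = observations[account]
    entry["ips"].add(ip)
    entry["fps"].add((os_name, browser))

def looks_disseminated(account: str) -> bool:
    # Many distinct IPs *and* many distinct OS/browser combinations
    # suggest the credentials have been widely shared.
    entry = observations[account]
    return len(entry["ips"]) > MAX_IPS and len(entry["fps"]) > MAX_FINGERPRINTS
```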
Of course, it is extremely annoying to have your password unilaterally changed. So, for this problem, I thought about allowing the "clubs" to manage their own list of sub-accounts; if abuse is suspected, the user responsible could easily be pinned down, and this "sub-member" alone would face the annoyance of a password change.
Question:
What potential problems would anyone see with such an approach?
Any particular reason why you can't force each club member to register (just straight-up, not necessarily as a sub-account or similar complex structure)? Perhaps give each club some sort of code for its users to enter when they register, so you can automatically create their accounts and affiliate them with the club; you then have direct accounting of each member without an onerous process that the club has to manage itself. It is then much easier to determine if a given account is being spread around (disparate IP accesses in given periods of time).
Clearly then you can also set a limit on the number of affiliated accounts per club, should you want to do so. This is basically what you've suggested, I suppose, but I would try to keep any onerous management tasks out of the hands of your users if at all possible. If you can manage club-affiliated signups, you should, rather than forcing someone at the club to manage them for you.
Also, while some sort of heuristic based on IP and credentials is probably fine, I would stay away from incorporating the user-agent, or at least from caring too much about it. Seeing a few different UAs from the same IP - depending on your expected userbase, I suppose - isn't really that unusual. I use several browsers in the course of my day due to website bugs, etc., and unless someone is using a machine as a proxy, it's not evidence of anything nefarious.
