Programming/Hacking - security

Let's say I knew an ethical hacker that I wanted to hire to do a penetration test, but trust was an issue. Could I duplicate my system, remove its sensitive data, and make the copy untraceable to the company that owns it?
If just the structure and security measures remained, could this duplicate be hacked to see if certain areas can be accessed? I'm guessing it could be done similarly to the 'missions' on hackthissite.org. I could then be informed of the exploits. What would the test site look like?
Could it actually be completely untraceable to its company? How hard would this be?

You generally cannot go around distributing the code for your employer's sites.
With their permission, though, what you could do is set up a staging environment (most development workflows should have one anyway) and point the relevant people to that site (with no real data) for the purposes of a penetration test. This may limit the scope of the validity of their attacks, but not greatly, because you're already basically saying "attack this web infrastructure", and the data the testers see is largely irrelevant (as long as it has the same structure); that is, the aim of exposing weaknesses in the site's function is independent of the data.
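As a minimal sketch of the "no real data" part, here is one way to scrub sensitive columns while preparing a staging copy of a database. It assumes a SQLite file copied from production; the users table with id/email/ssn columns is a hypothetical placeholder, not anything from the question.

    # Sketch: anonymize sensitive columns in a staging copy of a database.
    # Assumes a SQLite file copied from production; the users/id/email/ssn
    # schema is a hypothetical placeholder.
    import sqlite3

    def scrub_staging_db(path):
        conn = sqlite3.connect(path)
        cur = conn.cursor()
        # Replace real values with structurally similar fakes so the
        # application behaves the same against the staging data.
        cur.execute("UPDATE users SET email = 'user' || id || '@example.com'")
        cur.execute("UPDATE users SET ssn = '000-00-0000'")
        conn.commit()
        conn.close()

    # scrub_staging_db("staging.db")  # run against the copied production file

The point is that the clone keeps the same structure (same tables, same column types) while carrying nothing an attacker could trace back to the company.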

You could do that, but there are nuances. Just make sure that the structure is not changed: remove the non-behavioral pieces, create a clone, and allow him to test only that.
Bear in mind, though, that even if you remove the sensitive data, you can still be hacked. A security flaw may lie not in the application's behavior but in the surrounding services and infrastructure (which is most often the case).
The tester can also easily not report a vulnerability and leave it open as a backdoor into your real application.

Related

What is the security standard for a small business?

This may be a very newbie question, but what exactly do I need so that I can say my network is considered "secure"?
To be more specific, if I have a website that deals with login/signup and lots of money transactions, what do I need to protect it?
So far I know I need an EV SSL certificate and login-system protections like brute-force login protection, password hashing, and key stretching. Is there anything I missed?
Besides, is a firewall really necessary in my case? I just feel like everything I want to do can be accomplished by the server itself, so is there really a need for a software/hardware firewall?
To be completely blunt, you should probably hire a security professional to assess and make recommendations about your site. Alternatively, a part or full-time network administrator with security experience/certifications might be a good hire.
I recommend the "don't do it yourself" approach not because I want to increase work for my peers, or because I don't believe you are a fully competent individual. Rather, I recommend it because security is really, really hard to get right, and any site that handles money is an ideal target for every attacker out there. From a professional perspective, you would be best served by getting an expert to secure your network, perhaps on an ongoing basis; this is a situation that security professionals are very used to, and very well equipped to handle. From a legal perspective, getting an expert opinion on such a sensitive matter is essential due diligence, and trying to do it entirely on your own opens you to significant liability if your system gets breached and attackers carry off your customers' data. That outcome, as your business grows and you gain more visibility online, only becomes more likely without ongoing, professional help.
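To illustrate the password hashing and key stretching the question mentions, here is a minimal sketch using only Python's standard library. The iteration count is an illustrative assumption, not a recommendation from the answer above.

    # Sketch: salted, key-stretched password hashing with PBKDF2 (stdlib only).
    # The iteration count is illustrative; tune it to your own hardware.
    import hashlib, hmac, os

    def hash_password(password, iterations=200_000):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return salt, iterations, digest

    def verify_password(password, salt, iterations, expected):
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
        return hmac.compare_digest(digest, expected)  # constant-time compare

    salt, n, stored = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, n, stored)

The salt defeats precomputed rainbow tables, and the iteration count (the "stretching") makes each brute-force guess expensive.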

Is showing TCM ID on the public website a security issue?

Is showing the TCM ID in the SiteEdit instructions on the public website a security issue? My thought is that it should not be an issue since Tridion is behind the firewall. I want to know the experts' opinion.
I think you're asking the wrong question here. It is not important whether those SiteEdit instructions are a security risk; they should only be present on the publishing target(s) where you use SiteEdit. On any other target they just needlessly increase the page size and expose implementation details that are not relevant to the visitors of that target.
So unless you enable SiteEdit on your public web site (highly unlikely), the SiteEdit instructions should not be in the HTML.
It depends on the level of security you require. In principle, your security should be so good that you don't rely on "security by obscurity". You should have modelled every threat, and understood it, and designed impregnable defences.
In real life, this is a little harder to achieve, and the focus is more on what is usually described as "security in depth". In other words, you do your best to have impregnable defences, but if some straightforward disciplines will make it more difficult for your attacker, you make sure that you go to that effort as well. There is plenty of evidence that the first step in any attack is to try to enumerate what technology you are using. Then if there are any known exploits for that technology, the attacker will attempt to use them. In addition, if an exploit becomes known, attackers will search for potential victims by searching for signatures of the compromised technology.
Exposing TCM URIs in your public-facing output is as good as telling an attacker that you are using Tridion. So, for that matter, is exposing SiteEdit code. If you use Tridion, it is utterly unnecessary to do either of these things. You can simply display a web site that gives no clues about its implementation. (The ability to avoid giving these clues will be a hard requirement for many large organisations choosing a WCMS, and Tridion's strength in this regard may be one of the reasons why the organisation you work for chose to use it.)
So while there is nothing in a TCM URI which of itself causes a security problem, it unnecessarily gives information to potential attackers, so yes, it is a security issue. Financial institutions, government organisations, and large corporations in general, will expect you to do a clean implementation that gives no succour to the bad guys.
I would argue that it does not really present an issue. If there are holes in the firewall that can be breached, an attacker may find a way to get through regardless. The fact that there is a Tridion CMS installation behind the firewall is somewhat irrelevant.
Whether you have the URIs in your source code or not, your implementation should be secured well enough that the knowledge gained by knowing that you have a Tridion CMS is of no value to a hacker.

Corporate Espionage of Website Source Code

This may not be the most technical question, but I was just interested, nonetheless...
How does a giant company like Google keep its code from being stolen by employees? Maybe I'm wrong, but I would assume that the source code to their search algorithms (amongst other things) would be valuable to their competitors (e.g. Microsoft).
I guess I can best phrase it like this:
What's keeping an unscrupulous employee who has sufficient clearance from accessing Google's code repository for a specific project, copying significant amounts of code to a flash drive, and taking it to their competitors? Fear of being sued?
Things within a company like Google are also compartmentalized, so not everybody has access to all the code. If someone has access to code, you can bet that Google knows when they access it. I'm sure they have some kind of algorithm that watches for somebody downloading a lot of files very fast. The search engine obviously isn't a small file; it is a gigantic application.
All this would allow them to track who has stolen code from within. There is also the fact that any self-respecting company, or any company with something to lose (e.g. Microsoft), would not take anything like this from somebody. They would probably even tell Google about it.
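As a hedged sketch of the rate-based flagging described above, here is the kind of check such a system might run; the window size, threshold, and alert function are all invented for illustration.

    # Sketch: flag users who read an unusual number of files in a short window.
    # Threshold and window are invented; a real system would baseline per user.
    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 60
    MAX_READS_PER_WINDOW = 200  # hypothetical threshold

    reads = defaultdict(deque)  # user -> timestamps of recent file reads

    def record_read(user):
        now = time.time()
        q = reads[user]
        q.append(now)
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) > MAX_READS_PER_WINDOW:
            alert(user, len(q))

    def alert(user, count):
        print(f"ALERT: {user} read {count} files in the last {WINDOW_SECONDS}s")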
It is called protocol: only a few people get to know the code, and each of those few has to tell the others a major, very embarrassing secret. Then nobody can leak the code, or else they get outed in public themselves. The secret can range from something as simple as what they secretly like, given how bashful they are about it, all the way up to having killed somebody.
Many employers, including one that I've worked for, completely block flash drives.
In many cases, though, this is to protect non-technical confidential information.
Companies that are serious about protecting their assets will have access logging on their core systems and active scanning to detect suspicious patterns. Similar security is implemented for employees of government agencies (e.g. tax, social security) holding sensitive personal information. Users who access data outside of their assigned cases can be flagged and investigated.
I suspect (but don't know) that similar scanning could be implemented in high value source code repositories.
Some organizations block the use of removable media (it has been reported that some agencies have reacted to Wikileaks with such policies), in some cases by physically gluing up the USB/media ports. This restricts potential thieves to network transfers of material, which can be scanned.
I think companies such as Google will implement access control on their source code repository / version control system, so each employee can only access the source code they are involved with, and their access to a previous repository can be revoked when they are assigned to a different project. It's the same as with normal internal documents: would a security-conscious company let any employee download documents freely?
I think codethis hit the nail on the head. Some fly-by-night operation might be interested, but Microsoft, Yahoo, etc. wouldn't touch stolen code with a ten-foot pole, and the fly-by-night wouldn't have the infrastructure. Even if you didn't tell anybody it was stolen, it's not like you could get away with walking into a company with an entire spider/search algorithm on your thumb drive and declaring you wrote it last week.
The bigger threat is details of the search algorithm getting out. SEOers, as a whole, are rather shady - and many would kill for solid facts about how the algorithm ranked or downranked pages. Even then, Google has demonstrated the ability to change their ranking algorithms so quickly that it wouldn't much matter.
On the other hand, Google doesn't have that much super-secret code. Most of their cool stuff (MapReduce et al.) is publicly available (see Hadoop). This question is probably more applicable to a company like Adobe. Some of their Photoshop algorithms are really cool, and it would probably hurt them if those got out; but again, no legitimate company would touch it.

Website hacking - why is it always possible?

We know that any executable file can be reverse engineered (disassembled, decompiled). No matter how strong the security you implement, if crackers want to crack it, they will! It is just a question of time.
What about websites? Can we say that a website can be completely safe from hacker attacks (assuming the hosting is not vulnerable)? If not, what is the reason?
Yes, it is always possible. There is always a way in.
It's like my grandfather always said:
Locks are meant to keep the honest people out
Can we say that a website can be completely safe from hacker attacks?
No. Even the most secure technology in the world is vulnerable to social engineering attacks, for one thing.
You can easily write a webapp that is mathematically proven to be secure... but that proof will only hold as long as the underlying operating system, interpreter/compiler, and hardware are secure, which is never the case.
The key thing to remember is that websites are usually part of a huge and complex system and it doesn't really matter if the hacker enters the system through the web application itself or some other part of the entire infrastructure. If someone can get access to your servers, routers, DNS or whatever, they can bring down even the best web application. In my experience a lot of systems are vulnerable in some way or another. So "completely secure" means either "we're trying really hard to secure the platform" or "we have no clue whatsoever, but we hope everything is okay". I have seen both.
To sum up and add to the posts that precede:
Web as a shared resource - websites are useful only so long as they are accessible. Render the web site inaccessible, and you've broken it. Denial-of-service attacks, which amount to flooding the server so that it can no longer respond to legitimate requests, will always be a factor. It's a game of keep-away: big server sites find ways to distribute, hackers find ways to deluge. (A minimal rate-limiter sketch follows this answer.)
Dynamic data = dynamic risk - if the user can input data, there's a chance for a hacker to be a menace. Today the big concepts are cross-site scripting and SQL injection (see the parameterized-query sketch after this list), but once one avenue for cracking is figured out, chances are high that another mechanism will arise. You could, conceivably, argue that a totally static site can be secure from this, but then how many useful sites fit that bill?
Complexity = the more complex, the harder to secure - given the rapid change of technology, I doubt that any web developer could say with 100% confidence that a modern website was secure; there's too much unknown code. Setting the host aside (the server, network protocols, OS, and maybe the database), there are still all the great new libraries in Java EE and .NET. And even a less enterprise-y architecture will have some serious complexity that makes knowing all potential inputs and outputs of the code prohibitively difficult.
The authentication problem = by definition, the web site lets a remote user do something useful on a server that is far away. Knowing and trusting the other end of the communication is an old challenge. These days server-side authentication is relatively well implemented and understood, and (so far as I know!) no one has managed to hack PKI. But getting user authentication ironed out is still quite tricky. It's doable, but it's a tradeoff between difficulty for the user, difficulty of configuration, and a higher risk of vulnerability. And even a strong system can be broken when users don't follow the rules or when accidents happen. None of this applies if you want to make a public site open to all users, but that severely limits the features you'll be able to implement.
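As referenced in the "dynamic data" point above, here is a minimal sketch of why parameterized queries close off SQL injection; the schema is hypothetical and SQLite stands in for any database.

    # Sketch: the same lookup done unsafely and safely (hypothetical schema).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    name = "alice' OR '1'='1"  # attacker-controlled input

    # Vulnerable: string interpolation lets the input rewrite the query.
    rows = conn.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()
    print("interpolated:", rows)   # leaks every user's secret

    # Safe: the driver treats the input strictly as data, never as SQL.
    rows = conn.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
    print("parameterized:", rows)  # no rows match the malicious string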
I'd say that web sites simply change the nature of the security challenge from the challenges of client side code. The developer does not need to be as worried about code replication, but the developer does need to be aware of the risks that come from centralizing data and access to a server (or collection of servers). It's just a different sort of problem.
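And as a tiny illustration of the keep-away game in the denial-of-service point above, here is a token-bucket rate limiter of the sort servers use to shed floods; the rate and burst values are invented.

    # Sketch: per-client token bucket, one small piece of DoS mitigation.
    # Rate and burst values are illustrative only.
    import time

    class TokenBucket:
        def __init__(self, rate=10.0, burst=20):
            self.rate, self.burst = rate, burst  # refill tokens/sec, bucket cap
            self.tokens, self.last = float(burst), time.monotonic()

        def allow(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # drop or delay this request

    bucket = TokenBucket()
    results = [bucket.allow() for _ in range(50)]
    print(results.count(True))  # roughly the burst size passes at once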
Websites suffer greatly from injection and cross-site scripting attacks:
Cross-site scripting carried out on websites were roughly 80% of all documented security vulnerabilities as of 2007
Also, part of a website (in some web sites a great deal) is sent to the client in the form of CSS, HTML and JavaScript, which is open to inspection by anyone.
Not to nitpick, but your definition of "good hosting" does not assume the HTTP service running on the host is completely free from exploits.
Popular web servers such as IIS and Apache are often patched in order to protect against such exploits, which are often discovered the same way exploits in local executables are discovered.
For example, a malformed HTTP request could cause a buffer overrun on the server, leading to part of its data being executed.
It's not possible to make anything 100% secure.
All that can be done is to make something hard enough to break into that the time and effort spent doing so make it not worth doing.
Can I crack your site? Sure: I'll just hire a few suicide bombers to blow up your servers. Or I'll blow up the power plants that power your site, or do some sort of social engineering. And DDoS attacks would quite likely be effective at a large scale, not to mention atom bombs...
Short answer: yes.
This might be the wrong website to discuss that. However, it is widely known that security and usability are inversely related. See this post by Bruce Schneier for example (which refers to another website, but on Schneier's blog there is a lot of interesting reading on the issue).
Assuming the server itself isn't compromised, and has no other clients sharing it, static code should be fine. Things usually only start to get funky when there's some sort of scripting language involved. After all, I've never seen a compromised "It Works!" page.
Saying 'completely secure' is a bad sign, as it implies one of two things:
there has not been a proper threat analysis, because 'secure enough' would be the correct term; or
since security is always a tradeoff, a system that really is completely secure will have abysmal usability, and the site will be a huge resource hog, because security has been taken to insane levels.
So instead of trying to achieve "complete security" you should:
Do a proper threat analysis
Test your application (or have someone professional test it) against common attacks
Apply best practices, not extreme measures
The short of it is that you have to strike a balance between ease of use and security, much of the time, and decide what provides the optimal level of both for your purposes.
An excellent case in point is passwords. The easy way to go about it is to just have one, use it everywhere, and make it something easy to remember. The secure way to go about it is to have a randomly generated variable-length sequence of characters across the encoding spectrum that only the user himself knows.
Naturally, if you go too far on the easy side, the user's data is easy to pick off. If you go too far on the side of security, however, practical application could end up compromising the added value of the security measures (e.g. people can't remember their whole keychain of passwords and corresponding user names, and therefore write them all down somewhere; if the list is compromised, the security measures that had been put into place are for naught). Hence, most of the time a balance gets struck: sites ask that you put a number in your password and tell you not to do anything stupid like tell it to other people.
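For the "secure way" end of that spectrum, here is a minimal sketch of generating a random password with Python's secrets module; the length and alphabet are arbitrary illustrative choices.

    # Sketch: a randomly generated password (stdlib secrets module).
    # Length and character set are arbitrary illustration choices.
    import secrets
    import string

    def random_password(length=20):
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_password())  # different every run

Which is exactly the kind of password a user cannot memorize, hence the written-down keychains the paragraph above describes.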
Even if you remove the possibility of a malicious person with the keys to everything leaking data from the equation, human stupidity is infinite. There is no such thing as 100% security.
Can we say that a website can be completely safe from hacker attacks (assuming the hosting is not vulnerable)?
Well if we're going to start putting constraints on the attacker, then of course we can design a completely secure system: we just have to bar all of the attacker's attacks from the scenario.
If we assume the attacker actually wants to get in (and isn't bound by the rules of your engagement), then the answer is simply no, you can't be completely safe from attacks.
Yes, it's possible for a website to be completely secure, for a reasonable definition of 'complete' that includes your original premise that the hosting is not vulnerable. The problem is the same as with any software that contains defects; people create software of a complexity that is slightly beyond their capability to manage and thus flaws remain undetected until it's too late.
You could start smaller and prove all your work correct and safe as you construct it, remaking any off-the-shelf components that haven't been designed to that stringent degree of quality, but unfortunately that leaves you at a massive commercial disadvantage compared to the people who can write 99% safe software in 1% of the time. Therefore there's rarely a good business reason for going down this path.
The answer to this question lies close to ideas from computability theory that arise from the halting problem (http://en.wikipedia.org/wiki/Halting_problem). To wit, if you could devise a way to programmatically determine whether any particular program was secure, you would be close to disproving the undecidability of the halting problem on the class of machines you were working with. Since the undecidability of the halting problem has been proven, we know that over Turing machines you would be unable to decide security in general, because the halting problem reduces to the problem of deciding security. Even for finite machines you might be able to enumerate all of the states of the program, but Minsky would tell us that the time required to build a complete state tree for even simplistic modern machines and web servers would be enormous. You may know a lot about a specific piece of code, but as soon as you change or update it, a complete retest is required. Fundamentally this is interesting because it all boils back to the concepts of information and meaning. Read about automated theorem proving to understand more about the limits of computational systems (http://en.wikipedia.org/wiki/Automated_theorem_proving).
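A hedged sketch of the reduction the answer gestures at: if a total, always-correct is_secure decider existed, the halting problem would be decidable. Everything here (run, open_backdoor, is_secure) is hypothetical pseudo-API, and is_secure is stubbed because, by this very argument, it cannot be implemented.

    # Sketch of the reduction: a perfect security decider would solve halting.
    # run(), open_backdoor() and is_secure() are hypothetical; is_secure is
    # stubbed because the argument shows no correct implementation can exist.
    def make_candidate(program, program_input):
        # A program that is insecure if and only if `program` halts on its input.
        return f"run({program!r}, {program_input!r})\nopen_backdoor()"

    def is_secure(source):
        raise NotImplementedError("no total, correct security decider can exist")

    def halts(program, program_input):
        # If is_secure were total and correct, this would decide halting.
        return not is_secure(make_candidate(program, program_input))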
The fact is that hackers are always one step ahead of developers; you can never consider a site to be bulletproof and 100% safe. You just avoid malicious input as much as you can!
In fact, you should follow a whitelist approach rather than a blacklist approach when it comes to security (a minimal sketch follows).
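A minimal sketch of the whitelist idea: accept only inputs matching an explicitly allowed pattern, instead of trying to enumerate everything bad. The username rule here is hypothetical.

    # Sketch: whitelist validation. Accept only a known-good pattern
    # rather than blacklisting known-bad characters. The rule is hypothetical.
    import re

    USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")  # explicit allowed shape

    def valid_username(value):
        return bool(USERNAME_RE.fullmatch(value))

    print(valid_username("alice_01"))        # True
    print(valid_username("alice'; DROP--"))  # False: not on the whitelist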

What is the best way to stop an application being copied and used without the owner’s permission?

What is the best way to prevent an application from being copied and used without the owner's knowledge?
Is there any way to trace usage? Meaning, the application periodically communicates back with enough information that we can know where it is and whether it is licensed. The next step, of course, would be to shut it down if it's not legitimate.
Software that "phones home" will be quickly shunned by the vast majority of your users. Just license it appropriately and sell it.
People who use your software professionally will either pay for it or they won't use it. Corporations tend to frown on potential lawsuits.
People who want to use your software without paying for it will continue to do so despite your best efforts to counteract them. Once the software is in their hands, it is out of yours. Without pissing off your users, your only recourse is a legal one.
If your product is priced reasonably, some people will pay for it and some won't. That is just something you need to deal with upfront and it should be factored into your business plan.
Don't do this, don't attempt it, don't even think about it.
This is a battle you can't win. If people want to pirate your software, they will. You'll be shamed by the fact that a smart reverse engineer can write a one-byte binary patch to subvert all your protection schemes.
The people who are going to pirate your software will do so and all these "security features" you build in will likely end up only inconveniencing your true supporters: the people who have legitimately purchased your software. These draconian DRM / anti-piracy schemes only build resentment among software users.
Hardware dongles are the best way if you are really concerned about piracy, IMO. Check out the big industrial CAD/CAM packages worth thousands or tens of thousands, or the AV/music-production software; virtually all of them have dongle protection. Dongles can be emulated or reversed, but not without a significant investment of time, a lot more than just changing a few JEs to JNEs in your assembly.
Phoning home is not the way to go unless you are providing a service that requires a subscription and constant updates (like antivirus products, for example) as part of your business model. You need to have a bit of respect for your users and their privacy. You might have perfectly innocent intentions, but what if a court ordered your company to hand over that information (as the US government is doing with Google and its search terms); would/could you fight it? What if, some time in the future, you sold your company and the new owners decided to sell all that historic information to a marketing company? Privacy is not just about trusting a company not to abuse your data; it is trusting that company to go out of its way to protect your data, which is pretty far down the list of priorities for most companies. So basically, monitoring users is not really a good path to go down.
The best (and pretty much only) way to reliably prevent piracy is to have a client/server application instead of a standalone one, where a non-trivial part of the work is done by the server and users need to register. Then you can at least detect and block simultaneous use of the same account, as the sketch below shows.
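A hedged sketch of that last point: a server-side check that refuses a second simultaneous session on one account. The in-memory store and timeout are invented for illustration; a real service would persist sessions.

    # Sketch: detect simultaneous use of one account on the server side.
    # The in-memory store and timeout value are invented for illustration.
    import time

    ACTIVE_TIMEOUT = 300  # seconds without activity before a session expires
    active = {}           # account -> (session_id, last_seen)

    def start_session(account, session_id):
        now = time.time()
        current = active.get(account)
        if current and current[0] != session_id and now - current[1] < ACTIVE_TIMEOUT:
            return False  # another live session holds this account: block it
        active[account] = (session_id, now)
        return True

    print(start_session("acme-corp", "sess-1"))  # True
    print(start_session("acme-corp", "sess-2"))  # False: simultaneous use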
There are several approaches you could take, but there are three that will be vastly more effective that any of the others.
A. Don't create it.
Software that doesn't exist never suffers from unauthorized use.
B. Don't release it.
If you have the only copy, and you keep it that way, then the chances are exceedingly good that there will be no unauthorized use.
C. Give everyone permission to use it.
If you don't want anyone to use it without permission, then you can give everyone permission and there will be no unauthorized users.
There is a possibility to trace usage: you can accomplish this by having your tool phone home and send the information you need. The problem with this is that, first, nobody likes software that phones home for this purpose, and second, a simple application-level gateway can block the application from phoning home! What you describe in your question is a common problem for software distributors, and it's not an easy one to solve!
There's another thing I haven't seen mentioned yet: you could add loads of settings to the application's configuration file and ship with ridiculous defaults, then do the installation and configuration personally, so that no one but you is able to figure out how everything should be set. This can be a major put-off for people who are just trying out whether a copy is enough. (Be sure to add settings that depend on all sorts of system settings, like which OS-version-related DLL versions should be loaded, etc.) Not very user-friendly though ;-)
