Security issues and risks concerning debugging production environments

I am doing some research on security vulnerabilities and risks related to debugging production environments, and I would like to get your opinions about the possible risks of such environments.
By debugging I mean not only inspecting software with a debugger, but all kinds of debugging techniques: logging, testing, reviewing code and, especially, post-mortem debugging using mini-dumps. I am particularly interested in general issues and in issues related to the .NET Framework. I would also like to hear about other risks concerning the bug management process.
In the answer below I have also posted my current research results.
For further investigation I found these related posts:
What's the risk of deploying debug symbols (pdb file) in a production environment?
Good processes for debugging production environment? Copying data to Dev?
Which are the dangers of remote debugging?

1) The most obvious issue is private data exposure. Using a debugger we have access to all data that was previously loaded into process memory, which means we bypass the access control logic built into the software. In many countries there are also legal issues with exposing private data to unauthorized people.
This is also a concern with logging: we should be careful about what information we log, so that we have enough data to investigate the cause of a bug but do not store sensitive data (financial records, health-care records) in the logs. There is also the more general issue that the security applied to log files is usually not consistent with the security applied to the production database.
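One common mitigation is to redact sensitive values before they ever reach the log sink. Below is a minimal sketch of the idea in C#; the LogRedactor class and the card-number pattern are my own illustration, not part of any particular logging framework.

```csharp
using System;
using System.Text.RegularExpressions;

// Minimal log-redaction sketch (illustrative; not from a real logging library).
public static class LogRedactor
{
    // Example pattern: anything that looks like a 16-digit payment card number.
    private static readonly Regex CardNumber =
        new Regex(@"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b");

    // Mask the sensitive value before the message is written anywhere.
    public static string Redact(string message) =>
        CardNumber.Replace(message, "****-****-****-****");
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(LogRedactor.Redact(
            "Payment failed for card 4111 1111 1111 1111"));
        // Prints: Payment failed for card ****-****-****-****
    }
}
```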
.NET addresses this issue with the SecureString class, but it does not eliminate the problem; it only minimizes the length of time the data is exposed. To process the data we have to obtain the plain string value at some point, so if a memory dump is taken while that processing is in progress, the secret will be exposed in the dump file. Another way to address this issue is to prevent developers from accessing production data at all, by anonymising data before copying it to local environments.
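The following sketch shows both sides of that trade-off: SecureString keeps the secret encrypted while at rest in memory, but the moment it is marshalled out for processing, a plaintext copy exists that would show up in a dump taken at that instant.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Security;

public class Program
{
    public static void Main()
    {
        using (var secret = new SecureString())
        {
            foreach (char c in "p@ssw0rd") secret.AppendChar(c);
            secret.MakeReadOnly();

            // To actually use the value we must decrypt it. Between these two
            // calls the plaintext exists in unmanaged memory, so a mini-dump
            // taken now would expose it.
            IntPtr ptr = Marshal.SecureStringToGlobalAllocUnicode(secret);
            try
            {
                string plaintext = Marshal.PtrToStringUni(ptr);
                Console.WriteLine($"Processing {plaintext.Length} characters");
                // Note: 'plaintext' is an ordinary managed string and lingers
                // on the heap until the GC collects it.
            }
            finally
            {
                // Zero and free the unmanaged copy as early as possible.
                Marshal.ZeroFreeGlobalAllocUnicode(ptr);
            }
        }
    }
}
```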
2) Another issue is the risk of introducing new defects while investigating and fixing reported bugs. The bug-fixing process tends to be more ad hoc than the normal development process, and for understandable reasons: bugs in production can cost the company money, so there is pressure to fix them quickly.
The solution here is to maintain the same quality procedures that are applied during new feature development.

Related

Is it possible that the popular applications on my laptop are surveilling the files on my hard drive?

What if I develop a desktop application which a million people will use, and behind the scenes the application is surveilling users' files on their hard drives, streaming the data from time to time?
Can one be assured that no such thing happens with any popular software application, be it MS Office or Google Chrome?
Or is this just a stupid question?
Is it technically possible? Yes, it is.
Could it be happening in an application used by a million users for a relatively long time without being noticed? Very unlikely. Somebody would notice the strange network traffic eventually.
Also, @Mjh mentioned open source in a comment. While open source can help by allowing people to audit the source code, how many times have you checked that the binary you are using was actually compiled from the source you were looking at? Of course, there are signatures on binary packages and so on, but the signature is made by the package maintainer. There is inherent trust not only in the developer of the application, but also in the tool chain that creates a binary package from the source code. And then we haven't even talked about strange "bugs", or the fact that even in open source some security issues are very hard to find (otherwise all open source software would be free of security bugs, which it is not).
So back to your question: sure, you could use all kinds of techniques to monitor the behavior of an application; you could monitor memory access, network traffic, whatever else. You could also analyse the code itself and look for suspicious things. It would take a huge amount of effort, and still there would be no 100% guarantee, only some level of assurance.
Automated version upgrades could make detection even harder by the way. Even if you put lots of resources into analysis of one version, what if only a short-lived version had malicious code? Sure, that too can be analysed, but would anyone bother, unless there was a good reason (like indications of something malicious)?
Yet I think you can be pretty sure that major vendors don't do this. It's just not worth it for them, why would they? Their risk would be huge, with a relatively low benefit.

Jenkins security as an open-source tool

I work in a corporate development environment that is fairly risk-averse where management is often afraid of change. I've prototyped out how a Jenkins solution for our development team might work, and highlighted some success stories where the pilot implementation has helped, but the time has now come to get it approved to a wider audience and in a more permanent way, and some security concerns have been raised.
Primarily, the concerns so far have focused on the fact that the tool is open-sourced and the plugins are open-sourced and made by community contributors, so management is concerned that somebody could insert malicious code that would go unnoticed by us when we update. My opinion is that if so many other places can make Jenkins work, we probably can too, but that is not necessarily a very compelling argument to our security testing team.
My question is: can anybody tell me how they have secured their own Jenkins implementations, or what specific Jenkins capabilities (sandboxing, etc.) are in place to prevent malicious code from being executed on our systems?
Using 3rd party components, whether in your software or your infrastructure, will always carry risk. One very important thing to note is that open source is not inherently less secure than closed source. While almost anybody can contribute code to an open source project, in most cases there is review before it actually makes its way into the project. Of course, a vulnerability may slip through, but how is that different from a software company with lots of developers? A vulnerability may slip through there too, and based on the experience of many of us, it quite often does. :) And in the case of closed source, you don't even have the power of a diverse community to spot such security flaws; the best you can rely on are 3rd party penetration tests or code scans, both of which miss many issues.
In the case of such a well-established project as Jenkins, you can be pretty sure that there is a lot of scrutiny on its security, probably more than for any closed source commercial tool you may currently have.
As with any 3rd party component, you should still exercise due diligence. Have a look at online vulnerability databases like NVD regularly to find security issues, and install updates as they come out to mitigate the risk. You should do the same for closed source components too.
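That monitoring can be automated. The sketch below polls the public NVD REST API for Jenkins-related CVEs; the endpoint and parameter names follow the NVD 2.0 API documentation as I recall it, so verify them against the current docs (and note that unauthenticated requests are rate-limited).

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        // Query NVD for recent Jenkins CVEs. Endpoint/parameters are assumed
        // from the NVD 2.0 API docs; check nvd.nist.gov before relying on them.
        using var client = new HttpClient();
        string url = "https://services.nvd.nist.gov/rest/json/cves/2.0"
                   + "?keywordSearch=jenkins&resultsPerPage=5";

        string json = await client.GetStringAsync(url);

        // A real job would parse the JSON and alert on new entries; here we
        // just show that a response came back.
        Console.WriteLine(json.Substring(0, Math.Min(500, json.Length)));
    }
}
```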
As for how to secure a Jenkins installation, an answer here is not the right format I think, but there is a whole set of pages on their website dedicated to the topic.
Having said all this and looking at past vulnerabilities in Jenkins, there are quite a few. It's up to you (and your security department) to assess how exactly you would want to deploy Jenkins, and whether those past vulnerabilities are serious enough for you to think the whole tool is not adequate for your environment considering the way you want to deploy it. Again, it's the same process you would follow with a closed source tool too.

When Something Goes Wrong: Good contingency planning?

I work at a small firm with little technical skill/knowledge.
One colleague had a hard drive die without any backup, and we recently had a virus come through and infect our test server (the gumblar.cn one), which we may or may not have transferred to a client's server.
After these two events, management danced around promoting good practices to avoid future occurrences, for about a week.
Changing the company's culture to take this more seriously is one problem I'll try and deal with, but my question is...
What events should be planned for?
I suppose there are natural disasters, hardware failures, people quitting (bus factor?).
Here are some common things:
Shared Directories on a fault-tolerant server, to be used as a policy for user files & data (with appropriate security). Event = data loss limitation
Scheduled Backups of the server. Event = data loss limitation
Firewall Proxy with logging and intrusion detection. Event = data damage and theft
Enterprise Antivirus Software deployed on server and clients. Event = virus infection, data theft, system damage
Automated IT asset tracking software that reports on hardware and software changes happening on servers and clients. Event = data and hardware theft, unauthorised modification
Off-site Storage of data. Event = data loss limitation
Firefighting Equipment & automated firefighting mechanisms. Event = fire
Internet Filtering Proxy such as WebMarshall. Event = protection against "drive-by" infections and risks
etc., etc. You should be able to find much more comprehensive strategies and measures on the Internet.
Think for a while about which equipment and services you use and how likely it is that they will fail or become unavailable for a while. Build a list. Evaluate how likely each problem is to happen, how much it would cost you, and how much a backup solution costs. Then decide; the sketch below shows one simple way to structure that comparison.
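One simple way to structure that decision is annualized loss expectancy (ALE): the expected frequency of an event per year multiplied by the cost of one occurrence, compared against the cost of mitigating it. A minimal C# sketch, with every figure invented purely for illustration:

```csharp
using System;
using System.Collections.Generic;

public class Program
{
    // ALE = annual rate of occurrence (ARO) x single loss expectancy (SLE).
    private record Risk(string Name, double PerYear, double CostPerIncident,
                        double YearlyMitigationCost);

    public static void Main()
    {
        // All numbers below are made up for the sake of the example.
        var risks = new List<Risk>
        {
            new("Disk failure without backup", 0.5, 20_000, 1_500),
            new("Malware infection",           2.0,  5_000, 3_000),
            new("Office fire",                 0.01, 500_000, 4_000),
        };

        foreach (var r in risks)
        {
            double ale = r.PerYear * r.CostPerIncident;
            string verdict = ale > r.YearlyMitigationCost ? "mitigate" : "accept";
            Console.WriteLine($"{r.Name}: ALE = {ale:F0}, " +
                              $"mitigation = {r.YearlyMitigationCost:F0} -> {verdict}");
        }
    }
}
```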

Application Security Audit of a .NET Web Application?

Does anyone have suggestions for security auditing of a .NET web application?
I'm interested in all options. I'd like to be able to have something agnostically probe my application for security risks.
EDIT:
To clarify, the system has been designed with security in mind. The environment has been set up with security in mind. I want an independent measure of security, other than 'yeah, it's secure'... The cost of having someone audit 1M+ lines of code is probably more expensive than the development. It looks like there really isn't a good automated/inexpensive approach to this yet. Thanks for your suggestions.
The point of an audit would be to independently verify the security that was implemented by the team.
BTW, there are several automated hack/probe tools for probing applications/web servers, but I'm a bit concerned about whether they are worms or not...
The best thing to do:
Hire a security guy for source code analysis.
The second best thing: hire a security guy / pentesting company for black-box analysis.
The following tools will help:
Static analysis tools such as Fortify / Ounce Labs for code review
Consider solutions such as HP WebInspect's secure object (a VS.NET add-on)
A black-box application scanner such as Netsparker, AppScan, WebInspect, Hailstorm or Acunetix, or the free version of Netsparker
Hiring a security specialist is the better idea (though it will cost more), because they won't only find the injection and other technical issues that an automated tool might find; they will also find all the logical issues.
Anyone in your situation has the following options available:
Code Review,
Static Analysis of the code base using a tool,
Dynamic Analysis of the application at run time.
Mitchel has already pointed out the use of Fortify. In fact, Fortify has two products to cover the areas of static and dynamic analysis - SCA (static analysis tool, to be used in development) and PTA (that performs analysis of the application as test cases are executed during testing).
However, no tool is perfect, and you can end up with false positives (fragments of your code base that are not vulnerable will still be flagged) and false negatives. Only a code review can solve such problems, and code reviews are expensive: not everyone in your organization is capable of reviewing code with the eyes of a security expert.
To begin with, one can start with OWASP. Understanding the principles behind security is highly recommended before studying the OWASP Development Guide (3.0 is in draft; 2.0 can be considered stable). Then you can prepare to perform the first scan of your code base.
One of the first things that I have started to do with our internal application is use a tool such as Fortify that does a security analysis of your code base.
Otherwise, you might consider enlisting the services of a third-party company that specializes in security to have them test your application.
Testing and static analysis are a very poor way to find security vulnerabilities, and are really a method of last resort if you haven't thought about security throughout the design and implementation process.
The problem is that you are then trying to enumerate all of the ways your application could fail and deny those (by patching), rather than trying to specify what your application should do and prevent everything that isn't that (by defensive programming). Since your application probably has infinite ways to go wrong and only a few things that it is meant to do, you should take a 'deny by default' approach and allow only the good stuff.
To put it another way, it's easier and more effective to build in controls that prevent whole classes of typical vulnerabilities (for examples, see OWASP as mentioned in other answers), no matter how they may arise, than it is to go looking for the specific screwup some version of your code has. You should be trying to show the presence of good controls (which can be done), rather than the absence of bad stuff (which can't).
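As a tiny illustration of 'deny by default' at the input layer, the sketch below specifies what a valid value looks like and rejects everything else, instead of trying to enumerate known-bad inputs (the class and pattern are invented for the example):

```csharp
using System;
using System.Text.RegularExpressions;

// Allow-list validation: describe the good input, deny everything else.
public static class UsernameValidator
{
    // Valid usernames: 3-20 lowercase letters or digits. Anything outside
    // this specification is rejected, whatever attack it might carry.
    private static readonly Regex Allowed = new Regex("^[a-z0-9]{3,20}$");

    public static bool IsValid(string input) =>
        input != null && Allowed.IsMatch(input);
}

public class Program
{
    public static void Main()
    {
        Console.WriteLine(UsernameValidator.IsValid("alice42"));         // True
        Console.WriteLine(UsernameValidator.IsValid("alice'; DROP --")); // False
    }
}
```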
If you get somebody to review your design and security requirements (what exactly are you trying to protect against?), with full access to code and all details, that will be more valuable than some kind of black box test. Because if your design is wrong then it won't matter how well you implemented it.
We have used Telus to conduct Pen Testing for us a few times and have been impressed with the results.
May I recommend you contact Artec Group, Security Compass and Veracode and check out their offerings...

Security Testing Types

What are the different types of Security Testing?
We have a fairly full list which is discussed over on Security Stack Exchange here and here.
Discovery
The purpose of this stage is to identify systems within scope and the services in use. It is not intended to discover vulnerabilities, but version detection may highlight deprecated versions of software / firmware and thus indicate potential vulnerabilities.
Vulnerability Scan
Following the discovery stage, this looks for known security issues by using automated tools to match conditions with known vulnerabilities. The reported risk level is set automatically by the tool, with no manual verification or interpretation by the test vendor. This can be supplemented with credential-based scanning, which looks to remove some common false positives by using supplied credentials to authenticate with a service (such as local Windows accounts).
Vulnerability Assessment
This uses discovery and vulnerability scanning to identify security vulnerabilities and places the findings into the context of the environment under test. An example would be removing common false positives from the report and deciding risk levels that should be applied to each report finding to improve business understanding and context.
Security Assessment
Builds upon Vulnerability Assessment by adding manual verification to confirm exposure, but does not include the exploitation of vulnerabilities to gain further access. Verification could be in the form of authorised access to a system to confirm system settings and involve examining logs, system responses, error messages, codes, etc. A Security Assessment is looking to gain a broad coverage of the systems under test but not the depth of exposure that a specific vulnerability could lead to.
Penetration Test
Penetration testing simulates an attack by a malicious party. It builds on the previous stages and involves exploitation of found vulnerabilities to gain further access. Using this approach will result in an understanding of the ability of an attacker to gain access to confidential information, affect data integrity or availability of a service, and the respective impact. Each test is approached using a consistent and complete methodology, in a way that allows the tester to use their problem-solving abilities, the output from a range of tools and their own knowledge of networking and systems to find vulnerabilities that would or could not be identified by automated tools. This approach looks at the depth of attack, as compared to the Security Assessment approach that looks at broader coverage.
Security Audit
Driven by an Audit / Risk function to look at a specific control or compliance issue. Characterised by a narrow scope, this type of engagement could make use of any of the earlier approaches discussed (vulnerability assessment, security assessment, penetration test).
Security Review
Verification that industry or internal security standards have been applied to system components or a product. This is typically completed through gap analysis and utilises build / code reviews or reviews of design documents and architecture diagrams. This activity does not utilise any of the earlier approaches (Vulnerability Assessment, Security Assessment, Penetration Test, Security Audit).
Risk assessment - creating a threat model and defining what will be tested.
Security auditing - using the threat model to probe the system design.
Vulnerability scanning - using software to probe the system implementation.
Penetration testing - trying to hack into the system, either externally or internally.
Operational testing - some or all of the above after the system is in production.
Vulnerability Scanning - Typically an automated procedure to scan one or more systems against known vulnerability signatures.
Security Scanning - This is a vulnerability scan plus a manual verification of the findings to help remove false positives/ negatives.
Penetration Testing - A tester will attempt to gain access and prove access to the system owner.
Risk Assessment - a security analysis based on interviews with employees, combined with business and industry justifications for the risks discovered.
Security Auditing - Typically an in-depth auditing of software code and/or Operating Systems. This is often a very thorough line-by-line inspection of code.
Ethical Hacking - This is very similar to a penetration test, but it usually consists of many such tests against a number of systems in order to discover as many attack vectors as possible.
Posture Assessment and Security Testing - This combines security scanning, ethical hacking and risk assessments to show the overall security posture of the organization.
Each of these security testing types can be further sub-categorized by different methodologies.
Penetration can be of different types, broadly categorized as follows:
Web parameter tampering: The user manipulates parameters exchanged between client and server and modifies application data such as user credentials, permissions, or the price or quantity of products for their own benefit (a defensive sketch follows after this list).
Database Tampering: compromising the databases that support the system and store data critical for the business or the running of the app.
Cookie Stealing: a valid computer session is exploited to gain unauthorized access.
Cross-site Scripting: an attacker injects malicious scripts into client-side code, for example to redirect users to a malicious site.
Cross-site Request Forgery: also called a one-click attack or session riding; unauthorized commands are submitted on behalf of a user whom the web application trusts.
Privilege Escalation: hacking into an account with higher privileges (a senior employee's, say) and misusing those privileges.
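The common defence against the first item, parameter tampering, is to treat every client-supplied value as untrusted and re-derive anything security-relevant on the server. A hypothetical C# sketch (the OrderService class and its catalogue are invented):

```csharp
using System;
using System.Collections.Generic;

// Sketch of server-side defence against web parameter tampering: the price
// the client submits is ignored; the authoritative value lives server-side.
public class OrderService
{
    private static readonly Dictionary<string, decimal> Catalogue = new()
    {
        ["book"] = 25.00m,
        ["pen"]  =  2.50m,
    };

    public decimal PlaceOrder(string productId, int quantity, decimal clientPrice)
    {
        if (!Catalogue.TryGetValue(productId, out decimal serverPrice))
            throw new ArgumentException("Unknown product");
        if (quantity < 1 || quantity > 100)
            throw new ArgumentException("Invalid quantity");

        // clientPrice is deliberately unused: a tampered value changes nothing.
        return serverPrice * quantity;
    }
}
```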
Let’s break down security testing into its constituent parts by discussing the different types of security tests that you might perform.
Static code analysis
Static code analysis is perhaps the first type of security testing that comes to mind; it is also the oldest form.
Static code analysis involves reviewing source code to identify problems that could lead to security breaches in an application (or in resources to which the application has access). Classic examples of vulnerabilities that you might be looking out for using this type of analysis are coding flaws that could enable buffer overflows or injection attacks.
It’s possible to perform some amount of static code analysis by hand, meaning that developers read through code manually to find security flaws. But that is often not practical to do on a large scale, given the size of many source code files; plus, humans can easily overlook flaws. That’s why using automated analysis tools to scan your source code is important.
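As a concrete example, below is the classic injection flaw such tools are built to flag, next to the parameterized fix. This is a hypothetical repository class in C#; it assumes the Microsoft.Data.SqlClient package (System.Data.SqlClient on older .NET Framework projects).

```csharp
using Microsoft.Data.SqlClient; // assumption: the Microsoft.Data.SqlClient package

public class UserRepository
{
    // VULNERABLE: user input concatenated into SQL. A static analyser flags
    // this as an injection sink fed by an untrusted source.
    public SqlCommand FindUserBad(SqlConnection conn, string name) =>
        new SqlCommand($"SELECT * FROM Users WHERE Name = '{name}'", conn);

    // FIXED: a parameterized query; the input can no longer alter the
    // structure of the SQL statement.
    public SqlCommand FindUserGood(SqlConnection conn, string name)
    {
        var cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn);
        cmd.Parameters.AddWithValue("@name", name);
        return cmd;
    }
}
```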
Penetration testing
Penetration tests involve simulating attacks against an application or infrastructure in order to identify weak points. For example, you could use a tool like nmap to attempt to connect to all endpoints on a network from a non-trusted host and see if any endpoints accept the connection; if they do, you probably want to make them stop accepting connections from arbitrary hosts.
Some folks might argue that penetration testing should be broken down into subcategories, since there are different types of penetration tests. Some focus on the network, some on applications, some on authentication gateways, some on databases, and so on.
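As a toy version of the connect-style probe described above, the sketch below attempts TCP connections to a few ports and reports which ones answer. The host and port list are placeholders, and such a scan should only ever be run against systems you are authorised to test.

```csharp
using System;
using System.Net.Sockets;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        string host = "host.example.internal"; // placeholder target
        int[] ports = { 22, 80, 443, 8080 };

        foreach (int port in ports)
        {
            using var client = new TcpClient();
            var connectTask = client.ConnectAsync(host, port);

            // Short timeout so filtered ports don't stall the whole scan.
            if (await Task.WhenAny(connectTask, Task.Delay(2000)) == connectTask)
            {
                try
                {
                    await connectTask; // observe any connection error
                    Console.WriteLine($"{host}:{port} open");
                }
                catch (SocketException)
                {
                    Console.WriteLine($"{host}:{port} closed");
                }
            }
            else
            {
                Console.WriteLine($"{host}:{port} filtered (timeout)");
            }
        }
    }
}
```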
Compliance testing
Compliance tests (which are sometimes called conformance tests) are used to assess whether a configuration, architecture or process meets an organization’s predefined policies. Compliance testing is not strictly limited to the realm of security; you could conceivably use compliance tests to help maintain standards for application performance or response time, for example.
However, when it comes to security, compliance tests are an important resource for ensuring that a given application’s configuration or deployment architecture meets minimum standards set by your organization. Compliance tests typically work by comparing actual configurations with those that are deemed to be safe. When the tests identify incongruity, admins know that there may be a security issue or other problem.
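At its simplest, such a compliance test is a diff between a policy baseline and the configuration actually deployed. A minimal sketch, with invented setting names and values:

```csharp
using System;
using System.Collections.Generic;

public class Program
{
    public static void Main()
    {
        // Policy baseline vs. actual configuration (keys/values are invented).
        var baseline = new Dictionary<string, string>
        {
            ["TlsMinimumVersion"] = "1.2",
            ["PasswordMinLength"] = "12",
            ["DebugMode"]         = "false",
        };
        var actual = new Dictionary<string, string>
        {
            ["TlsMinimumVersion"] = "1.2",
            ["PasswordMinLength"] = "8",
            ["DebugMode"]         = "true",
        };

        // Report every setting that deviates from policy.
        foreach (var (key, expected) in baseline)
        {
            actual.TryGetValue(key, out var current);
            if (current != expected)
                Console.WriteLine(
                    $"NON-COMPLIANT: {key} is '{current}', policy requires '{expected}'");
        }
    }
}
```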
Load testing
Load testing refers to tests that measure how an application or infrastructure performs under heavy demand. Load testing is not often thought of as a type of security test; it’s more commonly used to help optimize application performance and availability.
However, there is a reason why security admins might want to pay attention to load testing results, too. That reason is Distributed-Denial-of-Service, or DDoS, attacks, which aim to disrupt application availability by overwhelming an application or its host infrastructure with traffic or other requests.
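In its most basic form, a load test just fires many concurrent requests and measures what comes back. The toy sketch below does that against a placeholder URL; real load testing tools add ramp-up, think time and distributed load generators.

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

public class Program
{
    public static async Task Main()
    {
        // Only point this at systems you own; the URL is a placeholder.
        using var client = new HttpClient();
        string url = "https://staging.example.internal/health";
        int requests = 100;

        var sw = Stopwatch.StartNew();
        var tasks = Enumerable.Range(0, requests).Select(_ => client.GetAsync(url));
        HttpResponseMessage[] responses = await Task.WhenAll(tasks);
        sw.Stop();

        Console.WriteLine($"{requests} requests in {sw.ElapsedMilliseconds} ms; " +
                          $"{responses.Count(r => r.IsSuccessStatusCode)} succeeded");
    }
}
```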
Origin analysis testing
As the popularity of open source software has grown over the past decade, so has the importance of origin analysis testing. This type of testing helps developers and security admins determine where a given piece of source code originated.
In cases where some of your source code came from a third-party project or repository — which is very common these days, given the ease with which developers can incorporate upstream open source code into their applications — security admins will need to make sure that any known vulnerabilities in that code are addressed, and that the code conforms to internal security standards.
