I like to post links to Secunia search results to demonstrate (in numbers) how insecure a certain CMS (or blogging software) is.
See What are some of Drupal's shortcomings?
But there was an interesting comment to this answer:
Eaton:
It's also important to note that Secunia only publishes vulnerability reports that are explicitly announced. I've worked with other CMS packages that tuck important security fixes in minor releases with no announcements at all. Drupal has a 15 person secteam that reviews core and all 3500 addons and officially announces the security patches, no matter how minor, as a matter of policy.
Are there any studies or articles which take this into account when comparing Content Management Systems?
I have a small number of articles bookmarked (like this one by my coworker), but they're almost all by people defending their CMS of choice from accusations of poor security. (My own comment in your post included!) One of the difficulties is that I don't think anyone has ironed out what constitutes a 'reasonable comparison' -- everyone gets annoyed at a bad comparison, but wanders off before anyone can determine what a level playing field is.
A couple of things stand out that most "quick overviews" miss:
The security policy of the product's dev team
The presence of a specific person or team (depending on the project's size) responsible for security. Everyone on the project should care, obviously.
Whether there are documented security best practices for third-party developers
Comparison of vulnerabilities by type and severity
Perhaps this thread would be a good place to brainstorm what WOULD constitute a good comparison study?
Update - A colleague has had the opposite frustration with Secunia: inaccurate and erroneous reports filed by third parties against an OSS project. Secunia refuses to update or amend them, apparently. It's a useful service for announcements, but everything I hear makes me cringe at using them for comparison.
The other major problem with using those Secunia searches is that they include all contributed modules along with Drupal core, even though a particular security announcement might be for a module that's used by about 30 people.
In addition to vulnerabilities by type and severity, you also need to take into account "core" vs. "add-on" modules and the practice of putting multiple vulnerabilities into a single announcement (which happens often).
My feeling is that some of Eaton's measures on policy are more important than specific numbers or severity of vulnerabilities.
The last good measure I would add to that list is the number of months in the past X years in which a vulnerability was publicly disclosed without any fix from the project. That's rare, but it is a sign of a truly failed security process.
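To make those measures concrete, here is a minimal sketch (in Python, over an entirely hypothetical advisory format) of how they could be tallied; a real study would need to extract these fields from each project's actual security announcements:

    from collections import Counter
    from datetime import date

    # Hypothetical advisory records; a real comparison would scrape or
    # export these from each project's security announcements.
    advisories = [
        {"type": "XSS", "severity": "moderate", "component": "core",
         "disclosed": date(2010, 3, 1), "fixed": date(2010, 3, 1)},
        {"type": "SQLi", "severity": "critical", "component": "contrib",
         "disclosed": date(2010, 6, 1), "fixed": None},
    ]

    # Vulnerabilities by type and severity, keeping core and add-ons apart.
    by_bucket = Counter(
        (a["component"], a["type"], a["severity"]) for a in advisories
    )

    # Months with a publicly disclosed but unfixed vulnerability
    # (simplified: counts only the disclosure month, not the full unfixed span).
    unfixed_months = {
        (a["disclosed"].year, a["disclosed"].month)
        for a in advisories
        if a["fixed"] is None
    }

    print(by_bucket)
    print(len(unfixed_months), "month(s) with an unpatched public disclosure")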
I work in a corporate development environment that is fairly risk-averse where management is often afraid of change. I've prototyped out how a Jenkins solution for our development team might work, and highlighted some success stories where the pilot implementation has helped, but the time has now come to get it approved to a wider audience and in a more permanent way, and some security concerns have been raised.
Primarily, the concerns so far have focused on the fact that the tool is open-sourced and the plugins are open-sourced and made by community contributors, so management is concerned that somebody could insert malicious code that would go unnoticed by us when we update. My opinion is that if so many other places can make Jenkins work, we probably can too, but that is not necessarily a very compelling argument to our security testing team.
My question is, can anybody tell me how they have secured their own Jenkins implementations, or what specific Jenkins capabilities (sandboxing, etc.) are in place to prevent malicious code from being executed on our systems?
Using 3rd party components either in your software or your infrastructure will always carry risks. One very important thing to note is that open source is not less secure than closed source. While almost anybody can contribute code to an open source project, in most cases there is review before it actually makes its way into the project. Of course, a vulnerability may slip through, but how is that different from a software company with lots of developers? A vulnerability may slip through there too, and based on the experience of many of us, it quite often does. :) And in the case of closed source, you don't even have the power of a diverse community to spot such security flaws; the best you can rely on are 3rd party penetration tests or code scans, both of which miss many issues.
In the case of a well-established project like Jenkins, you can be pretty sure that there is lots of scrutiny on its security, probably more so than for any closed source commercial tool you may currently have.
As with any 3rd party component, though, you should exercise due diligence. Have a look at online vulnerability databases like NVD regularly to find security issues, and install updates as they come out to mitigate the risk. You should do this for closed source components too.
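As a sketch of what that periodic check might look like, here is a small Python script against NVD's public CVE API (v2.0). The endpoint and parameter names follow NVD's developer documentation, but treat them as assumptions to verify there before relying on this:

    from datetime import datetime, timedelta, timezone

    import requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def recent_cves(keyword: str, days: int = 30) -> list[str]:
        """Return IDs of CVEs matching `keyword` published in the last `days` days."""
        end = datetime.now(timezone.utc)
        start = end - timedelta(days=days)
        resp = requests.get(
            NVD_API,
            params={
                "keywordSearch": keyword,
                # NVD expects extended ISO-8601 timestamps for date filters.
                "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
                "pubEndDate": end.strftime("%Y-%m-%dT%H:%M:%S.000"),
            },
            timeout=30,
        )
        resp.raise_for_status()
        return [v["cve"]["id"] for v in resp.json().get("vulnerabilities", [])]

    # Run this from a scheduled job and alert on anything new.
    print(recent_cves("jenkins"))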
As for how to secure a Jenkins installation, an answer here is not the right format, I think, but there is a whole set of pages on their website dedicated to the topic.
Having said all this and looking at past vulnerabilities in Jenkins, there are quite a few. It's up to you (and your security department) to assess how exactly you would want to deploy Jenkins, and whether those past vulnerabilities are serious enough for you to think the whole tool is not adequate for your environment considering the way you want to deploy it. Again, it's the same process you would follow with a closed source tool too.
We are looking to implement an open source identity management system and have identified ForgeRock's stack as the best technology to implement.
The high cost of ForgeRock support and its per-user pricing model, however, are a potential roadblock. Our current user base is ~45K, but we expect to ramp up to 1M in the next 2 years.
So we're looking into scenarios where we proceed without FR Support. The lack of FR Maintenance releases would seem to put a damper on that, so we're curious if others have gone that route.
What has been your experience?
What kind of projects have you done this for? Size, etc.
In the absence of FR's Maintenance releases, have you been able to easily create your own patches?
What are some potential pitfalls?
If there are blogs or other communities that deal with this topic, please point me in their general direction.
Thanks.
As a community user I used OpenAM (formerly OpenSSO) and OpenDJ for the past 6 years or so, but it was a very small deployment (10k users, only 1 server instance of each product).
1) In the early stages we did have reliability issues with OpenAM, which we mostly resolved by restarting the server instances - clearly not preferred, but we didn't really spend too much development effort on actually trying to resolve them (plus we lacked the necessary knowledge for investigation back then). After spending some actual effort on trying to learn the product, it turned out that most of our issues were either self-inflicted (badly written customizations or misconfigurations) or were things that had recently been resolved in the OpenAM project and were relatively simple to backport to our version.
Of course, the experience itself largely depends on how often you want to make configuration changes in the deployment. Since we weren't changing a lot of things over the years, OpenAM just worked nicely for long intervals without requiring any kind of maintenance.
3) Since we didn't really run into new issues (the config barely changed), there weren't too many surprises after a while. The security patches were mostly simple to backport and didn't cause too much trouble. (It did help that after 1.5 years I became an FR employee and actively worked on OpenAM issues, though. :) )
4) I think running without a subscription has its risks, but they mostly relate to the following:
are you planning to roll out new features based on OpenAM functionality during those 2 years (i.e. are you planning to constantly make changes to the deployment)?
do you have good developers to work on these features? Working with OpenAM, for example, can quite easily require you to have a look at the source code to figure out how things work, though the quality of the documentation has improved a lot over the years. Regardless, backporting fixes is going to get more and more difficult over time, as the releases will differ a lot more (since the development team is getting bigger for each project) - and even then you can't just assume that all the issues you run into are by definition already resolved in trunk. The need to resolve some issues on your own is a cost/risk you need to take into account.
what kind of SLA do you want to have for your deployment? Is your business going bankrupt after a 1 minute outage? Is it acceptable to just frequently restart your service (in case you run into some weird issues)?
do you really need support for all 3 products? For example, my background would allow me to work easily without OpenAM support, but I would be in at the deep end if something went wrong with my provisioning system...
And a generic remark:
User growth of 20x within two years sounds a bit unrealistic, or at least very hopeful. Maybe what you should look for is a 1-year subscription for a more reasonable target number, and then a renewal once you have a better understanding of customer growth in your business?
DISCLAIMER: At my place of work we are aware that, as none of us are security experts, we can't avoid hiring security consultants to get a true picture of our security status and remedial actions for vulnerabilities. This question is asked in the spirit of trying to be a little less dumb and a bit more aware of the issues.
In my place of work, a small business with a sum total of 7 employees, we need to do some work on reviewing our application for security flaws and vulnerabilities. We have identified two main requirements in a security tester:
They are competent, thorough and know their stuff.
They are able to leave us with a clear idea of the work we need to do to make our security better.
This process will be iterative so we will have a scan, do the remedial work and repeat. This will be a regular occurrence going forward.
The problem we have is: How do we know 1? And, even if we're reasonably sure of 1, how on earth do we proceed to 2?
Our first idea was to do some light security scanning on our code ourselves and see if we could identify any definite issues. Then, if the security consultants we choose identify those issues and a few more, we're well on the way to 1 and 2. The only problem is that I've been trawling the interweb for days now looking at OWASP, Metasploit, w3af, burp, wikto, sectools (and Stack Overflow, natch)...
As far as I can tell security software seems to come in two flavours, complex open source security stuff for security experts and expensive complex proprietary security stuff for security experts.
I am not a security expert, I am an intermediate level business systems programmer looking for guidance. Is there no approachable scanner type software or similar which will give me an overview of the state of my codebase? Am I just going to have to take a part time degree in order to understand this stuff at a brass tacks level? Or am I missing something?
I read that you're first interested in hiring someone and knowing they're good. Well, you've got a few options, but the easiest is to talk to someone in the know. I've worked with a few companies, and can tell you that Neohapsis and Matasano are very good (though it'll cost you).
The second option you have is to research the company. Who have they worked with? Can they give you references? What do the references have to say? What vulns has the company published to the world? What was the community response (were they shouted down, was the vuln considered minor, or was it game changing, like the SSL MitM vuln)? Have any of the company's employees talked at a conference? Was it a respected conference? Was the talk considered good by the attendees?
Second, you're interested in understanding the vulnerabilities that are reported to you. A good testing company will (a) give you a document describing what they did and did not do, what vulnerabilities they found, how to reproduce the vulnerabilities, and how they know the vulnerability is valid, and (b) will meet with you (possibly teleconference) to review the vulnerabilities and explain how the vulns work, and (c) will have written into the contract that they will retest once after you fix the vulns to validate that they are truly fixed.
You can also get training for your developers (or hire someone who has a good reputation in the field) so they can understand what's what. SafeLight is a good company. SANS offers good training, too. You can use training tools like OWASP's WebGoat, which walks you through common web app vulns. Or you can do some reading - the NIST SP 800 series is a fantastic, freely downloadable intro to computer security concepts, and the Hacking Exposed series does a good job teaching how to do the very basic stuff. After that, Microsoft Press offers a great set of books about security and security development lifecycle activities. SafeCode offers some good, short recommendations.
Hope this helps!
If you can afford to hire expert security consultants, then that may be your best bet given that your in-house security skills are low.
If not, there is no escaping the fact that you are going to need to understand more about security, how to identify threats, and how to write tests for common security exploits like XSS, SQL injection, CSRF, and so on.
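To give a flavour of what such a test can look like, here is a minimal Python sketch of a reflected-XSS smoke test. The endpoint and parameter name are hypothetical stand-ins for your own application, and a real suite would add similar probes for SQL injection, CSRF, and the rest:

    import requests

    TARGET = "https://staging.example.com/search"  # hypothetical endpoint you own
    MARKER = "<script>alert('xss-probe-1337')</script>"

    def reflects_unescaped_input() -> bool:
        """True if the app echoes our payload back without HTML-escaping it."""
        resp = requests.get(TARGET, params={"q": MARKER}, timeout=10)
        # Properly escaped output would contain &lt;script&gt; instead.
        return MARKER in resp.text

    if __name__ == "__main__":
        if reflects_unescaped_input():
            print("Possible reflected XSS: payload came back unescaped")
        else:
            print("Payload was escaped or not reflected")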
Automated security vulnerability tools (static code analysis and runtime vulnerability scanning) are useful, but they are only ever going to be one piece of your overall security approach. Automated tools do not identify all exploits, and they can leave you with a false sense of security or a huge list of false positives. Without the ability to interpret the output of these tools, you might as well not have them.
One tool I would recommend for external vulnerability scanning is QualysGuard. They have a huge and up to date database of common exploits that they can scan for in public facing web applications, web servers, DNS servers, firewalls, VPN servers etc., and the output of the reports usually leaves you with a very clear idea of what is wrong, and what to do about it. But again, this would only be one part in your overall security approach.
If you want to take a holistic approach to security that covers not only the components in your network, applications, databases, and so on, but also the processes (e.g. change management, data retention policy, patching), you may find the PCI-DSS specification to be a useful guide, even if you are not storing credit card numbers.
Wow. I wasn't really expecting this little activity.
I may have to alter this answer depending on my experiences, but while continuing to wade through the acres of verbiage on my quest for something approachable, I happened on a project which has been brought into the OWASP fold:
http://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
It boasts, and I quote from the project documentation's introduction:
[ZAP] is designed to be used by people with a wide range of security experience and as such is ideal for developers and functional testers who a (sic) new to penetration testing.
EDIT: After having a swift play with ZAP this morning, although I couldn't switch on the attack mode against our site right away, I can see that the proxy works in a manner very similar to OWASP's WebScarab (would link, but lack of rep and anti-spam rules prevent this). WebScarab is more technically oriented, it seems; looking over the feature list, it does more stuff, but it doesn't have a pen-test vulnerability scanner. I'll update more once I've worked out how to have a go with the vulnerability scanner.
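For anyone who wants to script a go with ZAP rather than drive it from the GUI, ZAP also exposes an API with a Python client (the zapv2 package); the call names below follow that client, but verify them against the ZAP docs. This sketch assumes a ZAP instance already running as a local proxy and a target you are authorized to scan:

    import time

    from zapv2 import ZAPv2

    target = "http://testsite.example.com"  # placeholder: a site you own
    zap = ZAPv2(proxies={"http": "http://127.0.0.1:8080",
                         "https": "http://127.0.0.1:8080"})

    # Crawl the site first so ZAP knows what to attack.
    scan_id = zap.spider.scan(target)
    while int(zap.spider.status(scan_id)) < 100:
        time.sleep(2)

    # Then run the active (attack-mode) scan.
    scan_id = zap.ascan.scan(target)
    while int(zap.ascan.status(scan_id)) < 100:
        time.sleep(5)

    for alert in zap.core.alerts(baseurl=target):
        print(alert["risk"], alert["alert"], alert["url"])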
Anyone else who would like to pitch in and have a go would be welcome to do so and comment or answer as well below.
I am planning to release a .NET product as Beta in the next couple of weeks. I want to know what points should be considered before releasing a product. I think the following needs to be taken care of:
Professional Icons/Splash and About screens
Obfuscation of assemblies
Sign the assemblies - Strong Name
Professional Security Certificate (Verisign/Thawte) - Authenticode signing assemblies
Google AdWords, AdSense and Analytics
Writing blogs etc about the application features
A way to get bug reports/feature requests from BETA users
Basically the question is how to release an effective BETA and make my product popular?
Obfuscation of assemblies
Only if obfuscation does not decrease the quality and usefulness of (automatic) bug and crash reports.
Professional Security Certificate (Verisign/Thawte) - Authenticode signing assemblies
I don't think this is an important point for a BETA. If you can afford it, it's a plus. However, it won't make your application more popular. Also, you won't lose potential users if you skip the certificate - whoever downloads and installs your application will not abort just because your stuff is not signed.
But there's another point missing: you need a fancy website. 'Fancy' means it must look professional, fit your product, and lead straight to a download link. The product needs to be easily installable - really easy. A small download size is also a plus, in my opinion.
Writing blogs etc about the application features
Increases your Google ranking, but needs to be used with care. If you fill hundreds of blogs, everybody will know it's YOU advertising your product, and nobody likes self-advertising. Let your users spread the word for you. A good application in a market niche won't need much advertising...
Professional Icons/Splash and About screens
This is a good way to make the application look more polished, but the effect on potential users needn't be positive: nobody likes BETAs with awesome splash screens and icons if they crash all the time.
The question
how to release an effective BETA and make my product popular?
has little to do with all the points you stated.
It is 95% a marketing thing rather than a technical one.
But I would consider these the most important ones for BETAs:
An easy way to submit bug reports/feature requests (2 clicks, no more; 1 minute of the user's time).
Usability.
Product demos to get started easily.
The first is crucial as it will give you the first feedback on the application.
Today online security is a very important factor. Many businesses are based entirely online, and there are tons of sensitive data available to check out just by using your web browser.
Seeking knowledge to secure my own applications, I've found that I'm often testing others' applications for exploits and security holes, maybe just out of curiosity. As my knowledge in this field has expanded by testing my own applications, reading zero-day exploits, and reading the book The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws, I've come to realize that a majority of online web applications are really exposed to a lot of security holes.
So what do you do? I have no interest in destroying or ruining anything, but after my biggest "breakthrough" in hacking I decided to alert the administrators of the page. My inquiry was promptly ignored, and the security hole has still not been fixed. Why wouldn't they want to fix it? How long will it be before someone with bad intentions breaks in and chooses to destroy everything?
I wonder why there's not more focus on this these days, and I would think there would be plenty of business opportunities in actually offering to test web applications for security flaws. Is it just me who has too big a curiosity, or is there anyone else out there who experiences the same? In Norway it is actually punishable by law to try to break into a web page; even if you just check the source code, find the "hidden password" there, and use it for login, you're already breaking the law.
I once reported a serious authentication vulnerability in an online audiobook store that allowed you to switch accounts once you were logged in. I was wary, too, about whether I should report it, because in Germany hacking is forbidden by law as well. So I reported the vulnerability anonymously.
The answer was that, although they couldn't check this vulnerability by themselves as the software was maintained by the parent company, they were glad for my report.
Later I got a reply in which they confirmed the dangerousness of the vulnerability and said that it was fixed now. They wanted to thank me again for the security report and offered me an iPod and audiobook credits as a gift.
So I’m convinced that reporting a vulnerability is the right way.
"Ive found that Im often testing others applications for exploits and security holes, maybe just for curiosity".
In the UK, we have the "Computer Misuse Act". Now if these applications you're proverbially "looking at" are, say, Internet based and the ISPs concerned can be bothered to investigate (for purely political motivations), then you're opening yourself up to getting fingered. Even doing the slightest "testing", unless you are the BBC, is sufficient to get you convicted here.
Even penetration test houses require sign-off from the companies that wish them to undertake formal work to provide security assurance on their systems.
To set expectations on the difficulty of reporting vulnerabilities: I have had this with actual employers, where some pretty serious stuff was raised and people sat on it for months, with the risks ranging from brand damage to completely shutting down the operations supporting an annual £100m e-commerce environment.
I usually contact the site administrator, although the response is almost ALWAYS "omg you broke my javascript page validation I'll sue you."
People just don't like to hear that their stuff is broken.
Informing the administrator is the best thing to do, but some companies just won't take unsolicited advice. They don't trust or don't believe the source.
Some people would advise you to exploit the security flaw in a damaging way to draw their attention to the danger, but I would recommend against this; it could have serious consequences for you.
Basically if you've informed them it's no longer your problem (not that it ever was in the first place).
Another way to ensure you get their attention is to provide specific steps as to how the flaw can be exploited. That way it will be easier for whoever receives the email to verify it and pass it on to the right people.
But at the end of the line, you owe them nothing, so anything you choose to do is sticking your neck out.
Also, you could even create a new email address for yourself to use to alert the websites, because, as you mentioned, in some places it would be illegal even to verify the exploit, and some companies would choose to go after you instead of the security flaw.
If it doesn't affect many users, then I think notifying the site administrators is the most you can be expected to do. If the exploit has widespread ramifications (like a Windows security exploit) then you should notify someone in a position to fix the problem, then give them time to fix it before you publish the exploit (if publishing it is your intention).
A lot of people cry about exploit publication, but sometimes that's the only way to get a response. Keep in mind that if you found an exploit, there's a high likelihood that someone with less altruistic intentions has found it and has started exploiting it already.
Edit: Consult a lawyer before you publish anything that could damage a company's reputation.
I experienced the same as you. I once found an exploit in an osCommerce shop where you could download ebooks without paying. I wrote two mails:
1) Developers of osCommerce, who answered "Known issue, just don't use this PayPal module, we won't fix"
2) Shop administrator: no answer at all
Actually I have no idea what's the best way to behave... maybe even publish the exploit to force the admins to react.
Contact the administrator, not a business-type person. Generally the admin will be thankful for the notice, and the chance to fix the problem before something happens and he gets blamed for it. A higher-up, or the channels a customer service person is going to go through, are the channels where lawyers get involved.
I was part of a group of people who reported an issue we stumbled across on the NAS system at University. The admins were very grateful we found the hole and reported it, and argued with their bosses on our behalf (the people in charge wanted to crucify us).
We informed the main developer about a SQL injection vulnerability on their login page. Seriously, it's the classic '<your-sql-here>-- variety. You can't bypass the login, but you can easily execute arbitrary SQL. It still hasn't been fixed after 2 months! Not sure what to do now... no one else at my office really cares, which amazes me since we pay so much for every little upgrade and new feature. It also scares me when I think about the code quality and how much stock we are putting in this software.
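For reference, the standard fix for that classic hole is parameterized queries, which keep user input out of the SQL text entirely. A minimal illustration (Python with sqlite3 here; every database API has an equivalent):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "alice'--"  # the classic probe from the login page

    # VULNERABLE: input spliced into the SQL string; the quote closes the
    # literal and '--' comments out the password check that followed.
    bad = conn.execute(
        f"SELECT * FROM users WHERE name = '{user_input}' AND password = 'x'"
    ).fetchall()
    print(bad)   # [('alice', 's3cret')] -- the password check was commented away

    # SAFE: the driver binds the value; the quote and '--' are just data.
    good = conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?",
        (user_input, "x"),
    ).fetchall()
    print(good)  # [] -- no match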