Reliability and flaws with GitHub's 'Dependabot: Automated security fixes'

I want to learn from others whether GitHub's 'Dependabot: Automated security fixes' is a secure, reliable solution to security issues.
The one and only time GitHub flagged a security issue for me, it was for outdated dependencies.
At that time I pulled from the repository, updated them manually, and then pushed the changes back up.
To date I have not found any comment on Stack Overflow that directly addresses my question.
Please correct me if I am wrong.
How safe are the automated fixes? Can we rely on them blindly, or should they be used with some caution?
Screenshot including the green 'Create automated security fix' button.

Dependabot will notify you about known security issues in your dependencies.
The only drawback could be:
The new dependency version brings other issues.
It does not guarantee that the upgrade won't have side effects, such as bugs introduced by unexpected behavior of the dependency in your project.
So don't forget to test the upgrade on your project, whether through automated tests or manually.
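One way to make the "automated tests" part concrete is to keep a small characterization test around any dependency behaviour your code relies on, so that a Dependabot bump which silently changes that behaviour fails CI before the pull request is merged. A minimal JUnit 5 sketch; the library and the pinned behaviour here (Apache Commons Lang's StringUtils.capitalize) are purely an illustrative assumption, not something from the question:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import org.apache.commons.lang3.StringUtils;
import org.junit.jupiter.api.Test;

// Pins down the third-party behaviour this project relies on, so a
// Dependabot version bump that changes it fails CI instead of slipping by.
class StringUtilsUpgradeTest {

    @Test
    void capitalizeStillBehavesTheWayWeRelyOn() {
        assertEquals("Hello", StringUtils.capitalize("hello"));
        assertEquals("", StringUtils.capitalize(""));
        assertNull(StringUtils.capitalize(null)); // Commons Lang is null-safe here
    }
}
```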

Related

Terraform providers vulnerability detection

Using a lot of (official and unofficial) Terraform providers, I'm looking for a tool to perform security analysis on Terraform providers before executing terraform plan/apply commands (and therefore executing provider code). I want to prevent malicious code in providers from being executed blindly.
I'm basically executing the terraform providers mirror command to save local copies of the required providers, and I'm wondering whether I can security-scan that result.
I tested kics, checkov and tfsec, but they all look for security issues in my static Terraform code, not in the providers themselves.
Do you have any good advice on this topic?
This is actually quite a good question. There are many other problems that can be reduced to the same generic question: how do you make sure that the thing you downloaded from the internet does not do anything malicious to you? For example:
How to make sure that a Minecraft plugin does not hack you?
How to make sure that a Spring Boot dependency does not hack you?
How to make sure that a library xxx you attach to your project does not harm you?
Should you use Docker image yyy in your project?
The truth is: everything you use has the potential to explode right in your face (or more correctly: right in the face of the system owner). That's why the system owner (usually a company) defines a set of rules about what is allowed and what is not. Not aware of any such rules? Below is the set of rules we came up with ourselves when thinking about onboarding a new library for some projects to use:
Do not take random stuff from GitHub. Take only products with a longer history, a small bug backlog, few to no past issues in the CVE list, and active maintenance.
Do static code analysis yourself. Sometimes tools that work at the binary level can do that for you; sometimes you can only do it at the source level. In the case of Java libraries, check what tools like Dependency-Track think about the library and version you are about to use.
Run the code and see how it works: what does it write, what does it read, what URLs does it communicate with (do a TCP dump if necessary).
Document everything you have done somewhere.
This gives you no 100% guarantee that things will not go terribly wrong, but it is a systematic approach that reduces the risk of doing something stupid.

Why do we have to fix security vulnerabilities in test-scope dependencies?

Why do we have to fix security vulnerabilities in libraries that we only use in test scope?
I've been trying to find the answer online with no luck, so I thought I'd ask here.
For example:
I found this vulnerability, https://nvd.nist.gov/vuln/detail/CVE-2021-23463, but H2 is only included with <scope>test</scope> in Maven.
Test code does not get shipped to the production environment, so I was wondering why we have to fix such vulnerabilities if the dependency is only used in test scope.
Thanks in advance!
TL;DR - It is probably more work to figure out if it is safe to not fix a vulnerability (in your tests) than to just fix the vulnerability.
Why do we have to fix security vulnerabilities in libraries that we only use in test scope?
I can think of a couple of reasons why you might have to fix the vulnerabilities:
Because your management, or the security team, tells you that you have to. They may tell you this for reasons of compliance with some internal policy or some external compliance rules ... or even for legal reasons. Or maybe because someone has a "thing" about it. They may not distinguish between production and test code.
Because you are unable to conclusively show that the vulnerabilities in the test scope do not constitute a risk.
For example, could the vulnerability be exploited by a bad actor who has access to your CI infrastructure? Can you demonstrate that that is not possible? Or that that would not provide a way to do significant damage ... to something that matters?
And the converse is:
IF management doesn't say that you have to fix them AND you can conclusively show that the vulnerability is NOT a risk in your test infrastructure THEN you could decide to not fix them.
HOWEVER if your assessment is incorrect THEN the blame and consequences will fall on you.
In short ... you need to decide if you should take the risk of ignoring the vulnerability.
Tests will likely be run by CI on your internal infrastructure, or just on developer machines. Either way, they will run somewhere more or less internal to your infrastructure.
A vulnerability can be exploited in many ways; the one you mentioned is an XXE. A malicious XML file can be used to do stuff on the host that processes it. This might allow an internal unprivileged attacker (e.g. a developer) to compromise CI, which might have access to more valuable credentials. Or it might allow an external attacker to compromise a developer PC (by somehow providing malicious XML input), then compromise CI from there, and so on.
You can see the point: you don't just want to protect your production environment. Sure, that might be the most important part, but the way to protect it is to apply defense in depth and mitigate risks across the whole infrastructure.
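To make the XXE point concrete: the usual mitigation on the consuming side is to configure the XML parser so it refuses DOCTYPEs and external entities altogether. The following Java sketch is generic JAXP hardening as commonly recommended, not the actual H2 code path from the CVE above:

```java
import java.io.File;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;

import org.w3c.dom.Document;

public class SafeXmlParsing {

    // Builds a DOM parser with DOCTYPE/external entity processing disabled,
    // which is the standard defence against XXE.
    static DocumentBuilder hardenedBuilder() throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        // Strongest option: reject any document that contains a DOCTYPE at all.
        factory.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        // Belt and braces: also disable external general and parameter entities.
        factory.setFeature("http://xml.org/sax/features/external-general-entities", false);
        factory.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
        factory.setXIncludeAware(false);
        factory.setExpandEntityReferences(false);
        return factory.newDocumentBuilder();
    }

    public static void main(String[] args) throws Exception {
        // An input that tries to pull in a local file or a remote URL via an
        // external entity is now rejected instead of silently processed.
        Document doc = hardenedBuilder().parse(new File(args[0]));
        System.out.println("Parsed root element: " + doc.getDocumentElement().getNodeName());
    }
}
```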

Is it possible to let people approve their own pull requests if nothing but a bug workitem is linked?

In my current company, we sometimes approve our own pull requests for the sake of bug fixing. If it's a small bug, or someone is on breakdown service, they need to be able to fix things quickly.
Because some people abuse this functionality to approve their own 'features', I wish to remove the ability to do this, except when only 'bug' work items are linked to the PR.
As far as I have seen, I can only tick some checkboxes in the branch policy of the master branch.
Can I create a policy to enable people to approve their own pull requests, if no work-items other than bug items are linked to it?
That's not supported in Azure DevOps - it's either allowed or not, based on your branch policies. Everything that follows is opinion, so take it with a grain of salt - I'm not convinced that such a feature would solve your problem.
You said that you currently allow developers to approve their own changes because, if there's an urgent bug, they need to be able to move quickly. That's understandable. You also said that developers can "game the system" by PR'ing features.
If you were to restrict the branch policy to allow developers to merge PRs only if bugs are attached, what prevents a developer from putting new feature functionality into bug fixes?
In other words, your PR policies work by convention, and that convention can be broken. Your proposed solution is another convention that can be broken.

Jenkins security as an open-source tool

I work in a corporate development environment that is fairly risk-averse, where management is often afraid of change. I've prototyped how a Jenkins solution for our development team might work and highlighted some success stories where the pilot implementation has helped, but the time has now come to get it approved for a wider audience and in a more permanent way, and some security concerns have been raised.
Primarily, the concerns so far have focused on the fact that the tool is open source and the plugins are open source and made by community contributors, so management is concerned that somebody could insert malicious code that would go unnoticed by us when we update. My opinion is that if so many other places can make Jenkins work, we probably can too, but that is not necessarily a very compelling argument to our security testing team.
My question is: can anybody tell me how they have secured their own Jenkins implementations, or what specific Jenkins capabilities (sandboxing, etc.) are in place to prevent malicious code from being executed on our systems?
Using 3rd-party components, either in your software or in your infrastructure, will always carry risks. One very important thing to note is that open source is not less secure than closed source. While practically anybody can contribute code to an open source project, in most cases there is a review before it actually makes its way into the project. Of course, a vulnerability may slip through, but how is that different from a software company with lots of developers? A vulnerability may slip through there too, and based on the experience of many of us, it quite often does. :) And in the case of closed source, you don't even have the power of a diverse community to spot such security flaws; the best you can rely on are 3rd-party penetration tests or code scans, both of which miss many issues.
In the case of such a well-established project as Jenkins, you can be pretty sure that there is a lot of scrutiny of its security, probably more than for any closed source commercial tool you may currently have.
As with any 3rd-party component, you should exercise due diligence, though. Have a look at online vulnerability databases like NVD regularly to find security issues, and install updates as they come out to mitigate the risk. You should do the same for closed source components too.
As for how to secure a Jenkins installation, an answer here is not the right format I think, but there is a whole set of pages on their website dedicated to the topic.
Having said all this: looking at past vulnerabilities in Jenkins, there are quite a few. It's up to you (and your security department) to assess how exactly you would want to deploy Jenkins, and whether those past vulnerabilities are serious enough to consider the whole tool inadequate for your environment, given the way you want to deploy it. Again, it's the same process you would follow with a closed source tool.

Recommendations for automatically logging unexpected errors/stack traces to bug tracker

We have been looking at automatically logging all unexpected client errors to our bug tracker. For reference, our application is written in Java/GWT/Guice/Hibernate/Jetty, and our bug tracker is the hosted version of FogBugz, which can create bugs programmatically or via email.
The biggest problem I see with doing this is that stack traces occurring in a loop overload the bug tracker by creating thousands of cases. Does anybody have a suggested way to handle automatic bug creation like this?
If you're using FogBugz BugzScout (also see the up-to-date docs here), it has the ability to simply increase the number of occurrences of the same problem instead of creating a new case for the same exception again and again.
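If your tracker or submission path does not do that occurrence counting for you, you can approximate it on the client side. A rough Java sketch (not BugzScout's API; reportNewCase is a hypothetical placeholder for however you actually file the case): it fingerprints the stack trace and only opens a case on the first occurrence, so an exception thrown in a loop produces one case plus a counter rather than thousands of cases.

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class DeduplicatingErrorReporter {

    // Occurrence counters, keyed by a fingerprint of the stack trace.
    private final ConcurrentMap<String, AtomicInteger> occurrences = new ConcurrentHashMap<>();

    public void report(Throwable error) {
        String fingerprint = fingerprint(error);
        AtomicInteger count = occurrences.computeIfAbsent(fingerprint, k -> new AtomicInteger(0));
        if (count.incrementAndGet() == 1) {
            // First time we see this trace: open a case in the tracker.
            reportNewCase(fingerprint, error);
        }
        // Later occurrences only bump the counter; flush the counts to the
        // tracker periodically if you want them recorded there as well.
    }

    private String fingerprint(Throwable error) {
        StringWriter trace = new StringWriter();
        error.printStackTrace(new PrintWriter(trace));
        return Integer.toHexString(trace.toString().hashCode());
    }

    // Hypothetical submission hook: an HTTP call, an email, or the tracker's API.
    private void reportNewCase(String fingerprint, Throwable error) {
        System.err.println("New case " + fingerprint + ": " + error);
    }
}
```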
Are you sure that you want to do that?
It obviously depends on your application, but even if you carefully handle the cases that could generate lots of bug reports (because of loops), this approach could still end up flooding the bug tracker.
How about this?
Code your app so that every time an exception is thrown, you gather info about the client (IP, login, app version, etc.) and send that plus the stack trace (or the whole exception object as a string) by email to yourself (or the dev team).
Then, in your email client, have a filter that sorts that incoming mail and puts it in a nice folder for you to look at later.
You may end up with tons of emails about maybe one or more issues, but you don't really care, because you enter the issues into the bug tracker yourself and can easily delete that pile of mail.
That's what I did for my app (which is a client-server desktop app). It plays out well in this case.
Hope that helped!
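A minimal Java sketch of that "catch everything, gather context, mail it" idea described in the answer above; it assumes a plain JVM entry point, and mailReport is a hypothetical helper standing in for whatever mail or HTTP mechanism you actually use:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class ErrorMailer {

    // Installs a last-resort handler that gathers some context and mails the stack trace.
    public static void install(String appVersion, String userLogin) {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            StringWriter trace = new StringWriter();
            error.printStackTrace(new PrintWriter(trace));

            String report = "App version: " + appVersion + "\n"
                    + "User: " + userLogin + "\n"
                    + "Thread: " + thread.getName() + "\n"
                    + "OS user: " + System.getProperty("user.name") + "\n"
                    + trace;

            mailReport("Unexpected error: " + error.getClass().getSimpleName(), report);
        });
    }

    // Hypothetical helper: plug in JavaMail, an HTTP call, or whatever you use to notify yourself.
    private static void mailReport(String subject, String body) {
        System.err.println(subject + "\n" + body);
    }
}
```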
JIRA supports automated issue creation using so-called services: documentation.
Does anybody have a suggested way to handle automatic bug creation...?
Well, I have. Don't do that.
What are you going to gain from that? Saving the testers' effort? In my experience, whatever effort was saved that way was lost several times over in the overhead transferred to developers, who had to analyze and maintain the automatically created tickets anyway. Not to mention the overall frustration it caused.
The least counterproductive way I can imagine would be something like establishing a dedicated bugs category, or a separate issue tracker instance, that only testers can see and use.
In that "sandbox", auto-created bugs could be assigned to testers, who would later pass analyzed and aggregated bug reports on to developers.
And even in that case, I'd recommend paying close attention to what the users (the testers) say about the system. If they start complaining about it, consider trying a manual way of doing things instead.
