dependency-check for application code - security

I am looking for a solution to implement security scanning of the application code base at build time. The idea is to capture a list of security vulnerabilities early in the software development life cycle.
I have a simple Java project that uses a Maven build. It declares a number of .jar dependencies and produces a .war file as its build output.
I came across (and was able to configure) the dependency-check maven plugin (http://jeremylong.github.io/DependencyCheck/dependency-check-maven/index.html). However, although it scans the dependency jars and produces a vulnerability report, it doesn't seem to scan the final artifact - which in my case is the .war file.
How do I ensure that the .war is scanned as well? Is the dependency-check plugin the right tool for this?

dependency-check isn't the right tool for checking your own code. It uses a list of known vulnerability reports to determine whether any of your dependencies have known flaws; it does not do an active scan of the code. See the plugin wiki.
For checking your own code, HP's Fortify is a decent commercial solution, but if you are working in more of a DIY software setting, I would recommend Sonar. There are certainly many static code analysis tools out there; all have advantages and disadvantages.
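For reference, wiring the plugin into the pom.xml looks roughly like this - a minimal sketch, with the version left as a placeholder (the plugin's published coordinates are org.owasp:dependency-check-maven):

    <build>
      <plugins>
        <plugin>
          <groupId>org.owasp</groupId>
          <artifactId>dependency-check-maven</artifactId>
          <!-- placeholder: substitute a current release version -->
          <version>X.Y.Z</version>
          <executions>
            <execution>
              <goals>
                <!-- 'check' scans the declared dependencies for known CVEs;
                     it does not analyze your own classes -->
                <goal>check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>

With that in place, mvn verify runs the check, and by default the vulnerability report is written under the target directory.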

Related

Docker security scan detects vulnerability in gradle 7.4.1

Creating a Docker image with Gradle 7.4.1 triggers the security scan, which shows vulnerability CVE-2020-36518. How can this particular jar file within the Gradle package be updated?
I would just reject the security issue, explaining that it is not possible to exploit the vulnerability, as the Gradle build runs in isolation on controlled input and is not accessible to any potential attackers.
(Assuming this is the case, of course, and you don't have a custom Gradle plugin that reads untrusted JSON documents using Jackson from the Gradle classpath. But even then, all you are risking is a denial of service on the build.)
Fiddling around with jar files in external tools could easily lead to problems that are hard to debug later. But if you like, you could create an issue with the Gradle project, asking if they could bump the Jackson version to avoid unnecessary noise from security scans like this. There is an example of that here.
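If what is being flagged is Jackson on the build classpath (as opposed to the copy shipped inside the Gradle distribution itself, which you cannot patch this way), one option is to force a fixed version in build.gradle - a sketch, where the exact version number is an assumption and should be the current patched release:

    buildscript {
        configurations.classpath {
            // force a patched jackson-databind onto the build classpath;
            // the version shown is an assumed CVE-2020-36518 fix release
            resolutionStrategy.force 'com.fasterxml.jackson.core:jackson-databind:2.13.2.1'
        }
    }

Note this only affects dependencies resolved by your build script; the jars bundled inside the Gradle distribution stay as they are until Gradle itself upgrades.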

Is updating npm dependencies not recommended on a production application?

I recently started exploring npm and installed a GitHub repo, yoonic/nicistore.
But when I try to build it, it fails.
My question is: if I start building things on top of Node, which I see has too many dependencies from different vendors, am I completely at the mercy of the respective package developers?
I have seen that most Node-based GitHub repos fail to build on the first try. If I update one of the modules by running a console command, is it likely to break the whole application?
And if it does, doesn't that prove Node.js an unreliable and unstable development platform?
Think of it as the opposite of most other languages.
You are writing an app in Java.
You want to use LibA, LibB and LibC.
So you try to use LibC 2.4, and as soon as you do, your dependency manager throws all kinds of errors at you.
Why?
Because LibB is using LibC 1.9
So now what are your options?
Strip out all of the calls to all of the new API for LibC that you wanted to use...
...or hope that LibB is open-source, and you can contribute an update for a new version of it, so that you can use the latest version of LibC (and hope it doesn't update).
So now you've done that... but now you've broken your LibA, because it wants the old LibB.
You didn't even want LibA, you just had to have it for your app to be happy with your framework, and the libs that you did actually want to use (B and C). LibA is closed-source, and isn't maintained, anymore. Tough luck. Go back to your old ways, and forget about how much better life could be, if you could only use your framework with the new version of LibC. Or start praying that your framework does a major rewrite, to get rid of the LibA dependency... but then figure out what new hell you have to deal with, just to get LibC working.
Is this really better than Node?
What Node allows you to do is install a dependency at a different version than the one that your other dependencies are using.
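To make that concrete with the hypothetical libs from above, npm simply nests the conflicting version, so both can coexist:

    node_modules/
      libA/
      libB/
        node_modules/
          libC/        <- 1.9, private to libB
      libC/            <- 2.4, the version your own code imports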
Not that you can't do that with Java, too... but the entire community has decided that it's just not ever going to try to do that, and thus outlaws it at the tool level.
Next, you see too many things which leave you at the mercy of too many vendors...
Going back to Java (or C++, or nearly any mainstream language): looking at Java itself, how many libraries are made by Sun Microsystems, or by James Gosling?
Moreover, if you want to boil it down and suggest using only, say, one huge, overarching framework (like Spring MVC) and no other libraries of any kind (like JodaTime), then how many libraries does Spring itself lean on, and why are they of no concern to you, even if you're only consuming the compiled VM bytecode?
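(If you've never looked, Maven will happily show you that graph for any project that uses Spring - a sketch:)

    mvn dependency:tree   # prints every transitive library the 'one framework' pulls in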
In fact, a strong argument could be made to be more wary of compiled binaries, in languages where it was traditional to see strong, copy-left licensing like that of the GNU GPL... in that realm, you open yourself up to craziness.
Most of the Node stuff, by comparison, is dirt-simple freeware. And even if it's not, it's quickly replaceable as most are micro-libraries.
I would suggest that updating a Node package your server depends on, via the CLI, is less hazardous than doing the same to a full-fledged Java project, if your goal is to see your project compile again some time in the next week, but with the newer fixes/features...
...but if you're talking about a full-scale, production application, you also want to be cognizant of what it is you're doing, with regards to your codebase, regardless.
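Being cognizant mostly means small, deliberate steps; a sketch with standard npm commands (the package name is just an example):

    npm outdated                 # see which packages have newer releases than installed
    npm install express@4.17.1   # move one package to a specific, vetted version
    npm test                     # prove the change before it goes anywhere near production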
As to why things don't build for you on the first try, assuming that you're on a non-Windows platform, and your environment is up to date, I don't know.
Most C/C++ projects I clone don't build for me, first try, either. I usually forget something, or there was something poorly documented, or the actual project was set up to make unfair assumptions about the system it would operate in.
Does that mean that C++ is an unreliable/unstable development platform?
Or the hours/days spent on getting Eclipse set up in an enterprise environment, with all kinds of crazy, company-specific projects and project settings?
It sounds like a case of bad design, more than anything.
Then again, most of my projects these days are wrapped in Docker containers. They all run in the same environment, whether they're running in Windows, on a Mac, or on the server. That tends to take the sting out of building projects, regardless of what language the code is in, or what VM / processor they're running on.
You should also be using npm shrinkwrap files or Yarn lockfiles to preserve the build configuration, with the known-working versions of libraries. And you should have unit and integration tests to ensure that changing library versions has no discernible impact on your system.
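The commands involved are tiny; a sketch:

    npm shrinkwrap   # writes npm-shrinkwrap.json, pinning the exact versions installed
    # or, with Yarn, the lockfile is maintained for you:
    yarn install     # creates/updates yarn.lock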

GitVersion – selective versioning multiple assemblies of the same project

I'm on a .NET C# project composed of a solution with several class library projects.
The source control is managed by git using gitflow as branching model.
We have decided that we wanted to implement semantic versioning (http://semver.org/) of the project in order to follow a standard way to communicate our releases.
For that we are using GitVersionTask (via NuGet) which works pretty well with gitflow.
Every time we tag a release and perform a build from the master branch, the versions of all assemblies are updated and a new release is out for delivery.
Only one of the assemblies has a public API; all the others are for internal consumption. I would like to know if this is the correct way to manage the versions of multiple assemblies of the same project. I mean, isn't it wrong to change the version of every assembly when only a couple (or even just one) of them changed? To make things more complicated, there is a strong possibility that some of the "internal" assemblies will be used by other projects, so I believe it is not very wise to increment the major version of an assembly that didn't suffer a change just because another assembly of the same project is introducing breaking changes. Should each assembly project be managed in its own repository?
Thanks in advance.
I know this is a bit of an old question; still, I want to share a workaround that seems to be working:
GitVersion uses $(Build.SourcesDirectory) to see where the sources are located (src).
We can change this using logging commands.*
The workaround is to set Build.SourcesDirectory before the GitVersion task runs.
GitVersion then uses the GitVersion.yml from the project folder (the new Build.SourcesDirectory) and, voilà, it works.
After that you might want to roll back the change or not, depending on your need. For me, it is nice to scope down to just one NuGet package from the collection of NuGet packages in our nugetPackages monorepo.
see GitVersion issue and comment
*Example PowerShell command (a standard PowerShell task, set to inline script):
Write-Host "##vso[task.setvariable variable=Build_SourcesDirectory;]$(Build.SourcesDirectory)\$(NugetProjectName)"
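In a YAML pipeline, the same workaround looks roughly like this (a sketch; NugetProjectName is the variable from the command above, and the GitVersion step is whichever task your pipeline already uses):

    steps:
      - powershell: |
          Write-Host "##vso[task.setvariable variable=Build_SourcesDirectory;]$(Build.SourcesDirectory)\$(NugetProjectName)"
        displayName: Scope Build.SourcesDirectory to one project
      # ...the existing GitVersion task runs next, now against the project folder...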
There is certainly nothing in GitVersion that would help with having separate projects within the same repository. The guidance that we would offer here is that you should use different repositories for the different parts of your application. That way they can be versioned/updated at their own cadence.

Tools to help manage sets of multiple versions of executables on Linux?

We are in a networked Linux environment, and what I'm looking for is a FOSS or generic system-level method for managing which versions of executables and libraries get used, per session. The executables would preferably be installed on the network. They will be in-house tools and installs of commercial packages like Houdini, Maya and Nuke.
The need for this is that we'd prefer to have multiple versions of the software installed and available to the artists, but there needs to be an easy way to select which version to use. As an added benefit, I'd like to be able to track the version of software used to generate a given output as metadata. I've worked at studios that did this successfully, but I was not 100% up to speed on how it was achieved. Every executable in a given set was assigned a single uber-version for the set. That way, the "approved packages" of the studio tools were all collapsed into a single package of tools that were known to work together.
Due to the way they install, some programs make setting this up easy (it's as simple as adding their install directories to $PATH). Other programs don't make it quite so easy. I'm particularly worried about how to handle the libraries a program might install. What's needed is a generic access method I can use to wrap everything into a clean front end.
Does anyone know of such a system available in the wild or am I going to have to implement it from scratch? Google hasn't been very helpful in finding a solution.
Thanks!
Check out the "modules" system at http://modules.sourceforge.net/; it's quite widely used in HPC.
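A sketch of what a modules session looks like, plus a minimal modulefile (the package names and paths here are made up for illustration):

    $ module avail              # list the versions installed on the network
    $ module load maya/2016     # prepend that version's bin/lib dirs for this session
    $ module list               # record exactly what was loaded - handy as output metadata

    #%Module1.0
    ## hypothetical modulefile for maya/2016
    prepend-path PATH            /opt/maya/2016/bin
    prepend-path LD_LIBRARY_PATH /opt/maya/2016/lib

Because a modulefile can prepend LD_LIBRARY_PATH as well as PATH, it also covers the libraries-per-version worry from the question.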
There is also eselect. I have only used it on Funtoo (an offspring of Gentoo), but it seems to do what you need. It is written entirely in bash, so it should be quite possible to port it to other distros.

OSGi Bundle Repositories

I am empirically testing OSGi bundles and their relationships; for this I need lots of bundles.
Making these datasets is a difficult task. I already have Eclipse update sites (1700 bundles) and the Spring Enterprise Bundle Repository for testing; however, I want more. Anyone out there got massive amounts of bundles?
I don't even need the code, just the manifests would be fine.
Cheers
Google Code Search returns 15800 results for the query 'Bundle-SymbolicName file:MANIFEST.MF'. If you find some way to automatically download them and remove duplicates, you might get a sizable dataset.
Another similar idea is to search the Maven central repository for artifacts that include an OSGi bundle manifest.
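Either way, pulling just the manifests out of a pile of jars is cheap; a sketch, assuming the downloaded jars sit in ./bundles:

    mkdir -p manifests
    for jar in bundles/*.jar; do
        # every OSGi bundle keeps its metadata in META-INF/MANIFEST.MF
        unzip -p "$jar" META-INF/MANIFEST.MF > "manifests/$(basename "$jar" .jar).MF"
    done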
