I'm trying to implement an automated scan using Arachni. The goal is to perform automated scans against a GitLab repository that already contains the tests. I have already configured the Docker image and everything else, but for the moment I can only scan single URLs, so it is not really "automated". By reaching the other repository (maybe using a proxy?), I would be able to reach all the tests for all the URLs present in the application.
I guess a possible solution would be to use the proxy feature to reach the repository and perform the scan, but I'm not sure. Maybe there are other options as well?
Does anyone have clues here?
Thank you!
The extend-paths option (--scope-extend-paths) allows you to extend the scope of the scan by seeding the system with the paths contained in the given file. However, the proxy plugin is the best approach to follow.
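As an illustration of the seeding route: if the repository you check out already records the paths your tests exercise, you could collect them into a plain-text file and pass that file to the scanner. A minimal sketch, assuming a hypothetical repo layout where each test stores its paths in *.paths files and a placeholder target URL; only --scope-extend-paths itself comes from the answer above:

```python
import subprocess
from pathlib import Path

# Assumption: each test in the checked-out repo lists the paths it exercises,
# one per line, in files named *.paths (adjust to your repository's layout).
repo = Path("checked-out-test-repo")
paths = []
for f in repo.rglob("*.paths"):
    paths.extend(line.strip() for line in f.read_text().splitlines() if line.strip())

seed_file = Path("seed-paths.txt")
seed_file.write_text("\n".join(sorted(set(paths))) + "\n")

# Seed the scan with every path collected from the repository.
subprocess.run(
    ["arachni", "http://target.example.com",
     f"--scope-extend-paths={seed_file}"],
    check=True,
)
```

A CI job in the GitLab pipeline could run this after checking out the test repository, which is what makes the scan "automated" rather than single-URL.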
Using a lot of (official and unofficial) Terraform providers, I'm looking for a tool to perform security analysis on Terraform providers before executing terraform plan/apply commands (and thereby executing provider code). I want to prevent malicious provider code from being executed blindly.
I'm basically executing the terraform providers mirror command to save local copies of the required providers, and I'm wondering whether I can run a security scan on that result.
I tested kics, checkov and tfsec, but they all look for security issues in my static Terraform code, not in the providers themselves.
Do you have any good advice on this topic?
This is actually quite a good question. There are many other problems that can be reduced to the same generic question - how to make sure that the thing you downloaded from the internet does not do anything malicious to you, e.g.:
How to make sure that a Minecraft plugin does not hack you?
How to make sure that a Spring Boot dependency does not hack you?
How to make sure that a library xxx you attach to your project does not do harm to you?
Should you use Docker image yyy in your project?
The truth is: everything you use has the potential to explode right in your face (or, more correctly, right in the face of the system owner). That's why the system owner (usually a company) defines a set of rules about what is allowed and what is not. Not aware of any such set of rules? Below is a set of rules we came up with ourselves when thinking about onboarding a new library for some projects to use:
Do not take random stuff from GitHub. Take only products with a longer history, a small bug backlog, few to no past issues on the CVE list, and active maintenance.
Do static code analysis yourself. Sometimes it is possible to have tools that work at the binary level do that for you; sometimes you can only do it at the source level. In the case of Java libraries, check what tools like Dependency Track think about the library and version you are about to use. (A crude sketch of this kind of check, applied to the mirrored providers from the question, follows this list.)
Run the code and see how it works: what does it write, what does it read, what URLs does it communicate with (do a TCP dump if necessary).
Document everything you have done somewhere.
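To tie the static-analysis point back to the original question: the output of terraform providers mirror is a directory tree of provider zip archives, so at minimum you can walk it, record hashes for your documentation, and list the URL-looking strings baked into the packaged binaries so unexpected endpoints stand out. A rough sketch, not a real malware scanner; the directory name and the URL regex are assumptions:

```python
import hashlib
import re
import zipfile
from pathlib import Path

# Assumption: "providers/" is the output directory of `terraform providers mirror`.
mirror_dir = Path("providers")
url_re = re.compile(rb"https?://[\w./-]+")

for archive in mirror_dir.rglob("*.zip"):
    sha256 = hashlib.sha256(archive.read_bytes()).hexdigest()
    print(f"{archive}: sha256={sha256}")
    with zipfile.ZipFile(archive) as zf:
        for member in zf.namelist():
            data = zf.read(member)
            # Crude check: list every URL-looking string embedded in the binary
            # so unexpected endpoints can be reviewed manually.
            for match in sorted(set(url_re.findall(data))):
                print(f"  {member}: {match.decode(errors='replace')}")
```

This obviously cannot prove a provider is safe; it only gives you something concrete to review and document before you decide to run it.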
This gives you no 100% confidence that things will not go terribly wrong, but it is a systematic approach that will reduce the risk of doing something stupid.
I'm trying to set up a Git server, and I want to allow only a specific user to push commits to the master branch.
I have tried to use Linux group permissions to meet the requirement above, but that does not seem to be the right approach.
I don't even know what keywords to search for to find the answer to this.
Any help would be appreciated.
Git itself does not give you per-branch access control, but you can achieve this functionality by implementing your own server-side pre-receive hook. A GitHub Enterprise-specific pre-receive hook example is here, as a reference.
However, if you are using a Git hosting service (like GitHub), it might have an option for this. GitHub, in particular, has an option called branch restrictions, but it requires a paid subscription unless your project is public.
You have two options:
By far the easiest solution is to use hosting software that already provides this functionality. You might want to look at GitLab, which has free options both as SaaS (hosted at gitlab.com) and self-managed (running your own GitLab instance). Or GitHub. Or Bitbucket. Or I'm sure there are others I'm not thinking of.
If you really don't want to use any of those, you can implement access control on a simple Git server, but it's not so simple. The short (or rather, glib) answer is "hooks": a hook is just a script that runs when something happens. In this case you'd use the pre-receive hook, which runs when someone tries to push and decides whether to accept the push. Now, how does your hook know who is pushing? The commit metadata does not indicate who is pushing; what you need is authentication around the actual connection, and visibility of that authentication inside your script, so the script can implement your authorization rules. That very quickly breaks down into "it depends on your environment".
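To make that concrete, here is a minimal sketch of a pre-receive hook that only lets one user update master. It assumes an environment where each developer pushes over SSH as their own system account, so the pushing user is simply the account the hook runs as; the user name and branch are placeholders you would adapt:

```python
#!/usr/bin/env python3
# Sketch of a server-side pre-receive hook: reject pushes to refs/heads/master
# from anyone except an allowed user. Assumes each developer has their own
# system account on the Git server (otherwise getuser() tells you nothing).
import getpass
import sys

ALLOWED_MASTER_PUSHERS = {"alice"}   # assumption: adapt to your users
PROTECTED_REF = "refs/heads/master"

pusher = getpass.getuser()           # the account the push is running as

# Git feeds one "<old-sha> <new-sha> <ref>" line per updated ref on stdin.
for line in sys.stdin:
    old_sha, new_sha, ref = line.split()
    if ref == PROTECTED_REF and pusher not in ALLOWED_MASTER_PUSHERS:
        print(f"Pushes to {PROTECTED_REF} are restricted; rejected for '{pusher}'.",
              file=sys.stderr)
        sys.exit(1)   # non-zero exit rejects the whole push

sys.exit(0)
```

With a single shared "git" account (the common gitolite-style setup), the hook would instead have to read whatever variable that layer exports for the authenticated user, which is exactly the "depends on your environment" caveat above.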
Since it's not really possible to exhaustively cover every scenario for doing this manually, hopefully either you'll find a pre-packaged solution you like, or you'll find the above to be enough to get you pointed in the right direction to do it manually.
I have two node apps running on my server, each performing different tasks.
However, I now need to create a service that is going to be used by both of them. Obviously I don't want to implement it in both apps and end up with two copies of the code to maintain.
My current thought is to have a separate repository only for this service, then require it from each app as an outsourced module.
I was wondering whether there are better methods, or whether this approach might run into problems I'm not seeing.
Well, if you strictly follow the rule that the common package contains only genuinely shared things, I don't see any issues with that. The problem comes when you put logic into the shared repo that is really only meant to be used by one app. In that scenario you will need to rebuild both apps whenever it changes, since the shared repo or package is a dependency of both.
One issue I have seen people face with a shared repo is having to tweak things just because they live in a common place. For example, you have a method that does one job, and suddenly you want to use it somewhere else as well, but with a tweak. You end up modifying the shared code to support the second consumer, and since it is shared, you will have to do regression testing of both apps.
Good candidates for a shared repo are things like drivers, clients, etc. The rest is up to your project structure and judgement; in this case there is nothing strictly correct or incorrect. Hope this makes sense.
I've been reading up on SpamAssassin for some time now and have learned a lot, but I cannot find a way to create a custom rule that executes a third-party script.
This would have to be something user-based, not global.
I want to run additional verifications on domains and email addresses.
I wish to build a reputation system in which a domain or email address is checked against the contacts list and other data.
I have considered modifying the profile to add regex rules, but that seems like an overly complicated way of doing it. I would prefer to simply run a third-party script that returns a score for each domain and email address.
Out of the box, SpamAssassin has no such facility, but since you ask on a programming site, I assume you are not alien to writing some code on your own.
The plugin facility in SpamAssassin was designed for this sort of thing. You can create a piece of Perl code which gets called for each message which SpamAssassin analyzes, and you have access to everything Perl has access to.
In particular, look at the pyzor plugin which calls an external program and returns its analysis results to SpamAssassin. There's a fair amount of boilerplate there, but the part you need to start with is getting the right arguments to the helper_app_pipe_open call (on line 282 as of version 3.4.0, which is what I link to above). These things are configurable so you could perhaps even just reconfigure the path to pyzor to your own program as a proof of concept. Note that it needs to accept a check argument and some other parameters, and a message from a temporary file on its standard input.
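To give a feel for the shape of such a helper, here is a rough sketch of an external program of the kind a pyzor-style plugin could pipe a message to: it reads the raw message on standard input, extracts the From address, compares it against a contacts list, and prints a score. The contacts file location, the "check" argument convention, and the output format are all assumptions; the real contract is whatever your plugin's helper_app_pipe_open call expects:

```python
#!/usr/bin/env python3
# Sketch of an external reputation helper for a SpamAssassin plugin.
# Reads one RFC 822 message on stdin, prints "<score> <address>" on stdout.
import email
import email.utils
import sys
from pathlib import Path

CONTACTS_FILE = Path("/etc/mail/contacts.txt")   # assumption: one address per line

def main() -> int:
    if len(sys.argv) > 1 and sys.argv[1] != "check":
        print("usage: reputation-helper check < message", file=sys.stderr)
        return 2

    msg = email.message_from_file(sys.stdin)
    _, addr = email.utils.parseaddr(msg.get("From", ""))
    addr = addr.lower()

    known = set()
    if CONTACTS_FILE.exists():
        known = {line.strip().lower() for line in CONTACTS_FILE.read_text().splitlines()}

    # Toy scoring: known contacts get a strong negative (ham) score,
    # unknown senders a mildly positive one.
    score = -2.0 if addr in known else 0.5
    print(f"{score} {addr}")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The plugin side would then parse that output and report a rule hit with the returned score; that glue still has to be written in Perl against the plugin API described below.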
Mail::SpamAssassin::Plugin.pm contains POD documentation for the plugin API. Other files in the module tree contain useful documentation too; in particular, you might want to refer to the general documentation in Mail::SpamAssassin.pm and Mail::SpamAssassin::Conf.pm to understand the configuration parameters you can pass to your plugin.
Out of the box, there is a new TxRep plugin that automatically recognizes senders you've seen recently. There is also a collection of whitelist and blacklist options.
If you wanted to implement something yourself, I think you'll quickly find that an exec mechanism won't scale well. Perhaps try crafting your own DNSBL instead. This can be done with custom code and any DNS server (e.g. bind, dnsmasq, etc) or with a DNS server designed for this purpose, such as RBLDNSD. The SA wiki on DnsBlocklists has directions for how to hook it into SA.
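For context on what "crafting your own DNSBL" means mechanically: a DNSBL is just a DNS zone in which listed entries resolve, so a client reverses the IP octets (or uses the domain directly for a domain-based list), prepends the result to the zone, and checks whether an A record comes back. A small sketch using only the standard library; the zone name is an assumption for a zone your own DNS server would serve:

```python
import socket

DNSBL_ZONE = "dnsbl.example.internal"   # assumption: the zone you serve yourself

def listed_ip(ip: str) -> bool:
    """Return True if the IP is listed: reverse the octets and query the zone."""
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        socket.gethostbyname(f"{reversed_ip}.{DNSBL_ZONE}")  # any A record means "listed"
        return True
    except socket.gaierror:
        return False

def listed_domain(domain: str) -> bool:
    """Domain-based (RHSBL-style) lists just prepend the domain itself."""
    try:
        socket.gethostbyname(f"{domain}.{DNSBL_ZONE}")
        return True
    except socket.gaierror:
        return False

print(listed_ip("192.0.2.1"), listed_domain("example.com"))
```

Because the lookups are plain DNS, SpamAssassin can query such a zone natively and cache results, which is why this scales better than spawning an external script per message.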
Usually, people seeking this kind of solution don't have DNSBLs configured properly. I'd take a look into that before trying to build your own project.
Say I've got a //Repo/... repo. Currently devs generally tend to do all their work directly in there, which normally isn't a problem for small pieces of work. Every so often, though, this approach fails for various reasons, mainly because they're unable to submit the incomplete change to Live.
So, I was wondering, is there a way to enforce on the server that:
1) no files can be checked out directly from //Repo/...
2) users instead branch to a private area (//Projects/...)
3) dev, test, submit, dev, test, submit, ...
4) once development is complete, they can re-integrate back into //Repo/...
I guess the last part is the problem, as files need to be checked out! Has anyone implemented something similar? Any suggestions are much appreciated.
There is no way (that I know of) to enforce this type of workflow in P4. You could try to enforce it by setting commit triggers, restricting permissions, or locking files; however, I believe it would only result in more work (micro-management) and frustrate you and your team.
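For reference, if you did want to experiment with the trigger route, a change-submit trigger is just an external script the server runs before accepting a submit; exiting non-zero rejects the change. A bare-bones sketch, with the trigger table entry, user names, and depot paths purely illustrative (verify against the p4 triggers documentation for your server version), though as noted, policy is usually the better tool:

```python
#!/usr/bin/env python3
# Sketch of a Perforce change-submit trigger that blocks direct submits to the
# protected depot path for everyone except a few integrators.
# Illustrative trigger table entry (p4 triggers):
#   protect-main change-submit //Repo/... "python3 /p4/triggers/protect_main.py %user%"
import sys

ALLOWED_SUBMITTERS = {"buildmaster", "lead-dev"}   # assumption: your integrators

def main() -> int:
    user = sys.argv[1] if len(sys.argv) > 1 else ""
    if user in ALLOWED_SUBMITTERS:
        return 0   # exit 0: allow the submit
    # Anything printed here is shown to the user whose submit was rejected.
    print("Direct submits to //Repo/... are restricted; "
          "branch to //Projects/... and ask an integrator to merge.")
    return 1       # non-zero exit rejects the submit

if __name__ == "__main__":
    sys.exit(main())
```

Note that the path filter in the trigger table entry is what limits this to //Repo/..., so the script itself only has to decide based on the submitting user.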
The best way to establish and enforce any SCM workflow is to set it as company/studio policy. Your team should be responsible for (and capable of) following the set procedure and determining, by themselves or through discussion, whether an issue can be fixed in the main line.
One note about the proposed workflow: creating a new branch for every issue will eventually cause problems, and at some point you will need to perform maintenance on the server to conserve disk space and keep depot browsing fast.
For more information on (over)branching in Perforce, read this Perforce blog entry from 2009: Perforce Anti-Patterns Part 2: Overuse of branching.
In many studios using Perforce, most developers have their own "working" branch which they continually re-use whenever there are changes that are not safe or able to be performed in the main line.
If I understand your question properly, you should try the shelving and working-offline features of Perforce. Process is the main thing needed to achieve success in this scenario, so you might need to set up the right process to make this work.
For more info about shelving and working offline with Perforce, you can try the following link:
http://www.perforce.com/perforce/doc.current/manuals/cmdref/shelve.html