opensnitch: changing a "process.path" rule to match command args - security

Opensnitch intro
opensnitch is an open-source security tool modeled after the macOS Little Snitch app.
I've been using Gustavo Iniguez Goya's fork of opensnitch (a big improvement over the original, pioneering work by Simone Margaritelli) on my desktop to limit outgoing connections based on rules. The goal is to beef up outgoing network security, for example to catch malware or to keep some "phone-home" apps from talking to the outside world.
Configuration/rules
The rules that drive opensnitch are created under /etc/opensnitchd/rules and stored as *.json files, one file per rule. When I use the UI to add a rule, a new *.json rule file gets created.
Example of a rule (trimmed down for brevity):
{
  "name": "allow-always-simple-usrbinpython",
  "enabled": true,
  "action": "allow",
  "duration": "always",
  "operator": {
    "type": "simple",
    "operand": "process.path",
    "data": "/usr/bin/python"
  }
}
Problem
These rules can be too coarse when set from the UI. For example, when I allow a certain script I wrote to talk to the outside world, and that executable just happens to be written in Python, then by selecting the executable option and clicking Allow I inadvertently allow any Python script to talk to the outside world.
Searching the web, I was able to find a nice overview of opensnitch, but it is missing the detail of how to specify conjunctive rules directly in *.json and match the full command line, with examples.
Questions:
Is it possible to narrow such a rule so that it allows only a certain executable script (the 1st arg to /usr/bin/python)?
More generally: what would be the syntax, with an example, for an AND conjunction in a rule, and for a clause that regex-matches other command-line arguments, remote IP addresses, or both?

Is it possible to narrow such a rule so that it allows only a certain executable script (the 1st arg to /usr/bin/python)?
You can select the option "from this command line" to filter by the whole command.
More generally: what would be the syntax, with an example, for an AND conjunction in a rule, and for a clause that regex-matches other command-line arguments, remote IP addresses, or both?
Take a look at the documentation (maybe you already did, but just in case):
https://github.com/gustavo-iniguez-goya/opensnitch/wiki/Rules
https://github.com/gustavo-iniguez-goya/opensnitch/wiki/Rules-editor
For example, if you wanted to filter by a particular (python) script:
[x] From this command line: ".*/usr/bin/dnsping.*"
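In the rule's *.json, that checkbox corresponds roughly to a regexp operator on process.command. A sketch (the rule name is made up; check the Rules wiki page for the exact fields of your version):
{
  "name": "allow-always-regexp-dnsping",
  "enabled": true,
  "action": "allow",
  "duration": "always",
  "operator": {
    "type": "regexp",
    "operand": "process.command",
    "data": ".*/usr/bin/dnsping.*"
  }
}
For an AND conjunction, e.g. a specific script and a specific destination IP, the wiki describes a list operator that nests several operators. Roughly (again a sketch; depending on the daemon version the nested operators live in a list field or are JSON-encoded into data, and the IP below is a placeholder):
{
  "name": "allow-dnsping-to-one-host",
  "enabled": true,
  "action": "allow",
  "duration": "always",
  "operator": {
    "type": "list",
    "operand": "list",
    "list": [
      { "type": "regexp", "operand": "process.command", "data": ".*/usr/bin/dnsping.*" },
      { "type": "simple", "operand": "dest.ip", "data": "203.0.113.10" }
    ]
  }
}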
(By the way, we are finally contributing to the original repo, so you can use the latest releases from there.)

Related

How do you turn off the ESLint no-undef rule in VS Code settings, but only for certain variables?

I know it's possible to completely turn off the 'no-undef' rule in VS Code's settings.json using something like:
{
  "eslint.rules.customizations": [
    { "rule": "no-undef", "severity": "off" }
  ]
}
What I'd like to do is turn it off in settings ONLY for a handful of specific global variables that are used in most of the 400+ JS files in our codebase. I'd really like to be able to do it without having to add header comments to every file I work with, which is why I'd prefer to do it in the settings.
I did some searching online for how to configure ESLint rules but didn't see anything that addressed this (for that matter, I didn't find anything that would have taught me the method above that turns it off completely; I learned that from another developer).

AuditBeat: How to get resolved symbolic links in published event output?

I am using AuditBeat to monitor filesystem operations performed by an external application under a specific root path, e.g. /var/myapp/myroot.
Configuration
AuditBeat version: 7.14.1
auditd module enabled
OS: centos7
Output: Kafka
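Roughly, the relevant parts of the configuration look like this (a sketch; the audit rule, key and broker address are placeholders rather than my exact settings):
auditbeat.modules:
- module: auditd
  # watch the application root for writes and attribute changes (placeholder key)
  audit_rules: |
    -w /var/myapp/myroot -p wa -k myroot

output.kafka:
  hosts: ["kafka1:9092"]
  topic: "auditbeat-events"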
Event publishing in general works fine. If the user sets the working directory to /var/myapp/myroot, the Kafka event output contains the corresponding paths:
"file": {
"device": "00:00",
...
"path": "/var/myapp/myroot/somefile.txt"
If a user creates a symlink inside their home directory (with ln -s), e.g. /home/user1/myenv pointing to /var/myapp/myroot, the output changes to:
"file": {
"device": "00:00",
...
"path": "/home/user1/myenv/somefile.txt"
Remark: This appears to be application-specific, e.g. events for files created by touch contain resolved paths.
Problem
I need to correlate recorded paths when processing events. Paths containing a symlink cannot easily be matched, as the link name is chosen by the user. I would like AuditBeat to provide the resolved path in case the application's operation occurred on a symbolic link.
I have browsed the AuditBeat configuration settings, and specifically the exported fields (AuditBeat Exported Fields), and was only able to find
file.target_path
Target path for symlinks.
file.type
File type (file, dir, or symlink).
in the ECS fields, which is apparently a uniform schema targeted towards Elasticsearch.
However, these fields are not contained in my published event output; only an ecs field with a version is present:
"ecs": {
"version": "1.10.0"
},
Maybe ECS field output is not enabled because I am using the Kafka output. Also, there are tons of ECS fields I don't require and would not like to include in published events.
Question
Is there any way to instruct AuditBeat, or specifically its auditd module, to resolve symbolic links?
Can I enable ECS output specifically for file.target_path while excluding all other ECS fields (Kafka output is mandatory)?
If not, isn't this a general problem? If someone wants to search for modifications to a specific path, I cannot imagine that you are forced to query for arbitrary symlink-based locations which you might not even be aware of.
I hope that I am overlooking something. I browsed the AuditBeat reference config, but found no option that might be useful.

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a cucumber feature file 'A' that serves to set up the environment (data cleanup and initialization). I want to have it executed before all other feature files run.
It's kind of like the @Before hook as in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work because my feature file 'A' contains hundreds of cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it would be better to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest I found is 'Background'. But that means I could only have one huge feature file with the content of 'A' as the 'Background' at the top, and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run in a single thread, in this order:
Alphabetically by feature file directory
Alphabetically by feature file name within directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically), with a file name that sorts first (alphabetically) in that directory.
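For example, with a (hypothetical) layout like this, the setup feature sorts first and therefore runs first:
features/
  00_setup/
    aaa_init.feature      (first directory, first file name)
  billing/
    invoices.feature
  users/
    login.feature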
That being said, it is generally bad practice to require an execution order in your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files.
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, with a lot of very long steps, like this:
Given the system knows about the following employees
|uuid|user-key|name|nickname|
|1|0101140000|Anna|annie|
... hundreds of lines like this follow ...
We see this long SystemKnows scenario as quite valuable, so that our testers, Product Owner and developers have a baseline of what data is in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we relied on the alphabetic naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a background with the single line Given(~'^I set test data for all feature files$')
In the step definition, have a factory create the test data, and make sure inside the factory method that it is only created once, like testFactory.createTestData()
In this way, you have both the convenience of expressing the reference setup as a scenario, which enhances team communication, and a stable test setup.
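A rough cucumber-jvm sketch of such a guarded step definition (the class, field and TestDataFactory names are made up, and the Given import depends on your Cucumber version):
import cucumber.api.java.en.Given;

public class TestDataSteps {

    // guard so the reference data is only created once per test run
    private static boolean testDataCreated = false;

    @Given("^I set test data for all feature files$")
    public void iSetTestDataForAllFeatureFiles() {
        if (!testDataCreated) {
            // TestDataFactory is hypothetical; it stands in for testFactory.createTestData() above
            TestDataFactory.createTestData();
            testDataCreated = true;
        }
    }
}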
Hope this is helpful!
Agata

Getting/Setting File Permissions From Rebol

Is it possible to change file permissions within Rebol 3 without relying on CALLing CHMOD? Rebol 2 had 'set-modes, though it doesn't appear to be available any longer:
permissive-access: [
    owner-read: group-read: world-read:
    owner-write: group-write: world-write: #[true]
    owner-execute: group-execute: world-execute: #[false]
]
set-modes file permissive-access
At the moment, no, you have to use call.
It is planned to add back the port mode getting and setting capabilities, but the API needs a revamp first and we haven't started the discussion for that yet. The port model is mostly different in Rebol 3, so the port mode model is going to have to be different too. Feel free to get the discussion started.
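For now, something along these lines should do (a rough sketch, assuming a Unix-like system and that file holds the target file! value; 666 mirrors the read/write-only modes above):
call/wait rejoin ["chmod 666 " to-local-file file]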

Guard and Cucumber: when I edit a step definition I'd like to only run features that implement this step

I have read the topic Guardfile for running single cucumber feature in subdirectory?, and this works great: when I change a feature, only that feature is run by Guard.
But in the other direction it doesn't work: when I edit any step definition file, all features are always run, whether they use any of the steps in that step definition file or not.
This is not nice. At the least, I'd like only those features to be run which use any of the steps in the edited file; but even better would be if Guard could see which step is currently being edited, and then run only the features that use this specific step.
The first shouldn't be that hard to accomplish, I guess; the second rather seems like wishful thinking...
To master Guard and have the perfect setup for your projects and your own needs, you have to change the Guardfile and configure your watchers accordingly. The templates that come with each Guard plugin try to match the most useful behavior for most users, which might differ from your personal preferences.
Each Guard plugin starts with the guard DSL method, followed by an options hash to configure the Guard plugin. The options are often different for different Guard plugins and you have to consult the plugin README for more information.
Within the guard block do ... end you normally configure your watchers. A watcher must be defined with a RegExp, which describes the files to be watched. I use Rubular to test my watchers, and you can paste your current feature paths copied from the output of find features to have real files to test your RegExp against.
The line
watch(%r{features/.+\.feature})
for example watches all files in the features folder that end with .feature. Since there is no block provided to the watcher, the matched file is passed unmodified to Guard::Cucumber for running.
The watcher
watch(%r{features/support/.+}) { 'features' }
matches all files in the features/support directory, and because the block always returns features, every time a file within the support directory changes, features is passed to Guard::Cucumber and thus all features are executed.
The last line
watch(%r{features/step_definitions/(.+)_steps\.rb}) do |m|
Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
end
watches every file that ends with _steps.rb in the features/step_definitions directory and tries to match a feature for the step definition. Please notice the parentheses in the RegExp features/step_definitions/(.+)_steps\.rb. This defines a match group that is available later in your watcher block. For example, a step definition features/step_definitions/user_steps.rb will match, and the first match group (m[1]) will contain the value user.
Now we try to find a matching file in all subdirectories (**) that is named user.feature. If one is found, the first matching file ([0]) is run; if nothing is found, all features are run.
So it looks like you've named your steps differently from what the default Guard::Cucumber Guardfile is expecting, which is totally fine. Just change the watcher to match your naming convention, as sketched below.
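For example, if your step definition files were simply named features/step_definitions/user.rb (a made-up convention without the _steps suffix), the watcher could become:
watch(%r{features/step_definitions/(.+)\.rb}) do |m|
  Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
end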
