io.undertow : undertow-core : 2.2.20.Final - CVE-2016-6311 vulnerability - security

Mitigation:
You can add a filter via the JBoss CLI that sets the Host header to 'myvirtualhost.com' when the Host header is not present, e.g.:
/subsystem=undertow/configuration=filter/expression-filter=hostname:add(expression="header(header=Host, value=myvirtualhost.com)")
/subsystem=undertow/server=default-server/host=default-host/filter-ref=hostname:add(predicate="not exists(%{i,Host})")
Question:
Where do I add this filter?
I have not tried it yet, because I am unsure where to add this filter.
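For context, these are management operations, so they are run through the JBoss CLI against a live server rather than edited into a configuration file by hand. A minimal sketch, assuming a standalone WildFly/JBoss EAP instance with the default management interface:
$JBOSS_HOME/bin/jboss-cli.sh --connect
# paste the two :add commands above at the CLI prompt, then:
reload
The CLI persists the change into the undertow subsystem of standalone.xml, so the filter survives restarts.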

Related

Logstash (ELK): Enrich IP with hostname (based off a file). - (No direct connect w/ DNS/AD)

I'm trying to figure out how to enrich network data (Zeek/Suricata) being ingested. I would like to either show the hostname instead of the IP or, preferably, add another field for the hostname based on the IP address.
I have a file with IP -> hostname mappings (CSV currently; it could be any other format if required). I am unable to resolve IP to hostname with DNS, Active Directory, or any other connected means.
I know that in Splunk you can do lookup tables, but I'm unsure how to accomplish the same in the ELK stack and view the results in Kibana.
You could do this in Logstash using a translate filter, which requires a two-column CSV file (or YAML, or JSON). You could try:
filter {
  translate {
    source          => "[fieldWithIP]"          # field containing the IP to look up
    dictionary_path => "/path/to/mapping.csv"   # two-column IP,hostname file
    target          => "[fieldForHostname]"     # field the hostname is written into
  }
}
This would add a new field called [fieldForHostname] if the value of [fieldWithIP] is found in column 1 of mapping.csv.
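For reference, the dictionary is just a two-column lookup; a minimal mapping.csv might look like this (addresses and hostnames are made up):
10.0.0.5,webserver01.local
10.0.0.6,dbserver01.local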

Puppet - How to write yaml files based on Role/Profile method

I've added our infrastructure setup to Puppet using the roles and profiles method. Each profile resides inside a group, based on its nature. For example, the chronyd setup and the message of the day are in the "base" group, while nginx-related configuration is in the "app" group. Likewise, in the roles, each profile is added to the corresponding group. For example, for memcached we have the following:
class role::prod::memcache inherits role::base::debian {
  include profile::app::memcache
}
The profile::app::memcache class has been set up like this:
class profile::app::memcache {
  service { 'memcached':
    ensure     => running,
    enable     => true,
    hasrestart => true,
    hasstatus  => true,
  }
}
and for role::base::debian I have:
class role::base::debian {
  include profile::base::motd
  include profile::base::chrony
}
The above structure has proved flexible enough for our infrastructure; adding services and creating new roles could not be easier. But now I face a new problem. I've been trying to separate data from logic and write some YAML files to keep the data in, using Hiera version 5. I've been searching the internet for a couple of days, but I cannot work out how to write my Hiera files based on the structure I have. I tried adding profile::base::motd to common.yaml and did a puppet lookup, and it works fine, but I could not get chrony working the same way. puppet lookup returns nothing with the following common.yaml contents:
---
profile::base::motd::content: 'This server access is restricted to authorized users only. All activities on this system are logged. Unauthorized access will be liable to prosecution.'
profile::base::chrony::servers: 'ntp.centos.org'
profile::base::chrony::service_enable: 'true'
profile::base::chrony::service_ensure: 'running'
The motd lookup works fine, but for the rest, no luck: puppet lookup profile::base::chrony::servers returns no output. I don't know what I'm missing here and would really appreciate the community's help on this one.
Also, when using Hiera, is the following enough code for a service's Puppet file?
class profile::base::motd {
  class { 'motd':
  }
}
PS: I know I can add YAML files inside modules to keep the data, but I want my .yaml files to reside in one place (e.g. $PUPPET_HOME/environments/production/data) so I can manage the code with git.
The issue was that in the init.pp file inside the Puppet module itself, the variable $content was assigned a value. Removing that value fixed the problem.
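For reference, keeping all data under the environment is what Hiera 5's environment layer is for. A minimal sketch of an environment-level hiera.yaml (placed in the environment directory, next to the data/ directory that holds common.yaml; paths are assumptions):
---
version: 5
defaults:
  datadir: data          # relative to the environment directory
  data_hash: yaml_data   # built-in YAML backend
hierarchy:
  - name: 'Common data'
    path: 'common.yaml'
With that in place, puppet lookup profile::base::chrony::servers --environment production should resolve against data/common.yaml.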

CanonicalDocFlowSegment is blank or null

I am facing this issue when an order is placed in Hybris and sent to CRM in the backend (I checked the business process flow in Backoffice and the status is OK).
2020-12-11 14:30:08,669 [DEBUG] [c.h.d.c.i.CompositionChainRunnerStrategy] Integration Key generation for canonical item CanonicalItem{status=ERROR, dataPool=DataHubPoolEntity{id=9306, name=SAPORDER_OUTBOUND_POOL}, fields={precedingDocumentId=null, orderId=0006200128}} failed.
com.hybris.datahub.composition.key.IncompleteKeyException: Value for attribute precedingDocumentId of canonical item CanonicalDocFlowSegment is blank or null.
In the Tomcat server console, after this IncompleteKeyException, I can see that IDocs are generated for the above order ID (0006200128).
So the question is: what exactly does "canonical item CanonicalDocFlowSegment is blank or null" mean, and how can I resolve it?
The issue, precedingDocumentId of canonical item CanonicalDocFlowSegment is blank or null, was resolved by removing the sapreturnorders-related jars from the library. These jars are mainly required in the case of return orders, i.e., all three jars for sapreturnorder (raw, canonical and target) in the Tomcat library.
After removing the jars, follow the steps below (see the sketch after the list):
1. Restart the Catalina server (Tomcat).
2. Do an InitialLoad from Backoffice.
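A minimal sketch of the removal step, assuming a standard Tomcat layout; the jar file names are hypothetical, so list them first and confirm the actual artifacts in your installation:
# stop Tomcat before touching the library
$CATALINA_HOME/bin/shutdown.sh
# confirm the actual sapreturnorder jar names, then remove all three (raw, canonical, target)
ls $CATALINA_HOME/lib/*sapreturnorder*
rm $CATALINA_HOME/lib/*sapreturnorder*.jar
# start Tomcat again, then trigger the InitialLoad from Backoffice
$CATALINA_HOME/bin/startup.sh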
For more details, refer to: https://help.sap.com/viewer/search?q=precedingDocumentID&state=PRODUCTION&language=en-US&format=standard,html,pdf,others

How to search by arbitrary fields using field selector with kubectl?

In this doc the supported fields are not listed, and I cannot find them documented anywhere. With some trial and error I noticed the following:
This works nicely and finds some pods:
kubectl get pods --field-selector=spec.restartPolicy=Never
But this produces an error:
kubectl get pods --field-selector=spec.serviceAccount=default
No resources found.
Error from server (BadRequest): Unable to find {"" "v1" "pods"} that match label selector "", field selector "spec.serviceAccount=default": field label not supported: spec.serviceAccount
So how is this decided? I know I can filter with JSONPath, but that is client-side filtering AFAIK.
You can select the service account using the following query:
kubectl get pods --field-selector=spec.serviceAccountName="default"
The --field-selector flag currently supports only equality-based selection, and even then only for a very limited set of fields. The following fields are supported by --field-selector:
metadata.name
metadata.namespace
spec.nodeName
spec.restartPolicy
spec.schedulerName
spec.serviceAccountName
status.phase
status.podIP
status.nominatedNodeName
As you already know, for any field other than the ones above you need to fall back on JSONPath, as in the sketch below.
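For example, a client-side equivalent that filters on a field the server does not support (spec.dnsPolicy is just an illustrative choice):
kubectl get pods -o jsonpath='{range .items[?(@.spec.dnsPolicy=="ClusterFirst")]}{.metadata.name}{"\n"}{end}'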
You can visit the following link to find out more:
https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/v1/conversion.go#L160-L167

Partial Update Document without script Elasticsearch

I am using the following code for a partial update:
POST /website/blog/1/_update
{
  "script" : "ctx._source.views+=1"
}
Is there any alternative way I can achieve the same thing? I don't want to change anything in the Groovy scripting settings, because the last time I changed them my server was compromised.
So please help me with a solution, or with some security measures if there is no workaround.
No, you cannot dynamically change a field value without using a script.
You can use file-based scripts though, which means that you can disable dynamic scripting (the default in ES 1.4.3+) while still using scripting in a safe, trusted way. Place the script under the node's config directory:
config/
  elasticsearch.yml
  logging.yml
  scripts/
    your_custom_script.groovy
The script file itself would contain:
ctx._source.views += your_param
Once stored, you can then access the script by name, which bypasses dynamic scripting.
POST /website/blog/1/_update
{
  "script": "your_custom_script",
  "params" : {
    "your_param" : 1
  }
}
Depending on the version of Elasticsearch, the script parameter is named differently (e.g., ES 2.0 uses "inline" for dynamic scripts and "file" for file-based scripts), but this should get you off the ground.
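For example, on ES 2.x the same file-based update would look roughly like this (same hypothetical script and parameter names as above):
POST /website/blog/1/_update
{
  "script": {
    "file": "your_custom_script",
    "params": {
      "your_param": 1
    }
  }
}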
