I have added the following file on my Cassandra node:
/etc/dse/cassandra/metrics-reporter-config.yaml
csv:
  -
    outdir: '/mnt/cassandra/metrics'
    period: 10
    timeunit: 'SECONDS'
    predicate:
      color: "white"
      useQualifiedName: true
      patterns:
        - "^org.apache.cassandra.metrics.Cache.+"
        - "^org.apache.cassandra.metrics.ClientRequest.+"
        - "^org.apache.cassandra.metrics.CommitLog.+"
        - "^org.apache.cassandra.metrics.Compaction.+"
        - "^org.apache.cassandra.metrics.DroppedMetrics.+"
        - "^org.apache.cassandra.metrics.ReadRepair.+"
        - "^org.apache.cassandra.metrics.Storage.+"
        - "^org.apache.cassandra.metrics.ThreadPools.+"
        - "^org.apache.cassandra.metrics.ColumnFamily.+"
        - "^org.apache.cassandra.metrics.Streaming.+"
And then I added this line to /etc/dse/cassandra/cassandra-env.sh:
JVM_OPTS="$JVM_OPTS -Dcassandra.metricsReporterConfigFile=metrics-reporter-config.yam"
And then I finally restarted DSE with /etc/init.d/dse restart.
I don't see any CSV metrics files being written out by the MetricsReporter in the /mnt/cassandra/metrics folder.
Any ideas why?
Check the logs for something like:
Trying to load metrics-reporter-config from file followed by Enabling CsvReporter to ...
Possibly the metrics reporter could not create the metrics directory, or something similar.
In my case, I just had to change csv: to console:
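If neither log line shows up at all, it is worth double-checking the JVM option itself; note that the value in the question ends in .yam rather than .yaml, and a relative filename may not resolve against the conf directory on every version, so an absolute path is the safer bet. A minimal hedged sketch of the option and a log check (the system.log location is an assumption for a typical DSE install):

# In /etc/dse/cassandra/cassandra-env.sh: absolute path, full .yaml extension
JVM_OPTS="$JVM_OPTS -Dcassandra.metricsReporterConfigFile=/etc/dse/cassandra/metrics-reporter-config.yaml"

# After restarting, look for the reporter messages in the system log
grep -i "metrics-reporter-config\|CsvReporter" /var/log/cassandra/system.log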
I know there are a lot of similar questions out there, but none of them has a proper answer. I am trying to deploy my code using a GitLab CI/CD pipeline. While executing the deployment stage, the pipeline fails with the error shown below.
My serverless.yml has this code related to excludes:
package:
  patterns:
    - '!nltk'
    - '!node_modules/**'
    - '!package-lock.json'
    - '!package.json'
    - '!__pycache__/**'
    - '!.gitlab-ci.yml'
    - '!tests/**'
    - '!README.md'
The error I am getting is:
Serverless Error ----------------------------------------
No file matches include / exclude patterns
I forgot to mention: I have an nltk layer which I am deploying in the same serverless.yml as my Lambda function and other resources.
I am not sure what exactly has to be done to get rid of the error. Any help would be appreciated, thank you.
Your directives do not define any inclusive patterns. Perhaps you want to list the files and directories you need packaged. Each pattern builds on the ones before it.
Something like:
package:
  patterns:
    - "**/**"
    - '!nltk'
    - '!node_modules/**'
    - '!package-lock.json'
    - '!package.json'
    - '!__pycache__/**'
    - '!.gitlab-ci.yml'
    - '!tests/**'
    - '!README.md'
See https://www.serverless.com/framework/docs/providers/aws/guide/packaging/#patterns
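Because the patterns are evaluated in order, a later pattern can re-include something an earlier one excluded. A small illustrative sketch (the directory names are hypothetical):

package:
  patterns:
    - '**/**'              # start by including everything
    - '!tests/**'          # then exclude the tests directory
    - 'tests/fixtures/**'  # then re-include just the fixtures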
I'm trying (and struggling) to implement a (multiple) role model in Hiera.
I've worked with exactly this model as a user for the last two years, and now I want to rebuild the same structure on my own. For example, my node YAML should contain only the roles I want to apply to the host:
/etc/puppetlabs/code/environments/production/nodes/my.host.de.yaml
classes:
  - ydixken_baseinstall
  - additional_modules
[...]
For me it's far more intuitive to place a YAML file in the roles/ directory, named after the role, and avoid dealing with profiles:
/etc/puppetlabs/code/environments/production/roles/ydixken_baseinstall.yaml
classes:
  - apt
  - unattended_upgrades
  - [...]
apt::update:
  frequency: 'daily'
  loglevel: 'debug'
[...]
Placing the role definitions in a node fact is not practical for me. It would also be nice to allow customization of the already-defined values inside the node configuration, if needed.
Right now my directory layout and hiera.yaml files look like this:
/etc/puppetlabs/puppet/hiera.yaml
version: 5
defaults:
  datadir: /etc/puppetlabs/code/environments/production
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    paths:
      - "nodes/%{fqdn}.yaml"
      - "roles/%{role}.yaml"
      - common
/etc/puppetlabs/code/environments/production/hiera.yaml
version: 5
defaults:
hierarchy:
  - name: "FQDN"
    path: "nodes/%{fqdn}.yaml"
  - name: "Roles"
    path: "roles/%{role}.yaml"
  - name: "Common Data"
    path: "common.yaml"
/etc/puppetlabs/code/environments/production/manifests/site.pp
hiera_include('classes')
How can I achieve this?
My current error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, Could not find class ::ydixken_baseinstall for my.host.de (file: /etc/puppetlabs/code/environments/production/manifests/site.pp, line: 1, column: 1) on node my.host.de
I've found exactly what I was looking for: r10k.
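For anyone who hits the same error before reaching for r10k: hiera_include('classes') can only include classes that actually exist on the module path, so each entry in a classes array needs a backing Puppet class, not just a Hiera YAML file. A minimal sketch of such a role class, assuming a conventional module layout (the class body is illustrative, not from this thread):

# modules/ydixken_baseinstall/manifests/init.pp
# Role class that pulls in the component classes the Hiera data refers to.
class ydixken_baseinstall {
  include apt
  include unattended_upgrades
}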
I am using Filebeat > Logstash > Elasticsearch > Kibana to parse and analyse logs, mostly Java stack traces and other logs.
Here is the YML for Filebeat:
filebeat:
  prospectors:
    -
      paths:
        - C:\logs\OCR\example.log
      input_type: log
      #document_type: UAT_EXAMPLE
      exclude_lines: [".+DEBUG"]
      multiline:
        pattern: ".+(ERROR|INFO)"
        negate: true
        match: after
      fields:
        app_name: EXAMPLE_APP
        environment: UAT
      fields_under_root: true
      #force_close_files: true
  spool_size: 2048
  #publish_async: true
  #scan_frequency: 10s
  #close_older: 2h

output:
  logstash:
    host: "10.0.64.14"
    port: 5044
    index: filebeat
    timeout: 5
    reconnect_interval: 3
    bulk_max_size: 2048

shipper:
  tags: ["ABC_Engine", "UAT_EXAMPLE"]
  queue_size: 1000

### Enable logging of the filebeat
logging:
  level: warning
  to_files: true
  files:
    path: c:\logs\
    name: mybeat.log
    rotateeverybytes: 20485760 # = 20MB
    keepfiles: 7
Enabling logging for Filebeat (the logging section above) is also not working on Windows. Let me know if I am missing anything here.
Problem: Filebeat is not able to send logs to Logstash reliably; sometimes it starts shipping, but sometimes it doesn't.
However, if I use "test.log" as a prospector and save logs locally to disk via the config below, it works well.
I am writing to a local file to check the output, and have tried the "file" output and the "logstash" output one by one.
output:
  file:
    path: c:\logs\
    filename: filebeat
    rotate_every_kb: 100000
    number_of_files: 7
Also, things mostly work when I run Filebeat from the command line:
.\filebeat.exe -c filebeat.yml -e -v
Kindly assist with the correct config for Windows.
The log file "example.log" is rotated at every 30 MB of size.
I am not very sure how to use the attributes below, or how they function with Filebeat on Windows (see the sketch after this list):
"close_older"
"ignore_older"
"Logging"
To output to Logstash, comment out the elasticsearch section and enable the logstash section (under output):

logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Keep the square brackets around the hosts list.
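To verify the edited file before restarting, Filebeat's config test flag can help; a quick hedged sketch from a Windows prompt (assuming the 1.x CLI, run from the Filebeat directory):

# Parse filebeat.yml and report errors without shipping any events
.\filebeat.exe -c filebeat.yml -configtest -e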
And here is an example config for logging in debug mode:
logging:
  # Send all logging output to syslog. On Windows the default is false,
  # otherwise the default is true.
  #to_syslog: true

  # Write all logging output to files. Beats automatically rotate files if the
  # rotateeverybytes limit is reached.
  # To enable logging to files, the to_files option has to be set to true.
  to_files: true

  files:
    # The directory where the log files will be written to.
    #path: /var/log/mybeat
    path: c:\PROGRA~1/filebeat

    # The name of the files where the logs are written to.
    name: filebeat.log

    # Configure the log file size limit. If the limit is reached, the log file
    # will be automatically rotated.
    rotateeverybytes: 10485760 # = 10MB

    # Number of rotated log files to keep. Oldest files will be deleted first.
    #keepfiles: 7

  # Enable debug output for selected components. To enable all selectors use ["*"].
  # Other available selectors are beat, publish, service.
  # Multiple selectors can be chained.
  #selectors: [ ]

  # Sets the log level. The default log level is error.
  # Available log levels are: critical, error, warning, info, debug
  level: debug
The logging settings belong in the logging section, and the output is either logstash or elasticsearch. If you want to know how to install Filebeat as a service, see the elastic.co website:
https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-installation.html
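On Windows that install typically boils down to the PowerShell script shipped in the Filebeat zip; a hedged sketch (script name per the packaged Windows distribution, run from an elevated prompt in the Filebeat directory):

# Register Filebeat as a Windows service, then start it
PS C:\Program Files\Filebeat> .\install-service-filebeat.ps1
PS C:\Program Files\Filebeat> Start-Service filebeat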
I'm trying to connect Pentaho 5.4 (Kitchen job runs, to be precise) to our Logstash server. To do that, I need to edit the Log4j config file so it will use the new appender.
I searched every JAR file and every folder, but to no avail. Pentaho 6 comes with several log4j.xml files, but none of them seems to be used for the output either. Also, the -Dlog4j.debug flag does not give me any useful info. Am I barking up the wrong tree?
A typical log looks like this:
Kitchen.bat -Dlog4j.debug /file "samples\jobs\changelog\Process changelog.kjb"
..
2016/04/14 15:24:17 - Kitchen - Start of run.
2016/04/14 15:24:18 - Process changelog - Start of job execution
..
2016/04/14 15:24:21 - General - Change log processing 2 - Finished processing (I=0, ... E=0)
Any help is greatly appreciated!
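One hedged avenue, not from the original thread: log4j 1.x can be pointed at an explicit config file via the log4j.configuration system property, and Kitchen.bat passes extra JVM options through the PENTAHO_DI_JAVA_OPTIONS environment variable, so something like the following might at least reveal whether a given file is picked up (the paths are hypothetical):

REM Point log4j at an explicit config, with debug output, before invoking Kitchen
set PENTAHO_DI_JAVA_OPTIONS=-Dlog4j.configuration=file:///C:/pentaho/log4j.xml -Dlog4j.debug
Kitchen.bat /file "samples\jobs\changelog\Process changelog.kjb"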
I am using Eclipse and I am creating a web project. I am also using SLF4J for storing logs in a log file.
I am putting the log4j.properties file into the WEB-INF/classes folder.
When I access the login page, all of the associated logs as well as my own debug statements are written to the log file, but I want only a specific (i.e., project-specific) log to go into the file instead of the other logs.
In the log file I can see:
10/20/2010 10:16:24 - DEBUG - org.apache.commons.digester.Digester.sax
.
10/20/2010 10:16:24 - DEBUG - org.apache.commons.digester.Digester - [ObjectCreateRule]{resource-config/resource}New org.ajax4jsf.javascript.AjaxScript
.
10/20/2010 10:16:24 - DEBUG - org.apache.commons.digester.Digester - [ObjectCreateRule]{resource-config/resource}New
.
Can anyone help me find out how to store only the project-related logs in the log file?
Have something like the following in your log4j.properties file:
log4j.logger.com.foo.logingubbins=DEBUG, loginfilelogger
log4j.appender.loginfilelogger=org.apache.log4j.RollingFileAppender
log4j.appender.loginfilelogger.File=c:/logs/login.log
log4j.appender.loginfilelogger.layout=org.apache.log4j.PatternLayout
log4j.appender.loginfilelogger.layout.ConversionPattern=%d [%t] %-5p [%c (%F:%L)] %n \t %m %n
The first line tells log4j to use a dedicated logger for com.foo.logingubbins, which you direct to a separate file. You might want to add an additivity setting as well if you want it entirely separated; see the sketch below.
The log4j manual can give you some useful examples, though I have to admit that the information on additivity is a little hazy.
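For completeness, a hedged sketch of that additivity setting, reusing the logger name from the snippet above: with additivity off, events sent to com.foo.logingubbins stop propagating to the root logger's appenders and appear only in login.log.

# Keep com.foo.logingubbins events out of the root logger's appenders
log4j.additivity.com.foo.logingubbins=false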
Thanks for the reply. I added the following lines to my log4j.properties file:
log4j.logger.org.apache.commons.digester=ERROR
log4j.category.se.bilprovningen.prippe=DEBUG,R
Now I am able to store project-specific logs in the log file.
Thanks,
Arvind