Dynamically set file path in log4rs - rust

I already asked this question in the Rust subreddit but wanted to ask it here too.
I'm using the log4rs crate and want to find a way to generate more than one log file. I have a YAML file set up with the file appender created, and am trying to have the path be unique so it doesn't have to either append or truncate the original file.
appenders:
  file:
    kind: file
    path: "log/{h({d(%H:%M:%S)})}.log"
But this does not work and gives me this error:
log4rs: error deserializing appender file: The filename, directory name, or volume label syntax is incorrect. (os error 123)
I know that log4rs has a way to do patterns but it doesn't seem to work specifically for the path parameter.
I also saw this other crate called log4rs_routing_appender which looks promising but I don't know if I will need to use that.
Finally, I want to be able to do this non-programmatically (i.e. only with one YAML file), and am wondering if that's possible within log4rs.
Thanks a lot!

I do not believe what you want is possible with YAML configuration. However, log4rs provides another way to build its logger, through log4rs::Config::builder():
use log::LevelFilter;
use log4rs::append::file::FileAppender;
use log4rs::config::{Appender, Config, Root};
use log4rs::encode::pattern::PatternEncoder;

fn main() {
    // get the current date; format it without colons, which are not
    // valid in Windows file names (the cause of os error 123)
    let date = chrono::Utc::now().format("%Y-%m-%d_%H-%M-%S");
    // create the log file appender, naming the file after the date
    let logfile = FileAppender::builder()
        .encoder(Box::new(PatternEncoder::default()))
        .build(format!("log/{}.log", date))
        .unwrap();
    // add the logfile appender to the config
    let config = Config::builder()
        .appender(Appender::builder().build("logfile", Box::new(logfile)))
        .build(Root::builder().appender("logfile").build(LevelFilter::Info))
        .unwrap();
    // init log4rs
    log4rs::init_config(config).unwrap();
    log::info!("Hello, world!");
}
I understand that you want to use YAML configuration. However, as you said, patterns do not seem to work with the path variable, seeing as writing this fails:
path:
  pattern: "log/requests-{d}-{m}-{n}.log"
Another option would be to manually parse the YAML with serde_yaml (which log4rs actually uses internally) and substitute custom variables using a regex.
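A minimal sketch of that pre-processing idea, assuming a made-up {date} placeholder (plain str::replace stands in for the regex to keep it dependency-free; the rendered string would then be handed to serde_yaml / log4rs):

```rust
// Hypothetical pre-processing step: expand a custom {date} placeholder
// in the raw YAML text before handing it to the log4rs deserializer.
// (A real implementation might use the regex crate for arbitrary
// variable names; str::replace is enough to show the idea.)
fn expand_placeholders(template: &str, date: &str) -> String {
    template.replace("{date}", date)
}

fn main() {
    let template = r#"appenders:
  file:
    kind: file
    path: "log/{date}.log"
"#;
    let rendered = expand_placeholders(template, "2023-01-15");
    assert!(rendered.contains(r#"path: "log/2023-01-15.log""#));
    println!("{}", rendered);
}
```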

I realized that the rolling_file kind makes it so that numbers are automatically appended to the log file names! Here is an example of what I did.
appenders:
  default:
    kind: console
    encoder:
      kind: pattern
      pattern: "{h({d(%H:%M:%S)})} - {m}{n}"
  log_file:
    kind: rolling_file
    append: true
    path: "logs/log.log"
    encoder:
      pattern: "{h({d(%m-%d-%Y %H:%M:%S)})} - {m}{n}"
    policy:
      kind: compound
      trigger:
        kind: size
        limit: 10mb
      roller:
        kind: fixed_window
        base: 1
        count: 100
        pattern: "logs/log{}.log"
root:
  level: info
  appenders:
    - default
    - log_file
This generates log{}.log files (with {} replaced by an incrementing number) within the logs folder whenever the current file reaches 10MB. Since append: true is set, the log file keeps accumulating across runs until it hits the size limit.
Hopefully this helps others too!

Related

Is there a way to update or merge string literals with kustomize?

I'm trying to manage Argo CD projects with helm definitions using kustomize.
Unfortunately Argo manages helm values with string literals, which gives me headaches in conjunction with kustomize configuration.
I have this base/application.yml
apiVersion: argoproj.io/v1alpha1
kind: Application
source:
  chart: something
  helm:
    values: |
      storageClass: cinder-csi
      ... many more lines identical to every stage
and I'd like to create variants using kustomize overlays, where I'd like to add a single line solely important for the dev stage to the base values.
This is NOT working; it simply replaces the existing base definition.
overlay/dev/kustomize.yml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
  - target:
      kind: Application
    patch: |-
      - op: add
        path: /source/helm/value
        value: "storageSize: 1Gi"
To me it seems kustomize cannot append to string literals. My current solution requires repeating the whole values string literal in every stage variant, with just a few lines of difference, which heavily violates the DRY principle.
Any help is appreciated.
There's an open PR to add support for arbitrary YAML in the values field. If merged, I would expect it to be available in 2.4. Reviews/testing are appreciated if you have time!
One workaround is to use the parameters field and set parameters individually. It's not ideal, but maybe could help until 2.4 is released.
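That workaround could look roughly like this in the overlay (a sketch only; the chart and parameter names are taken from the question, and helm.parameters is Argo CD's field for individual value overrides):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
source:
  chart: something
  helm:
    parameters:
      - name: storageSize
        value: "1Gi"
```

Each parameter overrides the corresponding key from the chart's values, so the shared values block can stay in the base while stage-specific settings live in the overlay.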

Puppet: variable value in test file

I'm writing some tests for Puppet, and in my init_spec.rb file I want to use a variable that is declared in the default_facts.yml file. How can I import the value of that variable without having to declare it in the init_spec.rb file?
Thanks in advance!
In general, you would be able to access that data inside the RSpec.configuration object.
Supposing you had a default facts file like this:
▶ cat spec/default_facts.yml
# Use default_module_facts.yml for module specific facts.
#
# Facts specified here will override the values provided by rspec-puppet-facts.
---
concat_basedir: "/tmp"
ipaddress: "172.16.254.254"
is_pe: false
macaddress: "AA:AA:AA:AA:AA:AA"
You could address that data in your tests like this:
it 'ipaddress default fact' do
  expect(RSpec.configuration.default_facts['ipaddress']).to eq '172.16.254.254'
end
(I am assuming of course that your default facts file was set up correctly, e.g. by PDK.)
If instead you just want a general way to access the data in any arbitrary YAML file, you can also do this:
▶ cat spec/fixtures/mydata.yml
---
foo: bar
Then in your tests you can write:
require 'yaml'

mydata = YAML.load_file('spec/fixtures/mydata.yml')

describe 'test' do
  it 'foo' do
    expect(mydata['foo']).to eq 'bar'
  end
end

Using env variables in swagger.yaml in nodejs

Trying to figure out how I can access an env variable inside the swagger.yaml configuration file.
The variable can be accessed inside the nodejs application using process.env.VARNAME. I want to use the same variable inside the swagger.yaml file.
Something like:
definitions:
  myvariabledetail: "${process.env.VARNAME}"
I already tried different combinations, including "${process.env.VARNAME}", ${process.env.VARNAME}, ${VARNAME}, etc.
YAML as a text file format doesn't know anything about environment variables. A solution would be to load the YAML and then have code that uses a regex to find the environment variables and replace them with the current values. Then finally pass that resulting string into your YAML parser.
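A minimal sketch of that approach (the expandEnv helper and the ${VARNAME} placeholder syntax are made up for illustration; unknown variables are left untouched, and the resulting string would then go to your YAML parser):

```javascript
// Expand ${VARNAME} placeholders in raw YAML text using process.env,
// before the text is handed to a YAML parser.
function expandEnv(text) {
  return text.replace(/\$\{(\w+)\}/g, (match, name) =>
    name in process.env ? process.env[name] : match
  );
}

process.env.VARNAME = 'some-value';
const raw = 'definitions:\n  myvariabledetail: "${VARNAME}"\n';
console.log(expandEnv(raw));
```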
You can use envsub:
const envsub = require('envsub');

envsub({
  templateFile: `${__dirname}/input.yml`,
  outputFile: '/dev/null', // or a filename to save the result
})
  .then(({ outputContents }) => console.log(outputContents));

Writing Logstash config file

Hello and thank you for your time.
I need to create a configuration file for which this is the input:
2017-02-14T13:39:33+02:00 PulseSecure: 2017-02-14 13:39:33 - ive - [10.16.4.225] dpnini(Users)[] - Testing Password realm restrictions failed for dpnini/Users
and this is the required text file output:
{"timestamp":"2017-02-14T13:39:33+02:00","vendor":"PulseSecure","localEventTime":"2017-02-14 13:39:33","userIP":"10.16.4.225","username":"dpnini","group":"Users","vpnMsg":"Testing Password realm restrictions failed for dpnini/Users\r"}
All I know is that I start Logstash with "bin/logstash -f logstash-simple.conf".
I also know that the file I need to change is a YML file inside the config folder.
Thank you!
Logstash conf file (your logstash-simple.conf) is composed of three parts: input, filter, output. Input/output is source/destination of your data, while filter defines data transformation.
Check elastic page for samples:
https://www.elastic.co/guide/en/logstash/current/config-examples.html
What you actually need to do is write a grok pattern inside the filter to split your text into the tokens/fields that you have in your JSON. A simple description of grok:
http://logz.io/blog/logstash-grok/

Reading from rotating log files in logstash

As per the documentation of logstash's file plugin, the section on File Rotation says the following:
To support programs that write to the rotated file for some time after
the rotation has taken place, include both the original filename and
the rotated filename (e.g. /var/log/syslog and /var/log/syslog.1) in
the filename patterns to watch (the path option).
If anyone can clarify how to specify two filenames in the path configuration, that will be of great help as I did not find an exact example. Some examples suggest to use wild-cards like /var/log/syslog*, however I am looking for an example that achieves exactly what is said in documentation - two filenames in the path option.
The attribute path is an array and thus you can specify multiple files as follows:
input {
  file {
    path => [ "/var/log/syslog.log", "/var/log/syslog1.log" ]
  }
}
You can also use * notation for name or directory as follows:
input {
  file {
    path => [ "/var/log/syslog.log", "/var/log/syslog1.log", "/var/log/*.log", "/var/*/*.log" ]
  }
}
When you specify path as /var/*/*.log it does a recursive search to get all files with .log extension.
Reference Documentation
