I want to subdue some Sensu checks outside working hours and on weekends. The documentation (Sensu subdue documentation) is not clear on how this works.
'subdue' => {
  'days' => {
    'all' => [
      {
        'begin' => '8:00 PM',
        'end' => '10:00 AM'
      }
    ],
    'saturday' => [
      {
        'begin' => '12:00 AM',
        'end' => '11:59 PM'
      }
    ],
    'sunday' => [
      {
        'begin' => '12:00 AM',
        'end' => '11:59 PM'
      }
    ]
  }
}
My question is: will the specific day override the all attribute?
Also: is there a better way to do this check?
Thanks!
Yes, a specific day overrides the all attribute. You should add the subdue configuration to your client.json file.
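For reference, a minimal sketch of the same windows expressed as JSON in a check definition; the check name, command, interval, and subscribers below are invented placeholders, while the subdue block mirrors your hash:

{
  "checks": {
    "example_check": {
      "command": "check-example.rb",
      "interval": 60,
      "subscribers": ["default"],
      "subdue": {
        "days": {
          "all": [
            { "begin": "8:00 PM", "end": "10:00 AM" }
          ],
          "saturday": [
            { "begin": "12:00 AM", "end": "11:59 PM" }
          ],
          "sunday": [
            { "begin": "12:00 AM", "end": "11:59 PM" }
          ]
        }
      }
    }
  }
}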
I want to use Stripe Upcoming Invoices to display how much a user will be billed when they make changes to their subscription, but it seems I'm missing something.
Why do I get 29 instead of 0?
dump($plans['basic_monthly']->price);
29.0
dump($plans['premium_monthly']->price);
49.0
$stripe_customer = step1_create_customer();
$stripe_subscription = step2_create_subscription($stripe_customer, $plans['basic_monthly']->stripe_price_id);
dump([
    'reason' => 'Nothing was changed (price_id and quantity are the same), so 0 is expected. Why is 29 here?',
    'expected' => 0,
    'actual' => upcoming($stripe_subscription, $plans['basic_monthly']->stripe_price_id)->amount_due/100,
]);
array:3 [▼
  "reason" => "Nothing was changed (price_id and quantity are the same), so 0 is expected. Why is 29 here?"
  "expected" => 0
  "actual" => 29
]
dump([
    'reason' => 'Transition to more expensive plan was made. 49 - 29 = 20 is expected',
    'expected' => 20,
    'actual' => upcoming($stripe_subscription, $plans['premium_monthly']->stripe_price_id)->amount_due/100,
]);
array:3 [▼
  "reason" => "Transition to more expensive plan was made. 49 - 29 = 20 is expected"
  "expected" => 20
  "actual" => 20
]
function step1_create_customer()
{
    $stripe = new \Stripe\StripeClient(env('STRIPE_SECRET_KEY'));
    // Freeze time with a test clock so billing can be simulated.
    $test_clock = $stripe->testHelpers->testClocks->create([
        'frozen_time' => time(),
        'name' => sprintf('Testing Upcoming Invoices'),
    ]);
    $stripe_customer = $stripe->customers->create([
        'test_clock' => $test_clock->id,
        'payment_method' => 'pm_card_visa',
        'invoice_settings' => ['default_payment_method' => 'pm_card_visa'],
        'metadata' => [
            'testing_upcoming_invoices' => 1,
        ],
        'expand' => [
            'test_clock',
            'invoice_settings.default_payment_method',
        ],
    ]);
    return $stripe_customer;
}
function step2_create_subscription($stripe_customer, $stripe_price_id)
{
    $stripe = new \Stripe\StripeClient(env('STRIPE_SECRET_KEY'));
    $stripe_subscription = $stripe->subscriptions->create([
        'customer' => $stripe_customer->id,
        'items' => [
            [
                'price' => $stripe_price_id,
                'quantity' => 1,
            ],
        ],
        'metadata' => [
            'testing_upcoming_invoices' => 1,
        ],
    ]);
    return $stripe_subscription;
}
function upcoming($stripe_subscription, $stripe_price_id)
{
    $stripe = new \Stripe\StripeClient(env('STRIPE_SECRET_KEY'));
    $stripe_invoice = $stripe->invoices->upcoming([
        'subscription' => $stripe_subscription->id,
        'subscription_items' => [
            [
                'id' => $stripe_subscription->items->data[0]->id,
                'price' => $stripe_price_id,
                'quantity' => 1,
            ],
        ],
        'subscription_cancel_at_period_end' => false,
        // Invoice prorations immediately instead of leaving them pending.
        'subscription_proration_behavior' => 'always_invoice',
        //'subscription_proration_date' => $now,
    ]);
    return $stripe_invoice;
}
What your code is doing here is upgrading a Subscription from Price A ($29/month) to Price B ($49/month) immediately after creation. You're also passing subscription_proration_behavior: 'always_invoice'.
When you upgrade or downgrade a subscription, Stripe calculates the proration for you automatically. This is something Stripe documents in detail here and here.
In a nutshell, since you move from $29/month to $49/month immediately after creation, what happens is that Stripe calculates that:
You owe your customer credit for the time they paid on $29/month that they won't use. Since it's immediately after creation, you owe them $29.
The customer owes you for the remaining time on the new Price. Since it's the start of the month, they owe you the full price of $49.
In a default integration, the proration is created as 2 separate InvoiceItems that are pending until the next cycle. In your case you pass proration_behavior: 'always_invoice', so an Invoice is created immediately with only those 2 line items: -$29 + $49 = $20, which is where the amount comes from.
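If what you want to display is just the cost of the change rather than the invoice's full amount_due, one approach (a sketch reusing the upcoming() helper from the question; invoice line items in the Stripe API carry a boolean proration flag) is to sum only the proration lines:

$invoice = upcoming($stripe_subscription, $plans['premium_monthly']->stripe_price_id);

// Sum only the proration line items; credits show up as negative amounts.
$proration_total = 0;
foreach ($invoice->lines->data as $line) {
    if ($line->proration) {
        $proration_total += $line->amount;
    }
}

dump($proration_total / 100); // -29 + 49 = 20 for the upgrade; 0 when nothing changed

When nothing changes there are no proration lines at all, so the preview you get back is presumably just next cycle's renewal invoice, which would explain the 29 in your first dump.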
I want to combine two fields from a logfile and use the result as the timestamp for Logstash.
The logfile is in CSV format and the date format is somewhat confusing. Date and time are formatted like this:
Datum => 17|3|19
Zeit => 19:21:50
I tried the following code.
filter {
  csv {
    separator => ","
    columns => [ "Datum", "Zeit" ]
  }
  mutate {
    merge => { "Datum" => "Zeit" }
  }
  date {
    match => [ "Datum", "d M yy HH:mm:ss" ]
  }
}
The merge part seems to work, with this result:
"Datum" => [
  [0] "17|3|19",
  [1] "23:32:37"
]
but for the conversion of the date I get the following error message:
"_dateparsefailure"
Can someone please help me?
With an event with the following fields:
"Datum" => "17|3|19"
"Zeit" => "19:21:50"
I got a working configuration:
mutate {
  merge => { "Datum" => "Zeit" }
}
mutate {
  join => {"Datum" => ","}
}
date {
  match => [ "Datum", "d|M|yy,HH:mm:ss" ]
}
This gives me the following output: "@timestamp":"2019-03-17T18:21:50.000Z"
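As an aside, the same combined field can be built in one step with a sprintf-style add_field instead of merge + join (a sketch assuming the same Datum and Zeit columns; timestamp_combined is an invented field name):

filter {
  csv {
    separator => ","
    columns => [ "Datum", "Zeit" ]
  }
  # Build "17|3|19,19:21:50" directly from the two fields.
  mutate {
    add_field => { "timestamp_combined" => "%{Datum},%{Zeit}" }
  }
  date {
    match => [ "timestamp_combined", "d|M|yy,HH:mm:ss" ]
  }
}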
I am using logstash 6.2.4 with the following config:
input {
  stdin { }
}
filter {
  date {
    match => [ "message","HH:mm:ss" ]
  }
}
output {
  stdout { }
}
With the following input:
10:15:20
I get this output:
{
  "message" => "10:15:20",
  "@version" => "1",
  "host" => "DESKTOP-65E12L2",
  "@timestamp" => 2019-01-01T09:15:20.000Z
}
I have only time information, but would like to parse it as the current date.
Note that the current date is 1 March 2019, so I guess that 2019-01-01 is some sort of default?
How can I parse the time information and add the current date to it?
I am not really interested in a replace or other blocks, since according to the documentation, parsing the time should default to the current date.
You need to add a new field merging the current date with the field that contains your time information, which in your example is the message field. Your date filter then needs to be matched against this new field. You can do this using the following configuration.
filter {
  mutate {
    add_field => { "current_date" => "%{+YYYY-MM-dd} %{message}" }
  }
  date {
    match => ["current_date", "YYYY-MM-dd HH:mm:ss" ]
  }
}
The result will be something like this:
{
  "current_date" => "2019-03-03 10:15:20",
  "@timestamp" => 2019-03-03T13:15:20.000Z,
  "host" => "elk",
  "message" => "10:15:20",
  "@version" => "1"
}
I have a file containing a series of messages like this:
component+branch.job 2014-09-04_21:24:46 2014-09-04_21:24:49
Each line is a string, some whitespace, the first date and time, more whitespace, and the second date and time. Currently I'm using this filter:
grok {
  match => [ "message", "%{WORD:componentName}\+%{WORD:branchName}\.%{DATA:jobType}\s+20%{DATE:dateStart}_%{TIME:timeStart}\s+20%{DATE:dateStop}_%{TIME:timeStop}" ]
}
mutate {
  add_field => {"tmp_start_timestamp" => "20%{dateStart}_%{timeStart}"}
  add_field => {"tmp_stop_timestamp" => "20%{dateStop}_%{timeStop}"}
}
date {
  match => [ "tmp_start_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
  add_tag => [ "jobStarted" ]
}
date {
  match => [ "tmp_stop_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
  target => "stop_timestamp"
  remove_field => ["tmp_stop_timestamp", "tmp_start_timestamp", "dateStart", "timeStart", "dateStop", "timeStop"]
  add_tag => [ "jobStopped" ]
}
elapsed {
  start_tag => "jobStarted"
  end_tag => "jobStopped"
  unique_id_field => "message"
}
As a result I receive "@timestamp" and "stop_timestamp" fields with date-time data and the two tags, but no elapsed time calculation. What am I missing?
UPDATE
I tried splitting the event into two separate events (as @Rumbles suggested), but somehow Logstash creates two identical events:
input {
  stdin { type => "time" }
}
filter {
  grok {
    match => [ "message", "%{WORD:componentName}\+%{WORD:branchName}\.%{DATA:jobType}\s+20%{DATE:dateStart}_%{TIME:timeStart}\s+20%{DATE:dateStop}_%{TIME:timeStop}" ]
  }
  mutate {
    add_field => {"tmp_start_timestamp" => "20%{dateStart}_%{timeStart}"}
    add_field => {"tmp_stop_timestamp" => "20%{dateStop}_%{timeStop}"}
    update => [ "type", "start" ]
  }
  clone {
    clones => ["stop"]
  }
  if [type] == "start" {
    date {
      match => [ "tmp_start_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
      target => ["start_timestamp"]
      add_tag => [ "jobStarted" ]
    }
  }
  if [type] == "stop" {
    date {
      match => [ "tmp_stop_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
      target => "stop_timestamp"
      remove_field => ["tmp_stop_timestamp", "tmp_start_timestamp", "dateStart", "timeStart", "dateStop", "timeStop"]
      add_tag => [ "jobStopped" ]
    }
  }
  elapsed {
    start_tag => "jobStarted"
    end_tag => "jobStopped"
    unique_id_field => "message"
    timeout => 15
  }
}
output {
  stdout { codec => rubydebug }
}
I've never used this filter; however, I have just had a quick read of the documentation, and I think I understand the issue you are having.
From your description I believe you are trying to run the elapsed filter on one event. From the documentation it would appear that the filter expects 2 events, one with the starting time and a second with the ending time, with a common ID helping the filter to identify when the 2 events match up:
The events managed by this filter must have some particular properties. The event describing the start of the task (the “start event”) must contain a tag equal to ‘start_tag’. On the other side, the event describing the end of the task (the “end event”) must contain a tag equal to ‘end_tag’. Both these two kinds of event need to own an ID field which identify uniquely that particular task. The name of this field is stored in ‘unique_id_field’.
Each message is considered an event, so you would need to split your message into two events and have each pair of events share a unique identifier to help the filter link them back together. It's not exactly a tidy solution (splitting your event into two events and then reconnecting them later); there may be a better solution to this that I am not aware of.
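To illustrate what the filter expects, here is a minimal sketch with two separate events sharing an ID; the log format, field names, and tags below are invented for the example:

filter {
  # Hypothetical input, one event per line:
  #   job42 started
  #   job42 finished
  grok {
    match => [ "message", "%{WORD:job_id} %{WORD:status}" ]
  }
  if [status] == "started" {
    mutate { add_tag => [ "jobStarted" ] }
  }
  if [status] == "finished" {
    mutate { add_tag => [ "jobStopped" ] }
  }
  # Pairs the start/end events on job_id and adds an elapsed_time field
  # to the end event.
  elapsed {
    start_tag => "jobStarted"
    end_tag => "jobStopped"
    unique_id_field => "job_id"
  }
}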
Our object, parsed and all:
{
  "message" => "[2014-12-15 14:28:03,786] WARN org.apache.sshd.serve
  "@version" => "1",
  "@timestamp" => "2014-01-15T14:28:03.786Z",
  "type" => "errorlog",
  "host" => "localhost",
  "path" => "/var/lib/gerrit/log/error_log",
  "tags" => [
    [0] "multiline"
  ],
  "gerrit_timestamp" => "2014-12-15 14:28:03,786",
  "loglevel" => "WARN",
  "object" => "org.apache.sshd.server.session.ServerSession"
}
As you can see, we're extracting the date into gerrit_timestamp just fine. We then have a date filter to read gerrit_timestamp and stuff it into @timestamp:
date {
  type => "errorlog"
  match => [ "gerrit_timestamp", "YYYY-MM-DD HH:mm:ss,SSS" ]
  target => "@timestamp"
}
So why is @timestamp off by 11 months?
From experience, the date filter needs to be called with the correct layout for the date, otherwise nothing will come out. I'm not sure why your date is 11 months out in your example, but I would recommend you try the following:
date {
  type => "errorlog"
  match => [ "gerrit_timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
}
Target in this example is redundant, as the default behaviour is to set the value into @timestamp. As per the date documentation, y is year, while Y is year of era, which is not quite the same, and D is day of year, i.e. between 1 and 365, not day of month, which is d. That last one explains the 11-month offset: with DD, the "15" in "2014-12-15" is read as the 15th day of the year, landing the timestamp on January 15.