HornetQ: How to read DLQ details - JBoss 6.x

I would like to read the number of messages sent to the DLQ in HornetQ.
Using CLI commands, jms.queue.DLQ does not appear under:
/subsystem=messaging/hornetq-server=default/jms-queue=
even though it is configured as the DLQ for the testQueue:
{
    "outcome" => "success",
    "result" => {
        "consumer-count" => 0,
        "dead-letter-address" => "jms.queue.DLQ",
        "delivering-count" => 0,
        "durable" => true,
        "entries" => [
            "queue/test",
            "java:jboss/exported/jms/queue/test"
        ],
        "expiry-address" => "jms.queue.ExpiryQueue",
        "message-count" => 0L,
        "messages-added" => 0L,
        "paused" => false,
        "queue-address" => "jms.queue.testQueue",
        "scheduled-count" => 0L,
        "selector" => undefined,
        "temporary" => false
    }
}
Environment: JBoss 6.0
Thanks.

The DLQ does not appear because it is just an address, not a physical destination. The same applies to the ExpiryQueue. You have to create these two queues first. The corresponding CLI commands are below:
/subsystem=messaging/hornetq-server=default/jms-queue=deadLetterQueue:add(entries=["queue/deadLetterQueue"],durable=false)
/subsystem=messaging/hornetq-server=default/jms-queue=expiryQueue:add(entries=["queue/expiryQueue"],durable=false)
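Once a physical queue exists, its runtime attributes (the same ones shown for the testQueue above, such as message-count and messages-added) can be read through the CLI. Note that a JMS queue named X is bound to the address jms.queue.X, so to match the dead-letter-address jms.queue.DLQ from the output above the queue would have to be named DLQ. A minimal sketch, assuming that queue name and the default server:
/subsystem=messaging/hornetq-server=default/jms-queue=DLQ:read-resource(include-runtime=true)
/subsystem=messaging/hornetq-server=default/jms-queue=DLQ:read-attribute(name=message-count)
The first command dumps all runtime attributes of the queue; the second reads only the message counter.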


Using Stripe Upcoming Invoice to preview changes in subscription

I want to use Stripe Upcoming Invoices to display how much a user will be billed when they make changes to their subscription. But it seems that I am missing something...
Why do I get 29 instead of 0?
dump($plans['basic_monthly']->price);
29.0
dump($plans['premium_monthly']->price);
49.0
$stripe_customer = step1_create_customer();
$stripe_subscription = step2_create_subscription($stripe_customer, $plans['basic_monthly']->stripe_price_id);
dump([
'reason' => 'Nothing was changed (price_id and quantity are the same), so 0 is expected. Why is 29 here?',
'expected' => 0,
'actual' => upcoming($stripe_subscription, $plans['basic_monthly']->stripe_price_id)->amount_due/100,
]);
array:3 [
"reason" => "Nohting was changed (price_id and quantity are the same), so 0 is expected. Why 29 is here?"
"expected" => 0
"actual" => 29
]
dump([
'reason' => 'Transition to more expensive plan was made. 49 - 29 = 20 is expected',
'expected' => 20,
'actual' => upcoming($stripe_subscription, $plans['premium_monthly']->stripe_price_id)->amount_due/100,
]);
array:3 [
"reason" => "Transition to more expensive plan was made. 49 - 29 = 20 is expected"
"expected" => 20
"actual" => 20
]
function step1_create_customer()
{
$stripe = new \Stripe\StripeClient(env('STRIPE_SECRET_KEY'));
$test_clock = $stripe->testHelpers->testClocks->create([
'frozen_time' => time(),
'name' => sprintf('Testing Upcoming Invoices'),
]);
$stripe_customer = $stripe->customers->create([
'test_clock' => $test_clock->id,
'payment_method' => 'pm_card_visa',
'invoice_settings' => ['default_payment_method' => 'pm_card_visa'],
'metadata' => [
'testing_upcoming_invoices' => 1,
],
'expand' => [
'test_clock',
'invoice_settings.default_payment_method',
],
]);
return $stripe_customer;
}
function step2_create_subscription($stripe_customer, $stripe_price_id)
{
$stripe = new \Stripe\StripeClient(env('STRIPE_SECRET_KEY'));
$stripe_subscription = $stripe->subscriptions->create([
'customer' => $stripe_customer->id,
'items' => [
[
'price' => $stripe_price_id,
'quantity' => 1,
],
],
'metadata' => [
'testing_upcoming_invoices' => 1,
],
]);
return $stripe_subscription;
}
function upcoming($stripe_subscription, $stripe_price_id)
{
$stripe = new \Stripe\StripeClient(env('STRIPE_SECRET_KEY'));
$stripe_invoice = $stripe->invoices->upcoming([
'subscription' => $stripe_subscription->id,
'subscription_items' => [
[
'id' => $stripe_subscription->items->data[0]->id,
'price' => $stripe_price_id,
'quantity' => 1,
],
],
'subscription_cancel_at_period_end' => false,
'subscription_proration_behavior' => 'always_invoice',
//'subscription_proration_date' => $now,
]);
return $stripe_invoice;
}
What your code is doing here is upgrading a Subscription from Price A ($29/month) to Price B ($49/month) immediately after creation. You're also passing subscription_proration_behavior: 'always_invoice'.
When you upgrade or downgrade a subscription, Stripe calculates the proration for you automatically. This is something Stripe documents in detail here and here.
In a nutshell, since you move from $29/month to $49/month immediately after creation, what happens is that Stripe calculates that:
You owe your customer credit for the time they paid on $29/month that they won't use. Since it's immediately after creation, you owe them $29.
The customer owes you for the remaining time for the new Price. Since it's the start of the month they owe you the full price of $49.
In a default integration, the proration is created as 2 separate InvoiceItems that stay pending until the next cycle. In your case you pass proration_behavior: 'always_invoice', so an Invoice is created immediately with only those 2 line items: -$29 + $49 = $20, which is where the amount comes from.
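You can see those two proration lines on the preview invoice itself. A minimal sketch reusing the upcoming() helper from the question (description, amount and proration are standard fields of a Stripe invoice line item):
$invoice = upcoming($stripe_subscription, $plans['premium_monthly']->stripe_price_id);
foreach ($invoice->lines->data as $line) {
    dump([
        'description' => $line->description,  // e.g. "Unused time on ..." / "Remaining time on ..."
        'amount'      => $line->amount / 100, // -29 (credit) and 49 (charge)
        'proration'   => $line->proration,    // true for both proration lines
    ]);
}
dump(['amount_due' => $invoice->amount_due / 100]); // 20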

Puppet with multiple NFS mounts on the same server

I have a few NFS mount points on the same server, but in different directories.
For example:
x.x.x.x:/stats /data/stats
x.x.x.x:/scratch /data/scratch
x.x.x.x:/ops /data/ops
But when I run Puppet it adds the following to my fstab (wrong mount assignment):
x.x.x.x:/scratch /data/stats nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/ops nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/scratch nfs defaults,nodev,nosharecache 0 0
It is using the last mount's device for all of the mounted partitions, so I did a bit of research and found the following bug:
https://tickets.puppetlabs.com/browse/DOCUMENT-242
I then added the nosharecache option, but still no luck.
This is my Puppet code:
class profile::mounts::stats {
# Hiera lookups
$location = hiera('profile::mounts::stats::location')
$location2 = hiera('profile::mounts::stats::location2')
tag 'new_mount'
file { '/data/stats':
ensure => directory,
owner => 'root',
group => 'root',
mode => '0755',
require => File['/data'],
tag => 'new_mount',
}
mount { '/data/stats':
ensure => mounted,
fstype => 'nfs',
device => $location,
options => 'defaults,nodev,nosharecache',
require => File['/data/stats'],
tag => 'new_mount'
}
file { '/data/ops':
ensure => directory,
owner => 'root',
group => 'mail',
mode => '0775',
require => File['/data'],
tag => 'new_mount',
}
mount { '/data/ops':
ensure => mounted,
fstype => 'nfs',
device => $location2,
options => 'defaults,nodev,nosharecache',
require => File['/data/ops'],
tag => 'new_mount',
}
file { '/data/scratch':
ensure => directory,
owner => 'root',
group => 'mail',
mode => '0775',
require => File['/data'],
tag => 'new_mount',
}
mount { '/data/scratch':
ensure => mounted,
fstype => 'nfs',
device => $location2,
options => 'defaults,nodev,nosharecache',
require => File['/data/scratch'],
tag => 'new_mount',
}
}
My Hiera lookup is as follows:
profile::mounts::stats::location: x.x.x.x:/stats
profile::mounts::stats::location2: x.x.x.x:/scratch
Why is it causing this unexpected behavior?
I compiled that code and I see a few issues:
You did not include the File['/data'] resource, but I assume you have that somewhere else?
After compiling I see this in the catalog:
$ cat myclass.json | jq '.resources | .[] | select(.type == "Mount") | [.title, .parameters]'
[
"/data/stats",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/stats",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/stats]",
"tag": "new_mount"
}
]
[
"/data/ops",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/scratch",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/ops]",
"tag": "new_mount"
}
]
[
"/data/scratch",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/scratch",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/scratch]",
"tag": "new_mount"
}
]
So you are mounting both /data/ops and /data/scratch on $location2. Is that an oversight? It does not match what you said you were trying to achieve.
Otherwise I can't reproduce what you said you are observing.
Is anything other than Puppet editing the fstab file? Did you try this code on a fresh box?
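If the duplicated $location2 came from copy-pasting the resource blocks, one way to rule that class of mistake out is to drive all three mounts from a single hash so each path/device pair is declared exactly once. A rough sketch using Puppet 4+ iteration (the hash and its values are illustrative, not from the original code; File['/data'] is assumed to be declared elsewhere, as in the question):
class profile::mounts::stats {
  # one entry per mount point: path => NFS device
  $mounts = {
    '/data/stats'   => 'x.x.x.x:/stats',
    '/data/scratch' => 'x.x.x.x:/scratch',
    '/data/ops'     => 'x.x.x.x:/ops',
  }

  $mounts.each |String $path, String $device| {
    file { $path:
      ensure  => directory,
      owner   => 'root',
      group   => 'root',
      mode    => '0755',
      require => File['/data'],
    }

    mount { $path:
      ensure  => mounted,
      fstype  => 'nfs',
      device  => $device,
      options => 'defaults,nodev,nosharecache',
      require => File[$path],
    }
  }
}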

Puppet Exec always runs even if the subscribed create_resources resource doesn't run

I have attached a report screenshot from Foreman, and pasted below is the class that I am having an issue with.
If it's hard to go through the entire code, here is the Exec section that is not working as expected:
exec { $service:
path => ["/usr/bin/","/usr/sbin/","/bin"],
subscribe => Domain_ip_map[$domain_ip_map_titles],
command => "sudo service nagios restart",
}
The above Exec[$service] is subscribed to Domain_ip_map[...], which in turn is notified by Exec['purge-config-files'], which requires File['deployconfig.cfg'].
Since there is no change in the deployconfig.cfg file, File['deployconfig.cfg'] doesn't send a notify, so Exec['purge-config-files'] and the custom Domain_ip_map resources don't run. Up to this point everything works as expected. But the last part: Exec[$service] is subscribed to Domain_ip_map.
When Domain_ip_map does not run, how can Exec[$service] execute successfully?
class testclass ( $data = {
item1 => {
domain => 'testdomain.com',
ipaddress => '1.1.1.1',
},
},
$baseconfigdir = '/usr/local/servers',
$config_file_host = '/usr/local/test.cfg',
$config_file_service = '/usr/local/test_service.cfg' ) {
validate_hash($data)
$domain_ip_map_titles = keys($data)
file { "${baseconfigdir}":
ensure => directory,
}
exec { 'purge-config-files':
command => "/bin/rm -f ${baseconfigdir}/*",
notify => Domain_ip_map[$domain_ip_map_titles],
require => File['deployconfig.cfg'],
refreshonly => true,
}
file { 'deployconfig.cfg':
ensure => file,
path => '/home/deployconfig.cfg',
mode => '0644',
owner => 'root',
group => 'root',
content => "test",
notify => Exec['purge-config-files'],
}
# Problem here: it's subscribed to Domain_ip_map, but even if Domain_ip_map doesn't run, Exec[$service] always executes. How???
exec { $service:
path => ["/usr/bin/","/usr/sbin/","/bin"],
subscribe => Domain_ip_map[$domain_ip_map_titles],
command => "sudo service nagios restart",
}
create_resources(domain_ip_map, $data)
}
define domain_ip_map($domain, $ipaddress) {
nagios_host { $domain:
....
}
nagios_service { "check_ping_${domain}":
....
}
}
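For what it's worth, an exec that has a subscribe but no refreshonly => true is applied on every agent run; the subscription only adds an extra refresh on top of the normal run, which would explain Exec[$service] firing even when Domain_ip_map does nothing. Restricting it to refresh events would look roughly like this (the same resource as above, with only the refreshonly parameter added):
exec { $service:
  path        => ['/usr/bin/', '/usr/sbin/', '/bin'],
  command     => 'sudo service nagios restart',
  refreshonly => true, # run only when a subscribed resource actually changes
  subscribe   => Domain_ip_map[$domain_ip_map_titles],
}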

Export ntopng logs to Logstash

I know ntopng can send logs directly to Elasticsearch, but my boss wants to use Logstash as a layer to transfer the logs to Elasticsearch.
I've tried many times but failed.
The ntopng log looks like:
{"index": {"_type": "ntopng", "_index": "ntopng-2016.08.23"}}
{ "#timestamp": "2016-08-23T01:49:41.0Z", "type": "ntopng", "IN_SRC_MAC": "04:2A:E2:0D:62:FB", "OUT_DST_MAC": "00:16:3E:8D:B7:E4", "IPV4_SRC_ADDR": "14.152.84.14", "IPV4_DST_ADDR": "xxx.xxx.xxx", "L4_SRC_PORT": 34599, "L4_DST_PORT": 53, "PROTOCOL": 17, "L7_PROTO": 5, "L7_PROTO_NAME": "DNS", "IN_PKTS": 15, "IN_BYTES": 1185, "OUT_PKTS": 15, "OUT_BYTES": 22710, "FIRST_SWITCHED": 1471916981, "LAST_SWITCHED": 1471916981, "SRC_IP_COUNTRY": "CN", "SRC_IP_LOCATION": [ 113.250000, 23.116699 ], "DST_IP_COUNTRY": "VN", "DST_IP_LOCATION": [ 105.849998, 21.033300 ], "NTOPNG_INSTANCE_NAME": "ubuntu", "INTERFACE": "ens192", "DNS_QUERY": "cpsc.gov", "PASS_VERDICT": true }
Logstash config:
input {
tcp {
port => 5000
codec => json
}
}
filter{
json{
source => "message"
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
}
stdout{ codec => rubydebug }
}
Thanks
Since the ntopng logs are already in the bulk format expected by Elasticsearch, you don't need the elasticsearch output; you can use the http output directly, like this. There is no need to have Logstash parse the JSON, simply forward the raw bulk commands to ES.
There's one catch, though: we need to add a newline character after the second line, otherwise ES will reject the bulk call. We can achieve this with a mutate/update filter that adds a verbatim newline character after the message. Try it out, this will work.
input {
tcp {
port => 5000
codec => multiline {
pattern => "_index"
what => "next"
}
}
}
filter{
mutate {
update => {"message" => "%{message}
"}
}
}
output {
http {
http_method => "post"
url => "http://localhost:9200/_bulk"
format => "message"
message => "%{message}"
}
}
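Once ntopng is pointed at the Logstash TCP input on port 5000, you can verify that documents are reaching Elasticsearch with a couple of plain queries (the index name is taken from the bulk header in the question; adjust it to the current date):
curl 'http://localhost:9200/ntopng-2016.08.23/_count?pretty'
curl 'http://localhost:9200/ntopng-2016.08.23/_search?size=1&pretty'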

Logstash: calculating elapsed time not working

I have a file containing a series of messages like this:
component+branch.job 2014-09-04_21:24:46 2014-09-04_21:24:49
It is a string, some whitespace, the first date and time, some whitespace, and the second date and time. Currently I'm using this filter:
grok {
match => [ "message", "%{WORD:componentName}\+%{WORD:branchName}\.%{DATA:jobType}\s+20%{DATE:dateStart}_%{TIME:timeStart}\s+20%{DATE:dateStop}_%{TIME:timeStop}" ]
}
mutate {
add_field => {"tmp_start_timestamp" => "20%{dateStart}_%{timeStart}"}
add_field => {"tmp_stop_timestamp" => "20%{dateStop}_%{timeStop}"}
}
date {
match => [ "tmp_start_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
add_tag => [ "jobStarted" ]
}
date {
match => [ "tmp_stop_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
target => "stop_timestamp"
remove_field => ["tmp_stop_timestamp", "tmp_start_timestamp", "dateStart", "timeStart", "dateStop", "timeStop"]
add_tag => [ "jobStopped" ]
}
elapsed {
start_tag => "jobStarted"
end_tag => "jobStopped"
unique_id_field => "message"
}
As a result I receive "@timestamp" and "stop_timestamp" fields with date-time data and the two tags, but no elapsed time calculation. What am I missing?
UPDATE
I tried splitting the event into two separate events (as @Rumbles suggested), but somehow Logstash creates two identical events:
input {
stdin { type => "time" }
}
filter {
grok {
match => [ "message", "%{WORD:componentName}\+%{WORD:branchName}\.%{DATA:jobType}\s+20%{DATE:dateStart}_%{TIME:timeStart}\s+20%{DATE:dateStop}_%{TIME:timeStop}" ]
}
mutate {
add_field => {"tmp_start_timestamp" => "20%{dateStart}_%{timeStart}"}
add_field => {"tmp_stop_timestamp" => "20%{dateStop}_%{timeStop}"}
update => [ "type", "start" ]
}
clone {
clones => ["stop"]
}
if [type] == "start" {
date {
match => [ "tmp_start_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
target => ["start_timestamp"]
add_tag => [ "jobStarted" ]
}
}
if [type] == "stop" {
date {
match => [ "tmp_stop_timestamp", "YYYY-MM-dd_HH:mm:ss" ]
target => "stop_timestamp"
remove_field => ["tmp_stop_timestamp", "tmp_start_timestamp", "dateStart", "timeStart", "dateStop", "timeStop"]
add_tag => [ "jobStopped" ]
}
}
elapsed {
start_tag => "jobStarted"
end_tag => "jobStopped"
unique_id_field => "message"
timeout => 15
}
}
output {
stdout { codec => rubydebug }
}
I've never used this filter; however, I have just had a quick read of the documentation, and I think I understand the issue you are having.
From your description I believe you are trying to run the elapsed filter on one event. From the documentation it would appear that the filter expects two events: one with the starting time and a second with the ending time, with a common ID helping the filter identify when the two events match up:
The events managed by this filter must have some particular properties. The event describing the start of the task (the “start event”) must contain a tag equal to ‘start_tag’. On the other side, the event describing the end of the task (the “end event”) must contain a tag equal to ‘end_tag’. Both these two kinds of event need to own an ID field which identify uniquely that particular task. The name of this field is stored in ‘unique_id_field’.
Each message is considered an event, so you would need to split your messages into two events and have each pair of events share a unique identifier to help the filter link them back together. It's not exactly a tidy solution (splitting your event into two events, then reconnecting them later); there may be a better solution to this that I am not aware of.
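A rough sketch of that two-event approach, with invented log lines that carry an explicit task id (the tag names and elapsed options mirror the ones already used in the question; the input format is made up for illustration):
# Example input, one event per line:
#   job-42 started 2014-09-04_21:24:46
#   job-42 finished 2014-09-04_21:24:49
filter {
  grok {
    match => [ "message", "%{NOTSPACE:task_id} %{WORD:status} %{NOTSPACE:when}" ]
  }
  date {
    match => [ "when", "YYYY-MM-dd_HH:mm:ss" ]
  }
  if [status] == "started" {
    mutate { add_tag => [ "jobStarted" ] }
  } else if [status] == "finished" {
    mutate { add_tag => [ "jobStopped" ] }
  }
  elapsed {
    start_tag       => "jobStarted"
    end_tag         => "jobStopped"
    unique_id_field => "task_id"
  }
}
With two separate events sharing the same task_id, the filter can pair them up and emit the elapsed time on the end event.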
