Find keys containing a specific key-value pair in a Puppet hash

I am still a beginner in Puppet, so please bear with me. Let's assume I have this hash created in Puppet through some module:
account = {
  user#desktop1 => {
    owner => john,
    type => ssh-rsa,
    public => SomePublicKey
  },
  user#desktop2 => {
    owner => mary,
    type => ssh-rsa,
    public => SomePublicKey
  },
  user#desktop3 => {
    owner => john,
    type => ssh-rsa,
    public => SomePublicKey
  },
  user#desktop4 => {
    owner => matt,
    type => ssh-rsa,
    public => SomePublicKey
  }
}
How can I find the keys for a specific key-value pair inside the hash? In this case, for example, I want to find all the keys owned by john. The expected result would be something like:
[user#desktop1, user#desktop3]
Thanks in advance

The question asks about how to do this in Puppet, although, confusingly, the Hash is a Ruby Hash and the question also has a Ruby tag.
Anyway, this is how you do it in Puppet:
$account = {
  'user#desktop1' => {
    'owner'  => 'john',
    'type'   => 'ssh-rsa',
    'public' => 'SomePublicKey',
  },
  'user#desktop2' => {
    'owner'  => 'mary',
    'type'   => 'ssh-rsa',
    'public' => 'SomePublicKey',
  },
  'user#desktop3' => {
    'owner'  => 'john',
    'type'   => 'ssh-rsa',
    'public' => 'SomePublicKey',
  },
  'user#desktop4' => {
    'owner'  => 'matt',
    'type'   => 'ssh-rsa',
    'public' => 'SomePublicKey',
  }
}
$users = $account.filter |$k, $v| { $v['owner'] == 'john' }.keys
notice($users)
Running puppet apply on that leads to:
Notice: Scope(Class[main]): [user#desktop1, user#desktop3]
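The same selection can also be written by filtering the keys directly; an equivalent sketch:
$users = $account.keys.filter |$k| { $account[$k]['owner'] == 'john' }
notice($users)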

In Ruby you can use Hash#select (https://ruby-doc.org/core-2.5.1/Hash.html#method-i-select):
account.select { |key, value| value['owner'] == 'john' }.keys
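Applied to the example hash (with String keys and values), that returns:
#=> ["user#desktop1", "user#desktop3"]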

Another option using Enumerable#each_with_object:
account.each_with_object([]) { |(k, v), a| a << k if v['owner'] == 'john' }
#=> ["user#desktop1", "user#desktop3"]
This supposes the keys and values are Strings.
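On Ruby 2.7+, Enumerable#filter_map does the selection and extraction in a single pass; an equivalent sketch:
account.filter_map { |k, v| k if v['owner'] == 'john' }
#=> ["user#desktop1", "user#desktop3"]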

Related

Logstash: add an array field to a document

I would like to import data from my PostgreSQL database into my Elasticsearch database.
I have an appointments index, and in this index I would like to add a persons field (a list of the people in an appointment).
Here are my Logstash configuration file and a sample document.
Thank you.
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/app"
    jdbc_user => "postgres"
    jdbc_password => "admin"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_driver_library => "postgresql-42.2.21.jar"
    statement => "select id::text,trim(firstname),trim(lastname) from persons"
  }
}
filter {
  ruby {
    code => "
      event['persons'].each{|subdoc| subdoc['persons'] = subdoc['persons']['firstname']}
    "
  }
}
output {
  #stdout { codec => json_lines }
  elasticsearch {
    hosts => "127.0.0.1"
    index => "appointments"
    doc_as_upsert => true
    document_id => "%{id}"
  }
}
{
  "_index" : "appointments",
  "_type" : "_doc",
  "_id" : "41",
  "_score" : 1.0,
  "_source" : {
    ... other fields
    [add array fields]
    ex:
    persons: [{
      "firstname": "firstname1"
    }, {
      "firstname": "firstname2"
    }]
  }
}
UPDATE 2:
I made a mistake; I was modifying the wrong document. I changed the document_id and added appointment_id to my query.
It still does not work: it replaces my document with what is in the query.
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/app"
    jdbc_user => "postgres"
    jdbc_password => "admin"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_driver_library => "postgresql-42.2.21.jar"
    statement => "select id::text, appointment_id::text,trim(firstname),trim(lastname) from appointments_persons order by created_at"
  }
}
filter {
  aggregate {
    task_id => "%{appointment_id}"
    code => "
      map['persons'] ||= []
    "
    push_map_as_event_on_timeout => true
    timeout_task_id_field => "appointment_id"
    timeout => 10
  }
}
output {
  #stdout { codec => json_lines }
  elasticsearch {
    hosts => "127.0.0.1"
    index => "appointments"
    action => update
    doc_as_upsert => true
    document_id => "%{appointment_id}"
  }
}
Unless you are running a very old version of logstash (prior to 5.0), you cannot reference or modify the event by treating it as a hash.
The jdbc input creates one event for each row in the result set. If you want to combine all the events that have the same [id] you could use an aggregate filter. Note the warning about setting pipeline.workers to 1 so that all events go through the same instance of the filter. In your case I do not think you need to preserve event order, so pipeline.ordered can be ignored.
You will need to do something similar to example 3 in the documentation.
aggregate {
  task_id => "%{id}"
  code => '
    map["persons"] ||= []
    map["persons"] << { "firstname" => event.get("firstname"), "lastname" => event.get("lastname") }
  '
  push_map_as_event_on_timeout => true
  timeout_task_id_field => "id"
  timeout => 10
}
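As noted above, this only works reliably if all events go through a single filter worker; one way to set that (a sketch assuming the default pipelines.yml layout) is:
# pipelines.yml -- or pass --pipeline.workers 1 (-w 1) on the command line
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
  pipeline.workers: 1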
If you are using document_id => "%{appointment_id}" the event read from the database will be written to elasticsearch with that document id. Then when the aggregate timeout fires, a second document will overwrite it. You might want to add event.cancel to the aggregate code option so that the events from the db do not cloud things.
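For example, the same aggregate filter with the database events cancelled (a sketch, untested):
aggregate {
  task_id => "%{id}"
  code => '
    map["persons"] ||= []
    map["persons"] << { "firstname" => event.get("firstname"), "lastname" => event.get("lastname") }
    event.cancel
  '
  push_map_as_event_on_timeout => true
  timeout_task_id_field => "id"
  timeout => 10
}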

Stripe: how to retrieve a Checkout Session's line item quantity?

Creating the payment Session:
$session = \Stripe\Checkout\Session::create([
    'payment_method_types' => ['card'], //, 'fpx','alipay'
    'line_items' => [[
        'price_data' => [
            'product_data' => [
                'name' => "Topup USDT Wallet",
                'images' => ["https://abc-uaha.co/uploads/site_logo/site_logo_20210321130054.png"],
                'metadata' => [
                    'pro_id' => "USD".$_GET['price']/100
                ]
            ],
            'unit_amount' => $_GET['price'],
            'currency' => 'usd',
        ],
        'quantity' => 1,
        'description' => "Spartan Capital",
    ]],
    'mode' => 'payment',
    'success_url' => STRIPE_SUCCESS_URL.'?session_id={CHECKOUT_SESSION_ID}',
    'cancel_url' => STRIPE_CANCEL_URL,
]);
Refer to these docs: https://stripe.com/docs/api/checkout/sessions/line_items
I tried to retrieve the quantity from the session:
try {
    $checkout_session = \Stripe\Checkout\Session::retrieve([
        'id' => $session_id,
        'expand' => ['line_items'],
    ]);
} catch (Exception $e) {
    $api_error = $e->getMessage();
}
$line_items = $session->line_items[0].quantity;
echo $line_items; // it shows nothing; how do I make it output "1"?
line_items are no longer included by default when retrieving Checkout Sessions. To get them in your retrieve call, you need to expand the line_items property.
You also have two errors in the last line: the first is that you're missing a layer and using dot notation instead of PHP arrow syntax; the second is using $session instead of $checkout_session. So it should be:
$quantity = $checkout_session->line_items->data[0]->quantity;
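Putting both fixes into the retrieve snippet from the question (a sketch; error handling kept as in the original):
try {
    $checkout_session = \Stripe\Checkout\Session::retrieve([
        'id' => $session_id,
        'expand' => ['line_items'],
    ]);
    // line_items is a list object; the items themselves live under ->data
    echo $checkout_session->line_items->data[0]->quantity; // prints 1
} catch (Exception $e) {
    $api_error = $e->getMessage();
}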

Magento 2 multiselect custom product attribute options not showing

In my custom module I used InstallData.php to create a custom multiselect attribute, with the option values set from my source class (extending Magento\Eav\Model\Entity\Attribute\Source\AbstractSource). This works fine after installation: I can see the options while editing a product.
But the options are not visible while editing the attribute, so I am not able to add/remove options there.
Please advise.
$eavSetup->addAttribute(
    \Magento\Catalog\Model\Product::ENTITY,
    'my_option',
    [
        'group' => 'General',
        'label' => 'My Label',
        'type' => 'text',
        'input' => 'multiselect',
        'user_defined' => true,
        'global' => \Magento\Eav\Model\Entity\Attribute\ScopedAttributeInterface::SCOPE_STORE,
        'source' => 'Vendor\Module\Model\Attribute\Source\Options',
        'required' => false,
        'filterable' => true,
        'filterable_in_search' => true,
        'is_searchable_in_grid' => false,
        'is_used_in_grid' => false,
        'is_visible_in_grid' => false,
        'is_filterable_in_grid' => false,
        'sort_order' => 200,
        'used_in_product_listing' => true,
        'backend' => 'Magento\Eav\Model\Entity\Attribute\Backend\ArrayBackend',
        'visible' => true,
        'visible_on_front' => true,
        'searchable' => false,
        'comparable' => false,
    ]
);
1. Create an InstallData.php file in the Vendor\Extension\Setup folder.
<?php
namespace Vendor\Extension\Setup;

use Magento\Eav\Setup\EavSetupFactory;
use Magento\Framework\Setup\InstallDataInterface;
use Magento\Framework\Setup\ModuleContextInterface;
use Magento\Framework\Setup\ModuleDataSetupInterface;

class InstallData implements InstallDataInterface
{
    private $eavSetupFactory;

    public function __construct(EavSetupFactory $eavSetupFactory)
    {
        $this->eavSetupFactory = $eavSetupFactory;
    }

    public function install(ModuleDataSetupInterface $setup, ModuleContextInterface $context)
    {
        $setup->startSetup();
        $eavSetup = $this->eavSetupFactory->create(['setup' => $setup]);
        $eavSetup->addAttribute(
            \Magento\Catalog\Model\Product::ENTITY,
            'eway_option',
            [
                'group' => 'Group Name',
                'label' => 'Multiselect Attribute',
                'type' => 'text',
                'input' => 'multiselect',
                'source' => 'Vendor\Extension\Model\Config\Product\Extensionoption',
                'required' => false,
                'sort_order' => 30,
                'global' => \Magento\Catalog\Model\ResourceModel\Eav\Attribute::SCOPE_STORE,
                'used_in_product_listing' => true,
                'backend' => 'Magento\Eav\Model\Entity\Attribute\Backend\ArrayBackend',
                'visible_on_front' => false
            ]
        );
        $setup->endSetup();
    }
}
2. Create an Extensionoption.php file in the Vendor\Extension\Model\Config\Product folder.
<?php
namespace Vendor\Extension\Model\Config\Product;

use Magento\Eav\Model\Entity\Attribute\Source\AbstractSource;

class Extensionoption extends AbstractSource
{
    protected $optionFactory;

    public function getAllOptions()
    {
        $this->_options = [];
        $this->_options[] = ['label' => 'Label 1', 'value' => 'value 1'];
        $this->_options[] = ['label' => 'Label 2', 'value' => 'value 2'];
        return $this->_options;
    }
}

Kibana Index Pattern showing wrong results

I am using the ELK stack, in which I have used the jdbc input in Logstash.
I have created 2 indexes:
users
employees
Both indexes have one column in common, objid.
Logstash config file
input {
  jdbc {
    jdbc_driver_library => "/opt/application/cmt/ELK/logstash-5.3.0/ojdbc14.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:#xx.xxx.xx.xx:xxxx:abc"
    jdbc_user => "xxxx"
    jdbc_password => "xxxxx"
    schedule => "*/2 * * * *"
    statement => "select * from table_employee"
  }
}
output {
  elasticsearch {
    index => "employees"
    document_type => "employee"
    document_id => "%{objid}"
    hosts => "xx.xxx.xxx.xx:9200"
  }
}
input {
  jdbc {
    jdbc_driver_library => "/opt/application/cmt/ELK/logstash-5.3.0/ojdbc14.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:#xx.xxx.xx.xx:xxxx:abc"
    jdbc_user => "xx"
    jdbc_password => "xxxxxxx"
    schedule => "*/2 * * * *"
    statement => "select A.OBJID,A.LOGIN_NAME,A.STATUS,A.USER_ACCESS2PRIVCLASS,A.USER_DEFAULT2WIPBIN,A.SUPVR_DEFAULT2MONITOR,A.USER2RC_CONFIG,A.OFFLINE2PRIVCLASS,A.WIRELESS_EMAIL from table_user a where A.STATUS=1"
  }
}
output {
  elasticsearch {
    index => "users"
    document_type => "user"
    document_id => "%{objid}%{login_name}"
    hosts => "xx.xxx.xxx.xx:9200"
  }
}
The 1st jdbc input, 'employees', contains 26935 records.
The 2nd jdbc input, 'users', contains 10619 records.
Common records: 9635 (objid matches).
1st problem: when I create an index pattern in Kibana as 'users', it shows a count of 37554. Why? It should show only 10619.
2nd problem: when I create an index pattern as 'employees', it shows a count of 27919. Why? It should show only 26935.
Also, I have created a different document id for the 'users' index: %{objid}%{login_name}.
If your users and employees input and output are in the same file/executed at the same time, as what your example shows, you need to use conditionals to route your data to the correct elasticsearch index. Logstash concatenates your files/file into one pipeline, so all your inputs run through all of the filters/outputs, which is likely why you're getting unexpected results. See this discussion.
You will need to do something like this:
input {
  jdbc {
    statement => "SELECT * FROM users"
    type => "users"
  }
}
input {
  jdbc {
    statement => "SELECT * FROM employees"
    type => "employees"
  }
}
output {
  if [type] == "users" {
    elasticsearch {
      index => "users"
      document_type => "user"
      document_id => "%{objid}%{login_name}"
      hosts => "xx.xxx.xxx.xx:9200"
    }
  }
  if [type] == "employees" {
    elasticsearch {
      index => "employees"
      document_type => "employee"
      document_id => "%{objid}"
      hosts => "xx.xxx.xxx.xx:9200"
    }
  }
}
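If you would rather not repurpose the type field, a sketch of the same routing using the common tags option (available on every input) instead:
input {
  jdbc {
    statement => "SELECT * FROM users"
    tags => ["users"]
  }
}
output {
  if "users" in [tags] {
    elasticsearch {
      index => "users"
      document_type => "user"
      document_id => "%{objid}%{login_name}"
      hosts => "xx.xxx.xxx.xx:9200"
    }
  }
}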

How to purge a directory with Puppet and keep the dotfiles

Is there a way to purge a folder and keep the dotfiles? I'd like to purge /root. Something like:
Something like:
file { '/root':
  ensure  => present,
  owner   => 'root',
  group   => 'root',
  mode    => 0550,
  purge   => true,
  recurse => true,
}
file { '/root/.*':
  ensure => present,
  owner  => 'root',
  group  => 'root',
}
Either go for the ignore param, as h2ooooooo correctly stated.
You may find it cleaner to not recurse and instead use the tidy type with its matches parameter; files whose names don't match the pattern (such as dotfiles) are left alone:
tidy { "/root": recurse => 1, matches => '[a-zA-Z0-9_]*' }
My final solution looks like this (force => true is needed so that purging can also remove subdirectories):
file { '/root':
  ensure  => present,
  owner   => 'root',
  group   => 'root',
  mode    => 0550,
  purge   => true,
  recurse => true,
  force   => true,
  ignore  => ['.*',
              'bullx_yum_install.log',
              'install.log',
              'install.log.syslog'],
}
