I am new to the Puppet world. I have to create a parameterized module that creates multiple users.
I have created the module below, but I am not sure whether it is correct.
class users () {
group { "sysadmin":
ensure => present,
}
define users::add ($username, $groupname= "" ,$shell= "",$ensure, $login= false) {
user { "$username":
ensure => "$ensure" ,
groups => "$groupname" ,
shell => "$shell" ,
require => Group["sysadmin"] ,
}
}
}
I have to create multiple users with the above code. How and where do I declare them? Can I call it in the same file (init.pp)?
I tried appending the lines below in the same file:
class users::add {
users::add { "user1": username => "myuser1" , groupname => "sysadmin" , shell => "/bin/bash" , ensure => present , login => true }
users::add { "user2": username => "myuser2" , groupname => "sysadmin" , shell => "/bin/bash" , ensure => present , login => true }
}
It is not working.
Any help is highly appreciated. Thanks in advance.
For creating multiple resources it is better to use a define instead of a class, like this:
define users::account::add(
$userid,
$fullname,
$groups,
$shell,
$password = '',
$home_path = hiera('global::home_path')
) {
group { $name:
gid => $userid,
ensure => 'present',
} ->
user { $name:
uid => $userid,
gid => $userid,
comment => $fullname,
home => "${home_path}/${name}",
shell => $shell,
password => $password,
ensure => 'present',
groups => $groups,
}
}
This define may then be used directly, one or several times:
users::account::add { 'root':
userid => 0,
groups => [ ],
fullname => 'kroot',
shell => "${global::bash}",
password => '',
home_path => '',
}
users::account::add { 'raven':
userid => 1001,
groups => [ "${global::def_group}" ],
fullname => 'super admin',
shell => "${global::bash}",
password => '*',
}
users::account::add { 'admin':
userid => 1002,
groups => [ "${global::def_group}" ],
fullname => 'other admin',
shell => "${global::bash}",
password => '*',
}
A much prettier variant is to use this define together with Hiera:
class users::accounts {
create_resources('users::account::add', hiera('users'))
}
while the hieradata should contain the user entries:
{
"users": {
"root": {
"userid": "0",
"groups": [ ],
"fillname": "kroot",
"shell": "/bin/sh",
"password": "",
"home_path": ""
},
"raven": {
"userid": "1001",
"groups": [ "root", "wheel" ],
"fillname": "super admin",
"shell": "/usr/local/bin/bash",
"password": ""
},
"admin": {
"userid": "1002",
"groups": [ "root", "wheel" ],
"fillname": "other admin",
"shell": "/usr/local/bin/bash",
"password": ""
}
}
}
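On Puppet 4 and later, the same Hiera-driven approach can also be written without create_resources by iterating over the hash. A minimal sketch, assuming the users hash and the users::account::add define shown above:
class users::accounts {
  # look up the users hash from Hiera (empty hash if the key is absent)
  $users = lookup('users', Hash, 'first', {})
  # declare one users::account::add per entry, passing the per-user
  # settings through with the splat operator
  $users.each |String $username, Hash $settings| {
    users::account::add { $username:
      * => $settings,
    }
  }
}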
The contents of the Logstash conf file look like this:
input {
beats {
port => 5044
}
file {
path => "/usr/share/logstash/iway_logs/*"
start_position => "beginning"
sincedb_path => "/dev/null"
#ignore_older => 0
codec => multiline {
pattern => "^\[%{NOTSPACE:timestamp}\]"
negate => true
what => "previous"
max_lines => 2500
}
}
}
filter {
grok {
match => { "message" =>
['(?m)\[%{NOTSPACE:timestamp}\]%{SPACE}%{WORD:level}%{SPACE}\(%{NOTSPACE:entity}\)%{SPACE}%{GREEDYDATA:rawlog}'
]
}
}
date {
match => [ "timestamp", "yyyy-MM-dd'T'HH:mm:ss.SSS"]
target => "#timestamp"
}
grok {
match => { "entity" => ['(?:W.%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}\.%{GREEDYDATA:workerid}|W.%{GREEDYDATA:channel}\.%{GREEDYDATA:workerid}|%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}\.%{GREEDYDATA:workerid}|%{GREEDYDATA:channel}:%{GREEDYDATA:inlet}:%{GREEDYDATA:listener}|%{GREEDYDATA:channel})']
}
}
dissect {
mapping => {
"[log][file][path]" => "/usr/share/logstash/iway_logs/%{serverName}#%{configName}#%{?ignore}.log"
}
}
}
output {
elasticsearch {
hosts => "${ELASTICSEARCH_HOST_PORT}"
index => "iway_"
user => "${ELASTIC_USERNAME}"
password => "${ELASTIC_PASSWORD}"
ssl => true
ssl_certificate_verification => false
cacert => "/certs/ca.crt"
}
}
As one can make out, the idea is to parse a custom log employing multiline extraction. The extraction does its job. The log occasionally contains an empty first line. So:
[2022-11-29T12:23:15.073] DEBUG (manager) Generic XPath iFL functions use full XPath 1.0 syntax
[2022-11-29T12:23:15.074] DEBUG (manager) XPath 1.0 iFL functions use iWay's full syntax implementation
which naturally causes Kibana to report an empty line.
In an attempt to suppress this line from being sent to ES, I added the following as the last filter item:
if ![message] {
drop { }
}
if [message] =~ /^\s*$/ {
drop { }
}
The resulting JSON payload to ES:
{
"#timestamp": [
"2022-12-09T14:09:35.616Z"
],
"#version": [
"1"
],
"#version.keyword": [
"1"
],
"event.original": [
"\r"
],
"event.original.keyword": [
"\r"
],
"host.name": [
"xxx"
],
"host.name.keyword": [
"xxx"
],
"log.file.path": [
"/usr/share/logstash/iway_logs/localhost#iCLP#iway_2022-11-29T12_23_33.log"
],
"log.file.path.keyword": [
"/usr/share/logstash/iway_logs/localhost#iCLP#iway_2022-11-29T12_23_33.log"
],
"message": [
"\r"
],
"message.keyword": [
"\r"
],
"tags": [
"_grokparsefailure"
],
"tags.keyword": [
"_grokparsefailure"
],
"_id": "oRc494QBirnaojU7W0Uf",
"_index": "iway_",
"_score": null
}
While this does drop the empty first line, it also unfortunately interferes with the multiline operation on other lines. In other words, the multiline operation does not work anymore. What am I doing incorrectly?
Use of the following variation resolved the issue:
if [message] =~ /\A\s*\Z/ {
drop { }
}
In a Ruby regular expression, ^ and $ also match at line breaks inside the string, so /^\s*$/ matches any multiline event that merely contains a blank line, whereas \A and \Z anchor to the start and end of the whole event, so only events consisting entirely of whitespace are dropped. This solution is based on Badger's answer provided on the Logstash forums, where this question was raised as well.
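For reference, a minimal sketch of where that guard sits, assuming the grok, date and dissect filters from the question stay otherwise unchanged:
filter {
  # drop events that are empty or contain only whitespace (e.g. a stray "\r")
  if [message] =~ /\A\s*\Z/ {
    drop { }
  }
  # the grok, date and dissect filters from the question follow here unchanged
}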
I am trying to import PHP-FPM logs into an ELK stack. I use Filebeat to read the files, and the multiline log entries should be merged before the data is sent to Logstash.
For this I built the following Filebeat configuration:
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: filestream
# Change to true to enable this input configuration.
enabled: true
# Paths that should be crawled and fetched. Glob based paths.
paths:
- '/var/log/app/fpm/*.log'
multiline.type: pattern
multiline.pattern: '^\[\d{2}-\w{3}-\d{4} \d{2}:\d{2}:\d{2} [\w/]*\] PHP\s*at.*'
multiline.negate: false
multiline.match: after
processors:
- add_fields:
fields.docker.service: "fpm"
But as you can see in the rubydebug output from Logstash, the messages were not merged:
{
"#timestamp" => 2021-08-10T13:54:10.149Z,
"agent" => {
"version" => "7.13.4",
"hostname" => "3cb76d7d4c7d",
"id" => "61dec25e-12ec-4a65-9f1f-ec72a5aa83ee",
"ephemeral_id" => "631db0d8-60ad-4625-891c-3da09cb0a442",
"type" => "filebeat"
},
"input" => {
"type" => "filestream"
},
"log" => {
"offset" => 344,
"file" => {
"path" => "/var/log/app/fpm/error.log"
}
},
"tags" => [
[0] "beats_input_codec_plain_applied",
[1] "_grokparsefailure"
],
"fields" => {
"docker" => {
"service" => "fpm"
}
},
"#version" => "1",
"message" => "[17-Jun-2021 13:07:56 Europe/Berlin] PHP [WARN] (/var/www/html/Renderer/RendererTranslator.php:92) - unable to translate type integer. It is not a string (/url.php)",
"ecs" => {
"version" => "1.8.0"
}
}
{
"input" => {
"type" => "filestream"
},
"module" => "PHP IES\\ServerException",
"ecs" => {
"version" => "1.8.0"
},
"#version" => "1",
"log" => {
"offset" => 73,
"file" => {
"path" => "/var/log/ies/fpm/error.log"
}
},
"#timestamp" => 2021-06-17T11:10:41.000Z,
"agent" => {
"version" => "7.13.4",
"hostname" => "3cb76d7d4c7d",
"id" => "61dec25e-12ec-4a65-9f1f-ec72a5aa83ee",
"ephemeral_id" => "631db0d8-60ad-4625-891c-3da09cb0a442",
"type" => "filebeat"
},
"tags" => [
[0] "beats_input_codec_plain_applied"
],
"fields" => {
"docker" => {
"service" => "fpm"
}
},
"message" => "core.login"
}
{
"#timestamp" => 2021-08-10T13:54:10.149Z,
"agent" => {
"version" => "7.13.4",
"hostname" => "3cb76d7d4c7d",
"id" => "61dec25e-12ec-4a65-9f1f-ec72a5aa83ee",
"ephemeral_id" => "631db0d8-60ad-4625-891c-3da09cb0a442",
"type" => "filebeat"
},
"ecs" => {
"version" => "1.8.0"
},
"input" => {
"type" => "filestream"
},
"tags" => [
[0] "beats_input_codec_plain_applied",
[1] "_grokparsefailure"
],
"fields" => {
"docker" => {
"service" => "fpm"
}
},
"#version" => "1",
"message" => "[17-Jun-2021 13:10:41 Europe/Berlin] PHP at App\\Module\\ComponentModel\\ComponentModel->doPhase(/var/www/html/Component/Container.php:348)",
"log" => {
"offset" => 204,
"file" => {
"path" => "/var/log/app/fpm/error.log"
}
}
}
I tested the regular expression with Rubular and it matches the stack trace messages.
What am I doing wrong here?
Instead of adjusting the Filebeat configuration, I adjusted the application's log configuration.
It now writes JSON files, which Filebeat can read easily, so the line breaks no longer need any special handling.
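For example, if the application writes one JSON object per line, a filestream input can decode it directly with the ndjson parser. A rough sketch; the path is an assumption, and the parsers option requires a reasonably recent Filebeat 7.x:
filebeat.inputs:
  - type: filestream
    enabled: true
    paths:
      - '/var/log/app/fpm/*.json'   # hypothetical path to the JSON log files
    parsers:
      - ndjson:
          target: ""          # merge the decoded fields into the event root
          add_error_key: true # tag events whose lines are not valid JSON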
You need to set multiline.negate to true.
I have a stock table where I have stocks for shops/products:
input {
jdbc {
statement => "SELECT ShopId, ProductCode, Quantity FROM stock ORDER BY productcode;"
}
}
then I have a simple filter to aggregate that data:
filter {
aggregate {
task_id => "%{productcode}"
code => "
map['productcode'] ||= event.get('productcode')
map['objectID'] ||= event.get('productcode')
map['stocks'] ||= []
map['stocks'] << {
'ShopId' => event.get('ShopId'),
'quantity' => event.get('quantity'),
}
event.cancel()
"
push_previous_map_as_event => true
timeout => 3
}
}
which gives me the output I expect, for example:
{
"productcode": "123",
"objectID": "123",
"stocks": [
{
"ShopId": 1
"Quantity": 2
},
{
"ShopId": 2
"Quantity": 5
}
]
}
Now I can push that data to Algolia via the http output plugin.
But the issue is that there are thousands of objects, which results in thousands of calls.
That's why I want to use the batch endpoint and pack the objects into packages of e.g. 1000 objects, but to do so I need to adjust the structure to:
{
"requests": [
{
"action": "addObject",
"body": {
"productcode": "123",
"objectID": "123",
...
}
},
{
"action": "addObject",
"body": {
"productcode": "456",
"objectID": "456",
...
}
}
]
}
which looks to me like another aggregate step. I already tried:
aggregate {
task_id => "%{source}"
code => "
map['requests'] ||= []
map['requests'] << {
'action' => 'addObject',
'body' => {
'productcode' => event.get('productcode'),
'objectId' => event.get('objectID'),
'stocks' => event.get('stocks')
}
}
event.cancel()
"
push_previous_map_as_event => true
timeout => 3
}
but it does not work.
Also, with this type of aggregate filter I am not able to configure how big the packages sent to the batch endpoint should be.
I will be very grateful for any help or clues.
I have two tables in SQL Server, AppDetails and AppBranchDetails.
I want to read all rows of these two tables and merge them based on a condition.
Below are the two queries I want to run:
select id as colg_id, name, sirname from AppDetails order by id
select id as branch_id, branch_name, branch_add from AppBranchDetails order by id
In the above two queries, "id" is a primary key which is the same for both tables.
The output should look like below for id == 1:
{
"name": "ram",
"sirname": "patil",
"id": 1,
"BRANCH": [
{
"id": 1,
"branch_name": "EE",
"branch_add": "IND"
},
{
"id": 1,
"branch_name": "ME",
"branch_add": "IND"
}
]
}
The output should look like below for id == 2:
{
"name": "sham",
"sirname": "bhosle",
"id": 2,
"BRANCH": [
{
"id": 2,
"branch_name": "SE",
"branch_add": "US"
},
{
"id": 2,
"branch_name": "FE",
"branch_add": "US"
}
]
}
I am trying with the configuration below (app.conf):
input {
jdbc {
jdbc_connection_string => "jdbc:sqlserver://x.x.x.x:1433;databaseName=AAA;"
jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
jdbc_user => "sa"
jdbc_password => "sa#111"
statement => "select id as colg_id, name, sirname from AppDetails order by id"
tracking_column => "colg_id"
use_column_value => true
type => "college"
}
jdbc {
jdbc_connection_string => "jdbc:sqlserver://x.x.x.x:1433;databaseName=AAA;"
jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
jdbc_user => "sa"
jdbc_password => "sa#111"
statement => "select id as branch_id, branch_name, branch_add from AppBranchDetails order by id"
tracking_column => "branch_id"
use_column_value => true
type => "branch"
}
}
filter {
if [type] == "branch" {
aggregate {
task_id => "%{branch_id}"
code => "
map['BRANCH'] ||= []
map['BRANCH'] << event.get('branch_id')
map['BRANCH'] << event.get('branch_name')
map['BRANCH'] << event.get('branch_add')
event.cancel()
"
push_previous_map_as_event => true
timeout => 5
}
mutate {
remove_field => [ "#version" , "#timestamp" ]
}
}
}
output {
stdout { codec => json_lines }
}
Can anyone please suggest how I can produce the result I have mentioned above?
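Not a definitive answer, but one common pattern is to join the two tables in a single jdbc statement (for example select d.id, d.name, d.sirname, b.branch_name, b.branch_add from AppDetails d join AppBranchDetails b on b.id = d.id order by d.id) and let an aggregate filter fold the branch rows into an array per id. A minimal sketch built from the column names above:
filter {
  aggregate {
    task_id => "%{id}"
    code => "
      map['id']      ||= event.get('id')
      map['name']    ||= event.get('name')
      map['sirname'] ||= event.get('sirname')
      map['BRANCH']  ||= []
      map['BRANCH']  << {
        'id'          => event.get('id'),
        'branch_name' => event.get('branch_name'),
        'branch_add'  => event.get('branch_add')
      }
      event.cancel()
    "
    push_previous_map_as_event => true
    timeout => 5
  }
}
As with any use of push_previous_map_as_event, this relies on the rows arriving ordered by id and on pipeline.workers being set to 1.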
I have an Elasticsearch index with multiple types, and each type has its own filter parameters. Now we are building a global search on Elasticsearch across multiple types, and I am a bit confused about how to include a type-specific where clause in NEST.
Elasticsearch
-> Type 1 (where x=1)
-> Type 2 (where y=1)
Now we are building a search query
var result = client.Search<ISearchDto>(s => s
.From(from)
.Size(PageSize)
.Types(lstTypes)
.Query(q => q.QueryString(qs => qs.Query(query)))
);
*lstTypes will have Type 1 and Type 2.
Now how can I add the where clause for all Type 1 items with x=1 and for all Type 2 items with y=1 in NEST?
I hope the question is clear; any help on this will be highly appreciated.
You can query on the _type meta field in much the same way as you query on any other field. To perform different queries based on type within one search query, you can use a bool query with multiple clauses.
client.Search<ISearchDto>(s => s
.From(from)
.Size(pageSize)
.Type(Types.Type(typeof(FirstSearchDto), typeof(SecondSearchDto)))
.Query(q => q
.Bool(b => b
.Should(sh => sh
.Bool(bb => bb
.Filter(
fi => fi.Term("_type", "firstSearchDto"),
fi => fi.Term(f => f.X, 1)
)
), sh => sh
.Bool(bb => bb
.Filter(
fi => fi.Term("_type", "secondSearchDto"),
fi => fi.Term(f => f.Y, 1)
)
)
)
)
)
);
We have a bool query with 2 should clauses; each should clause is a bool query with the conjunction of 2 filter clauses, one for _type and the other for the property to be queried for each type, respectively.
NEST supports operator overloading, so this query can be written more succinctly with:
client.Search<ISearchDto>(s => s
.From(from)
.Size(pageSize)
.Type(Types.Type(typeof(FirstSearchDto), typeof(SecondSearchDto)))
.Query(q => (+q
.Term("_type", "firstSearchDto") && +q
.Term(f => f.X, 1)) || (+q
.Term("_type", "secondSearchDto") && +q
.Term(f => f.Y, 1))
)
);
Both produce the following query
{
"from": 0,
"size": 20,
"query": {
"bool": {
"should": [
{
"bool": {
"filter": [
{
"term": {
"_type": {
"value": "firstSearchDto"
}
}
},
{
"term": {
"x": {
"value": 1
}
}
}
]
}
},
{
"bool": {
"filter": [
{
"term": {
"_type": {
"value": "secondSearchDto"
}
}
},
{
"term": {
"y": {
"value": 1
}
}
}
]
}
}
]
}
}
}