Add user to multiple groups with Puppet

I'm attempting to assign users to multiple groups with a manifest, but am running into walls.
Attempt 1:
class usergroup {
group { "user_one":
ensure => present,
gid => 500,
}
group { "user_two":
ensure => present,
gid => 501,
}
group { "dev_site_one":
ensure => present,
gid => 502,
}
group { "dev_site_two":
ensure => present,
gid => 503,
}
group { "dev_site_three":
ensure => present,
gid => 504,
}
user { "user_one":
ensure => present,
uid => 500,
gid => 500,
gid => 502,
gid => 503,
gid => 504,
}
user { "user_two":
ensure => present,
uid => 501,
gid => 501,
}
}
Running this:
puppet apply --noop ./init.pp
Yields:
Error: Duplicate parameter 'gid' for on User[user_one] at
/etc/puppet/modules/webserver/manifests/init.pp:159 on node
my_web_server
Attempt 2:
I tried to break out each gid declaration like so:
class usergroup {
group { "user_one":
ensure => present,
gid => 500,
}
group { "user_two":
ensure => present,
gid => 501,
}
group { "dev_site_one":
ensure => present,
gid => 502,
}
group { "dev_site_two":
ensure => present,
gid => 503,
}
group { "dev_site_three":
ensure => present,
gid => 504,
}
user { "user_one":
ensure => present,
uid => 500,
gid => 500,
}
user { "user_one":
gid => 502,
}
user { "user_two":
ensure => present,
uid => 501,
gid => 501,
}
}
Running this:
puppet apply --noop ./init.pp
Yields:
Error: Duplicate declaration: User[user_one] is already declared in
file /etc/puppet/modules/webserver/manifests/init.pp:156; cannot
redeclare at /etc/puppet/modules/webserver/manifests/init.pp:160 on
node my_web_server
...where 160 is where I try to assign gid 502 to user_one.
Question
Is there a way to assign multiple groups with Puppet, or do I have to hand-assign these groups?

Yes, there is a way!
Have a look at http://docs.puppetlabs.com/references/latest/type.html#user.
The gid parameter specifies the user's primary group, which must be unique. Additional groups can be specified with the groups parameter.
Assuming that 500 should be the primary group ...
user { "user_one":
ensure => present,
uid => 500,
gid => 500,
groups => [502, 503, 504],
}
... should do the job.
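Two details worth noting as an aside: the user type reference asks for supplementary groups to be identified by name rather than GID, and the type also has a membership attribute that controls whether the listed groups are treated as the minimum set (the default, minimum) or as the complete set (inclusive). A minimal sketch using the group names from the question:
user { "user_one":
  ensure     => present,
  uid        => 500,
  gid        => "user_one",
  groups     => ["dev_site_one", "dev_site_two", "dev_site_three"],
  membership => inclusive, # remove supplementary groups not listed here; the default, minimum, leaves them alone
}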

If you want, you can use the group names too; see below.
group { "group_one":
ensure => present,
gid => 500,}
group { "group_two":
ensure => present,
gid => 501,}
user { "user_one":
ensure => present,
comment => 'New user being created',
uid => 505,
groups => ["group_one","group_two"],}

Related

Logstash - Split escape character " \ " is not working

I have Logstash reading logs from files on a Windows machine. There are many apps running on that machine, so I want to use the folder in the file path to determine which app each log comes from, but it is not working and I get this exception:
Failed to execute action
{:action=>LogStash::PipelineAction::Create/pipeline_id:main,
:exception=>"LogStash::ConfigurationError", :message=>"Expected one of
\', ', any character at line 21, column 1 (byte 237)
My config:
input {
beats {
port => 5044
}
}
filter {
mutate {
split => { "source" => '\\' }
}
}
output {
elasticsearch {
hosts => ["localhost:9200"]
sniffing => true
manage_template => false
index => mt4log
}
}
Can someone help me find out what the problem is here? Thanks.

Logs are ignoring input section in config files

I have a simple setup for capturing logs though HTTP and TCP.
I've created 2 conf files at /etc/logstash/conf.d/ (see below), but logs sent through HTTP are also being passed through the TCP pipeline and vice versa. For example, when I send a log through TCP it ends up both in the http-logger-* index and in tcp-logger-*; it makes no sense to me :(
http_logger.conf
input {
http {
port => 9884
}
}
filter {
grok {
match => ["[headers][request_path]", "\/(?<component>[\w-]*)(?:\/)?(?<env>[\w-]*)(?:\/)?"]
}
}
output {
amazon_es {
hosts => ['XXXXX']
region => 'us-west-2'
aws_access_key_id => 'XXXXX'
aws_secret_access_key => 'XXXXX'
index => 'http-logger-%{+YYYY.MM.dd}'
}
stdout { codec => rubydebug }
}
tcp_logger.conf
input {
tcp {
port => 9885
codec => json
}
}
filter {
}
output {
amazon_es {
hosts => ['XXXXX']
region => 'us-west-2'
aws_access_key_id => 'XXXXX'
aws_secret_access_key => 'XXXXX'
index => 'tcp-logger-%{+YYYY.MM.dd}'
}
stdout { codec => rubydebug }
}
Any ideas on what I am missing?
Thank you
Even when the input, filter, and output configuration is split across different files, Logstash processes it as one big configuration, as if all of the inputs, filters, and outputs were specified in a single file.
That said, every event coming into Logstash passes through every configured filter and output plugin. In your case, each event picked up by the TCP or HTTP input plugin passes through the filter and output plugins configured in both http_logger.conf and tcp_logger.conf, which is why you see events stashed in both the http-logger-* and tcp-logger-* indices.
To fix this, we can set a unique type field on events picked up by the tcp and http input plugins, and then apply the filter and output plugins selectively based on that type, as shown below.
http_logger.conf
input {
http {
port => 9884
type => "http_log"
}
}
filter {
if [type] == "http_log"
{
grok {
match => ["[headers][request_path]", "\/(?<component>[\w-]*)(?:\/)?(?<env>[\w-]*)(?:\/)?"]
}
}
}
output {
if ([type] == "http_log")
{
amazon_es {
hosts => ['XXXXX']
region => 'us-west-2'
aws_access_key_id => 'XXXXX'
aws_secret_access_key => 'XXXXX'
index => 'http-logger-%{+YYYY.MM.dd}'
}
}
stdout { codec => rubydebug }
}
tcp_logger.conf
input {
tcp {
port => 9885
codec => json
type => "tcp_log"
}
}
output {
if ([type] == "tcp_log")
{
amazon_es {
hosts => ['XXXXX']
region => 'us-west-2'
aws_access_key_id => 'XXXXX'
aws_secret_access_key => 'XXXXX'
index => 'tcp-logger-%{+YYYY.MM.dd}'
}
}
stdout { codec => rubydebug }
}
The explanation provided by #Ram is spot on; however, there is a cleaner way of solving the issue: enter pipelines.yml.
By default it looks like this:
- pipeline.id: main
path.config: "/etc/logstash/conf.d/*.conf"
basically it loads and combines all *.conf files - in my case I had two.
To solve the issue just separate the pipelines like so:
- pipeline.id: httplogger
path.config: "/etc/logstash/conf.d/http_logger.conf"
- pipeline.id: tcplogger
path.config: "/etc/logstash/conf.d/tcp_logger.conf"
The pipelines are now running separately :)
P.S. Don't forget to reload Logstash after any changes here.
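On a systemd-based install (an assumption about your setup), that typically means something like:
sudo systemctl restart logstash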

puppet with multiple NFS mount on same server

I have a few NFS mount points from the same server, but in different directories.
For example:
x.x.x.x:/stats /data/stats
x.x.x.x:/scratch /data/scratch
x.x.x.x:/ops /data/ops
But when I run Puppet it adds the following to my fstab (wrong mount assignment):
x.x.x.x:/scratch /data/stats nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/ops nfs defaults,nodev,nosharecache 0 0
x.x.x.x:/scratch /data/scratch nfs defaults,nodev,nosharecache 0 0
It is using the last mount device for all of the mounted partitions, so I did a bit of research and found the following bug:
https://tickets.puppetlabs.com/browse/DOCUMENT-242
I then added the nosharecache option, but still no luck.
This is my Puppet code:
class profile::mounts::stats {
# Hiera lookups
$location = hiera('profile::mounts::stats::location')
$location2 = hiera('profile::mounts::stats::location2')
tag 'new_mount'
file { '/data/stats':
ensure => directory,
owner => 'root',
group => 'root',
mode => '0755',
require => File['/data'],
tag => 'new_mount',
}
mount { '/data/stats':
ensure => mounted,
fstype => 'nfs',
device => $location,
options => 'defaults,nodev,nosharecache',
require => File['/data/stats'],
tag => 'new_mount'
}
file { '/data/ops':
ensure => directory,
owner => 'root',
group => 'mail',
mode => '0775',
require => File['/data'],
tag => 'new_mount',
}
mount { '/data/ops':
ensure => mounted,
fstype => 'nfs',
device => $location2,
options => 'defaults,nodev,nosharecache',
require => File['/data/ops'],
tag => 'new_mount',
}
file { '/data/scratch':
ensure => directory,
owner => 'root',
group => 'mail',
mode => '0775',
require => File['/data'],
tag => 'new_mount',
}
mount { '/data/scratch':
ensure => mounted,
fstype => 'nfs',
device => $location2,
options => 'defaults,nodev,nosharecache',
require => File['/data/scratch'],
tag => 'new_mount',
}
}
My Hiera lookup is as follows:
profile::mounts::stats::location: x.x.x.x:/stats
profile::mounts::stats::location2: x.x.x.x:/scratch
Why is it causing this unexpected behavior?
I compiled that code and I see a few issues:
You did not include the File['/data'] resource, but I assume you have that somewhere else?
After compiling I see this in the catalog:
$ cat myclass.json | jq '.resources | .[] | select(.type == "Mount") | [.title, .parameters]'
[
"/data/stats",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/stats",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/stats]",
"tag": "new_mount"
}
]
[
"/data/ops",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/scratch",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/ops]",
"tag": "new_mount"
}
]
[
"/data/scratch",
{
"ensure": "mounted",
"fstype": "nfs",
"device": "x.x.x.x:/scratch",
"options": "defaults,nodev,nosharecache",
"require": "File[/data/scratch]",
"tag": "new_mount"
}
]
So you are mounting both /data/ops and /data/scratch on $location2. Is that an oversight? It does not match what you said you were trying to achieve.
Otherwise I can't reproduce what you said you are observing.
Is anything other than Puppet editing the fstab file? Did you try this code on a fresh box?
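If the intent was for each mount to point at its own export, the ops mount would need its own Hiera key. A sketch, where location3 is a hypothetical new key name:
In Hiera:
profile::mounts::stats::location3: x.x.x.x:/ops
In the manifest:
$location3 = hiera('profile::mounts::stats::location3')
mount { '/data/ops':
  ensure  => mounted,
  fstype  => 'nfs',
  device  => $location3, # was $location2, which points at the scratch export
  options => 'defaults,nodev,nosharecache',
  require => File['/data/ops'],
  tag     => 'new_mount',
}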

puppet Exec always runs even if the subscribed create_resource doesn't run

I have attached a report screenshot from Foreman, and pasted below is the class that I am having an issue with.
If it's hard to go through the entire code, here is the Exec section that is not working as expected:
exec { $service:
path => ["/usr/bin/","/usr/sbin/","/bin"],
subscribe => Domain_ip_map[$domain_ip_map_titles],
command => "sudo service nagios restart",
}
The above Exec[$service] subscribes to Domain_ip_map[...], which in turn is notified by Exec['purge-config-files'], which requires File['deployconfig.cfg'].
Since there is no change to the deployconfig.cfg file, File['deployconfig.cfg'] sends no notify, so Exec['purge-config-files'] and the custom Domain_ip_map resources don't run. Up to this point everything works as expected. But the last part: Exec[$service] is subscribed to Domain_ip_map.
When Domain_ip_map does not run, how can Exec[$service] execute successfully?
class testclass ( $data = {
item1 => {
domain => 'testdomain.com',
ipaddress => '1.1.1.1',
},
},
$baseconfigdir = '/usr/local/servers',
$config_file_host = '/usr/local/test.cfg',
$config_file_service = '/usr/local/test_service.cfg' ) {
validate_hash($data)
$domain_ip_map_titles = keys($data)
file { "${baseconfigdir}":
ensure => directory,
}
exec { 'purge-config-files':
command => "/bin/rm -f ${baseconfigdir}/*",
notify => Domain_ip_map[$domain_ip_map_titles],
require => File['deployconfig.cfg'],
refreshonly => true,
}
file { 'deployconfig.cfg':
ensure => file,
path => '/home/deployconfig.cfg',
mode => '0644',
owner => 'root',
group => 'root',
content => "test",
notify => Exec['purge-config-files'],
}
# problem here: it's subscribed to Domain_ip_map, but even if Domain_ip_map doesn't run, Exec[$service] always executes. How???
exec { $service:
path => ["/usr/bin/","/usr/sbin/","/bin"],
subscribe => Domain_ip_map[$domain_ip_map_titles],
command => "sudo service nagios restart",
}
create_resources(domain_ip_map, $data)
}
define domain_ip_map($domain, $ipaddress) {
nagios_host { $domain:
....
}
nagios_service { "check_ping_${domain}":
....
}
}
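A note on that behaviour: an exec that merely subscribes to other resources still runs on every catalog application, because nothing tells Puppet it is already in sync; subscribe only adds refresh events on top of the normal run. To restart the service only when one of the Domain_ip_map resources actually changes, refreshonly => true has to be set as well. A minimal sketch of the adjusted resource:
exec { $service:
  path        => ['/usr/bin', '/usr/sbin', '/bin'],
  command     => 'sudo service nagios restart',
  subscribe   => Domain_ip_map[$domain_ip_map_titles],
  refreshonly => true, # only run when a subscribed resource changes
}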

Simplifying Puppet Manifest

I want to provision multiple sets of things on a server using existing Puppet modules; the simplest example would be:
file { "/var/www/MYVARIABLEHERE":
ensure => "directory",
}
mysql::db { MYVARIABLEHERE:
user => MYVARIABLEHERE,
password => MYVARIABLEHERE,
host => 'localhost',
grant => ['all'],
}
Is there a way to abstract this out so that I can have, say, an array of predefined options and pass them into existing Puppet modules, so I don't end up with a manifest file that's thousands of lines long?
As per the answer below I have set up:
define mySites {
mysql::db { $name:
user => $name,
password => $name,
host => 'localhost',
grant => ['all'],
}
file { "/var/www/${name}.drupal.dev":
ensure => "directory",
}
}
I then call:
mySites {"site": $name => "test", }
and get the following error:
Could not parse for environment production: Syntax error at 'name'; expected '}'
You could use a defined type to simplify this:
define mydef( $usern, $passn) {
file { "/var/www/$usern":
ensure => "directory",
}
mysql::db { $usern :
user => $usern,
password => $passn,
host => "localhost",
grant => ['all'],
}
}
# You have to call the defined type for each case.
mydef { "u1": usern => "john", passn => "pass", }
# It might be possible to provide multiple arrays to a define
# type if you use puppet's future parser
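As a side note on the error in the question: the syntax error comes from passing $name as if it were a parameter. Inside a defined type the resource title becomes $name automatically, and defined type names should be lowercase, so the define from the question could be invoked roughly like this (a sketch with hypothetical site names):
define mysites {
  mysql::db { $name:
    user     => $name,
    password => $name,
    host     => 'localhost',
    grant    => ['all'],
  }
  file { "/var/www/${name}.drupal.dev":
    ensure => directory,
  }
}
mysites { 'site_one': }
# or several at once, via an array of titles:
mysites { ['site_two', 'site_three']: }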
