I am currently setting up a reverse proxy in Puppet so that I can authenticate against Active Directory. I have the following in my Puppet module:
class { 'apache::mod::ldap': }
class { 'apache::mod::authnz_ldap': }

apache::vhost { 'reverse-proxy':
  port                => '443',
  docroot             => '/var/www/html',
  ssl                 => true,
  ssl_cert            => '/etc/httpd/ssl/cert.crt',
  ssl_key             => '/etc/httpd/ssl/cert.key',
  require             => [File['/etc/httpd/ssl/cert.crt'], File['/etc/httpd/ssl/cert.key']],
  rewrites            => [
    {
      comment      => 'Eliminate Trace and Track',
      rewrite_cond => ['%{REQUEST_METHOD} ^(TRACE|TRACK)'],
      rewrite_rule => [' .* - [F]'],
    },
  ],
  proxy_preserve_host => true,
  proxy_pass          => {
    path => '/',
    url  => 'http://127.0.0.1:5601/',
  },
  directories         => [
    {
      path                    => '/',
      provider                => 'location',
      auth_name               => 'Kibana Authentication',
      auth_type               => 'Basic',
      auth_basic_provider     => 'ldap',
      auth_ldap_bind_dn       => 'cn=serviceuser,ou=Users,dc=example,dc=com',
      auth_ldap_bind_password => 'supersecretpassword',
      auth_ldap_url           => 'ldaps://ldap.example.com/dc=example,dc=com?CN?sub?(objectClass=user)',
      require                 => 'ldap-group cn=application_users,ou=application_groups,ou=groups,dc=example,dc=com',
    },
  ],
}
The problem I'm running into is that when I apply this configuration to my Apache server, auth_ldap_bind_dn, auth_ldap_bind_password, and auth_ldap_url are not being copied over. Puppet isn't throwing any errors and Apache runs fine, but it isn't authenticating against LDAP.
Old thread, but for the benefit of anyone else with the same issue:
I've taken a look at the Apache module's code on GitHub, and it doesn't appear to support the parameters you've mentioned (auth_ldap_bind_dn, auth_ldap_bind_password, and auth_ldap_url).
However, each entry in directories accepts a custom_fragment, which you can use to inject any configuration outside the Apache module's scope into your config.
In your case, this should work:
class { 'apache::mod::ldap': }
class { 'apache::mod::authnz_ldap': }

apache::vhost { 'reverse-proxy':
  port                => '443',
  docroot             => '/var/www/html',
  ssl                 => true,
  ssl_cert            => '/etc/httpd/ssl/cert.crt',
  ssl_key             => '/etc/httpd/ssl/cert.key',
  require             => [File['/etc/httpd/ssl/cert.crt'], File['/etc/httpd/ssl/cert.key']],
  rewrites            => [
    {
      comment      => 'Eliminate Trace and Track',
      rewrite_cond => ['%{REQUEST_METHOD} ^(TRACE|TRACK)'],
      rewrite_rule => [' .* - [F]'],
    },
  ],
  proxy_preserve_host => true,
  proxy_pass          => {
    path => '/',
    url  => 'http://127.0.0.1:5601/',
  },
  directories         => [
    {
      path                => '/',
      provider            => 'location',
      auth_name           => 'Kibana Authentication',
      auth_type           => 'Basic',
      auth_basic_provider => 'ldap',
      custom_fragment     => "AuthLDAPURL 'ldaps://ldap.example.com/dc=example,dc=com?CN?sub?(objectClass=user)'
AuthLDAPBindDN 'cn=serviceuser,ou=Users,dc=example,dc=com'
AuthLDAPBindPassword supersecretpassword",
      require             => 'ldap-group cn=application_users,ou=application_groups,ou=groups,dc=example,dc=com',
    },
  ],
}
Logstash 7.16. OpenSearch output plugin. Tarball.
Run:
./bin/logstash --path.settings /opt/logstash/config --verbose
Error message:
...
[ERROR][logstash.javapipeline ][fallback] Pipeline error {:pipeline_id=>"fallback", :exception=>#<Manticore::UnknownException: Unsupported or unrecognized SSL message>,
...
Output configuration file:
output {
  opensearch {
    hosts                        => [ "<IP>" ]
    user                         => "user"
    password                     => "password"
    index                        => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    ssl                          => true
    ssl_certificate_verification => false
    cacert                       => "/opt/logstash/config/tls/root-ca.crt"
    keystore                     => "/opt/logstash/config/tls/logstash-elasticsearch-output-client.p12"
    keystore_password            => "<passwd>"
  }
}
Thanks for your attention
When I try to run this code on my Puppet server, it complains about TCP port 3000:
Error: /Stage[main]/Main/Grafana_datasource[prometheus]: Could not evaluate: Failed to open TCP connection to localhost:3000 (Connection refused - connect(2) for "localhost" port 3000)
class { 'grafana':
  cfg      => {
    app_mode => 'production',
  },
  database => {
    type => 'mysql',
    host => '127.0.0.1:3306',
    name => 'grafana',
    user => 'root',
    type => '',
  },
  users    => {
    allow_sign_up => false,
  },
}

grafana_datasource { 'Prometheus':
  grafana_url      => 'http://localhost:3000',
  grafana_user     => 'admin',
  grafana_password => 'grafanapw',
  type             => 'prometheus',
  url              => 'http://prom-ip:9090',
  access_mode      => 'proxy',
  is_default       => true,
  require          => Class['grafana'],
}
If I add the server port below to class { 'grafana': }, it stops complaining, but no datasource is created:
class { 'grafana':
  cfg      => {
    server => {
      http_port => 8080,
    },
  },
  database => {
    ...
  },
}
Overall, the main issue is that Grafana won't come up with a working datasource and dashboard (dashboard not shown here).
https://i.stack.imgur.com/MOz01.png
Grafana binds to port 3000 by default. The setcap capability below is only needed if you want Grafana to bind to a privileged port (below 1024); an unprivileged port like 8080 doesn't require it:
$ sudo setcap 'cap_net_bind_service=+ep' /usr/sbin/grafana-server
Whichever port you configure, grafana_datasource talks to Grafana over HTTP, so its grafana_url must point at the same port (http://localhost:8080 once you set http_port => 8080); otherwise the datasource can't be created.
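A minimal sketch of the aligned configuration, assuming the same grafana class and grafana_datasource type used in the question (ports and credentials are placeholders):

```puppet
# Sketch: keep the server port and the datasource URL in sync.
# Values below are placeholders, not a drop-in config.
class { 'grafana':
  cfg => {
    app_mode => 'production',
    server   => {
      http_port => 8080,  # Grafana now listens on 8080 instead of 3000
    },
  },
}

grafana_datasource { 'prometheus':
  grafana_url      => 'http://localhost:8080',  # must match http_port above
  grafana_user     => 'admin',
  grafana_password => 'grafanapw',
  type             => 'prometheus',
  url              => 'http://prom-ip:9090',
  access_mode      => 'proxy',
  is_default       => true,
  require          => Class['grafana'],
}
```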
A bit of a Puppet newbie here. I am trying to recursively purge all files and directories under /var/www, except keep one file present (/var/www/html/appicon.ico). This is my code:
file { '/var/www':
  ensure    => directory,
  recurse   => true,
  purge     => true,
  force     => true,
  require   => Package['httpd'],
  subscribe => Package['httpd'],
} ->
file { '/var/www/html':
  ensure => directory,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
} ->
file { '/var/www/html/appicon.ico':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => '',
}
The code does purge all files and directories, except that other files under /var/www/html survive. Any thoughts on what I am doing wrong here, or how this can be done properly?
You would need to set purge on the html directory too, that is:
file { '/var/www':
  ensure    => directory,
  recurse   => true,
  purge     => true,
  force     => true,
  require   => Package['httpd'],
  subscribe => Package['httpd'],
} ->
file { '/var/www/html':
  ensure  => directory,
  recurse => true, # note here
  purge   => true, #
  owner   => 'root',
  group   => 'root',
  mode    => '0755',
} ->
file { '/var/www/html/appicon.ico':
  ensure  => file,
  owner   => 'root',
  group   => 'root',
  mode    => '0644',
  content => '',
}
Explicitly including a file/directory in a Puppet manifest "protects" it from being purged by purge => true, recurse => true set on its parent directory.
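A minimal sketch of that rule in isolation (the paths here are hypothetical):

```puppet
# Purge everything under /srv/app recursively,
# but keep /srv/app/keep.me: managing the file
# explicitly exempts it from the parent's purge.
file { '/srv/app':
  ensure  => directory,
  recurse => true,
  purge   => true,
  force   => true,
}

file { '/srv/app/keep.me':
  ensure => file,
}
```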
I am using Puppet 3.8.7. I want to write all of the code below in a single manifest file and run it; each block works fine separately. Is this possible? First I want to install Node.js, then update Node.js, then run my bash script, then install Git and clone a Git repo.
Install Node.js:
class { 'nodejs':
  repo_url_suffix => '6.x',
}
Then update Node.js:
exec { 'install-node-version-manager':
  cwd       => '/',
  path      => '/usr/bin:/bin:/usr/local/bin:/usr/lib/node_modules/npm/bin',
  logoutput => 'on_failure',
  command   => 'npm install -g n',
}
exec { 'install-node-version-manager':
  cwd       => '/',
  path      => '/usr/bin:/bin:/usr/local/bin:/usr/lib/node_modules/npm/bin',
  logoutput => 'on_failure',
  command   => 'n latest',
}
Then run my_bash_script.sh:
file { '/home/ec2-user/my_bash_script.sh':
  source => 'puppet:///modules/mymodule/my_bash_script.sh',
  mode   => '0755',
}
exec { '/home/ec2-user/my_bash_script.sh':
  refreshonly => true,
  require     => File['/home/ec2-user/my_bash_script.sh'],
  subscribe   => File['/home/ec2-user/my_bash_script.sh'],
}
Then install Git and clone the repo:
package { 'git':
  ensure => 'latest',
}
vcsrepo { '/nodejs-helloworld':
  ensure   => latest,
  provider => git,
  require  => [Package['git']],
  source   => 'git@gitlab.dev.abc.net:hello-world/nodejs-helloworld.git',
  revision => 'master',
}
Puppet provides various ways to establish relationships and ordering between resources.
You can use the relationship metaparameters (require, before, notify, subscribe), and you can also use chaining arrows to control the order of execution.
Here is your code in one class:
class installnodejs {

  class { 'nodejs':
    repo_url_suffix => '6.x',
    before          => Exec['install-node-version-manager-global'],
  }

  exec { 'install-node-version-manager-global':
    cwd       => '/',
    path      => '/usr/bin:/bin:/usr/local/bin:/usr/lib/node_modules/npm/bin',
    logoutput => 'on_failure',
    command   => 'npm install -g n',
    before    => Exec['install-node-version-manager-latest'],
  }

  exec { 'install-node-version-manager-latest':
    cwd       => '/',
    path      => '/usr/bin:/bin:/usr/local/bin:/usr/lib/node_modules/npm/bin',
    logoutput => 'on_failure',
    command   => 'n latest',
    before    => File['/home/ec2-user/my_bash_script.sh'],
  }

  file { '/home/ec2-user/my_bash_script.sh':
    source => 'puppet:///modules/mymodule/my_bash_script.sh',
    mode   => '0755',
    before => Exec['/home/ec2-user/my_bash_script.sh'],
  }

  exec { '/home/ec2-user/my_bash_script.sh':
    refreshonly => true,
    require     => File['/home/ec2-user/my_bash_script.sh'],
    subscribe   => File['/home/ec2-user/my_bash_script.sh'],
    before      => Vcsrepo['/nodejs-helloworld'],
  }

  package { 'git':
    ensure => 'latest',
  }

  vcsrepo { '/nodejs-helloworld':
    ensure   => latest,
    provider => git,
    require  => [Package['git']],
    source   => 'git@gitlab.dev.uberops.net:hello-world/nodejs-helloworld.git',
    revision => 'master',
  }
}
Please note that I've renamed some of your resources: you can't declare two resources of the same type with the same title in the same catalog.
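As an aside, the same ordering can be expressed with chaining arrows instead of the before metaparameters; a minimal sketch using the resource titles above:

```puppet
# Equivalent ordering with chaining arrows instead of 'before'.
# These reference resources already declared in the class.
Class['nodejs']
-> Exec['install-node-version-manager-global']
-> Exec['install-node-version-manager-latest']
-> File['/home/ec2-user/my_bash_script.sh']
-> Exec['/home/ec2-user/my_bash_script.sh']
-> Vcsrepo['/nodejs-helloworld']
```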
I'm using Kohana 3.3 and have the following directory structure set up (+ means a folder, • means a file):
+ modules
  + app-admin
    + classes
      + admin
        • Companies.php
        • Users.php
        • Locations.php
    + i18n
    + views
  + app-front
    + classes
    + i18n
    + views
For the "app-admin" module I have the following routes defined:
Route::set('admin default', 'admin')
    ->defaults(array(
        'directory'  => 'admin',
        'controller' => 'authentication',
        'action'     => 'login',
    ));

Route::set('admin', 'admin/<controller>(/<action>(/<id>))')
    ->defaults(array(
        'directory' => 'admin',
    ));
These routes let me access the "admin" controllers like so:
http://localhost/admin/companies
http://localhost/admin/companies/edit/2
http://localhost/admin/companies/add
This works with no issue. I then installed a pagination module (https://github.com/webking/kohana-pagination) which has the following config:
'admin' => array(
    'current_page'      => array('source' => 'query_string', 'key' => 'page'), // source: "query_string" or "route"
    'total_items'       => 0,
    'items_per_page'    => 2,
    'view'              => 'admin/_partials/pagination',
    'auto_hide'         => FALSE,
    'first_page_in_url' => FALSE,
),
When I do this, I'm getting the following error:
Kohana_Exception [ 0 ]: Required route parameter not passed: controller
SYSPATH\classes\Kohana\Route.php [ 599 ]
What am I doing wrong?
Thanks,
Z
I ended up setting up a route specifically for each controller in the "admin" module and provided a default "controller" value as such:
Route::set('admin users', 'admin/users(/<action>(/<id>))')
    ->defaults(array(
        'directory'  => 'admin',
        'controller' => 'users', // Provided a default value for <controller>
        'action'     => 'index',
    ));
And it did the job, the pagination is working ok now. I thought a "catch-all" route for "admin" would do this for me.