I have a Linux server on which I have installed lighttpd (1.4.28). I have set up authentication for multiple folders (13 of them) like this:
auth.debug = 2
auth.backend = "plain"
auth.backend.plain.userfile = "/home/.lighttpdpasswd"
auth.require = (
    "/test1" => (
        "method"  => "basic",
        "realm"   => "Password protected area",
        "require" => "user=test1"
    ),
    ...
    "/test13" => (
        "method"  => "basic",
        "realm"   => "Password protected area",
        "require" => "user=test13"
    ),
)
And /home/.lighttpdpasswd looks like this:
test1:test1
test2:test2
test3:test3
test4:test4
test5:test5
test6:test6
test7:test7
test8:test8
test9:test9
test10:test10
test11:test11
test12:test12
test13:test13
Now, for folders 1 through 9 authentication works fine, but for folders 10 through 13 access is refused even with the correct credentials!
Is this a bug in lighttpd, or do I need to add some parameters?
lighttpd mod_auth does a simple prefix match as it walks the auth.require list; it does not look for a complete path match, just a prefix match. A request for /test10 therefore hits the "/test1" entry first and is checked against "user=test1", so test10's credentials are refused.
A workaround is to place your longer paths before your shorter paths in the auth.require list, so "/test10" through "/test13" prior to "/test1".
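A minimal sketch of the reordered list, reusing the method/realm/require values from the question:
auth.require = (
    "/test10" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test10" ),
    "/test11" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test11" ),
    "/test12" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test12" ),
    "/test13" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test13" ),
    "/test1"  => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test1" ),
    # /test2 through /test9 can follow in any order, since no other entry is a prefix of them
)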
I have a custom form with a managed_file field for video upload in my custom Drupal 8 module. Once a video is uploaded, it is accessible to everyone. I want to restrict video access to logged-in users or specific user roles. I tried the "Private Files Download Permission" module, but it always says forbidden for everyone. I have set up the private file system path and files are being uploaded to the private directory, but they are not accessible over the browser. This is the field from my custom form:
$form['activity']['videos'] = [
  '#type' => 'managed_file',
  '#upload_location' => 'private://activity/videos/',
  '#multiple' => TRUE,
  '#description' => t('Allowed extensions: mp4 avi'),
  '#title' => t('Upload Video'),
  '#upload_validators' => [
    'file_validate_extensions' => ['mp4 avi'],
  ],
  '#weight' => '3',
  '#ajax' => [
    'callback' => '::fix_ajax_callback',
  ],
  '#disabled' => !empty($activity),
];
Have you tried the following settings in the Private Files Download Permission module (www.drupal.org/project/private_files_download_permission)?
Under "Enabled Users" and "Enabled Roles", choose who can download these files.
Requesting some help, please.
The requirement is to create a custom firewall service and then allow that service only from selected IPs (I am trying to use firewalld_rich_rule here).
Here is the sample code:
class foo::fwall (
$sourceip = undef,
)
{
include firewalld
if $sourceip {
$sourceip.each |String $ipaddr| {
firewalld_rich_rule { "rich_rule_${ipaddr}":
ensure => enabled,
permanent => true,
zone => 'public',
family => ipv4,
source => $ipaddr,
element => service,
servicename => 'bar',
action => accept,
}
}
}
# this is defined in firewalld class and works good
firewalld::custom_service { 'bar':
short => 'bar custom service',
description => 'custom service ports',
ports => [
{
port => '7771',
protocol => 'tcp',
},
{
port => '8282',
protocol => 'tcp',
},
{
port => '8539',
protocol => 'tcp',
},
],
}
}
While running it on a node with a couple of IP addresses (provided as an array for $sourceip), it results in a duplicate declaration error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Resource Statement, Duplicate declaration: Firewalld_rich_rule[rich_rule_2] is already declared at (file: .../dev/modules/test/manifests/fwall.pp, line: 11); cannot redeclare (file: .../dev/modules/test/manifests/fwall.pp, line: 11) (file: .../dev/modules/test/manifests/fwall.pp, line: 11, column: 7) on node server.domain
I am trying this with Puppet 5.5 (from Puppet Labs) on Red Hat Enterprise Linux 7 servers.
Note: I also tried defining a resource type following this example from the Puppet documentation, but I am getting an invalid address error.
define puppet::binary::symlink ($binary = $title) {
file {"/usr/bin/${binary}":
ensure => link,
target => "/opt/puppetlabs/bin/${binary}",
}
}
Use the defined type for the iteration somewhere else in your manifest file:
$binaries = ['facter', 'hiera', 'mco', 'puppet', 'puppetserver']
puppet::binary::symlink { $binaries: }
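Applied to the firewalld case above, the same pattern would look roughly like this; it is only a sketch, the resource parameters are copied from the question, and foo::fwall::rich_rule is just a suggested name:
define foo::fwall::rich_rule (
  String $ipaddr = $title,
) {
  firewalld_rich_rule { "rich_rule_${ipaddr}":
    ensure      => enabled,
    permanent   => true,
    zone        => 'public',
    family      => 'ipv4',
    source      => $ipaddr,
    element     => 'service',
    servicename => 'bar',
    action      => 'accept',
  }
}
# then, inside foo::fwall, replace the each loop with one declaration per IP:
foo::fwall::rich_rule { $sourceip: }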
I had to change the data type for $sourceip to array in RH Satellite's smart class parameters; it was String by default. Everything works fine now.
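For context, the duplicate declaration happened because $sourceip was arriving as a String instead of an Array. A minimal sketch of how the parameter could be typed so a wrong value fails at compile time instead (Stdlib::IP::Address is an assumption and requires puppetlabs-stdlib):
class foo::fwall (
  Optional[Array[Stdlib::IP::Address]] $sourceip = undef,  # reject plain strings up front
) {
  include firewalld
  # ... the each loop and firewalld::custom_service stay exactly as in the question ...
}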
I have a basic Puppet install set up using this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-puppet-4-on-ubuntu-16-04
When I run /opt/puppetlabs/bin/puppet agent --test on my node, I get:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Error while evaluating a Resource Statement. Could not find declared class firewall at /etc/puppetlabs/code/environments/production/manifests/site.pp:7:1 on node mark-inspiron.
On my node:
/opt/puppetlabs/bin/puppet module list
returns
/etc/puppetlabs/code/environment/production/modules
----- puppetlabs-firewall (v1.9.0)
On my puppet master at /etc/puppetlabs/code/environments/production/manifests/site.pp:
file {'/tmp/it_works.txt': # resource type file and filename
ensure => present, # make sure it exists
mode => '0644', # file permissions
content => "It works on ${ipaddress_eth0}!\n", # Print the eth0 IP fact
}
class { 'firewall': }
resources { 'firewall':
purge => true,
}
firewall { "051 asterisk-set-rate-limit-register":
string => "REGISTER sip:",
string_algo => "bm",
dport => '5060',
proto => 'udp',
recent => 'set',
rname => 'VOIPREGISTER',
rsource => 'true';
}
firewall { "052 asterisk-drop-rate-limit-register":
string => "REGISTER sip:",
string_algo => "bm",
dport => '5060',
proto => 'udp',
action => 'drop',
recent => 'update',
rseconds => '600',
rhitcount => '5',
rname => 'VOIPREGISTER',
rsource => true,
rttl => true;
}
The file part works, but the firewall part does not.
In a master setup, you need to install the modules on your Puppet master, and they need to be somewhere in your modulepath. You can either place them in the modules directory within your $codedir (normally /etc/puppetlabs/code/modules) or in your directory environment's modules directory (likely /etc/puppetlabs/code/environments/production/modules in your case, since the site.pp you cited is there). If you defined additional module paths in your environment.conf, you can also place the modules there.
You can install/deploy them with a variety of methods, such as librarian-puppet, r10k, or Code Manager (in Puppet Enterprise). However, the easiest method for you would be puppet module install puppetlabs-firewall on the master. Your Puppet catalog will then find the firewall class during compilation.
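For example, on the master (the --version pin is optional and simply matches the release shown by your node's module list):
sudo /opt/puppetlabs/bin/puppet module install puppetlabs-firewall --version 1.9.0
/opt/puppetlabs/bin/puppet module list    # confirm the module now appears in the master's modulepath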
On a side note, that:
resources { 'firewall':
purge => true,
}
will purge any firewall rules on the system that are not managed by Puppet (based on what the module's firewall resource type knows how to manage). This is nice for eliminating local changes that people make, but it can also have interesting side effects, so be careful.
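For instance, the puppetlabs-firewall README recommends a pre/post class pattern so that purging never strips the baseline rules you still need; the sketch below is trimmed from that pattern, my_fw is a placeholder module name, and the baseline rules shown (established traffic, SSH) are only examples:
class my_fw::pre {
  Firewall { require => undef, }   # break the global require for these baseline rules
  firewall { '000 accept related established':
    proto  => 'all',
    state  => ['RELATED', 'ESTABLISHED'],
    action => 'accept',
  }
  firewall { '001 accept ssh':
    dport  => 22,
    proto  => 'tcp',
    action => 'accept',
  }
}
class my_fw::post {
  firewall { '999 drop all':
    proto  => 'all',
    action => 'drop',
    before => undef,   # break the global before so this rule can come last
  }
}
# in site.pp, include both classes and make every other firewall rule land between them:
include my_fw::pre
include my_fw::post
Firewall {
  require => Class['my_fw::pre'],
  before  => Class['my_fw::post'],
}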
Question regarding SAML configuration.
I'm currently running GitLab 9.1 CE on CentOS 7. I have an Apache instance on the front end acting as a reverse proxy to GitLab, handling HTTP(S).
My gitlab.rb has the following configured:
external_url 'http://external.apache.server/gitlab/'
gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_allow_single_sign_on'] = ['saml']
gitlab_rails['omniauth_auto_sign_in_with_provider'] = 'saml'
gitlab_rails['omniauth_block_auto_created_users'] = false
# gitlab_rails['omniauth_auto_link_ldap_user'] = false
gitlab_rails['omniauth_auto_link_saml_user'] = true
# gitlab_rails['omniauth_external_providers'] = ['twitter', 'google_oauth2']
# gitlab_rails['omniauth_providers'] = [
# {
# "name" => "google_oauth2",
# "app_id" => "YOUR APP ID",
# "app_secret" => "YOUR APP SECRET",
# "args" => { "access_type" => "offline", "approval_prompt" => "" }
# }
# ]
In order to set up SAML, my provider is asking for the information returned from http://external.apache.server/gitlab/users/auth/saml/metadata, which returns a 404.
The SAML documentation mentions that GitLab needs to be configured for SSL; I'm not sure if this is why the URL mentioned above returns a 404.
The problem with enabling SSL is that my external URL already provides it, and if I use it as is (https://external.apache.server), then GitLab looks for a key/cert for that domain on the box, which doesn't seem correct. I don't want to change the external URL, as it should be fronted by Apache. I'm a bit confused about what the proper configuration should be.
Thanks
Is there any way with Puppet to create a "Directory" block in a vhost and put an "Include" inside it?
Like this:
<Directory "/var/www">
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Require all granted
Include /etc/apache2/myconf.d/htpasswd.conf
</Directory>
I did it with "custom_fragment", but I would like to do it with "additional_includes"; however, I couldn't get "additional_includes" to work inside the "directories" parameter.
Is there another way?
Thanks.
I assume you are using Puppet Enterprise or the puppetlabs-apache module.
It indeed has no native support for what you are trying. custom_fragment is actually a very good choice here.
If you really want to add the include through a dedicated hash key, you can modify the module and open a pull request. You will basically have to add a section like the existing ones to the template, plus some brief documentation. The maintainers love pull requests ;-)
Looks like you're looking for an array?
If you are using the puppetlabs-apache module, you can use "additional_includes".
additional_includes
Specifies paths to additional static, vhost-specific Apache configuration files. Useful for implementing a unique, custom configuration not supported by this module. Can be an array. Defaults to '[]'.
https://forge.puppetlabs.com/puppetlabs/apache#parameter-directories-for-apachevhost
apache::vhost { 'myvhost.whatever.com':
  port        => 8080,
  docroot     => '/var/www/folder',
  directories => [
    { 'path'                => '/var/www/folder',
      'options'             => 'None',
      'allow_override'      => 'None',
      'order'               => 'Allow,Deny',
      'allow'               => 'from All',
      'additional_includes' => ['/etc/apache2/myconf.d/htpasswd.conf', 'other settings'],
    },
  ],
}
Here is a snippet that works for me:
class {'apache':
default_vhost => false,
}
apache::vhost {'mydefault':
port => 80,
docroot => '/var/www/html',
directories => [
{
'path' => '/var/www/html',
'provider' => 'files',
},
{
'path' => '/media/my_builds',
'options' => 'Indexes FollowSymLinks MultiViews',
'allowoverride' => 'None',
'require' => 'all granted',
'additional_includes' => ['what Randy Black said'],
},
],
aliases => [
{
alias => '/my_builds',
path => '/media/my_builds',
},
],
}