Restrict access to private files to logged-in users in Drupal - file-permissions

I have a custom form with a managed_file field for video upload in my custom Drupal 8 module. Once a video is uploaded, it is accessible to everyone. I want to restrict video access to logged-in users or specific user roles. I tried the "Private Files Download Permission" module, but it always returns Forbidden for everyone. I have set up the private file system path, and files are being uploaded to the private directory, but they are not accessible through the browser. This is the field from my custom form:
$form['activity']['videos'] = [
  '#type' => 'managed_file',
  '#upload_location' => 'private://activity/videos/',
  '#multiple' => TRUE,
  '#description' => t('Allowed extensions: mp4 avi'),
  '#title' => t('Upload Video'),
  '#upload_validators' => [
    'file_validate_extensions' => ['mp4 avi'],
  ],
  '#weight' => '3',
  '#ajax' => [
    'callback' => '::fix_ajax_callback',
  ],
  '#disabled' => !empty($activity),
];

Have you tried the following settings in www.drupal.org/project/private_files_download_permission?
Under "Enabled Users" and "Enabled Roles", choose who can download these files.

Related

Puppet: Passwords as plain text in Windows agent output and in catalog file

I encrypted the password using Hiera (eyaml):
dsc_xADUser { 'FirstUser':
  dsc_ensure                        => 'present',
  dsc_domainname                    => 'ad.contoso.com',
  dsc_username                      => 'tfl',
  dsc_userprincipalname             => 'tfl@ad.contoso.com',
  dsc_password                      => {
    'user'     => 'tfl@ad.contoso.com',
    'password' => Sensitive(lookup('password')),
  },
  dsc_passwordneverexpires          => true,
  dsc_domainadministratorcredential => {
    'user'     => 'Administrator@ad.contoso.com',
    'password' => Sensitive(lookup('password')),
  },
}
But on the node, when running puppet agent -t -v, the password is shown as plain text in the agent output and in the catalog JSON file.
I also tried node_encrypt(lookup('password')); then the catalog contains the encrypted content of my password (which is good), but Windows complains that the password doesn't meet the password complexity requirements (bad, because it is trying to set everything below as the password):
'password' = '-----BEGIN PKCS7-----
MIIMyQYJKoZIhvcNAQcDoIIMujCCDLYCAQAxggKdMIICmQIBADCBgjB9MXsweQYD
VQQDDHJQdXBwZXQgRW50ZXJwcmlzZSBDQSBnZW5lcmF0ZWQgb24gbXlwdXBwZXQt
eGwwZGJ5a212Z2xrYnl2eS5ldS13ZXN0LTEub3Bzd29ya3MtY20uaW8gYXQgKzIw
MTgtMTEtMDIgMTQ6MDQ6MDAgKzAwMDACAQUwCwYJKoZIhvcNAQEBBIICABkJDfGb
4CdHUntrVR1E......
hiera config:
---
version: 5
defaults:
  datadir: data
  data_hash: yaml_data
hierarchy:
  - name: "Eyaml hierarchy"
    lookup_key: eyaml_lookup_key # eyaml backend
    paths:
      - "nodes/%{trusted.certname}.yaml"
      - "windowspass.eyaml"
    options:
      pkcs7_private_key: "/etc/puppetlabs/puppet/keys/private_key.pkcs7.pem"
      pkcs7_public_key: "/etc/puppetlabs/puppet/keys/public_key.pkcs7.pem"
EDIT: I just found this; it seems it is an open issue and related to Windows only.
UPDATE: I managed to configure Puppet not to cache the catalog file on the Windows client (by adding catalog_cache_terminus="" to the Puppet config file on Windows), so I'll use this as a workaround. There seems to be no way to remove the passwords from the agent debug output.
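For reference, a sketch of that workaround in puppet.conf on the Windows agent; placing it under the [agent] section is my assumption, the setting name comes from the update above:

# C:\ProgramData\PuppetLabs\puppet\etc\puppet.conf
[agent]
# Disable local catalog caching so the catalog (with the plain-text password) is not written to disk.
catalog_cache_terminus = ""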

Chef: Modify existing resource from another cookbook

I have two cookbooks: elasticsearch and curator.
The elasticsearch cookbook installs and configures Elasticsearch. The following resource (from the elasticsearch cookbook) has to be modified from the curator cookbook:
elasticsearch_configure 'elasticsearch' do
  configuration({
    'http.port' => port,
    'cluster.name' => cluster_name,
    'node.name' => node_name,
    'bootstrap.memory_lock' => false,
    'discovery.zen.minimum_master_nodes' => 1,
    'xpack.monitoring.enabled' => true,
    'xpack.graph.enabled' => false,
    'xpack.watcher.enabled' => true
  })
end
I need to modify it from the curator cookbook and add a single line:
'path.repo' => (["/backups/s3_currently_dev", "/backups/s3_currently", "/backups/s3_daily", "/backups/s3_weekly", "/backups/s3_monthly"])
How can I do that?
I initially was going to point you to the chef-rewind gem, but that actually points to the edit_resource provider that is now built into Chef. A basic example of this:
# cookbook_a/recipes/default.rb
file 'example.txt' do
  content 'this is the initial content'
end

# cookbook_b/recipes/default.rb
edit_resource! :file, 'example.txt' do
  content 'modified content!'
end
If both of these are in the Chef run_list, the actual content within example.txt is that of the edited resource, modified content!.
Without fully testing your case, I'm assuming the provider can be utilized the same way, like so:
edit_resource! :elasticsearch_configure, 'elasticsearch' do
  configuration({
    'http.port' => port,
    'cluster.name' => cluster_name,
    'node.name' => node_name,
    'bootstrap.memory_lock' => false,
    'discovery.zen.minimum_master_nodes' => 1,
    'xpack.monitoring.enabled' => true,
    'xpack.graph.enabled' => false,
    'xpack.watcher.enabled' => true,
    'path.repo' => ["/backups/s3_currently_dev", "/backups/s3_currently", "/backups/s3_daily", "/backups/s3_weekly", "/backups/s3_monthly"]
  })
end
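An untested variation, if you would rather not restate the whole hash in the curator cookbook: inside the edit_resource! block the property getter returns the resource's current value, so you could merge in just the new key. This assumes the elasticsearch recipe runs earlier in the run_list, so the resource already exists when curator's recipe is compiled:

# curator/recipes/default.rb
edit_resource!(:elasticsearch_configure, 'elasticsearch') do
  # Append path.repo to whatever configuration the elasticsearch cookbook already set.
  configuration(configuration.merge(
    'path.repo' => ['/backups/s3_currently_dev', '/backups/s3_currently',
                    '/backups/s3_daily', '/backups/s3_weekly', '/backups/s3_monthly']
  ))
end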

Puppet couldn't retrieve information from source

My Puppet manifest looks like this:
$abrt_config = ['abrt.conf', 'abrt-action-save-package-data.conf']

file { $abrt_config:
  ensure => present,
  path   => "/etc/abrt/${abrt_config}",
  owner  => 'root',
  group  => 'root',
  mode   => '0644',
  source => "puppet:///modules/abrt/${abrt_config}",
}
My config files are located at the following paths inside the module:
/abrt/files/abrt.conf
/abrt/files/abrt-action-save-package-data.conf
I'm getting the following error when executing puppet on client nodes.
Error: /Stage[main]/Abrt/File[/etc/abrt/abrt-action-save-package-data.conf]: Could not evaluate: Could not retrieve information from environment development source(s) puppet:///modules/abrt//etc/abrt/abrt.conf/etc/abrt/abrt-action-save-package-data.conf
Error: /Stage[main]/Abrt/File[/etc/abrt/abrt.conf]: Could not evaluate: Could not retrieve information from environment development source(s) puppet:///modules/abrt//etc/abrt/abrt.conf/etc/abrt/abrt-action-save-package-data.conf
You cannot implicitly convert an array to a string in the source attribute like that and expect the desired behavior.
If you are using a non-obsolete version of Puppet, then you can use a lambda iterator to solve this problem in the following way:
['abrt.conf', 'abrt-action-save-package-data.conf'].each |$abrt_config| {
  file { $abrt_config:
    ensure => present,
    path   => "/etc/abrt/${abrt_config}",
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => "puppet:///modules/abrt/${abrt_config}",
  }
}
Check the documentation here for more details: https://docs.puppet.com/puppet/4.8/function.html#each
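If you are stuck on an older Puppet release without the each function, a defined type gives you the same per-file expansion. A sketch, where abrt::config_file is a hypothetical name:

# modules/abrt/manifests/config_file.pp (hypothetical defined type)
define abrt::config_file {
  file { "/etc/abrt/${name}":
    ensure => present,
    owner  => 'root',
    group  => 'root',
    mode   => '0644',
    source => "puppet:///modules/abrt/${name}",
  }
}

# Declaring it with an array of titles creates one file resource per config file.
abrt::config_file { ['abrt.conf', 'abrt-action-save-package-data.conf']: }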

Lighttpd Authentication

I have a Linux server on which I have installed lighttpd (1.4.28). I have set up authentication for multiple folders (13 of them) like this:
auth.debug = 2
auth.backend = "plain"
auth.backend.plain.userfile = "/home/.lighttpdpasswd"
auth.require = (
  "/test1" => (
    "method"  => "basic",
    "realm"   => "Password protected area",
    "require" => "user=test1"
  ),
  .
  .
  .
  "/test13" => (
    "method"  => "basic",
    "realm"   => "Password protected area",
    "require" => "user=test13"
  ),
)
And the lighttpdpasswd is like this:
test1:test1
test2:test2
test3:test3
test4:test4
test5:test5
test6:test6
test7:test7
test8:test8
test9:test9
test10:test10
test11:test11
test12:test12
test13:test13
Now, for folders 1 to 9, authentication works great; for 10, 11, ..., 13, access is refused even with correct credentials!
Is this a bug in lighttpd, or should I add some parameters?
lighttpd mod_auth does a simple prefix match as it walks the auth.require list. It does not look for a complete path match, just a prefix match.
A workaround is to place your longer paths before your shorter paths in the auth.require list, so that "/test10" through "/test13" come before "/test1".
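For example, a reordered list along these lines should work; the entries for /test2 through /test9 can stay where they are, since none of them is a prefix of another path:

auth.require = (
  "/test13" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test13" ),
  "/test12" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test12" ),
  "/test11" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test11" ),
  "/test10" => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test10" ),
  # "/test1" now comes after the "/test1x" entries, so it can no longer shadow them
  "/test1"  => ( "method" => "basic", "realm" => "Password protected area", "require" => "user=test1" ),
  # ... remaining entries /test2 .. /test9 unchanged ...
)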

Put an Include directive inside Directory in a vhost with puppet

Is there any way to create a "Directory" block in a vhost and put an "Include" inside it with Puppet?
Like this:
<Directory "/var/www">
  Options Indexes FollowSymLinks MultiViews
  AllowOverride None
  Require all granted
  Include /etc/apache2/myconf.d/htpasswd.conf
</Directory>
I did it with "custom_fragment", but I would like to do it with "additional_includes"; however, I couldn't get "additional_includes" to work inside the "directories" parameter.
Is there any other way?
Thanks.
I assume you are using Puppet Enterprise or the PLAM (the puppetlabs-apache module).
It indeed has no native support for what you are trying to do; custom_fragment is actually a very good choice here.
If you really want to add the include through a dedicated hash key, you can modify the module and open a pull request. You would basically have to add a section like the existing ones to the template, plus some brief documentation. The maintainers love pull requests ;-)
Looks like you're looking for an array?
If you are using the puppetlabs-apache module, you can use "additional_includes":
additional_includes
Specifies paths to additional static, vhost-specific Apache configuration files. Useful for implementing a unique, custom configuration not supported by this module. Can be an array. Defaults to '[]'.
https://forge.puppetlabs.com/puppetlabs/apache#parameter-directories-for-apachevhost
apache::vhost { 'myvhost.whatever.com':
  port        => 8080,
  docroot     => '/var/www/folder',
  directories => [
    {
      'path'                => '/var/www/folder',
      'options'             => 'None',
      'allow_override'      => 'None',
      'order'               => 'Allow,Deny',
      'allow'               => 'from All',
      'additional_includes' => ['/etc/apache2/myconf.d/htpasswd.conf', 'other settings'],
    },
  ],
}
Here is a snippet that works for me:
class { 'apache':
  default_vhost => false,
}

apache::vhost { 'mydefault':
  port        => 80,
  docroot     => '/var/www/html',
  directories => [
    {
      'path'     => '/var/www/html',
      'provider' => 'files',
    },
    {
      'path'                => '/media/my_builds',
      'options'             => 'Indexes FollowSymLinks MultiViews',
      'allowoverride'       => 'None',
      'require'             => 'all granted',
      'additional_includes' => ['what Randy Black said'],
    },
  ],
  aliases     => [
    {
      alias => '/my_builds',
      path  => '/media/my_builds',
    },
  ],
}
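For comparison with the block in the question, the second directories hash above should render roughly the following in the generated vhost file (the Include path is whatever you put in additional_includes; exact formatting depends on the module version):

<Directory "/media/my_builds">
  Options Indexes FollowSymLinks MultiViews
  AllowOverride None
  Require all granted
  Include /etc/apache2/myconf.d/htpasswd.conf
</Directory>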
