Sourcing Puppet files from outside of modules

I'm installing a package from a module (Nginx in this specific case) and would like to include a configuration file from outside of the module, i.e. from a top-level files directory parallel to the top-level manifests directory. I don't see any way to source the file, though, without either including it in a module or, in my current Vagrant environment, referring to the absolute local path.
Does Puppet allow for sourcing files from outside of modules as described in the documentation?

If I understand your question correctly, you can.
In your module, code as simple as this:
file { '/path/to/file':
  ensure => present,
  source => [
    "puppet:///files/${fqdn}/path/to/file",
    "puppet:///files/${hostgroup}/path/to/file",
    "puppet:///files/${domain}/path/to/file",
    "puppet:///files/global/path/to/file",
  ],
}
will do the job. The /path/to/file will be sourced from a file located in the "files" Puppet share (in the example above, it searches 4 different locations and uses the first one that exists).
Update: maybe you're talking about a directory to store files which is not shared by the Puppet fileserver (see http://docs.puppetlabs.com/guides/file_serving.html), and in that case you can't, I think, Vagrant or not. But you can add the directory to your Puppet fileserver configuration; I think that's the best (and maybe only) way to do it.
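For reference, a minimal sketch of what such a fileserver mount might look like in fileserver.conf (the file location and mount path here are assumptions; adjust them to your installation):
# /etc/puppet/fileserver.conf (location may vary)
[files]
  path /etc/puppet/files
  allow *
With a mount like this in place, puppet:///files/... URLs such as the ones above resolve against /etc/puppet/files.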

If you have a number of Vagrant VMs, you can simply store files within your Vagrant project directory (the one containing your Vagrantfile).
This directory is usually available to all VMs as /vagrant within the VM on creation.
If you want other directories on your computer to be available to your VMs, just add the following to your Vagrantfile:
# see http://docs.vagrantup.com/v1/docs/config/vm/share_folder.html
config.vm.share_folder "v-packages", "/vagrant_packages", "../../dpkg"
Then, to use the files within Puppet, you can simply treat them as files local to the VM:
# rough example, but basically use source => 'file:///vagrant/foo/bar'
file { '/opt/cassandra':
  ensure  => directory,
  replace => true,
  purge   => true,
  recurse => true,
  source  => 'file:///vagrant/conf/dist/apache-cassandra-1.2.0',
}
This is probably only wise to do if you are only using local Puppet manifests/modules.

Probably too late to help bennylope, but for others who happen across this question, as I did before figuring it out for myself ...
Include stuff like this in your Vagrantfile ...
GUEST_PROVISIONER_CONFDIR = "/example/destination/path"
HOST_PROVISIONER_CONFDIR = "/example/source/path"
config.vm.synced_folder HOST_PROVISIONER_CONFDIR, GUEST_PROVISIONER_CONFDIR
puppet.options = "--fileserverconfig='#{GUEST_PROVISIONER_CONFDIR}/fileserver.conf'"
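For context, those pieces sit inside the usual Vagrantfile structure, something like the sketch below (the provisioner details beyond puppet.options are assumptions based on standard Vagrant usage):
GUEST_PROVISIONER_CONFDIR = "/example/destination/path"
HOST_PROVISIONER_CONFDIR = "/example/source/path"

Vagrant.configure("2") do |config|
  # Make the host's config directory visible inside the guest.
  config.vm.synced_folder HOST_PROVISIONER_CONFDIR, GUEST_PROVISIONER_CONFDIR

  config.vm.provision :puppet do |puppet|
    puppet.manifests_path = "manifests"
    # Point Puppet at the synced fileserver.conf.
    puppet.options = "--fileserverconfig='#{GUEST_PROVISIONER_CONFDIR}/fileserver.conf'"
  end
end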
Then make sure /example/source/path contains the referenced fileserver.conf file. It should look something like ...
[foo]
  path /example/destination/path
  allow *
Now, assuming example-file.txt exists in /example/source/path, the following will work in your manifests:
source => "puppet:///foo/example-file.txt",
See:
Puppet configuration reference entry for fileserverconfig
Serving Files From Custom Mount Points

Related

How to include multiple values under a section in inifile

I am trying to create a puppet manifest using inifile. This would be for a configuration file where I need to have the following format.
[safe]
directory = /home/foo
directory = /home/test
directory = /home/something
I know that there is a way to use directory1 and directory2, but I was wondering if there is a way to do it without changing the key name, since Git needs that specific attribute. This implementation is meant for a Puppet manifest.
Also, I was thinking of the puppetlabs/inifile module, but another option to achieve this would be great too.
Thanks for the help in advance.
So far, I have an implementation like:
ini_setting { 'procedure cache size':
  ensure      => present,
  path        => '/var/lib/something/test.config',
  section     => 'safe',
  setting     => 'directory',
  value       => '/home/foo',
  indent_char => "\t",
}
This is for each directory. The purpose of this implementation is to address the new Git configuration for safe.directory in the recent update. My understanding is that for multiple directories it adds a new line for each value, as directory = <directory>; I don't believe it likes directories separated by commas.
First I thought about file_line, but this is not idempotent for multi-line settings (you get problems when you run it again). You can try the following.
Sample Puppet code, dir.pp:
$safe_directories = "directory = /home/foo
directory = /home/test
directory = /home/something"

notice("Testing\n${safe_directories}")

file { '/tmp/result.ini':
  ensure  => present,
  content => template('/tmp/layout.erb'),
}

notice("Check /tmp/result.ini")
Sample template, /tmp/layout.erb:
[unsafe]
directory=/unsafe
[safe]
<%= @safe_directories %>
otherfield=secure
[header3]
nothing = here
Now run the command from the command line:
puppet apply dir.pp
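If everything lines up, /tmp/result.ini should come out looking something like this, with one directory line per entry under [safe]:
[unsafe]
directory=/unsafe
[safe]
directory = /home/foo
directory = /home/test
directory = /home/something
otherfield=secure
[header3]
nothing = here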

how to avoid mixture of \ and / in file paths when joining paths in Docker containerized Python code

As far as I'm aware, I'm using best practices to define paths (using raw strings) and to join them (using os.path.join()), e.g.
import os
fdir = r'C:\Code\...\samples'
fpath = os.path.join(fdir, 'fname.ext')
and doing so has not caused me any problems when running my code within a Python or command shell. If I print fpath to the console I get consistent use of backslashes in the path:
C:\Code\...\samples\fname.ext
But when I run a Docker containerized version of the code and run the image I get the error:
FileNotFoundError: [Errno 2] No such file or directory:
'C:\Code\...\samples/fname.ext'
I don't understand why os.path.join() has used a / to join fdir and fname.ext when the rest of the path uses \. It doesn't do this when I run the code outside of the container.
I have tried using os.path.normpath():
fpath = os.path.join(fdir, 'fname.ext')
fpath = os.path.normpath(fpath)
as discussed here, and os.sep.join():
fpath = os.sep.join([fdir, 'fname.ext'])
as covered here, and Path().joinpath():
from pathlib import Path
fpath = Path(fdir).joinpath('fname.ext')
as well as Path() / 'path_to_add':
fpath = Path(fdir) / 'fname.ext'
as discussed here, but in every case I end up with the same result as with os.path.join().
Can someone please help me to understand what is going on and how to create consistent paths that will work whether I run the code in Python in a Windows environment, or in a Docker container?
Update Nov. 16:
In trying to keep my question brief I think I've left out details that are crucial. Apologies to those who have kindly taken the time to offer suggestions based on my incomplete description of the problem.
My code needs to import/export files from/to directories that are defined within a user-specified configuration file.
So the configuration file has a section of code where the user defines variables and paths, e.g.
samplesDir = r"path-to-samples-directory"
The variables are stored in a dictionary of dictionaries, which is saved as a .json file.
At the start of the code the user defines the key that selects the dictionary of interest, so that at the various points in my code where a file needs to be imported/exported, the paths are at hand.
So back to my example, samplesDir is stored in the configuration dictionary, cfgDict, so all I need to do is append the file name:
sampleFpath = os.path.join(samplesDir, sampleFname)
and sampleFname is determined based on other variables.
Because of the dynamic nature of the variables (including directory paths and file paths), I think this rules out the use of static paths defined in a .yml with Docker Compose.
Update Nov. 18:
It may help to include a few more details.
The src directory contains the source code, the main app.py script for command-line use, the Dockerfile, etc.
The configs folder contains JSON files that include variables and paths to directories and files. The user can create configuration files either by copying an existing one and modifying the entries, or by generating them with config.py.
Within config.py I have pre-set variables and paths, so that the directory paths to the configuration files (configs), sample files (sample_DROs) and others (e.g. fiducials) are all within src.
I don't anticipate any reason why the user would want to store the config files anywhere else, nor do I expect them to want to use different sample files (or move them elsewhere). However, they will undoubtedly create their own fiducials and may decide not to store them in the fiducials directory (i.e. somewhere not within the src directory).
Likewise I have pre-set the download directory (based on the parameters stored within the configuration files, files are fetched from a server and downloaded) to be the default Downloads directory:
rootDownloadDir = os.path.join(Path.home(), "Downloads", "xnat_downloads")
Those files are later imported, processed, and the outputs are (by default) exported into sub-directories within rootDownloadDir.
Within the Dockerfile I set the working directory of the container to that of the source code and copy all of the contents of src (with the exception of some directories listed in .dockerignore):
WORKDIR C:/Code/WP1.3_multiple_modalities/src
...
COPY . .
so that the structure of the container mimics that of WORKDIR.
Hence I have allowed for flexibility in import/export directories, and they are by default a combination of paths within and outside of the src directory. And so the code executed within the container will need to access files both within and outside of src.
That said, I don't know what rootDownloadDir will look like when os.path.join(Path.home(), "Downloads", "xnat_downloads") is run within the container.
This has got me thinking: is it bad practice to set the download directory outside of src?
Returning to the original error:
the sample file is present in the container.
From the actual behavior I can suppose that the container is based on a Unix-like image; the path separator is / on such systems.
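For what it's worth, you can observe the separator behavior without any container at all by using pathlib's pure paths (a minimal illustration; the paths are placeholders):
from pathlib import PurePosixPath, PureWindowsPath

# Joining uses the rules of the path flavor in play: Windows paths join
# with '\', POSIX paths join with '/'. os.path.join simply follows the
# flavor of the OS the interpreter is running on.
print(PureWindowsPath(r'C:\Code\samples', 'fname.ext'))  # C:\Code\samples\fname.ext
print(PurePosixPath('/samples', 'fname.ext'))            # /samples/fname.ext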
To build an environment-independent path which works both inside and outside of the container, you need two things:
Mount a host folder to a directory in the container.
Use an environment variable inside and outside the container.
I can show an example of how this is achievable with the docker-compose tool and its configuration file, docker-compose.yml:
# docker-compose.yml file
version: '3'
services:
  <service_name>:  # your service name here
    image: <image_name>  # name of the image your container is built on
    environment:
      - SAMPLES_PATH=/samples
    volumes:
      - C:\Code\somepath\samples:/samples
In your Python code you can then use the following structure:
import os

# Use the env var when it is set (inside the container); otherwise fall
# back to the local Windows path.
fdir = os.getenv('SAMPLES_PATH', r'C:\Code\...\samples')
fpath = os.path.join(fdir, 'fname.ext')
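With this setup the same script works on both sides: inside the container, os.getenv finds SAMPLES_PATH and uses the POSIX mount /samples; outside it, the Windows default applies, and os.path.join produces the right separator in each environment.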

puppet delete a directory and replace it with a link

I am working on a Puppet manifest that configures a router in the equipment that I support. The router runs pretty much plain vanilla Debian 8 or 9.
The problem is in the way the SSD on the router is partitioned. I am not able to change the partitioning, so I have to work around the fact that the root file system is small. I have found a solution that I am trying to implement in Puppet, but my first attempt doesn't feel right to me, so I thought I would ask the community.
I have been and am reading the Puppet docs. Unfortunately I don't have my router at hand to play with today, so I am unable to test my current solution.
My issue is that, per df -H, the root file system is at 95% capacity and Puppet is failing, complaining about not enough space. Because of quirky decisions made a long time ago by others, the /opt file system is 5 times the size of / and is at 10% usage.
So my solution, tested manually, is to move /var/cache/apt/archives/ to /opt/apt-archives and then create a symlink using:
ln -s /opt/apt-archives /var/cache/apt/archives
That works and allows the puppet run to finish without errors.
My challenge is to implement this operation in a Puppet class:
class bh::profiles::move_files {

  $source_dir = '/var/cache/apt/archives'
  $target_dir = '/opt/apt-cache'

  file { $target_dir:
    ensure  => 'directory',
    source  => "file://${source_dir}",
    recurse => true,
    before  => File[$source_dir],
  }

  file { $source_dir:
    ensure  => 'absent',
    purge   => true,
    recurse => true,
    force   => true,
    ensure  => link,
    target  => "file://${target_dir}",
  }
}
It just doesn't feel right to have ensure repeated in one file resource. And based on what I understand of creating links in Puppet, I would need the same name for the file resource that deletes the archives directory as for the one that creates the link.
What am I missing?
Use exec:
exec { 'Link /var/cache/apt/archives':
  command  => 'mv /var/cache/apt/archives /opt/apt-archives && ln -s /opt/apt-archives /var/cache/apt/archives',
  path     => '/bin:/usr/bin',
  provider => shell,
  unless   => 'test -L /var/cache/apt/archives',
}
Note that Puppet was not really designed to solve automation problems like this one, although with exec it is possible to do most things anyway.
Your solution sounds like a work-around, and it is therefore totally OK to implement it using exec. I would just make sure you add some comments explaining why you had to do something like this.

Puppet creates a broken symlink

For example, I have a symlink /etc/foo/folder11/some/link.txt which points to etc/foo/folder12/some/file.txt.
And in Puppet I have the following:
ensure_resource('file', '/etc/bar/link.txt', {
  owner  => $someUser,
  mode   => '0444',
  source => '/etc/foo/folder11/some/link.txt',
})
After the Puppet run it creates a broken symlink /etc/bar/link.txt which points to ../../folder12/some/file.txt.
Why does it create such a strange symlink? And how can I force Puppet to create /etc/bar/link.txt as a symlink pointing to the same file that /etc/foo/folder11/some/link.txt points to?
Note that I don't use ensure => link because sometimes /etc/foo/folder11/some/link.txt may be a regular file, in which case /etc/bar/link.txt should be a copy of that file.
As it turned out, the problem was in /etc/foo/folder11/some/link.txt, which was a relative symlink. I changed it to be absolute and now it works fine.
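A related knob, in case it helps someone else: the file resource's links parameter controls how symlinks are handled during file actions. With follow, Puppet copies the content of the link's target; with manage (the default), it recreates the link itself. A hedged sketch of the same resource with it set:
# Sketch only: 'links => follow' asks Puppet to copy the content the
# symlink points at instead of recreating the link.
ensure_resource('file', '/etc/bar/link.txt', {
  owner  => $someUser,
  mode   => '0444',
  links  => follow,
  source => '/etc/foo/folder11/some/link.txt',
})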

How can I refer to the current puppet module's files directory?

This is most likely an anti-pattern, but I'd like to know nonetheless:
I need to extract a tgz which is in Puppet and then move the contents somewhere else. Is it possible, in a Puppet exec { }, to refer to the file where it is stored on disk?
For example, Puppet is available at /usr/local/puppet, and the tgz file I need is in /usr/local/puppet/modules/components/files/file.tgz. In the exec { }, can I do something like command => "/bin/cp $modules/components/files/file.tgz /somewhere_else"? Or do I have to declare a file { source => "..." } block first?
Both approaches are correct if you run Puppet with puppet apply.
In a master-agent architecture, using exec to copy the file probably will not work at all.
In my opinion, using a file resource is more "puppet-like", but it has one significant drawback.
You can use:
file { '/some_path/somewhere_else':
  source => '/usr/local/puppet/modules/components/files/file.tgz',
}
This will create the file /some_path/somewhere_else with the same content as /usr/local/puppet/modules/components/files/file.tgz (it makes a copy of the original file).
But if /some_path doesn't exist in the file system, the resource will fail.
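If that is a concern, you can declare the parent directory as its own resource so it exists before the file is copied (a minimal sketch reusing the example paths above):
file { '/some_path':
  ensure => directory,
}

file { '/some_path/somewhere_else':
  source  => '/usr/local/puppet/modules/components/files/file.tgz',
  require => File['/some_path'],
}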
If you are working with tgz files, you can also consider using one of the archive Puppet modules, e.g. gini.
UPDATE:
I can propose two approaches:
Use the Puppet file server to serve the files (or define a module path, for old Puppet versions). Then just use it, e.g.:
file { '/some_path/somewhere_else':
  source => 'puppet:///modules/components/file.tgz',
}
Define a custom Facter fact that points to a path in your filesystem containing the required files, e.g.:
file { '/some_path/somewhere_else':
  source => "${::my_custom_fact}/some_path/file.tgz",
}
I do not think that any of the core facts would be useful for you here.
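For the second approach, a custom fact can be quite small. A sketch, assuming the fact name my_custom_fact used above and a hypothetical base path:
# modules/<your_module>/lib/facter/my_custom_fact.rb (hypothetical location)
Facter.add(:my_custom_fact) do
  setcode do
    # Base path containing the required files; adjust for your filesystem.
    '/usr/local/puppet/modules'
  end
end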
