Copying files among CFEngine nodes - Linux

I am trying out a few features of CFEngine 3.5 and am stuck on a very basic issue.
I want to copy certain files kept on the CFEngine policy hub to various CFEngine clients. These files are spread across various locations, and CFEngine should copy them to the target machines at the same locations they occupy on the master server.
How can I do this?

If you want to copy certain files from the hub onto the same location on the clients, you can do something like this:
bundle agent copy_from_hub
{
vars:
  "files" slist => { "/some/file", "/other/file", "/one/more/file" };

files:
  "$(files)"
    copy_from => secure_cp("$(files)", "$(sys.policy_hub)");
}
This will loop over the files, copying each one in turn. Make sure you include the appropriate standard library file to get the definition of secure_cp(), something like this:
body common control
{
  inputs => { "lib/3.5/files.cf" };
  bundlesequence => { ... };
}
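For reference, the secure_cp() body provided by lib/3.5/files.cf looks approximately like this (paraphrased from the standard library; check your local copy for the exact definition):

```
body copy_from secure_cp(from, server)
{
  source  => "$(from)";
  servers => { "$(server)" };
  compare => "digest";
  encrypt => "true";
  verify  => "true";
}
```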

https://cfengine.com/docs/3.5/examples-policy-copy-single-files.html
This might help.
Thanks & Regards,
Alok Thaker

Related

Puppet - How to write yaml files based on Role/Profile method

I've added our infrastructure setup to Puppet, using the roles and profiles method. Each profile resides inside a group, based on its nature. For example, the Chronyd setup and the message of the day are in the "base" group, while nginx-related configuration is in the "app" group. On the roles side, each profile is likewise added to the corresponding group. For example, for memcached we have the following:
class role::prod::memcache inherits role::base::debian {
include profile::app::memcache
}
The profile::app::memcache class has been set up like this:
class profile::app::memcache {
service { 'memcached':
ensure => running,
enable => true,
hasrestart => true,
hasstatus => true,
}
}
and for role::base::debian I have :
class role::base::debian {
include profile::base::motd
include profile::base::chrony
}
The above structure has proved flexible enough for our infrastructure; adding services and creating new roles could not be easier. But now I face a new problem. I've been trying to separate data from logic by writing some YAML files to keep the data in, using Hiera version 5. I've been searching the internet for a couple of days, but I cannot work out how to write my Hiera files for the structure I have. I tried adding profile::base::motd to common.yaml and did a puppet lookup, and it works fine, but I could not get chrony working from common.yaml. puppet lookup returns nothing with the following common.yaml contents:
---
profile::base::motd::content: 'This server access is restricted to authorized users only. All activities on this system are logged. Unauthorized access will be liable to prosecution.'
profile::base::chrony::servers: 'ntp.centos.org'
profile::base::chrony::service_enable: 'true'
profile::base::chrony::service_ensure: 'running'
The motd lookup works fine, but for the rest, no luck: puppet lookup profile::base::chrony::servers returns no output. I don't know what I'm missing here. I would really appreciate the community's help on this one.
Also, using hiera, is the following enough code for a service puppet file?
class profile::base::motd {
class { 'motd':
}
}
PS : I know I can add yaml files inside modules to keep the data, but I want my .yaml files to reside in one place (e.g. $PUPPET_HOME/environment/production/data) so I can manage the code with git.
The issue was that in the init.pp file inside the Puppet module itself, the variable $content was assigned a value. Removing the value fixed the problem.
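For illustration only (class and parameter names are hypothetical): an explicit value supplied inside the module takes precedence over Hiera data, whereas a plain class parameter is resolved by automatic parameter lookup, so Hiera data wins over the default:

```puppet
# Broken: the explicit value always wins, so the Hiera key
# chrony::servers is never consulted.
class { 'chrony':
  servers => 'ntp.example.org',
}

# Working: with no explicit value, automatic parameter lookup
# resolves chrony::servers from the data hierarchy, falling back
# to the default only when Hiera has no value for it.
class chrony (
  String $servers = 'pool.ntp.org',
) {
  # ...
}
```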

SFTP Node.js - Is it possible to list files using wildcards?

I'm trying to list all files on my SFTP server below a top-level folder in Node.js, using the npm module ssh2-sftp-client. However, I cannot find any documentation or previous posts that discuss whether using wildcards in the file paths is possible. The file paths look like this:
../mnt/volume_lon1_01/currency/curve/date/filename.csv
There can be many different currencies, curves and dates - hundreds, in fact - and I need a means of listing every file name at the final level of the file structure.
I thought a sensible approach would be to use wildcards:
../mnt/volume_lon1_01/*/*/*/*.csv
But this doesn't seem to work, and I can't find anything to suggest it would. Can anyone advise on the best way to list every file from SFTP in Node.js?
Many thanks,
George
Mmm, I don't think this is possible in ssh2, but what you can do is list the folders recursively and visit each one. Pseudo-code:
Connect SFTP
List Folders -> Save this to a dictionary
For each folder in Folders
  List Folders -> Save this to a dictionary
At the end of it you'll have a dictionary object with the full path of the remote server, like so
{
  "sftp": {
    "subfolders": {
      "0": {
        "name": "/rootfolder",
        "subfolders": {
          "0": {
            "name": "/subfolder1",
            "subfolders": {
              ...
            }
          }
        }
      }
    }
  }
}
From that you can easily access whatever you need by walking the nested "subfolders" entries, e.g.
tree.sftp.subfolders["0"].subfolders["0"]... etc.
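A minimal sketch of that recursive listing in Node.js. It assumes a client object with the list(path) method that ssh2-sftp-client exposes, resolving to entries with name and type ('d' for directories); the mock client and the currency/curve/date folder names below are hypothetical stand-ins for a real connection:

```javascript
// Recursively collect the full paths of all regular files under `dir`,
// given any client whose list(path) resolves to [{ name, type }]
// (ssh2-sftp-client's list() has this shape; type 'd' is a directory).
async function walkFiles(client, dir) {
  const files = [];
  for (const entry of await client.list(dir)) {
    const full = `${dir}/${entry.name}`;
    if (entry.type === 'd') {
      files.push(...(await walkFiles(client, full))); // descend into subfolder
    } else {
      files.push(full);
    }
  }
  return files;
}

// In-memory stand-in for an SFTP server (hypothetical folder names).
const mockTree = {
  '/mnt/volume_lon1_01': [{ name: 'gbp', type: 'd' }],
  '/mnt/volume_lon1_01/gbp': [{ name: 'libor', type: 'd' }],
  '/mnt/volume_lon1_01/gbp/libor': [{ name: '2019-01-01', type: 'd' }],
  '/mnt/volume_lon1_01/gbp/libor/2019-01-01': [{ name: 'rates.csv', type: '-' }],
};
const mockClient = { list: async (path) => mockTree[path] || [] };

walkFiles(mockClient, '/mnt/volume_lon1_01')
  .then((files) => console.log(files));
// -> ['/mnt/volume_lon1_01/gbp/libor/2019-01-01/rates.csv']
```

With a real connection, replace mockClient with the connected ssh2-sftp-client instance after awaiting sftp.connect(config).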

syslog-ng multiple destinations

We are using syslog-ng to send an access-log file to remote servers via TCP, and I already know that multiple destinations can be configured to do this job, like so:
source s_xxx { file("/xxx/access.log"); };
destination d_one {tcp("1.2.3.4", port(1234));};
destination d_two {tcp("1.2.3.5", port(1234));};
log {source(s_xxx); destination(d_one); destination(d_two);};
What I am trying to figure out is how to distribute my content across these two destinations (e.g. round-robin). In other words, each message should be sent to either d_one or d_two, not to both of them.
Thanks very much.
My scenario is very similar: I have a syslog-ng collector that forwards messages to an analytics application. It became overloaded and I needed to split the load. I have no traffic attribute on which to filter, and I did not want to maintain a list of message types; I simply wanted messages to round-robin one by one, as you are seeking. I decided to use mod (%) to achieve this.
Syslog-ng OSE v3.7.2:
destination d_net_qr1 { network("ip1"); };
destination d_net_qr2 { network("ip2"); };
filter f_qr1 { "$(% ${RCPTID} 2)" eq "0" };
filter f_qr2 { "$(% ${RCPTID} 2)" eq "1" };
log { source(s_net); filter(f_qr1); destination(d_net_qr1); };
log { source(s_net); filter(f_qr2); destination(d_net_qr2); };
syslog-ng Open Source Edition does not currently have a straightforward way to send messages in a round-robin fashion. If you want to do this for load balancing, you can probably come up with a filter that switches between the destinations every few seconds, using the $SEC macro and comparing macro values; see http://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.6-guides/en/syslog-ng-ose-v3.6-guide-admin/html/filters-comparing.html
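A rough sketch of that $SEC-based approach, reusing the source and destination names from the question (untested, and it assumes a syslog-ng version where the $(%) numeric template function is available): messages logged in even-numbered seconds go to one destination, the rest to the other:

```
filter f_even { "$(% ${SEC} 2)" eq "0" };
filter f_odd  { "$(% ${SEC} 2)" eq "1" };
log { source(s_xxx); filter(f_even); destination(d_one); };
log { source(s_xxx); filter(f_odd);  destination(d_two); };
```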
HTH,
Regards,
Robert

Subscribe to new file(s) in directory in Puppet

I know I can sync a directory in Puppet:
file { 'sqls-store':
path => '/some/dir/',
ensure => directory,
source => "puppet:///modules/m1/db-updates",
recurse => true,
purge => true
}
So when new files are added, they are copied to '/some/dir/'. However, what I need is to perform some action for every new file. If I "subscribe" to such a resource, I don't get an array of the new files.
Currently I have created an external shell script which finds new files in that directory and executes an action for each of them.
Naturally, I would prefer not to depend on an external script. Is there a way to do that with Puppet?
Thanks!
The use case is applying changes to a DB schema that are made from time to time and should be applied to all clients managed by Puppet. In the end it's mysql [args] < update.sql for every such file.
I'm not sure I would recommend having Puppet apply the DB changes for you.
For a small DB it may work, but for a real-world DB you want to be aware of when and how these kinds of changes get applied (ordering of the changes, occasional temporary disk space adjustments, DB downtime, taking backups before/after, reorgs, ...), and most of the time your app should be adapted at the same time. You want more orchestration (and Puppet isn't good at orchestration).
Why not use a tool dedicated to this task, like
Liquibase
Rails DB migrations and Capistrano
...
A poor man's solution would be to use the vcsrepo module and an exec that lists the files modified since the last apply.
I agree with mestachs: having Puppet deal with DB updates is not a great idea.
You can try some kind of define:
define mydangerousdbupdate($filename) {
  # $name is reserved inside a define, so the resource title ($title)
  # identifies the update instead of a separate parameter
  file { "/some/dir/${filename}":
    ensure => present,
    source => "puppet:///modules/m1/db-updates/${filename}",
  }
  exec { "apply ${title}":
    command  => "/usr/bin/mysql [args] < /some/dir/${filename} > /some/dir/${filename}.log",
    provider => shell, # the input/output redirection needs a shell
    creates  => "/some/dir/${filename}.log",
    require  => File["/some/dir/${filename}"],
  }
}
And then you can instantiate it with the different patches, in the preferred order:
mydangerousdbupdate { "first":
  filename => "first.sql",
}
-> mydangerousdbupdate { "second":
  filename => "second.sql",
}

Puppet Servers of same type

I have a best-practice question about Puppet when working in server/agent mode.
I have created a working solution using a manifest/sites.pp configuration that identifies the configuration to apply by the hostname of the agent.
For example:
node 'puppetagent.somedomain.com' {
include my_module
notify { 'agent configuration applied':
}
}
This works great for configuring a single node, but what about a scenario in which I have multiple application servers, all with differing hostnames, all of which need the same configuration?
Adding multiple node entries, a comma-separated hostname list, or regular expressions doesn't feel like the 'right' way to do this.
Are there alternative ways? Can you define node 'types'? What does the community consider best practice for this?
Many thanks
If all the servers have the same configuration, inheritance or the Hiera hierarchy are the easiest ways to achieve this.
Once you need to maintain a larger set of systems, where certain nodes have types such as 'web server' or 'database server', the configurations will diverge and the single-inheritance model is no longer sufficient.
You can use composition in those places. Take a peek at this article for more details.
Regular expressions might not be so bad, but I suppose the current trend is to use hiera_include.
You can do something dirty like this :
$roles = { 'webserver' => [ 'server1', 'server2', 'server3' ]
, 'smtp' => [ 'gw1', 'gw2' ]
}
node default {
$roles . filter |$k,$v| { $hostname in $v }
. each |$k,$v| { hiera_include($k) }
}
I would suggest taking a look at the concept of "roles and profiles" here: http://www.craigdunn.org/2012/05/239/
You can have multiple nodes, all of which include the same configuration, via a "role" that includes one or more "profiles".
As for defining multiple nodes with the same configuration, or a "role" containing "profile(s)", I would suggest using hiera_include as #bartavelle mentioned, except using a common environment variable for identifying the nodes rather than regular expressions.
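A common shape for the hiera_include approach (the file layout and the custom "role" fact are hypothetical): every agent hits a single default node, and the classes it receives are driven entirely by data:

```puppet
# site.pp -- a single catch-all node; the class list comes from Hiera
node default {
  hiera_include('classes')
}

# hiera.yaml would then key a level of the hierarchy on the custom
# fact, e.g.  - "role/%{::role}"  above  - "common"
#
# data/role/appserver.yaml (hypothetical):
# ---
# classes:
#   - my_module
```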
