I know I can read the contents of a file using
$var = file("some_path)
In puppet, but when I try and pass in a path that is a puppet file server uri it can't seem to find it. Is it possible in puppet 4.2.2 to do
$var = file("puppet:///...")
No. The puppet:// style URLs are meaningful only to the source property of the file type.
file { '/etc/my_custom_config':
source => 'puppet:///modules/my_custom_module/etc/my_custom_config'
}
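If the underlying goal is to read the contents of a file shipped inside a module into a variable, the file() function does accept module-relative paths. A minimal sketch, assuming the file lives under the module's files directory:
$var = file('my_custom_module/etc/my_custom_config')
This would read modules/my_custom_module/files/etc/my_custom_config on the master at catalog compile time.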
How do I set file permission from list of file names from cat command?
For example, the command below returns 3 file names:
$ cat /tmp/test | grep file
/etc/systemd/file_1.log
/etc/systemd/file_2.log
/etc/systemd/file_3.log
How do I use Puppet to run the command, get the file names, and then loop over the 3 file names and set permissions accordingly?
The files are resources, and if you want to manage a resource you have to know it's there, so dynamically created log files are not easy. If you already know the file names, you can use something like this and pass an array into the file resource.
file { ['/etc/systemd/file_1.log',
'/etc/systemd/file_2.log',
'/etc/systemd/file_3.log'] :
ensure => 'file',
mode => '0644',
owner => 'root',
group => 'root',
}
Another method might be to use an exec:
exec { 'chmod 644 /etc/systemd/file_*.log':
path => ['/usr/bin', '/usr/sbin',],
}
But you really need something like an onlyif or unless, or this is going to execute every 30 minutes, and that breaks the idempotency rule we try to apply with Puppet code, where things only change if they need correcting. So you're going to need a command line that will test the permissions and return a boolean to the onlyif.
There are more details here https://puppet.com/docs/puppet/5.5/types/exec.html
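As a rough sketch of that idea (the find-based test below assumes the logs all sit directly in /etc/systemd and should be mode 0644, which is an assumption, not something from the question):

exec { 'fix systemd log permissions':
  command  => 'chmod 644 /etc/systemd/file_*.log',
  path     => ['/bin', '/usr/bin', '/usr/sbin'],
  # The shell provider is needed so the glob and the pipe are interpreted by a shell.
  provider => 'shell',
  # Only run when at least one matching file does not already have mode 0644.
  onlyif   => 'find /etc/systemd -maxdepth 1 -name "file_*.log" ! -perm 0644 | grep -q .',
}

With that onlyif in place the exec only fires when a file actually needs correcting, so repeated runs stay idempotent.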
An alternative (and the way I'd do it) would be to expose the contents of that file via an external fact, which passes the list of files to Puppet to use in the catalog compilation. An external fact can be a bash script, so I'd create a file called /etc/facter/facts.d/logfiles.sh (obviously I'd deploy this using Puppet).
#!/usr/bin/env bash
logfiles=($(grep file /tmp/test))
echo "logfiles=${logfiles[*]}"
Then in my Puppet code I'd have something like this (the external fact arrives as a space-separated string, so it needs splitting into an array before iterating):
split($logfiles, ' ').each |String $logfile| {
file { $logfile :
ensure => 'file',
mode => '0644',
owner => 'root',
group => 'root',
}
}
So when the Puppet run happens, the list of log files is returned to Puppet via the facts, and each file listed is defined as a resource with the correct permissions.
There are two main alternatives, but I observe first that your example is of the output from grep, not cat, and that the cat in that example is superfluous. Nevertheless, those details don't change the big picture -- substantially the same approaches are applicable for data output by any command.
It would be more idiomatic to write a custom fact that captures the filenames (as of the time of each catalog request), and use that information to create the appropriate File resources.
Custom facts are not that hard, but the full details are more than would be appropriate for an SO answer. Supposing that you have a fact $facts['systemd_logs'] whose value is an array of the absolute filenames, you can compactly express the whole group of wanted File resources like so:
file { $facts['systemd_logs']:
mode => '0644',
}
(or whatever mode it is that you want).
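For completeness, a minimal sketch of such a custom fact (the fact name systemd_logs and the file locations are just illustrative; a real fact would ship in a module's lib/facter directory):

# modules/<your_module>/lib/facter/systemd_logs.rb
Facter.add(:systemd_logs) do
  setcode do
    # Build the list the same way the question does: lines of /tmp/test that contain "file".
    if File.readable?('/tmp/test')
      File.readlines('/tmp/test').map(&:strip).grep(/file/)
    else
      []
    end
  end
end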
It would be quicker (and dirtier) to use an Exec resource to run an appropriate command:
exec { 'ensure correct file permissions':
command => 'chmod 0644 $(/bin/grep file /tmp/test)',
onlyif => '/bin/grep -q file /tmp/test',
provider => 'shell',
}
I want to sftp to another server and copy a file, changing the file name dynamically using a shell variable. I want to do this in a single line.
Example: I want to copy test.txt to the other server with the name my_test.txt.
sftp user@hostname:/home/pavan/ <<< 'put test.txt $Dynamic_test.txt'
With this, the file is copied to the destination server, but with the name $Dynamic_test.txt rather than my_test.txt.
I also tried this, but no luck:
sftp user@hostname:/home/pavan/ <<< 'put test.txt $Dynamic\\_test.txt'
Please let me know if someone has an idea on this.
To resolve the variables, you have to use double quotes (as everywhere else in shell):
sftp user@hostname:/home/pavan/ <<< "put test.txt $Dynamic_test.txt"
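Note that inside double quotes the shell treats Dynamic_test as a single variable name. If the variable is actually called Dynamic and _test.txt is a literal suffix (so that my_test.txt comes from Dynamic holding "my"), wrap the name in braces so the shell does not look for a variable named Dynamic_test:

sftp user@hostname:/home/pavan/ <<< "put test.txt ${Dynamic}_test.txt"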
I have written a Groovy script to check the existence of a file field1_field2_field3.txt in my unix path.
def fileName = "/path/to/file/field1_field2_field3.txt"
File f = new File(fileName);
if(f.exists())
{
println (" Required files exists.. \n");
}
Now I want to extend this script to check whether files with the name field1_field2_*.txt exist.
Kindly let me know if there is a command that can give me the desired list of files, or whether I should implement this using regular expressions.
I originally suggested using FileNameFinder, but it seems that's not allowed, and for good reason: Groovy scripts are run on the master, so a file search wouldn't happen on the slave where the files are anyway.
You'll probably have to do something along the lines of this:
FileNameFinder().getFileNames fails on one Jenkins node
Perhaps:
def filesListAsString = bat(returnStdout: true, script: '@echo off & dir /b \\path\\to\\file\\field1_field2*.txt').trim()
Which should give you a whitespace-delimited String listing of files that match.
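From there, a rough sketch of the existence check (the variable names and the assumption of a Windows node are mine, not from the original answer):

def matches = filesListAsString.split(/\r?\n/).findAll { it.trim() }
if (matches) {
    echo "Required files exist: ${matches}"
} else {
    echo "No files matching field1_field2_*.txt were found."
}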
I have a folder with images, and they can have different formats, but the name will always be unique. Is there a way to get the file extension if I know the file's name without the extension (e.g. index and not index.html)? I could just check whether the file exists for every extension I expect to be there, but that seems like a bad solution to me.
Example:
I know there is an image called PIC but I don't know the extension (it could be '.png', '.jpg', etc.), therefore I cannot use the file command.
Well, if you're running a Unix-based system, this could be a workaround.
var sys = require('util')
var exec = require('child_process').exec;
function puts(error, stdout, stderr) {
console.log(stdout)
}
// this is where you should get your path/filename
var filename = "login";
// execute unix command 'file'
exec("file " + filename, puts);
I tested it for a PNG file and an EJS file, both with no extensions (which doesn't make a difference).
The results are below:
PNG:
photo: PNG image data, 100 x 100, 8-bit/color RGB, non-interlaced
EJS (which is basically HTML):
login: HTML document, ASCII text
You can check the file command's parameters to make it easier to work with the output string (e.g. file -b filename).
If you're using Windows, you'd have to look for an alternative to the file command.
Hope it's somehow useful.
I know I'm late to the party, but you can use the grep command on Unix-based systems,
i.e. ls | grep PIC. This first gives a directory listing of the working directory, then searches that output for the phrase PIC and prints the match (so the only thing printed is the filename).
In Windows, use dir PIC.* /b
You can execute these commands using child_process as shown in other answers
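For instance, a minimal sketch of running the Unix variant through child_process (the pipe needs a shell, which exec provides):

// List files in the current directory whose names contain PIC (Unix only).
var exec = require('child_process').exec;
exec('ls | grep PIC', function (error, stdout, stderr) {
    if (error) return console.error(stderr || error);
    console.log(stdout.trim()); // e.g. "PIC.png"
});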
var path = require('path')
path.extname('index.html')
// returns
'.html'
Here is the referenced answer
Node.js get file extension
Updated :
https://www.npmjs.com/package/file-extension
npm install --save file-extension
This allows you to specify a filename, and it will return the extension.
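If I'm remembering the package's interface correctly (check its README to confirm), usage looks roughly like this:

// Assumed usage of the file-extension package; verify against its README.
var fileExtension = require('file-extension');
console.log(fileExtension('index.html')); // expected: 'html' (no leading dot)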
I'm using ImageMagick to do some image processing from the commandline, and would like to operate on a list of files as specified in foo.txt. From the instructions here: http://www.imagemagick.org/script/command-line-processing.php I see that I can use Filename References from a file prefixed with @. When I run something like:
montage @foo.txt output.jpg
everything works as expected, as long as foo.txt is in the current directory. However, when I try to access bar.txt in a different directory by running:
montage /some_directory/@bar.txt output2.jpg
I get:
montage: unable to open image /some_directory/@bar.txt: No such file or directory @ blob.c/OpenBlob/2480.
I believe the issue is my syntax, but I'm not sure what to change it to. Any help would be appreciated.
Quite an old entry but it seems relatively obvious that you need to put the @ before the full path:
montage @/some_directory/bar.txt output2.jpg
As of ImageMagick 6.5.4-7 2014-02-10, paths are not supported with the @ syntax. The @ file must be in the current directory and identified by name only.
I haven't tried directing IM to pull the list of files from a file, but I do specify multiple files on the command line like this:
gm -sOutputFile=dest.ext -f file1.ppm file2.ppm file3.ppm
Can you pull the contents of that file into a variable, and then let the shell expand that variable?
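For example, a minimal sketch of that idea in the shell (assuming bar.txt lists one filename per line and none of the names contain spaces):

# Read the list into a variable, then let word splitting expand it into one argument per name.
files=$(cat /some_directory/bar.txt)
montage $files output2.jpg

Here $files is deliberately left unquoted so the shell splits it into separate arguments.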