In my Puppet module, I have something like this:
file { 'myfile':
  ensure  => 'file',
  path    => '/whatever/myfile',
  content => inline_template(file(
    "puppet:///modules/my_module/templates/${domain}/${::hostname}_myfile.erb",
    "puppet:///modules/my_module/templates/${domain}/myfile.erb"
  ))
}
And my spec looks like:
require 'spec_helper'

describe 'my_module' do
  context 'with defaults for all parameters' do
    it { should compile }
  end
end
If I try to run it, I get:
Failure/Error: it { should compile }
error during compilation: Evaluation Error: Error while evaluating a Function Call, Could not find any files from puppet:///modules/my_module/templates/dev.contaazul.local/myfile.erb, puppet:///modules/my_module/templates/dev.contaazul.local/myfile.erb at /home/carlos/Code/Puppet/modules/my_module/spec/fixtures/modules/my_module/manifests/init.pp:48:33 on node becker.local
Obviously, it cannot find any of the ERB templates. If I replace the puppet:// part with /home/carlos/Code/Puppet/ (where the code actually lives), it passes, but in production it is /etc/puppet/, so it will not work.
How can I make this work?
RI Pienaar has released a code snippet for a function with this behavior. You will need to copy this code into a file in one of your modules at the path lib/puppet/parser/functions/multi_source_template.rb.
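For reference, the core of such a function looks roughly like this (a hedged sketch of the idea, not the exact released snippet; consult RI Pienaar's version for the canonical code):

# lib/puppet/parser/functions/multi_source_template.rb
module Puppet::Parser::Functions
  newfunction(:multi_source_template, :type => :rvalue) do |args|
    # Render the first template that actually exists among the given names.
    args.each do |template_name|
      if Puppet::Parser::Files.find_template(template_name, compiler.environment.to_s)
        wrapper = Puppet::Parser::TemplateWrapper.new(self)
        wrapper.file = template_name
        return wrapper.result
      end
    end
    raise Puppet::ParseError,
          "multi_source_template: no matching template among: #{args.join(', ')}"
  end
end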
Once you do that, you should be able to use it like so:
file { 'myfile':
  ensure  => 'file',
  path    => '/whatever/myfile',
  content => multi_source_template(
    "my_module/${domain}/${::hostname}_myfile.erb",
    "my_module/${domain}/myfile.erb"
  )
}
As to why the original approach doesn't work: URLs are usually used with the source property only and are transferred to the agent as-is. The agent consumes the URL and makes a corresponding request to a Puppet fileserver (which is just another master).
The file function, on the other hand (as used here with the content property in conjunction with inline_template), does not process URLs and expects local paths instead.
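To make the distinction concrete (paths are illustrative):

# puppet:// URLs belong with `source`; the agent fetches the file itself:
file { '/whatever/myfile':
  source => 'puppet:///modules/my_module/myfile',
}

# file() and template() run on the master at compile time and take
# module-relative or absolute local paths, never puppet:// URLs:
file { '/whatever/myfile':
  content => template("my_module/${domain}/myfile.erb"),
}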
All that being said, the issue could possibly have been side-stepped by passing file() both the test-box path and the production path, with the production master just accepting that /home/carlos is missing. But that would have been far from a clean solution.
Related
I am building a Node CLI app that needs to be passed a single file as an argument.
For example:
myapp.js /Users/jdoe/my_file.txt
I know I can reference /Users/jdoe/my_file.txt through the _ object in yargs but how do I require that it be provided? I see the demandOption() method but I do not know how to demand options that do not have a corresponding flag (name).
I tried the following and it does not work:
.demandOption('_', "A file path is required")
I ended up using .demandCommand(1) which works!
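For anyone landing here later, a minimal sketch of that approach (the usage string and error message are just examples):

// myapp.js: demand at least one positional argument
const argv = require('yargs')
  .usage('Usage: $0 <file>')
  .demandCommand(1, 'A file path is required')
  .argv;

// the first positional argument is the file path
const filePath = argv._[0];
console.log(`Processing ${filePath}`);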
If you're satisfied with yargs and your solution, then by all means continue with what you're doing! I would like to point out some alternatives, though. There's of course commander, a well-known CLI creation tool that seems to handle required arguments more gracefully than yargs. I have also created a CLI creation tool that is (in my opinion) an improvement on the existing tools. The published tool is wily-cli, and it should be able to handle what you want to do. For example...
const cli = require('wily-cli');

cli
  .parameter('file', true)
  .on('exec', (options, parameters, command) => {
    const file = parameters.file;
    // ...
  });
That would cover the example you provided. The true flag denotes that the parameter is required. If the parameter isn't provided to the command, it'll display an error saying the parameter is required.
How about this at the top?
if (process.argv.length < 3) {
  console.error("A file path is required");
  process.exit(1);
}
I have a rather specific problem: I am trying to build a complex stack using Sails.js with machinepack-sailsgulpify.
In order to inject assets into my templates I use gulp-inject plugin, as the machinepack suggests. The problem is that for anything other than html and ejs the injector doesn't work. It simply doesn't change anything. No errors, nothing.
My task looks like this:
gulp.task('sails-linker-gulp:devViews', function() {
  return gulp.src('views/**/*.?(html|ejs|jade|pug|haml|slim|dust)') // Read templates
    .pipe(
      plugins.inject(
        gulp.src(require('../pipeline').jsFilesToInject, {read: false}), // Link the JavaScript
        {
          starttag: generateScriptStartTag,
          endtag: generateScriptEndTag,
          ignorePath: '.tmp/public',
          transform: (filepath, file, i, length) => {
            return `script(src="${filepath}")`;
          }
        }
      )
    )
    .pipe(
      plugins.inject(
        gulp.src(require('../pipeline').cssFilesToInject, {read: false}), // Link the styles
        {
          starttag: generateStyleStartTag,
          endtag: generateStyleEndTag,
          ignorePath: '.tmp/public'
        }
      )
    )
    .pipe(
      plugins.inject(
        gulp.src(['.tmp/public/jst.js'], {read: false}), // Link the JST templates
        {
          starttag: generateTemplateStartTag,
          endtag: generateTemplateEndTag,
          ignorePath: '.tmp/public'
        }
      )
    )
    .pipe(gulp.dest('views/')); // Write modified files
});
Don't worry about the generateScriptStartTag and related functions; they are just there for control and I am 1000% sure they work correctly, tested a lot. They generate tags like this:
//- SCRIPTS
//- SCRIPTS END
depending on the template language.
Adding a custom transform function did not work. If I use ejs or html, or really anything that resembles HTML syntax, it works fine.
Now, about Sails: I can NOT add a gulp task to compile the templates before injecting, because Sails renders templates on request in development; it doesn't actually pre-compile them into any directory. And honestly, why should I? The injection just adds lines to my .jade/.pug files in views; the files are already there, so I don't see why there's a problem. Can someone advise?
UPDATE:
A rather frustrating inspection of the code revealed that when the plugin's inject function runs, the matches property has length 0. When inspecting the content of the stream in the Node inspector, I did not see the comments; they were stripped away, despite the fact that they are clearly there in the file.
UPDATE #2:
It appears that I was wrong about ejs: ONLY HTML files are getting processed. It also works OK when it doesn't detect the injection comments. However, if it does, the end event simply never emits for that file and nothing gets injected. This is true for ALL templating engines; only static HTML files have injection working fine.
UPDATE #3:
After another 5 hours of debugging I found the problem, but my understanding of streams isn't good enough to get me any closer to the solution. The problem is that the plugin's inject function contains a loop that doesn't quit properly: while it perfectly injects the required tags into the stream, it then runs the loop again on the same stream (now with the injected tags) and throws an error.
Why that error never showed up in any console, I don't know, but there you go. Can someone please help? I am completely lost with this... Is it a bug in the plugin?
I had to figure this out on my own.
It is a bug in gulp-inject. The regex that this plugin generates to test against the injection tags does not match the whole line; it simply matches the first occurrence. This means that if I have my tags set like so:
//SCRIPTS
//SCRIPTS END
The regex will match the start tag //SCRIPTS twice: once on its own line and once as the prefix of //SCRIPTS END. The end tag will only be matched once. This was causing the second, faulty loop with the missing-end-tag error.
A workaround is to avoid start tags that are a prefix of the end tag:
//SCRIPTS
//END SCRIPTS
That's a workaround, not a solution, however. A proper solution would alter the regex so that only whitespace and newline characters may follow the tag, which matters when using an indent-based template language.
Something like this would work: /\/\/-\s*SCRIPTS(?=\s*\n|$)/ig
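A quick standalone check shows the difference (just the regex behaviour, not plugin code):

// the unanchored tag matches as a prefix of the end tag: the bug
/\/\/-\s*SCRIPTS/i.test('//- SCRIPTS END');            // true (false positive)

// with the lookahead, only a tag followed by whitespace/EOL matches
/\/\/-\s*SCRIPTS(?=\s*\n|$)/i.test('//- SCRIPTS');     // true
/\/\/-\s*SCRIPTS(?=\s*\n|$)/i.test('//- SCRIPTS END'); // false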
Can't believe nobody has stumbled upon this until now, seems like it would be a more common problem...
I have a CLI program that can be executed with a list of files that describe instructions, e.g.
node ./my-program.js ./instruction-1.js ./instruction-2.js ./instruction-3.js
This is how I am importing and validating that the target file is an instruction file:
const requireInstruction = (instructionFilePath) => {
  const instruction = require(instructionFilePath);
  if (!instruction.getInstruction) {
    throw new Error('Not instruction file.');
  }
  return instruction;
};
The problem with this approach is that it executes the file regardless of whether it matches the expected signature, i.e. if the file contains a side effect such as connecting to a database:
const mysql = require('mysql');
mysql.createConnection(/* ... */);
module.exports = mysql;
The Not instruction file. error will fire, and I will ignore the file, but the side effect will remain in the background.
How to safely validate target file signature?
Worst case scenario, is there a conventional way to completely sandbox the require logic and kill the process if file is determined to be unsafe?
Worst case scenario, is there a conventional way to completely sandbox the require logic and kill the process if file is determined to be unsafe?
Move the check logic into a specific js file. Make it process.exit(0) when everything is fine and process.exit(1) when it is wrong.
In your current program, instead of loading the file via require, use child_process.exec to invoke your new file, giving it the required parameter to know which file to test.
In your updated program, bind the close event to know if the return code was 0 or 1.
If you need more information than 0 or 1, have the new js file that loads the instruction print some JSON.stringified data to stdout (console.log), then retrieve and JSON.parse it in the callback of the child_process.exec call.
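A minimal sketch of that setup (file names and the signature check are assumptions based on the question):

// check-instruction.js: run in a throwaway child process
const path = require('path');

try {
  const candidate = require(path.resolve(process.argv[2]));
  // Exit 0 only if the expected signature is present; any side effects
  // the file started die together with this child process.
  process.exit(candidate && candidate.getInstruction ? 0 : 1);
} catch (e) {
  process.exit(1);
}

// my-program.js: validate the file before trusting it in-process
const { execFile } = require('child_process');

execFile('node', ['check-instruction.js', instructionFilePath], (error) => {
  if (error) {
    console.error('Not instruction file.');
  } else {
    // signature confirmed; side effects only ran inside the now-dead child
  }
});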
Alternatively, have you looked into AST processing?
http://jointjs.com/demos/javascript-ast
It could help you identify pieces of code which are not embedded within an exported function.
(Note: I discussed this question with the author on IRC. There may be some context in my answer that isn't in the original question.)
Given that your scenario is purely about preventing against accidental inclusion of non-instruction files, rather than about preventing malicious behaviour, static analysis using something like Esprima will probably be sufficient.
One approach would be to require that every instruction file exports some kind of object with a name property, containing the name of the instruction file. As there's not really anything to put in there besides a string literal, you can be fairly certain that if you can't locate a name property through static analysis, the file is not an instruction file - even in a language like JavaScript that isn't fully statically analyzable.
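A rough sketch of that check with Esprima (the AST walk and the expected module shape are assumptions, not a drop-in implementation):

const fs = require('fs');
const esprima = require('esprima');

// Parse the file without executing it, then look for an object literal
// with a string-literal `name` property anywhere in the tree.
const looksLikeInstruction = (filePath) => {
  const ast = esprima.parseScript(fs.readFileSync(filePath, 'utf8'));
  let found = false;
  const walk = (node) => {
    if (found || !node || typeof node.type !== 'string') return;
    if (node.type === 'Property' &&
        node.key.type === 'Identifier' && node.key.name === 'name' &&
        node.value.type === 'Literal' && typeof node.value.value === 'string') {
      found = true;
      return;
    }
    for (const key of Object.keys(node)) {
      const child = node[key];
      if (Array.isArray(child)) child.forEach((c) => walk(c));
      else if (child && typeof child === 'object') walk(child);
    }
  };
  walk(ast);
  return found;
};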
For any readers of this thread that are trying to protect from malicious actors, rather than accidents - for example, when accepting untrusted code from users: you cannot sandbox or 'validate' JavaScript with Node.js alone (not with the vm module either), and the above solution will not work for you. You will need system-level containerization or virtualization to run this kind of code safely. There are no other options.
I want to XHR some Opal script and use the Ruby code defined there. I tried to get it with $.getScript, but with no success.
$.ajax({
  url: 'assets/foo.js.rb',
  success: function(data){
    #{ClassInFooJs.new}
  },
  dataType: "script"
});
Moreover, the script is evaled (in a strange manner): I can call Opal.modules["file_i_required"] from the console, and it will basically return the compiled code.
BUT nothing inside of it is evaled (no console.logs, no window["foos"] = #{something}), and I can't reference anything from that file.
Any help?
You should use either require "foo" from Opal or Opal.require("foo") from JavaScript. If the load "foo" behavior is what you need, note that Opal.load("foo") is also available.
The alternative is to compile files on the server with the :requirable flag set to false, but that's kind of difficult unless you compile manually. See the API docs for more info.
Calling Opal.modules["foo"](Opal) directly should be avoided, as it doesn't properly update $LOADED_FEATURES and in general can change in the future.
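For example, assuming the server serves the compiled (requirable) output of foo.js.rb at assets/foo.js, something like this should work:

$.ajax({
  url: 'assets/foo.js',   // the compiled output, not the .rb source
  dataType: 'script'
}).done(function() {
  Opal.require('foo');    // evaluates the module body and registers its classes
});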
Is there a way to make a script where I can do stuff like $this->EE->db (i.e. using Expression Engine's classes, for example to access the database), but that can be run in the command line?
I tried searching for it, but the docs don't seem to contain this information (please correct me if I'm wrong). I'm using EE 2.4 (the link above should point to 2.4 docs).
The following article seems to have a possible approach: Bootstrapping EE for CLI Access
1. Duplicate your index.php file and name it cli.php.
2. Move the cli.php file outside your DOCUMENT_ROOT. Technically this isn't required, but there's no reason for prying eyes to see your hard work, so why not protect it.
3. Inside cli.php, update the $system_path on line 26 to point to your system folder.
4. Inside cli.php, update the $routing['controller'] on line 96 to be cli.
5. Inside cli.php, update the APPPATH on line 96 to be $system_path.'cli/'.
6. Duplicate the system/expressionengine directory and name it system/cli.
7. Duplicate the cli/controllers/ee.php file and name it cli/controllers/cli.php.
8. Finally, update the class name in cli/controllers/cli.php to be Cli and remove the methods.
By default EE calls the index method, so add in an index method to do what you need.
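Pulling those edits together, the changed lines in cli.php look roughly like this (a sketch; the exact line numbers and surrounding code vary by EE version):

<?php
// cli.php: values changed relative to the stock index.php

$system_path = './system';                // step 3: path to your system folder

$routing['controller'] = 'cli';           // step 4: route requests to the Cli controller

define('APPPATH', $system_path.'cli/');   // step 5: use the duplicated system/cli app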
@Zenbuman's answer was useful as a starting point, although I had issues with all of my requests going to cli->index, whereas I wanted some that went to cli->task1, cli->task2, etc.
I had to update system/codeigniter/system/core/URI.php so that it knew how to extract the parameters I was passing via the command line. I got the code below from a more recent version of CodeIgniter, which supports the CLI:
// Is the request coming from the command line?
if (php_sapi_name() == 'cli' or defined('STDIN'))
{
    $this->_set_uri_string($this->_parse_cli_args());
    return;
}

// Let's try the REQUEST_URI first, this will work in most situations
and also created the following function in the same file:
private function _parse_cli_args()
{
    $args = array_slice($_SERVER['argv'], 1);
    return $args ? '/' . implode('/', $args) : '';
}
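With that function in place, command-line arguments are rebuilt into an ordinary URI string, so a call like

php cli.php task1 foo bar

is treated roughly like a request for /task1/foo/bar, i.e. Cli->task1('foo', 'bar').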
I also had to comment out the following in my cli.php file, as all routing was going to the index method in my cli controller and ignoring my parameters:
/*
* ~ line 109 - 111 /cli.php
* ---------------------------------------------------------------
* Disable all routing, send everything to the frontend
* ---------------------------------------------------------------
*/
$routing['directory'] = '';
$routing['controller'] = 'cli';
//$routing['function'] = '';
Even leaving
$routing['function'] = '';
in place will force requests to go to the index method.
In the end I felt this was a bit hacky, but I really needed to use the EE API library in my case; otherwise I would have just created a separate application with CodeIgniter to handle my CLI needs. Hope the above helps others.
I found @Zenbuman's answer after solving my own variation of this problem. My example allows you to keep the cron script inside a module, so if you need your module to have a cron feature it all stays neatly packaged together. Here's a detailed guide on my blog.