I am starting to use Puppet to manage many servers. The problem is that whenever I try to use a class, New Relic for example:
node 'mynode' {
  class { 'newrelic::server::linux':
    newrelic_license_key => '***',
  }
}
It fails, and returns the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Puppet::Parser::AST::Resource failed with error ArgumentError: Could not find declared class newrelic::server::linux at /etc/puppet/manifests/site.pp:3 on node mynode
I have installed fsalum-newrelic on the master, and everything works fine when using files, packages, services etc. What am I doing wrong?
The catalog compiler will look for class newrelic::server::linux at newrelic/manifests/server/linux.pp relative to each directory in your module path. (Note: newrelic, NOT fsalum-newrelic.) Make certain that the module is installed such that this file exists somewhere in your modulepath, and that it is readable by the puppet master process.
Note, too, that "readable by the puppetmaster process" means more than just the ownership and permissions of the file itself. It also involves ownership and permissions of all the directories in the path to that file, and possibly other forms of access control, such as ACLs and SELinux context and policy.
Find out where you are actually installing the new Puppet Forge modules, perhaps using a Unix utility like "locate".
Then look at the "basemodulepath" setting in /etc/puppet/puppet.conf and check that the place where the module is installed is on that path.
Here is my basemodulepath:
basemodulepath = $confdir/environments/production/modules:$confdir/environments/production/local_modules:/etc/puppet/modules
The external modules I am using are either in /etc/puppet/modules or in /etc/puppet/environments/production/modules
I'm trying to deploy my first meteor app to modulus.io but I'm getting the following error in the log:
Error: EACCES, permission denied '/mnt/data/cfs'
at Object.fs.mkdirSync (fs.js:654:18)
at sync (/mnt/data/1/node_modules/mkdirp/index.js:55:12)
at sync (/mnt/data/1/node_modules/mkdirp/index.js:61:24)
at Function.sync (/mnt/data/1/node_modules/mkdirp/index.js:61:24)
at new FS.Store.FileSystem (packages/cfs:filesystem/filesystem.server.js:37:1)
at app/leads.js:69:3
at app/leads.js:332:3
at /mnt/data/1/programs/server/boot.js:222:10
at Array.forEach (native)
at Function._.each._.forEach (/mnt/data/1/node_modules/underscore/underscore.js:79:11)
It's obviously something about permissions, but I don't know how to fix it. Any ideas?
It seems you are trying to create the directory /mnt/data/cfs, and you don't have permission from the OS to do that. From quickly looking over the modulus.io documentation (http://help.modulus.io/customer/portal/articles/1653448-file-storage), the platform allows you to write to exactly two locations: your local app directory and /mnt/data/tmp. You are trying to write to a different directory, so that won't work.
Try using /mnt/data/tmp/cfs instead of /mnt/data/cfs.
It looks like you are using CollectionFS, and that package is using the directory in question. If that is the case, then you'll need to update the path option for that package:
var myStore = new FS.Store.FileSystem("something", {
  // write into the modulus.io-writable tmp area instead of /mnt/data/cfs
  path: "/mnt/data/tmp/cfs"
});
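If it helps, here is a minimal sketch of wiring that store into a collection (this assumes CollectionFS, as inferred from your stack trace; the collection name is just an example):
var Files = new FS.Collection("files", {
  stores: [myStore]  // the store defined above, pointing at /mnt/data/tmp/cfs
});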
BTW, I had to infer a lot from your error (use of CFS, what directory you are trying to create). When asking questions, it is better to provide that sort of detail.
I have a gulp.js process using the gulp-phantom plugin that works perfectly on my dev setup (Mac OS X 10.10). However, on my test/prod environment (EC2 Amazon Linux) it doesn't work at all, and it isn't giving any error message or other helpful output; the task just starts and finishes again almost straight away:
Dev environment output:
$ gulp crawlSite
[17:39:19] Using gulpfile ~/Documents/dev/mysite.co.uk/gulpfile.js
[17:39:19] Starting 'crawlSite'...
[17:40:15] Finished 'crawlSite' after 57 s
Test environment output:
$ gulp crawlSite
[17:34:27] Using gulpfile /var/www/html/mysite.co.uk/gulpfile.js
[17:34:27] Starting 'crawlSite'...
[17:34:27] Finished 'crawlSite' after 715 ms
As you can see, on the dev environment the process takes 57 seconds, but on test it takes only 715 milliseconds and does not create the files that my phantom script should be creating. My gulp task is very simple:
gulp.task('crawlSite', function() {
  return gulp.src("phantom-crawl-website.js")
    .pipe(phantom());
});
and my phantom script "phantom-crawl-website.js" file is in the same directory as the gulpfile.js file.
I have checked that all the node modules are installed and that PhantomJS is installed globally on the test environment, and everything checks out OK. If I run:
$ phantomjs phantom-crawl-website.js
from the command prompt on the test environment that works fine and it crawls the site and creates the files.
I have tried to use the gulp-phantom "debug" option, but I never see any output from it. I have also tried using gulp-debug, as follows:
gulp.task('crawlSite', function() {
  return gulp.src("phantom-crawl-website.js")
    .pipe(phantom({debug: true}))
    .pipe(debug());
});
However, all this does is give me the gulp-phantom output filename ("phantom-crawl-website.txt"). I have also tried writing the gulp-phantom output file in the following way:
gulp.task('crawlSite', function() {
  return gulp.src("phantom-crawl-website.js")
    .pipe(phantom({debug: true}))
    .pipe(gulp.dest("./phantomOutput/"));
});
But all I get from this is a blank file created in the "phantomOutput" directory called "phantom-crawl-website.txt".
Can anyone advise what I am doing wrong, and how I can see the PhantomJS debug output so I can work out what the problem is?
Thanks so much in advance.
UPDATE
I've managed to get some output from the gulp-phantom process by adding the following to the gulp-phantom index.js file:
program.stderr.on('data', function (data) {
  console.log('stderr: ' + data);
});
Once this was added I'm now getting the following error message:
stderr: Can't open '/dev/stdin'
But still no luck actually getting it to work.
Found the issue. In the gulp-phantom module there appears to be a bug: it passes /dev/stdin where phantomjs expects the phantom script filename. On Mac OS X, /dev/stdin contains the contents of the file, but on Linux phantomjs is denied permission to read it.
To fix it, I removed the line that pushes '/dev/stdin' onto the arguments stack, and added a line a bit further down, inside the "through" function call, that passes the full path and filename to the phantomjs process instead.
I will issue a pull request to the gulp-phantom module creator and see if they accept this as fix for the issue.
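In the meantime, a workaround that avoids /dev/stdin entirely is to spawn phantomjs yourself from the gulp task and pass the script by filename; a rough sketch (the task and script names match the ones above, everything else is illustrative):
var gulp = require('gulp');
var execFile = require('child_process').execFile;

gulp.task('crawlSite', function (done) {
  // pass the script by filename so phantomjs never has to read /dev/stdin
  execFile('phantomjs', ['phantom-crawl-website.js'], function (err, stdout, stderr) {
    if (stdout) console.log(stdout);
    if (stderr) console.error(stderr);
    done(err); // fail the task if phantomjs exited with an error
  });
});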
I want to make thumbnails from videos uploaded to S3, and I know how to do it with Node.js and ffmpeg.
According to this forum post I can add libraries:
ImageMagick is the only external library that is currently provided by
default, but you can include any additional dependencies in the zip
file you provide when you create a Lambda function. Note that if this
is a native library or executable, you will need to ensure that it
runs on Amazon Linux.
But how can I put a static ffmpeg binary on AWS Lambda?
And how can I call this static binary (ffmpeg) from Node.js on AWS Lambda?
I'm a newbie with Amazon AWS and Linux.
Can anyone help me?
The process as outlined by Naveen is correct, but it glosses over a detail that can be pretty painful - including the ffmpeg binary in the zip and accessing it within your lambda function.
I just went through this; it went like this:
Include the ffmpeg static binary in your zipped lambda function package (I have a gulp task to copy this into the /dist every time it builds)
When your function is called, move the binary to a /tmp/ dir and chmod it to give yourself access (Update Feb 2017: it's reported that this is no longer necessary, re: @loretoparisi's and @Allen's answers).
Update your PATH to include the ffmpeg executable (I used fluent-ffmpeg, which lets you set two env vars to handle that more easily).
Let me know if more detail is necessary, I can update this answer.
The copy and chmod (step 2) is obviously not ideal... I would love to know if anyone has found a better way to handle this, or if this is typical for this architecture style.
(2nd Update, writing it before the first update b/c it's more relevant):
The copy + chmod step is no longer necessary, as @Allen pointed out – I'm executing ffmpeg in Lambda functions directly from /var/task/ with no trouble at this point. Be sure to chmod 755 any binaries before uploading them to Lambda (also as @Allen pointed out).
I'm no longer using fluent-ffmpeg to do the work. Rather, I'm updating the PATH to include the process.env['LAMBDA_TASK_ROOT'] and executing simple bash scripts.
At the top of your Lambda function:
// PATH entries are colon-separated, so append the task root with ":"
process.env['PATH'] = process.env['PATH'] + ":" + process.env['LAMBDA_TASK_ROOT']
For an example that uses ffmpeg: lambda-pngs-to-mp4.
For a slew of useful lambda components: lambduh.
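With the task root on the PATH, extracting a thumbnail can be as simple as shelling out to ffmpeg from Node; a minimal sketch (the file paths and the -ss/-vframes options are only illustrative, not from the original answer):
var execFile = require('child_process').execFile;

// grab a single frame one second into the video and write it to /tmp (the writable dir)
execFile('ffmpeg',
  ['-i', '/tmp/input.mp4', '-ss', '00:00:01', '-vframes', '1', '/tmp/thumb.png'],
  function (err, stdout, stderr) {
    if (err) return console.error('ffmpeg failed: ' + stderr);
    console.log('thumbnail written to /tmp/thumb.png');
  });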
The below update left in for posterity, but no longer necessary:
UPDATE WITH MORE DETAIL:
I downloaded the static ffmpeg binary here. Amazon recommends booting up an EC2 instance and building a binary for your use there, because that environment will be the same as the one Lambda runs in. Probably a good idea, but more work, and this static download worked for me.
I pulled only the ffmpeg binary into my project's to-be-archived /dist folder.
When you upload your zip to lambda, it lives at /var/task/. For whatever reason, I ran into access issues trying to use the binary at that location, and more issues trying to edit permissions on the file there. A quick work-around is to move the binary to /tmp/ and chmod permissions on it there.
In Node, you can run shell commands via child_process. What I did looks like this:
require('child_process').exec(
  'cp /var/task/ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg;',
  function (error, stdout, stderr) {
    if (error) {
      //handle error
    } else {
      console.log("stdout: " + stdout)
      console.log("stderr: " + stderr)
      //handle success
    }
  }
)
This much should give you an executable ffmpeg binary in your lambda function – but you still need to make sure it's on your $PATH.
I abandoned fluent-ffmpeg and using Node to launch ffmpeg commands in favor of just launching a bash script from Node, so for me I had to add /tmp/ to my PATH at the top of the Lambda function:
process.env.PATH = process.env.PATH + ':/tmp/'
If you use fluent-ffmpeg, you can set the path to ffmpeg via:
process.env['FFMPEG_PATH'] = '/tmp/ffmpeg';
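For completeness, a rough sketch of producing a thumbnail with fluent-ffmpeg once that path is set (the input path and screenshot options are only illustrative; check the fluent-ffmpeg docs for the version you use):
process.env['FFMPEG_PATH'] = '/tmp/ffmpeg';
var ffmpeg = require('fluent-ffmpeg');

// take a single screenshot and save it under /tmp
ffmpeg('/tmp/input.mp4')
  .on('end', function () { console.log('thumbnail saved'); })
  .on('error', function (err) { console.error(err); })
  .screenshots({ count: 1, folder: '/tmp', filename: 'thumb.png' });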
Somewhat related/shameless self-plug: I'm working on a set of modules to make building Lambda functions out of composable modules easier under the name Lambduh. Might save some time getting these things together. A quick example: handling this scenario with lambduh-execute would be as simple as:
promises.push(execute({
  shell: "cp /var/task/ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg"
}));
Where promises is an array of promises to be run.
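If it helps, those promises can then be run with Q (which the lambduh modules use); a small sketch, assuming this sits inside the Lambda handler where context is available:
var Q = require('q');

// run the queued steps in order of resolution, then signal Lambda
Q.all(promises)
  .then(function () { context.succeed('done'); })
  .fail(function (err) { context.fail(err); });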
I created a GitHub repo that does exactly this (as well as resizes the video at the same time). Russ Matney's answer was extremely helpful to make the FFmpeg file executable.
I am not sure which Node library you would use for the ffmpeg task; nevertheless, the steps to accomplish it are the same.
Create a separate directory for your Lambda project
Run npm install <package name> inside that directory (this automatically puts node_modules and the appropriate files in place)
Create an index.js file in the Lambda project directory, require(<package-name>) there, and perform your main task of creating video thumbnails (a minimal skeleton is sketched after these steps)
Once you are done, you can zip the Lambda project folder, upload it in the AWS Management Console, and configure the index file and handler.
The rest of the configuration follows the same process: IAM execution role, trigger, memory and timeout specification, etc.
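For reference, a bare-bones index.js skeleton along those lines (the handler shape and the older context.succeed/context.fail callbacks are assumptions; the ffmpeg arguments and file paths are only illustrative):
// index.js
var execFile = require('child_process').execFile;

exports.handler = function (event, context) {
  // ...download the video from S3 to /tmp here, then:
  execFile('/tmp/ffmpeg',
    ['-i', '/tmp/input.mp4', '-vframes', '1', '/tmp/thumb.png'],
    function (err) {
      if (err) return context.fail(err);
      // ...upload /tmp/thumb.png back to S3, then:
      context.succeed('thumbnail created');
    });
};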
I got this working without moving it to /tmp. I ran chmod 755 on my executable and then it worked! I had problems when I previously set it to chmod 777.
At the time I'm writing, as described above, there is no longer any need to copy binaries from the current folder (that is, /var/task, the process.env['LAMBDA_TASK_ROOT'] folder) to the /tmp folder.
So it is just necessary to do
chmod 755 dist/ff*
if you have your ffmpeg and ffprobe binaries there.
By the way, my previous two-cents solution, which wasted two days of my time, was this:
Configure : function(options, logger) {
  // default options
  this._options = {
    // Temporary files folder for caching and modified/downloaded binaries
    tempDir : '/tmp/',
    /**
     * Copy binaries to temp and fix permissions
     * default to false - since this is no longer necessary
     * @see http://stackoverflow.com/questions/27708573/aws-lambda-making-video-thumbnails/29001078#29001078
     */
    copyBinaries : false
  };
  // override defaults
  for (var attrname in options) { this._options[attrname] = options[attrname]; }
  this.logger = logger;
  var self = this;
  // add temporary folder and task root folder to PATH
  process.env['PATH'] = process.env['PATH'] + ':/tmp/:' + process.env['LAMBDA_TASK_ROOT'];
  if (self._options.copyBinaries) {
    var result = {};
    execute(result, {
      shell: "cp ./ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg", // copy the ffmpeg binary to /tmp/ and chmod it so it can run
      logOutput: true
    })
    .then(function(result) {
      return execute(result, {
        shell: "cp ./ffprobe /tmp/.; chmod 755 /tmp/ffprobe", // copy the ffprobe binary to /tmp/ and chmod it so it can run
        logOutput: true
      });
    })
    .then(function(result) {
      self.logger.info("LambdaAPIHelper.Configure done.");
    })
    .fail(function(err) {
      self.logger.error("LambdaAPIHelper.Configure: error %s", err);
    });
  } // copyBinaries
}
Helped by the good lambduh module:
// lambduh & dependencies
var Q = require('q');
var execute = require('lambduh-execute');
As described here and confirmed by the module author, this can now be considered unnecessary. By the way, it's worth having a good understanding of the Lambda runtime (the machine) environment, which is well described in Exploring the Lambda Runtime environment.
I just went through the same issues described above and ended up going with the same concept of moving the scripts that require execution to the /tmp directory.
var childProcess = require("child_process");
var Q = require('q');
The code I used is below, with promises:
.then(function(result) {
  console.log('Move shell ffmpeg shell script to executable state and location');
  var def = Q.defer();
  childProcess.exec("mkdir /tmp/bin; cp /var/task/bin/ffmpeg /tmp/bin/ffmpeg; chmod 755 /tmp/bin/ffmpeg",
    function (error, stdout, stderr) {
      if (error) {
        console.log("error: " + error)
      } else {
        def.resolve(result);
      }
    }
  )
  return def.promise;
})
In order for the binary to be directly executable on AWS Lambda (without first having to copy to /tmp and chmod), you need to ensure the binary has executable permission when it is added to the ZIP file.
This is problematic on Windows because Windows doesn't track Linux file permissions, so the executable bit is lost when the ZIP is built there. If you're using Windows 10, use the Ubuntu Bash shell to create the package.
I created a Node.js function template specifically for this purpose here. It allows you to deploy one or more binaries to Lambda, then execute an arbitrary shell command and capture the output.
I have been trying to use the command to roll back the last deployment of the website, which was interrupted due to a network failure.
The generic command that I am using while inside the bin directory of the server's SDK (on Linux) is:
./appcfg.sh rollback /path_to_the_war_directory_that_has_appengine-web.xml
Is this the way to do a rollback? If not, please tell me the correct method.
(I was asked to make a directory war in the project directory and place the WEB-INF folder in it, with appengine-web.xml inside. It may be wrong.)
I am fully convinced that I am making a mistake in giving the path to my app.
[Screenshot: where my .war file is located]
Now the command that I am using (while inside the bin directory of the server's SDK) is:
./appcfg.sh rollback /home/non-admin/NetbeansProjects/'Personal Site'/web/war
[Screenshot: the path to the war directory]
Where am I wrong? How should I run this command so that I am able to deploy my project once again?
On running the above command I get this message:
Unable to find the webapp directory /home/non-admin/NetbeansProjects/Personal Site/web/war
usage: AppCfg [options] <action> [<app-dir>] [<argument>]
NOTE: I have duplicated the WEB-INF folder. There is still a folder named WEB-INF inside the web directory that contains all the other xml files.
The error tells you that the folder /home/non-admin/NetbeansProjects/Personal Site/web/war does not exist. If you look carefully, the name of the folder is NetBeansProjects (the filesystem on Linux is case-sensitive).
So, you should run instead the command:
./appcfg.sh rollback /home/non-admin/NetBeansProjects/'Personal Site'/web/war
and, just to make sure that the directory exists, first run:
ls /home/non-admin/NetBeansProjects/'Personal Site'/web/war