I want to make thumbnails from videos uploaded to S3; I know how to do it with Node.js and ffmpeg.
According to this forum post I can add libraries:
ImageMagick is the only external library that is currently provided by
default, but you can include any additional dependencies in the zip
file you provide when you create a Lambda function. Note that if this
is a native library or executable, you will need to ensure that it
runs on Amazon Linux.
But how can I put a static ffmpeg binary on AWS Lambda?
And how can I call this static binary (ffmpeg) from Node.js on AWS Lambda?
I'm a newbie with Amazon AWS and Linux.
Can anyone help me?
The process as outlined by Naveen is correct, but it glosses over a detail that can be pretty painful - including the ffmpeg binary in the zip and accessing it within your lambda function.
I just went through this, it went like this:
Include the ffmpeg static binary in your zipped lambda function package (I have a gulp task to copy this into the /dist every time it builds)
When your function is called, move the binary to a /tmp/ dir and chmod it to give yourself access (Update Feb 2017: it's reported that this is no longer necessary; see @loretoparisi's and @allen's answers).
Update your PATH to include the ffmpeg executable (I used fluent-ffmpeg, which lets you set two env vars to handle that more easily).
Let me know if more detail is necessary; I can update this answer.
The copy and chmod (step 2) is obviously not ideal... I'd love to know if anyone's found a better way to handle this, or if this is typical for this architecture style.
(2nd Update, writing it before the first update b/c it's more relevant):
The copy + chmod step is no longer necessary, as @Allen pointed out – I'm executing ffmpeg in Lambda functions directly from /var/task/ with no trouble at this point. Be sure to chmod 755 whatever binaries before uploading them to Lambda (also as @Allen pointed out).
I'm no longer using fluent-ffmpeg to do the work. Rather, I'm updating the PATH to include the process.env['LAMBDA_TASK_ROOT'] and executing simple bash scripts.
At the top of your Lambda function:
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT']
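With the task root on the PATH, ffmpeg can then be launched like any other command. Here's a minimal sketch (the file names are hypothetical) that grabs a single frame as a thumbnail:

// Minimal sketch: extract one frame from a video already downloaded to /tmp.
const { execFile } = require('child_process');

execFile('ffmpeg', [
  '-i', '/tmp/input.mp4',   // input video, e.g. fetched from S3 beforehand
  '-ss', '00:00:01',        // seek one second in
  '-vframes', '1',          // emit exactly one frame
  '/tmp/thumbnail.png'      // /tmp is the only writable location on Lambda
], (error, stdout, stderr) => {
  if (error) return console.error('ffmpeg failed:', stderr);
  console.log('thumbnail written to /tmp/thumbnail.png');
});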
For an example that uses ffmpeg: lambda-pngs-to-mp4.
For a slew of useful lambda components: lambduh.
The update below is left in for posterity, but it is no longer necessary:
UPDATE WITH MORE DETAIL:
I downloaded the static ffmpeg binary here. Amazon recommends booting up an EC2 and building a binary for your use on there, because that environment will be the same as the conditions Lambda runs on. Probably a good idea, but more work, and this static download worked for me.
I pulled only the ffmpeg binary into my project's to-be-archived /dist folder.
When you upload your zip to lambda, it lives at /var/task/. For whatever reason, I ran into access issues trying to use the binary at that location, and more issues trying to edit permissions on the file there. A quick work-around is to move the binary to /tmp/ and chmod permissions on it there.
In Node, you can run shell commands via a child_process. What I did looks like this:
require('child_process').exec(
  'cp /var/task/ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg;',
  function (error, stdout, stderr) {
    if (error) {
      // handle error
    } else {
      console.log("stdout: " + stdout);
      console.log("stderr: " + stderr);
      // handle success
    }
  }
);
This much should give you an executable ffmpeg binary in your lambda function – but you still need to make sure it's on your $PATH.
I abandoned fluent-ffmpeg and using Node to launch ffmpeg commands in favor of just launching a bash script from Node, so for me, I had to add /tmp/ to my PATH at the top of the Lambda function:
process.env.PATH = process.env.PATH + ':/tmp/'
If you use fluent-ffmpeg, you can set the path to ffmpeg via:
process.env['FFMPEG_PATH'] = '/tmp/ffmpeg';
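For reference, a hedged sketch of what the fluent-ffmpeg route can look like once FFMPEG_PATH is set (the input path is hypothetical):

// Sketch: one thumbnail via fluent-ffmpeg, which picks up FFMPEG_PATH.
const ffmpeg = require('fluent-ffmpeg');

ffmpeg('/tmp/input.mp4')
  .screenshots({ count: 1, folder: '/tmp', filename: 'thumb.png' })
  .on('end', () => console.log('screenshot saved'))
  .on('error', (err) => console.error('ffmpeg error:', err.message));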
Somewhat related/shameless self-plug: I'm working on a set of modules to make building Lambda functions out of composable modules easier under the name Lambduh. Might save some time getting these things together. A quick example: handling this scenario with lambduh-execute would be as simple as:
promises.push(execute({
  shell: "cp /var/task/ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg"
}));
Where promises is an array of promises to be run.
I created a GitHub repo that does exactly this (as well as resizes the video at the same time). Russ Matney's answer was extremely helpful to make the FFmpeg file executable.
I am not sure which custom Node library you would use for the ffmpeg task; nevertheless, the steps to accomplish it are the same.
Create a separate directory for your lambda project
Run npm install <package name> inside that directory (this will automatically put the node_modules and appropriate files in place)
Create an index.js file in the lambda project directory, then require(<package-name>) and perform your main task of video thumbnail creation (a minimal sketch follows below)
Once you are done, you can zip the lambda project folder, upload it in the AWS Management Console, and configure the index file and handler.
The rest of the configuration follows the same process: IAM execution role, trigger, memory and timeout specification, etc.
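For illustration, a minimal index.js sketch of what steps 3 and 4 describe (the package name and handler body are placeholders, not a specific library):

// index.js - minimal handler skeleton; 'some-package' is a placeholder.
const somePackage = require('some-package');

exports.handler = async (event) => {
  // perform your main task here, e.g. video thumbnail creation
  const result = await somePackage.doWork(event);
  return result;
};

With this layout, the handler configured in the console would be index.handler.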
I got this working without moving it to /tmp. I ran chmod 755 on my executable and then it worked! I had problems when I previously set it to chmod 777.
At the time I'm writing, as well described above, there is no longer any need to copy binaries from the current folder (/var/task, i.e. the process.env['LAMBDA_TASK_ROOT'] folder) to the /tmp folder.
So it is just necessary to do
chmod 755 dist/ff*
if you have your ffmpeg and ffprobe binaries there.
By the way, my previous two-cents solution, which wasted two days of my time, was this:
Configure: function(options, logger) {
  // default options
  this._options = {
    // temporary files folder for caching and modified/downloaded binaries
    tempDir: '/tmp/',
    /**
     * Copy binaries to temp and fix permissions
     * defaults to false - since this is no longer necessary
     * @see http://stackoverflow.com/questions/27708573/aws-lambda-making-video-thumbnails/29001078#29001078
     */
    copyBinaries: false
  };
  // override defaults
  for (var attrname in options) { this._options[attrname] = options[attrname]; }
  this.logger = logger;
  var self = this;
  // add temporary folder and task root folder to PATH
  process.env['PATH'] = process.env['PATH'] + ':/tmp/:' + process.env['LAMBDA_TASK_ROOT'];
  if (self._options.copyBinaries) {
    var result = {};
    execute(result, {
      shell: "cp ./ffmpeg /tmp/.; chmod 755 /tmp/ffmpeg", // copies the ffmpeg binary to /tmp/ and chmods permissions to run it
      logOutput: true
    })
    .then(function(result) {
      return execute(result, {
        shell: "cp ./ffprobe /tmp/.; chmod 755 /tmp/ffprobe", // copies the ffprobe binary to /tmp/ and chmods permissions to run it
        logOutput: true
      });
    })
    .then(function(result) {
      self.logger.info("LambdaAPIHelper.Configure done.");
    })
    .fail(function(err) {
      self.logger.error("LambdaAPIHelper.Configure: error %s", err);
    });
  } // copyBinaries
}
helped by the good lambduh module:
// lambduh & dependencies
var Q = require('q');
var execute = require('lambduh-execute');
As described here and confirmed by the module author, this can now be considered unnecessary. In any case, it's worth having a good understanding of the Lambda runtime (machine) environment, which is well described in Exploring the Lambda Runtime environment.
I just went through the same issues as described above and ended up going with the same concept of moving the scripts requiring execution to the /tmp directory.
var childProcess = require("child_process");
var Q = require('q');
Code I used is below with promises:
.then(function(result) {
  console.log('Move ffmpeg shell script to executable state and location');
  var def = Q.defer();
  childProcess.exec("mkdir /tmp/bin; cp /var/task/bin/ffmpeg /tmp/bin/ffmpeg; chmod 755 /tmp/bin/ffmpeg",
    function (error, stdout, stderr) {
      if (error) {
        console.log("error: " + error);
        def.reject(error); // settle the promise on failure as well
      } else {
        def.resolve(result);
      }
    }
  );
  return def.promise;
})
In order for the binary to be directly executable on AWS Lambda (without first having to copy to /tmp and chmod), you need to ensure the binary has executable permission when it is added to the ZIP file.
This is problematic on Windows because Windows file systems don't track Unix file permissions, so the executable bit is lost when the ZIP is built there. If you're using Windows 10, use the Ubuntu Bash shell to create the package.
I created a Node.js function template specifically for this purpose here. It allows you to deploy one or more binaries to Lambda, then execute an arbitrary shell command and capture the output.
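If you build the ZIP from Node instead, one option (an assumption on my part, not something the template requires) is the archiver package, which lets you set the Unix mode per entry so the executable bit survives:

// Sketch: build the deployment ZIP with archiver, marking the binary executable.
const fs = require('fs');
const archiver = require('archiver');

const output = fs.createWriteStream('lambda.zip');
const archive = archiver('zip');
archive.pipe(output);
archive.file('index.js', { name: 'index.js' });
archive.file('ffmpeg', { name: 'ffmpeg', mode: 0o755 }); // preserve exec bit
archive.finalize();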
I feel like I'm making a silly mistake here, but Pulumi cannot seem to see the file I'm using to create an asset with. The files are right beside each other in the directory. Here is the file structure:
/resources
  -- function.js
  -- lambda.ts
Pretty simple. Here is the file where we try to use the JavaScript file as an asset for the Lambda we are creating:
// lambda.ts
import * as pulumi from '@pulumi/pulumi';
import * as aws from '@pulumi/aws';

const file = new pulumi.asset.FileAsset('./function.js');
console.log('file', file);

export const exampleFunction = new aws.lambda.Function('exampleFunction', {
  role: role.arn,
  runtime: 'nodejs14.x',
  code: file,
});
I've written a script to run my Pulumi commands. I also pwd and ls the file directory to confirm that the file is there.
#!/bin/sh
# deploy.sh
ls resources
pulumi preview
Output/error:
ls resources --
function.js  lambda.ts
pulumi preview --
Error: failed to register new resource exampleFunction [aws:lambda/function:Function]: 2 UNKNOWN: failed to compute archive hash: couldn't read archive path './function.js': stat ./function.js: no such file or directory
Kinda lost on this one. The file is right there.
Edit:
I've realized that the file path must be relative to where the Pulumi.yaml file is located, so one directory above /resources in this case.
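So the asset line becomes (assuming Pulumi.yaml sits one directory above /resources):

// Path resolved relative to the directory containing Pulumi.yaml:
const file = new pulumi.asset.FileAsset('./resources/function.js');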
Where is deploy.sh or more specifically, pulumi preview being run?
Is it being run in the "resources" folder, or someplace else - e.g. in the folder above?
If it's not being run from the resources folder, then that's likely the issue and so you need to provide the path to function.js from where pulumi is being run.
You should be using FileAsset and not FileArchive, as FileArchive is for tar.gz and related archive files: https://www.pulumi.com/docs/intro/concepts/assets-archives/
At least I think that's what's going on. If that's true then the error message is not great as it doesn't tell you what the actual problem is.
When I run the function locally on NodeJS 11.7.0 it works, and when I run it in AWS Lambda on NodeJS 8.10 it works, but I've recently tried to run it in AWS Lambda on NodeJS 10.x and get this response and this error in CloudWatch.
Any thoughts on how to correct this?
Response
{
  "success": false,
  "error": "Error: Could not find openssl on your system on this path: openssl"
}
Cloudwatch Error
ERROR (node:8) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
Function
...
const util = require('util');
const pem = require('pem');
...
return new Promise((fulfill) => {
  require('./certs').get(req, res, () => {
    return fulfill();
  });
}).then(() => {
  const createCSR = util.promisify(pem.createCSR);
  // This seems to be where the issue is coming from
  return createCSR({
    keyBitsize: 1024,
    hash: HASH,
    commonName: id.toString(),
    country: 'US',
    state: 'Maryland',
    organization: 'ABC', // Obfuscated
    organizationUnit: 'XYZ', // Obfuscated
  });
}).then(({ csr, clientKey }) => {
  ...
}).then(async ({ certificate, clientKey }) => {
  ...
}, (err) => {
  return res.status(404).json({
    success: false,
    error: err,
  });
});
...
I've tried with "pem": "^1.14.3" and "pem": "^1.14.2".
I tried the answer documented by @Kris White, but I was not able to get it to work. Each execution resulted in the error Could not find openssl on your system on this path: /opt/openssl. I tried several different paths and approaches, but none worked well. It's entirely possible that I simply didn't copy the OpenSSL executable correctly.
Since I needed a working solution, I used the answer provided by @Wilfred Dittmer. I modified it slightly since I wasn't using Docker. I launched an Amazon Linux 2 server, built OpenSSL on it, transferred the package to my local machine, and deployed it via Serverless.
Create a file named create-openssl-zip.sh with the following contents. The script will create the Lambda Layer OpenSSL package.
#!/bin/bash -x
# This file should be copied to and run inside the /tmp folder
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/tmp/nodejs/openssl --openssldir=/tmp/nodejs/openssl && make && make install
cd /tmp
rm -rf nodejs/openssl/share nodejs/openssl/include
zip -r lambda-layer-openssl.zip nodejs
rm -rf nodejs openssl-1.1.1d
Then, follow these steps:
Open a terminal session in this project's root folder.
Run the following command to upload the Linux bash script.
curl -F "file=#create-openssl-zip.sh" https://file.io
Note: The command above uses the popular tool File.io to copy the script to the cloud temporarily so it can be securely retrieved from the build server.
Note: If curl is not installed on your dev machine, you can also upload the script manually using the File.io website.
Copy the URL for the uploaded file from either the terminal session or the File.io website.
Note: The url will look similar to this example: https://file.io/a1B2c3
Open the AWS Console to the EC2 Instances list.
Launch a new instance with these attributes:
AMI: Amazon Linux 2 AMI (HVM), SSD Volume Type (id: ami-0a887e401f7654935)
Instance Type: t2.micro
Instance Details: (use all defaults)
Storage: (use all defaults)
Tags: Name - 'build-lambda-layer-openssl'
Security Group: 'Create new security group' (use all defaults to ensure Instance will be publicly accessible via SSH over the internet)
When launching the instance and selecting a key pair, be sure to choose a Key Pair from the list to which you have access.
Launch the instance and wait for it to be accessible.
Once the instance is running, use an SSH Client to connect to the instance.
More details on how to open an SSH connection can be found here.
In the SSH terminal session, navigate to the tmp directory by running cd /tmp.
Download the bash script uploaded earlier by running curl {FILE_IO_URL} --output create-openssl-zip.sh.
Note: In the script above, replace FILE_IO_URL with the URL returned from File.io and copied in step 3.
Execute the bash script by running sudo bash ./create-openssl-zip.sh. The script may take a while to complete. You may need to confirm one or more package install prompts.
When the script completes, run the following command to upload the package to File.io: curl -F "file=@lambda-layer-openssl.zip" https://file.io.
Copy the URL for the uploaded file from the terminal session.
In the terminal session on the local development machine, run the following command to download the file: curl {FILE_IO_URL} --output lambda-layer-openssl.zip.
Note: In the script above, replace FILE_IO_URL with the URL returned from File.io and copied in step 13.
Note: If curl is not installed on your dev machine, you can also download the file manually by pasting the copied URL in the address bar of your favorite browser.
Close the SSH session.
In the EC2 Instances list, terminate the build-lambda-layer-openssl EC2 instance since it is not needed any longer.
The OpenSSL Lambda Layer is now ready to be deployed.
For completeness, here is a portion of my serverless.yml file:
functions:
  functionName:
    # ...
    layers:
      - { Ref: OpensslLambdaLayer }

layers:
  openssl:
    name: ${self:provider.stage}-openssl
    description: Contains openssl command line utility for lambdas that need it
    package:
      artifact: 'path\to\lambda-layer-openssl.zip'
    compatibleRuntimes:
      - nodejs10.x
      - nodejs12.x
    retain: false
...and here is how I configured PEM in the code file:
import * as pem from 'pem';

process.env.LD_LIBRARY_PATH = '/opt/nodejs/openssl/lib';
pem.config({
  pathOpenSSL: '/opt/nodejs/openssl/bin/openssl',
});
// other code...
I contacted AWS Support about this and it turns out that the openssl library is still on the Node10x image, just not the command line utility. However, it's pretty easy to just grab it off a standard AMI and use it as a Lambda layer.
Steps:
Launch an Amazon Linux 2 AMI as an EC2
SSH into the box, or use an SFTP utility to connect to the box
Copy the command line utility for openssl at /usr/bin/openssl somewhere you can work with it locally. In my case I downloaded it to my Mac even though it is a Linux file.
Verify that it's still marked as executable (chmod a+x openssl if necessary if you've downloaded it elsewhere)
Zip up the file
Optional: Upload it to an S3 bucket you can get to
Go to Lambda Layers in the AWS console
Create a new lambda layer. I named mine openssl and used the S3 pointer to the file on S3. You can also upload the zip directly if you have it on a local file system.
Attach the arn provided for the layer to your Lambda function. I use serverless so it was defined in the function setup per their documentation.
In your code, reference openssl as /opt/openssl, or you can avoid pathing it in your code (you may not have that option if it's a package you don't control) by adding /opt to your PATH, i.e.
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'] + ':/opt';
The layer will have been unzipped for you, and because you set it to be executable beforehand, it should just work. The underlying openssl libraries are there, so just copying the CLI works fine.
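A quick way to sanity-check the layer from the function itself (a sketch, not part of the original setup):

// Sketch: confirm the layered binary is reachable and executable.
const { execFile } = require('child_process');

execFile('/opt/openssl', ['version'], (err, stdout) => {
  console.log(err ? err : stdout); // prints the OpenSSL version string on success
});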
What you can do is to create a lambda layer with the openssl library.
Using the lambci/lambda:build-nodejs10.x image you can compile the openssl library and create a zip file from the install. The zip file you can then use as a layer for your lambda.
Create a file called create-openssl-zip.sh and make sure to chmod u+x it.
#!/bin/bash -x
# This file should be run inside the lambci/lambda:build-nodejs10.x container
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/var/task/nodejs/openssl --openssldir=/var/task/nodejs/openssl && make && make install
cd /var/task/
rm -rf nodejs/openssl/share
rm -rf nodejs/openssl/include
zip -r lambda-openssl-layer.zip nodejs
cp lambda-openssl-layer.zip /opt/layer/
Then run:
docker run -it -v `pwd`:/opt/layer lambci/lambda:build-nodejs10.x /opt/layer/create-openssl-zip.sh
This will run the script inside the docker container and when it is done you have a file called lambda-openssl-layer.zip in your current directory.
Upload this zip to an S3 bucket and create a lambda layer.
On your original lambda, add this layer and modify your code so that the PEM library knows where to look for the OpenSSL library as follows:
PEM.config({
  pathOpenSSL: '/opt/nodejs/openssl/bin/openssl'
})
And finally add an extra environment variable to your lambda called LD_LIBRARY_PATH with value /opt/nodejs/openssl/lib
Otherwise it will fail with:
/opt/nodejs/openssl/bin/openssl: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
The PEM npm docs say:
Setting openssl location
In some systems the openssl executable might not be available by the default name or it is not included in $PATH. In this case you can define the location of the executable yourself as a one time action after you have loaded the pem module:
So I think it is not able to find the OpenSSL path on your system; you can try configuring it programmatically:
var pem = require('pem');
pem.config({
  pathOpenSSL: '/usr/local/bin/openssl'
});
As you are using AWS Lambda, try printing process.env.PATH; that will give you an idea of whether openssl is included in the PATH environment variable or not.
You can also check for openssl by running the code below:
const exec = require('child_process').exec;
exec('which openssl', function (err, stdout, stderr) {
  console.log(err ? err : stdout);
});
UPDATE
As @hoangdv mentioned in his answer, openssl seems to have been removed from the node10.x runtime, and I think he is right. Also, we have read-only access to the file system, so we can't do much.
@Seth McClaine, you can give the node-forge npm module a try. One of the modules built on top of it is https://github.com/jfromaniello/selfsigned, which will make your task easier.
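For example, a minimal sketch with the selfsigned module (the parameters here are illustrative, not a recommendation):

// Sketch: generate a self-signed cert without any openssl binary.
const selfsigned = require('selfsigned');

const pems = selfsigned.generate(
  [{ name: 'commonName', value: 'example.com' }],
  { days: 365, keySize: 1024 }
);
console.log(pems.cert);    // PEM-encoded certificate
console.log(pems.private); // PEM-encoded private key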
https://github.com/lambci/git-lambda-layer/issues/13#issue-444697784 (announcement email)
It seems openssl has been removed in the nodejs10.x runtime.
I have checked again on the lambci/lambda:build-nodejs10.x docker image and confirmed that. Maybe you need to change your runtime version or find another way to createCSR.
which: no openssl in (/var/lang/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin)
I wrote a simple module to install a package (BioPerl) on an Ubuntu VM. The whole init.pp file is here:
https://gist.github.com/anonymous/17b4c31bf7309aff14dfdcd378e44f40
The problem is it doesn't work, and it gives me no feedback to let me know why. There are 3 simple steps in the module. I checked, and it didn't do any of them. Here are the first 2:
Step 1: Download an archive and save it to /usr/local/lib
exec { 'bioperl-download':
  command => "sudo /usr/bin/wget --no-check-certificate -O ${archive_path} ${package_uri}",
  require => Package['wget']
}
Step 2: Extract the archive
exec { 'bioperl-extract':
  command => "sudo /usr/bin/tar zxvf ${archive_path} --directory ${install_path}; sudo rm ${archive_path}",
  require => Exec['bioperl-download']
}
Pretty simple. But I have no idea where the problem is because I can't see what it's doing. The provisioner is set to verbose mode, and here are the output lines for my module:
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-download]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-extract]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-path]/returns: executed successfully
So all I know is that it executed these three steps successfully. It doesn't tell me anything about whether the steps did their job properly or not. I know that it didn't download the archive to that directory (/usr/local/lib), and that it didn't add an environment variable file to /usr/profile.d. Maybe the variables containing the directories are wrong. Maybe the variable containing the archive's download URI is wrong. How can I find these things out?
UPDATE:
It turns out the module does work. But to improve the module (since I want to upload it to forge.puppetlabs.com), I tried implementing the changes suggested by Matt. Here's the new code:
file { 'bioperl-download':
  path   => "${archive_path}",
  source => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
  ensure => "present"
}

exec { 'bioperl-extract':
  command => "sudo /bin/tar zxvf ${archive_name}",
  cwd     => "${bioperl_target_dir}",
  require => File['bioperl-download']
}
A problem: It gives me an error telling me that the source cannot be http://. I see in the docs that they do indeed allow http:// files as the source for the file resource. Maybe I'm using an older version of puppet?
I want to try out the puppet-archive module, but I'm not sure how I can set it as a required dependency. By that, I mean how I can make sure it's installed first. Do I need to get my module to download the module from GitHub and save it to the modules directory? Or is there a way to let Puppet install it automatically? I added it as a dependency to the metadata.json file, but that doesn't install it. I know I can just get my module to download the package, but I was wondering what the best practice for this is.
The initial problem you describe is acceptance testing. Verifying that the Puppet resources and code you wrote actually resulted in the desired end state is normally accomplished with Serverspec: http://serverspec.org/. For example, you can write a Puppet module to deploy an application, but you only know that Puppet did what you told it to, not that the application actually deployed successfully. Note that Serverspec is also what people generally use to solve this problem for Ansible and Chef.
You can write a Serverspec test similar to the following to help test your module's end state:
describe file('/usr/local/lib/bioperl.tar.gz') do
  it { expect(subject).to be_file }
end

describe file('/usr/profile.d/env_file') do
  it { expect(subject).to be_file }
  its(:content) { is_expected.to match(/env stuff/) }
end
However, your problem also seems to deal with debugging why your acceptance tests failed. For that, you need unit testing. This is normally solved with RSpec-Puppet: http://rspec-puppet.com/. I would show you how to write some tests for your situation, but I don't think you should be writing your Puppet module the way that you did, so it would render the unit tests irrelevant.
Instead, consider using a file resource with the source attribute and a HTTP URI to grab the tarball instead of an exec with wget: https://docs.puppet.com/puppet/latest/type.html#file-attribute-source. Also, you might want to consider using the Puppet archive module to assist you: https://forge.puppet.com/puppet/archive.
If you have questions on how to use these tools to provide unit and acceptance testing, or have questions on how to refactor your module, then don't hesitate to write followup questions on StackOverflow and we can help you.
I'm currently starting up my NodeJS application and I get the following error:
Error: ENOENT, no such file or directory './realworks/objects/'
at Object.fs.mkdirSync (fs.js:654:18)
at Object.module.exports.StartScript (/home/nodeusr/huizenier.nl/realworks.js:294:7)
The weird thing, however, is that the folder exists already, but the check fails on the following snippet:
if (fs.existsSync(objectPath)) {
  var existingObjects = fs.readdirSync(objectPath);
  existingObjects.forEach(function (objectFile) {
    var object = JSON.parse(fs.readFileSync(objectPath + objectFile));
    actualObjects[object.ObjectCode] = object;
  });
} else {
  fs.mkdirSync(objectPath); // << this is line 294
}
I fail to understand how a "no such file or directory" error can occur when CREATING a directory.
When any folder along the given path is missing, mkdir will throw an ENOENT.
There are 2 possible solutions (without using 3rd party packages):
Recursively call fs.mkdir for every non-existent directory along the path (see the sketch below).
Use the recursive option, introduced in v10.12:
fs.mkdir('./path/to/dir', {recursive: true}, err => {})
Solved here: How to create full path with node's fs.mkdirSync?
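For older Node versions without the recursive option, a minimal sketch of the manual approach (option 1 above):

// Sketch: create each missing path segment in turn (for Node < 10.12).
const fs = require('fs');
const path = require('path');

function mkdirRecursiveSync(targetDir) {
  targetDir.split(path.sep).reduce((parent, child) => {
    const current = path.join(parent, child);
    if (!fs.existsSync(current)) fs.mkdirSync(current);
    return current;
  }, path.isAbsolute(targetDir) ? path.sep : '');
}

mkdirRecursiveSync('./path/to/dir');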
NodeJS version 10.12.0 added native support to both mkdir and mkdirSync for creating a directory recursively with the recursive: true option, as follows:
fs.mkdirSync(targetDir, { recursive: true });
And if you prefer fs Promises API, you can write
fs.promises.mkdir(targetDir, { recursive: true });
When you are using fs.mkdir or fs.mkdirSync and passing a path like folder1/folder2/folder3, folder1 and folder2 must already exist; otherwise you will get the above error.
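To make that concrete (a tiny illustration, with placeholder folder names):

const fs = require('fs');

// Throws ENOENT if folder1 or folder1/folder2 does not exist yet:
fs.mkdirSync('folder1/folder2/folder3');

// Works regardless on Node >= 10.12, creating missing parents:
fs.mkdirSync('folder1/folder2/folder3', { recursive: true });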
The following worked for me:
fs.mkdir( __dirname + '/realworks/', err => {})
The problem was caused by forever running the application relative to the working directory the forever start command is called in, not the location of the application entry point.
Try:
fs.mkdir('./realworks/', err => {})
The reason for the error is that if any of the folders along the path given to fs.mkdir or fs.mkdirSync is missing, these methods will throw/callback with an ENOENT error.
ENOENT is described in the Linux documentation as follows:
No such file or directory (POSIX.1-2001).
Typically, this error results when a specified pathname does not exist, or one of the components in the directory prefix of a pathname does not exist, or the specified pathname is a dangling symbolic link.
Another possible reason for ENOENT is that you lack sufficient privileges to create the directory.
This happened to me while building a docker image where I didn't have sufficient privilege to create a subfolder in the current WORKDIR. Changing the owner of the folder using --chown=user:usergroup OR changing the USER to the root user for the directive were both valid solutions to the problem.
What worked for me was:
Deleting my yarn.lock, package-lock.json, and node_modules
Reinstalling with yarn build
Restarting my local server
So... you might be using the Ubuntu terminal to create your React app.
It happens that I am testing the Ubuntu terminal that Windows recently launched for installation on Windows computers, like a virtual machine but actually not so messy.
I happened to have the same error as you folks. After testing all the options the community has given before, none of them worked. However, I did find a solution for my problem: it was giving me the ENOENT error (test file or directory not found), but the file was indeed there. I was using npm start, and came up with the idea of using sudo npm start... and it worked.
I'm running two commands
indexer idx_name --rotate
indexer idx_name --buildstops dict_file 10
Everything is fine when I run these commands from the command line. However, when I pass these two commands through my node application using exec, the first command works successfully, but for the second command the dict_file is not generated.
I tried some combinations with sudo, but it didn't help. I checked the stdout from both these ways (node and shell) and it looked the same.
Here is my node js code:
var exec = require('child_process').exec;
var cmd = 'indexer idx_name --rotate && indexer idx_name --buildstops dict_file 10';
exec(cmd, function(err, stdout, stderr) {
  console.log(stdout);
});
Is there something I'm missing ?
Whichever user is running node will need permission to write dict_file.
You might even find it easier to delete the file and let it be created by the right user via node (assuming that user can write to the folder).
Sudo could also work, but you will need to make sure the user running node has sudo permissions. Sorting that out is definitely outside the remit of Stack Overflow.
... do also check you're looking in the right place. In your example you don't show a path, so dict_file will just be created in the current working directory (not sure how node configures that).
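If the working directory turns out to be the culprit, exec accepts a cwd option, so you can pin where dict_file ends up (the path below is hypothetical):

// Sketch: run the indexer commands with an explicit working directory.
var exec = require('child_process').exec;
var cmd = 'indexer idx_name --rotate && indexer idx_name --buildstops dict_file 10';

exec(cmd, { cwd: '/path/where/dict_file/should/go' }, function(err, stdout, stderr) {
  console.log(stdout);
});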