AWS Lambda: module initialization error: Error at Error (native) at Object.fs.openSync (fs.js:641:18) - node.js

I have an AWS Lambda function that is triggered by a DynamoDB stream. The implementation is done in JS with ClaudiaJS. When the Lambda is deployed with the claudia create command there is no issue.
The problem is that when the same function is deployed with a GoCD pipeline using a dockerized build server, the following error occurs when the Lambda function is invoked.
module initialization error: Error
at Error (native)
at Object.fs.openSync (fs.js:641:18)
at Object.fs.readFileSync (fs.js:509:33)
at Object.Module._extensions..js (module.js:578:20)
at Module.load (module.js:487:32)
at tryModuleLoad (module.js:446:12)
at Function.Module._load (module.js:438:3)
at Module.require (module.js:497:17)
at require (internal/module.js:20:19)
Now I have spent over 10 hours and have no idea how to resolve this issue. Can someone please help me?
Lambda uses Node 6.10 and I use Babel to transpile to Node 6.10.
I tried node:boron and ubuntu:16.04 as builder images for Docker.

I spent more than a day on this issue. In the end, I tried almost every possible approach and finally solved it by switching from ClaudiaJS to Serverless. For everyone's benefit, here are the approaches I tried and their results.
Reproduced my localhost environment inside the build Docker container used by the GoCD pipeline (same Node version, same Yarn version, Ubuntu 16.04). The issue was still there.
Removed Docker and set up the GoCD pipeline to run directly on the build server (again with the same Node version, same Yarn version, and Ubuntu 16.04 as on my local machine). Again no luck; the issue persisted unchanged.
Committed the node_modules folder and build folder from my local machine to the Git repository and used those same node_modules and build files within the GoCD pipeline, without executing yarn and without transpiling the code on the build server. Nothing changed.
Finally, I switched to the Serverless framework. In the first attempt I used Serverless with Babel but without webpack, even though Serverless recommends webpack. The same issue occurred when the Lambda was deployed with the pipeline. Then I changed the configuration to use webpack with Serverless. All the issues were resolved and the Lambda deployed successfully. This is the webpack.config.js I used in the end:
const path = require('path');
const slsw = require('serverless-webpack');
const nodeExternals = require('webpack-node-externals');

const build = {
  entry: slsw.lib.entries,
  resolve: {
    extensions: ['.js'],
  },
  target: 'node',
  output: {
    libraryTarget: 'commonjs',
    path: path.join(__dirname, '.webpack'),
    filename: '[name].js',
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: [
          {
            loader: 'babel-loader',
          },
        ],
      },
    ],
  },
  mode: slsw.lib.webpack.isLocal ? 'development' : 'production',
  optimization: {
    // Do not minimize the code.
    minimize: false,
  },
  performance: {
    // Turn off size warnings for entry points
    hints: false,
  },
  externals: [nodeExternals()],
};

module.exports = build;

This error occurred for me when my Serverless instance was reading a .json file and extracting a JSON object out of it. So I created that JSON object inside the script instead; then everything was fine. I used a basic webpack.config.
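A minimal sketch of that workaround (the file name and the object's values are hypothetical, purely for illustration):

```javascript
// Instead of reading the file at runtime, which fails when the .json
// asset is not shipped inside the deployment package:
//   const config = JSON.parse(fs.readFileSync('./config.json', 'utf8'));
// declare the object directly in the script so the bundler carries it
// inside the bundle itself:
const config = {
  region: 'us-east-1',   // hypothetical values
  tableName: 'my-table',
};

module.exports = config;
```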

Let me prefix this by saying that I do not have specific experience with GoCD, but I have run into this error in other contexts.
One potential cause for this error is a file permissions issue in deploying the code to the VM.
The error is generic and means that a Lambda function was unable to start. You can receive this error inside AWS as well. Unfortunately, the logging level you see with GoCD appears to be the same as AWS CloudWatch's, which is not very detailed and does not tell you what prevented Lambda from starting. You need more logging to determine the exact cause in your situation.
If you do happen to experience this error in AWS, open your Lambda function. At the top of the AWS page there should be a dropdown with a "Test" button next to it.
Open the dropdown and select "Configure Test Events." You will have to craft this test to match your specific lambda function. Next, select the new test and click the "Test" button. Lambda will show you a success or failure message with details from the call.
In my case, we had scripted the upload with the AWS SAM utility on a Linux box, and the file permissions were not quite right (the error was something like "permission denied, open '/var/task/index.js'").
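A hedged sketch of the kind of permissions check and fix involved (the file name is a stand-in for the real handler; exact modes depend on your packaging):

```shell
# stand-in for the real handler file; Lambda opens it as /var/task/index.js
touch index.js
chmod 600 index.js   # owner-only: the Lambda runtime user could not read this
chmod a+r index.js   # make it world-readable before zipping the package
```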

Related

@loadable/server pass the whole stats JSON to eval('require')(modulePath)

I'm trying to set up SSR for a React app with @loadable/components. I set everything up based on the docs with the Babel and webpack plugins. When I try to run node server.js it runs OK, but when I open a browser it throws the following error (into the Node console):
TypeError [ERR_INVALID_ARG_TYPE]: The "id" argument must be of type string. Received an instance of Object
at validateString (internal/validators.js:118:11)
at Module.require (internal/modules/cjs/loader.js:1033:3)
at require (internal/modules/cjs/helpers.js:72:18)
at smartRequire (/Users/max/Documents/repos/app/node_modules/@loadable/server/lib/util.js:44:25)
at new ChunkExtractor (/Users/max/Documents/repos/app/node_modules/@loadable/server/lib/ChunkExtractor.js:181:50)
at renderer (webpack://app/./node_modules/@MYSCOPE/elm/dist/elm.esm.js?:3619:19)
at eval (webpack://app/./src/server.tsx?:64:90)
at processTicksAndRejections (internal/process/task_queues.js:97:5) {
code: 'ERR_INVALID_ARG_TYPE'
}
As you can see, there is @MYSCOPE in the traceback, which holds some of my internal packages (if it matters).
@loadable/server/lib/util.js contains the smartRequire function shown in the stack trace above.
When I console.log(modulePath) on line 42, I see the whole stats JSON output, which seems wrong; I should get a single module path (as I understand it).
Any help?
I can share specific parts of my configuration files if needed. Since I see my own package in the console output, it seems something is wrong with its build (it works perfectly on the client side with the CJS build), but having the full stats object as the module path is very confusing.
UPD: Demo https://www.dropbox.com/s/9r947cgg4qvqbu4/loadable-test.zip?dl=0
Run:
yarn
yarn dev:server
# go to localhost:3000 and see the error in console
To rebuild:
yarn
yarn dev:build-client
yarn dev:build-server
yarn dev:server # go to localhost:3000
The statsFile option passed to ChunkExtractor expects a path to the loadable-stats.json file, not the actual JSON content of it. By doing require('../loadable-stats.json'), webpack actually resolves the JSON at build time and assigns it to the loadableJson variable.
You can change your loadableJson as follows:
import path from 'path';
const loadableJson = path.resolve(__dirname, '../bundle_client/loadable-stats.json');
This will solve the problem from your question. But if you only do this, you will notice another problem: loadable assumes by default that your entry chunk name is main. This is not the case in your demo, as you have set the entry chunk name to app instead.
entry: {
  app: ['@babel/polyfill', './src/index.tsx'],
},
To solve this, simply tell loadable about your entrypoint names by passing an array to the ChunkExtractor constructor, as such:
const extractor = new ChunkExtractor({
  statsFile: loadableJson,
  entrypoints: ['app'], // name of your entry chunk
});
That's it, everything should now be working properly!
If it helps, I set up the demo on GitHub so you can easily see the changes I made here.

PM2 in cluster mode doesn't start after reboot until I remove the folder

I'm new to NodeJS, but for some reasons I need to use this app:
https://github.com/ZitRos/save-analytics-from-content-blockers
I've cloned the repository, installed the required modules using npm install, and I'm able to run the app using the command npm start.
Because I need to start the app on boot and I want to use it in multiple threads, I've installed and configured PM2 for that, and I'm using this ecosystem.config.js:
module.exports = {
  apps: [{
    name: 'Analytics Proxy',
    script: './src/api.js',
    cwd: '/server/proyects/analytics_proxy/',
    // Options reference: https://pm2.keymetrics.io/docs/usage/application-declaration/
    interpreter_args: '-r esm',
    instances: 1,
    exec_mode: 'cluster',
    autorestart: true,
    watch: true,
    max_memory_restart: '256M',
    env: {
      NODE_ENV: 'production',
    },
  }],
};
On the first attempt the cluster was running without problems, even with 4 instances, but after a reboot I noticed that the cluster won't boot again. Looking into the logs, I saw there is an error finding the esm module:
Error: Cannot find module 'esm'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:636:15)
at Function.Module._load (internal/modules/cjs/loader.js:562:25)
at Module.require (internal/modules/cjs/loader.js:692:17)
at Module._preloadModules (internal/modules/cjs/loader.js:901:12)
at preloadModules (internal/bootstrap/node.js:602:7)
at startup (internal/bootstrap/node.js:273:9)
at bootstrapNodeJSCore (internal/bootstrap/node.js:623:3)
I've tried installing the module globally, which doesn't fix the problem. It is strange anyway, because in fork mode it starts without problems even after a reboot, and if I remove the entire /root/.pm2/ folder, then it works again in cluster mode.
Also, if I use the pm2 update command, sometimes it works again.
(screenshot: output of the pm2 update command)
Does anyone know how I can solve this problem?
Best regards, and thanks!

"TypeError: Cannot read property 'indexOf' of undefined" raised when using packages "onoff" or "rpi-gpio" with WebPack

I wrote a Node.js project for the Raspberry Pi, to control the GPIO.
This is my first time using GPIO.
The project uses the "onoff" package to communicate with the GPIO, and the bundler is webpack.
I can compile the project without issue.
But when I run the application on the Raspberry Pi, I receive this error:
webpack:///./node_modules/bindings/bindings.js?:178
if (fileName.indexOf(fileSchema) === 0) {
^
TypeError: Cannot read property 'indexOf' of undefined
at Function.getFileName (webpack:///./node_modules/bindings/bindings.js?:178:16)
at bindings (webpack:///./node_modules/bindings/bindings.js?:82:48)
at eval (webpack:///./node_modules/epoll/epoll.js?:7:86)
at eval (webpack:///./node_modules/epoll/epoll.js?:15:3)
at Object../node_modules/epoll/epoll.js (/home/pi/xilium/raspi.node/Raspi.node/dist/raspi.multi-monitor.js:809:1)
at __webpack_require__ (/home/pi/xilium/raspi.node/Raspi.node/dist/raspi.multi-monitor.js:20:30)
at eval (webpack:///./node_modules/rpi-gpio/rpi-gpio.js?:6:20)
at Object../node_modules/rpi-gpio/rpi-gpio.js (/home/pi/xilium/raspi.node/Raspi.node/dist/raspi.multi-monitor.js:1375:1)
at __webpack_require__ (/home/pi/xilium/raspi.node/Raspi.node/dist/raspi.multi-monitor.js:20:30)
at eval (webpack:///./src/raspi.multi-monitor.ts?:29:15)
So I tried replacing the "onoff" package with "rpi-gpio". Unfortunately, the result is the same.
It seems there is a configuration issue with the "epoll" package (a dependency of both "onoff" and "rpi-gpio").
Can anyone help me?
As a disclaimer, I am new to Electron, webpack and everything around them, but after a lot of searching I finally managed to get it working. I am not sure this is the proper way to do it yet, but I got it to work.
While searching far and wide, I found this comment on an issue from the serialport package, where they use electron-rebuild to rebuild the serialport module. More info about using native node modules can be found in the Electron documentation here.
Basically, I added this to the scripts section of my package.json:
"rebuild": "electron-rebuild -f -w onoff"
Then I ran npm run rebuild. Unfortunately, it still didn't work.
The missing link was to tell webpack that the onoff module should be external.
I did it like so, in the webpack config that builds the electron parts of my app (setup is based on this guide I read):
'use strict';

const path = require('path');
const webpack = require('webpack');

module.exports = {
  mode: 'development',
  entry: './src/electron/main.js',
  output: {
    filename: 'index.js',
    path: path.resolve(__dirname, 'out/electron')
  },
  module: {
    rules: []
  },
  resolve: {
    extensions: ['.js']
  },
  plugins: [
    // This is the important part for onoff to work
    new webpack.ExternalsPlugin('commonjs', [
      'onoff'
    ])
  ],
  // tell webpack that we're building for electron
  target: 'electron-main',
  node: {
    // tell webpack that we actually want a working __dirname value
    // (ref: https://webpack.js.org/configuration/node/#node-__dirname)
    __dirname: false
  }
};
As I wrote this, I stumbled upon the externals config option, which might work just as well.
Now I can finally blink my LEDs. I hope this answer helps anyone else who runs into the same issue in the future.

How to bundle a Node.js application?

Consider that I have finished my Node.js application with Express and MongoDB. I can bundle my application and deliver it to the client, but in that case the client has to install all required npm modules and change the MongoDB connection URL. So to counter this issue:
Is there any other way to bundle my application so the client does not need to install any npm modules to run it? If yes, how can the client then connect to their own MongoDB?
pkg by Vercel works really well for me. It bundles everything into a single executable that runs out of a terminal. This makes it really easy to distribute.
To connect to a Mongo instance, you would have to have some kind of configuration file that the user edits. I recommend putting it in the user data folder (Application Support on Mac, AppData on Windows), but you could also have it sit right next to the packaged executable.
You can use good old webpack by setting its target. Here is an example webpack.config.js file for Node:
import webpack from 'webpack';

export default {
  entry: './server.js',
  output: {
    // output bundle will be in `dist/built.js`
    filename: 'built.js',
  },
  target: 'node',
  mode: 'production',
  plugins: [
    new webpack.optimize.LimitChunkCountPlugin({
      maxChunks: 1,
    }),
  ],
};

Node process.env.VARIABLE_NAME returning undefined

I'm using environment variables on my mac to store some sensitive credentials, and trying to access them through Node. I added them into my environment profile with
export VARIABLE_NAME=mySensitiveInfo
When I use echo $VARIABLE_NAME I receive the correct output (my sensitive info).
However, when I try to access this same variable in Node with process.env.VARIABLE_NAME and print it to the console, I get undefined.
Other environment variables appear to be okay though. For example, when I console.log(process.env.FACEBOOK_CALLBACK_URL), it prints the correct value to my console. I added FACEBOOK_CALLBACK_URL a few days ago.
Do I have to restart my machine or something? Does it take a certain time before environment variables become available in Node? The closest answer I've seen on SO is this post, but nobody was able to figure out why it was happening.
The nodemon.json file is only for nodemon-specific configuration, so to create custom environment variables we can use the dotenv package.
First, install the dotenv package:
npm install dotenv --save
After that, create a .env file in the project root and include environment variables as below:
MONGO_ATLAS_PW=xxxxx
JWT_KEY=secret
Finally, inside your app.js file, insert the following after your imports:
require('dotenv').config()
Then you can use environment variables like this:
process.env.MONGO_ATLAS_PW
process.env.JWT_KEY
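A minimal sketch of reading those variables defensively (the variable names come from the answer above; the fallback value is hypothetical):

```javascript
// after require('dotenv').config() runs, the values land on process.env;
// an explicit fallback avoids silently passing undefined around
const jwtKey = process.env.JWT_KEY || 'dev-only-secret'; // hypothetical default

if (!process.env.MONGO_ATLAS_PW) {
  console.warn('MONGO_ATLAS_PW is not set; check that .env was loaded');
}
```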
process.env.VARIABLE_NAME returns undefined because the Node.js execution environment does not know about the newly added VARIABLE_NAME yet. To fix the issue, the Node.js execution environment (e.g. the IDE) needs to be restarted.
The following steps can be used to reproduce this issue:
Open an IDE such as WebStorm and write a simple Node.js program: console.log(process.env.VARIABLE_NAME). It will print undefined as expected, since VARIABLE_NAME is not defined yet. Keep the IDE running; don't close it.
Open environment profile such as .bash_profile and add export VARIABLE_NAME=mySensitiveInfo in it.
Open a system console and run source .bash_profile so that the above export statement is executed. From now on, whenever a system console is opened, the VARIABLE_NAME environment variable exists.
In the system console, execute the Node.js program from step 1; it will print mySensitiveInfo.
Switch to the IDE and execute the Node.js program; it will print undefined.
Restart the IDE and execute the Node.js program; this time it will print mySensitiveInfo.
I ran into this problem just now, and I solved it with this in my webpack config:
const plugins = [
  new webpack.NoEmitOnErrorsPlugin(),
  // after compiling, `process.env` will be defined as this object globally
  new webpack.DefinePlugin({
    BUILD_AT: Date.now().toString(32),
    DEBUG: process.env.NODE_ENV !== 'production',
    'process.env': {
      'NODE_ENV': JSON.stringify(process.env.NODE_ENV || 'development'),
      'VARIABLE_NAME': JSON.stringify(process.env.VARIABLE_NAME)
    }
  })
];
For everyone who might have this issue in the future and for whom none of the solutions above work: it may also be because you're running node <filename>.js in a subfolder or subdirectory. Since your .env file is in the root folder, process.env.<variable> will always return undefined.
A simple way to check is to try the following code 👉
const test = require('dotenv').config()
console.log(test)
In the console we get the following error
{ error: Error: ENOENT: no such file or directory, open 'C:\Users\vxcbv\Desktop\voisascript-auth\model\.env'
at Object.openSync (node:fs:585:3)
at Object.readFileSync (node:fs:453:35)
at Object.config (C:\Users\vxcbv\Desktop\voisascript-auth\node_modules\dotenv\lib\main.js:72:42)
at Object.<anonymous> (C:\Users\vxcbv\Desktop\voisascript-auth\model\db.js:1:32)
at Module._compile (node:internal/modules/cjs/loader:1105:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1159:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
at node:internal/main/run_main_module:17:47 {
errno: -4058,
syscall: 'open',
code: 'ENOENT',
path: 'C:\\Users\\vxcbv\\Desktop\\voisascript-auth\\model\\.env' } }
Emphasis on the Error: ENOENT: no such file or directory, open.
To solve this, just navigate back to the root and run the file from there, but specify the full path of the file from the root, like 👉
node /<subfolder>/<file.js>
Please check the directory of the .env file. If it is in the same folder as your app.js (or abc.js), move the .env one level up.
Close the Code Runner or IDE environment once and reopen it.
If you use VS Code, close it completely (likewise for CMD, PowerShell, etc.).
I had the same issue. In my case, the .env file and db.js were inside a subfolder called configs.
So, in index/server.js, while importing dotenv, I used:
require('dotenv').config({path: './configs/.env'});
This way my env variables could be accessed.
Hope this example of mine helps you! 😀
