Microsoft Azure hosting | Node.js | file doesn't get compiled by webpack

I'm trying to build a web app using Node.js. It compiles and runs fine on my local machine, but when I try to host it on Azure, webpack seems to cause a problem.
//webpack.config.js
var config = {
    entry: './main.js',
    output: {
        path: '/',
        filename: 'index.js',
    },
    devServer: {
        // inline: true,
        // port: 8080
    },
    module: {
        loaders: [
            {
                test: /\.jsx?$/,
                exclude: /node_modules/,
                loader: 'babel-loader',
                query: {
                    presets: ['es2015', 'react']
                }
            }
        ]
    }
};
module.exports = config;
This is the file hierarchy:
This is the sources tab in Chrome DevTools on my local machine. I notice here that index.js gets compiled as specified in the config file.
Then I just place the source on the server using git. This is the response I get from the server:
This is the sources tab for the hosting server.
I suspect it could be because of a difference in how directories are interpreted on my local machine versus the host. I'm using macOS.

Basically, you need to build your webpack application before deploying it to the Azure production server. Alternatively, you can leverage a custom deployment script to install Node.js modules and run custom scripts that build your webpack application on Azure Web Apps during the Azure deployment task. For detailed steps, please check out this post on Stack Overflow.
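For example, one common way to trigger the build during deployment is a postinstall hook in package.json (a minimal sketch; the script names are illustrative and assume webpack is a local dependency):
{
    "scripts": {
        "build": "webpack --config webpack.config.js",
        "postinstall": "npm run build"
    }
}
Since Kudu runs npm install on the server during deployment, the postinstall hook runs webpack there and produces the bundle, so you don't have to commit build output to git.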

Related

Node/Express server deploy with Heroku and PM2

I'm attempting to add clustering via PM2 to my Node/Express application and deploy it.
I've set up the following command:
pm2 start build/server/app.js -i max
The above works fine locally. I'm testing the functionality in a staging environment on Heroku, on a Performance 1X dyno.
The log output shows the command attempting 1 instance rather than max. It shows the typical info after a successful pm2 start, but you can see the app immediately crashes afterward.
Any advice or guidance is appreciated.
I ended up using the following documentation: https://pm2.keymetrics.io/docs/integrations/heroku/
Using an ecosystem.config.js with the following:
module.exports = {
    apps: [
        {
            name: `app-name`,
            script: 'build/server/app.js',
            instances: "max",
            exec_mode: "cluster",
            env: {
                NODE_ENV: "localhost"
            },
            env_development: {
                NODE_ENV: process.env.NODE_ENV
            },
            env_staging: {
                NODE_ENV: process.env.NODE_ENV
            },
            env_production: {
                NODE_ENV: process.env.NODE_ENV
            }
        }
    ],
};
Then the following package.json script handles deployment for whichever environment I want to deploy, e.g. production:
"deploy:cluster:prod": "pm2-runtime start ecosystem.config.js --env production --deep-monitoring",
I got the same error, but I fixed it by adding
{
    "preinstall": "npm i -g pm2",
    "start": "pm2-runtime start build/server/app.js -i 1"
}
to the scripts section of my package.json file.
This is advised for a production environment, whereas running
pm2 start build/server/app.js -i max
is for development purposes.

Run multiple git branches with PM2

I have an express application with three branches: master, staging, and development. I want to run all three branches simultaneously, each on a different port, i.e. master on 3000, staging on 3001, development on 3002.
I'm hoping to achieve this with PM2, but haven't been able to get this to work yet. I was trying with an ecosystem.config.yml like the one below, which successfully runs the application on ports 3000-3002 and injects the corresponding environment variables, but runs the code of the active branch on all three ports.
Is it possible to configure PM2 to run different git branches on different ports? Maybe with the PM2 deploy command and associated configuration somehow?
module.exports = {
    apps: [
        {
            name: "api",
            script: "app.js",
            watch: ".",
            env: {
                NODE_ENV: "production",
                PORT: "3000",
            },
        },
        {
            name: "api-staging",
            script: "app.js",
            watch: ".",
            env: {
                NODE_ENV: "staging",
                PORT: "3001",
            },
        },
        {
            name: "api-dev",
            script: "app.js",
            watch: ".",
            env: {
                NODE_ENV: "development",
                PORT: "3002",
            },
        },
    ],
};
Answering my own question here. The only way I've found to easily do this is to make a local copy of each branch of my repo in its own directory using git clone --branch <branchname> --single-branch <remote-repo-url>, e.g. git clone --branch development --single-branch git@github.com:myuser/my-api.git api-dev to clone the development branch of my repo into a local directory /api-dev.
Then in each local directory, which contains a single branch of my repo, I create an ecosystem.config.js file with configuration appropriate for that branch, e.g.
module.exports = {
    apps: [
        {
            name: "api-dev",
            script: "app.js",
            watch: ".",
            env: {
                NODE_ENV: "development",
                PORT: "3002",
            },
        },
    ],
};
Then I can run pm2 start ecosystem.config.js in each of my local repositories and run each branch on its own port.
Does anyone know a simpler way to do this?
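One consolidation that might help (a hedged sketch, not tested here): PM2 app definitions accept a cwd option, so a single top-level ecosystem.config.js can point at each single-branch clone and start all three apps with one command. The directory names below are illustrative:
module.exports = {
    apps: [
        {
            name: "api",
            script: "app.js",
            cwd: "./api",          // clone of the master branch
            env: { NODE_ENV: "production", PORT: "3000" },
        },
        {
            name: "api-staging",
            script: "app.js",
            cwd: "./api-staging",  // clone of the staging branch
            env: { NODE_ENV: "staging", PORT: "3001" },
        },
        {
            name: "api-dev",
            script: "app.js",
            cwd: "./api-dev",      // clone of the development branch
            env: { NODE_ENV: "development", PORT: "3002" },
        },
    ],
};
With that, a single pm2 start ecosystem.config.js replaces the three per-directory invocations; you still need the per-branch clones and a git pull in each to update them.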

Strapi on Azure does not run

I am using the latest version of Strapi (v3.x) with Node v10.15.2. I am trying to deploy to an Azure Web App using this server.js configuration.
module.exports = ({ env }) => ({
    host: env('HOST', 'localhost'),
    port: env.int('PORT', 1337),
    url: 'https://clinicaback.azurewebsites.net',
    cron: {
        enabled: false
    },
    admin: {
        url: "/dashboard",
        autoOpen: false,
        build: {
            backend: "https://clinicaback.azurewebsites.net"
        }
    }
});
It builds successfully and seems to be running with the development configuration. Here is the output from Azure's Kudu service.
But when I visit the website, it does not load. I ran Diagnose and solve problems on Azure, and it's showing this:
The webapp only supports port 80 and port 443. It is recommended to modify the relevant port settings in your code.
It is recommended to release the code after the build, and to add npx serve -s as the Startup Command under App Service > General settings.
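In line with that first point, it's worth making sure the app listens on the port Azure injects rather than a fixed one. A hedged sketch of the relevant part of server.js (binding to 0.0.0.0 is an assumption for App Service's containerized environment; the URL is from the question):
module.exports = ({ env }) => ({
    // bind on all interfaces so Azure's front end can reach the process
    host: env('HOST', '0.0.0.0'),
    // Azure injects the internal listening port via the PORT environment variable
    port: env.int('PORT', 1337),
    url: 'https://clinicaback.azurewebsites.net',
});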

How to set up serverless.yml and webpack.config for a multiple-runtime AWS Lambda service

I want to deploy AWS Lambda functions with Node8.10 and Ruby2.5 runtimes from one serverless.yml file.
I set up the following folder structure, with /node and /ruby holding my respective handlers.
/nodeRubyLambdas
    /node
        handler.js
        package.json, package-lock.json, /node_modules
    /ruby
        rubyRijndaelEncryption.rb
        Gemfile, Gemfile.lock, /vendor
    serverless.yml
    webpack.config.js
    package.json (for serverless-webpack)
Here is my serverless.yml
service: nodeRubyLambdas
plugins:
  - serverless-webpack
  - serverless-offline
custom:
  webpack:
    webpackConfig: ./webpack.config.js
    includeModules: true
provider:
  name: aws
  stage: dev
  region: us-west-2
  iamRoleStatements:
    - Effect: Allow
      Action:
        - lambda:InvokeFunction
      Resource: "*"
package:
  individually: true
functions:
  nodeMain:
    handler: node/handler.main
    runtime: nodejs8.10
    events:
      - http:
          path: main
          method: get
    package:
      individually: true
  rubyEncryption:
    handler: ruby/rubyRijndaelEncryption.lambda_handler
    runtime: ruby2.5
    environment:
      RIJNDAEL_PASSWORD: 'a string'
    package:
      individually: true
My webpack configuration (this is the base example; I just added the bit that ignores Ruby files when I got my first error):
const slsw = require("serverless-webpack");
const nodeExternals = require("webpack-node-externals");

module.exports = {
    entry: slsw.lib.entries,
    target: "node",
    // Generate sourcemaps for proper error messages
    devtool: 'source-map',
    // Since 'aws-sdk' is not compatible with webpack,
    // we exclude all node dependencies
    externals: [nodeExternals()],
    mode: slsw.lib.webpack.isLocal ? "development" : "production",
    optimization: {
        // We do not want to minimize our code.
        minimize: false
    },
    performance: {
        // Turn off size warnings for entry points
        hints: false
    },
    // Run babel on all .js files and skip those in node_modules
    module: {
        rules: [
            {
                test: /\.js$/,
                loader: "babel-loader",
                include: __dirname,
                exclude: [/node_modules/, /\.rb$/]
            }
        ]
    }
};
Fail #0:
[Webpack Compilation error] Module parse failed
Fail #1:
Basically, webpack assumes all functions are .js and tries to package them as such. Based on this suggestion, I forced the entry point in my webpack config to be my handler.js:
module.exports = {
    entry: "./node/handler.js",
    target: "node",
    ...
This packages ONLY the Node Lambda. An empty placeholder for the Ruby Lambda is created on AWS.
Fail #2:
I commented out webpack from serverless.yml and added include and exclude statements in the functions package options.
functions:
  nodeMain:
    package:
      individually: true
      include:
        - node/**
        - handler.js
      exclude:
        - ruby/**
        - rubyLambda/**
  rubyEncryption:
    package:
      individually: true
      include:
        - vendor/**
        - rubyRijndaelEncryption.rb
      exclude:
        - Gemfile
        - Gemfile.lock
        - node/**
This gets an [ENOENT: no such file or directory] for node/node_modules/@babel/core/node_modules/.bin/parser. This file is not there, but I don't understand why it is looking for it, since webpack is not being called.
Sort of success?:
I was able to get the Lambdas to deploy if I commented out webpack and used
serverless deploy function -f <function name here>
to deploy the Ruby Lambda and then uncommented webpack and used the same thing to deploy the Node Lambda.
I'm convinced that there's a better way to get them to deploy. Have I missed something in my setup? Is there another option I haven't tried?
P.S. I did see this pull request https://github.com/serverless-heaven/serverless-webpack/pull/256, but it seems to have been abandoned since 2017.
serverless-webpack is not designed for non-JS runtimes. It hijacks serverless packaging and deploys ONLY the webpack output.
Here are your options:
Don't use serverless-webpack and simply use serverless' built-in packaging.
You can use webpack directly (not serverless-webpack), and change your build process to compile using webpack first and then let serverless deploy the output folder.
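For the second option, the wiring can be as simple as a pair of package.json scripts (a sketch; the script names and config path are illustrative):
{
    "scripts": {
        "build": "webpack --config webpack.config.js",
        "deploy": "npm run build && serverless deploy"
    }
}
serverless then packages whatever webpack emitted, according to your include/exclude rules.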
P.S. The package.individually property is a root-level property in your serverless.yml. It shouldn't be in provider or in your function definitions.
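In other words, the corrected layout looks like this (a trimmed sketch using the names from the question):
service: nodeRubyLambdas
package:
  individually: true
functions:
  nodeMain:
    handler: node/handler.main
    runtime: nodejs8.10
  rubyEncryption:
    handler: ruby/rubyRijndaelEncryption.lambda_handler
    runtime: ruby2.5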
For those who may be looking for options for multiple-runtimes other than serverless-webpack, I ended up switching to this plugin: https://www.npmjs.com/package/serverless-plugin-include-dependencies.
It works with my runtimes (Ruby and Node) and lets you use package.individually with package.include/exclude at the root and function level if the plugin misses something.
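For reference, a minimal sketch of that setup (the include pattern is illustrative):
plugins:
  - serverless-plugin-include-dependencies
package:
  individually: true
functions:
  rubyEncryption:
    package:
      include:
        - vendor/**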

Google credentials JSON file on Serverless AWS

I'm trying to use Dialogflow (API.AI, or the Google Cloud Dialogflow API) in my serverless project, but the problem is that I couldn't find any solution for pushing the Google credentials JSON file to the serverless side. I followed this tutorial (it's on the Google Cloud website), and it works correctly locally but not on Lambda. I even tried to copy the file via webpack, but it still doesn't work. For Dialogflow, I'm using the dialogflow v2 Node.js library.
--- edit
I'm getting this error on Lambda, which I think is related to not finding the JSON file, since I'm not using this module directly (Dialogflow is):
(rejection id: 2): Error: Cannot find module '/var/task/node_modules/grpc/src/node/extension_binary/node-v48-linux-x64-glibc/grpc_node.node'
--- edit end
node.js: 6.x
serverless: 1.26
====
serverless.yml
service: test-dialogflow-svc
plugins:
  - serverless-webpack
  - serverless-plugin-common-excludes
  - serverless-offline
  - serverless-offline-scheduler
package:
  individually: true
  include:
    - googleCredentials.json
custom:
  webpackIncludeModules: true
  serverless-offline:
    port: 3000
provider:
  name: aws
  runtime: nodejs6.10
  stage: dev
  region: eu-west-2
  memorySize: 128
  timeout: 5
  environment:
    GOOGLE_APPLICATION_CREDENTIALS: './googleCredentials.json'
functions:
  hello:
    handler: src/handlers/helloworld.handler
    events:
      - http:
          path: hello
          method: get
    package:
      include:
        - googleCredentials.json
webpack.config.js
const path = require('path');
const slsw = require('serverless-webpack');
const nodeExternals = require('webpack-node-externals');
const WebpackPluginCopy = require('webpack-plugin-copy');

module.exports = {
    entry: slsw.lib.entries,
    target: 'node',
    resolve: {
        extensions: ['.js', '.json', '.ts', '.tsx']
    },
    externals: [nodeExternals()],
    module: {
        rules: [
            {
                test: /\.ts(x?)$/,
                use: [
                    {
                        loader: 'awesome-typescript-loader'
                    }
                ]
            }
        ]
    },
    plugins: [ // I tried to copy the file with webpack as well
        new WebpackPluginCopy([{
            copyPermissions: true,
            from: './googleCredentials.json'
        }])
    ],
    output: {
        libraryTarget: 'commonjs',
        path: path.join(__dirname, '.webpack'),
        filename: '[name].js'
    }
};
The answer to this question has two parts:
1. Copy the Google credentials .json file into the serverless .zip bundle.
2. Compile the gRPC C++ native Node module for amazon-linux, or use the REST JSON API instead.
1) It's possible to copy the Google credentials .json file into the .zip bundle using the serverless-webpack plugin and the webpack-plugin-copy.
serverless.yml
...
plugins:
  - serverless-webpack
...
webpack.config.js
...
const WebpackPluginCopy = require('webpack-plugin-copy');
module.exports = {
    ...
    plugins: [
        new WebpackPluginCopy([{
            copyPermissions: true,
            from: `./googleCredentials.json`,
        }])
    ],
};
2) The Dialogflow Node client uses this gRPC client, which has a C++ native module dependency. This is also true for all the other Node clients for Google Cloud Platform products, such as Datastore.
You will need to build the native C++ modules on an amazon-linux instance, either on your machine through Docker or on an EC2 instance. See:
C++ Addons as AWS Lambda functions
Using Packages and Native nodejs Modules in AWS Lambda
REST JSON API instead of gRPC
Since native C++ modules are annoying to build, and the Google Cloud Platform Node clients all add ~30MB to your serverless .zip bundle, you might want to avoid the gRPC client and find/write an HTTP client that calls the REST JSON API instead. JSON over HTTP has higher latency than gRPC, but this is not significant unless you have many layers of microservices calling each other.
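For example, a hedged sketch of a detectIntent call over REST, using google-auth-library for the OAuth part (the project and session IDs are placeholders, and this assumes googleCredentials.json is bundled as shown above):
const { JWT } = require('google-auth-library');

async function detectIntent(projectId, sessionId, text) {
    // authenticate with the bundled service-account key; no gRPC involved
    const client = new JWT({
        keyFile: './googleCredentials.json',
        scopes: ['https://www.googleapis.com/auth/cloud-platform'],
    });
    const url = `https://dialogflow.googleapis.com/v2/projects/${projectId}/agent/sessions/${sessionId}:detectIntent`;
    const res = await client.request({
        url,
        method: 'POST',
        data: { queryInput: { text: { text, languageCode: 'en' } } },
    });
    return res.data.queryResult;
}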
In the future, the Node gRPC clients might work without C++ modules, using pure JavaScript, and weigh far less than 30MB, but at the time of writing there is no sign of commitment apart from an alpha-stage submodule in the gRPC Node client.
