I am trying to deploy my Node.js app to an Azure Web App via Bitbucket.
When I checked the wwwroot folder in the Kudu console, I could not find a node_modules folder, and the app therefore failed to start.
I have tried both npm install and npm install --production in the Kudu console (inside the wwwroot folder), and I could see node_modules and its files being installed via FileZilla. However, when I try to start the app again, node_modules just disappears; I can't see it in the Kudu console or in FileZilla.
The package.json file in the project folder:
{
  "name": "fo",
  "version": "1.0.0",
  "description": "xx xx xx",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "xx xx",
  "license": "MIT",
  "dependencies": {
    "angular-map-it": "0.0.20",
    "angular-maps": "^6.0.1",
    "angular-waypoints": "^2.0.0",
    "axios": "^0.19.0",
    "bcrypt": "^3.0.6",
    "bluebird": "^3.5.5",
    "body-parser": "^1.19.0",
    "connect-mongodb-session": "^2.2.0",
    "convert-json": "^0.5.0",
    "csvtojson": "^2.0.10",
    "dotenv": "^8.0.0",
    "express": "^4.17.1",
    "express-session": "^1.16.2",
    "fixed-width-string": "^1.0.0",
    "guid": "0.0.12",
    "json2csv": "^4.5.2",
    "jsontoxml": "^1.0.1",
    "moment": "^2.24.0",
    "moment-business-days": "^1.1.3",
    "money": "^0.2.0",
    "mongoose": "^5.6.4",
    "multer": "^1.4.1",
    "ng-storage": "^0.3.2",
    "node-crisp-api": "^1.8.3",
    "nodemailer": "^6.3.0",
    "objects-to-csv": "^1.0.1",
    "open-exchange-rates": "^0.3.0",
    "sanitize": "^2.1.0",
    "svg-assets-cache": "^1.1.3"
  }
}
I don't understand how people get Node.js apps to work on Azure. Why does node_modules keep disappearing, and why isn't Azure installing the dependencies automatically based on my package.json?
Azure App Service understands package.json and npm-shrinkwrap.json files and can install modules based on entries in these files.
Azure App Service does not support all native modules and might fail when compiling modules with specific prerequisites. While some popular modules like MongoDB have optional native dependencies and work fine without them, the following workarounds proved successful with almost all native modules available today:
Navigate to Kudu - https://yoursite.scm.azurewebsites.net/
Locate the wwwroot folder and run the install command:
cd site
cd wwwroot
npm install
Run npm install on a Windows machine that has all of the native module's prerequisites installed, then deploy the resulting node_modules folder as part of the application to Azure App Service. Before compiling, check that your local Node.js installation has a matching architecture and a version as close as possible to the one used in Azure (the current values can be checked at runtime from the properties process.arch and process.version), and ensure the modules are compiled against that same environment.
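For example, a minimal check (a sketch; the file name is arbitrary) that you can run locally and again in the Kudu console to compare the two environments:
// check-runtime.js - prints the Node.js version and CPU architecture of the runtime executing it
console.log('node version:', process.version); // e.g. v8.11.1
console.log('architecture:', process.arch);    // e.g. x64 or ia32
Run it with node check-runtime.js in both places and compare the output before compiling native modules.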
Azure App Service can be configured to execute custom bash or shell scripts during deployment, giving you the opportunity to execute custom commands and precisely configure the way npm install is being run. For a video showing how to configure that environment, see Custom Website Deployment Scripts with Kudu. Kindly ensure that all the configuration is appropriate.
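As a rough illustration only (a minimal sketch, not the full script that the kuduscript generator produces; the copy step is deliberately simplified), the custom hook consists of a .deployment file at the repository root that points Kudu at your script, plus the script itself, which is where you control exactly how npm install runs:
.deployment
[config]
command = deploy.cmd
deploy.cmd
@echo off
:: hypothetical minimal deployment script: copy the repository to wwwroot, then install production dependencies there
xcopy "%DEPLOYMENT_SOURCE%" "%DEPLOYMENT_TARGET%" /S /Y /I
cd /d "%DEPLOYMENT_TARGET%"
call npm install --production
DEPLOYMENT_SOURCE and DEPLOYMENT_TARGET are environment variables that Kudu provides to deployment scripts.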
If the issue still persists, kindly let us know the specific error message you receive (beyond "app failed to start") for further investigation, and also take a look at ‘Best practices and troubleshooting guide for node applications on Azure App Service Windows’ for more details on the topic.
I've built a React app with create-react-app that uses an API built on Express.
I'm trying to deploy the app to Heroku and I've run into some issues. This will be my first deploy.
Originally, I separated the Express API backend from the React front end by using two servers operating on different ports. Then I used concurrently, in the app's top-level package.json, to start both servers.
The project looks like:
app
|-- package.json
|-- client
|   |-- package.json
|   |-- public
|   |-- src
|-- server
|   |-- package.json
|   |-- app.js
This works fine locally when webpack launches a development server for the React app. On deploy, however, Heroku would point the landing page to the express server, rather than the react-app home page, resulting in, well, a whole lot of nothing.
I'm wondering if I should:
A. Run everything through a single express server and just serve the react app from there
B. Find a way to run both servers but point to the React app server.
Here is the top-level package.json file:
{
  "name": "",
  "version": "2.0.0",
  "description": "",
  "main": "app.js",
  "dependencies": {
    "@material-ui/icons": "^4.9.1",
    "concurrently": "^5.3.0",
    "cors": "^2.8.5",
    "@material-ui/core": "^4.11.0",
    "axios": "^0.20.0",
    "chart.js": "^2.9.4",
    "material-table": "^1.69.1",
    "query-string": "^6.13.2",
    "react": "^16.13.1",
    "react-chartjs-2": "^2.10.0",
    "react-dom": "^16.13.1",
    "react-scripts": "^3.4.3",
    "spotify-web-api-js": "^0.22.1",
    "bluebird": "^3.7.2",
    "body-parser": "^1.19.0",
    "cookie-parser": "1.3.2",
    "dotenv": "^8.2.0",
    "express": "~4.0.0",
    "express-session": "^1.17.1",
    "handlebars": "^4.7.6",
    "querystring": "~0.2.0",
    "request": "~2.34.0",
    "uuid": "^8.3.0"
  },
  "devDependencies": {},
  "scripts": {
    "start": "concurrently \"npm run server\" \"npm run client\"",
    "test": "echo \"Error: no test specified\" && exit 1",
    "client": "cd client && npm start",
    "server": "cd server && npm start"
  },
  "engines": {
    "node": "12.16.2",
    "npm": "6.14.4"
  }
}
A. Run everything through a single express server and just serve the react app from there
This would be the best choice as far as deployment complexity, performance, and security are concerned.
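For option A, here is a minimal sketch of what the single Express server could look like (assumptions: the React build output ends up in client/build, the server entry point is server/app.js as in the tree above, and API routes live under /api):
// server/app.js - hypothetical single-server setup serving both the API and the React build
const path = require('path');
const express = require('express');

const app = express();

// API routes go first so they are not swallowed by the catch-all below
app.get('/api/health', (req, res) => res.json({ ok: true }));

// serve the static files produced by `npm run build` in client/
app.use(express.static(path.join(__dirname, '../client/build')));

// every other request falls through to the React app's index.html
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, '../client/build', 'index.html'));
});

// Heroku injects the port through the PORT environment variable
const port = process.env.PORT || 5000;
app.listen(port, () => console.log(`Listening on ${port}`));
With this in place the top-level start script can simply run the server, and a heroku-postbuild script can build the client, so concurrently is no longer needed in production.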
To further reduce the possibility of issues during the Heroku deployment, consider optionally containerizing your solution. You can install Docker, build a container, and run it locally. After deployment to Heroku, the software running inside the container, e.g. Express, cannot (well, almost cannot) tell the difference between running locally and in the cloud, which eliminates many deployment issues caused by differences between your local run-time environment and Heroku's. As a practical example, there is a walkthrough that provides the sequence of seven commands to execute in order to get a container with Express/React built and deployed; I'm the author.
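For instance, assuming a Dockerfile already exists at the repository root and the app listens on port 3000 (both assumptions for this sketch, and the image name is a placeholder), the local build-and-run loop is roughly:
docker build -t my-app .
docker run --rm -p 3000:3000 my-app
If the app behaves correctly in that container locally, it should behave the same once the container is running on Heroku.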
Why is one of my computers including dev dependencies when packaging a Serverless Framework project, while my other computer is not?
When packaging and deploying my Serverless project targeting AWS, I found that the zip package contained dev dependencies in the node_modules folder. This only happened on one of my two computers. When performing the same build steps on AWS CodeBuild, the package was also fine and did not include dev dependencies.
package.json
{
  "name": "project-name",
  "version": "0.0.1",
  "description": "description",
  "main": "index.js",
  "dependencies": {
    "amazon-cognito-identity-js": "^3.0.10",
    "aws-sdk": "^2.488.0",
    "axios": "^0.18.0",
    "js-sha256": "^0.9.0",
    "jsonwebtoken": "^8.5.1",
    "jwk-to-pem": "^2.0.1",
    "node-fetch": "^2.3.0",
    "uuid": "^3.3.2",
    "lodash": "^4.17.11"
  },
  "devDependencies": {
    "chai": "^4.2.0",
    "eslint": "^5.16.0",
    "eslint-config-node": "^4.0.0",
    "mocha": "^6.0.2",
    "serverless": "^1.69.0",
    "sinon": "^7.4.2",
    "sinon-test": "^2.4.0"
  },
  "scripts": {
    "test": "mocha ./test --recursive"
  },
  "repository": {
    "type": "git",
    "url": "https://git-codecommit.us-east-1.amazonaws.com/v1/repos/reponame"
  },
  "author": "",
  "license": "ISC"
}
The serverless.yml file that, when packaged, incorrectly included the dev dependencies:
service: service-name

# pinning the serverless version for this project so all contributors are using the same version for consistent results
frameworkVersion: ">=1.60.0"

provider:
  name: aws
  runtime: nodejs10.x
  stage: ${opt:stage, 'dev'} # default stage to use, unless overridden on the command line
  region: us-east-1

functions:
  create:
    handler: create/index.create
    events:
      - http:
          path: /{id}
          method: post
          cors: true
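For reference, the Serverless packaging step is what is supposed to strip dev dependencies; the relevant serverless.yml option (it defaults to true in the 1.x versions above, shown here only to make the expected behaviour explicit) is:
package:
  excludeDevDependencies: true  # strip devDependencies from the packaged artifact (the default)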
Both computers were Windows 10 with the following versions:
- npm version: 6.4.1
- node version: 10.15.3
I tried completely uninstalling Node.js from my Windows computer, following the 'Uninstall Node.js' instructions, and reinstalling it, but that did not work.
I tried searching for other node_modules folders on my computer and removing them but the project still included the dev dependencies.
I tried building a simple Serverless project but it also included the dev dependencies.
The only solution I found was to perform a reset of Windows (keeping user data and removing only application data). There must have been something in the applications, AppData, etc. that caused this issue.
After resetting Windows 10 (keeping user data and removing only application data) and installing the same versions of Node.js and Serverless as before, the dev dependencies were no longer included in the node_modules folder of the generated package.
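A quick way to confirm the fix without actually deploying (a sketch, assuming the Serverless CLI from the devDependencies above, run from the project root) is to package locally:
npx serverless package --stage dev
The artifact is written under .serverless/; opening the generated zip and checking that dev-only packages such as mocha, chai, or eslint are no longer present in node_modules confirms that packaging behaves correctly again.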
I'm trying to upgrade my Meteor app. Meteor recommends a specific version of Node to be used when deploying an app. For the latest version of Meteor this is Node 8.15.1.
Now I checked the Kudu management app for the App Service, which shows all installed (Node) runtimes (https://x.scm.azurewebsites.net/api/diagnostics/runtime), and I'm surprised to learn that the latest installed Node 8 version is 8.11.1, which is more than a year old (!).
How can I use the recommended version of Node (8.15.1) on my App Service for Windows?
I'm unable to switch to a Linux-based App Service Plan at the moment. If I could, I would use a different Docker base image.
Edit: I've tried setting the WEBSITE_NODE_DEFAULT_VERSION setting, but that only works for Node versions already available on App Service.
You have to do the following to upgrade to the latest version of Node.js:
1) package.json
Put the following in your package.json:
{
  "name": "azure_cosmos_db_webservice",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "node --inspect server.js"
  },
  "engines": { "node": "8.x" },
  "dependencies": {
    "async": "^2.1.2",
    "body-parser": "~1.15.2",
    "cookie-parser": "~1.4.3",
    "debug": "~2.2.0",
    "documentdb": "^1.10.0",
    "dotenv": "^4.0.0",
    "express": "~4.14.0",
    "morgan": "~1.7.0",
    "serve-favicon": "~2.3.0"
  }
}
2) Application Settings of the Node.js app in the portal
Go to Application Settings and update the value of WEBSITE_NODE_DEFAULT_VERSION to 8.15.1.
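Equivalently, if you use the Azure CLI, the same setting can be applied from the command line (the resource-group and app names below are placeholders):
az webapp config appsettings set --resource-group <my-resource-group> --name <my-app> --settings WEBSITE_NODE_DEFAULT_VERSION=8.15.1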
It should work then. Hope it helps.
I'm trying to deploy a Node app to Heroku, but I'm having an issue successfully running browserify when the app is deployed.
When I'm running locally, I browserify my script with npm run bundle like so (from package.json):
"bundle": "./node_modules/browserify/bin/cmd.js build/main.js -o public/scripts/bundle.js
which browserifies the script in build/main.js and puts it into public/scripts/bundle.js.
For deploying to Heroku, I added
"postinstall": "npm run bundle"
However, when I deploy, I get the following error:
Error: ENOENT: no such file or directory, open 'public/scripts/bundle.js.tmp-browserify-59309133185877094263'
Well, that's correct, that file shouldn't exist... yet. When I run npm run bundle locally, I do see that file briefly pop into existence, but then it is quickly removed and I'm left with a nice updated bundle.js.
I read through Heroku's docs on this, but I'm miffed... can anyone clarify how to get through this?
For reference, here are the relevant parts of my package.json:
"scripts": {
"bundle": "./node_modules/browserify/bin/cmd.js build/main.js -o public/scripts/bundle.js",
"postinstall": "npm run bundle"
},
"dependencies": {
"body-parser": "^1.17.1",
"browserify": "^14.1.0",
"ejs": "^2.5.6",
"express": "^4.15.2",
"jquery": "^3.2.1",
"path": "^0.12.7",
"superagent": "^3.5.2"
},
"devDependencies": {},
"engines": {
"node": "6.8.1",
"npm": "4.0.5"
}
Solved! I had bundle.js included in my global gitignore configuration. Just had to take that out; good to go!
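For anyone hitting the same symptom: a quick way to confirm it (assuming the same file path as in the error above) is to ask git which ignore rule, if any, matches the bundle, and where the global excludes file lives:
git check-ignore -v public/scripts/bundle.js
git config --get core.excludesfile
The first command prints the ignore file and the exact pattern that excludes the path; the second shows the location of the global gitignore so you can edit it.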
I'm setting up a Jenkins build on Windows Server 2012 R2 Standard. Part of the build involves using npm to install from package.json:
package.json
{
  "name": "localtest",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "directories": {
    "test": "test"
  },
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "repository": {
    "type": "git",
    "url": url_for_git_repo
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "extract-text-webpack-plugin": "^0.8.2",
    "jasmine-core": "^2.3.4",
    "karma": "^0.13.9",
    "karma-chrome-launcher": "^0.2.0",
    "karma-jasmine": "^0.3.6",
    "karma-mocha-reporter": "^1.1.1",
    "karma-phantomjs-launcher": "^0.2.1",
    "karma-sourcemap-loader": "^0.3.5",
    "karma-webpack": "^1.7.0",
    "node-sass": "^3.3.1",
    "phantomjs": "^1.9.18",
    "raw-loader": "^0.5.1",
    "sass-loader": "^2.0.1",
    "webpack": "^1.12.0"
  }
}
When I run npm install from the command line it works successfully, but it fails when the Jenkins build attempts it. The full Jenkins output can be viewed on pastebin. The specific error seems to be LINK : fatal error LNK1181: cannot open input file 'C:\Windows\system32\config\systemprofile\.node-gyp\0.12.7\Release\node.lib' [C:\bld\localtest\node_modules\karma\node_modules\socket.io\node_modules\engine.io\node_modules\ws\node_modules\utf-8-validate\build\validation.vcxproj]. The Release folder doesn't actually exist on my system, so that would seem to be a sensible error message, except that the install completes successfully from the command line (both cmd.exe and Git Bash, FWIW).
Since this works from the command line, I think the problem is related to some environment variable or the PATH, but having tried to replicate the command-line PATH in the Jenkins build, I still haven't had any joy. Does anyone have any suggestions for what I might try next?
UPDATE 1:
I've just set the Jenkins service to log on under my account rather than the system account and restarted it. The build completed successfully. I think that makes it even more likely that this is a problem in the environment variables somewhere.
UPDATE 2:
I installed the Environment Injector plugin for Jenkins so that I could update the environment variables which were different between my user and the system user. This still resulted in the same error.
This isn't much of an answer, but it's what worked out for me.
I ended up doing
npm uninstall -g node-gyp
and then uninstalling Node.js. I then reinstalled Node.js and ran
npm install -g node-gyp
And the Jenkins build is now running successfully.
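If reinstalling hadn't helped, one way to narrow down the environment difference between the system account and your own (a sketch, assuming a freestyle job where a temporary 'Execute Windows batch command' build step can be added) would be to dump the environment and npm configuration the build actually sees and diff it against the same output from your own shell:
:: temporary diagnostic build step - compare this output with the same commands run in your own cmd.exe
set
where node
where npm
npm config ls -l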