Stop EBS Linux 2 (Node.js) from trying to do npm install?

I'm trying to run a Node application on AWS Linux 2 on Elastic Beanstalk and need to install the dependencies using yarn. (My Node app causes errors if you try to use npm to install dependencies instead of yarn.)
I've already figured out how to set up a script in .platform/hooks/prebuild/ that runs yarn, but even though the yarn installation runs, Elastic Beanstalk still also tries to run npm install, which errors out and causes my deploy to fail.
So I need to figure out how to prevent the default npm install step from running.
(Does anyone know what file that command is run from in the AWS Linux 2 setup process? I was wondering if I could just add another script in .platform/hooks/prebuild/ that would modify that file to prevent the call to npm.)

Yes, you can avoid npm install.
When you deploy a node_modules directory to an Amazon Linux 2 Node.js platform version, Elastic Beanstalk assumes that you're providing your own dependency packages, and avoids installing dependencies specified in a package.json file.
(Source: the Elastic Beanstalk Node.js platform documentation.)
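In practice, that means the prebuild hook just has to leave a populated node_modules directory behind before the platform's own install step runs. A minimal sketch of such a hook, assuming the usual Amazon Linux 2 staging directory and a hypothetical script name:

    #!/bin/bash
    # .platform/hooks/prebuild/01_yarn_install.sh  (file name is just an example)
    # Runs before Elastic Beanstalk's own dependency step; because it leaves a
    # node_modules directory in the source bundle, the platform skips npm install.
    set -euo pipefail
    cd /var/app/staging              # where Amazon Linux 2 stages the extracted bundle
    yarn install --frozen-lockfile   # install dependencies with yarn instead of npm

The hook file also has to be executable (chmod +x) for Elastic Beanstalk to run it.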

Related

Use npm install to deploy a node application?

Say I have a Node.js application (some web server, for the sake of example). I have a directory structure with a src directory containing my code and a package.json with some metadata about my project and all the packages needed to run it. So I run npm install to get all the packages and node server.js to run the application.
I have a CI pipeline for my application that runs tests on every PR to master and, if they pass, merges to master. The next step I want is to deploy the application to a server.
This means that eventually I need my source code plus all the dependencies on the server, and then to run node server.js there.
Is it correct to publish my application as a package (as part of the CI pipeline) and then run npm install on the server to fetch it? Or does installing with npm only make sense for packages that provide functionality to other applications?
The reason I doubt this is that when you run npm install (at least in a directory with a package.json) you get all the packages in the node_modules directory, which makes me believe that the second option I stated is true.
NOTE
The application is running on a Windows server, no Docker.
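For what it's worth, the deployment flow implied by the asker's second option (copy the source plus package.json to the server, then install only the dependencies there) boils down to something like this; the commands are the same on Windows, and server.js is whatever entry point the question's node server.js refers to:

    # run on the server, from the copied project directory (src/, server.js, package.json)
    npm install --production   # install only the runtime dependencies listed in package.json
    node server.js             # start the application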

Unable to run packages installed using npm on VM provisioned by Chef

I provisioned my VM on AWS using Chef and installed Node.js using the NodeJS recipe (https://github.com/redguide/nodejs). When I do a global npm install of any package, I am not able to run that package from the command line.
My poise-javascript cookbook has node_package and javascript_execute resources to take care of all the required path munging for you.
There are two options:
1) Add /usr/local/nodejs-binary-6.3.0/bin/ to your PATH variable, or
2) Run /usr/local/nodejs-binary-6.3.0/bin/http-server directly.
The npm package binaries are not added to PATH by default. I would prefer option 2 to keep PATH unpolluted.
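Concretely, the two options look like this (the path is the one from the answer; adjust it to wherever the Chef recipe installed Node):

    # Option 1: put the Node binary directory on PATH for this shell session
    export PATH="/usr/local/nodejs-binary-6.3.0/bin:$PATH"
    http-server

    # Option 2: call the installed binary by its full path, leaving PATH untouched
    /usr/local/nodejs-binary-6.3.0/bin/http-server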

Installation of vue-cli: how does it work?

I'm very new to Node Package Manager and also Vue, and I'm trying to understand what exactly is going on with using the Vue CLI.
The vue.js website gives instructions for installing and running the official Vue CLI (starting with npm install --global vue-cli).
I have a few questions about this:
Does npm install --global vue-cli need to be executed only once on a machine, or once on a directory, or once per new project you're starting? In other words, once it's on your computer, is that the last time you need to run that command, or do you need to execute this command every single new project you start?
Once a new project is initiated, are local copies of the newest version of vue (and vue-router, if selected) installed?
If I finish this project and want to deploy it, how do I then port this over to a production server?
Once per machine, except for the rare cases where you're isolating your npm install (such as with nodeenv or inside a container); that's what the --global option is for.
After running npm install, yes.
Running npm run build and copying the contents of the resulting dist directory to the production machine (often within a /var/www directory or similar). This can be automated further in many ways.
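Put together, the workflow the answers describe looks roughly like this, assuming the legacy vue-cli and the webpack template the question refers to (the project name is a placeholder):

    npm install --global vue-cli   # once per machine (unless you isolate npm installs)
    vue init webpack my-project    # scaffold a new project from the webpack template
    cd my-project
    npm install                    # installs local copies of vue (and vue-router, if selected)
    npm run dev                    # local development server
    npm run build                  # production build; copy the resulting dist/ to the server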

Installing node modules in Openshift Python gear

I have a Python application in Openshift, with Python 3.3 and PostgreSQL cartridges. The Python cartridge is running Django 1.8, based off the template on the Website.
Recently, I started using Gulp to automate my build, and while it's worked great on my local machine, I can't figure out what to do to use it in Openshift. I have django-gulp installed so it just runs whenever I use runserver, but the Openshift server obviously doesn't have gulp or any plugins installed, so that won't do anything. I don't know how to install them on the server, though.
Including a package.json does nothing. I've tested it and it works fine if I go with a node cartridge, but I've got a Python one.
Since npm is on the server, I tried SSHing in and running npm install manually, but it threw a permission-denied error.
So, after trying to work this out for a while, I've finally figured out a solution. This is a bit convoluted, but it does work:
Install your own Node and npm. The version included in OpenShift is hopelessly outdated and doesn't really work. SSH into your server and install a local version in your dependencies folder:
    cd $OPENSHIFT_HOMEDIR/app_root/dependencies
    wget https://nodejs.org/dist/v6.3.0/node-v6.3.0-linux-x64.tar.xz
    tar xf node-v6.3.0-linux-x64.tar.xz
    rm node-v6.3.0-linux-x64.tar.xz
Install the dev version; DO NOT install the stable version, as that'll burn through your inodes like nothing else.
Change your PATH to include the new Node version:
    export PATH="$OPENSHIFT_HOMEDIR/app_root/dependencies/node_modules/.bin/:$OPENSHIFT_HOMEDIR/app_root/dependencies/node/bin/:$PATH"
Environment variables get reset whenever you disconnect from the server. You can change them permanently with rhc env set, but as this is only needed before deployment, I recommend putting it in .openshift/action_hooks/pre_build.
You also need to change the NPM_CONFIG_USERCONFIG variable:
    export NPM_CONFIG_USERCONFIG=$OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npmrc
Once again, I recommend doing this in the pre_build hook.
At this point, you can change the npm cache without getting a permission-denied error:
    npm config set cache "$OPENSHIFT_HOMEDIR/app_root/dependencies/.npm"
You can now finally use npm install! Install the packages you need, using the --prefix flag to install them into the node_modules folder in your dependencies directory.
However, to use gulp you need a gulp module installed in the place you're calling it from, so run npm install gulp WITHOUT the --prefix flag.
You can now call gulp! However, your gulpfile will not find the modules unless you edit it to point at the dependencies folder you set up. So instead of require('gulp-cssnano'), you'd have require('[OPENSHIFT_HOMEDIR]/app_root/dependencies/node_modules/gulp-cssnano'). I keep a separate gulpfile for OpenShift to maintain my sanity.
Call gulp with your new gulpfile in pre_build.
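Tying the steps above together, a pre_build action hook might look roughly like this; the paths are copied from the steps above, gulp-cssnano stands in for whatever plugins you use, and the separate gulpfile name is just an example:

    #!/bin/bash
    # .openshift/action_hooks/pre_build
    DEPS="$OPENSHIFT_HOMEDIR/app_root/dependencies"
    export PATH="$DEPS/node_modules/.bin/:$DEPS/node/bin/:$PATH"
    export NPM_CONFIG_USERCONFIG="$OPENSHIFT_HOMEDIR/app-root/build-dependencies/.npmrc"
    npm config set cache "$DEPS/.npm"
    npm install --prefix "$DEPS" gulp-cssnano                 # repeat for each plugin you need
    npm install gulp                                          # local gulp (no --prefix) so it can be required
    ./node_modules/.bin/gulp --gulpfile gulpfile.openshift.js # run the OpenShift-specific gulpfile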

Installing npm modules in a VM shared directory and grunt issues

I'm trying to put together a development environment and npm is causing me problems. Here is my scenario:
I have a development machine running Windows and VMWare Player. I have a Ubuntu Server VM (no UI) which is configured with Apache, PHP, NodeJS etc. As the VM has no UI I want to use the host OS for development. I set up a shared directory which in the VM is accessed as /mnt/hgfs/source/<project name>.
The problem comes when I attempt to run npm install within this directory. I see a lot of errors like Error: UNKNOWN, symlink '../requirejs/bin/r.js'. I know that my package.json file is OK because if I copy all files out of the share and into a regular unix directory (/var/www/<project name>) npm install works fine. So npm has a problem installing modules in the shared directory.
I thought I could get around this by installing the node packages globally but, for whatever reason, the GruntJS enthusiasts don't like that and it must be present locally. I then tried to create an npm link from global to local but that just results in a new error: Error: May not delete: /usr/lib/node_modules/grunt. I have full permissions on the /usr/lib/node_modules directory and all sub-directories.
I really don't want to write the entire project using a command-line text editor in the VM but it looks like I cannot have my code-base in a directory available to both the host and guest OS through VMWare.
I would very much appreciate any suggestions on how to either 1) allow npm modules to be installed in my shared directory, 2) run Grunt globally, or 3) solve the npm link error I'm seeing.
EDIT: Shortly after posting this I realised the fundamental issue here - it's not possible to create symbolic links within a VM shared directory when the host OS is Windows. As npm install uses symlinks by default it didn't work, and this is why the accepted solution does work.
Try the following:
npm install --no-bin-links
Grunt should be local, since the plugins and gruntfile.js may require a certain version of Grunt in order to run your tasks. If another developer wants to run your tasks, they can just issue an npm install and they're set. (See this for more info.) grunt-cli is installed globally and is used to run the local version of Grunt.
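Putting it together inside the shared directory (assuming grunt is listed in the project's devDependencies, and that sudo is needed for the global install on your setup):

    # inside /mnt/hgfs/source/<project name> on the VM
    npm install --no-bin-links           # install local modules without the symlinks the Windows-backed share can't create
    sudo npm install --global grunt-cli  # the runner, installed once per VM
    grunt                                # grunt-cli locates and runs the project-local grunt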
