Does GAE standard for Node support a way to have build scripts? I tried using postinstall within package.json but that did not work.
My codebase has subdirectories, each with its own package.json. In my root package.json there is
"scripts": {
  "postinstall": "cd vendor && npm install",
  ...
}
However I'm not seeing any vendor packages installed so I'm inclined to believe the postinstall does not get triggered on GAE Node standard.
Is there any way for me to install subdirectory dependencies without having to copy and paste all my vendor/package.json dependencies to the root?
Note: I've also tried putting an "install" within the package.json scripts but that didn't seem to get triggered either.
In GAE standard, installation of dependencies is managed automatically. You should declare them in your package.json.
As the Google documentation mentions:
When you deploy your app, the Node.js runtime automatically installs all dependencies declared in your package.json file using the npm install command.
{
  "dependencies": {
    "lodash": "^4.0.1"
  }
}
Installation happens during app deployment via:
gcloud app deploy
To add a build step, run the following:
gcloud beta app gen-config --custom
This will generate the default Dockerfile and config that are used. In your Dockerfile, add your build step:
RUN npm run build --unsafe-perm || \
((if [ -f npm-debug.log ]; then \
cat npm-debug.log; \
fi) && false)
"prestart": "if [ ! -d build ]; then npm run build; fi",
" -d build" here is the build process generated folder, replace it to whatever you actually use.
Not sure if this will work for your case, but seems like GAE standard has added the ability to run a custom build step.
However it does state:
After executing your custom build step, App Engine removes and regenerates the node_modules folder by only installing the production dependencies declared in the dependencies field of your package.json file.
Maybe since the node_modules end up in your vendor/ directory, GAE may not detect and remove them, which would accomplish your goal. Note this is a pre-install step, unlike the postinstall specified in your script; not sure if that matters.
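If the custom build step in question is the gcp-build script in package.json (which, as far as I know, is how App Engine standard exposes it), a minimal sketch for the vendor case could be:
{
  "scripts": {
    "gcp-build": "cd vendor && npm install"
  }
}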
I'm not familiar with npm so I might be holding the wrong end of the shovel here...
There is a package on npm that I would like to modify and use in my own project. The package is angular-crumbs. I forked the source repo (https://github.com/emilol/angular-crumbs) into my own account (https://github.com/capesean/angular-crumbs) and then ran npm install capesean/angular-crumbs -force. However, this produces a node_modules folder in my project that hasn't been built (and whatever else - as I understand it) with the commands in the source repo's package.json file:
"build": "npm run clean && npm run transpile && npm run package && npm run minify && npm run copy"
i.e. it doesn't have the types, the correct package.json file, etc.
So my question is, how do I get the properly-built files (including type definitions, etc.) from my own repo to install or build-after-installing in my target project?
I am not sure what you're trying to do: are you trying to work on the angular-crumbs source code, or are you trying to use it in your own project as a dependency?
Anyway, running npm install will install all your dependencies so that you can use them directly in your project; those dependencies don't need to be built after they are installed.
In your case you seem to have an Angular application (which is quite different from a plain Node.js app). To start an Angular app you usually run ng serve, which builds your source code and starts a dev server so you can access it on localhost.
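As a minimal sketch (assuming an Angular CLI project with the fork already listed in package.json):
npm install        # installs every dependency from package.json, including the forked angular-crumbs
ng serve           # builds the app and serves it on http://localhost:4200 by default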
I am generating the production version of an API I made using the NestJS framework and would like to know which files I should upload to the server. When I run npm run start:prod it generates the dist folder, but I tried running with only that folder and it was not enough to run my application. Do I need to upload all files to the server? I did several tests removing the folders I used during development, but I only managed to run in production mode when everything was the same as in dev mode.
I looked in the documentation for something about this but found nothing. Can anybody help me?
Thank you
Honestly, you should only really need the dist folder, as that contains the JS compiled files. To run your application, commonly you'd use the command node dist/main.js. As to what files you upload, it's up to you. Personally, I use a lot of continuous integration, so I would just clone the repo into my container/server and use yarn start:prod. That way, every time I deploy I'm generating the required files to run in a production environment.
Like @Kim Kern mentioned, some node modules are built natively using node-gyp; so it's also always best to build your node_modules on the server/container when deploying. Your deployment script should look something like this:
git clone git@github.com:myuser/myrepo.git /var/www/
cd /var/www/
node -v && \
yarn && \
yarn build && \
yarn start:prod
The above script should
1) pull the required repo into a 'hosted' directory
2) check the node version
3) install node_modules and build native scripts etc
4) build the production distribution
5) run the production JS scripts
If you look in your package.json file you'll notice the different scripts that are run when you use yarn start, yarn start:dev and yarn start:prod. In dev you'll notice the use of ts-node, which runs TypeScript directly in Node without a separate compile step. The start:dev script also uses nodemon to restart the ts-node process on changes. You'll also see the start:prod script uses node dist/main.js, and that the prestart:prod script runs rm -rf dist && tsc, which removes the dist folder and compiles the JavaScript required for a production environment.
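For reference, the relevant scripts section looks roughly like this (a sketch from memory of an older Nest starter, not your exact file):
"scripts": {
  "start": "ts-node src/main.ts",
  "start:dev": "nodemon",
  "prestart:prod": "rm -rf dist && tsc",
  "start:prod": "node dist/main.js"
}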
However, the drawback of a TypeScript application on your server without continuous integration is the possibility of TypeScript compilation errors which you wouldn't see or know about until running the prod scripts. I would recommend putting a procedure in place to compile the JavaScript from TypeScript before making a deployment, as you don't want to delete the current dist build before knowing the next release will build and run!
For me this approach worked, and all you need for it is the dist folder:
Create a prod build of your application using npm run start:prod; this creates a dist folder within your application source.
Copy the dist folder to your server.
To get all the node_modules dependencies on your server, just copy your package.json file into the dist folder (the one you copied onto the server) and then run npm install from there.
If you are using pm2 to run your Node applications, just run pm2 start main.js from within the dist folder (a shell sketch of these steps follows).
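Put together, a minimal sketch of those steps (the server host and path are assumptions, not from the answer):
npm run start:prod                       # builds dist/ locally via the prestart:prod hook
cp package.json dist/                    # package.json goes inside the dist folder
scp -r dist user@myserver:/srv/api       # copy the folder to the server
ssh user@myserver 'cd /srv/api/dist && npm install --production && pm2 start main.js'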
Mostly, you will only need the dependencies in node_modules. You should build the libraries on your server though instead of copying them from your dev machine. Libraries like bcrypt have machine specific code and probably won't run on a different machine. (30% of the npm libraries have native bindings.)
So for your deployment I would recommend to checkout your git repository on your server and then just run npm run start:prod (which builds the project every time) directly there.
Just use the Nest-CLI and build with
nest build
Afterwards you get a dist folder with the compiled code.
You can then place it on a server and run it, e.g. with the PM2 process manager:
production=true pm2 start dist/main.js
In the former command the environment variable production is set to true. That could be useful, e.g., when running the Nest.js server over HTTPS.
If you want to run an HTTPS-secured server you also have to include the certificates when starting the server. When the environment variable production is set to true, the certificates get included in the startup of the Nest.js application in main.ts like the following:
import { NestFactory } from '@nestjs/core'
import { NestExpressApplication } from '@nestjs/platform-express'
import { AppModule } from './app.module'
import * as fs from 'fs'

async function bootstrap() {
  let appConfig = {}
  if (process.env.production) {
    console.log('process env production: ', process.env.production)
    const httpsOptions = {
      key: fs.readFileSync('/etc/certs/letsencrypt/live/testtest.de/privkey.pem'),
      cert: fs.readFileSync('/etc/certs/letsencrypt/live/testtest.de/fullchain.pem'),
    }
    // prod config
    appConfig = {
      httpsOptions,
    }
  }
  const app = await NestFactory.create<NestExpressApplication>(
    AppModule,
    appConfig,
  )
  app.enableCors()
  app.setGlobalPrefix('v1')
  await app.listen(3300)
}
bootstrap()
We don't build our application on the production server; instead we build it when creating our Docker container.
The steps for us roughly are:
Run npm install and whatever tooling you need to build the application.
Create the Docker container and copy dist/, node_modules and package.json into it.
Inside the Docker container, run npm rebuild bcrypt --update-binary (see the Dockerfile sketch below).
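A Dockerfile sketch of those steps (the base image and paths are assumptions, not the poster's actual setup):
FROM node:14
WORKDIR /app
# copy the pre-built app and the dependencies installed outside the container
COPY dist/ ./dist/
COPY node_modules/ ./node_modules/
COPY package.json ./
# rebuild native bindings (e.g. bcrypt) against the container's platform
RUN npm rebuild bcrypt --update-binary
CMD ["node", "dist/main.js"]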
We are using Nx for the monorepo where we hold our APIs, and we use Docker for our images and containers. When we have to create a Docker image, we only run npx nx build <project>, which generates the build in dist/apps/<project>. That folder goes into the Docker image together with the package.json, and that's it. You don't need to add node_modules, because the dependencies are declared in package.json; just be sure to include npm install in your Dockerfile.
One common problem I have is that sometimes my .npmignore file is too aggressive, and I ignore files that I actually want to include in the NPM tarball.
My question is - is there a way to test the results of NPM publish, without actually publishing to NPM?
I am thinking of something like this, assuming I have a local NPM package with package name "foo":
set -e;
local proj="bar";
local path_to_foo="."
mkdir -p "$HOME/.local.npm"
npm --tarball -o "$HOME/.local.npm" # made up command, but you get the idea
(
cd "$HOME/.temp_projects"
rm -rf "$proj"
mkdir "$proj"
cd "$proj"
npm init -f
npm install "$path_to_foo"
)
copy_test_stuff -o "$HOME/.temp_projects/bar"
cd "$HOME/.temp_projects/bar"
npm test
I don't think this will work as-is, because whatever we include in the NPM publish tarball might not be enough to do the full test. But maybe if we copy all the test files (including fixtures, etc.) when we do copy_test_stuff, it might work?
Simply run
npm publish --dry-run
or, with tarball generation in the current directory
npm pack
In npm 6 and up, these will display what files are going to be uploaded.
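For example (the tarball name below depends on your package's name and version):
npm publish --dry-run     # prints the file list without uploading anything
npm pack                  # writes e.g. foo-1.0.0.tgz into the current directory
tar -tzf foo-1.0.0.tgz    # list exactly what ended up in the tarball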
I'll elaborate on the comment I posted earlier (thanks Alexander Mills).
I'm a verdaccio contributor, so I closely follow who is integrating with verdaccio and how. I'll describe a couple of examples (mostly e2e) that I've found which might be interesting or serve as a valid answer.
create-react-app
By far the most popular integration. Let me give you some context: they are using lerna and have multiple packages that need to be tested before publishing to the main registry, aka npmjs. I'll quote Dan Abramov here explaining their reasons for using a custom registry.
The script is self-explanatory but let me highlight some parts.
nohup npx verdaccio@2.7.2 &>$tmp_registry_log &
# Wait for `verdaccio` to boot
grep -q 'http address' <(tail -f $tmp_registry_log)

# Set registry to local registry
npm set registry http://localhost:4873
yarn config set registry http://localhost:4873

# Login so we can publish packages
npx npm-cli-login@0.0.10 -u user -p password -e user@example.com -r http://localhost:4873 --quotes
# Test local start command
yarn start --smoke-test
./tasks/release.sh --yes --force-publish=* --skip-git --cd-version=prerelease --exact --npm-tag=latest
As you can see, they run verdaccio and, instead of a custom config file, they have decided to use npm-cli-login; then they run the tests against verdaccio. When everything is ready, they publish to verdaccio. As a last step, later in the same file, they fetch the packages with their own app.
pnpm
They have created a project called pnpm-registry-mock which is an abstraction that allows them to run verdaccio before running the tests.
"pretest:e2e": "rimraf ../.tmp/ && rimraf node_modules/.bin/pnpm && pnpm-registry-mock prepare",
"test:e2e": "preview --skip-prepublishOnly && npm-run-all -p -r pnpm-registry-mock test:tap",
"test": "npm run lint && npm run tsc && npm run test:e2e",
Basically, using npm scripts they prepare verdaccio and run the tests as the last step. I can't go into much detail, since I've only looked at it briefly, but I know what it does.
Mozilla Neutrino
This is a work in progress, but it's also interesting to mention here.
+if [ "$PROJECT" == "all" ]; then
+ yarn link:all;
+ yarn validate:eslintrc;
+ yarn lint;
+ yarn build;
+ yarn test;
+else
+ yarn verdaccio --config verdaccio.yml & sleep 10;
+ yarn config set registry "http://localhost:4873";
+ npm config set registry "http://localhost:4873";
+ .scripts/npm-adduser.js;
+ yarn lerna publish \
+ --force-publish=* \
+ --skip-git \
+ --skip-npm \
+ --registry http://localhost:4873/ \
+ --yes \
+ --repo-version $(node_modules/.bin/semver -i patch $(npm view neutrino version));
+ yarn lerna exec npm publish --registry http://localhost:4873/;
+ PROJECT="$PROJECT" TEST_RUNNER="$TEST_RUNNER" LINTER="$LINTER" yarn test:create-project;
+fi
Again, the same approach: the project is built, then verdaccio is launched and all packages are published to it.
Babel.js
I know Babel.js has been experimenting with smoke tests for Babel 6 and has plans to integrate a registry with Babel 7. I'll quote Henry Zhu from earlier this year talking about babel-smoke-tests in the same thread as create-react-app.
The experiment is called babel-smoke-tests and babel-smoke-tests/scripts/test.sh is the key file for you.
Here I see the same pattern as in the other projects: they launch verdaccio and then do their stuff.
START=$(cd scripts; pwd)/section-start.sh
END=$(cd scripts; pwd)/section-end.sh
$START 'Setting up local npm registry' setup.npm.registry
node_modules/.bin/verdaccio -l localhost:4873 -c verdaccio.yml &
export NPM_CONFIG_REGISTRY=http://localhost:4873/
NPM_LOGIN=$(pwd)/scripts/npm-login.sh
$NPM_LOGIN
$END 'Done setting up local npm registry' setup.npm.registry
scripts/bootstrap.sh
export THEM=$(cd them; pwd)
if [[ $SPECIFIC_TEST ]]; then
  scripts/tests/$SPECIFIC_TEST.sh
else
  scripts/tests/jquery.sh
  scripts/tests/react.sh
fi
Wrap up
First of all, I hope my small piece of research gives you new ideas on how to address your issue. I think npm pack solves some issues, but mocking a registry using verdaccio, which is quite light and straightforward to use, might be a real option for you. Some big projects are already using it (or getting started with it) and they follow more or less the same approach. So, why not try? :)
https://www.verdaccio.org/
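As a minimal sketch of how the same idea could be applied to your own "foo" package (the login tool and the scratch path are assumptions):
npx verdaccio &                                                      # local registry on http://localhost:4873 by default
npm set registry http://localhost:4873
npx npm-cli-login -u user -p password -e user@example.com -r http://localhost:4873
npm publish                                                          # publishes "foo" to the local registry only
mkdir -p /tmp/foo-smoke && cd /tmp/foo-smoke && npm init -f
npm install foo                                                      # install it the way a real consumer would
node -e "require('foo')"                                             # quick smoke check that the published tarball is usable
npm set registry https://registry.npmjs.org                          # restore the default registry when done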
I had the exact same problem, so I created a package called package-preview. What package-preview does is:
packs your package (it is what npm does before publish)
installs your package in a temp location
links the package to your project's node_modules
This allows you to basically require the package as a dependency in your tests. So in the tests of "awesome-pkg", instead of require('../lib') you write require('awesome-pkg').
I've used this package in all the pnpm repos for several months and it works really well. I also posted an article about this package that explains all the different errors it can catch: Never ever forget to install a dependency.
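A hedged sketch of how it could be wired up (the test command is an assumption; "preview" is the binary shown in the pnpm scripts above):
"scripts": {
  "pretest": "preview",
  "test": "node test/index.js"
}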
Referring to npm docs:
[--dry-run] As of npm@6, does everything publish would do except
actually publishing to the registry. Reports the details of what would
have been published.
Similar to --dry-run, see npm pack, which figures out the files to be included and packs them into a tarball that could be uploaded to the registry.
https://docs.npmjs.com/cli/v6/commands/npm-publish#description
I see too many complicated answers, but according to the documentation, you just need to install your local package globally (because it will be installed in a different directory).
Go to your module root directory and do
npm install . -g
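For example (your-package-name is a placeholder for the name field in your package.json):
cd /path/to/your-package                    # the package root, where package.json lives
npm install . -g                            # packs and installs it globally, honouring .npmignore / the files field
ls "$(npm root -g)/your-package-name"       # inspect which files actually made it in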
In my project we are using Node.js with TypeScript for Google Cloud App Engine app development. We have our own build mechanism to compile the ts files into JavaScript and then collect them into a complete runnable package, so we don't want to rely on Google Cloud to install dependencies; instead we want to upload all node packages inside node_modules to Google Cloud.
But it seems Google Cloud will always ignore the node_modules folder and run npm install during deployment. Even though I tried removing 'skip_files: - ^node_modules$' from app.yaml, it doesn't work; Google Cloud will always install packages by itself.
Does anyone have an idea how to deploy a Node app together with its node_modules? Thank you.
I observed the same issue.
My workaround was to rename node_modules/ to node_modules_hack/ before deploying. This prevents AppEngine from removing it.
I restore it to the original name on installation, with the following (partial) package.json file:
"__comments": [
"TODO: Remove node_modules_hack once AppEngine stops stripping node_modules/"
],
"scripts": {
"install": "mv -fn node_modules_hack node_modules",
"start": "node server.js"
},
You can confirm that AppEngine strips your node_modules/ by looking at the Docker image it generates. You can find it on the Images page. They give you a commandline that you can run on the cloud console to fetch it. Then you can run docker run <image_name> ls to see your directory structure. The image is created after npm install, so once you use the workaround above, you'll see your node_modules/ there.
The newest solution is to allow node_modules in .gcloudignore.
Below is the default .gcloudignore (the one an initial execution of gcloud app deploy generates if you don't have one already) with the change you need:
# This file specifies files that are *not* uploaded to Google Cloud Platform
# using gcloud. It follows the same syntax as .gitignore, with the addition of
# "#!include" directives (which insert the entries of the given .gitignore-style
# file at that point).
#
# For more information, run:
# $ gcloud topic gcloudignore
#
.gcloudignore
# If you would like to upload your .git directory, .gitignore file or files
# from your .gitignore file, remove the corresponding line
# below:
.git
.gitignore
# Node.js dependencies:
# node_modules/ # COMMENT OR REMOVE THIS LINE
Allowing node_modules in .gcloudignore no longer works.
App Engine deployment switched to buildpacks around Oct/Nov 2020. The Cloud Build step it triggers will always remove the uploaded node_modules folder and reinstall dependencies using yarn or npm.
Here is the related buildpack code:
https://github.com/GoogleCloudPlatform/buildpacks/blob/89f4a6ba669437a47b482f4928f974d8b3ee666d/cmd/nodejs/yarn/main.go#L60
This is a desirable behaviour, since uploaded node_modules could come from a different platform and could break compatibility with the Linux runner used to run your app in the App Engine environment.
So, in order to skip npm/yarn dependency installation in Cloud Build, I would suggest the following:
Use a Linux CI runner with the same Node version you are using in the App Engine environment.
Create a tar archive of your node_modules, so that you don't upload a multitude of files on each gcloud app deploy.
Keep node_modules dir ignored in .gcloudignore.
Unpack the node_modules.tar.gz archive in a preinstall script. Don't forget to keep backward compatibility in case the tar archive is missing (local development, etc.):
{
  "scripts": {
    "preinstall": "test -f node_modules.tar.gz && tar -xzf node_modules.tar.gz && rm -f node_modules.tar.gz || true"
  }
}
Note the ... || true part. This ensures the preinstall script returns a zero exit code no matter what, so yarn/npm install will continue.
A GitHub Actions workflow to pack and upload your dependencies for App Engine deployment could look like this:
deploy-gae:
  name: App Engine Deployment
  runs-on: ubuntu-latest
  steps:
    - name: Checkout
      uses: actions/checkout@v2
    # Preferable to use the same version as in GAE environment
    - name: Set Node.js version
      uses: actions/setup-node@v2
      with:
        node-version: '14.15.4'
    - name: Save prod dependencies for GAE upload
      run: |
        yarn install --production=true --frozen-lockfile --non-interactive
        tar -czf node_modules.tar.gz node_modules
        ls -lah node_modules.tar.gz | awk '{print $5,$9}'
    - name: Deploy
      run: |
        gcloud --quiet app deploy app.yaml --no-promote --version "${GITHUB_ACTOR//[\[\]]/}-${GITHUB_SHA:0:7}"
This is just an expanded version of the initially suggested hack.
Note: In case you have a gcp-build script in your package.json you will need to create two archives (one for production dependencies and one for dev) and modify the preinstall script to unpack the one currently needed (depending on the NODE_ENV set by the buildpack).
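A hedged sketch of such a preinstall script (the archive names are assumptions; adjust them to whatever your CI produces):
{
  "scripts": {
    "preinstall": "if [ \"$NODE_ENV\" = \"production\" ]; then A=node_modules.prod.tar.gz; else A=node_modules.dev.tar.gz; fi; test -f \"$A\" && tar -xzf \"$A\" && rm -f \"$A\" || true"
  }
}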
Currently, if you are using a package.json file to manage your project's dependencies (whatever the project is, be it a Ruby, PHP, Python or JS app), by default everything is installed under ./node_modules.
When dependencies ship binaries, those are installed under ./node_modules/.bin.
What I need is a feature that allows me to change the ./node_modules/.bin directory to ./bin.
Simple example:
A PHP/Symfony app has a ./vendor dir for Composer dependencies, and all binaries are saved in ./bin, thanks to the "config": { "bin-dir": "bin" } option in composer.json.
But if I want to use Gulp to manage my assets, I create a package.json file, require all my dependencies and then run npm install.
Then, my wish is to run bin/gulp to execute gulp, but actually I have to run node_modules/.bin/gulp which is not as friendly as bin/gulp.
I've looked at package.json examples/guides on browsenpm.org and docs.npmjs.com, but none of them work, because they are there to define your own project's binaries. I don't have any binaries of my own; I want to use binaries from other libraries.
Is there an option for that with Node.js/npm?
You might consider adding gulp tasks to your package.json.
// package.json
{
  "scripts": {
    "build-templates": "gulp build-templates",
    "minify-js": "gulp minify-js"
  }
}
You can run any scripts specified in package.json by simply running the following:
$ npm run build-templates
$ npm run minify-js
You get the idea. You can use the gulp command inside the script string without writing ./node_modules/.bin/gulp, because npm adds ./node_modules/.bin/ to the PATH when it runs scripts.
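To illustrate the same lookup outside of npm scripts (a general npm/npx behaviour, not specific to this setup):
./node_modules/.bin/gulp build-templates   # what npm run effectively resolves for you
npx gulp build-templates                   # npx also looks up binaries in ./node_modules/.bin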