Recommended .gitignore for Node-RED - node.js

Is there a standard or recommended .gitignore file to use with Node-RED projects? Or are there files or folders that should be ignored? For example, should files like .config.json or flow_cred.json be ignored?
At present I'm using the Node template generated by gitignore.io (see below), but this doesn't contain anything specific to Node-RED.
I found these github projects with .gitignore files:
https://github.com/dceejay/node-red-project-starter/blob/master/.gitignore
https://github.com/natcl/node-red-project-template/blob/master/.gitignore
https://github.com/natcl/electron-node-red/blob/master/.gitignore
But I'm unsure if these are generic to any Node-RED project.
The Node .gitignore file:
# Created by https://www.gitignore.io/api/node
# Edit at https://www.gitignore.io/?templates=node
### Node ###
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
.env.test
# parcel-bundler cache (https://parceljs.org/)
.cache
# next.js build output
.next
# nuxt.js build output
.nuxt
# react / gatsby
public/
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# End of https://www.gitignore.io/api/node

Have you gone through Node-RED's Projects feature? There is a video and documentation describing how to set it up.
It creates a folder with your flows, encrypts your credentials, and adds a README and a file with your package dependencies. It also lets you set up your git/GitHub account so you can push to your local and remote repositories safely, right from the Node-RED editor.
The .gitignore file inside your project folder automatically starts with *.backup.
This is because Node-RED makes a copy of your files, with .backup appended to the name, every time you deploy.
That way, the only thing you need to back up separately is a file outside your project folder called config_projects.json, which stores the encryption key for your credentials.
I just set it up and I'm really happy with it.
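For reference, a minimal .gitignore for a Node-RED project along those lines might look something like this; the *.backup entry is what Projects generates, and the other entries are common additions rather than Node-RED defaults:
*.backup
node_modules/
.env
npm-debug.log*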

Related

reading .env file from node - env file is not published

I am trying to read a .env file using the "dotenv" package, but process.env.DB_HOST returns undefined after the app is published to Google Cloud Run. When I log all the files in the root directory, I see every file except the .env file. The .env file does exist at the root of my project, so I'm not sure why it isn't being uploaded to gcloud. When I test locally, process.env.DB_HOST does return a value.
I used this command to publish to google run.
gcloud builds submit --tag gcr.io/my-project/test-api:1.0.0 .
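For context, the dotenv usage described in the question would look roughly like this (variable name taken from the question, file layout assumed):
// index.js - load .env into process.env before anything reads it
require('dotenv').config();
// undefined on Cloud Run if .env was never uploaded with the source
console.log('DB_HOST =', process.env.DB_HOST);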
If you don't have a .gcloudignore file in your project, the gcloud CLI uses the .gitignore by default.
Create a .gcloudignore and list the files that you don't want to upload when you use gcloud CLI commands. So, don't put .env in it!
EDIT 1
Once you add a .gcloudignore, the gcloud CLI no longer reads the .gitignore file and uses the .gcloudignore instead.
Therefore, you can define two different logics (see the sketch below):
.gitignore lists the files that you don't want to push to the repository. Put the .env file in it so it is NOT committed.
.gcloudignore lists the files that you don't want to send with the gcloud CLI. DON'T put the .env file in it, so it is included when you send your code with the gcloud CLI.
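As a rough sketch of that split (entries are illustrative, adjust to your project):
# .gitignore - keep secrets out of the repository
node_modules/
.env
# .gcloudignore - keep uploads small, but let .env through
.git
.gitignore
node_modules/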

Gcloud app deploy cannot ignore Dockerfile

I usually deploy my NodeJS app to Google App Engine and ignore all Docker assets when deploying via a .gcloudignore file as below:
.git
.gitignore
Dockerfile
docker-compose.yml
nginx/
redis-data/
.vscode/
.DS_Store
.prettierrc
README.md
node_modules/
.env
Last week I successfully deployed my app to App Engine without any problems. But today (with no changes other than source code) it failed and threw this error:
ERROR: (gcloud.app.deploy) There is a Dockerfile in the current directory, and the runtime field in /Users/tranphongbb/Works/unstatic/habitify-annual-report-backend/app.yaml is currently set to [runtime: nodejs]. To use your Dockerfile to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [nodejs] runtime, please remove the Dockerfile from this directory.
Even after removing the .gcloudignore file and going with the skip_files option in app.yaml, it still failed.
My source tree:
.dockerignore
.eslintrc.json
.gcloudignore
.gitignore
.prettierrc
.vscode
Dockerfile
README.md
app.yaml
docker-compose.yml
nginx
package.json
src
I reproduced your issue by cloning the Node.js App Engine Flex Quickstart and adding a Dockerfile to the same folder as the app.yaml file.
Indeed, I received the same error message as you did. But I was able to see that if I move the Dockerfile to a different directory, the deploy succeeds. It seems that gcloud app deploy doesn't respect the .gcloudignore file.
For Node.js in the Flexible Environment, there is no skip_files entry in the official App Engine documentation.
To have the files defined in your .gcloudignore respected, run gcloud beta app deploy, which in my case ignored the Dockerfile when the Node.js runtime was set in app.yaml. Alternatively, you can keep using gcloud app deploy but move your Dockerfile to another directory.
The purpose of the .gcloudignore file is to prevent certain files from being uploaded in App Engine, Cloud Functions, etc. deployments, as documented here. When you run gcloud app deploy, the command checks whether there is a Dockerfile and, if there is, expects app.yaml to be set to runtime: custom. If that condition is not met, you'll get an error message similar to this:
ERROR: (gcloud.app.deploy) There is a Dockerfile in the current directory, and the runtime field in path/app.yaml is currently set to [runtime: nodejs]. To use your Dockerfile to build a custom runtime, set the runtime field to [runtime: custom]. To continue using the [nodejs] runtime, please remove the Dockerfile from this directory.
Now for the last question: why does this work with gcloud beta app deploy and not with gcloud app deploy?
Looking at the source code of the Cloud SDK, which can be viewed by anyone, gcloud app deploy has the following code, which performs the verification mentioned before:
if info.runtime == 'custom':
  if has_dockerfile and has_cloudbuild:
    raise CustomRuntimeFilesError(
        ('A custom runtime must have exactly one of [{}] and [{}] in the '
         'source directory; [{}] contains both').format(
             config.DOCKERFILE, runtime_builders.Resolver.CLOUDBUILD_FILE,
             source_dir))
  elif has_dockerfile:
    log.info('Using %s found in %s', config.DOCKERFILE, source_dir)
    return False
  elif has_cloudbuild:
    log.info('Not using %s because cloudbuild.yaml was found instead.',
             config.DOCKERFILE)
    return True
  else:
    raise NoDockerfileError(
        'You must provide your own Dockerfile when using a custom runtime. '
        'Otherwise provide a "runtime" field with one of the supported '
        'runtimes.')
else:
  if has_dockerfile:
    raise DockerfileError(
        'There is a Dockerfile in the current directory, and the runtime '
        'field in {0} is currently set to [runtime: {1}]. To use your '
        'Dockerfile to build a custom runtime, set the runtime field to '
        '[runtime: custom]. To continue using the [{1}] runtime, please '
        'remove the Dockerfile from this directory.'.format(info.file,
On the other hand, gcloud beta app deploy does not do this verification at all (assuming I reviewed the correct code):
if runtime == 'custom' and self in (self.ALWAYS,
                                    self.WHITELIST_BETA,
                                    self.WHITELIST_GA):
  return needs_dockerfile
In conclusion, .gcloudignore will prevent some files/folders from being uploaded, but it is not taken into account in this command's pre-checks. In this case the Dockerfile is still considered, since it could be part of the deployment.

What is the best approach to handle env files?

I have .env file at my project root directory.
How should I handle .env file for dev, qa, stage and prod?
Should I include them in the git repo? If not, where should I put them? A different folder on an external drive, for example?
What is the correct naming convention: .env.qa or .qa.env?
If I build my bundle with webpack into the dist folder (server side), should I include the env file in the build or manually copy it to the dist folder?
You should not check your env files into any source control. Any secrets in them will remain available to anyone with access to the repo until the history is rewritten to remove them.
If you use AWS services, for example, I would suggest using the Secrets Manager.
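As an illustration (not part of the original answer), fetching secrets at startup with the AWS SDK v3 Secrets Manager client might look roughly like this; the region and secret name are placeholders:
// npm install @aws-sdk/client-secrets-manager
const { SecretsManagerClient, GetSecretValueCommand } = require('@aws-sdk/client-secrets-manager');

async function loadSecrets() {
  const client = new SecretsManagerClient({ region: 'us-east-1' }); // assumed region
  const res = await client.send(new GetSecretValueCommand({ SecretId: 'my-app/prod' })); // assumed secret name
  return JSON.parse(res.SecretString); // e.g. { "DB_HOST": "...", "DB_PASSWORD": "..." }
}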
Any environment variables introduced to webpack should not be secrets but configuration values; anyone who can view the source can read them. If you need environment-specific configuration, the webpack DefinePlugin will replace identifiers like MY_API_HOST with their values, given the following config:
const plugins = [
  new webpack.DefinePlugin({
    MY_API_HOST: JSON.stringify('https://my-domain.com/api/'),
    MY_API_VERSION: JSON.stringify('v2')
  })
]
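At build time webpack then rewrites those free identifiers in your source, so application code like this (illustrative only) ends up with the literal strings baked into the bundle:
// somewhere in application code
const endpoint = MY_API_HOST + MY_API_VERSION + '/users';
// after bundling this effectively becomes:
// const endpoint = 'https://my-domain.com/api/' + 'v2' + '/users';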
The config module is an easy way to handle environment-specific values; read about it at https://www.npmjs.com/package/config. You keep a config folder in the repository with environment-specific files. I like this approach because the files live in the repository but are cleanly separated. It provides an easy way to set default values and override them per environment, and switching between the environment-specific files is as simple as setting the appropriate node environment variable (export NODE_ENV=development, acceptance or production).
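A minimal sketch of that layout, assuming the defaults described in the config package docs (file names and keys are illustrative):
config/
  default.json      // { "db": { "host": "localhost" } }
  production.json   // { "db": { "host": "db.internal" } }

// app.js - config picks the file matching NODE_ENV and falls back to default.json
const config = require('config');
const dbHost = config.get('db.host');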

App engine ignores symlinks to directories

I'm creating an app which runs on Google's App Engine with the custom flex environment. This app uses several (relative) symlinks which point to other directories in the project. But somehow those symlinks are ignored when I deploy the app.
It seems that the gcloud tool sends the source context (that is, all the files in my project) to the Google Container Builder before building and deploying the app:
$ gcloud --project=my-project --verbosity=info app deploy
(...)
Beginning deployment of service [default]...
Building and pushing image for service [default]
INFO: Uploading [/tmp/tmpZ4Jha_/src.tgz] to [eu.gcr.io/my-project/appengine/default.20171212t160803:latest]
Started cloud build [some-uid].
If I extract the contents of the .tgz file, I can see that all the files and directories in the project are there, except for symlinks pointing to directories (symlinks to files are included, though). So the source context is missing all the symlinks to directories.
Not using symlinks is not an option, so does anybody know how to include symlinks to directories in the source context sent to Google?
Although I don't think it's relevant, here are the contents of the app.yaml:
env: flex
runtime: custom
runtime_config:
  document_root: docroot
manual_scaling:
  instances: 1
resources:
  cpu: 2
  memory_gb: 2
  disk_size_gb: 10
I've worked around this by deploying my python cloud functions from a temp directory, and using tar (on a Mac) to include files inside symlinked directories:
tar hc --exclude='__pycache__' {name} | tar x -C {tmpdirname}
I use a workaround similar to Steve Alexander's, but in a more elaborate way: I have a shell script that creates a temp dir, copies the dependencies into it, sets the environment and runs the gcloud command. It is basically something like this:
#!/bin/bash
# Load build/deploy environment variables
. .env.sh

SRC_FILE=$1
SRC_FUNC=$2
TRIGGER_RESOURCE=$3
TRIGGER_EVENT=$4

# Stage the function source and its dependencies in a temp dir
TMP_DIR=./tmp/deploy
mkdir -p $TMP_DIR
cp -r modules/dep1 $TMP_DIR
cp -r modules/dep2 $TMP_DIR
cp requirements.txt $TMP_DIR
cp $SRC_FILE $TMP_DIR/main.py

# Deploy from the temp dir so the copied dependencies are included
gcloud functions deploy $SRC_FUNC \
    --source=$TMP_DIR \
    --runtime=python39 \
    --trigger-resource $TRIGGER_RESOURCE \
    --trigger-event $TRIGGER_EVENT \
    --env-vars-file=./.env.yml \
    --timeout 540s

# Clean up the staging dir
rm -rf $TMP_DIR
This script is tailored for a Google Storage event, i.e. to deploy a function that should be triggered when a new file is uploaded to a bucket:
./deploy.func.sh functions.py gs_new_file_event project-bucket1 google.storage.object.finalize
So in the example above, gs_new_file_event is a Python function defined in functions.py. The script copies the file with the Python code to the temp dir as main.py, which is what the function deployer expects. This works well for a project with multiple cloud functions defined in the same repository (one that also contains their dependencies), where it isn't possible to have all of the apps and functions defined in a top-level main.py. The script removes the temp dir when it is done, but it is a good idea to add the path to .gitignore.
Here are a few things you can do to adapt the script to your own needs:
Set up the env files with all the required variables: .env.sh for the build and deployment, .env.yml for the function/app runtime.
Fix the paths and dependencies.
Improve the handling of the command line arguments to make it more flexible and work for all kinds of GCloud triggers.

How to use a private npm registry on Elastic Beanstalk?

We have a nodejs project running on Amazon Elastic Beanstalk that uses private modules that we host using nodejitsu's private npm registry.
However, getting access to the private npm registry from the Elastic Beanstalk instances hasn't been straightforward and isn't well documented.
What is the best way to get this access set up?
None of the other answers were working for me. After hours of hair pulling, we finally figured it out. The solution that worked is almost the same as the other answers but with a very minor tweak.
Set an NPM_TOKEN environment variable on Elastic Beanstalk under Configuration > Software Configuration > Environment Properties.
Create a .ebextensions/npm.config file. (The name does not have to be 'npm'.)
Put this content into the file:
files:
  "/tmp/.npmrc":
    content: |
      //registry.npmjs.org/:_authToken=${NPM_TOKEN}
Note that it uses ${NPM_TOKEN} and not $NPM_TOKEN. This is vital. Using $NPM_TOKEN will not work; it must have the curly braces: ${NPM_TOKEN}.
Why are the curly braces needed? No idea. In shell/POSIX languages, ${VAR} and $VAR are synonymous. However, in .npmrc files (at the time of this writing), variables without the curly brackets are not recognized as variables, so npm must be using a slightly different syntax standard.
UPDATE
Also, this has worked for us only on new or cloned environments. For whatever reason, environments which were not initialized with a /tmp/.npmrc will not read it in any future deployments before running npm install --production. We've tried countless methods on 4 different apps, but cloning and replacing an environment has been the only method which has worked.
So, we managed to get this working by using the npm userconfig file. See the doc page for npmrc for more info.
When a nodejs application is being deployed to Elastic Beanstalk, the root user runs npm install. So you will need to write the root's npm userconfig file, which is at /tmp/.npmrc.
So if you add a file called private_npm.config (or whatever name you choose) to your .ebextensions folder with all the information needed, you will be good to go. See Customizing and Configuring AWS Elastic Beanstalk Environments for more info.
So here is what my file looks like to use nodejitsu private registry.
.ebextensions/private_npm.config:
files:
  # this is the npm user config file path
  "/tmp/.npmrc":
    mode: "000777"
    owner: root
    group: root
    content: |
      _auth = <MY_AUTH_KEY>
      always-auth = true
      registry = <PATH_TO_MY_REGISTRY>
      strict-ssl = true
      email = <NPM_USER_EMAIL>
Using an .npmrc within the project also works. For example...
.npmrc
registry=https://npm.mydomain.com
You may want to .gitignore this file if you include an _authToken line but make sure you don't .ebignore it so it's correctly bundled up with each deployment. After trying a few things unsuccessfully, I came across this post which made me realize specifying it locally in a project is possible.
The answer above was a step in the right direction, but the permissions and owner did not work for me. I managed to get it to work with the following combination:
files:
  # this is the npm user config file path
  "/tmp/.npmrc":
    mode: "000600"
    owner: nodejs
    group: nodejs
    content: |
      _auth = <MY_AUTH_KEY>
      always-auth = true
      registry = <PATH_TO_MY_REGISTRY>
      strict-ssl = true
      email = <NPM_USER_EMAIL>
Place the following within your .ebextensions/app.config file.
files:
  "/tmp/.npmrc":
    mode: "000777"
    owner: root
    group: root
    content: |
      //registry.npmjs.org/:_authToken=$NPM_TOKEN
Where NPM_TOKEN is an environment variable with the value of your actual npmjs auth token.
Note that environment variables within Elastic Beanstalk can and should be set from the AWS console, under the Elastic Beanstalk software configuration tab.
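If you prefer the CLI to the console, the EB CLI can set the same variable (the token value below is a placeholder):
eb setenv NPM_TOKEN=npm_xxxxxxxxxxxxxxxx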
On the new Elastic Beanstalk Amazon Linux 2 platforms, none of these solutions work (apart from the .npmrc file solution, which works but has its issues in development environments because it requires every developer to have the ${NPM_TOKEN} environment variable defined in their own environment).
The reason is that the /tmp/.npmrc location no longer works.
Option 1
You have to change the .ebextensions/npm.config file to this new format:
files:
  # this is the npm user config file path
  "/root/.npmrc":
    mode: "000777"
    owner: root
    group: root
    content: |
      _auth = ${NPM_TOKEN}
      registry = https://{yourprivatenpmrepository.com}/
Option 2
Add a custom .npmrc_{any-suffix} to the root of your app and create a prebuild hook to rename it before Beanstalk executes the npm install so that it can use your private repository configuration:
Add the following file (path from your app root) .platform/hooks/prebuild/01_set_npmrc.sh with the following content:
#!/bin/bash
# Rename .npmrc_beanstalk to .npmrc so npm picks it up during the build
mv .npmrc_beanstalk .npmrc
Create an .npmrc_beanstalk file in your root with the following content (modify it depending on your private npm config):
_auth= ${NPM_TOKEN}
registry = https://{yourprivatenpmrepository.com}/
Chmod the hook file so that it has the necessary exec permissions when uploaded to EB: chmod +x .platform/hooks/prebuild/01_set_npmrc.sh
Re-deploy using EB CLI and you are done!
With modern platforms, you no longer need to do this via .ebextensions.
You can simply create a .npmrc file at the root of your deployment package, alongside your package.json with the following line:
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
Using this method, you can create an environment variable named NPM_TOKEN in your AWS console so you don't have to store the token in your repo.
Structure:
~/your-app/
|-- package.json
|-- .npmrc
