Why are YARN_FLAGS being ignored? - Netlify

I'm having a hard time figuring out why YARN_FLAGS is not being used:
# Related to YARN_FLAGS https://git.io/fx1W5 https://git.io/fx1RF
# debug with --verbose
[build]
  base = "./services/frontend"
  command = "echo $YARN_FLAGS && yarn build"
  publish = "./services/frontend/build"
[build.environment]
  NODE_VERSION = "10.12.0"
  YARN_FLAGS = "--ignore-optional --frozen-lockfile --network-timeout 1000000 --network-concurrency 1 --verbose"
[context.production.environment]
  NODE_ENV = "production"
[context.deploy-preview.environment]
  NODE_ENV = "test"
[[redirects]]
  from = "/*"
  to = "/index.html"
  status = 200
https://app.netlify.com/sites/monstereos-gabo/deploys/5bd08b5bc965924622aeccce

The Netlify docs tell us that the build image makes specific decisions based on whether there is a yarn.lock vs. a package-lock.json, and how it handles YARN_FLAGS:
If you have a /yarn.lock file: you can set YARN_VERSION (any released version), YARN_FLAGS (flags to pass to our automatic yarn install that runs when this file is present). YARN_FLAGS is set to --ignore-optional by default. /package.json file is ignored in regards to the next step below if you have a /yarn.lock!
Make sure to push your yarn.lock file to your repository. It looks like the build process only applies the YARN_FLAGS environment variable when the lock file exists.
Note: You may want to trigger a deploy and Clear build cache for good measure after you push your yarn.lock file to your repository.
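A minimal sketch of that fix, assuming the lock file lives under the base directory configured above:
# Commit the lock file so Netlify's automatic yarn install (and thus YARN_FLAGS) kicks in
git add services/frontend/yarn.lock
git commit -m "Add yarn.lock so Netlify applies YARN_FLAGS"
git push
Then trigger a deploy with "Clear build cache" from the Netlify UI.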

Related

npm run publish doesn't do anything when pushing the repository from the master branch to gh-pages

While trying to run the publish.sh file in VS Code:
#!/usr/bin/env sh
# abort on errors
set -e
# build
npm run docs:build
# navigate into the build output directory
cd docs/.vuepress/dist
# if you are deploying to a custom domain
# echo 'www.example.com' > CNAME
git init
git add -A
git commit -m 'deploy'
# if you are deploying to https://<USERNAME>.github.io
# git push -f git@github.com:boldak/<USERNAME>.github.io.git master
# if you are deploying to https://<USERNAME>.github.io/<REPO>
git push -f https://github.com/shunichkaaaa/edu_db_labs_IO-12_Group-4 master:gh-pages
cd -
I'm running into the problem that npm run doesn't do ANYTHING. It doesn't return an error, nor does it create the second branch on GitHub as it should.
(console output screenshot)
As advised on one of the websites, I tried setting npm's ignore-scripts value to false:
npm config set ignore-scripts false
But it was already set to false.
I can host my service locally using npm run docs:dev.
Switching from the VS Code terminal to the Windows Command Prompt just reproduced the problem.
So, any opinions on this?
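A couple of hedged diagnostic steps, assuming the script is defined in package.json:
# List the scripts npm actually sees in this project; "publish" must appear here
npm run
# Re-run with verbose logging to see where it stops silently
npm run publish --loglevel verbose
Since "publish" is also an npm lifecycle script name, renaming the script (e.g. to "deploy") and running npm run deploy is a cheap way to rule out lifecycle-related surprises.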

ElastAlert 2 rules_folder config not working

I'm using ElastAlert 2 to get notifications from error logs in Slack.
We need to receive alerts for all service logs through our dozens of rules.
We build ElastAlert 2 with Docker and deploy it via Argo CD.
But there is a problem: the rules_folder config does not work.
There is a rules_folder entry in config.yaml:
rules_folder: /home/elastalert/rules
and this is the example Dockerfile:
FROM python:3.9.13-slim
# installation
RUN pip3 install --upgrade pip \
&& pip3 install cryptography elastalert2
ENV LANG="en_US.UTF-8"
# add configuration and alarm
RUN mkdir -p /home/elastalert
WORKDIR /home/elastalert
ADD ./config.yaml /home/elastalert
COPY ./rules /home/elastalert/rules
and this is the run command:
command: [ "/bin/sh", "-c" ]
args:
  - >-
    echo "Finda Elastalert is started!!" &&
    elastalert-create-index &&
    elastalert --verbose --config config.yaml
...
but an error occurs.
I think the rule files cannot be imported when ElastAlert is started via args like this.
In other words, it seems that rules_folder is not applied.
If I specify a specific rule file in the start command, it works well.
For example,
elastalert --verbose --config config.yaml --rule ./rules/example/example.yaml
However, it can only execute one rule.
We have dozens of rules.
What's the problem?
Solved.
Don't store empty YAML files in your rules/ subdirectories.
The problem was that I had commented out everything in all the YAML files except the test rule used for the operation test.
After renaming the fully commented-out YAML files to another extension such as .txt, ElastAlert recognizes and runs all rules.
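A hedged sketch of that cleanup, assuming the rules live under ./rules and that a file whose every line is blank or commented out counts as "empty":
# Rename effectively-empty rule files so ElastAlert skips them (.txt is arbitrary)
find rules -name '*.yaml' | while read -r f; do
  # true when no line contains anything other than comments/whitespace
  if ! grep -qvE '^[[:space:]]*(#|$)' "$f"; then
    mv "$f" "${f%.yaml}.txt"
  fi
done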

Digital Ocean Apps can't locate chromium directory for puppeteer

I've been wrestling with this all day. I get the below error whenever I try to run my puppeteer script on Digital Ocean Apps.
/workspace/node_modules/puppeteer/.local-chromium/linux-818858/chrome-linux/chrome: error while loading shared libraries: libnss3.so: cannot open shared object file: No such file or directory
I did some research and found this helpful link below that seems to be the accepted solution.
https://github.com/puppeteer/puppeteer/issues/5661
However, since I'm using DigitalOcean Apps instead of standard DigitalOcean Droplets, it seems I can't use sudo on the command line, so I can't actually run these commands.
Does anyone know of any way around this?
I'm using node.js by the way.
Thank you!!!!
I've struggled with this issue myself. I needed a way to run Puppeteer in order to run Scully (a static site generator for Angular) during my build process.
My solution was to use a Dockerfile to build my DigitalOcean App Platform app. You can ignore the Scully stuff if you don't need it (I'm putting it here for others who struggle with this use case like me) and have a regular yarn build:prod or similar, and use something like dist/myapp (instead of dist/scully) as the output_dir in the app spec.
In your app's root folder, you should have these files:
Dockerfile
FROM node:14-alpine
RUN apk add --no-cache \
chromium \
ca-certificates
# This is to prevent the build from getting stuck on "Taking snapshot of full filesystem"
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
WORKDIR /usr/src/myapp
COPY package.json yarn.lock ./
RUN yarn
COPY . .
# Needed because we set PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
ENV SCULLY_PUPPETEER_EXECUTABLE_PATH /usr/bin/chromium-browser
# build_prod_scully is set in package.json to: "ng build --configuration=production && yarn scully --scanRoutes"
RUN yarn build_prod_scully
digitalocean.appspec.yml
domains:
  - domain: myapp.com
    type: PRIMARY
  - domain: www.myapp.com
    type: ALIAS
name: myapp
region: fra
static_sites:
  - catchall_document: index.html
    github:
      branch: master
      deploy_on_push: true
      repo: myname/myapp
    name: myapp
    output_dir: /usr/src/myapp/dist/scully # <--- /usr/src/myapp referring to the Dockerfile's WORKDIR; dist/scully refers to the Scully config file's `outDir`
    routes:
      - path: /
    source_dir: /
    dockerfile_path: Dockerfile # <-- may be unneeded unless you already have an app set up.
Then upload the digitalocean.appspec.yml from your computer to your
App Dashboard > Settings > App > AppSpec > edit > Upload File.
For those who use it with Scully, I've also used the following inside scully.config.ts
export const config: ScullyConfig = {
...
puppeteerLaunchOptions: {
args: ['--no-sandbox', '--disable-setuid-sandbox'],
},
};
Hope it helps.
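If you want to sanity-check the image before wiring up the app spec, a hedged one-liner (the image name myapp is hypothetical):
# Confirm the Alpine chromium package put the binary where the env var points
docker build -t myapp . && docker run --rm myapp /bin/sh -c 'chromium-browser --version'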

Recommended .gitignore for Node-RED

Is there a standard or recommended .gitignore file to use with Node-RED projects? Or are there files or folders that should be ignored? For example, should files like .config.json or flow_cred.json be ignored?
At present I'm using the Node template generated by gitignore.io (see below), but this doesn't contain anything specific to Node-RED.
I found these github projects with .gitignore files:
https://github.com/dceejay/node-red-project-starter/blob/master/.gitignore
https://github.com/natcl/node-red-project-template/blob/master/.gitignore
https://github.com/natcl/electron-node-red/blob/master/.gitignore
But I'm unsure if these are generic to any Node-RED project.
The Node .gitignore file:
# Created by https://www.gitignore.io/api/node
# Edit at https://www.gitignore.io/?templates=node
### Node ###
# Logs
logs
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*
lerna-debug.log*
# Diagnostic reports (https://nodejs.org/api/report.html)
report.[0-9]*.[0-9]*.[0-9]*.[0-9]*.json
# Runtime data
pids
*.pid
*.seed
*.pid.lock
# Directory for instrumented libs generated by jscoverage/JSCover
lib-cov
# Coverage directory used by tools like istanbul
coverage
*.lcov
# nyc test coverage
.nyc_output
# Grunt intermediate storage (https://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components
# node-waf configuration
.lock-wscript
# Compiled binary addons (https://nodejs.org/api/addons.html)
build/Release
# Dependency directories
node_modules/
jspm_packages/
# TypeScript v1 declaration files
typings/
# TypeScript cache
*.tsbuildinfo
# Optional npm cache directory
.npm
# Optional eslint cache
.eslintcache
# Optional REPL history
.node_repl_history
# Output of 'npm pack'
*.tgz
# Yarn Integrity file
.yarn-integrity
# dotenv environment variables file
.env
.env.test
# parcel-bundler cache (https://parceljs.org/)
.cache
# next.js build output
.next
# nuxt.js build output
.nuxt
# react / gatsby
public/
# vuepress build output
.vuepress/dist
# Serverless directories
.serverless/
# FuseBox cache
.fusebox/
# DynamoDB Local files
.dynamodb/
# End of https://www.gitignore.io/api/node
Have you gone through Node-RED projects? Here is a link to the video and here is the documentation on how to set it up.
It creates a folder with your flows, encrypts your credentials, adds a README and a file with your package dependencies. It also lets you set up your git/GitHub account so you can push to your local and remote repositories safely, right from the Node-RED console.
The .gitignore file inside your project folder automatically starts with *.backup.
This is because Node-RED creates a copy of your files and appends .backup every time you deploy the nodes.
This way, the only thing you need to back up separately is a file outside your project folder called config_projects.json, which stores the encryption key for your credentials.
I just set it up and I'm really happy with it.
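For reference, a minimal sketch of Node-RED-specific entries you might append to the Node template above; treat the exact filenames as assumptions that depend on your Node-RED version and on whether you encrypt credentials:
# Node-RED additions (a sketch)
# deploy-time backups created by Node-RED on every deploy
*.backup
# credentials file - only safe to commit if you are sure it is encrypted
flows_cred.json
# local runtime state files
.config*.json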

Use pre-installed Terraform plugins instead of downloading them with terraform init

While running terraform init when using Terraform 0.11.3 we are getting the following error:
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
Error installing provider "template": Get
https://releases.hashicorp.com/terraform-provider-template/: read tcp
172.25.77.25:53742->151.101.13.183:443: read: connection reset by peer.
Terraform analyses the configuration and state and automatically
downloads plugins for the providers used. However, when attempting to
download this plugin an unexpected error occurred.
This may be caused if for some reason Terraform is unable to reach the
plugin repository. The repository may be unreachable if access is
blocked by a firewall.
If automatic installation is not possible or desirable in your
environment, you may alternatively manually install plugins by
downloading a suitable distribution package and placing the plugin's
executable file in the following directory:
terraform.d/plugins/linux_amd64
I realized it's because of connectivity issues with the https://releases.hashicorp.com domain. For reasons beyond our control, we will have to live with this connectivity issue, as there are SSL and firewall issues between the control server and HashiCorp's servers.
Is there any way we could bypass this by downloading the plugins from HashiCorp's servers elsewhere and copying them onto the control server? Or any other alternative that avoids downloading things from HashiCorp's servers?
You can use pre-installed plugins by either putting the plugins in the same directory as the terraform binary or by setting the -plugin-dir flag.
It's also possible to build a bundle of every provider you need automatically using the terraform-bundle tool.
I run Terraform in our CI pipeline in a Docker container, so I have a Dockerfile that looks something like this:
FROM golang:alpine AS terraform-bundler-build
RUN apk --no-cache add git unzip && \
go get -d -v github.com/hashicorp/terraform && \
go install ./src/github.com/hashicorp/terraform/tools/terraform-bundle
COPY terraform-bundle.hcl .
RUN terraform-bundle package terraform-bundle.hcl && \
mkdir -p terraform-bundle && \
unzip -d terraform-bundle terraform_*.zip
####################
FROM python:alpine
RUN apk add --no-cache git make && \
pip install awscli
COPY --from=terraform-bundler-build /go/terraform-bundle/* /usr/local/bin/
Note that the finished container image also adds git, make and the AWS CLI, as I also require those tools in the CI jobs that use this container.
The terraform-bundle.hcl then looks something like this (taken from the terraform-bundle README):
terraform {
  # Version of Terraform to include in the bundle. An exact version number
  # is required.
  version = "0.10.0"
}

# Define which provider plugins are to be included
providers {
  # Include the newest "aws" provider version in the 1.0 series.
  aws = ["~> 1.0"]

  # Include both the newest 1.0 and 2.0 versions of the "google" provider.
  # Each item in these lists allows a distinct version to be added. If the
  # two expressions match different versions then _both_ are included in
  # the bundle archive.
  google = ["~> 1.0", "~> 2.0"]

  # Include a custom plugin in the bundle. Will search for the plugin in the
  # plugins directory and package it with the bundle archive. The plugin must
  # have a name of the form terraform-provider-*, and must be built for the
  # operating system and architecture that Terraform Enterprise is running on,
  # e.g. linux and amd64.
  customplugin = ["0.1"]
}
Configure plugin_cache_dir in .terraformrc:
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
Then move the pre-installed providers into the plugin_cache_dir; Terraform will not download the providers anymore.
By the way, using the ~/.terraform.d/plugin directory doesn't work:
/.terraform.d/plugin/linux_amd64$ terraform -v
Terraform v0.12.15
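A hedged end-to-end sketch of that answer (the provider file name and version are examples only):
# 1. point Terraform at a local plugin cache
mkdir -p "$HOME/.terraform.d/plugin-cache/linux_amd64"
echo 'plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"' > "$HOME/.terraformrc"
# 2. drop a pre-downloaded provider binary into the cache
cp terraform-provider-template_v2.1.2_x4 "$HOME/.terraform.d/plugin-cache/linux_amd64/"
# 3. init now resolves the provider from the cache instead of the network
terraform init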
The proper way to handle this since Terraform 0.14, as also discussed on the terraform-bundle page mentioned in the currently accepted answer, is to use terraform providers mirror, as described at https://www.terraform.io/cli/commands/providers/mirror. This command creates all the necessary index files etc. so the folder can be used for plugins. E.g.:
$ cd your-tf-root-module
$ terraform providers mirror path/to/tf-plugins
...
$ terraform init --plugin-dir path/to/tf-plugins
...
You can cd into each of your root modules (i.e. those that have Terraform state) and run the mirror command; multiple versions of a plugin may be installed there, and that's OK. When you run terraform init, it will fetch the proper one, the same as without the --plugin-dir arg.
So the only difference is that the internet is not used to acquire the plugins; terraform init gets them from the plugin folder.
This is also very useful for creating a cache that can then be used by Terraform in CI/CD. E.g. in CircleCI you would have a manual job that calls mirror and does a save-cache; your automated terraform init job would restore-cache and use the --plugin-dir arg; then the automated terraform apply job would behave as usual.
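A hedged sketch of that split as plain commands (the cache save/restore mechanics depend on your CI system and are omitted):
# manual/one-off job: populate the mirror, then save path/to/tf-plugins to the CI cache
terraform providers mirror path/to/tf-plugins
# automated jobs: restore path/to/tf-plugins from the CI cache, then init offline
terraform init --plugin-dir path/to/tf-plugins
terraform plan -out=tfplan && terraform apply tfplan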
Starting with the 0.13.2 release of Terraform, one can download plugins from a local web server / HTTP server via the network mirror protocol.
For more details, check this link.
It expects a .terraformrc file in $HOME, pointing to the provider path of the plugins like below. If the file is in a different directory, you can provide its path with the TERRAFORM_CONFIG env var.
provider_installation {
  network_mirror {
    url = "https://terraform-plugins.example.net/providers/"
  }
}
Then you define the providers in a custom tf file like below.
providers.tf:
terraform {
  required_providers {
    azurerm = {
      source = "registry.terraform.io/example/azurerm"
    }
    openstack = {
      source = "registry.terraform.io/example/openstack"
    }
    null = {
      source = "registry.terraform.io/example/null"
    }
    random = {
      source = "registry.terraform.io/example/random"
    }
    local = {
      source = "registry.terraform.io/example/local"
    }
  }
}
However, you have to upload the plugin file in .zip format along with an index.json and the <version>.json files for Terraform to discover the available versions of the plugin.
Example index.json containing the versions of a plugin:
{
  "versions": {
    "2.3.0": {}
  }
}
The <version>.json file, in this case 2.3.0.json, contains the hashes of the plugin archive:
{
  "archives": {
    "linux_amd64": {
      "hashes": [
        "h1:nFL6uiwsQFLiP8QCr35sPfWe9LpXI3/c7gP9tYnih+k="
      ],
      "url": "terraform-provider-random_2.3.0_linux_amd64.zip"
    }
  }
}
How do you get the contents of the index.json and <version>.json files?
By running terraform providers in the directory containing the tf files. Note that the machine running this command needs to be able to connect to the public Terraform registry, since Terraform downloads the information for these files. If you have many different Terraform configurations, it makes sense to automate these steps; otherwise you can do them manually :)
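A hedged way to generate all of these files in one go is the terraform providers mirror command from the earlier answer (the example/random names follow the providers.tf above):
# run this where registry.terraform.io IS reachable; the output dir becomes the mirror root
cd your-tf-root-module
terraform providers mirror ./mirror-output
ls mirror-output/registry.terraform.io/example/random/
# index.json  2.3.0.json  terraform-provider-random_2.3.0_linux_amd64.zip
Then upload the contents of mirror-output/ to the internal web server configured in .terraformrc.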
Upon terraform init, Terraform downloads the plugins from the above web server rather than from the Terraform registry. Make sure you don't use the -plugin-dir argument with terraform init, as it would override all the changes you made.
Updated Dockerfile for @ydaetskcoR's solution, because terraform-bundle currently doesn't work with 0.12.x (the problem was fixed in 0.12.2, but appeared again in 0.12.18):
FROM hashicorp/terraform:0.12.18 as terraform-provider
COPY provider.tf .
RUN terraform init && \
mv .terraform/plugins/linux_amd64/terraform-provider* /bin/
FROM hashicorp/terraform:0.12.18
# Install terraform pre-installed plugins
COPY --from=terraform-provider /bin/terraform-provider* /bin/
And here is the content of provider.tf
provider "template" { version = "~>2.1.2" }
provider "aws" { version = "~>2.15.0" }
...
This took me a while; I had the same problem. I ended up having to build from source and use the image that this spits out. It's nasty, but it does what I need it to do to work with the Google provider.
FROM golang:alpine AS terraform-bundler-build
ENV TERRAFORM_VERSION=0.12.20
ENV GOOGLE_PROVIDER=3.5.0
RUN apk add --update --no-cache git make tree bash curl
ENV GOPATH=/go
RUN mkdir -p $GOPATH/src/github.com/terraform-providers
RUN cd $GOPATH/src/github.com/terraform-providers && curl -sLO https://github.com/terraform-providers/terraform-provider-google-beta/archive/v$GOOGLE_PROVIDER.tar.gz
RUN cd $GOPATH/src/github.com/terraform-providers && tar xvzf v$GOOGLE_PROVIDER.tar.gz && mv terraform-provider-google-beta-$GOOGLE_PROVIDER terraform-provider-google-beta
RUN cd $GOPATH/src/github.com/terraform-providers/terraform-provider-google-beta && pwd && make build
RUN cd $GOPATH/src/github.com/terraform-providers && curl -sLO https://github.com/terraform-providers/terraform-provider-google/archive/v$GOOGLE_PROVIDER.tar.gz
RUN cd $GOPATH/src/github.com/terraform-providers && tar xvzf v$GOOGLE_PROVIDER.tar.gz && mv terraform-provider-google-$GOOGLE_PROVIDER terraform-provider-google
RUN cd $GOPATH/src/github.com/terraform-providers/terraform-provider-google && pwd && make build
RUN mkdir -p $GOPATH/src/github.com/hashicorp
RUN cd $GOPATH/src/github.com/hashicorp && curl -sLO https://github.com/hashicorp/terraform/archive/v$TERRAFORM_VERSION.tar.gz
RUN cd $GOPATH/src/github.com/hashicorp && tar xvzf v$TERRAFORM_VERSION.tar.gz && mv terraform-$TERRAFORM_VERSION terraform
RUN cd $GOPATH/src/github.com/hashicorp/terraform && go install ./tools/terraform-bundle
ENV TF_DEV=false
ENV TF_RELEASE=true
COPY my-build.sh $GOPATH/src/github.com/hashicorp/terraform/scripts/
RUN cd $GOPATH/src/github.com/hashicorp/terraform && /bin/bash scripts/my-build.sh
ENV HOME=/root
COPY terraformrc $HOME/.terraformrc
RUN mkdir -p $HOME/.terraform.d/plugin-cache
########################################
FROM alpine:3
ENV HOME=/root
RUN ["/bin/sh", "-c", "apk add --update --no-cache bash ca-certificates curl git jq openssh"]
RUN ["bin/sh", "-c", "mkdir -p /src"]
COPY --from=terraform-bundler-build /go/bin/terraform* /bin/
RUN mkdir -p /root/.terraform.d/plugins/linux_amd64
COPY --from=terraform-bundler-build /root/.terraform.d/ $HOME/.terraform.d/
RUN cp /bin/terraform-provider-google $HOME/.terraform.d/plugin-cache/linux_amd64
RUN cp /bin/terraform-provider-google-beta $HOME/.terraform.d/plugin-cache/linux_amd64
COPY terraformrc $HOME/.terraformrc
COPY provider.tf $HOME/
COPY backend.tf $HOME/
# For Testing (This should be echoed or taken care of in the CI pipeline)
#COPY google.json $HOME/.google.json
WORKDIR $HOME
ENTRYPOINT ["/bin/bash"]
.terraformrc:
plugin_cache_dir = "$HOME/.terraform.d/plugins/linux_amd64"
disable_checkpoint = true
provider.tf
# Define which provider plugins are to be included
provider "google" {
  credentials = ".google.json"
}

provider "google-beta" {
  credentials = ".google.json"
}
my-build.sh
#!/usr/bin/env bash
#
# This script builds the application from source for multiple platforms.

# Get the parent directory of where this script is.
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"

# Change into that directory
cd "$DIR"
echo "DIR=$DIR"

# Get the git commit
GIT_COMMIT=$(git rev-parse HEAD)
GIT_DIRTY=$(test -n "`git status --porcelain`" && echo "+CHANGES" || true)

# Determine the arch/os combos we're building for
XC_ARCH=${XC_ARCH:-"amd64 arm"}
XC_OS=${XC_OS:-linux}
XC_EXCLUDE_OSARCH="!darwin/arm !darwin/386"

mkdir -p bin/

# If it's dev mode, only build for ourselves
if [[ -n "${TF_DEV}" ]]; then
    XC_OS=$(go env GOOS)
    XC_ARCH=$(go env GOARCH)

    # Allow LD_FLAGS to be appended during development compilations
    LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY} $LD_FLAGS"
fi

if ! which gox > /dev/null; then
    echo "==> Installing gox..."
    go get -u github.com/mitchellh/gox
fi

# Instruct gox to build statically linked binaries
export CGO_ENABLED=0

# In release mode we don't want debug information in the binary
if [[ -n "${TF_RELEASE}" ]]; then
    LD_FLAGS="-s -w"
fi

# Ensure all remote modules are downloaded and cached before build so that
# the concurrent builds launched by gox won't race to redundantly download them.
go mod download

# Build!
echo "==> Building..."
gox \
    -os="${XC_OS}" \
    -arch="${XC_ARCH}" \
    -osarch="${XC_EXCLUDE_OSARCH}" \
    -ldflags "${LD_FLAGS}" \
    -output "pkg/{{.OS}}_{{.Arch}}/${PWD##*/}" \
    .

## Move all the compiled things to the $GOPATH/bin
GOPATH=${GOPATH:-$(go env GOPATH)}
case $(uname) in
    CYGWIN*)
        GOPATH="$(cygpath $GOPATH)"
        ;;
esac
OLDIFS=$IFS
IFS=: MAIN_GOPATH=($GOPATH)
IFS=$OLDIFS

# Create GOPATH/bin if it doesn't exist
if [ ! -d $MAIN_GOPATH/bin ]; then
    echo "==> Creating GOPATH/bin directory..."
    mkdir -p $MAIN_GOPATH/bin
fi

# Copy our OS/Arch to the bin/ directory
DEV_PLATFORM="./pkg/$(go env GOOS)_$(go env GOARCH)"
if [[ -d "${DEV_PLATFORM}" ]]; then
    for F in $(find ${DEV_PLATFORM} -mindepth 1 -maxdepth 1 -type f); do
        cp ${F} bin/
        cp ${F} ${MAIN_GOPATH}/bin/
        ls -alrt ${MAIN_GOPATH}/bin/
        echo "MAIN_GOPATH=${MAIN_GOPATH}"
    done
fi
backend.tf
terraform {
  backend "gcs" {
    bucket      = "my-terraform-bucket"
    prefix      = "terraform/state"
    credentials = ".google.json"
  }
  required_version = "v0.12.20"
}
You can use pre-installed plugins either by putting the plugin binaries in the same directory as the Terraform binary, or by setting the -plugin-dir flag.
By default, all plugins are downloaded into the .terraform folder. For example, the null provider will be available at the location below:
.terraform\providers\registry.terraform.io\hashicorp\null\3.0.0\windows_amd64
Create a new folder like "terraform-plugins" inside the Terraform directory and copy all of the contents, including the registry.terraform.io folder mentioned in the example above, into the created folder.
Now run the terraform init command with the -plugin-dir flag, specifying the complete directory path:
terraform init -plugin-dir="/terraform-plugins"
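A hedged Linux equivalent of those steps (paths assumed; run inside the root module after one successful online init):
# copy the already-downloaded providers into a reusable folder
mkdir -p terraform-plugins
cp -r .terraform/providers/registry.terraform.io terraform-plugins/
# later, or on an offline machine, initialize from that folder only
terraform init -plugin-dir="$PWD/terraform-plugins"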
