Issue deploying Angular Node.js app on Google Cloud Build - node.js

I'm having trouble deploying my app through GCP Cloud Build.
Do I have to update to GLIBC_2.28, and if so, how do I do that on GCP Cloud Build?
I'm also assuming it could be the Node.js version, but I specified node 16.15.0 in my cloudbuild.yaml and package.json files... I thought that problem only affected versions 18.x.
In my app.yaml I also use a runtime of nodejs16.
Deploying my application works in GCP Cloud Shell, but I'm having trouble with GCP Cloud Build.
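For reference, the Node version pin in my package.json uses the usual engines field (an abbreviated sketch, not the full file):
{
  "engines": {
    "node": "16.15.0"
  }
}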
Do you want to continue (Y/n)?
Beginning deployment of service [default]...
╔════════════════════════════════════════════════════════════╗
╠═ Uploading 0 files to Google Cloud Storage ═╣
╚════════════════════════════════════════════════════════════╝
File upload done.
Updating service [default]...
.................................................................................................................................................................................................................................................................................................................................................................................................................................................failed.
ERROR: (gcloud.app.deploy) Error Response: [9] Cloud build 32fb7223-bf43-47ae-bd3b-787ba77c43f0 status: FAILURE
npm ERR! code 1
npm ERR! path /workspace/node_modules/@scarf/scarf
npm ERR! command failed
npm ERR! command sh -c node ./report.js
npm ERR! node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by node)
npm ERR! A complete log of this run can be found in:
npm ERR! /www-data-home/.npm/_logs/2022-07-05T12_26_54_920Z-debug-0.log
Full build logs: https://console.cloud.google.com/cloud-build/builds;region=europe-west1/32fb7223-bf43-47ae-bd3b-787ba77c43f0?project=772319101637
Edit:
Here's my cloudbuild.yaml:
steps:
# Install node packages
- name: node:16.15.0
  entrypoint: npm
  args: ['install']
# Build production files
- name: node:16.15.0
  entrypoint: npm
  args: ['run', 'build']
# Deploy to Google Cloud App Engine
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args: ['-c', 'gcloud app deploy']
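One possible factor worth noting (an assumption on my part, not something I verified): the node_modules installed in the first step lives in /workspace and gets uploaded by gcloud app deploy unless it is excluded, and the failing path in the log above is /workspace/node_modules/@scarf/scarf. A .gcloudignore along these lines keeps it out of the upload, since App Engine installs dependencies itself during its own build:
# .gcloudignore (illustrative; matches what gcloud generates by default for Node.js apps)
.gcloudignore
.git
node_modules/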

The problem was with my app.yaml file. I switched back to the python27 runtime (which had previously given me an issue with files not being found by the handlers, even though the build completed without problems), but I added a third handler and it works.
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /(.*\.(js|css|svg|png)(|\.map))$
  static_files: dist/\1
  upload: dist/(.*)(|\.map)
- url: /.*
  static_files: dist/index.html
  upload: dist/.
- url: /.*
  script: auto
  secure: always
  redirect_http_response_code: 301

skip_files:
- e2e/
- node_modules/
- src/
- ^(.*/)?\..*$
- ^(.*/)?.*\.json$
- ^(.*/)?.*\.md$
- ^(.*/)?.*\.yaml$
- ^LICENSE
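A quick way to sanity-check the handlers after deploying (hypothetical URLs; PROJECT_ID is a placeholder):
curl -I https://PROJECT_ID.appspot.com/main.js     # first handler: served as a static file from dist/
curl -I https://PROJECT_ID.appspot.com/some/route  # second handler: falls through to dist/index.html for the Angular router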

Related

Astro 2.0 on AWS Amplify

I'm trying to use SSR with AWS Amplify, but when I activate Node.js, change the output type to server, and deploy, I get a 404 error page.
To build and run the project I have to run two npm commands: npm run build and, after that, npm run server. But the deploy is not working.
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
    postBuild:
      commands:
        - npm run server
  artifacts:
    baseDirectory: /dist
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
Astro has adapters for each SSR cloud solution, and I haven't seen AWS listed. You could use Vercel or Cloudflare and install the adapter for one of those servers:
npx astro add cloudflare
In your case, I think Amplify needs the Node.js adapter, and that one already exists. Use this instead:
npx astro add node
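With the Node adapter in its default standalone mode, the build emits a server entry that you start with plain node (the path below is the adapter's default; verify it against your astro.config):
node ./dist/server/entry.mjs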

How to run Node.js code in AWS Amplify

I have a React/Node app which I am trying to host on AWS Amplify. On the first try my app deployed, but I saw that some pages/buttons were not working because of Node.js. Then I did some searching and saw that I need to modify the amplify.yml file to:
version: 1
backend:
  phases:
    build:
      commands:
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
frontend:
  phases:
    preBuild:
      commands:
        - yarn install
    build:
      commands:
        - yarn build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
I am getting build issues (build timeout) with the above build settings.
Make sure you have created a user with AdministratorAccess-Amplify privileges in IAM.
Then it is necessary to replace line 6 of the Hands-On amplify.yml with
npm install -g @aws-amplify/cli
The code should now display correctly to complete the Hands-On.
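If I read "line 6" against the snippet above, that is the comment line in the backend build phase, so the phase would become something like this (a sketch, untested):
backend:
  phases:
    build:
      commands:
        - npm install -g @aws-amplify/cli
        - amplifyPush --simple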

Static file referenced by handler not found: build/index.html - Bitbucket Pipeline React App Engine

I have an issue with my Bitbucket CI/CD pipeline. The pipeline itself runs fine, but the application is broken when I try to access it. The pipeline deploys a React App Engine Node.js application. The problem comes when I access the site. This is the error I receive in Google Logging "Static file referenced by handler not found: build/index.html".
If I deploy the application manually, I have no issues and the application works fine. This application error only occurs if the deployment happens in the Bitbucket pipeline.
Here is the app.yaml:
runtime: nodejs12
handlers:
# Serve all static files with url ending with a file extension
- url: /(.*\..+)$
  static_files: build/\1
  upload: build/(.*\..+)$
# Catch all handler to index.html
- url: /.*
  static_files: build/index.html
  upload: build/index.html
Here is the bitbucket-pipelines.yml:
pipelines:
  branches:
    master:
      - step:
          name: NPM Install and Build
          image: node:14.15.1
          script:
            - npm install
            - unset CI
            - npm run build
      - step:
          name: Deploy to App Engine
          image: google/cloud-sdk
          script:
            - gcloud config set project $GCLOUD_PROJECT
            - 'echo "$GOOGLE_APPLICATION_CREDENTIALS" > google_application_credentials.json'
            - gcloud auth activate-service-account --key-file google_application_credentials.json
            - gcloud app deploy app.yaml
Any help would be greatly appreciated. Thank you so much.
Bitbucket Pipelines does not save artifacts between steps by default, so the build/ directory produced in your first step is gone by the time the deploy step runs. You need to declare an artifacts config in the build step so that you can reference it in the deploy step. Something like this:
pipelines:
  branches:
    master:
      - step:
          name: NPM Install and Build
          image: node:14.15.1
          script:
            - npm install
            - unset CI
            - npm run build
          artifacts: # Declare artifacts here for later steps
            - build/**
      - step:
          name: Deploy to App Engine
          image: google/cloud-sdk
          script:
            - gcloud config set project $GCLOUD_PROJECT
            - 'echo "$GOOGLE_APPLICATION_CREDENTIALS" > google_application_credentials.json'
            - gcloud auth activate-service-account --key-file google_application_credentials.json
            - gcloud app deploy app.yaml
See here for more details: https://support.atlassian.com/bitbucket-cloud/docs/use-artifacts-in-steps/
Note that I have not tested this.

Problem with fs-extra while deploying Python using Serverless

I'm not much of an expert with npm and Bitbucket Pipelines, but I want to create a pipeline on Bitbucket to deploy my Python (Flask) project to AWS Lambda using Serverless. It deploys fine locally, but when I run it through the Bitbucket pipeline, this happens:
Error: Cannot find module '/opt/atlassian/pipelines/agent/build/node_modules/fs-extra/lib/index.js'. Please verify that the package.json has a valid "main" entry
Here is my code:
bitbucket-pipelines.yml
image: node:14.13.1-alpine3.10
pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          script:
            - apk add python3
            - npm install
            - npm install -g serverless
            - serverless config credentials --stage dev --provider aws --key ${AWS_DEV_LAMBDA_KEY} --secret ${AWS_DEV_LAMBDA_SECRET}
            - serverless deploy --stage dev
serverless.yml
service: serverless-flask
plugins:
  - serverless-python-requirements
  - serverless-wsgi

custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux

provider:
  name: aws
  runtime: python3.8
  stage: dev
  region: us-west-2

functions:
  app:
    handler: wsgi.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'
  alert:
    handler: alerts.run
    events:
      - schedule: rate(1 day)

package:
  exclude:
    - .venv/**
    - venv/**
    - node_modules/**
    - bitbucket-pipelines.yml
How can I fix this?
What helped me in the same situation was:
- Delete the /node_modules folder
- Run npm install inside the service folder
- Run serverless deploy
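In shell terms, run from the service folder, that is roughly:
rm -rf node_modules   # clear out the broken install
npm install           # reinstall dependencies, including fs-extra
serverless deploy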
I had the same issue and resolved the problem by (re)installing fs-extra:
npm install fs-extra

serverless-domain-manager cannot be found by serverless deployment

I was getting the below error while deploying the Lambda on AWS using a Bitbucket pipeline:
Error: Could not set up basepath mapping. Try running sls create_domain first.
Error: 'staging-api.simple.touchsuite.com' could not be found in API Gateway.
ConfigError: Missing region in config
at getDomain.then.then.catch (/opt/atlassian/pipelines/agent/build/node_modules/serverless-domain-manager/index.js:181:15)
at
at runMicrotasksCallback (internal/process/next_tick.js:121:5)
at _combinedTickCallback (internal/process/next_tick.js:131:7)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information
Operating System: linux
Node Version: 8.10.0
Framework Version: 1.61.3
Plugin Version: 3.2.7
SDK Version: 2.3.0
Components Core Version: 1.1.2
Components CLI Version: 1.4.0
So, I updated serverless-domain-manager to the newest version, 3.3.1.
I tried to deploy the Lambda after updating serverless-domain-manager, and now I am getting the below error.
Serverless Error
Serverless plugin "serverless-domain-manager" not found. Make sure it's installed and listed in the "plugins" section of your serverless config file.
serverless.yml snippet
plugins:
  - serverless-plugin-warmup
  - serverless-offline
  - serverless-log-forwarding
  - serverless-domain-manager

custom:
  warmup:
    schedule: 'cron(0/10 12-23 ? * MON-FRI *)'
    prewarm: true
    headers:
      - Content-Type
      - X-Amz-Date
      - Authorization
      - X-Api-Key
      - X-Amz-Security-Token
      - TS-Staging
      - x-tss-correlation-id
      - x-tss-application-id
  stage: ${opt:stage, self:provider.stage}
  domains:
    prod: api.simple.touchsuite.com
    staging: staging-api.simple.touchsuite.com
    dev: dev-api.simple.touchsuite.com
  customDomain:
    basePath: 'svc'
    domainName: ${self:custom.domains.${self:custom.stage}}
    stage: ${self:custom.stage}
bitbucket-pipeline.yml snippet
image: node:8.10.0
pipelines:
  branches:
    master:
      - step:
          caches:
            - node
          name: Run tests
          script:
            - npm install --global copy
            - npm install
            - NODE_ENV=test npm test
      - step:
          caches:
            - node
          name: Deploy to Staging
          deployment: staging # set to test, staging or production
          script:
            - npm install --global copy
            - npm run deploy:staging
            - npm run deploy:integrations:staging
            - node -e 'require("./scripts/bitbucket.js").triggerPipeline()'
I need some insight: what am I missing that is creating this error?
I have found with Bitbucket that I needed to add an npm install command to make sure my modules and the plugins were all installed before trying to run them. This may be what is missing in your case. You can also turn on caching for the resulting node_modules folder so that it doesn't have to download all the modules every time you deploy.
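As a sketch of both suggestions together (untested; names and scripts kept from the snippet in the question), the Deploy to Staging step could look like this:
- step:
    caches:
      - node                  # reuse node_modules between runs
    name: Deploy to Staging
    deployment: staging
    script:
      - npm install           # ensures serverless-domain-manager and the other plugins are present
      - npm install --global copy
      - npm run deploy:staging
      - npm run deploy:integrations:staging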
