Bitbucket pipeline deploying serverless stack fails - bitbucket-pipelines

I've been deploying serverless-stack (SST) applications to AWS via a Bitbucket pipeline.
The pipeline triggers a build on every merge to the main branch – the usual setup.
The problem starts with the following script:
npm run deploy -- --stage qa --verbose
A random set of stacks deploys successfully, e.g.
Building
...
...
...
Building function src/projects/queue.roleCreation
This is followed by a series of deployment-related output:
Deploying stacks
intr-armada-X | CREATE_IN_PROGRESS | AWS::ApiGatewayV2::Route | CountryRouteGETXXXXX | Resource creation Initiated
intr-armada-X | CREATE_COMPLETE | AWS::ApiGatewayV2::Route | CountryRouteGETXXXXX
Checking stack status intr-armada-X
✅ intr-armada-X
The above and a few others succeed; however, there also tends to be a random list of failures (I say random because the next time I redeploy, with the same pipeline and build number, it completes):
deploy stack: done intr-armada-Y {
status: 'failed',
statusReason: undefined,
account: undefined,
outputs: undefined,
exports: undefined
}
❌ intr-armada-Y failed: The intr-armada-Y stack failed to deploy.
deploy stack: poll stack status: unknown
deploy stack: poll stack status: unknown
deploy stack: poll stack status: unknown
deploy stack: poll stack status: unknown
deploy stack: run cdk deploy: exited with code null
deploy stack: run cdk deploy: exited with code null
deploy stack: run cdk deploy: exited with code null
deploy stack: run cdk deploy: exited with code null
deploy stack: poll stack status: cp exited
...and a few others...
Our pipeline includes more than a dozen separate microservices deployed to AWS (Lambda functions acting as proxies for API Gateway).
I have searched around and even recently updated SST to v1 to see if it resolves the issue, but no dice. Same errors.
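For reference, the relevant pipeline step looks roughly like the sketch below (the image tag, branch name and 2x memory size are illustrative placeholders rather than our exact config):

image: node:16

pipelines:
  branches:
    main:
      - step:
          name: Deploy to QA
          size: 2x                  # extra memory for the step; SST/CDK deploys can be memory-hungry
          caches:
            - node
          script:
            - npm ci
            - npm run deploy -- --stage qa --verbose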
Any help would be greatly appreciated!

Related

Problems hosting a Node.js app on AWS Elastic Beanstalk with a code pipeline from GitHub

I'm trying to import code from GitHub via a pipeline to Elastic Beanstalk, and I get these errors in eb-engine.log:
2022/06/03 21:48:30.995763 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment: You didn't specify a Node.js version in the 'package.json' file in your source bundle. The deployment didn't install a specific Node.js version.","timestamp":1654292898290,"severity":"INFO"},{"msg":"Instance deployment: 'npm' failed to install dependencies that you defined in 'package.json'. For details, see 'eb-engine.log'. The deployment failed.","timestamp":1654292910995,"severity":"ERROR"},{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1654292910995,"severity":"ERROR"}]}]}
2022/06/03 21:48:30.996582 [INFO] Platform Engine finished execution on command: app-deploy
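The first event points at the missing runtime declaration; a minimal sketch of the kind of engines entry Elastic Beanstalk looks for in package.json (the name and version range are placeholders, not taken from the question):

{
  "name": "my-app",
  "version": "1.0.0",
  "engines": {
    "node": ">=16.0.0"
  }
}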

GCP Gitlab CI Cloud Function NodeJs Deployment failed: RESOURCE_ERROR 400

When trying to deploy a Cloud Function on GCP using SLS, I receive the following exception:
{"ResourceType":"gcp-types/cloudfunctions-v1:projects.locations.functions","ResourceErrorCode":"400","ResourceErrorMessage":"Build failed: Build error details not available."}
The solution was to pin the image the CI runs on by adding the following key/value at the top of .gitlab-ci.yml: image: node:12-alpine.
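A minimal sketch of how that looks in context (the job name and deploy command are placeholders, not part of the original answer):

image: node:12-alpine

deploy:
  stage: deploy
  script:
    - npx serverless deploy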

Azure functions deploy from github actions results in Error: connect ECONNREFUSED 127.0.0.1:443

I have the following YAML file in my .github/workflows folder, which I got from here.
name: Deploy Python project to Azure Function App

on:
  [push]

env:
  AZURE_FUNCTIONAPP_NAME: zypp-covid   # set this to your application's name
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.'  # set this to the path to your web app project, defaults to the repository root
  PYTHON_VERSION: '3.8'                # set this to the python version to use (supports 3.6, 3.7, 3.8)

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
    - name: 'Checkout GitHub Action'
      uses: actions/checkout@master

    - name: Setup Python ${{ env.PYTHON_VERSION }} Environment
      uses: actions/setup-python@v1
      with:
        python-version: ${{ env.PYTHON_VERSION }}

    - name: 'Resolve Project Dependencies Using Pip'
      shell: bash
      run: |
        pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}'
        python -m pip install --upgrade pip
        pip install -r requirements.txt --target=".python_packages/lib/site-packages"
        popd

    - name: 'Run Azure Functions Action'
      uses: Azure/functions-action@v1
      id: fa
      with:
        app-name: ${{ env.AZURE_FUNCTIONAPP_NAME }}
        package: ${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}
        publish-profile: ${{ secrets.AZURE_FUNCTIONAPP_PUBLISH_PROFILE }}
This results in the following error, which isn't very informative, so I have no clue how to solve it:
Using SCM credential for authentication, GitHub Action will not perform resource validation.
Error: ECONNREFUSED
Error: Execution Exception (state: ValidateAzureResource) (step: Invocation)
Error: When request Azure resource at ValidateAzureResource, Get Function App Settings : Failed to acquire app settings (SCM)
Error: Failed to fetch Kudu App Settings.
Error: connect ECONNREFUSED 127.0.0.1:443
Error: Error: Failed to fetch Kudu App Settings.
Error: connect ECONNREFUSED 127.0.0.1:443
at Kudu.<anonymous> (/home/runner/work/_actions/Azure/functions-action/v1/node_modules/azure-actions-appservice-rest/Kudu/azure-app-kudu-service.js:62:23)
at Generator.throw (<anonymous>)
at rejected (/home/runner/work/_actions/Azure/functions-action/v1/node_modules/azure-actions-appservice-rest/Kudu/azure-app-kudu-service.js:6:65)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
Error: Deployment Failed!
Could someone point me in the right direction?
I had exactly the same error.
I created a brand-new HTTP-trigger function in .NET Core 3.1 (a new Azure Function), downloaded the publish profile, and found a URL issue that Microsoft is responsible for: the URL was not correct.
Once past that, I ended up with the same error as above.
This was infuriating: working it through from the basics and it still fails.
The URL issue is described here:
https://learn.microsoft.com/en-us/answers/questions/137869/publish-profile-publishurl-needs-to-be-adjusted-af.html
The problem lies with the updated URL.
The original in my test scenario from the publish profile was:
publishUrl="waws-prod-ln1-035.publish.azurewebsites.windows.net:443"
This I changed to:
publishUrl="ps-az-02.scm.azurewebsites.windows.net:443"
That gives the same result as your code.
After closely examining an older profile, and after many hours of troubleshooting the whole publishing problem, I changed it to:
publishUrl="ps-az-02.scm.azurewebsites.net:443"
Which works!
Hope this helps someone.

gcloud nodejs cloudbuild.yaml stuck in infinite loop

I have a Node.js website that runs fine when I run it locally with node server.js. I'm trying to deploy it online with GCP. I created a project, enabled the App Engine API, and gave my '@cloudbuild.gserviceaccount.com' account the App Engine Deployer role.
I also added a cloudbuild.yaml file to my repo:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy"]
Now if I run gcloud app deploy, my build gets triggered, but it appears to cause an infinite loop of builds. For example, starting with nothing currently running, I run gcloud app deploy and it begins a new build (1a19d9ba).
But something about this build keeps triggering new builds: in my local terminal, the output from running gcloud app deploy just keeps starting new builds:
$ gcloud app deploy
Services to deploy:
descriptor: [/mnt/c/Users/marti/Documents/projects/martinbarker/app.yaml]
source: [/mnt/c/Users/marti/Documents/projects/martinbarker]
target project: [martinbarker2]
target service: [default]
target version: [20201003t165547]
target url: [https://martinbarker2.wl.r.appspot.com]
Do you want to continue (Y/n)? y
Beginning deployment of service [default]...
Building and pushing image for service [default]
Started cloud build [1a19dxxxxx627d].
To see logs in the Cloud Console: https://console.cloud.google.com/cloud-build/builds/1axxx
------------------------------------------------- REMOTE BUILD OUTPUT --------------------------------------------------starting build "1a19xxxxxx27d"
FETCHSOURCE
Fetching storage object: gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.20xxx47:latest#160xxx288
Copying gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.202xxx547:latest#16xxx88...
| [1 files][117.7 MiB/117.7 MiB]
Operation completed over 1 objects/117.7 MiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Services to deploy:
descriptor: [/workspace/app.yaml]
source: [/workspace]
target project: [martinbarker2]
target service: [default]
target version: [20201003t235749]
target url: [https://martinbarker2.wl.r.appspot.com]
Do you want to continue (Y/n)?
WARNING: Unable to verify that the Appengine Flexible API is enabled for project [martinbarker2]. You may not have permission to list enabled services on this project. If it is not enabled, this may cause problems in running your deployment. Please ask the project owner to ensure that the Appengine Flexible API has been enabled and that this account has permission to list enabled APIs.
Beginning deployment of service [default]...
Building and pushing image for service [default]
Started cloud build [d0d0xxx9a987].
To see logs in the Cloud Console: https://console.cloud.google.com/cloud-build/builds/d0d0xxx987?project=114941087848
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "d0d0dxxx987"
FETCHSOURCE
Fetching storage object: gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.20201xxx49:latest#160176947xx11
Copying gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.20xxx:latest#16xxx548211...
\ [1 files][117.7 MiB/117.7 MiB]
Operation completed over 1 objects/117.7 MiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Services to deploy:
descriptor: [/workspace/app.yaml]
source: [/workspace]
target project: [martinbarker2]
target service: [default]
target version: [20201003t235818]
target url: [https://martinbarker2.wl.r.appspot.com]
Do you want to continue (Y/n)?
WARNING: Unable to verify that the Appengine Flexible API is enabled for project [martinbarker2]. You may not have permission to list enabled services on this project. If it is not enabled, this may cause problems in running your deployment. Please ask the project owner to ensure that the Appengine Flexible API has been enabled and that this account has permission to list enabled APIs.
Beginning deployment of service [default]...
Building and pushing image for service [default]
Started cloud build [683bb8cxxx0368f36].
To see logs in the Cloud Console: https://console.cloud.google.com/cloud-build/builds/683bxxxf36?project=114xx48
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "683bb8xx368f36"
FETCHSOURCE
Fetching storage object: gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.2020xxx18:latest#16xxx376
Copying gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.202xxx18:latest#16xx376...
| [1 files][117.7 MiB/117.7 MiB]
Operation completed over 1 objects/117.7 MiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Services to deploy:
descriptor: [/workspace/app.yaml]
source: [/workspace]
target project: [martinbarker2]
target service: [default]
target version: [20201003t235843]
target url: [https://martinbarker2.wl.r.appspot.com]
Do you want to continue (Y/n)?
WARNING: Unable to verify that the Appengine Flexible API is enabled for project [martinbarker2]. You may not have permission to list enabled services on this project. If it is not enabled, this may cause problems in running your deployment. Please ask the project owner to ensure that the Appengine Flexible API has been enabled and that this account has permission to list enabled APIs.
Beginning deployment of service [default]...
Building and pushing image for service [default]
Started cloud build [feecxxx3cd86].
To see logs in the Cloud Console: https://console.cloud.google.com/cloud-build/builds/feexxx87848
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "feec9xxxx3cd86"
FETCHSOURCE
Fetching storage object: gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.2020100xxx082
Copying gs://staging.martinbarker2.appspot.com/us.gcr.io/martinbarker2/appengine/default.202xxx82...
- [1 files][117.7 MiB/117.7 MiB]
Operation completed over 1 objects/117.7 MiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/gcloud
Services to deploy:
descriptor: [/workspace/app.yaml]
source: [/workspace]
target project: [martinbarker2]
target service: [default]
target version: [20201003t235909]
target url: [https://martinbarker2.wl.r.appspot.com]
Do you want to continue (Y/n)?
This goes on, and by now I have many more builds.
My app.yaml:
runtime: custom
env: flex
manual_scaling:
  instances: 2
Is it something with my cloudbuild.yaml file? I don't have a Dockerfile in my folder.
"It's not a bug; it's a feature!" But it's not documented, or I didn't find where! Actually, with App Engine Flex custom runtime, you will create a container. You can define either a cloudbuild.yaml file or a Dockerfile to describe the container creation. And this container is created with Cloud Build.
For information, if you set a specific language runtime, Buildpack is used to create the container, still on Cloud Build; however a Dockerfile is no longer required
So, in your case, as you describe, you have a cloudbuild.yaml file that deploys an App Engine flex custom runtime, that call a Cloud Build to build the container, with the cloudbuild.yaml file in parameters that deploys..... (loop!)
OK, now, how to fix this? Two solutions:
1. Change the name of your cloudbuild.yaml file so it doesn't match the default naming (cloudbuild-noloop.yaml, for example), and set that name in the trigger configuration or in the gcloud builds submit --config=cloudbuild-noloop.yaml command.
2. Update your cloudbuild.yaml deploy step like this:
steps:
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "--ignore-file=cloudbuild.yaml"]

Docker compose failing to run on Windows-2019 ADO build agent

We're using the Windows-2019 vmImage for our hosted build agents, which have both docker and docker-compose installed (https://github.com/actions/virtual-environments/blob/master/images/win/Windows2019-Readme.md).
But as soon as we run any docker-compose command (whether 'up' or 'down' or other), we immediately hit an error with little feedback:
Error: 3/30/2020 2:09:06 PM:
At D:\a\1\s\psakefile.ps1:16 char:5 + docker-compose down + ~~~~~~~~~~~~~~~~~~~ [<<==>>] Exception: Removing network s_default
##[error]PowerShell exited with code '1'.
Any ideas? Do we need to initialise something on the build agent first so that docker-compose works? Everything works locally, tested against numerous machines.
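For context, the step in question has roughly the following azure-pipelines.yml shape (the psake invocation and module install are assumptions inferred from the error path above, not our exact pipeline):

pool:
  vmImage: 'windows-2019'

steps:
  - powershell: |
      Install-Module psake -Force -Scope CurrentUser
      Invoke-psake .\psakefile.ps1
    displayName: 'Run psake build (calls docker-compose up/down)'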
