Trying to add a CloudFront invalidation command in buildspec.yml throws 254 error

I'm trying to invalidate the CloudFront cache after a build has finished, and I get the following error in CodeBuild:
[Container] 2022/05/16 15:46:11 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: aws cloudfront create-invalidation --distribution-id myid --paths '/*'. Reason: exit status 254
Here's my buildspec definition:
version: 0.2
env:
  variables:
    APP_NAME: "managerui"
phases:
  install:
    runtime-versions:
      nodejs: 14.x
    commands:
      - echo install process started
      - cd src/UI/managerui/
      - ls
      - npm install && npm install -g @angular/cli
  build:
    commands:
      - echo build process started now
      - ls
      - ng build --configuration=production
  post_build:
    commands:
      - echo build process finished, we should upload to S3 now
      - ls
      - cd dist/
      - ls -la
      - aws s3 sync . s3://ett-manager-ui --delete
      - aws cloudfront create-invalidation --distribution-id myid --paths '/*'
Do you see anything that's wrong? I've tried running the create-invalidation command on my laptop and it works.
Thanks in advance
UPDATE
I've resolved it: the problem was a missing permission. I added
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "cloudfront:UpdateDistribution",
        "cloudfront:DeleteDistribution",
        "cloudfront:CreateInvalidation"
      ],
      "Resource": "arn:aws:cloudfront::<account_id>:distribution/<distribution_id>"
    }
  ]
}
and it works fine.
This can be closed
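For reference, here's a sketch of the corrected invalidation invocation. DIST_ID is a placeholder for the real distribution id, and the path is quoted so the shell doesn't glob-expand /* against the working directory:

```shell
# Hedged sketch: correct shape of the invalidation command.
# DIST_ID is a placeholder; '/*' stays quoted to prevent shell globbing.
DIST_ID="myid"
cmd=(aws cloudfront create-invalidation --distribution-id "$DIST_ID" --paths '/*')
printf '%s\n' "${cmd[*]}"
```

The same quoting applies when the command is embedded as a buildspec command line.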

Related

Getting Permission Error Deploying a Cloud Function with Private Dependency using Cloud Build

I'm trying to deploy a Cloud Function (Node.js + TypeScript) using Cloud Build.
What's unique is that my Cloud Function uses a private dependency, so that my package.json looks like this:
"dependencies": {
  "@foo/my_private_repo": "git+ssh://git@github.com:foo/my_private_repo.git", // this is the tricky part
  "@google-cloud/functions-framework": "^3.1.2",
  "axios": "^0.27.2",
  "dotenv": "^16.0.2"
},
This is working totally fine on my local computer, but I'm having a hard time deploying.
I have followed the Cloud Build document, yet I'm getting the following error.
Step #1: npm ERR! git@github.com: Permission denied (publickey).
Step #1: npm ERR! fatal: Could not read from remote repository.
Step #1: npm ERR!
Step #1: npm ERR! Please make sure you have the correct access rights
Step #1: npm ERR! and the repository exists.
My cloudbuild.yaml looks like:
steps:
  # reads the deploy key of the private repo from Secret Manager, and setup ssh
  - name: gcr.io/cloud-builders/git
    secretEnv: ['SSH_KEY']
    entrypoint: bash
    args:
      - -c
      - |
        echo "$$SSH_KEY" >> /root/.ssh/id_rsa
        chmod 400 /root/.ssh/id_rsa
        cp known_hosts.github /root/.ssh/known_hosts
    volumes:
      - name: ssh
        path: /root/.ssh
  # deploy
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    args:
      - gcloud
      - functions
      - deploy
      - my-function
      - --gen2
      - --region=asia-northeast1
      - --trigger-http
      - --runtime=nodejs16
      - --entry-point=myFunction
      - --env-vars-file=.env.staging.yml
      - --allow-unauthenticated
    volumes:
      - name: ssh
        path: /root/.ssh
availableSecrets:
  secretManager:
    - versionName: projects/<PROJECT_ID>/secrets/github_deploy_key/versions/latest
      env: SSH_KEY
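One detail of the ssh-setup step above worth sanity-checking: /root/.ssh may not exist in the builder image before the `echo >>` runs, and `>>` will stack key material across retried builds. A minimal, locally runnable sketch of the same step, using a scratch directory and a dummy key instead of /root/.ssh and the real secret:

```shell
# Hedged local sketch of the ssh-setup step; SSH_DIR and the key material
# are stand-ins for /root/.ssh and $$SSH_KEY.
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR"                                     # guard: the dir may not exist yet
printf '%s\n' "dummy-private-key" > "$SSH_DIR/id_rsa"   # '>' not '>>': no key stacking
chmod 400 "$SSH_DIR/id_rsa"
stat -c '%a' "$SSH_DIR/id_rsa"                          # verify the 400 mode the step needs
```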
I have made sure that there isn't any problem with the key because I can access the repo using this key from my CLI.
$ ssh -T -i id_github git@github.com
Hi foo/my_private_repo! You've successfully authenticated, but GitHub does not provide shell access.
The funny thing is that steps other than the deploy work successfully; only the deploy step seems to have a problem.
steps:
  # reads the deploy key of the private repo from Secret Manager, and setup ssh
  - name: gcr.io/cloud-builders/git
    secretEnv: ['SSH_KEY']
    # ...same
  # npm install
  # this works fine
  - name: node:16
    entrypoint: npm
    args:
      - install
    volumes:
      - name: ssh
        path: /root/.ssh
  # clone
  # this works fine too
  - name: 'gcr.io/cloud-builders/git'
    args:
      - clone
      - git@github.com:foo/my_private_repo
    volumes:
      - name: 'ssh'
        path: /root/.ssh
Any ideas that might solve the problem?

Semantic release not accepting GITLAB_TOKEN on gitlab private repository

Here is the error log message:
[3:40:55 PM] [semantic-release] › ✖ The command "git push --dry-run --no-verify https://gitlab-ci-token:[secure]@[repository-url].git HEAD:main" failed with the error message remote: You are not allowed to upload code.
fatal: unable to access 'https://gitlab-ci-token:[secure]@[repository-url]/': The requested URL returned error: 403.
I have a GITLAB_TOKEN set up in the repository settings with all the necessary permissions, but it seems it isn't even being used.
Here is my .releaserc.json config:
{
  "branches": ["main", { "name": "beta", "prerelease": true }],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/changelog",
    "@semantic-release/npm",
    "@semantic-release/gitlab",
    [
      "@semantic-release/git",
      {
        "assets": ["package.json", "package-lock.json", "CHANGELOG.md"],
        "message": "chore(release): ${nextRelease.version} [skip ci]\n\n${nextRelease.notes}"
      }
    ]
  ]
}
Here is my .gitlab-ci.yml config:
# NodeJs image
image: node:16

# STAGES
stages:
  - checks
  - build
  - release

# SETUP
before_script:
  - node -v
  - npm config set //registry.npmjs.org/:_authToken ${NPM_TOKEN}
  - npm ci --cache .npm --prefer-offline

# JOBS
lint:
  stage: checks
  script:
    - npm run lint

test:
  stage: checks
  script:
    - npm run test:ci

build:
  stage: build
  script:
    - npm run build

release:
  stage: release
  only:
    - main
  script:
    - npx semantic-release
And here are the semantic-release dependencies I'm using and their versions:
"@semantic-release/changelog": "^6.0.1",
"@semantic-release/git": "^10.0.1",
"@semantic-release/gitlab": "^7.0.4",
"semantic-release": "^19.0.2",
The user that owns the GITLAB_TOKEN is a member of the repository as a maintainer (just like in the other repositories where semantic-release is working).
Any suggestions?
So in the end, the problem was that I had to add the created GITLAB_TOKEN access token under Settings > CI/CD > Variables.
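A cheap guard that would have surfaced this earlier: fail the release job fast when the token variable never reached it (e.g. because it wasn't added under Settings > CI/CD > Variables, or is marked protected while the branch isn't). A sketch, with a dummy value standing in for the variable GitLab CI would inject in a real job:

```shell
# Hedged sketch of a pre-release guard; in a real job GitLab CI injects
# GITLAB_TOKEN, so the dummy assignment below exists only to make this runnable.
GITLAB_TOKEN="dummy-token-for-illustration"
if [ -z "${GITLAB_TOKEN:-}" ]; then
  echo "GITLAB_TOKEN is not set in this job" >&2
  exit 1
fi
echo "GITLAB_TOKEN present, proceeding to semantic-release"
```

In the pipeline above this would go in the release job's script, just before `npx semantic-release`.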

Why is my Nightwatch.js test failing to connect to localhost port 9515 on CircleCI vs. successfully running locally?

I am attempting to learn how to set up CI tests using Nightwatch and CircleCI. I have an example test running locally, but it fails when run by CircleCI. I am getting this output:
#!/bin/bash -eo pipefail
sudo npm test -- --headless
> nw@1.0.0 test /home/seluser/project
> nightwatch "--headless"
[First Test] Test Suite
=======================
⠋ Connecting to localhost on port 9515...
⠙ Connecting to localhost on port 9515...
⚠ Error connecting to localhost on port 9515.
_________________________________________________
TEST FAILURE: 1 error during execution; 0 tests failed, 0 passed (384ms)
✖ firstTest
Error: An error occurred while retrieving a new session: "unknown error: Chrome failed to start: exited abnormally."
at endReadableNT (_stream_readable.js:1201:12)
at processTicksAndRejections (internal/process/task_queues.js:84:21)
SKIPPED:
- Demo test ecosia.org
npm ERR! Test failed. See above for more details.
Exited with code exit status 1
CircleCI received exit code 1
My suspicion is that the docker image I am using is not set up correctly for what I'm trying to do, but I'm not really sure. This is my config.yml:
version: 2.1
jobs:
  build:
    docker:
      - image: selenium/standalone-chrome:3.1.0
    steps:
      - checkout
      - run: sudo apt-get update
      - run: sudo apt-get install curl -y
      - run: sudo curl -sL https://deb.nodesource.com/setup_13.x | sudo -E bash -
      - run: sudo apt-get install -y nodejs
      - run: sudo npm install chromedriver
      - run: sudo npm install nightwatch
      - run: sudo npm test -- --headless
Finally, here is my nightwatch.conf.js:
module.exports = {
  // An array of folders (excluding subfolders) where your tests are located;
  // if this is not specified, the test source must be passed as the second argument to the test runner.
  src_folders: ['tests'],

  webdriver: {
    start_process: true,
    port: 9515,
    server_path: require('chromedriver').path,
    cli_args: [
      '--no-sandbox',
      '--headless',
      '--port=9515',
      '--verbose'
    ]
  },

  test_settings: {
    default: {
      launch_url: 'https://nightwatchjs.org',
      desiredCapabilities: {
        browserName: 'chrome',
        alwaysMatch: {
          'chromeOptions': {
            "args": [
              '--headless',
              '--verbose'
            ],
          }
        }
      }
    }
  }
};
I think that's all the relevant information. I would greatly appreciate some insight as to what is going wrong here!
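One way to narrow down the "Error connecting to localhost on port 9515" symptom is to probe the port from inside the CI container before the tests run, to separate "chromedriver never started" from "Nightwatch can't reach it". A hedged sketch using bash's /dev/tcp (assumes bash and coreutils `timeout` exist in the image):

```shell
# Hedged sketch: check whether anything is listening on a local TCP port.
port_open() {
  timeout 2 bash -c "exec 3<>/dev/tcp/127.0.0.1/$1" 2>/dev/null
}
if port_open 9515; then
  echo "something is listening on 9515"
else
  echo "nothing listening on 9515 (chromedriver not up, or failed to start)"
fi
```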
Just in case it helps, here's the successful output on my local machine:
ubuntu@ubuntu:~/nw$ npm test
> nw@1.0.0 test /home/ubuntu/nw
> nightwatch
[First Test] Test Suite
=======================
ℹ Connected to localhost on port 9515 (358ms).
Using: chrome (80.0.3987.149) on Linux platform.
Running: Demo test ecosia.org
✔ Element <body> was visible after 31 milliseconds.
✔ Testing if the page title contains 'Ecosia' (6ms)
✔ Testing if element <input[type=search]> is visible (30ms)
✔ Testing if element <button[type=submit]> is visible (34ms)
✔ Testing if element <.mainline-results> contains text 'Nightwatch.js' (138ms)
OK. 5 assertions passed. (3.481s)

Gitlab-ci is not using the node version I have specified

I'm very new to GitLab and am trying to set up the CI/CD system for my project.
My .gitlab-ci.yml file is as follows:
image: node:10.15.3

cache:
  paths:
    - node_modules/

before_script:
  - node -v
  - npm install

stages:
  - test

all-tests:
  stage: test
  script:
    - npm run lint
    - npm run test:unit:cov
    - npm run test:server
However, the node -v line outputs 6.12.0, not 10.15.3, and my tests are failing because the node version is wrong.
How do I tell GitLab CI to use Node 10.15.3?
You are not tagging your job, so perhaps it is running on a shell executor rather than a docker executor. Check for /.dockerenv in your job to confirm you're running in a container.
Given this simple pipeline (based on yours):
image: node:10.15.3

before_script:
  - node -v

stages:
  - test

all-tests:
  tags:
    - docker
  stage: test
  script:
    # are we in a docker-executor?
    - if [ -f /.dockerenv ]; then echo "docker-executor"; fi
I get the following output, which suggests we are pulling the correct node image version:
Running with gitlab-runner 11.3.1 (0aa5179e)
on gitlab-docker-runner fdcd6979
Using Docker executor with image node:10.15.3 ...
Pulling docker image node:10.15.3 ...
Using docker image sha256:64c810caf95adbe21b5f41be687aa77aaebc197aa92f2b2283da5d57269d2b92 for node:10.15.3 ...
Running on runner-fdcd6979-project-862-concurrent-0 via af166b7f5bef...
Fetching changes...
HEAD is now at b46bb77 output container id
From https://gitlab/siloko/node-test
b46bb77..adab1e3 master -> origin/master
Checking out adab1e31 as master...
Skipping Git submodules setup
$ node -v
v10.15.3
$ if [ -f /.dockerenv ]; then echo "docker-executor"; fi
docker-executor
Job succeeded
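The /.dockerenv check used above generalizes into a small helper; a sketch (Docker creates that file inside containers, so its presence distinguishes a docker executor from a shell executor):

```shell
# Hedged sketch of the executor check from the answer above.
executor_kind() {
  if [ -f /.dockerenv ]; then
    echo "docker-executor"
  else
    echo "shell-executor"
  fi
}
executor_kind
```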

AWS CloudFormation launch Hyperledger Fabric Failed with Error: failed to create: [EC2InstanceForDev]

Following the aws documentation: https://docs.aws.amazon.com/blockchain-templates/latest/developerguide/blockchain-templates-hyperledger.html
Using the IAM policy from the document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage",
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
But the stack failed to launch. I then added all of the permissions below:
AmazonEC2FullAccess
AmazonEC2ContainerRegistryFullAccess
AmazonS3FullAccess
AmazonEC2ContainerRegistryReadOnly
AmazonS3ReadOnlyAccess
AmazonEC2ContainerServiceFullAccess
AdministratorAccess
But still no luck, and got this error:
The following resource(s) failed to create: [EC2InstanceForDev].
What IAM policy should I add to resolve this error?
Thanks!
The official AWS Blockchain CloudFormation template for Hyperledger Fabric is a nested template (the base template calls another template, which does all the setup on an EC2 instance that it creates itself).
The problem is that it does everything on the EC2 instance except installing docker-compose, so it throws a "docker-compose: command not found" error at the end, which causes the CloudFormation template to fail (EC2InstanceForDev) and roll back. So instead of using the CloudFormation template, we can run the same script manually on an EC2 instance with one small change: install docker-compose beforehand. The rest of the setup remains the same:
1. Create a VPC.
2. Create public subnets.
3. Create an EIP if you want to attach it later.
4. Create a key pair for SSH.
5. Create an IAM role and policy.
6. Create a security group with inbound 8080 (TCP) and 22 (SSH).
7. Launch an EC2 instance with the resources created in steps 1 to 6.
Preferred AMIs -
ami-1853ac65 for us-east-1
ami-25615740 for us-east-2
ami-dff017b8 for us-west-2
Docker image repository (ECR account) -
354658284331 for us-east-1
763976151875 for us-east-2
712425161857 for us-west-2
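The region-to-account mapping above can be expressed as a small helper so the registry URL doesn't have to be hand-edited per region. This is a sketch built from the values listed above, not part of the original script:

```shell
# Hedged helper: ECR registry URL for the AWS blockchain template images,
# using the per-region account numbers listed above.
ecr_registry() {
  case "$1" in
    us-east-1) acct=354658284331 ;;
    us-east-2) acct=763976151875 ;;
    us-west-2) acct=712425161857 ;;
    *) echo "unsupported region: $1" >&2; return 1 ;;
  esac
  echo "${acct}.dkr.ecr.$1.amazonaws.com/"
}
ecr_registry us-east-1
```

Its output matches the registry argument passed to first-run-standalone.sh in the script below.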
SCRIPT TO RUN ON EC2 (give the script execute permission with chmod +x) -
#!/bin/bash -x
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version
res=$?
echo $res
mkdir /tmp/fabric-install/
cd /tmp/fabric-install/
wget https://aws-blockchain-templates-us-east-1.s3.us-east-1.amazonaws.com/hyperledger/fabric/templates/simplenetwork/latest/HyperLedger-BasicNetwork.tgz -O /home/ec2-user/HyperLedger-BasicNetwork.tgz
cd /home/ec2-user
tar xzvf HyperLedger-BasicNetwork.tgz
rm /home/ec2-user/HyperLedger-BasicNetwork.tgz
chown -R ec2-user:ec2-user HyperLedger-BasicNetwork
chmod +x /home/ec2-user/HyperLedger-BasicNetwork/artifacts/first-run-standalone.sh
/home/ec2-user/HyperLedger-BasicNetwork/artifacts/first-run-standalone.sh us-east-1 example.com org1 org2 org3 mychannel 354658284331.dkr.ecr.us-east-1.amazonaws.com/ 354658284331
res=$?
echo $res
IAM policy which I attached to the role -
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:DescribeImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}
NOTE -
Please replace the AWS ECR account number and AWS region in the script above with the appropriate values for your region. The script also uses (example.com org1 org2 org3 mychannel); change these as required. They are the same RootDomain, Org1SubDomain, Org2SubDomain, Org3SubDomain, and ChannelName that you would enter in the CF template.
This whole process was tested in the us-east-1 region, where the script can be deployed as-is. To access the Hyperledger web monitor interface, browse to http://EC2-DNS or EIP, port 8080.
