AWS CodeDeploy with Bamboo - Node.js

We are developing a Node.js application that we want to launch in the Amazon cloud.
We have integrated Bamboo with our other Atlassian applications, and Bamboo transfers the build files to an Amazon S3 bucket.
The problem is: how can I move the application from S3 to the EC2 instances and start it there?
You can find my appspec.yml in the attachments, and my build directory contains the following files:
- client | files like index.html etc
- server | files like the server.js and socketio.js
- appspec.yml
- readme
Does anyone have an idea? I hope this contains all the important information you need.
Thank you :D
Attachments
version: 1.0
os: linux
files:
  - source: /
    destination: /

Update
I just realized that your appspec.yml seems to lack a crucial part for the deployment of a Node.js application (and most others for that matter), namely the hooks section. As outlined in AWS CodeDeploy Application Specification Files, the AppSpec file is used to manage each deployment as a series of deployment lifecycle events:
During the deployment steps, the AWS CodeDeploy Agent will look up the current event's name in the AppSpec file's hooks section. [...] If
the event is found in the hooks section, the AWS CodeDeploy Agent will
retrieve the list of scripts to execute for the current step. [...]
See for example the provided AppSpec file Example (purely for illustration, you'll need to craft a custom one appropriate for your app):
os: linux
files:
  - source: Config/config.txt
    destination: webapps/Config
  - source: source
    destination: /webapps/myApp
hooks:
  BeforeInstall:
    - location: Scripts/UnzipResourceBundle.sh
    - location: Scripts/UnzipDataBundle.sh
  AfterInstall:
    - location: Scripts/RunResourceTests.sh
      timeout: 180
  ApplicationStart:
    - location: Scripts/RunFunctionalTests.sh
      timeout: 3600
  ValidateService:
    - location: Scripts/MonitorService.sh
      timeout: 3600
      runas: codedeployuser
Without such an ApplicationStart command, AWS CodeDeploy has no instructions for what to do with your app (remember that CodeDeploy is technology agnostic, and thus needs to be told how to start your app server, for example).
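For the directory layout in the question, an ApplicationStart hook could point at a small start script. This is only a sketch: the destination path, the log file, and the script name (start_server.sh) are assumptions, not part of the original appspec.yml.

```shell
#!/bin/bash
# start_server.sh - hypothetical ApplicationStart hook script for the
# layout above (client/ and server/ copied to /webapps/myApp by the
# files section; both paths are assumptions)

APP_DIR=/webapps/myApp/server   # assumed destination of the server/ folder
LOG_FILE=/var/log/myapp.log

start_app() {
  cd "$APP_DIR" || return 1
  npm install --production >> "$LOG_FILE" 2>&1
  # run the server in the background so the hook script itself can exit
  nohup node server.js >> "$LOG_FILE" 2>&1 &
}

# CodeDeploy executes this script during ApplicationStart; uncomment to run:
# start_app
```

The matching AppSpec entry would then list this script under the ApplicationStart hook, e.g. location: Scripts/start_server.sh.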
Initial Answer
Section Overview of a Deployment within What Is AWS CodeDeploy? illustrates the flow of a typical AWS CodeDeploy deployment:
The key aspect regarding your question is step 4:
Finally, the AWS CodeDeploy Agent on each participating instance pulls the revision from the specified Amazon S3 bucket or GitHub
repository and starts deploying the contents to that instance,
following the instructions in the AppSpec file that's provided. [emphasis mine]
That is, once you have started an AWS CodeDeploy deployment, everything should work automatically. Accordingly, something seems to be configured not quite right; the most common issue is that the deployment group does not actually contain any running instances yet. Have you verified that you can deploy to your EC2 instance from CodeDeploy via the AWS Management Console?

What do you see if you log into the Deployments list of the AWS CodeDeploy console?
https://console.aws.amazon.com/codedeploy/home?region=us-east-1#/deployments
(change the region accordingly)
Also, the code is downloaded to /opt/codedeploy-agent/deployment-root/<agent-id?>/<deployment-id>/deployment-archive
And the logs are in /opt/codedeploy-agent/deployment-root/<agent-id?>/<deployment-id>/logs/scripts.logs
Make sure the agent has connectivity and permissions to download the release from the S3 bucket. That means having internet connectivity and/or using a proxy on the instance (setting http_proxy so that the CodeDeploy agent uses it), and setting an IAM instance profile with permission to read the S3 bucket.
Check the logs of the CodeDeploy agent to see whether it is connecting successfully: /var/log/aws/codedeploy-agent/codedeploy-agent.log
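If the instance only reaches the internet through a proxy, the http_proxy setting mentioned above can be sketched like this (the proxy host and port are placeholders):

```shell
# placeholder proxy host; the CodeDeploy agent picks these up from its
# environment when it downloads the revision from S3
export http_proxy=http://proxy.internal:3128
export https_proxy=$http_proxy

# restart the agent so it re-reads the environment (run on the instance):
# sudo service codedeploy-agent restart
```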

You need to create a deployment in CodeDeploy and then deploy a new revision using the drop-down arrow in CodeDeploy with your S3 bucket URL. However, the revision needs to be a zip/tar.gz/tar archive.
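A minimal packaging step could look like the following sketch; the directory and bucket names are placeholders. Note that the appspec.yml must sit at the root of the archive:

```shell
# package a build directory with appspec.yml at the archive root
# (build-demo/ stands in for the question's build directory)
mkdir -p build-demo/client build-demo/server
printf 'version: 0.0\nos: linux\n' > build-demo/appspec.yml
tar -czf revision.tar.gz -C build-demo .

# upload it to the bucket CodeDeploy will pull from
# (requires AWS credentials; bucket name is a placeholder):
# aws s3 cp revision.tar.gz s3://my-bucket/revision.tar.gz
```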

Related

How to implement XRay in NodeJS project?

I have a Node.js project with Docker and ECS in AWS, and I need to implement X-Ray to get the traces, but I couldn't get it to work yet.
I installed 'aws-xray-sdk' (npm install aws-xray-sdk), then I added
const AWSXRay = require('aws-xray-sdk');
in app.js
Then, before the routes I added
app.use(AWSXRay.express.openSegment('Example'));
and after the routes:
app.use(AWSXRay.express.closeSegment());
I hit some endpoints, but I can't see any trace or data in X-Ray. Maybe I need to set up something in AWS? I have a default group in X-Ray.
Thanks!
It sounds like you do not have the X-Ray daemon running in your ECS environment. This daemon must be used in conjunction with the SDKs, which send their trace data to it; the daemon then forwards the data to the AWS X-Ray service. The daemon listens for trace traffic on UDP port 2000. Read more about the daemon here:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html
See how to run the XRay Daemon on ECS via Docker here:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
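As a sketch, running the daemon locally and pointing the SDK at it might look like this; the sidecar hostname is a placeholder, and the ECS task definition itself is covered by the docs linked above:

```shell
# run the daemon locally for testing (image name from the AWS docs;
# -o skips the EC2 metadata check, -n sets the region):
# docker run --rm -p 2000:2000/udp amazon/aws-xray-daemon -o -n us-east-1

# tell the SDK where the daemon listens (it defaults to 127.0.0.1:2000);
# in ECS this would be the daemon sidecar's address (placeholder name):
export AWS_XRAY_DAEMON_ADDRESS=xray-daemon:2000
```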
You would need to look at either the X-Ray SDK and agent, or the OpenTelemetry SDK and collector (AWS Distro for OpenTelemetry).

Running 'serverless' command on terminal showing question in Chinese

I have this problem: if I install the Serverless Framework using npm install -g serverless, I get prompts in Chinese on my computer (Windows 11).
Once I run serverless in the terminal, I get this output:
C:\Users\User>serverless
当前未检测到 Serverless 项目，是否希望新建一个项目? ("No Serverless project detected. Would you like to create a new one?") Yes
请选择你希望创建的 Serverless 应用 ("Please select the Serverless application you want to create") (Use arrow keys or type to search)
> generate-usersig-for-tencent-im - Generate usersig for Tencent Cloud IM.
abc-starter
fullstack
laravel-starter - Laravel 项目模版 ("project template")
flask-starter - Flask 项目模版 ("project template")
eggjs-starter - Egg.js 项目模版 ("project template")
koa-starter - Koa.js 项目模版 ("project template")
(Move up and down to reveal more choices)
Expected output is this:
What do you want to make? (Use arrow keys)
AWS - Node.js - starter
AWS - Node.js - HTTP API
AWS - Node.js - Scheduled Task
AWS - Node.js - SQS Worker
AWS - Node.js - Express API
AWS - Node.js - Express API with DynamoDB
AWS - Python - Starter
AWS - Python - HTTP API
AWS - Python - Scheduled Task
AWS - Python - SQS Worker
AWS - Python - Flask API
AWS - Python - Flask API with DynamoDB
Other
I already fixed it, just by changing my date & time zone from Hong Kong to Singapore (UTC+8), and it worked for me.
Any chance you're located in China? The Serverless Framework's "Getting Started" guide makes explicit mention of this as a feature:
Note: users based in China get a setup centered around the Chinese Tencent provider. To use AWS instead, set the following environment variable: SERVERLESS_PLATFORM_VENDOR=aws.
As it mentions, simply set the environment variable as per their instructions to bypass this.
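On a Unix-like shell this is a single export; on Windows cmd you would use set, and in PowerShell $env: (the serverless re-run is commented out here):

```shell
# route the Serverless Framework onboarding to AWS instead of Tencent
export SERVERLESS_PLATFORM_VENDOR=aws

# then re-run the CLI in the same session:
# serverless
```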

sam local invoke timeout on newly created project (created via sam init)

I create a new project via sam init and I select the options:
1 - AWS Quick Start Templates
1 - nodejs14.x
8 - Quick Start: Web Backend
Then from inside the project root, I run sam local invoke -e ./events/event-get-all-items.json getAllItemsFunction, which returns:
Invoking src/handlers/get-all-items.getAllItemsHandler (nodejs14.x)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-nodejs14.x:rapid-1.32.0.
Mounting /home/rob/code/sam-app-2/.aws-sam/build/getAllItemsFunction as /var/task:ro,delegated inside runtime container
Function 'getAllItemsFunction' timed out after 100 seconds
No response from invoke container for getAllItemsFunction
Any idea what could be going on or how to debug this? Thanks.
Any chance the image/lambda makes a call to a database someplace? And does the container running the lambda have the right connection string and/or access? To me it sounds like your function is getting called and then tries to reach something it can't reach.
As for debugging: lots of console.log() statements to narrow down how far your code gets before it runs into trouble.
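One way to dig further is to re-run the invoke with debug logging and, if the handler talks to a local database container, attach the invoke to the same Docker network; the network name below is a placeholder, while the function and event names come from the question:

```shell
# assemble the invoke command with debug logging enabled
CMD="sam local invoke getAllItemsFunction -e ./events/event-get-all-items.json --debug"

# if the handler needs to reach another container (e.g. a local DB),
# share its network (placeholder network name):
# CMD="$CMD --docker-network my-db-network"

echo "$CMD"
```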

Google Cloud Functions deploy - NoSuchKey failure (us.artifacts. sha256 image) after creating delete rules

After realizing I was being charged for storage as a result of Google Cloud Functions deployments, I read this thread and created a 3-day deletion rule for my us.artifacts.{myproject}.appspot.com folder. Now I am trying to deploy an existing function and am getting the below. How can I resolve this? Should I delete the whole image folder?
[0mfailed to export: failed to write image to the following tags: [us.gcr.io/myproject/gcf/us-central1/3a36a5e8-92b5-426e-b230-ba19ffc92ba8:MYFUNCTION_version-64:
GET https://storage.googleapis.com/us.artifacts.myproject.appspot.com/containers/images/sha256:{some long string}?access_token=REDACTED:
unsupported status code 404; body: <?xml version='1.0' encoding='UTF-8'?>
<Error><Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message><Details>No such object: us.artifacts.myproject.appspot.com/containers/images/sha256:{some long string}</Details></Error>]
Edit 1: My deploy command (which has been previously working for months):
gcloud functions deploy MYFUNCTIONNAME --source https://source.developers.google.com/projects/MYPROJECT/repos/MYREPO --trigger-http --runtime nodejs10 --allow-unauthenticated
Edit 2: I have a separate Cloud Function that points to the exact same source repository (but is located in europe-west3), and it updated fine without issue. However, that function was last updated in December, while the failing function was last updated 2 days ago.
Edit 3: Well, in the end I just duplicated the Cloud Function and I am able to update and deploy the new one without issue. I retained the 3 day deletion for the container and this and other functions are updating without issue as well. No idea why this original function kept getting this error.
As recommended in the answer to the other question, it is better to delete the whole bucket. This action destroys all elements and configurations related to the bucket, avoiding issues between Functions, Storage, and Container Registry; if you delete only the containers, some configuration remains and affects further deploys.
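The delete-and-redeploy flow from this answer could be sketched as follows; the bucket, project, and function names come from the question, and the destructive removal is commented out:

```shell
# bucket name from the question
BUCKET=us.artifacts.myproject.appspot.com

# DESTRUCTIVE: removes every cached build image in the bucket
# gsutil -m rm -r "gs://$BUCKET"

# redeploy so Cloud Build produces a fresh image (command from the question):
# gcloud functions deploy MYFUNCTIONNAME \
#   --source https://source.developers.google.com/projects/MYPROJECT/repos/MYREPO \
#   --trigger-http --runtime nodejs10 --allow-unauthenticated
```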

[AWS][Amplify] Invoke function locally crashes with no error

I have just joined a development team, and the project should run in the cloud using Amplify. I have a function called usershandler that I want to run locally. For that, I used:
amplify invoke function usershandler
This is the output I get:
Starting execution...
EVENT: {"httpMethod":"GET","body":"{\"name\": \"Amplify\"}","path":"/users","resource":"/{proxy+}","queryStringParameters":{}}
App started
get All VSM called
Connection to database was a success
null
Result:
{"statusCode":200,"body":"{\"success\":true,\"results\":[]}","headers":{"x-powered-by":"Express","access-control-allow-origin":"*","access-control-allow-headers":"Origin, X-Requested-With, Content-Type, Accept","content-type":"application/json; charset=utf-8","content-length":"29","etag":"W/\"1d-4wD7ChrrlHssGyekznKfKxR7ImE\"","date":"Tue, 21 Jul 2020 12:32:36 GMT","connection":"close"},"isBase64Encoded":false}
Finished execution.
EDIT: Also, when running the invoke command, amplify asks me for a src/event.json, while I've seen it looking for the index.js for some reason??
EDIT 2 [SOLVED]: downgrading @aws-amplify/cli to 4.14.1 seems to solve this :)
Expected behavior: the server should continue running so I can use it.
Actual behavior: it always stops after the "Finished execution." message.
The connection to the DB works fine, and the config.json contains correct values. I don't know why it is acting like this. Has anybody had the same problem?
Have a nice day.
Short answer: you are running the invoke command, which is doing just what it is supposed to do: invoking the lambda function.
If you are looking to get a local API up, then run the following command:
sam local start-api
This will read your template and, based on the endpoints you have set up, run them locally, essentially mocking API Gateway. Read more about it in the official docs here.
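A minimal local session could look like this; the port and route are assumptions based on the SAM defaults, and the actual commands are commented out since they need a template.yaml and Docker:

```shell
# SAM's default local port
PORT=3000

# from the project root, with a template.yaml present:
# sam local start-api --port "$PORT"

# then, in another terminal, exercise one of your routes:
# curl "http://127.0.0.1:$PORT/users"

echo "local API would listen on http://127.0.0.1:$PORT"
```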
Explanation:
This command is part of the AWS Serverless Application Model (AWS SAM), a tool for developing serverless applications. It is essentially an abstraction over AWS CloudFormation. Similarly, Amplify is an abstraction that makes it simple not only to develop and manage the backend but also brings that power to the frontend.
As both of them essentially use CloudFormation templates underneath, you can leverage the capabilities of one tool with the other.
SAM provides a robust set of tools for local development, including a local lambda mocking server, in case you are not using API Gateway.
I use this combination to develop and test my frontend along with my backend, which is written in Go, a language that is, as of now, not as mature as JavaScript as a backend language with Amplify.
