How to implement X-Ray in a Node.js project?

I have a Node.js project with Docker and ECS in AWS, and I need to implement X-Ray to get traces, but I haven't been able to get it working yet.
I installed 'aws-xray-sdk' (npm install aws-xray-sdk) and then added
const AWSXRay = require('aws-xray-sdk');
in app.js
Then, before the routes I added
app.use(AWSXRay.express.openSegment('Example'));
and after the routes:
app.use(AWSXRay.express.closeSegment());
I hit some endpoints, but I can't see any traces or data in X-Ray. Do I need to set something up in AWS? I have a default group in X-Ray.
Thanks!

It sounds like you do not have the X-Ray daemon running in your ECS environment. The daemon must run alongside the SDK: the SDK sends trace data to the daemon over UDP port 2000, and the daemon forwards it to the AWS X-Ray service. Read more about the daemon here:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon.html
See how to run the XRay Daemon on ECS via Docker here:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-ecs.html
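Putting that together, here is a minimal sketch of the application side, assuming the daemon runs as a sidecar container in the same ECS task (the sidecar name xray-daemon and the port mapping are assumptions; adjust them to your task definition):
// app.js -- minimal sketch; assumes an X-Ray daemon sidecar named "xray-daemon"
// in the same ECS task, listening on UDP 2000 (name and port are assumptions).
const express = require('express');
const AWSXRay = require('aws-xray-sdk');

// Point the SDK at the daemon; alternatively set the AWS_XRAY_DAEMON_ADDRESS
// environment variable in the container definition.
AWSXRay.setDaemonAddress(process.env.AWS_XRAY_DAEMON_ADDRESS || 'xray-daemon:2000');

const app = express();

app.use(AWSXRay.express.openSegment('Example')); // before the routes

app.get('/ping', (req, res) => res.send('pong'));

app.use(AWSXRay.express.closeSegment()); // after the routes

app.listen(3000);
Also make sure the task role used by the daemon has permission to upload traces, for example the AWSXRayDaemonWriteAccess managed policy.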

You would need to look at either the X-Ray SDK with the daemon/agent, or the OpenTelemetry SDK with the Collector (AWS Distro for OpenTelemetry).
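If you go the OpenTelemetry route, a rough sketch of the tracing bootstrap, assuming an ADOT Collector sidecar that exports to X-Ray and receives OTLP over gRPC on its default port 4317 (the endpoint is an assumption):
// tracing.js -- rough sketch of the OpenTelemetry option; assumes an ADOT
// Collector sidecar reachable via OTLP/gRPC on localhost:4317.
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-grpc');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');

const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4317' }),
  instrumentations: [getNodeAutoInstrumentations()], // auto-instruments Express, HTTP, etc.
});

sdk.start();
Require this file before the rest of the app (e.g. node -r ./tracing.js app.js) so the instrumentation is registered first.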

Related

Does js-ipfs have a readonly gateway server?

When I start my local ipfs node with ipfs daemon, in the cmd I get this:
Gateway (readonly) server listening on /ip4/127.0.0.1/tcp/8080
With this, I can go to 127.0.0.1:8080/ipfs/CID and read files from IPFS.
In my Node.js app, when I run ipfs.create(), I get console logs about swarms, but nothing about a readonly gateway server. I have found out that the ipfs.create() function has a Gateway option that defaults to /ip4/127.0.0.1/tcp/9090. But while my app is running, when I try to retrieve something from 127.0.0.1:9090/ipfs/CID, I get ERR_CONNECTION_REFUSED. Why is that? While the app is running, I scanned my ports and nothing was listening on 9090.
I have found the answer. Yes, js-ipfs has a readonly gateway server, but it does not start implicitly together with the node; you have to use the ipfs-http-gateway package. The package doesn't really have good instructions, but here is how you do it: import the HttpGateway class from the package, pass your ipfs instance to its constructor, and then call .start() on the HttpGateway instance. The .start() call takes the config options from your ipfs instance, looks up the Addresses -> Gateway option (which defaults to /ip4/127.0.0.1/tcp/9090), and starts the gateway on that port. You can read the package source where the HttpGateway class is written, and you'll figure it all out.
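A rough sketch of what that looks like (the exact export shape of ipfs-http-gateway may differ between versions, so treat this as a guide rather than exact code):
// Sketch: start the readonly HTTP gateway alongside an in-process js-ipfs node.
const IPFS = require('ipfs');
const { HttpGateway } = require('ipfs-http-gateway');

async function main () {
  const ipfs = await IPFS.create();       // node config holds Addresses.Gateway,
                                          // defaulting to /ip4/127.0.0.1/tcp/9090
  const gateway = new HttpGateway(ipfs);  // pass the ipfs instance to the constructor
  await gateway.start();                  // binds the readonly gateway to that address
  console.log('readonly gateway listening on 127.0.0.1:9090');
}

main();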

How to setup automatic shutdown for Google Compute Instance?

I'm running a NodeJS app inside a Docker container inside a Container-Optimized OS GCE instance.
I need this instance to shut down and self-delete when its task completes. Only the NodeJS app is aware of the task completion.
I used to achieve this behavior by setting this up as a startup script:
node ./dist/app.js
echo "node script execution finished. Deleting this instance"
export NAME=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/name -H 'Metadata-Flavor: Google')
export ZONE=$(curl -X GET http://metadata.google.internal/computeMetadata/v1/instance/zone -H 'Metadata-Flavor: Google')
gcloud compute instance-groups managed delete-instances my-group --instances=$NAME --zone=$ZONE
I've also used similar setups with additional logic based on the NodeJS app exit code.
How do I do it now?
There are two problems:
I don't know how to pass the NodeJS exit event (preferably with the exit code) up to the startup script. How do I do that?
The Container-Optimized OS GCE instance lacks gcloud. Is there a different way of shutting down an instance?
Google Cloud's health check seems too troublesome and not universal. My app is not a web server, and I would prefer not to install Express or something else just for the sake of handling health checks.
Right now my startup script ends with a docker run ... command. Maybe I should write the shutdown command after that and somehow make docker exit when NodeJS exits?
If you think the health check is the way to go, what would be the lightest setup for a health check given that my app is not a web server?
Try having your app trigger a Cloud Function when it finishes the job.
The Cloud Function can then run a script to delete your VM. See the sample script below:
https://medium.com/google-cloud/start-stop-compute-engine-instance-from-cloud-function-bf9ae5199609
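The linked article covers starting and stopping Compute Engine instances from a Cloud Function; as a rough sketch of the delete variant using @google-cloud/compute (the function name and the request payload shape are assumptions, not the article's code):
// index.js -- hypothetical HTTP-triggered Cloud Function that deletes a VM.
// The payload shape ({ project, zone, instance }) is an assumption; the NodeJS
// app can read its own name/zone from the metadata server and send them here
// when its task completes.
const compute = require('@google-cloud/compute');

const instancesClient = new compute.InstancesClient();

exports.deleteInstance = async (req, res) => {
  const { project, zone, instance } = req.body;
  await instancesClient.delete({ project, zone, instance });
  res.status(200).send(`Deletion requested for ${instance}`);
};
Note that if the VM belongs to a managed instance group (as in the original gcloud command), you would want the group's delete-instances operation instead of a plain instance delete, so the group does not recreate the VM.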

[AWS][Amplify] Invoke function locally crashes with no error

I have just joined a development team, and the project should run in the cloud using Amplify. I have a function called usershandler that I want to run locally. For that, I used:
amplify invoke function usershandler
This is the output I get:
Starting execution...
EVENT: {"httpMethod":"GET","body":"{\"name\": \"Amplify\"}","path":"/users","resource":"/{proxy+}","queryStringParameters":{}}
App started
get All VSM called
Connection to database was a success
null
Result:
{"statusCode":200,"body":"{\"success\":true,\"results\":[]}","headers":{"x-powered-by":"Express","access-control-allow-origin":"*","access-control-allow-headers":"Origin, X-Requested-With, Content-Type, Accept","content-type":"application/json; charset=utf-8","content-length":"29","etag":"W/\"1d-4wD7ChrrlHssGyekznKfKxR7ImE\"","date":"Tue, 21 Jul 2020 12:32:36 GMT","connection":"close"},"isBase64Encoded":false}
Finished execution.
EDIT: Also, when running the invoke command, amplify asks me for a src/event.json, while I've seen it looking for the index.js for some reason.
EDIT 2 [SOLVED]: downgrading @aws-amplify/cli to 4.14.1 seems to solve this :)
Expected behavior: the server should continue running so I can use it.
Actual behavior: it always stops after the "Finished execution" message.
The connection to the DB works fine, and the config.json contains correct values. I don't know why it is acting like this. Has anybody had the same problem?
Have a nice day.
Short answer: you are running the invoke command, which is doing just what it is supposed to do, namely invoking the Lambda function.
If you are looking to get a local API up, then run the following command:
sam local start-api
This will read your template and, based on the endpoints you have set up, run them locally, essentially mocking API Gateway. Read more about it in the official docs.
Explanation:
This command is part of the AWS Serverless Application Model (AWS SAM), a tool for developing serverless applications. It is essentially an abstraction over AWS CloudFormation. Similarly, Amplify is an abstraction that makes it simple not only to develop and manage the backend but also to bring that power to the frontend.
As both of them essentially use CloudFormation templates underneath, you can leverage the capabilities of one tool with the other.
SAM provides a robust set of tools for local development, including running a local Lambda mocking server in case you are not using API Gateway.
I use this combination to develop and test my frontend along with my backend, which is written in Go, a language that is not yet as well supported by Amplify as JavaScript is.

Stackdriver-trace on Google Cloud Run failing, while working fine on localhost

I have a Node server running on Google Cloud Run. Now I want to enable Stackdriver tracing. When I run the service locally, I am able to see the traces in GCP. However, when I run the service on Google Cloud Run, I get an error:
"#google-cloud/trace-agent ERROR TraceWriter#publish: Received error with status code 403 while publishing traces to cloudtrace.googleapis.com: Error: The request is missing a valid API key."
I made sure that the service account has tracing agent role.
First line in my app.js
require('@google-cloud/trace-agent').start();
Running locally, I am using a .env file containing
GOOGLE_APPLICATION_CREDENTIALS=<path to credentials.json>
According to https://github.com/googleapis/cloud-trace-nodejs, these values are auto-detected if the application is running on Google Cloud Platform, so I don't have these credentials in the GCP image.
There are two challenges to using this library with Cloud Run:
Despite the note about auto-detection, Cloud Run is an exception. It is not yet autodetected. This can be addressed for now with some explicit configuration.
Because Cloud Run services only have CPU resources while they are responding to a request, queued-up trace data may not be sent before CPU is withdrawn. This can be addressed for now by configuring the trace agent to flush ASAP:
const tracer = require('@google-cloud/trace-agent').start({
  serviceContext: {
    service: process.env.K_SERVICE || "unknown-service",
    version: process.env.K_REVISION || "unknown-revision"
  },
  flushDelaySeconds: 1,
});
On a quick review I couldn't see how to trigger the trace flush, but the shorter timeout should help avoid some delays in seeing the trace data appear in Stackdriver.
EDIT: While nice in theory, in practice there are still significant race conditions with CPU withdrawal. Filed https://github.com/googleapis/cloud-trace-nodejs/issues/1161 to see if we can find a more consistent solution.

Pushing cloudwatch logs to s3 with aws lambda function

We are logging data to CloudWatch Logs every day. I would like to push this to S3 in batches every hour/day.
Are there any existing Lambda libraries available in Node.js to achieve this?
Logwatch
Configure logwatch in logwatch.config (where you can configure mailTo and mailFrom).
Run logwatch manually:
sudo logwatch --detail high --mailto testmail@mailinator.com --service all --range all
OR
Using Winston / winston-daily-rotate-file, a versatile logging library for Node.js
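If you go the Winston route, a minimal sketch of daily file rotation (the filename pattern and retention below are placeholders):
// Sketch: write application logs to daily-rotated files with
// winston + winston-daily-rotate-file.
const winston = require('winston');
require('winston-daily-rotate-file'); // registers winston.transports.DailyRotateFile

const logger = winston.createLogger({
  transports: [
    new winston.transports.DailyRotateFile({
      filename: 'app-%DATE%.log',
      datePattern: 'YYYY-MM-DD',
      maxFiles: '14d', // keep two weeks of rotated files
    }),
  ],
});

logger.info('application started');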
