How is it possible to debug an AWS Lambda function remotely? - node.js

We are taking over a whole application from another company. They built the entire deployment pipeline, but we still don't have access to it. What we do know is that there is a Lambda function triggered by certain SNS messages, all the code is in Node.js, and development is done in VS Code. We also have issues debugging it locally, but the bigger problem is that we need to debug it remotely.
Since I am new to AWS services, I'd really appreciate it if somebody could help me with this.
Is it necessary to open a port? How is it possible to connect to a Lambda? Do we need to set up Serverless? Many unresolved questions.

I don't think there is a way to debug a Lambda function remotely. Your best bet is to download the code to your local machine, set up the same environment variables you have on your Lambda function, and take it from there.
Remember, at the end of the day a Lambda is just a container running the code for you. AWS doesn't allow SSH or any other connection into those containers. In your case you should be able to debug it locally as long as you have the same environment variables. There are other Lambda-specific things as well, but since you have the running code, you should be able to find the issue.
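For example, a minimal local harness might look like the sketch below; the file names, environment values and event shape are assumptions, so adjust them to match the real handler and the SNS payload your function actually receives. Running it with node --inspect-brk lets you attach the VS Code debugger.

    // run-local.js - invoke the handler locally with a fake SNS event (names/values are hypothetical)
    process.env.TABLE_NAME = 'my-table';     // mirror the env variables configured on the Lambda
    process.env.STAGE = 'dev';

    const { handler } = require('./index');  // the module that exports the Lambda handler

    const fakeSnsEvent = {
      Records: [
        {
          EventSource: 'aws:sns',
          Sns: { Message: JSON.stringify({ hello: 'world' }) },
        },
      ],
    };

    handler(fakeSnsEvent)
      .then((result) => console.log('result:', result))
      .catch((err) => console.error('handler failed:', err));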
Hope it makes sense.

Thundra (https://www.thundra.io/aws-lambda-debugger) has live/remote debugging support for AWS Lambda through its native IDE plugins (VSCode and IntelliJ IDEA).

The way AWS has you 'remote' debug is to execute the Lambda locally through Docker while it proxies requests to the cloud for you, using the AWS Toolkit. You have a Lambda running on your local computer via Docker that can access resources in the cloud, such as databases, APIs, etc. You can step through and debug it using editors like VS Code.
I use SAM with a template.yaml. This way, I can pass event data to the handler, reference dependency layers (shared code libraries) and have a deployment manifest to create a CloudFormation stack (a release instance with history and resource management).
Debugging can be a bit slow as it compiles, deploys to Docker and invokes, but it allows step-through debugging and variable inspection.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
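As a rough sketch of what this looks like, the handler is just a normal Node.js module and the SAM CLI drives it; the function name, event file and debug port below are assumptions from a typical setup, not from the original project:

    // src/handler.js - invoked locally via the SAM CLI, e.g.:
    //   sam build
    //   sam local invoke MyFunction --event events/sns-event.json --debug-port 5858
    // then attach the VS Code debugger to port 5858 for step-through debugging.
    exports.handler = async (event) => {
      // Each SNS record carries the published message as a JSON string
      for (const record of event.Records || []) {
        const message = JSON.parse(record.Sns.Message);
        console.log('received message:', message);
      }
      return { statusCode: 200 };
    };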

While far from ideal, anything printed to the console (console.log, console.error) gets written to the function's CloudWatch Logs, which you can then access to go through the printed data.
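As a simple sketch, logging the raw event and wrapping the processing in a try/catch makes those CloudWatch entries much more useful; the handler name and the placeholder comment below stand in for the real processing logic:

    exports.handler = async (event) => {
      // Log the full incoming event so it appears in CloudWatch Logs
      console.log('incoming event:', JSON.stringify(event, null, 2));
      try {
        // ... the existing processing logic goes here ...
        return { ok: true };
      } catch (err) {
        // Log the error before rethrowing so the stack trace is captured as well
        console.error('handler error:', err);
        throw err;
      }
    };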
For local debugging, there are many GitHub projects with Dockerfiles from which you can build a Docker container locally, just like AWS does when your Lambda is invoked.

Related

Running CI/CD pipelines in AWS Lambda

As a project, I am trying to create a CI/CD pipeline running inside an AWS Lambda application.
The problem I am facing is that AWS Lambda is missing some tools (for example xargs) that certain applications (for example Gradle) require to run properly:
/tmp/repo/gradlew: line 234: xargs: command not found
Or even more interestingly:
install: apt-get: command not found
How can I install the required tools to build the applications from within an AWS Lambda container?
How can I utilize layers to speed up those containers?
In other words, I assume I need to register that certain CLI tools are present in mounted layers.
On Windows, I would do this by (ab)using the PATH environment variable, but what is the recommended way to do this on Linux?
And how can I tell tools to look for their dependencies in those layers, to avoid errors like the one below? (A rough sketch of what I have in mind follows the error.)
ld.gold: error: cannot find -lcurl
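A hedged sketch of that idea, assuming the layer ships its binaries under bin/ and its libraries under lib/ (Lambda extracts layer contents to /opt, so they end up in /opt/bin and /opt/lib), and assuming the repository has already been cloned to /tmp/repo:

    // Hypothetical Node.js handler that runs a CLI tool with layer-provided dependencies
    const { execFileSync } = require('child_process');

    exports.handler = async () => {
      const env = {
        ...process.env,
        // Prepend the layer directories so tools and their shared libraries are found
        PATH: `/opt/bin:${process.env.PATH}`,
        LD_LIBRARY_PATH: `/opt/lib:${process.env.LD_LIBRARY_PATH || ''}`,
      };
      // /tmp is the only writable location in the Lambda filesystem
      const output = execFileSync('/tmp/repo/gradlew', ['build'], { cwd: '/tmp/repo', env });
      return output.toString();
    };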
The best option as far as I can tell is to create a Docker image containing all the software that you require and provide this to the AWS Lambda service.
There is extensive documentation on how to run Docker containers in AWS Lambda:
https://docs.aws.amazon.com/lambda/latest/dg/images-create.html
Personal note: While I like the idea of a challenge or proof of concept, I'd recommend using one of the many CI/CD services out there instead of building your own; I cannot think of any upside to this. AWS itself offers CI/CD solutions like AWS CodePipeline, etc.
You might want to have a look at the following documentation:
https://aws.amazon.com/getting-started/hands-on/set-up-ci-cd-pipeline/

Google Cloud Function not deploying on GCP Cloud functions - Function failed on loading user code

I have Python code that works fine on my machine. However, when deploying it on GCP with the correct requirements, it does not deploy as it should.
I have resolved all errors and now I am just left with the error below.
I've not seen this error before and can't seem to solve it. Has anyone else seen it?
As mentioned in a similar thread:
Serverless environments such as App Engine, Cloud Functions and Cloud Run run in a sandbox similar to gVisor. This sandbox protects the system from malicious calls and blocks some low-level instructions. This one, which tries to get the CPU capabilities, should be discarded.
I got the same when I ran Tensorflow Serving on Cloud Run.
This error has also been discussed in this answer:
The warning is just telling you that OpenBLAS, which is a dependency of Pandas, is not able to determine some settings of the Cloud Functions environment, most likely because Cloud Functions runs on virtualized environments. I suggest that you just ignore the warning, as it is not an issue in Cloud Functions.
For more information, you can refer to a well-explained answer on a similar issue.

How do I run puppeteer on a server/in the cloud

Feels like I've searched the entire web for an answer... to no avail. I have a Puppeteer script that works perfectly locally. My local machine is a little unreliable, so I've been trying to push this script to the cloud so that it can run there. But I have no idea where to start. I'm sitting here with an IBM Cloud account and no idea what to do. Can anyone help me out?
Running Puppeteer scripts can be done on any cloud platform that:
- exposes a Node.js environment
- enables running a browser (Puppeteer will need to start Chromium)
This could be achieved, for example, using AWS EC2.
AWS Lambda, Google Cloud Functions and IBM Cloud Functions (and similar services) might also work but they might need additional work on your side to get the browser running.
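On a plain VM this usually comes down to launching Chromium headless with a couple of server-friendly flags; a minimal sketch (the flags shown are the ones commonly needed when running without a display or as root in a container, not settings specific to any provider):

    // run.js - minimal Puppeteer script for a headless server environment
    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch({
        headless: true,
        // Commonly required when running inside containers or as root without a Chromium sandbox
        args: ['--no-sandbox', '--disable-setuid-sandbox'],
      });
      const page = await browser.newPage();
      await page.goto('https://example.com');
      console.log('page title:', await page.title());
      await browser.close();
    })();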
For a step-by-step guide, I would suggest checking out this article and this follow-up.
Also, it might just be easier to look into services like Checkly (disclaimer: I work for Checkly), Browserless and similar (a quick search for something along the lines of "run puppeteer online" will return several of those), which allow you to run Puppeteer checks online without requiring any additional setup. Useful if you are serious about using Puppeteer for testing or synthetic monitoring in the long run.

Use Google App Engine or Google Cloud Compute VM to Test Run My App?

I'm moving my Three.js app and its customized node.js environment, which I've been running on my local machine to Google Cloud. I want to test things out there, and hopefully soon get some early alpha testing going with other people.
I'm not sure which is the wiser way to go... to upload the repo I've been running locally as-is onto a VM which users would then access via the VM's external IP until I get a good name to call this app... or merge my local node.js environment with what's available via the Google App Engine and run it on GAE.
Issues I'm running into with the Linux VM approach... I'm not sure how to do the equivalent on the VM of what I've been doing locally. In Windows PowerShell I cd into the app directory and then enter node index.js. I'm assuming, with this method of deployment, that I can get the app running as soon as the browser hits the external IP. I should mention too that the app will allow users to save content as well as upload images, and eventually 3D models and JSON datasets.
Issues I'm running into with the App Engine approach: it looks like I only have access to a Linux-based command line, and have to install all the Node.js modules manually. Meanwhile I have a bunch of files to upload, both the server-side Node files and all the frontend stuff. I don't see where to upload those files, and ultimately what I'd like is access to a visual, editable file-tree interface, as I have in Windows and FileZilla, so I can swap files in and out, etc. Alternatively I suppose I could import a repo from GitHub? GitHub would be fine as long as I can visually see what's happening. Is there a visual interface for the file structure available in GAE somewhere? Am I missing something?
I went through the GAE "Hello World" tutorial and that worked fine, but was left scratching my head afterward regarding how to actually see and edit the guts of the tutorial app, or even where to look for the files.
So first off, I want to determine what's the better approach, and then if possible, determine how to make the experience of getting my app up there and running a more visual, user-friendly experience.
Thanks.
There are many things to consider when choosing how to run an app, but my instinct for your use case is to simply use a VM on GCE. The most compelling reason for this is that it's the most similar thing to what you have now. You can SSH into the machine and run nohup node index.js & (or node index.js inside tmux/screen if you prefer) and it will start the app and not stop it when you log out of SSH. You can use SCP / SFTP with whatever GUI client you want to upload files. You don't have to learn anything new! If you wanted to, you could even use a Windows VM (although I think you have to pay a little more than for a comparable Linux VM due to the licensing fees).
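If part of the question is whether the browser really will reach the app via the VM's external IP, the server just needs to bind to all interfaces and the chosen port has to be allowed in the VM's firewall rules; a minimal sketch (the port and response are placeholders for whatever index.js actually serves):

    // index.js - minimal Node.js server reachable through the VM's external IP
    const http = require('http');

    const PORT = process.env.PORT || 8080;

    const server = http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from the GCE VM\n');
    });

    // Bind to 0.0.0.0 so connections from outside the VM are accepted;
    // the port also needs to be opened in the VPC firewall rules for the instance.
    server.listen(PORT, '0.0.0.0', () => {
      console.log(`listening on port ${PORT}`);
    });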
That said, the other way is arguably more "correct" by modern development standards, but it will involve a lot more learning that will prevent you from getting your app running somewhere other than your laptop in the short term:
First, you'll need to learn about Docker and stateless containers, which is basically what your app runs inside of on AppEngine.
Next, you'll need to learn how to hook up a separate stateful service (database, file server, ...) to your app's container so you can store your files, etc. in it, and then probably rewrite your app somewhat to use it to store stuff.
Next, you'll probably want some way to automatically deploy this from code instead of manually doing it, which gets you into build systems, package managers, artifact storage, continuous integration systems, and on and on and on.
This latter path is certainly what you should choose for a long-running production service if you work with a big team of developers -- but that doesn't mean that it's necessarily the right path for your project today. If you don't care about scaling up automatically, load balancing between nodes, redundant copies of your app running in different regions in case there's a natural disaster, etc., then go with the easy way for now, and you can learn new ways to improve the service when they're actually needed.

How to create a mirror environment of lambda AWS

I have had a bunch of issues with a few Lambda functions in AWS. I would like to simulate the Lambda environment in order to better troubleshoot what's wrong with my scripts. Unfortunately, the errors in the logs are not very useful. I have posted a few of them here.
I would like to know, then, how I could simulate in a Docker image, or even in an AMI, the exact environment my Lambda function is running in, to catch more details on my error. What would you suggest?
Thanks so much.
There are a number of ways to run and debug Lambda functions locally. I'm not sure any of them are 100% representative of the actual Lambda environment, however.
SAM Local is one option provided by AWS, built on top of the Serverless Application Model (SAM). Other options include Cloud9 and emulambda.
If your issues are timeouts, then you should be able to detect where the delays are simply by adding more logging and reviewing the resulting CloudWatch Logs. If the cause is high network latency for a given API request or SQL query, then investigate the other end to determine why it responds slowly.
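As a sketch of that idea in Node.js, timing each external call makes the slow step stand out in CloudWatch Logs; the two sleep calls below are placeholders for the real SQL query and API request:

    // Hypothetical handler instrumented with timers; console.time/timeEnd output lands in CloudWatch Logs
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

    exports.handler = async (event) => {
      console.time('db-query');
      await sleep(100);                // placeholder for the real SQL query
      console.timeEnd('db-query');

      console.time('api-call');
      await sleep(100);                // placeholder for the real API request
      console.timeEnd('api-call');

      return { ok: true };
    };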
You can try the python-lambda-local package, or use SAM Local (it's still in beta).
