I have had a bunch of issues with a few Lambda functions in AWS. I would like to simulate the Lambda environment in order to better troubleshoot what's wrong with my scripts. Unfortunately, the errors in the logs are not very useful; I have posted a few of them here.
I would like to know how I could simulate, in a Docker image or even in an AMI, the exact environment my Lambda function runs in, so I can get more details on my error. What would you suggest?
thanks so much
There are a number of ways to run and debug Lambda functions locally. I'm not sure any of them are 100% representative of the actual Lambda environment, however.
SAM Local is one option provided by AWS; it is built on top of the Serverless Application Model (SAM). Other options include Cloud9 and emulambda.
If your issues are timeouts, then you should be able to detect where the delays are simply by adding more logging and reviewing the resulting CloudWatch Logs. If the cause is high network latency for a given API request or SQL query, then investigate the other end to determine why it responds slowly.
You can try the python-lambda-local package or SAM Local (it's still in beta).
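For example (the file name, handler name, function name and event file below are placeholders for your own), either tool can invoke the handler locally:

    python-lambda-local -f handler -t 30 lambda_function.py event.json

    # or, from a project with a template.yaml, using the SAM CLI
    sam local invoke MyFunction --event event.json

SAM Local runs the function inside a Docker image that mimics the Lambda runtime, so it is usually the closer approximation of the two.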
I have Python code that works fine on my machine. However, when deploying it to GCP with the correct requirements, it does not deploy as it should.
I have resolved all the errors and now I am just left with the error below.
I've not seen this error before and can't seem to solve it. Has anyone else seen it?
As mentioned in a similar thread:
Serverless environments such as App Engine, Cloud Functions and Cloud Run run in a sandbox, similar to gVisor. This sandbox protects the system from malicious calls and blocks some low-level instructions. The instruction used to get the CPU capabilities is one of those, so the resulting warning can be disregarded.
I got the same error when I ran TensorFlow Serving on Cloud Run.
This error has also been discussed in this answer:
The warning is just telling you that OpenBLAS, which is a dependency of pandas, is not able to determine some settings of the Cloud Functions environment, most likely because Cloud Functions run in virtualized environments. I suggest that you just ignore the warning, as it is not an issue in Cloud Functions.
For more information, you can refer to a well-explained answer on a similar issue.
We use MongoDB Atlas, a cloud MongoDB database, for our DB, and Node.js on the backend. I have to run a cron job at 2 AM every day which fetches data from a third-party API and updates some collections in the DB. The client wants us to use AWS, preferably Lambda. Our system runs on an EC2 instance. Any leads? What would be the most efficient solution? It worked fine with 'node-cron' locally, but they want Lambdas, preferably on AWS.
You can do that by attaching a cron (scheduled) trigger event to your AWS Lambda function. You can use the same code that you are running locally.
It will be easiest to use the SAM CLI for the Lambda; it will help you deploy and test it easily. A minimal sketch of such a scheduled function is shown below.
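For illustration only (the function name, runtime, handler and code path are placeholders; the expression runs at 2 AM UTC), a SAM template for the scheduled function could look roughly like this:

    Resources:
      NightlySyncFunction:
        Type: AWS::Serverless::Function
        Properties:
          Runtime: nodejs18.x
          Handler: index.handler
          CodeUri: src/
          Timeout: 300
          Events:
            NightlyTrigger:
              Type: Schedule
              Properties:
                Schedule: cron(0 2 * * ? *)   # every day at 02:00 UTC

The handler itself can be the same fetch-and-update code you already run with node-cron, minus the scheduling part.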
What would be the most efficient solution?
I believe efficiency won't be a challenge either way. There will be a difference in billing: if you only run this code base on an EC2 instance, you have to keep that instance running just to trigger the cron job, whereas AWS Lambda is only charged when the trigger runs the code. There won't be any major difference, but I believe Lambda is better suited for this job.
So, I recommend using AWS Lambda for this job.
You should check this link, which shows the different ways to trigger a cron job in AWS.
Feels like I've searched the entire web for an answer... to no avail. I have a Puppeteer script that works perfectly locally. My local machine is a little unreliable, so I've been trying to push this script to the cloud so that it can run there, but I have no idea where to start. I'm sitting here with an IBM Cloud account and no idea what to do. Can anyone help me out?
Running Puppeteer scripts can be done on any cloud platform that
- exposes a Node.js environment
- enables running a browser (Puppeteer will need to start Chromium)
This could be achieved, for example, using AWS EC2.
AWS Lambda, Google Cloud Functions and IBM Cloud Functions (and similar services) might also work, but they will likely need additional work on your side to get the browser running.
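As a rough sketch of the kind of adjustment typically needed in containerized or serverless environments (the flags and target URL here are illustrative, not requirements of any particular provider), the launch call usually has to disable the Chromium sandbox:

    const puppeteer = require('puppeteer');

    (async () => {
      // flags commonly needed when Chromium runs inside a restricted container
      const browser = await puppeteer.launch({
        headless: true,
        args: ['--no-sandbox', '--disable-setuid-sandbox', '--disable-dev-shm-usage'],
      });
      const page = await browser.newPage();
      await page.goto('https://example.com');
      console.log(await page.title());
      await browser.close();
    })();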
For a step-by-step guide, I would suggest checking out this article and this follow-up.
Also, it might just be easier to look into services like Checkly (disclaimer: I work for Checkly), Browserless and similar (a quick search for something along the lines of "run puppeteer online" will return several of those), which allow you to run Puppeteer checks online without requiring any additional setup. Useful if you are serious about using Puppeteer for testing or synthetic monitoring in the long run.
We are taking over a whole application from another company. They built the entire deployment pipeline, but we still don't have access to it. What we know is that there is a Lambda function triggered by certain SNS messages, all the code is in Node.js, and development is done in VS Code. We also have issues debugging it locally, but the bigger problem is that we need to debug it remotely.
Since I am new to AWS services, I'd really appreciate it if somebody could help me with this.
Is it necessary to open a port? How is it possible to connect to a Lambda? Do we need to set up Serverless? There are many unresolved questions.
I don't think there is a way to debug a Lambda function remotely. Your best bet is to download the code to your local machine, set up the same environment variables you have configured on your Lambda function, and take it from there (a sketch of how to fetch both follows).
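For example, with the AWS CLI (the function name is a placeholder), you can fetch a download link for the deployed code bundle and dump the configured environment variables:

    # presigned URL to download the deployed code package
    aws lambda get-function --function-name my-sns-function --query 'Code.Location' --output text

    # the environment variables configured on the function
    aws lambda get-function-configuration --function-name my-sns-function --query 'Environment.Variables'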
Remember, at the end of the day Lambda is just a container running your code, and AWS doesn't allow SSH or any direct connection to those containers. In your case you should be able to debug locally as long as you have the same environment variables. There are other Lambda-specific details as well, but since this is code that is already running, you should be able to find the issue.
Hope it makes sense.
Thundra (https://www.thundra.io/aws-lambda-debugger) has live/remote debugging support for AWS Lambda through its native IDE plugins (VSCode and IntelliJ IDEA).
The way AWS has you 'remote' debug is to execute the Lambda locally through Docker while it proxies requests to the cloud for you, using the AWS Toolkit. You have a Lambda running on your local computer via Docker that can access resources in the cloud, such as databases, APIs, etc. You can step through and debug it using editors like VS Code.
I use SAM with a template.yaml. This way, I can pass event data to the handler, reference dependency layers (shared code libraries), and have a deployment manifest to create a CloudFormation stack (a release instance with history and resource management).
Debugging can be a bit slow as it compiles, deploys to Docker and invokes, but it allows step-through debugging and variable inspection.
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-using-debugging.html
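As a rough illustration (the function name, event file, port and source folder are placeholders), the SAM CLI can start the function paused on a debug port, which you then attach to from your editor:

    sam local invoke MyFunction -e event.json -d 5858

A matching attach entry in the configurations array of .vscode/launch.json could look something like:

    {
      "name": "Attach to SAM local",
      "type": "node",
      "request": "attach",
      "address": "localhost",
      "port": 5858,
      "localRoot": "${workspaceFolder}/src",
      "remoteRoot": "/var/task"
    }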
While far from ideal, any console-printing actions would likely get logged to CloudWatch, which you could then access to go through printed data.
For local debugging, there are many GitHub projects with Dockerfiles from which you can build a Docker container locally, just like AWS does when your Lambda is invoked.
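AWS also publishes Lambda base images that bundle a Runtime Interface Emulator, so (as a sketch, with the handler name and image tag as placeholders, and assuming your code and node_modules sit in the current directory) you can mount your code and invoke it over HTTP locally:

    docker run --rm -p 9000:8080 -v "$PWD":/var/task:ro public.ecr.aws/lambda/nodejs:18 index.handler

    # in another terminal, invoke the function through the emulator
    curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'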
How should I be running my DB migrations in an AWS serverless application? In a traditional Node.js app, I usually have npm start run sequelize db:migrate first. But with Lambda, how should I do that?
My DB will be in a private subnet. I was wondering if CodeBuild would be able to do it? I was also considering having a Lambda function run the migration ... not sure if that's recommended though.
There are a number of ways to achieve this. You are actually on the right track with CodeBuild; at least, there shouldn't be anything wrong with taking that approach.
Since your DB is in a private subnet, you will need to configure CodeBuild to access your VPC. Once you have that configured, it's a simple matter of allowing access from the CodeBuild security group to your database.
You might want to set up this whole thing as a CodePipeline, possibly with multiple buildspec files for different runs of CodeBuild. That way you can have a CodePipeline that looks like:
Source -> CodeBuild (test) -> Approval -> CodeBuild (migrations) -> Lambda
Theoretically, you could also create a Lambda function that does the migration, and trigger that as needed. If the migrations take a long time, you could also use AWS Batch to run them. But using CodeBuild as part of a deployment pipeline makes a lot of sense.
Lambda might not be the right tool for this task because of its limited execution time (currently capped at 15 minutes).
You are better off using a custom script run on CodeBuild, with sequential CodeBuild actions in your CodePipeline: the first CodeBuild run completes the migration, and on its completion you execute the next CodeBuild run, which deploys your Lambdas. If the DB migration fails, you can stop the CodePipeline.
Your buildspec will look something like this:
version: 0.2

phases:
  pre_build:
    commands:
      # DB migration command, e.g. with Sequelize as in the question
      - npx sequelize-cli db:migrate
    finally:
      # cleanup command
      - echo "migration phase finished"
  build:
    commands:
      # deploy Lambdas command, e.g. via SAM or the Serverless Framework
      - sam deploy --no-confirm-changeset
    finally:
      # cleanup command
      - echo "deploy phase finished"
Both approaches (Lambda and CodeBuild) are OK; it depends on your continuous integration/deployment flow. For example, if you need to run those migrations in several environments, CodeBuild fits better.
If you don't have a CI/CD mechanism, you can just run the migration from a Lambda, as Lambda is very flexible in terms of memory (you just need to be careful with the maximum execution time), or use an already-made package like this one (this is a suggestion and depends on your DB).
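If you do go the Lambda route, a rough sketch of a migration handler (this assumes Sequelize plus the umzug package, a DATABASE_URL environment variable, and sequelize-cli-style migration files in a migrations/ folder bundled with the function; all of these are assumptions, not part of the question) might look like:

    const { Sequelize } = require('sequelize');
    const { Umzug, SequelizeStorage } = require('umzug');

    exports.handler = async () => {
      // the Lambda must share the VPC/subnets with the database for this to connect
      const sequelize = new Sequelize(process.env.DATABASE_URL);

      const umzug = new Umzug({
        migrations: {
          glob: 'migrations/*.js',
          // adapt sequelize-cli style migrations (up(queryInterface, Sequelize)) to umzug v3
          resolve: ({ name, path, context }) => {
            const migration = require(path);
            return {
              name,
              up: async () => migration.up(context, Sequelize),
              down: async () => migration.down(context, Sequelize),
            };
          },
        },
        context: sequelize.getQueryInterface(),
        storage: new SequelizeStorage({ sequelize }),
        logger: console,
      });

      // run all pending migrations and report which ones were applied
      const executed = await umzug.up();
      return { migrated: executed.map((m) => m.name) };
    };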
As a last option, if your process is really heavy and/or does a lot of read/write operations, you could also try running it on an AWS ECS service, which can scale up while you run the migrations and, after finishing, return to its defined minimum size.