I'm trying to deploy a Node/Express app to AWS Lambda using Serverless. It seems to work, but after several executions of serverless deploy, I get a DNS_PROBE_FINISHED_NXDOMAIN error when trying to access the endpoint.
If I change to another AWS region, it works, but it happens again after a few more executions of serverless deploy.
The output of the serverless deploy command always looks fine; no errors are shown.
I guess this question lacks information, but I don't know what else I need to provide.
Some of my web apps fail to deploy or break from time to time, and I end up getting a 503 error. I took WEBSITE_RUN_FROM_PACKAGE out and the app ran fine, but I have a lot of apps, both web apps and function apps, and I want to know why this happened. Do I still need to use this setting or not? Is WEBSITE_RUN_FROM_PACKAGE causing these apps to break, or is there another way to fix them? Does something need to be updated on my side through settings, etc.? I deploy all of my apps through Azure Pipelines builds. This is bothering me a lot, as I don't want my pipelines to break, especially when I send out releases. Any insight is much appreciated.
WEBSITE_RUN_FROM_PACKAGE is the recommended way to deploy Function Apps if you want to deploy using ZIP deploy.
If your function returns 503 Service Unavailable while running, check the request-response time, because there is a time limit for request processing in an Azure Functions HTTP request: 230 seconds, as documented in this MS Doc. In that case, either increase the timeout value, move to a higher hosting plan, or use the async pattern with Azure Durable Functions for long-running tasks.
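For the long-running case, here is a minimal sketch of that async pattern in Node.js, assuming the durable-functions package and the usual bindings declared in function.json; the function names are hypothetical placeholders:

const df = require("durable-functions");

// HTTP starter: kicks off the orchestration and returns 202 immediately
// with status-query URLs, instead of holding the HTTP request open past
// the 230-second limit.
module.exports = async function (context, req) {
  const client = df.getClient(context);
  const instanceId = await client.startNew("longRunningOrchestrator", undefined, req.body);
  return client.createCheckStatusResponse(context.bindingData.req, instanceId);
};

// The orchestrator (a separate function in its own folder) then coordinates
// the long-running work, e.g.:
//   module.exports = df.orchestrator(function* (context) {
//     return yield context.df.callActivity("doLongRunningWork", context.df.getInput());
//   });

The caller polls the status URL returned in the 202 response until the work completes.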
If you are getting the 503 Service Unavailable error post-deployment, there are mainly two causes to check:
Check whether your function host is down or in a restarting state.
Check whether memory consumption exceeded your hosting plan's limit (Consumption plan Functions), and also check the troubleshooting steps given in SO 71658591.
Updated Answer
I can see a bug raised earlier in the GitHub Azure Functions repo with a similar scenario: user #qJake was getting a 503 when deploying Azure Functions through Azure DevOps with the WEBSITE_RUN_FROM_PACKAGE setting. The solutions that resolved the issue are given in ticket #11444.
I have a Node.js app which I deployed to AWS Amplify, and everything went well (provision, build, and deploy), but when I access the Amplify-generated URL the app shows the following message:
There’s no page at this address
Check the URL and try again, or use the search bar to find what you need.
So the question is: where or how do I check the Amplify error logs/messages to see where the problem is?
I need guidance on using AWS SDK credentials in a production Node.js app.
What is the right way of doing this? My research says to always use a shared credentials file for AWS credentials, per this link: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-node-credentials-shared.html
So I'm confused about the approach. Do I need to create that file at the Linux path specified in that link on the EC2 VM?
I made a new IAM role for S3 and attached it to a specific EC2 instance, but how do I access S3 from code? I deployed my app and it's still giving me an Access Denied error when accessing the S3 service.
Do I still need to include the credentials file as discussed in the link given above? And do I still need to initialize S3 like this?
const aws = require("aws-sdk");

const s3 = new aws.S3({
  accessKeyId,       // static credentials; the v2 SDK option names are
  secretAccessKey,   // accessKeyId, secretAccessKey, and region
  region,
});
Please guide me on how to deploy a Node.js app without breaking access to AWS services.
Following the AWS Well-Architected Framework, the best solution would be to assign a role with the required permissions to the EC2 instance you are going to use.
You should refrain from adding credentials to the application directly, as they are not needed in most cases.
Take a look at IAM roles for Amazon EC2 for AWS's guidance on achieving that.
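For illustration, here is a minimal sketch of what your S3 code looks like once the instance role is attached; the region and bucket name are placeholders. Note that no credentials are passed at all:

const AWS = require("aws-sdk");

// No accessKeyId/secretAccessKey here: on EC2, the SDK falls back to the
// temporary credentials provided by the attached IAM instance role via the
// instance metadata service.
const s3 = new AWS.S3({ region: "us-east-1" }); // placeholder region

s3.listObjectsV2({ Bucket: "example-bucket" })  // placeholder bucket name
  .promise()
  .then((res) => console.log(res.Contents.map((o) => o.Key)))
  .catch(console.error);

If you still get Access Denied with this setup, check that the role's policy actually grants the S3 actions your code performs on that bucket.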
Can anyone tell me how serverless architecture works? Some people are saying this is the next technology. Is it helpful for Linux administration?
Serverless is a technology you can use to create infrastructure as code for your cloud provider. An example: if your company uses Amazon Web Services and you need to create a Lambda function, you can do this via Serverless and include several infrastructure properties, such as a virtual private cloud, which IAM roles to use, creating an S3 bucket, having your Lambda listen to SNS topics, and deploying to multiple environments.
Currently our company uses Amazon Web Services in combination with the HashiCorp stack (Terraform, Vault, etc.), as well as Serverless, to create our IaC quickly.
As far as this being the next technology: maybe not Serverless specifically, but infrastructure as code is extremely powerful, reusable, fast-failing, and useful.
An example: your workplace has a production environment and a dev environment. You can deploy the same Serverless project to dev and production, and if you interpolate the values properly you have a project that can be deployed to any of your environments, as in the sketch below.
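As a concrete sketch, here is roughly what such a project definition might look like, assuming the Serverless Framework's JavaScript config format (serverless.js; YAML is equally common). Every service, handler, topic, and bucket name is a hypothetical placeholder:

// serverless.js -- a minimal infrastructure-as-code definition
module.exports = {
  service: "example-service",
  provider: {
    name: "aws",
    runtime: "nodejs18.x",
    stage: "${opt:stage, 'dev'}", // interpolated per environment at deploy time
  },
  functions: {
    worker: {
      handler: "handler.worker",          // exported function in handler.js
      events: [{ sns: "example-topic" }], // subscribe the Lambda to an SNS topic
    },
  },
  resources: {
    Resources: {
      ExampleBucket: {
        Type: "AWS::S3::Bucket", // raw CloudFormation for extra resources
      },
    },
  },
};

The same project then deploys to either environment with serverless deploy --stage dev or serverless deploy --stage production.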
Is this technology helpful for a Linux admin? I cannot attest to that, as I have only used Serverless to interact with cloud providers; I believe that is what it was created for.
I have no experience with AWS or bot deployment for production, so I'm looking for some suggestions on best practices.
The project is a simple Twitter automation bot written as a node.js application. Currently I am using Cloud9 in AWS to host it, but I feel this is likely not the most effective means.
What I need:
Ability to easily deploy the bot/codebase.
Multiple instances so I can deploy a new instance for each user.
Ease of access to logs and updates.
Usage reporting.
Ability to tie into a front end for users.
I'd like to use AWS if possible to familiarize myself with the platform, but open to any suggestion that I can incorporate an easy workflow.
Current workflow to deploy new bot:
Create Cloud9 EC2 instance
Install dependencies
Git clone from repository
Edit configuration with users' access keys
Run bot from console
Leave running in background
This has been very easy thus far, but I just don't know if it's practical. Appreciate any advice!
Given that the bot needs to be constantly running (i.e. it can't just be spun up on demand for a couple of minutes, which rules out AWS Lambda) and that each user needs their own instance, I'd give AWS ECS a try.
Your initial setup will look something like this:
First, create a Docker image to run your bot, and load it into ECR or Docker Hub.
Set up ECS. I recommend using AWS Fargate so you don't have to manage a VPC and EC2 instances just to run your containers. You'll want to create your task definition using your bot Docker image.
Run new tasks as needed using your task definition. This could be done via the AWS API, the AWS SDK, the AWS console, etc.; see the sketch below.
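For example, here is a minimal sketch of launching one bot task per user with the AWS SDK for JavaScript (v2). The cluster, task definition, subnet, security group, and container names are hypothetical placeholders:

const AWS = require("aws-sdk");
const ecs = new AWS.ECS({ region: "us-east-1" }); // placeholder region

async function launchBotForUser(userId) {
  const result = await ecs.runTask({
    cluster: "twitter-bots",           // hypothetical cluster name
    taskDefinition: "twitter-bot:1",   // hypothetical task definition
    launchType: "FARGATE",
    count: 1,
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-0123456789abcdef0"],    // placeholder subnet
        securityGroups: ["sg-0123456789abcdef0"], // placeholder security group
        assignPublicIp: "ENABLED",
      },
    },
    overrides: {
      containerOverrides: [
        {
          name: "bot", // container name from the task definition
          environment: [{ name: "BOT_USER_ID", value: userId }], // per-user config
        },
      ],
    },
  }).promise();

  return result.tasks[0].taskArn;
}

Passing per-user configuration through container environment overrides like this means one task definition can serve every user.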
Updating the bots would just involve updating your Docker image and task definition, then restarting the tasks so they use the new image.
You should be able to set up logging and monitoring/alarming with CloudWatch for your ECS tasks too.
Usage reporting depends on what exactly you want to report. You may get all you need from CloudWatch events/metrics, or you may want to send data from your containers to some storage solution (RDS, DynamoDB, S3, etc.).
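If custom metrics are enough, a minimal sketch of publishing one from a bot container might look like this (the namespace, metric, and dimension names are hypothetical):

const AWS = require("aws-sdk");
const cloudwatch = new AWS.CloudWatch({ region: "us-east-1" }); // placeholder region

// Record one unit of bot activity, tagged by user, as a custom metric.
async function reportTweetSent(userId) {
  await cloudwatch.putMetricData({
    Namespace: "TwitterBot",
    MetricData: [
      {
        MetricName: "TweetsSent",
        Dimensions: [{ Name: "UserId", Value: userId }],
        Unit: "Count",
        Value: 1,
      },
    ],
  }).promise();
}

You can then graph or alarm on those metrics per user in the CloudWatch console.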
Tying a front end to the bots depends on how the bots are set up. If they have REST servers listening to a particular port, for example, you'd be able to hit that if they're running on ECS.