I need guidance on using AWS SDK credentials in a production Node.js app.
What is the right way to do this? Everything I've researched says to always use a shared credentials file for AWS credentials, per this link: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-node-credentials-shared.html
So I'm confused about the approach. Do I need to create that file at the Linux path specified in that link on the EC2 instance?
I created a new IAM role for S3 and attached it to a specific EC2 instance, but how do I access S3 from my code? I deployed my app and it still gives me an access denied error when accessing the S3 service.
Do I still need to include the credentials file discussed in the link above? And do I still need to initialize S3 like this?
const aws = require('aws-sdk');

const s3 = new aws.S3({
  accessKeyId,           // AWS access key ID
  secretAccessKey,       // the option is secretAccessKey, not secretKey
  region: bucketRegion,  // the option is region, not bucketRegion
});
Please guide me on how to deploy my Node.js app without breaking its access to AWS services.
Following the AWS Well-Architected Framework, the best solution is to assign a role with the required permissions to the EC2 instance you are going to use.
You should refrain from adding credentials to the application directly, as they are not needed in most cases.
Please take a look at IAM roles for Amazon EC2 for AWS's guidance on achieving that.
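As a minimal sketch (the bucket name and region are placeholders, not values from your setup), this is what the client looks like once the instance role supplies the credentials; the SDK resolves them automatically from the instance metadata service, so no keys appear in code:

const aws = require('aws-sdk');

// No access keys here: the attached IAM role provides temporary credentials
// via the instance metadata service.
const s3 = new aws.S3({ region: 'us-east-1' }); // placeholder region

s3.listObjectsV2({ Bucket: 'my-example-bucket' }, (err, data) => { // placeholder bucket
  if (err) {
    console.error('S3 access failed:', err); // an AccessDenied here points at the role's policy
  } else {
    console.log('Objects:', data.Contents.map((obj) => obj.Key));
  }
});

If you still get access denied with this setup, double-check the role's S3 policy and that the instance profile is actually attached to the instance.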
Related
We have a Node application running as a Docker container in AWS Elastic Beanstalk. The application has access to a PostgreSQL RDS instance. We want to use AWS Secrets Manager so that our container can access RDS without the credentials being exposed in code.
Once we create the secrets in AWS Secrets Manager, sample code is generated (in Java, JavaScript, etc.). Do we add that code to our source code and attach the Secrets Manager policy to both aws-elasticbeanstalk-ec2-role and aws-elasticbeanstalk-service-role?
Please advise how this can be done.
We have created the secrets in Secrets Manager. We have not proceeded further, as the application is up and running and any changes may affect it.
As this is our first time, we need help.
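For reference, the JavaScript sample that the Secrets Manager console generates boils down to a GetSecretValue call along these lines (a sketch only; the secret name and region are placeholders, and the field names assume an RDS-style JSON secret):

const aws = require('aws-sdk');

const client = new aws.SecretsManager({ region: 'us-east-1' }); // placeholder region

client.getSecretValue({ SecretId: 'my-rds-credentials' }, (err, data) => { // placeholder secret name
  if (err) {
    console.error('Could not retrieve secret:', err);
  } else {
    // RDS-style secrets are stored as a JSON string.
    const { username, password, host, port, dbname } = JSON.parse(data.SecretString);
    // Use these values to build the PostgreSQL connection instead of hard-coding credentials.
  }
});

The call is authorized through whatever IAM role the application runs under; on Elastic Beanstalk that is typically the instance profile (aws-elasticbeanstalk-ec2-role), so that is the role that needs the secretsmanager:GetSecretValue permission.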
I have been using a .env file to run my app on my local machine. However, when deploying the app to an AWS EC2 instance, I am at a complete loss on how to set up the environment variables, as I am a complete beginner with AWS. Please help me set up the environment variables.
Based on the comments.
Since .env is used successfully on your local workstation, it can also be used on the EC2 instance.
Just be careful not to store any sensitive information in .env when using public repositories, as you may leak your passwords or access keys.
For storing secrets on AWS, the recommended way is Secrets Manager or the SSM Parameter Store. Also, any permissions your app requires to access these or other AWS services should be provided through an instance role, not by hard-coding AWS credentials into the app or instance.
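As a rough sketch of the Parameter Store approach (the parameter name and region below are placeholders), the app can read a SecureString parameter at startup instead of committing the value to a .env file:

const aws = require('aws-sdk');

const ssm = new aws.SSM({ region: 'us-east-1' }); // placeholder region

ssm.getParameter(
  { Name: '/myapp/prod/DB_PASSWORD', WithDecryption: true }, // placeholder parameter name
  (err, data) => {
    if (err) {
      console.error('Could not read parameter:', err);
    } else {
      process.env.DB_PASSWORD = data.Parameter.Value;
      // Start the app only after the required configuration has been loaded.
    }
  }
);

This works without any keys in the code as long as the instance role allows ssm:GetParameter (plus kms:Decrypt if a customer managed KMS key encrypts the parameter).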
I want to create and manage AWS WorkSpaces with Terraform.
I have searched the Terraform documentation, but I cannot find any documentation or sample code.
Support for AWS WorkSpaces in Terraform is still a work in progress. You can track the progress in the GitHub issue.
From the looks of it, there are no API calls yet from AWS:
AWS still hasn't provided any API functionality to register a directory for workspaces, making this impossible. Been following this https://forums.aws.amazon.com/thread.jspa?threadID=237801, for which AWS has stated there is no ETA.
I'm trying to deploy a Node/Express app to AWS Lambda using the Serverless Framework. It seems to work, but after several runs of serverless deploy, I get a DNS_PROBE_FINISHED_NXDOMAIN error when trying to access the endpoint.
If I change to another AWS region, it works, but it happens again after a few runs of serverless deploy.
The output of the serverless deploy command always looks fine; no error is shown.
I guess this question lacks information, but I don't know what else I need to provide.
I have no experience with AWS or bot deployment for production, so I'm looking for some suggestions on best practices.
The project is a simple Twitter automation bot written as a node.js application. Currently I am using Cloud9 in AWS to host it, but I feel this is likely not the most effective means.
What I need:
Ability to easily deploy the bot/codebase.
Multiple instances so I can deploy a new instance for each user.
Ease of access to logs and updates.
Usage reporting.
Ability to tie into a front end for users.
I'd like to use AWS if possible to familiarize myself with the platform, but I'm open to any suggestion that gives me an easy workflow.
Current workflow to deploy new bot:
Create Cloud9 EC2 instance
Install dependencies
Git clone from repository
Edit configuration with users' access keys
Run bot from console
Leave running in background
This has been very easy thus far, but I just don't know if it's practical. I'd appreciate any advice!
Given that the bot needs to be constantly running (i.e. it can't just be spun up on-demand for a couple minutes, which rules out AWS Lambda) and that each user needs their own, I'd give AWS ECS a try.
Your initial setup will look something like this:
First, create a Docker image to run your bot, and push it to ECR or Docker Hub.
Set up ECS. I recommend using AWS Fargate so you don't have to manage a VPC and EC2 instances just to run your containers. You'll want to create your task definition using your bot Docker image.
Run new tasks as needed using your task definition. This could be done via the AWS API, AWS SDK, in the AWS console, etc.
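As a rough example of that last step with the Node.js SDK (the cluster, task definition, subnet, container name, and user value below are placeholders I've made up, not anything from your setup), starting a new bot task for a user could look like this:

const aws = require('aws-sdk');

const ecs = new aws.ECS({ region: 'us-east-1' }); // placeholder region

ecs.runTask(
  {
    cluster: 'twitter-bots',        // placeholder cluster name
    taskDefinition: 'twitter-bot',  // placeholder task definition family
    launchType: 'FARGATE',
    count: 1,
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ['subnet-0123456789abcdef0'], // placeholder subnet
        assignPublicIp: 'ENABLED',
      },
    },
    // Per-user configuration can be passed as container overrides instead of
    // baking access keys into the image.
    overrides: {
      containerOverrides: [
        {
          name: 'bot', // placeholder container name from the task definition
          environment: [{ name: 'TWITTER_USER', value: 'example_user' }],
        },
      ],
    },
  },
  (err, data) => {
    if (err) console.error('Failed to start bot task:', err);
    else console.log('Started task:', data.tasks[0].taskArn);
  }
);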
Updating the bots would just involve updating your Docker image and task definition, then restarting the tasks so they use the new image.
You should be able to set up logging and monitoring/alarming with CloudWatch for your ECS tasks too.
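For the logging piece, the usual approach is the awslogs log driver in the container definition, which ships the bot's stdout/stderr to a CloudWatch Logs group. A sketch of registering such a task definition (family, image, role ARN, and log group are placeholders):

const aws = require('aws-sdk');

const ecs = new aws.ECS({ region: 'us-east-1' }); // placeholder region

ecs.registerTaskDefinition(
  {
    family: 'twitter-bot',                 // placeholder family name
    requiresCompatibilities: ['FARGATE'],
    networkMode: 'awsvpc',
    cpu: '256',
    memory: '512',
    // The execution role lets Fargate pull the image and write to CloudWatch Logs.
    executionRoleArn: 'arn:aws:iam::123456789012:role/ecsTaskExecutionRole', // placeholder ARN
    containerDefinitions: [
      {
        name: 'bot',
        image: '123456789012.dkr.ecr.us-east-1.amazonaws.com/twitter-bot:latest', // placeholder image
        essential: true,
        logConfiguration: {
          logDriver: 'awslogs',
          options: {
            'awslogs-group': '/ecs/twitter-bot', // placeholder; create this log group first
            'awslogs-region': 'us-east-1',
            'awslogs-stream-prefix': 'bot',
          },
        },
      },
    ],
  },
  (err, data) => {
    if (err) console.error('Failed to register task definition:', err);
    else console.log('Registered:', data.taskDefinition.taskDefinitionArn);
  }
);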
Usage reporting depends on what exactly you want to report. You may get all you need from CloudWatch events/metrics, or you may want to send data from your containers to some storage solution (RDS, DynamoDB, S3, etc.).
Tying a front end to the bots depends on how the bots are set up. If they have REST servers listening to a particular port, for example, you'd be able to hit that if they're running on ECS.