One-off commands on AWS Elastic Beanstalk - node.js

I just deployed my first application on AWS Elastic Beanstalk and am stuck on one seemingly simple issue.
I have node.js scripts that I use to, e.g., migrate the DB schema or populate the RDS database with generated sample data. For Heroku apps I simply use
heroku run <statement>
Is there an equivalent of that on Elastic Beanstalk? What's a good workflow for this?

It looks like the only solution is to use good old SSH to connect to the instance(s) and run the statements there. One caveat: you need to create a Key Pair in the EC2 Dashboard beforehand and reference that key when you create the Elastic Beanstalk environment; you can't create a key pair while creating the environment itself.
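For example, once the key pair is attached (a minimal sketch; the key file name, instance address, and script path are placeholders, and it assumes an Amazon Linux instance, where Elastic Beanstalk deploys the Node.js app to /var/app/current):
ssh -i ~/.ssh/my-eb-keypair.pem ec2-user@<instance-public-dns>
cd /var/app/current        # Elastic Beanstalk deploys the app here
node scripts/migrate.js    # run the one-off script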

Related

Inject secrets (API keys, etc.) into a Node.js project

I'm migrating a Node.js project from GCP to DigitalOcean.
I'm running this Node.js code on a Kubernetes cluster in DigitalOcean. I'm using GitHub Actions to automatically build a Docker image and deploy it to my Kubernetes cluster. Everything works as expected, but I have a question.
On GCP, I used Secret Manager to inject secrets (database credentials, API keys, ...) into my Node.js project. I am looking for a similar solution on DigitalOcean. I have found SecretHub, which looks interesting, but I'm unable to sign up.
I have also found this from 1Password Connect, but it looks like I have to set up a server?
Does anyone know an interesting tool or trick to securely inject secrets into my Node.js code?
Yes, you can check out HashiCorp Vault, which is widely used with Kubernetes as a secrets solution to inject configuration and variables into Kubernetes deployments.
It's easy to set up and integrate with Kubernetes.
HashiCorp Vault: https://www.hashicorp.com/products/vault
The Enterprise version is paid, but the open-source version, which comes with a UI and login panel, will cover all your needs. It is safe, secure, easy to integrate, and fine to use in production.
You can run it as one simple pod (Deployment) on the Kubernetes cluster.
You can follow this demo with a minikube setup: https://learn.hashicorp.com/tutorials/vault/kubernetes-minikube?in=vault/kubernetes
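Once the Vault agent injector is enabled on your pod (via annotations, as the tutorial above shows), it renders secrets to files under /vault/secrets/ inside the container. A minimal Node.js sketch for reading one (the secret name "database-config" is a hypothetical placeholder):
const fs = require("fs");

// The Vault agent injector writes each rendered secret to /vault/secrets/<name>.
// "database-config" is a hypothetical secret name; use your own.
function loadVaultSecret(name) {
  return fs.readFileSync(`/vault/secrets/${name}`, "utf8");
}

const dbConfig = loadVaultSecret("database-config");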

How should I pass secret environment variables to a Docker application on Elastic Beanstalk?

I'm currently running a Node server deployed as a Docker application on AWS Elastic Beanstalk, and I have several env variables that should be kept hidden, like the database URL and the JWT secret. Passing them through the Elastic Beanstalk application config would be optimal, but it doesn't work because I'm trying to access them within a Docker container, which doesn't receive those env variables.
I've seen a lot of answers to this, but it seems to me that they all involve putting the actual variable values in places like Dockerrun.aws.json or the Dockerfile, which would add the secret values to the repo, exposing them in the public GitHub repo that I deploy from through CodePipeline. So, how should I pass these secret environment variables to the Docker container? Is there a way to reference the variables in my Dockerfile or docker-compose.yml files and pass them down? Is there some other Elastic Beanstalk config I can use? Any suggestions would be greatly appreciated.
Is there some other Elastic Beanstalk config I can use?
Yes. Generally, you would set up your secrets in AWS Secrets Manager or the SSM Parameter Store. Then your application, regardless of whether it runs in Docker, on EB, or anywhere else, would use the AWS SDK to get the secrets directly from these secret vaults.
This is not only good practice, but you also don't expose your secrets before they are actually needed. You only access them just before they are really used, which reduces the chance of a leak.
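For example, with the AWS SDK for JavaScript v3 (a minimal sketch; it assumes the @aws-sdk/client-secrets-manager package is installed, the instance/task role is allowed secretsmanager:GetSecretValue, and a hypothetical secret named "myapp/production" stores the values as JSON):
const { SecretsManagerClient, GetSecretValueCommand } = require("@aws-sdk/client-secrets-manager");

const client = new SecretsManagerClient({ region: "us-east-1" });

// Fetch and parse a secret at startup, just before it is needed.
async function getSecret(secretId) {
  const { SecretString } = await client.send(
    new GetSecretValueCommand({ SecretId: secretId })
  );
  return JSON.parse(SecretString); // assumes the secret is stored as a JSON object
}

getSecret("myapp/production").then(({ DATABASE_URL, JWT_SECRET }) => {
  // wire these into your server config instead of baking them into the image
});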

Is there any possible way to manage IBM virtual cloud instances with Node.js?

I'm not familiar with how cloud integration works, but I have been assigned a task to find documentation on the IBM portal that describes how to perform specific actions on virtual cloud server (virtual machine) instances, like create, start, stop, delete, restart, upgrade, etc. So far I have had no success in finding such documentation. All of this has already been finalized with Node.js APIs for AWS EC2, Alibaba ECS, Azure, Google Cloud, and Oracle Cloud; the only service we are struggling with is IBM. Previously this was done with Terraform in Golang, but now we are shifting to Node.js. Any help would be appreciated.
The available API docs are here: https://cloud.ibm.com/docs?tab=api-docs
JavaScript APIs are not available. The Python, Go, and raw (curl) APIs for VPC are here: https://cloud.ibm.com/apidocs/vpc. The VPC API includes creating instances within a VPC.
Unfortunately, JavaScript is not currently available. You can craft your own client based on the curl examples or use the "ibmcloud is ACTION" command line. All of the command-line commands have JSON output.
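If you need this from Node.js today, one stopgap is to shell out to the CLI and parse its JSON output (a sketch; the exact output flag is assumed and may differ per command):
const { execFile } = require("child_process");

// List VPC instances by wrapping the "ibmcloud is" CLI, which supports JSON output.
function listInstances() {
  return new Promise((resolve, reject) => {
    execFile("ibmcloud", ["is", "instances", "--output", "JSON"], (err, stdout) => {
      if (err) return reject(err);
      resolve(JSON.parse(stdout));
    });
  });
}

listInstances().then((instances) => console.log(instances));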

Deploying bot instances in AWS

I have no experience with AWS or bot deployment for production, so I'm looking for some suggestions on best practices.
The project is a simple Twitter automation bot written as a node.js application. Currently I am using Cloud9 in AWS to host it, but I feel this is likely not the most effective means.
What I need:
Ability to easily deploy the bot/codebase.
Multiple instances so I can deploy a new instance for each user.
Ease of access to logs and updates.
Usage reporting.
Ability to tie into a front end for users.
I'd like to use AWS if possible to familiarize myself with the platform, but open to any suggestion that I can incorporate an easy workflow.
Current workflow to deploy new bot:
Create Cloud9 EC2 instance
Install dependencies
Git clone from repository
Edit configuration with users' access keys
Run bot from console
Leave running in background
This has been very easy thus far, but I just don't know if it's practical. I appreciate any advice!
Given that the bot needs to be constantly running (i.e. it can't just be spun up on-demand for a couple minutes, which rules out AWS Lambda) and that each user needs their own, I'd give AWS ECS a try.
Your initial setup will look something like this:
First, create a Docker image to run your bot, and load it into ECR or Docker Hub.
Set up ECS. I recommend using AWS Fargate so you don't have to manage a VPC and EC2 instances just to run your containers. You'll want to create your task definition using your bot Docker image.
Run new tasks as needed using your task definition. This could be done via the AWS API, AWS SDK, in the AWS console, etc.
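For instance, launching a per-user bot task with the AWS SDK for JavaScript v3 might look like this (a sketch; the cluster, task definition, container name, subnet, and environment variable are hypothetical placeholders):
const { ECSClient, RunTaskCommand } = require("@aws-sdk/client-ecs");

const ecs = new ECSClient({ region: "us-east-1" });

async function launchBotFor(userId) {
  return ecs.send(new RunTaskCommand({
    cluster: "bot-cluster",            // placeholder cluster name
    taskDefinition: "twitter-bot",     // uses the latest active revision
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-0123456789abcdef0"], // placeholder subnet
        assignPublicIp: "ENABLED",
      },
    },
    // Pass per-user config as a container environment override.
    overrides: {
      containerOverrides: [{
        name: "bot", // container name from the task definition
        environment: [{ name: "TWITTER_USER_ID", value: userId }],
      }],
    },
  }));
}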
Updating the bots would just involve updating your Docker image and task definition, then restarting the tasks so they use the new image.
You should be able to set up logging and monitoring/alarming with CloudWatch for your ECS tasks too.
Usage reporting depends on what exactly you want to report. You may get all you need from CloudWatch events/metrics, or you may want to send data from your containers to some storage solution (RDS, DynamoDB, S3, etc.).
Tying a front end to the bots depends on how the bots are set up. If they have REST servers listening to a particular port, for example, you'd be able to hit that if they're running on ECS.

How to deploy MongoDB on EC2 using an API?

For a project in Node.js I want to deploy a MongoDB instance on Amazon EC2 using the Amazon API or something like that. Is that possible? I have found nothing about it.
Thanks for your time.
You have many options:
Vagrant with the AWS provider
Terraform, as mentioned
CloudFormation
the AWS CLI (and user data): $ aws ec2 run-instances help
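As a sketch of the equivalent SDK route in Node.js (AWS SDK v3; the AMI ID, key pair name, and install script are placeholders, and MongoDB's own docs cover the repo setup for your distribution):
const { EC2Client, RunInstancesCommand } = require("@aws-sdk/client-ec2");

const ec2 = new EC2Client({ region: "us-east-1" });

// User data runs on first boot; these install steps are illustrative only.
const userData = `#!/bin/bash
yum install -y mongodb-org
systemctl enable --now mongod
`;

async function launchMongoInstance() {
  return ec2.send(new RunInstancesCommand({
    ImageId: "ami-0123456789abcdef0", // placeholder AMI for your region
    InstanceType: "t3.micro",
    MinCount: 1,
    MaxCount: 1,
    KeyName: "my-keypair", // placeholder key pair
    UserData: Buffer.from(userData).toString("base64"), // EC2 expects base64
  }));
}

launchMongoInstance().then((res) => console.log(res.Instances[0].InstanceId));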
