I have no experience with AWS or bot deployment for production, so I'm looking for some suggestions on best practices.
The project is a simple Twitter automation bot written as a node.js application. Currently I am using Cloud9 in AWS to host it, but I feel this is likely not the most effective means.
What I need:
Ability to easily deploy the bot/codebase.
Multiple instances so I can deploy a new instance for each user.
Ease of access to logs and updates.
Usage reporting.
Ability to tie into a front end for users.
I'd like to use AWS if possible to familiarize myself with the platform, but I'm open to any suggestion that gives me an easy workflow.
Current workflow to deploy new bot:
Create Cloud9 EC2 instance
Install dependencies
Git clone from repository
Edit the configuration with the user's access keys
Run bot from console
Leave running in background
This has been very easy thus far, but I just don't know if it's practical. I'd appreciate any advice!
Given that the bot needs to be constantly running (i.e. it can't just be spun up on-demand for a couple minutes, which rules out AWS Lambda) and that each user needs their own, I'd give AWS ECS a try.
Your initial setup will look something like this:
First, create a Docker image to run your bot, and load it into ECR or Docker Hub.
Set up ECS. I recommend using the AWS Fargate launch type so you don't have to manage EC2 instances just to run your containers. You'll want to create your task definition using your bot's Docker image.
Run new tasks as needed using your task definition. This could be done via the AWS API, AWS SDK, in the AWS console, etc.
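For example, with the AWS SDK for JavaScript (v3) you could launch a new bot task for a user roughly like this; the cluster name, task definition, subnet/security group IDs, and container name are placeholders:

// Minimal sketch using the AWS SDK for JavaScript v3 (@aws-sdk/client-ecs).
// Cluster, task definition, subnet/security group IDs, and the container
// name are placeholders -- substitute your own values.
const { ECSClient, RunTaskCommand } = require("@aws-sdk/client-ecs");

const ecs = new ECSClient({ region: "us-east-1" });

async function launchBotForUser(userId, accessKeys) {
  const command = new RunTaskCommand({
    cluster: "twitter-bots",              // your ECS cluster name
    taskDefinition: "twitter-bot:1",      // family:revision of your task definition
    launchType: "FARGATE",
    networkConfiguration: {
      awsvpcConfiguration: {
        subnets: ["subnet-0123456789abcdef0"],
        securityGroups: ["sg-0123456789abcdef0"],
        assignPublicIp: "ENABLED",
      },
    },
    // Pass per-user configuration as environment variables instead of
    // baking it into the image (consider Secrets Manager for real keys).
    overrides: {
      containerOverrides: [
        {
          name: "twitter-bot",            // container name in the task definition
          environment: [
            { name: "BOT_USER_ID", value: userId },
            { name: "TWITTER_ACCESS_TOKEN", value: accessKeys.token },
            { name: "TWITTER_ACCESS_SECRET", value: accessKeys.secret },
          ],
        },
      ],
    },
  });
  const response = await ecs.send(command);
  return response.tasks?.[0]?.taskArn;
}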
Updating the bots would just involve updating your Docker image and task definition, then restarting the tasks so they use the new image.
You should be able to set up logging and monitoring/alarming with CloudWatch for your ECS tasks too.
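If you point the container logs at CloudWatch Logs (via the awslogs log driver in the task definition), you can read them in the console or programmatically; a rough sketch of the latter, assuming a made-up log group name:

// Sketch: pull the last hour of a bot's logs from CloudWatch Logs.
// Assumes the task definition uses the awslogs driver with log group
// "/ecs/twitter-bot" (adjust to whatever you configure).
const {
  CloudWatchLogsClient,
  FilterLogEventsCommand,
} = require("@aws-sdk/client-cloudwatch-logs");

const logs = new CloudWatchLogsClient({ region: "us-east-1" });

async function recentBotLogs() {
  const { events } = await logs.send(
    new FilterLogEventsCommand({
      logGroupName: "/ecs/twitter-bot",
      startTime: Date.now() - 60 * 60 * 1000, // last hour
      filterPattern: "ERROR",                 // optional: only error lines
    })
  );
  for (const event of events ?? []) {
    console.log(new Date(event.timestamp).toISOString(), event.message);
  }
}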
Usage reporting depends on what exactly you want to report. You may get all you need from CloudWatch events/metrics, or you may want to send data from your containers to some storage solution (RDS, DynamoDB, S3, etc.).
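For example, the bot could publish a custom CloudWatch metric each time it performs an action (or write a record to DynamoDB instead); a rough sketch of the metric route, with a made-up namespace and metric name:

// Sketch: publish a custom "TweetsSent" metric per user so usage can be
// graphed and alarmed on in CloudWatch. The namespace and dimension names
// are arbitrary choices, not anything AWS requires.
const {
  CloudWatchClient,
  PutMetricDataCommand,
} = require("@aws-sdk/client-cloudwatch");

const cloudwatch = new CloudWatchClient({ region: "us-east-1" });

async function recordTweetSent(userId) {
  await cloudwatch.send(
    new PutMetricDataCommand({
      Namespace: "TwitterBot",
      MetricData: [
        {
          MetricName: "TweetsSent",
          Dimensions: [{ Name: "UserId", Value: userId }],
          Value: 1,
          Unit: "Count",
          Timestamp: new Date(),
        },
      ],
    })
  );
}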
Tying a front end to the bots depends on how the bots are set up. If they have REST servers listening to a particular port, for example, you'd be able to hit that if they're running on ECS.
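For instance, the bot container could expose a small status endpoint that a front end (behind a load balancer or service discovery) could call; a minimal sketch using Express, with an arbitrary route and port:

// Minimal sketch: a status endpoint inside the bot container that a
// front end could query. The port and route are arbitrary choices.
const express = require("express");

const app = express();

app.get("/status", (req, res) => {
  res.json({
    user: process.env.BOT_USER_ID,
    uptimeSeconds: Math.floor(process.uptime()),
    // ...whatever bot-specific stats you track
  });
});

app.listen(process.env.PORT || 3000);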
Related
I'm not familiar with how cloud integration works, but I have been assigned a task to find documentation on the IBM portal that provides a way to perform specific actions on virtual cloud server (virtual machine) instances, like create, start, stop, delete, restart, upgrade, etc. So far I have had no success finding such documentation. All of this has already been finalized with Node.js APIs for AWS EC2, Alibaba ECS, Azure, Google Cloud, and Oracle Cloud; the only provider we are struggling with is IBM. Previously this was done with Terraform in Go, but now we are shifting to Node.js. Any help would be appreciated.
The available API docs are here: https://cloud.ibm.com/docs?tab=api-docs
JavaScript APIs are not available. The Python, Go, and raw (curl) APIs for VPC are here: https://cloud.ibm.com/apidocs/vpc. The VPC API includes creating instances within a VPC.
Unfortunately, JavaScript is not currently available. You can craft your own calls based on the curl examples or use the "ibmcloud is ACTION" command line. All of the command-line commands have JSON output.
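As a rough illustration of crafting your own based on the curl examples: get an IAM token with your API key, then call the VPC REST endpoints directly from Node.js (18+ for built-in fetch). The region, version date, and instance ID below are placeholders; check the exact values against the linked docs:

// Rough sketch of calling the IBM Cloud VPC REST API from Node.js.
// Region, version date, and instance ID are placeholders -- verify
// against https://cloud.ibm.com/apidocs/vpc before using.
const IAM_URL = "https://iam.cloud.ibm.com/identity/token";
const VPC_URL = "https://us-south.iaas.cloud.ibm.com/v1";
const VERSION = "2023-11-07"; // pick a current version date from the docs

async function getIamToken(apiKey) {
  const res = await fetch(IAM_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ibm:params:oauth:grant-type:apikey",
      apikey: apiKey,
    }),
  });
  const { access_token } = await res.json();
  return access_token;
}

// Post an instance action (start/stop/reboot) against a VPC instance.
async function stopInstance(apiKey, instanceId) {
  const token = await getIamToken(apiKey);
  const res = await fetch(
    `${VPC_URL}/instances/${instanceId}/actions?version=${VERSION}&generation=2`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${token}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ type: "stop" }),
    }
  );
  return res.json();
}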
Currently I'm using the following command line to deploy to different stages from my local machine:
serverless deploy --stage qa
But this command creates a whole new project (API) in the API Gateway console. See my attached link.
https://photos.google.com/share/AF1QipPU6X8Dej7rNq5Ofo1eKfCq1cn6GpsL3GYdZ50yUO_a4quVPao9bllHIvRFA6VkbA?key=WVFDLVQ0cEd6aVB3cVlSY1hYcnBmS1BRT1QtNVVB
Should serverless deploy --stage qa instead create different stages in the APIs -> (API Project) -> Stages section?
I'm kind of confused about which way is correct. I see tutorials doing it both ways; however, to me it makes more sense to create different stages in one project.
Or is there a different command to do what I want?
I would argue the way Serverless is written is a much cleaner way to deploy to different stages. While API Gateway does allow different stages under the same API, that leaves much more room for accidentally doing something you didn't want to do, e.g. accidentally tearing down your production API instead of dev.
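To make that concrete, each serverless deploy --stage <name> produces its own CloudFormation stack and its own API Gateway API (e.g. my-service-dev, my-service-qa). Here is a minimal sketch of a stage-parameterized config, written as a serverless.js (which the framework also accepts) just to keep the examples in Node.js; the service, table, and function names are made up:

// serverless.js -- sketch of a stage-parameterized config. Each
// `serverless deploy --stage <name>` produces its own stack and its own
// API Gateway, e.g. my-service-dev, my-service-qa. Names are made up.
module.exports = {
  service: "my-service",
  provider: {
    name: "aws",
    runtime: "nodejs18.x",
    stage: "${opt:stage, 'dev'}",                  // default stage if --stage is omitted
    environment: {
      TABLE_NAME: "my-table-${opt:stage, 'dev'}",  // per-stage resource names
    },
  },
  functions: {
    hello: {
      handler: "handler.hello",
      events: [{ http: { path: "hello", method: "get" } }],
    },
  },
};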
Also, best practice is to have each stage in its own AWS account. This allows you to better lock down your production environment at an account level, to avoid accidental changes. This is beneficial for all your AWS resources, not just API Gateway.
If you follow best practice and have an AWS account per stage, your problem is moot, as you will have an API Gateway in each of your stage accounts.
If these best practices aren't for you, you can always revert to plain CloudFormation templates to force each stage to be a different deployment under the same API Gateway.
Can anyone tell me how serverless architecture works?
Some people are saying this is the next technology. Does it help with Linux administration?
Serverless is a technology that you can use to create infrastructure as code to work with your cloud provider. An example would be if your company uses Amazon Web Services and you need to create a Lambda function. You can do this via Serverless and include several infrastructure properties, such as a virtual private cloud, which IAM roles to use, creating an S3 bucket, having your Lambda listen to SNS topics, and deploying to multiple environments.
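As a rough sketch of what that can look like in a config (again as a serverless.js to stay in Node.js; every name, ID, and ARN below is a placeholder):

// serverless.js -- sketch tying together the pieces mentioned above:
// a Lambda in a VPC, an IAM permission, an SNS trigger, and an S3 bucket.
// Every name, ID, and ARN here is a placeholder.
module.exports = {
  service: "example-service",
  provider: {
    name: "aws",
    runtime: "nodejs18.x",
    stage: "${opt:stage, 'dev'}",
    iam: {
      role: {
        statements: [
          {
            Effect: "Allow",
            Action: ["s3:PutObject"],
            Resource: "arn:aws:s3:::example-bucket-${opt:stage, 'dev'}/*",
          },
        ],
      },
    },
  },
  functions: {
    processEvent: {
      handler: "handler.processEvent",
      vpc: {
        securityGroupIds: ["sg-0123456789abcdef0"],
        subnetIds: ["subnet-0123456789abcdef0"],
      },
      events: [{ sns: "arn:aws:sns:us-east-1:123456789012:example-topic" }],
    },
  },
  resources: {
    Resources: {
      ExampleBucket: {
        Type: "AWS::S3::Bucket",
        Properties: { BucketName: "example-bucket-${opt:stage, 'dev'}" },
      },
    },
  },
};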
Currently our company uses Amazon Web Services in combination with the HashiCorp stack (Terraform, Vault, etc.), as well as Serverless, to create our IaC quickly.
As far as this being the next technology, I can say that maybe not Serverless specifically, but infrastructure as code is extremely powerful, reusable, fast-failing, and useful.
An example: your workplace has a production environment and a dev environment. You can deploy the same Serverless project to dev and production, and if you interpolate the values properly you have a project that can be deployed to any of your environments.
Is this technology helpful for a Linux admin? I cannot attest to that, as I have only used Serverless to interact with cloud providers; I believe that is what Serverless was created for.
I just deployed my first application on Amazon Elastic Beanstalk and am stuck with one seemingly simple issue.
I have node.js scripts that I use to, for example, migrate the DB schema or populate the RDS database with generated sample data. For Heroku apps I simply use
heroku run <statement>
Is there an equivalent of that for Elastic Beanstalk? What's a good workflow for that?
It looks like the only solution is using good old SSH to connect to the instance(s) and run the statements there. The caveat is that you first need to create a key pair in the EC2 dashboard and refer to that key when you create the Elastic Beanstalk environment; you can't create a key pair while creating the Elastic Beanstalk environment.
I'm working with Atlassian Bamboo on Demand for Continuous Integration and it works great.
Now I'm trying to use the "Deploy" feature, and the problem is that I'm working with Azure (FTP, publish, Git, Mercurial... I really don't care how) and I can't find a "task" that could perform it.
Has anyone achieved this?
I do automated deployments to AWS from Bamboo, but the concept is pretty much the same.
Bamboo has no specific options for deploying to the public cloud, so you have to build or call an existing deployment tool. At the end of the day, Bamboo deployments provide you with metadata about which build has been deployed to which environment, and security over who can do deploys, but it's up to you to make the actual deploy work. Bamboo does give you a totally extensible engine for controlling the "how" via scripting. The deployment engine is basically a cut-down version of the CI engine with a subset of tasks.
I resolved to build our own deployment tooling because it was fairly simple to get started and a worthwhile investment of time, since it will be used often and improved over time. Bamboo gives me authorization and access control, and my scripts give me fine-grained control of my deployments.
I'm assuming you are running a Bamboo agent on a Windows image like me, so PowerShell scripts are your friend. If you're running on Linux you'll want to do the same with Bash.
I have PowerShell scripts controlling my deployments through a controller/agent model.
The controller script is source controlled and maintained in a Mercurial repo; it is pulled by the repository task.
The agent is a PowerShell script wrapped by a simple Web API REST service with a custom authentication mechanism. The agent is set up when an app server instance is provisioned in EC2. We use Puppet for server provisioning.
The controller does the following for a deployment (sketched below):
connects to the VPC
determines the available nodes in my web farm using EC2
selects the first node and sends the node an "upgrade database" command
then proceeds to send an "upgrade app server" command to each node
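To make that flow concrete, here is a rough sketch of the controller logic in Node.js for illustration (my real scripts are PowerShell); the tag filter, agent port, and endpoint paths are assumptions specific to this sketch:

// Rough sketch of the controller flow above, in Node.js for illustration
// (the real implementation is PowerShell). The tag filter, agent port,
// and /upgrade-* endpoints are illustrative assumptions.
const { EC2Client, DescribeInstancesCommand } = require("@aws-sdk/client-ec2");

const ec2 = new EC2Client({ region: "us-east-1" });

async function getWebFarmNodes(environment) {
  const { Reservations } = await ec2.send(
    new DescribeInstancesCommand({
      Filters: [
        { Name: "tag:Role", Values: ["web"] },
        { Name: "tag:Environment", Values: [environment] },
        { Name: "instance-state-name", Values: ["running"] },
      ],
    })
  );
  return (Reservations ?? []).flatMap((r) =>
    r.Instances.map((i) => i.PrivateIpAddress)
  );
}

async function sendAgentCommand(node, path) {
  // The agent is the REST service running on each node; the port and
  // paths here are made up.
  const res = await fetch(`http://${node}:8080/${path}`, { method: "POST" });
  if (!res.ok) throw new Error(`${path} failed on ${node}: ${res.status}`);
}

async function deploy(environment) {
  const nodes = await getWebFarmNodes(environment);
  // First node upgrades the database, then every node upgrades the app server.
  await sendAgentCommand(nodes[0], "upgrade-database");
  for (const node of nodes) {
    await sendAgentCommand(node, "upgrade-appserver");
  }
}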
The logic for doing the deploy is parameterized so it can be re-used for deployments to different environments. I use Bamboo deployment variables to manage feeding parameters for the different environments.
DEV is deployed automatically; test, staging, and prod are all manual click deploys, which are locked down to specific users.
One option I considered, but did not invest the time to look at, is AWS Elastic Beanstalk as a deployment tool. It has a rich API for deploys. On the Azure side, it looks like Web Deploy supports deployment to Azure IIS sites.