There is this great repository with example implementations of different serverless scenarios.
Right now I'm struggling with the combination of AppSync and Amazon RDS. I tried the standalone RDS and the AppSync examples provided in the repository, and both work like a charm.
But obviously there are many differences and difficulties when you combine these technologies. I used the schema, resolvers, and handler functions from the rds directory and combined them with the AppSync Lambda implementation. I adjusted the mapping templates and updated the serverless.yml file.
I could successfully deploy the whole AppSync service and all resources without any errors, and I'm able to access the GraphQL endpoint from GraphiQL and run my queries. But when I try it from the AppSync console, I get null as a response. I suspect it has something to do with the mapping templates, but I'm not quite sure.
Has anybody got any suggestions or maybe a working example of this specific combination?
I finally managed to figure out a working implementation for this specific setup, which I want to share with all of you.
Check out my serverless-graphql-appsync-rds repository on GitHub and leave me some feedback!
Note that this repository contains just the source code without any explanations. I'll add better documentation in the near future.
Good afternoon all, I have been running into a weird issue with the serverless CLI tool over the last week. My team and I have an authorizer that we deploy separately from our APIs; when we deploy our APIs, we attach said authorizer to the API Gateway. All is well there. The issue comes when we go to clean up a test API: when we run serverless remove on an API that uses the authorizer, the authorizer itself also gets removed, CloudFormation stack and all. I am very confused as to why removing one of our APIs removes a separately deployed authorizer. I was wondering if anyone could enlighten me as to why this may be happening; I don't believe I saw anything in the documentation about whether this is intended behavior.
We have been doing work on feature branches for our APIs, and any time someone removes their test instance from our AWS account, it also removes the authorizer. I was under the impression that running serverless remove in a given project only removes the resources that the project itself spun up, not other things that it interfaces with.
Thanks in advance!
This was resolved: we were using the same name for both services, so when one deployed it overwrote the other in CloudFormation.
I need to mock all AWS services (for example: EC2, S3, Redshift, Lambda, DynamoDB, etc.) in Python.
I am using the pytest framework for writing test cases, and I found the "pytest-localstack" plugin for mocking AWS services.
I also found a few more tools, like moto and LocalStack.
I also know that boto3 can be used to interact with the AWS cloud.
I feel that "pytest-localstack" is the best fit for my requirements, but please let me know whether I can go ahead with it or whether I should use other tools.
Thanks for your help.
There are two well-known Python mocking libraries:
https://github.com/spulec/moto and https://github.com/garnaat/placebo
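To give a sense of how moto works, here's a minimal pytest-style sketch. It assumes moto's 4.x-style per-service decorators (newer moto releases replace these with a single mock_aws decorator), and the bucket name is made up:

import boto3
from moto import mock_s3

@mock_s3
def test_bucket_listing():
    # Inside the decorated test, every boto3 call hits moto's
    # in-memory backend instead of real AWS.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="my-test-bucket")
    names = [b["Name"] for b in s3.list_buckets()["Buckets"]]
    assert names == ["my-test-bucket"]

Run it with pytest as usual; no AWS credentials or network access are needed.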
In order to use MongoDB on my Node.js AWS EC2 instance, do I simply install MongoDB and create a database within the instance via the command line after logging in via SSH?
In other words, do I simply create a DB in the EC2 instance for my web app just as I would locally on my machine?
Just from long (and, at the beginning, painful) experience with EC2 and MongoDB, here are some gotchas.
I am assuming from your question that you are just starting out, so I am going to assume a minimal setup:
If you install MongoDB on a server with access to the Internet, then make sure you also apply MongoDB roles to your DB (see the pymongo sketch after this list). Do not, I repeat, do not leave it open to the world. Admin and read/write roles are critical here, and the MongoDB docs will help you. BTW, even if it is totally secure behind a firewall and other such things, always use roles. They are your last line of defense.
Study and understand exactly how security groups work, in order to limit inbound and outbound traffic.
If you can, use an Elastic IP. It will save you many headaches if you move servers, not the least of which is that your IP will not change.
When you gear up to a front-facing Internet server with Mongo behind it, be it with sharding, clusters, etc., read up on the NAT Gateway. It is confusing at first, but the NAT Gateway (not the NAT instance) is a decent setup in one configuration or another.
Take snapshots or complete images of your environment when you change it. This is great for backup, and also when you move to a more robust server, it will save you a great deal of work.
If you have not already, try using MongoBooster or RoboMongo. They will help you immensely with your Mongo work.
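To make the roles point concrete, here is a minimal pymongo sketch (the user names, passwords, and database name are all hypothetical), assuming MongoDB is running locally with auth enabled:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")

# First create a user administrator in the admin database.
client.admin.command(
    "createUser", "siteAdmin",
    pwd="use-a-strong-secret-here",
    roles=[{"role": "userAdminAnyDatabase", "db": "admin"}],
)

# Then a least-privilege user that can only read/write the app's database.
client.myapp.command(
    "createUser", "appUser",
    pwd="another-strong-secret",
    roles=[{"role": "readWrite", "db": "myapp"}],
)

The application then connects as appUser, never as the admin.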
Good luck and enjoy!
The actual AWS implementation of MongoDB is DocumentDB, which, from what I can tell, is built for compatibility with the open-source MongoDB 3.6 API, so newer MongoDB features might not be supported.
An interesting article comparing DocumentDB with MongoDB Atlas (MongoDB's cloud solution):
https://medium.com/@michaelrbock/nosql-showdown-mongodb-atlas-vs-aws-documentdb-5dfb00317ca2
In the end, if you really want MongoDB on AWS, my opinion is that you should just install it on an EC2 machine; I've tried it via DocumentDB and some MongoDB commands don't work. Or choose AWS's own NoSQL solution, DynamoDB, instead. DocumentDB just seems to be up there to compete with MongoDB Atlas's cloud solution, or for companies that use a dedicated MongoDB and want to move to AWS.
You have several alternatives. To answer your question: yes, you can do it that way. But there is also an official guide by Amazon to set up a MongoDB cluster on AWS.
Also, if you only need a NoSQL database, you should check out DynamoDB, developed by Amazon. That would eliminate the need for an EC2 instance for the database. For more info, check the official docs.
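To give a feel for what that buys you, here is a minimal boto3 sketch (the players table and its player_id key are hypothetical, and the table is assumed to already exist); there is no database server to install, patch, or secure:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("players")  # hypothetical table keyed on player_id

# Write and read an item; DynamoDB is fully managed, so these calls
# go straight to the service with no EC2 instance in between.
table.put_item(Item={"player_id": "p1", "level": 3})
item = table.get_item(Key={"player_id": "p1"})["Item"]
print(item["level"])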
I have written a real-time multiplayer game and am currently writing its server in NodeJS. I want my game to have login, levelling up, etc., so I need to have a database. This is the first time I am deploying something and I am mostly self-taught, so please correct me if I am mixing things up. Since this is my first trial, I do not want to make much commitment right away, so I am looking for free options only. And since this should be a real-time game, I need relatively fast server responses. That is why I am looking for the easiest database and server provider that would do, and I am aware that with those restrictions I have limited choices and functionality.
As far as I have read online, Heroku seems to be my simplest option for a server (that is why I started writing in NodeJS). However, it seems like there is no free database service, since all options on https://devcenter.heroku.com/articles/heroku-postgres-plans have a monthly fee. I did not want to use Google App Engine since I am new (it certainly is not described as beginner-friendly).
So I found AWS following the "Free Cloud Database Service for home development" post; it seems like I could use Amazon Web Services for both the server and the database. However, most posts I have encountered suggest Google App Engine or Heroku, with little mention of AWS. Is this because I am mixing concepts up, or does AWS have drawbacks that I am not aware of? Do you think it is a good idea to use AWS for both the server and the database? Is it possible to use Heroku as the server while using AWS as the database, or do you have any other suggestion?
Note: Sorry for the question bombardment but those are all related and I am sort of lost in this topic so I had to ask...
Use AWS EC2 for the server and RDS for the database. The reason people use Heroku is that it deploys to a custom URL very quickly (it's easy to set up). Setting up AWS requires some knowledge of how servers work, but it's not that complicated (and the free tier covers small apps). Best of luck!
I am looking at using Amazon cloud services (EC2, S3, etc.) for hosting. I've been looking at the JSON metadata that can be specified to configure various instances, and the complexity has me concerned. Is there a DSL that will generate valid JSON metadata and, more importantly, validate the entries?
Unfortunately, I drew a blank after searching for this recently. I'm using Amazon Web Services CloudFormation (is that the JSON metadata you're talking about?).
There are a couple of issues with CloudFormation JSON files:
I'm at well over 1,500 lines and it's impossible to read,
You can't express everything the API gives you, notably in the area of Virtual Private Clouds,
There are lots of bugs that are taking a long time to fix: Elastic Load Balancers losing HTTPS information, for example.
So I've been using straight-up API calls in Scala using the Java API. It's actually really nice.
The Java API has a flavor of "setters" starting with "with" that return "this", so they can be chained. In Scala, you can use these to act like a poor man's DSL. So you can do things like:
// 'as' is an AmazonAutoScaling client and 'group' an existing
// AutoScalingGroup, both defined elsewhere in our code.
val updateRequest = new UpdateAutoScalingGroupRequest()
  .withAutoScalingGroupName(group.getAutoScalingGroupName)
  .withAvailabilityZones(subnetAZsOfOurVPC)
  .withVPCZoneIdentifier(subnetNamesOfOurVPC)
as.updateAutoScalingGroup(updateRequest)
Other things are easy to do in Scala with the Java API. For example, to group all your Subnets by VPC in a Map, simply do:
// getSubnets returns a java.util.List; scala.collection.JavaConverters' asScala turns it into a Scala collection that supports groupBy.
val subnetsByVPC = ec2.describeSubnets(new DescribeSubnetsRequest).getSubnets.asScala.groupBy(_.getVpcId)
In case anyone is still looking for the AWS CloudFormation DSL - we've been using the Ruby DSL for CloudFormation:
https://github.com/bazaarvoice/cloudformation-ruby-dsl
This neat project provides a tool to convert your existing CloudFormation template(s) to the Ruby DSL
It will generate valid JSON output
Validating Ruby template entries is similar to validating a regular CloudFormation template (see cfn-validate-template)
Your template becomes Ruby code, so it's easy to have reusable modules (DRY)
You can define local variables
You can have comments in your DSL template
Greatly improved readability
Greatly reduced DSL template size
The CloudFormation request template body size limits are annoying - we have to upload our large CloudFormation templates to S3, then create/update stacks using their S3 URLs.
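For anyone hitting the same size limit, here is a minimal boto3 sketch of that S3 workaround (bucket, key, and stack names are made up); it also shows the programmatic counterpart of the cfn-validate-template check mentioned above:

import boto3

s3 = boto3.client("s3")
cfn = boto3.client("cloudformation")

# Upload the oversized template to S3 first...
s3.upload_file("big-template.json", "my-template-bucket", "big-template.json")
template_url = "https://my-template-bucket.s3.amazonaws.com/big-template.json"

# ...then validate and create the stack from its S3 URL instead of
# passing the template body inline.
cfn.validate_template(TemplateURL=template_url)
cfn.create_stack(StackName="my-big-stack", TemplateURL=template_url)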
There is now, though I haven't used it yet: Coffin, a CoffeeScript DSL for CloudFormation.
If you're not talking about CloudFormation but instead about more general API usage, then the nicest interface I've found is AWS's own aws-sdk Ruby gem. Unlike the other SDKs that they publish, which are quite nicely done but crude make-client/make-request/get-response/look-at-result affairs, the Ruby SDK wraps a nicer domain model over the top, so you interact via collections at a higher level of abstraction.
It also has quite nice performance features, in that you can cache responses to save on round-trips if you know you don't need fresh responses.
I have CloudFormation templates upwards of 3,000 lines. I have found that putting comments in the JSON helps tremendously! You just have to strip them out before using the template. There is a validator that will validate the template and strip out the comments: http://cloudformation-validation.com/
No, as of Feb 2022, AWS still does not provide any domain-specific language for infrastructure as code. They have nothing similar to Azure's Bicep or Terraform's HCL.
This really surprises me, as I generally think of AWS as being more expensive, but ahead of the curve, compared to other major competitors (Azure and GCP).
However, CloudFormation now allows both JSON and YAML formats. A slight improvement? IMHO, not really, when you have an entire repo that represents your entire cloud stack. If you're using AWS, use Terraform to manage IaC.