I'm working on a Django (2.1) project hosted on Google Cloud Platform, with a PostgreSQL (9.6) database of roughly 7 GB.
The documentation doesn't cover this specific version of PostgreSQL, so I'm stuck on the endpoint configuration needed to connect the old database and perform the instance replication with AWS DMS (Database Migration Service).
I've followed this tutorial, but it gives no details about the endpoint configuration. There is nothing in the documentation either (I've spent a lot of time searching through it), only coverage of a few other specific databases like Oracle and MySQL.
I need to know how to configure the Source and Target endpoints of the instance on AWS DMS, so I can connect my database on GCP and start the replication.
I've found my answer by trial and error.
Actually the configuration is pretty straightforward, once I realized that I had not created the RDS instance first:
RDS - First you need to create the DB instance that will host your database. After creating it you can see its endpoint and port, e.g. endpoint your-database.xxxxxxxxxxxx.sa-east-1.rds.amazonaws.com, port 5432;
DMS - In the Database Migration Service panel, go to Replication Instances and create a new one. Set the VPC to the one you've created, or the default one if it works for you;
Source Endpoint - Configure it with the Google Cloud Platform IP set in your Django project's settings.py. The source endpoint reaches your DB on GCP through that IP;
Target Endpoint - Set this one to the RDS endpoint address and port from step 1 (see the CLI sketch after these steps);
Test connection.
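If you prefer to script the endpoint setup, the AWS CLI can do the same thing. The sketch below is only illustrative: the endpoint identifiers, IP address, credentials and database name are placeholders, and it assumes the GCP database is plain PostgreSQL reachable on port 5432.
aws dms create-endpoint --endpoint-type source --engine-name postgres \
  --endpoint-identifier gcp-postgres-source --server-name 203.0.113.10 \
  --port 5432 --username dbuser --password dbpassword --database-name mydb
aws dms create-endpoint --endpoint-type target --engine-name postgres \
  --endpoint-identifier rds-postgres-target \
  --server-name your-database.xxxxxxxxxxxx.sa-east-1.rds.amazonaws.com \
  --port 5432 --username dbuser --password dbpassword --database-name mydb
The first command describes the source (the PostgreSQL instance on GCP, reached by its IP), the second the target (the RDS instance from step 1).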
After many attempts I completed my database migration successfully.
Hope this helps someone who's going through the same problems.
Related
I am currently building a web app which needs a database to store user info. The startup I'm working for wants to deploy it on Elastic Beanstalk. I am just getting started with all the cloud stuff and am completely at a loss.
Should I create a MongoDB Atlas cluster? Will it be able to connect to my app hosted on EB? Will I need to upgrade my plan on AWS to be able to connect to a database? Can it be integrated with DynamoDB? If yes, is DynamoDB significantly costlier?
I don't have answers to any of the above questions and am just honestly looking for a roadmap on what to do. I went through numerous articles and videos but still can't arrive at a solution. Any help will be much appreciated.
Should I create a MongoDB Atlas cluster?
That is one possible solution. You could also look at Amazon's DocumentDB service, which is MongoDB-compatible.
Will it be able to connect to my app hosted on EB?
There is nothing preventing you from connecting to a MongoDB Atlas cluster from EB.
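As a rough sketch of how that usually looks (the variable name and connection string below are placeholders, not anything your setup requires): store the Atlas connection string in an Elastic Beanstalk environment property, for example with the EB CLI,
eb setenv MONGODB_URI="mongodb+srv://appuser:secret@cluster0.example.mongodb.net/mydb"
and have your app read MONGODB_URI from its environment at startup. You will also need to allow your EB environment's outbound IPs (or a peered VPC) in the Atlas network access list.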
Will I need to upgrade my plan on AWS to be able to connect to a database?
No
Can it be integrated with DynamoDB?
DynamoDB is a completely different database system that shares almost nothing with MongoDB other than the fact that neither of them uses SQL. If your application already uses MongoDB, converting it to DynamoDB could be a large lift.
If yes, is DynamoDB significantly costlier?
In general, DynamoDB would be significantly cheaper than MongoDB, because DynamoDB is a "serverless" offering that charges based on your usage patterns, while MongoDB would include charges for a server running 24/7.
I've finished working on my Node application, which provides a series of API endpoints for a mobile application. It's built with Node and MongoDB as the database. Now I've reached the point where I should pick the right deployment environment.
Initially I'll run a private beta, but I need to choose a service I can scale easily (I'm not a devops person) with the right price balance.
My initial choice is Amazon AWS (Elastic Beanstalk?). What about the DB? I haven't used DynamoDB, in order to stay more service-agnostic, but now I don't know how to create a reliable DB infrastructure. Any suggestions for deploying both the app and the DB so that scaling is easy if it becomes necessary?
We currently have an API service that simply fulfills requests for data in a MongoDB database hosted in Atlas. We have a separate Node service that the API service calls to actually get/put data in the MongoDB database. I'm wondering if the API service shouldn't just access the MongoDB database directly. Having the API service talk to MongoDB directly seems simpler: fewer services to maintain and scale, and fewer potential points of failure. I'd appreciate anyone's thoughts on this. Thanks.
If your API service is secure, I see no need for the Node service step. I have an API that has read/write access to my MongoDB Atlas instance and it works just fine. I also use monitoring such as New Relic, and I'd hate to have to include another service in that for no reason. I'd say that if you can track the latency of queries with the Node service and without it, you will probably find you benefit from removing that step.
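If you want a quick, informal way to make that comparison, you can time the same request along both paths with curl; these URLs are placeholders for your API fronting the Node service versus a test deployment that talks to MongoDB directly:
curl -s -o /dev/null -w "%{time_total}\n" https://api.example.com/items
curl -s -o /dev/null -w "%{time_total}\n" https://api-direct.example.com/items
Repeat this a handful of times and compare the totals; the difference is roughly the overhead of the extra hop.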
I've built a small web app that I'm thinking of deploying as a production site to Azure. I've used a code-first approach. I'm after some advice about maintaining an Azure production DB when it is possible/likely that some of my models might still change in the early phase of deployment, depending on user testing, etc.
My workflow is probably not ideal. I've published to Azure for testing, and my data is stored in a SQL DB called project_db on Azure.
But I keep making changes to my models on my local machine. Rather than using migrations - which I kind of get, but also find a bit difficult to handle - my workflow is to change my model, e.g. adding a property, then delete my local database and build my solution again. At least on my local machine that works without having to implement migrations, and since I don't need any locally stored data, it just seeds again.
I was hoping to confirm that if I head down this path I'd have to do the same thing on Azure. That is, if I've changed my models locally, deleted my local DB and built my solution again locally, I can't just publish to Azure and expect my previously created project_db to work. I'd have to delete the Azure project_db and create a new Azure DB that would be built based on my changed models.
And once I have changed my models on my machine (before having enabled migrations) - say I've added 10 new properties to IdentityModels.cs - and I want to deploy to my existing project_db that already contains data: if I enable migrations at this point, will it migrate to Azure and keep my data? Or do migrations have to be enabled from the beginning, before the very first publishing of the DB to Azure?
I did try to publish to the Azure project_db after having changed my models (including IdentityModel.cs) on my local machine. I wasn't able to log in, even though the AspNetUser table still contained the email addresses, etc. that had previously been entered. I'm assuming that's because my changed models no longer match the Azure AspNetUser table in project_db.
Thanks for the advice.
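For what it's worth, the standard EF6 Code First migrations workflow (run from the Package Manager Console) looks roughly like the sketch below. The migration names are placeholders, and whether your existing Azure data survives depends on the generated migration only adding columns rather than dropping or recreating tables, so review the generated code before running Update-Database against project_db.
Enable-Migrations
Add-Migration InitialBaseline -IgnoreChanges   # baseline migration for the schema that already exists
Add-Migration AddIdentityProperties            # picks up the new model properties
Update-Database                                # applies pending migrations to the current connection string
Migrations can be enabled at any point; the -IgnoreChanges baseline is the usual approach when the database already exists.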
I have been able to successfully create a Google Container Cluster in the Developers Console and have deployed my app to it. This all starts up fine; however, I find that I can't connect to Cloud SQL. I get:
"Error: Handshake inactivity timeout"
After a bit of digging: I hadn't had any trouble connecting to the database from App Engine or from my local machine, so I thought this was a little strange. It was then that I noticed the cluster permissions...
When I select my cluster I see the following;
Permissions
User info Disabled
Compute Read Write
Storage Read Only
Task queue Disabled
BigQuery Disabled
Cloud SQL Disabled
Cloud Datastore Disabled
Cloud Logging Write Only
Cloud Platform Disabled
I was really hoping to use both Cloud Storage and Cloud SQL from my Container Engine nodes. I have allowed access to each of these APIs in my project settings, and my Cloud SQL instance is accepting connections from any IP (I've been running Node in a Managed VM on App Engine previously), so my thinking is that Google is explicitly disabling these APIs.
So my two-part question is:
Is there any way that I can modify these permissions?
Is there any good reason why these APIs are disabled? (I assume there must be.)
Any help much appreciated!
With Node Pools, you can sort of add scopes to a running cluster by creating a new node pool with the scopes you want (and then deleting the old one):
gcloud container node-pools create np1 --cluster $CLUSTER --scopes $SCOPES
gcloud container node-pools delete default-pool --cluster $CLUSTER
The permissions are defined by the service accounts attached to your node VMs during cluster creation (service accounts can't be changed after a VM is instantiated, so this is the only time you can pick the permissions).
If you use the cloud console, click the "More" link on the create cluster page and you will see a list of permissions that you can add to the nodes in your cluster (all defaulting to off). Toggle any on that you'd like and you should see the appropriate permissions after your cluster is created.
If you use the command line to create your cluster, pass the --scopes flag to gcloud container clusters create to set the appropriate service account scopes on your node VMs.
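For example, to create a cluster whose nodes can reach Cloud Storage and Cloud SQL (the cluster name and scope aliases here are only illustrative, pick the ones you actually need):
gcloud container clusters create my-cluster --scopes=storage-rw,sql-admin,logging-write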
I've found a couple of things that may be of interest:
Permissions belong to a service account (the so-called Compute Engine default service account, which looks like 12345566788-compute@developer.gserviceaccount.com)
By default, any VM runs using this service account, and its permissions do not let you access Cloud SQL, buckets and so on. But...
But you can change this behavior by using another service account with the right permissions. Just create it manually, grant only the permissions you need, and switch to it using gcloud auth activate-service-account --key-file=/new-service-account-cred.json
That's it.
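A rough sketch of those steps, with placeholder project and account names (grant whichever roles your workload actually needs, roles/cloudsql.client is just an example):
gcloud iam service-accounts create my-app-sa --display-name "my-app"
gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:my-app-sa@my-project.iam.gserviceaccount.com \
  --role roles/cloudsql.client
gcloud iam service-accounts keys create /new-service-account-cred.json \
  --iam-account my-app-sa@my-project.iam.gserviceaccount.com
gcloud auth activate-service-account --key-file=/new-service-account-cred.json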
For Cloud SQL there's also the possibility to connect from containers through the Cloud SQL proxy, as explained here: https://cloud.google.com/sql/docs/postgres/connect-container-engine
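A minimal sketch of the proxy approach (the instance connection name is a placeholder; on Container Engine you would normally run the proxy as a sidecar container rather than a bare binary, as the linked page describes):
./cloud_sql_proxy -instances=my-project:us-central1:my-instance=tcp:5432
Your application then connects to 127.0.0.1:5432 as if the database were local.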