I have built an application using Express, Postgres, and Sequelize on Google App Engine and I'm having some trouble running a longer migration. This migration simply dumps the data from one of my large tables into Elasticsearch.
As of right now, I have been running my migrations in the pre-start command as such
npm i && sequelize db:migrate
but I notice that Google App Engine has been running my migration over and over again due to the auto-scaling nature of the instances. Is there a better practice for running migrations? Is there a way to only run this migration once and prevent auto-scaling for just the pre-start command?
First, it is necessary to understand how App Engine handles the scaling types:
Automatic scaling creates instances based on request rate, response latencies, and other application metrics. You can specify thresholds for each of these metrics, as well as a minimum number of instances to keep running at all times.
Basic scaling creates instances when your application receives requests. Each instance will be shut down when the application becomes idle. Basic scaling is ideal for work that is intermittent or driven by user activity.
Manual scaling specifies the number of instances that continuously run regardless of the load level. This allows tasks such as complex initializations and applications that rely on the state of the memory over time.
I recommend choosing manual scaling so you can set the specific number of instances you need/want, or, if you are going to use automatic scaling, pay close attention to the limits (max/min (idle) instances) and set explicit values for them. However, it is up to you to choose the configuration that best suits your requirements.
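For reference, a rough sketch of what those settings look like in app.yaml (the keys shown are for the standard environment; the flexible environment uses slightly different names such as max_num_instances, so treat these values as placeholders):

# Option A: a fixed number of instances
manual_scaling:
  instances: 1

# Option B: automatic scaling with explicit limits (use instead of option A)
# automatic_scaling:
#   min_instances: 1
#   max_instances: 3
#   max_idle_instances: 1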
That being said, regardless of the scaling method you choose, it seems that your script is being restarted every time GAE scales, or that the script itself is telling your application to repeat the process over and over. It would be useful if you shared details on how you are executing your script and what it does, in order to get a better perspective.
A possible workaround for this task could be to port the functionality of the migration script itself into the body of an admin-protected handler in the GAE app, which can be triggered with an HTTP request to a particular URL.
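For illustration, a minimal sketch of such a handler in Express. The /admin/run-es-migration path, the MIGRATION_SECRET variable and the dumpTableToElasticsearch() helper are made-up names, and the shared-secret check stands in for whatever admin protection you actually use (IAP, a cron-only header check, etc.):

// Hypothetical admin-only endpoint that runs the Elasticsearch dump once, on demand.
const express = require('express');
const app = express();

app.post('/admin/run-es-migration', async (req, res) => {
  // Simple shared-secret check; replace with your preferred admin protection.
  if (req.get('X-Admin-Token') !== process.env.MIGRATION_SECRET) {
    return res.status(403).send('Forbidden');
  }
  try {
    await dumpTableToElasticsearch(); // placeholder: your existing migration logic, moved here
    res.send('Migration finished');
  } catch (err) {
    console.error(err);
    res.status(500).send('Migration failed');
  }
});

app.listen(process.env.PORT || 8080);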
I think it is possible to split the potentially long-running migration operation into a sequence of smaller operations (using push task queues), which is much more GAE-friendly.
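As a rough sketch of that decomposition (the model name, index name and v7-style @elastic/elasticsearch client are assumptions), each page of rows can be one small unit of work, e.g. the body of a single push task:

// Hypothetical batch step: copy one page of rows into Elasticsearch.
const { Client } = require('@elastic/elasticsearch'); // v7-style client API assumed
const es = new Client({ node: process.env.ES_URL });
const { MyLargeTable } = require('./models'); // placeholder for your Sequelize model

async function migrateBatch(offset, limit) {
  const rows = await MyLargeTable.findAll({ order: [['id', 'ASC']], offset, limit, raw: true });
  if (rows.length === 0) return false; // nothing left to migrate

  // Bulk-index this page; action/document pairs as the bulk API expects.
  const body = [];
  for (const row of rows) {
    body.push({ index: { _index: 'my_large_table', _id: row.id } }, row);
  }
  await es.bulk({ body });
  return true; // the caller (or the next push task) handles the following page
}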
Also, I suggest you take a look at this thread.
Since I understand you want to migrate your data from PostgreSQL to an Elasticsearch index, I found this tutorial which recommends creating a CSV file from your PostgreSQL database and then converting the CSV data to JSON, because you can then use the Elasticdump tool to load the JSON file into Elasticsearch as documents. These steps run on Node.js, so you can create a script on App Engine or in Cloud Functions (depending on the data size) and execute the import, for example:
# import
node_modules/elasticdump/bin/elasticdump --input=formatted.json --output=http://localhost:9200/
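If you follow that route, the intermediate JSON typically needs to be newline-delimited, with each line carrying the document's _index and _source, which is the shape elasticdump works with. A hand-rolled sketch of the CSV-to-JSON step (file names and the index name are placeholders; a proper CSV parser is preferable for real data):

// Naive CSV -> newline-delimited JSON conversion for elasticdump.
// Assumes a simple comma-separated file with a header row and no quoted fields.
const fs = require('fs');
const readline = require('readline');

async function csvToNdjson(csvPath, outPath, indexName) {
  const out = fs.createWriteStream(outPath);
  const rl = readline.createInterface({ input: fs.createReadStream(csvPath) });
  let headers = null;
  for await (const line of rl) {
    const cols = line.split(',');
    if (!headers) { headers = cols; continue; } // first line is the header row
    const source = Object.fromEntries(headers.map((h, i) => [h, cols[i]]));
    out.write(JSON.stringify({ _index: indexName, _source: source }) + '\n');
  }
  out.end();
}

csvToNdjson('export.csv', 'formatted.json', 'my_large_table').catch(console.error);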
Related
I have a nodejs web server where there seem to be a couple of bottlenecks preventing it from fully functioning under peak load.
logging multiple events to our SQL server
logging multiple events to our elastic cluster
Under heavy load, both SQL and Elastic seem to reject my requests and/or considerably slow down, so I've decided to reduce the load on these DBs via Logstash (for Elastic) and an async task queue (for SQL).
Since I'm working with limited time, I'm wondering if I can solve these issues with just an async task queue (like Kue) where I can push both SQL and Elastic logs.
Do I need to implement Logstash as well, or does an async queue solve the same problem as Logstash?
You can try the Async library's queue feature and run it in a child process, or better yet, on a separate server as a queue microservice. That way you move the load to another location, giving your app server a boost.
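A minimal sketch of that idea using the async library's queue (async v3 assumed, where the worker can be an async function); writeLogToSql and writeLogToElastic are placeholders for your existing write code:

// Buffer log writes in-process so bursts don't hammer SQL/Elasticsearch directly.
const async = require('async');

const logQueue = async.queue(async (job) => {
  if (job.target === 'sql') await writeLogToSql(job.payload);              // placeholder
  else if (job.target === 'elastic') await writeLogToElastic(job.payload); // placeholder
}, 5); // concurrency: at most 5 writes in flight at once

// Somewhere in your request handling:
logQueue.push({ target: 'elastic', payload: { event: 'login', userId: 42 } }, (err) => {
  if (err) console.error('log write failed', err);
});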
As you mentioned you are using Azure, I would strongly recommend using their queue solution plus a couple of Azure Functions to handle reading from the queue and processing.
I've rolled my own solution before using Node.js and RabbitMQ, with node workers reading from the queue and writing to Elasticsearch, because that solution couldn't go into the cloud.
It works and is robust but it takes quite a bit of configuration and custom code that is a maintenance nightmare. Ripped that out as soon as I could.
The benefits of using the azure service are:
Very little bespoke configuration is required.
Less custom code === fewer bugs & less maintenance
Scaling isn't an issue; some huge businesses rely on this
No 2 a.m. support calls; if Azure is down, they are going to fix it... fast
Much cheaper; unless the throughput is massive and constant, the scaling model works out much cheaper, and Azure Functions are perfect as you won't have running servers sitting there doing nothing when the queue is empty.
Same could be said for AWS and Google Cloud.
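Coming back to the Azure option, a small sketch of the producer side with the @azure/storage-queue SDK; the connection string, queue name and message shape are placeholders, and a queue-triggered Azure Function would then do the actual SQL/Elasticsearch writes:

// Drop log events onto a storage queue instead of writing to the databases directly.
const { QueueServiceClient } = require('@azure/storage-queue');

const queueClient = QueueServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
  .getQueueClient('log-events');

async function enqueueLog(event) {
  await queueClient.createIfNotExists();
  // Queue-triggered Azure Functions expect base64-encoded message text by default.
  await queueClient.sendMessage(Buffer.from(JSON.stringify(event)).toString('base64'));
}

enqueueLog({ type: 'request', path: '/checkout', durationMs: 123 }).catch(console.error);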
We are having issues with an Azure Application Service. One of our webservices (MVC) caches data from the database at startup (Application_Start) - this takes approximately 3 minutes. Until this is ready we can't handle requests.
This is known so we set it 'always on' and will aim to only restart it during off-peak times if necessary.
However, we expect heavy load on the server next month, and in our testing of the auto-scaling we have found that when it adds additional instances, each of these instances goes through the same startup delay, but traffic is split between the already-running instance and the new one that is still warming up, so e.g. half of the requests start failing for that 3-minute period.
How can we configure Azure to delay using the new instance until it is ready? (or should we be using e.g. AWS instead?).
Some of the documentation points to using a custom Load Balancer Probe; however, it mainly talks about VMs, whereas we are using PaaS.
Do try to reduce the data you need to load in Application_Start, and try to lazy-load data into the cache on first request. Sometimes, even after doing all of this, we end up with large sets of data that are necessary at startup.
There are two ways we can approach this.
One: assuming you are using in-memory caching, every instance of the app needs to hydrate its in-memory cache on Application_Start. Try using an external cache provider like Azure Cache for Redis instead; a new instance can then just point to this external cache without having to reload the data.
Two: you can rely on the Application Initialization Module, which was introduced in IIS 7.5 (and is installed on Azure App Services' IIS). To use this feature, you need to add an applicationInitialization section under the system.webServer section of web.config. This keeps the instance from being made available until the warm-up process has completed. More info on how to use Application Initialization is available in this blog post.
The best approach would be to use a combination of both: applicationInitialization points at a page in your application which checks whether the external cache is available and hydrated; if yes, it completes, otherwise it hydrates the external cache.
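As a sketch, the relevant web.config fragment looks roughly like this; the /warmup path is a placeholder for whatever page checks/hydrates your cache:

<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <!-- Request this page before the instance is considered ready to serve traffic -->
    <add initializationPage="/warmup" />
  </applicationInitialization>
</system.webServer>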
You can do this in Azure with resource types other than classic VMs, such as an App Service. App Services scale up and down with instances that share the same memory pool and thread pool.
There is a lot of good information in the link https://www.jan-v.nl/post/warming-up-your-app-service that was included in one of the comments.
Based on that information the functionality that you require is not available in the free tier.
I would approach the problem differently. Why does it take 3 mins to load the data from the database? Since it is only loaded on start it should be data that does not change often.
Could you:
Optimise the reading of data from the database?
Reduce the amount of data you read from the database?
Export the data to a file, and read it from a file?
My recommendation would be to use an Azure Load Balancer with a health probe
Currently I am solving an engineering problem, and want to open the conversation to the SO community.
I want to implement a task scheduler. I have two separate instances of a nodeJS application sitting behind an elastic load balancer (ELB). The problem is that when both instances come up, they try to execute the same task logic, causing the tasks to run more than once.
My current solution is to use node-schedule to schedule tasks to run, and then have them reference the database to check whether the task has already been run within its specified run interval.
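For reference, that database check can be made a single atomic write so the database itself arbitrates which instance runs the task. A rough sketch, assuming PostgreSQL purely for illustration, a job_runs table with a unique constraint on (job_name, run_bucket), and a placeholder task function:

// Every instance schedules the job, but only the instance that wins the INSERT executes it.
const schedule = require('node-schedule');
const { Pool } = require('pg');
const pool = new Pool(); // connection settings from the usual PG* environment variables

schedule.scheduleJob('0 * * * *', async () => {           // every hour, on the hour
  const bucket = new Date().toISOString().slice(0, 13);   // e.g. "2024-01-01T10"
  const { rowCount } = await pool.query(
    'INSERT INTO job_runs (job_name, run_bucket) VALUES ($1, $2) ON CONFLICT DO NOTHING',
    ['hourly-cleanup', bucket]
  );
  if (rowCount === 1) {
    await runHourlyCleanup(); // placeholder for the actual task logic
  } // rowCount === 0 means another instance already claimed this run
});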
The logic here is a little messy, and I am wondering if there is a more elegant way I could go about doing this.
Perhaps it is possible to set a particular env variable on a specific instance - so that only that instance will run the tasks.
What do you all think?
What you are describing appears to be a perfect example of a use case for AWS Simple Queue Service.
https://aws.amazon.com/sqs/details/
Key points to look out for in your solution:
Make sure that you pick a visibility timeout that reflects your workload (so messages don't re-enter the queue while still being processed by another worker).
Don't store your workload in the message, reference it! A message can only be up to 256 KB in size, and message size has an impact on performance and cost.
Make sure you understand billing! Billing is charged in 64 KB chunks, meaning one 220 KB message is charged as 4x 64 KB chunks / requests.
If you keep your messages small, you can save more money by doing batch requests, as your bang for buck will be far greater!
Use long polling to retrieve messages to get the most value out of your message requests (see the sketch after this list).
Grant your application permissions to SQS by using an EC2 IAM Role, as this is the best security practice and the recommended approach by AWS.
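A minimal consumer sketch with the AWS SDK for Node.js that puts those points together: long polling, a visibility timeout longer than the task, and deleting the message only after successful processing (the queue URL and handleTask are placeholders):

// Long-polling SQS consumer; assumes credentials/region come from the environment or an IAM role.
const AWS = require('aws-sdk'); // v2 SDK assumed
const sqs = new AWS.SQS();
const QueueUrl = process.env.TASK_QUEUE_URL; // placeholder

async function poll() {
  const { Messages = [] } = await sqs.receiveMessage({
    QueueUrl,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 20,    // long polling
    VisibilityTimeout: 120, // longer than your worst-case task duration
  }).promise();

  for (const msg of Messages) {
    await handleTask(JSON.parse(msg.Body)); // placeholder task handler
    await sqs.deleteMessage({ QueueUrl, ReceiptHandle: msg.ReceiptHandle }).promise();
  }
  setImmediate(poll); // keep polling
}

poll().catch(console.error);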
It's an excellent service, and should resolve your current need nicely.
Thanks!
Xavier.
I'm probably opening up a can of worms with regard to how many hundreds of directions can be taken with this- but I want high availability / disaster recovery with my MEANjs servers.
Right now, I have 3 servers:
MongoDB
App (Grunt'ing the main application; this is the front-end server)
A third server for other processing on the back-end
So at the moment, if I reboot my MongoDB server (or more realistically, it crashes for some reason), I suddenly see this in my App server terminal:
MongoDB connection error: Error: failed to connect to [172.30.3.30:27017]
[nodemon] app crashed - waiting for file changes before starting...
After MongoDB is back online, nothing happens on the app server until I re-grunt.
What's the best practice for this situation? You can see in the error that I'm using nodemon to monitor changes to the app. I bet that upon init I could get my MongoDB server to update a file on the app server within nodemon's view to force a restart? Or is there some other tool I can use for this? Or should I be handling my connections to the db server more gracefully so the app doesn't "crash"?
Is there a way to re-direct to a secondary mongodb in case the primary isn't available? This would be more apt to HA/DR type stuff.
I would like to start with a side note: Given the description in the question and the comments to it, I am not convinced that using AWS is a wise option. A PaaS provider like Heroku, OpenShift or AppFog seems to be more suitable, especially when combined with a MongoDB service provider. Running MongoDB on EBS can be quite a challenge when you are new to MongoDB. And pretty expensive, too, as soon as you need provisioned IOPS.
Note: In the following paragraphs, I have simplified a few things for the sake of comprehensibility.
If you insist on running it on your own, however, you have an option. MongoDB itself comes with means of automatic, transparent failover, called a replica set.
A minimal replica set consists of two data-bearing nodes and a so-called arbiter. Write operations go only to the node currently elected "primary", and reads do, too, unless you explicitly allow or request reads to be performed on the current "secondary". The secondary constantly syncs with the primary. If the current primary goes down for some reason, the former secondary is elected primary.
The arbiter is there so that there is always a quorum (a qualified majority would be an equivalent term) of members to elect the current secondary to be the new primary. This quorum mainly matters for edge cases, but since you cannot rule out those edge cases, an odd number of members is a hard requirement for a MongoDB replica set (setting aside some special cases).
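For reference, a minimal three-member configuration (two data-bearing nodes plus an arbiter), initiated from the mongo shell; the host names are placeholders, and each mongod must be started with the matching --replSet name:

// Run once, connected to the node that should become the first primary.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongo-a.example.internal:27017" },
    { _id: 1, host: "mongo-b.example.internal:27017" },
    { _id: 2, host: "arbiter.example.internal:27017", arbiterOnly: true }
  ]
});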
The beauty of this is that almost all drivers, and the Node.js driver for sure, are replica-set aware and deal with the failover procedure pretty gracefully. They simply send the reads and writes to the new primary, without any change needed anywhere else.
You only need to deal with some cases during the failover process. Without going into much detail, you basically check for certain errors in the corresponding callbacks and redo the operation if you encounter one of those errors and redoing the operation is feasible.
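A sketch of what that looks like from Node.js with the official mongodb driver: list the data-bearing members in the connection string, name the replica set, and wrap writes in a small retry so a failover-era error just gets retried (the URI, names and retry policy are placeholders):

// Replica-set aware connection; after a failover the driver follows the new primary.
const { MongoClient } = require('mongodb');

const uri = 'mongodb://mongo-a:27017,mongo-b:27017/mydb?replicaSet=rs0';
const client = new MongoClient(uri);

async function withRetry(op, attempts = 3) {
  for (let i = 1; i <= attempts; i++) {
    try { return await op(); }
    catch (err) {
      if (i === attempts) throw err;
      await new Promise((r) => setTimeout(r, 500 * i)); // brief backoff while the election completes
    }
  }
}

async function main() {
  await client.connect();
  const users = client.db('mydb').collection('users');
  await withRetry(() => users.insertOne({ name: 'alice' }));
}

main().catch(console.error);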
As you might have noticed, the third member, the arbiter, does not hold much data. It is a very lightweight process and can basically run on the cheapest instance you can find.
So you get data replication and automatic, transparent failover with relative ease at the cost of the cheapest VM you can find, since you would need two data-bearing nodes anyway if you used any other means.
The scenario is simple: using EF code first migrations, with multiple azure website instances, decent size DB like 100GB (assuming azure SQL), lots of active concurrent users..say 20k for the heck of it.
Goal: push out update, with active users, keep integrity while upgrading.
I've sifted through all the docs I can find. However, the core details seem to be missing, or I'm blatantly overlooking them. When Azure receives an update request via FTP/git/TFS, how does it process the update? What does it do with active users? For example, does it freeze incoming requests to all instances, let items already processing finish, upgrade/replace each instance, let EF migrations run, and then let traffic start again? If it upgrades/refreshes all instances simultaneously, how does it ensure EF migrations run only once? If it refreshes instances live in a rolling-upgrade process (upgrading one at a time with no inbound traffic freeze), how could it ensure integrity, since instances in the older state would/could potentially break?
The main question, what is the real process after it receives the request to update? What are the recommendations for updating a live website?
To put it simply, it doesn't.
EF Migrations and Azure deployment are two very different beasts. Azure deployment gives you a number of options, including update and staging slots; you've probably seen Deploy a web app in Azure App Service, which is a good starting point for other readers.
In general, the Azure deployment model is concerned with the active connections to the IIS/Web Site stack. An update ensures uninterrupted user access by taking the instance being deployed out of the load balancer pool and redirecting traffic to the other instances. It then cycles through the instances, updating them one by one.
This means that at any point in time, during an update deployment there will be multiple versions of your code running at the same time.
If your EF model has not changed between code versions, then Azure deployment works like a charm; users won't even know that it is happening. But if you need to apply a migration as part of the deployment, BEWARE.
In general, EF will only load the model if the code and DB versions match. It is very hard to use EF Migrations and support multiple code versions of the model at the same time.
EF Migrations are largely controlled by the Database Initializer.
See Upgrade the database using migrations for details.
As a developer you get to choose how and when the database will be upgraded, but know that if you are using Migrations and deployment updates:
New code will not easily run against the old database schema.
If the old code/app restarts, many default initialization strategies will attempt to roll the schema back; if this happens, refer to point 1. ;)
If you end up with the EF model loaded against the wrong version of the schema, you will experience exceptions and general failures when the code tries to use schema elements that are not there.
The simplest way to manage an EF migration on a live site is to take all instances of the site down for deployments that include an EF migration.
- You can use a maintenance page or a redirect, that's up to you.
If you are going to this trouble, it is probably best to manually apply the DB update, then if it fails you can easily abort the deployment, because it hasn't started yet!
Otherwise, deploy the update and the first instance to spin up will run the migration, if the initializer has been configured to do so...
If you absolutely must have continuous deployment of both site code/content and model updates then EF migrations might not be the best tool to get started with as you will find it very restrictive OOTB for this scenario.
I was watching a "Fundamentals" course on Pluralsight and this was touched upon.
If you have 3 sites, Azure will take one offline and upgrade that, and then when ready restart it. At that point, the other 2 instances get taken offline and your upgraded instance will start, thus running your schema changes.
When those 2 come back the EF migrations would already have been run, thus your sites are back.
In theory then it all sounds like it should work, although depending on how long the EF migrations take to run, requests may be delayed.
However, the comment from the author was that in this scenario (i.e. making schema changes) you should consider if your website can run in this situation. The suggestion being that you either need to make your code work with both old and new schemas, or show a "maintenance system down page".
The summary seems to be that depending on what you are actually upgrading, this will impact and affect your choices and method of deployment.
Generally speaking, if you want to support active upgrades you need to support multiple versions of your application simultaneously. This is really the only way to reliably stay active while you migrate/upgrade. Also consider feature switches to roll out your conversion in a controlled manner.