Migrating a CloudBees application to Amazon Elastic Beanstalk

CloudBees is shutting down its hosting services, so I am looking to migrate my Java web app to Amazon Elastic Beanstalk.
Question 1: Is it the right choice?
Question 2: In CloudBees, all I did was choose the app cell size (256 MB) and enable auto scaling, and I never worried about anything else. Now, while configuring the same thing in Beanstalk, I see a new setting called instance type (t1, t2, small, etc.) in addition to the software configuration tab where I set the initial JVM heap to 256 MB.
So what should I set the instance type to?
Question 3: In CloudBees, the price depended on the app cell, i.e. the JVM memory I chose, but it seems that in Beanstalk I can set the memory even higher and the price is charged on the basis of instance type. Is that right?
Question 4: Given that I just need to replicate the same setup, i.e. an initial JVM heap of 256 MB with auto scaling enabled, what should my corresponding settings in Beanstalk be?

Question 1
This is an opinion and you will likely get differing answers.
Question 2
Instance types are based on EC2's list of instance types, and each instance's price is determined by its memory and CPU. Pricing is then charged per hour. Do take note of the Free Tier on that page, though, since you are new.
Question 3
Yes, it is based on the hourly rate of the underlying EC2 instance type (unless you are in the Free Tier).
For example, if you had two t2.micro instances running, one for 40 hours and one for 10, you would be billed for 50 instance-hours at the t2.micro hourly rate.
Question 4
You'd want whichever instance type you choose running in a load-balanced environment with Auto Scaling turned on.
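To replicate the CloudBees setup, you could pick a micro or small instance type and commit an .ebextensions config file with your app. A minimal sketch, assuming the Tomcat platform (the instance type and scaling bounds are illustrative, not a recommendation):
# .ebextensions/scaling.config
option_settings:
  aws:elasticbeanstalk:environment:
    EnvironmentType: LoadBalanced
  aws:autoscaling:launchconfiguration:
    InstanceType: t2.micro
  aws:autoscaling:asg:
    MinSize: 1
    MaxSize: 4
  aws:elasticbeanstalk:container:tomcat:jvmoptions:
    Xms: 256m
    Xmx: 256m
This keeps the JVM heap at 256 MB while letting Auto Scaling add or remove instances between the MinSize and MaxSize bounds.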

Related

Azure App Service: what if the current instance count is higher than the selected?

I have a very specific question which I haven't found an answer to (partly because I can't find an easy way to search Google for this).
Imagine I have an App Service that has one autoscale rule:
- From 0700 to 0800, increase to a specific count of 4
The question is: what if at 0650 the instances went up to 6 because of unexpected demand? When it is 0700, will Azure decrease to 4 or keep 6?
I think the latter, but wanted to know if anyone has any experience with this.
Thanks
The question is: what if at 0650 the instances went up to 6 because of unexpected demand? When it is 0700, will Azure decrease to 4 or keep 6?
It will decrease to 4.
As the article says: "Horizontal scaling, also called scaling out and in, means adding or removing instances of a resource. The application continues running without interruption as new resources are provisioned. If demand drops, the additional resources can be shut down cleanly and deallocated."
For more details, you could refer to this article.

Please suggest Google Cloud App Engine's smallest configuration

I have a Node.js web application / website hosted in Google Cloud App Engine. The website will have no more than 10 users per day and does not have any complex, resource-consuming features.
I used the app.yaml file given in the tutorial:
# [START app_yaml]
runtime: nodejs
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
# [END app_yaml]
But this is costing around 40 USD per month, which is too high for a basic application. Can you please suggest the lowest-cost resource configuration possible? It would be helpful if you could provide a sample app.yaml for it.
Google Cloud Platform's Pricing Calculator shows that the specs in your app.yaml come out to a total estimated cost of $41.91 per month, so your costs seem right.
App Engine Flexible instances are charged for their resources by the hour. With the manual_scaling option set, your instance is up all the time, even when there is no traffic and it is not doing any work. So not turning your instance down during idle time is the reason for the $40 bill. You might want to look into using Automatic or Basic scaling to minimize the time your instance is running, which will likely reduce your bill considering you don't have traffic 24/7 (you will find examples of proper app.yaml settings via the link).
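As a rough sketch of that suggestion for the flexible environment, an automatic_scaling section could replace manual_scaling roughly as follows (the field names come from the flexible environment's scaling options; the values are assumptions you would tune):
runtime: nodejs
env: flex
# Let App Engine add or remove instances based on CPU load
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
  cool_down_period_sec: 120
  cpu_utilization:
    target_utilization: 0.6
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10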
Note that with automatic/basic scaling you get to select instance classes with less than 1 dedicated core (i.e. 0.2 and 0.5 CPUs). I'm not sure if setting the CPU to be > 0 and < 1 with manual_scaling here would also work; you might want to give it a try as well.
Also, don't forget to have a detailed look at your bills to see what else you are potentially being charged for.
After a few searches, that seems to be the lowest possible configuration. See a related answer here:
Can you use fractional vCPUs with GAE Flexible Environment?
At least for now, there are no shared CPUs, so you'll pay for a full one even if your app uses an average of 2% of it. Maybe adding a few stars here will help change that in the near future:
https://issuetracker.google.com/issues/62011060
After reading articles on the internet, I created one f1-micro (1 vCPU, 0.6 GB memory) VM instance with the Bitnami MEAN stack, which costs ~$5.50/month. I was able to host one MongoDB instance and two Node.js web applications on it. The two applications have different domain names.
I implemented a reverse proxy using the Apache HTTP server to route traffic to the appropriate Node.js application by its domain name/hostname. I have documented the steps I followed here: https://medium.com/#prasadkothavale/host-multiple-web-applications-on-single-google-compute-engine-instance-using-apache-reverse-proxy-c8d4fbaf5fe0
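As an illustration of that approach (not the exact configuration from the article), name-based routing with Apache's mod_proxy looks roughly like this, with placeholder hostnames and ports:
<VirtualHost *:80>
    # Requests for app1.example.com go to the Node.js app listening on port 3000
    ServerName app1.example.com
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>
<VirtualHost *:80>
    # Requests for app2.example.com go to the second Node.js app on port 3001
    ServerName app2.example.com
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:3001/
    ProxyPassReverse / http://127.0.0.1:3001/
</VirtualHost>
The mod_proxy and mod_proxy_http modules need to be enabled for this to work.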
Feel free to suggest if you have any other ways to implement this scenario.
The cheapest way to host a Node JS application is through Google Compute Engine, not Google App Engine.
This is because you can host it for 100% free on Compute Engine!
I have many Node apps that have been running for the last 2 years, and I have been charged a maximum of a few cents per month, if any at all.
As long as you are fine with a low spec machine (shared vCPU) and no scaling, look into the Compute Engine Always Free options.
https://cloud.google.com/free/docs/always-free-usage-limits#compute_name
The only downside is that you have to set up the server (installing Node, setting up firewalls, etc.). But it is a one-time job, and easily repeatable after you have done it once.
The App Engine Standard environment would be the best route for your use case. The standard environment runs directly on Google's infrastructure, scales quickly, and scales down to zero when there's no traffic. The free quota might be sufficient for this use case as well.
The App Engine Flexible environment runs as a container in a GCE VM (1 VM per instance/container). This makes it slower to scale than the standard environment, as scaling up requires new VMs to boot before the instance containers can be pulled and started. Flex also requires a minimum of 1 instance running at all times (whereas standard scales down to 0).
Flex is useful when your runtime/resource requirements go beyond the limitations of the standard environment.
You can understand more about the differences between the standard and flex environments at https://cloud.google.com/appengine/docs/the-appengine-environments
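For comparison, a standard-environment app.yaml can be as small as the sketch below (the runtime name and the optional cost caps are assumptions; check which Node.js standard runtimes are currently supported):
runtime: nodejs10
# Standard environment scales automatically and down to zero when idle;
# instance_class and max_instances are optional knobs to cap cost
instance_class: F1
automatic_scaling:
  max_instances: 1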
Use the Basic, not Flexible. It is a better fit and far cheaper for you.

Google Cloud node.js flexible environment

I deployed a Node.js app as a learning tool and noticed that I'm getting billed for the project (around $1/day). I know Node.js on Google Cloud uses Compute Engine to run the VMs, and they say the flexible environment has all the advantages of the App Engine platform, but it seems the instances don't automatically stop and start to reduce billing when not in use.
I have a Java project that's been running on App Engine for years and I've never been billed anything; I'm guessing that's because the instances are shut down automatically when not in use. So my questions are:
Is there a way to configure the flexible environment to mimic the standard environment to reduce the operating costs?
Am I misusing something in the flexible environment?
According to the Google App Engine documentation: "Instances within the standard environment have access to a daily limit of resource usage that is provided at no charge, defined by a set of quotas... Instances within the flexible environment are charged the cost of the underlying Google Compute Engine virtual machines."
According to this article: "Currently, the Flexible Environment needs at least one instance running to serve traffic and there is no free tier."
This means that at any one time, you have at least one instance running, if you're using a Flexible VM. That should explain the billing.
Please note that by default App Engine launches two g1-small instances. Depending on your application's needs, this may be overkill. You should configure the compute resource settings in your app.yaml with appropriate sizes for RAM, disk and CPU so as to save costs. You may also want to set min_num_instances to 1 in your service's scaling settings, as in the sketch below.
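A minimal app.yaml sketch of those settings (the values are illustrative, not a sizing recommendation):
runtime: nodejs
env: flex
# Keep a single instance at idle instead of the default two
automatic_scaling:
  min_num_instances: 1
  max_num_instances: 2
# Request modest resources so the underlying Compute Engine VM is cheaper per hour
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10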
I had the same problem. You can try to use Google's pricing calculator to figure out which configuration you need and how to minimize the cost of your application.
According to the calculator, the minimal cost for a flexible environment app is a little less than $40 per month; there is nothing to be done about it right now.
I eventually moved to Heroku because of that.

In the context of Azure Websites, is a 2 "small" standard instance setup better than 1 "medium" server instance setup? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I have read a lot about the importance of having at least 2 website instances in Azure, one reason being that MS will only honour its SLA if there are at least two, since one server can be patched while the other remains available.
However, we currently have strict budgets, and we have 1 medium server with the larger RAM. I have always believed that a bigger server with more RAM is always better. Also, 2 cores on the same machine may be quicker as well.
We have noticed the odd recycle, but it is too early to say whether this is due to MS patching.
Assume my application is an MVC3/EF5/SQL Azure app with 10 concurrent users, and processing is straightforward, i.e. simple DB queries, etc.
In the context of Windows Azure, assuming a budget limit, would 1 medium (2 x 1.6 GHz cores and 3.5 GB RAM) server be better than 2 small (1 x 1.6 GHz core and 1.75 GB RAM) web server instances?
Thanks.
EDIT 1
I noticed this question has attracted 2 votes for being opinion-based. The question is designed to attract reports of real experience in this area, which of course informs opinion. This is hugely valuable for my work, and for others too.
EDIT 2
Interesting about the SLA. I was concerned that when MS does an update, one instance would disappear while this occurred. So what would happen in this case? Does Azure just spin up another instance? Also, what happens when one instance is working on a slower process, for example waiting for something like a DB transaction? With 2 instances the LB would redirect to instance 2. Logically this sounds superior. It will still work with session vars, as MS has implemented "sticky sessions".
I am intrigued that you recommend going with a "small" instance. 1.75 GB RAM and 1 core at 1.6 GHz seem so tiny for a server. I need to do some memory monitoring here. Out of interest, how many times would the main application DLLs be loaded into RAM; is it just once, regardless of the number of users? It may be a basic question, but I just wanted to check. It makes you think, when one's laptop has 16 GB and 8 cores (i7). However, I guess a laptop runs a lot of different bloated processes, rather than the fewer, smaller processes on a server.
Unless your app is particularly memory-hungry, I would go for a single small instance and configure autoscale to start more servers as needed. Then just keep an eye on the stats. You can have a look at how much memory you are currently using; if it's less than what you get with a small instance, you don't get any benefit from the extra RAM.
The SLA for Websites does not require two instances; that rule applies only to Cloud Services.
I have found that you can do a surprisingly large amount of work on single, small instances; I have several systems in that kind of setup which only use a few percent of capacity, even at hundreds of requests per minute. With 10 users you are unlikely to even have IIS use more than one thread, unless you have some very slow responses (I'm assuming you are not using async), so the second core will be idle.
For another example, look at Troy Hunt's detailed blog about haveibeenpwned.com, which runs on small instances.

Azure web site questions

I currently have a web application deployed to "Web Sites". It is configured in standard mode and performs really well from what I have seen so far.
I have a few questions:
1) My instance size is currently small; however, I can scale out to 10 instances. Does this also mean that if I change my instance size to medium or large, I can still have 10 instances?
2) What is the maximum number of instances I can have for an Azure web site?
3) Is there an SLA for a single Azure instance?
4) Is it possible to change the instance size programmatically, or is it better to just change the instance count?
1) Yes
2) 10 for standard.
3) Yes, for Websites Basic and Standard, MS guarantees 99.9% monthly availability.
4) It depends on a lot of factors. The real question is "Is it better for your app to scale up or scale out?"
Yes, the default limit is 10 instances regardless of the size.
The default limit is 10 instances, but you can contact Azure Support to have the limit increased. Default and "real" limits for Azure services are documented here.
According to the Websites pricing page Free and Shared sites have no SLA and Basic and Standard sites have 99.9% uptime SLA. Having a single instance means that during the 0.1% outage time (43.8 minutes per month) your site will be down. If you have multiple instances then most likely at least one will be up at any given time.
Typically instance auto-scaling is used to handle variation in demand while instance size would be used for application performance. If you only get 100 requests per day but each request is slow because it's maxing out CPU then adding more instances won't help you. Likewise if you're getting millions of requests that are being processed quickly but the volume is maxing out your resources then adding more instances is probably the better solution.
