I am new to Azure and was trying to understand if I could use Azure Redis in my application.
Assuming the application runs at a decent scale (I don't have exact numbers yet), my main question is this: the Azure pricing page says the Premium tier supports up to 40k client connections. Is this connection count per node of the cluster, or for the cluster as a whole?
It is per node. By default you get no cluster; to have more nodes, you enable clustering when you are creating your Premium Azure Redis cache instance. Please take a look here for a detailed breakdown per shard (per node) in a cluster in the Premium tier.
If you are expecting load but don't have it right now, I would recommend starting with the Standard tier and upgrading to Premium when the need arises. But remember: you cannot scale back to Standard afterwards, and you get clustering only if you create the Azure Redis cache resource as Premium and enable clustering while creating it. You get HA built in with both the Standard and Premium tiers, but Redis Enterprise features are only available with the Premium tier.
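Since the 40k limit is per shard, total client-connection capacity grows linearly with the shard count you pick when enabling clustering. A quick sketch of that arithmetic (the 40,000 figure is the documented Premium per-shard limit; the helper name is mine):

```python
# Rough capacity math for a clustered Premium Azure Redis cache.
# The Premium tier's client connection limit is 40,000 per shard.
PER_SHARD_LIMIT = 40_000

def total_connection_capacity(shard_count: int, per_shard: int = PER_SHARD_LIMIT) -> int:
    """Total client connections a clustered cache can accept,
    assuming clients are spread evenly across shards."""
    if shard_count < 1:
        raise ValueError("a cache has at least one shard")
    return shard_count * per_shard

# A 4-shard Premium cluster can hold roughly 160,000 client connections.
print(total_connection_capacity(4))
```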
I am using an Azure SignalR Service instance. The SignalR service currently supports only 1000 concurrent connections per unit per service instance. If the number of concurrent SignalR connections exceeds 1000, the service units have to be increased manually, and reduced manually as the users decrease.
Looking for a suitable solution to auto-scale (scale up and scale down) the SignalR service instances based on the demand.
If any idea, please share. Thanks.
Azure SignalR service doesn't support any auto-scaling capabilities out of the box.
If you want to automatically increase or decrease the number of units based on the current number of concurrent connections, you will have to implement your own solution. You may for example try to do this using a Logic App as suggested here.
Otherwise, the common approach is to increase the number of units manually using the portal, the REST API, or the Azure CLI.
They solved the disconnection issue when scaling, according to https://github.com/Azure/azure-signalr/issues/1096#issuecomment-878387639
As for the auto-scaling feature, they are working on it; in the meantime, here are two ways of doing so:
Using Powershell function https://gist.github.com/mattbrailsford/84d23e03cd18c7b657e1ce755a36483d
Using Logic App https://staffordwilliams.com/blog/2019/07/13/auto-scaling-signalr-service-with-logic-apps/
Azure SignalR Service supports autoscale as of 2022 if you select the Premium pricing tier.
Go to Scale up on the SignalR Service and select the Premium pricing tier.
Go to Scale out and create a custom autoscale.
The example says you can scale up if the metric "Connection Quota Utilization" is over 70% (about 700 out of your 1000 connections for your first unit). You can also scale down with a similar rule; the example says to scale down when the connection quota utilization is under 20%.
20% from the example seems a bit restrictive, but I guess it's there to avoid unneeded scaling. Client connections are closed and must reconnect when scaling down, so doing so very frequently is probably a bad idea.
https://learn.microsoft.com/en-us/azure/azure-signalr/signalr-howto-scale-autoscale
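The 70%/20% rules above amount to a simple decision function. Here is a minimal sketch of that logic (the thresholds and one-unit step come from the example; the function name and structure are mine, and in practice the service evaluates the "Connection Quota Utilization" metric for you):

```python
def autoscale_decision(units: int, connections: int,
                       per_unit: int = 1000,
                       scale_out_pct: float = 70.0,
                       scale_in_pct: float = 20.0) -> int:
    """Return the new unit count given current concurrent connections.

    Mirrors the example's rules: add a unit above 70% quota
    utilization, drop one below 20%, otherwise stay put."""
    utilization = 100.0 * connections / (units * per_unit)
    if utilization > scale_out_pct:
        return units + 1
    if utilization < scale_in_pct and units > 1:
        return units - 1
    return units

print(autoscale_decision(1, 750))   # 75% utilization -> scale out to 2
print(autoscale_decision(2, 300))   # 15% utilization -> scale in to 1
print(autoscale_decision(1, 500))   # 50% utilization -> stay at 1
```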
The objective is to create a highly available redis cluster using kubernetes for a nodeJS client. I have already created the architecture as below:
Created a Kubernetes cluster with one master (Kmaster) and 3 worker nodes.
Then created StatefulSets and persistent volumes (6, one for each pod).
Then created Redis pods, 2 on each node (3 masters, and 3 replicas of the respective masters).
I need to understand the role of Redis Sentinel here: how does it manage monitoring, scaling, and HA for the Redis pods across the nodes? I understand Sentinel should be on each node and doing its job, but what is the right architecture here?
P.S. I have created a local setup for now, but ultimately this goes on Azure, so any suggestions with respect to Azure are also welcome.
Thanks!
From an Azure perspective, you have two options. Even if you are set on option two but are looking for the Sentinel architecture piece, there are business continuity and high availability options in both IaaS (Linux VM scale sets) and PaaS services that go beyond the Sentinel component.
Azure Cache for Redis (PaaS) where you choose & deploy your desired service tier (Premium Tier required for HA) and connect your client applications. Please see: Azure Cache for Redis FAQ and Caching Best Practice.
The second option is to deploy the solution (as you have detailed) as an IaaS solution built from Azure VMs. There are a number of Redis Linux VM images to choose from in the Azure Marketplace, or you can create a Linux VM OS image from your on-premises solution and migrate it to Azure. The Sentinel component is enabled on each server (master, slave A, slave B, ...). There are networking and other considerations too. For building a system from scratch, please see: How to Setup Redis Replication (with Cluster-Mode Disabled) in CentOS 8 – Part 1 and How to Setup Redis For High Availability with Sentinel in CentOS 8 – Part 2
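For the Sentinel piece of the IaaS approach (which, as the linked guides note, applies to cluster-mode-disabled replication), each server runs a Sentinel process with a configuration along these lines. The master name, IP address, and timeout values here are placeholders:

```conf
# sentinel.conf — one copy per node; Sentinel listens on 26379 by default.
port 26379

# Monitor the master "mymaster" at 10.0.0.4:6379; a quorum of 2 Sentinels
# must agree the master is down before a failover starts.
sentinel monitor mymaster 10.0.0.4 6379 2

# Consider the master down after 5 seconds without a valid reply.
sentinel down-after-milliseconds mymaster 5000

# Abort a failover attempt that takes longer than 60 seconds.
sentinel failover-timeout mymaster 60000

# Resynchronize replicas with the new master one at a time.
sentinel parallel-syncs mymaster 1
```

With three Sentinels (one per node) and a quorum of 2, the cluster survives the loss of any single node, and the surviving Sentinels elect a replica to promote.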
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
We have several websites hosted on a dedicated server with the following configuration:
Windows OS based dedicated server
MS SQL Server 2012 for the database
IIS 7.5+
and other software for managing websites, such as Plesk
We have developed websites in ASP.NET Web Forms with Framework 4.0 or 4.5 (about 10 websites)
We also have a few ASP.NET MVC based websites with Framework 4.5 (about 5 websites)
We use the InProc default session state for sessions
Besides this we have other software installed, for security etc.
We also have a mobile application API running on the same server, and the same application is used to send push notifications to iOS & Android devices
Now I have a few questions regarding migration to the Azure environment.
First, my big concern is session state. I want to migrate to Azure without making any changes to code except changes to web.config; is this possible?
Second, we use MS SQL Server 2012 as the database, and on Azure we would use Azure SQL Database, which I believe is not the same as MS SQL Server. Can I use Azure SQL Database, or should I stick to MS SQL Server, since I have applications running on it and migrating to the Azure database may create problems?
Third, let us say I choose Web + Mobile --> App Service Standard package (which comes with up to 10 instances): what are these instances, and will an individual session always connect to the same instance?
Fourth: I have about 20 databases; one of them is about 6GB and the other databases are about 200MB-700MB. Which service should I use for the databases in case I use Azure SQL Database?
Single Database or Elastic?
Can I create multiple databases under elastic mode?
Let us say I choose "100 eDTUs: 10 GB included storage per pool, 200 DBs per pool, $0.20/hour": will I have a total of 10GB of space for all databases in the elastic pool, what is "per pool", and how many pools will I get in this option?
Or is MS SQL Server on a Virtual Machine the better option, as I can run SQL Server based sessions?
Fifth: Disk space. Let us say I choose App Service "S2: 2 Core(s), 3.5 GB RAM, 50 GB Storage, $0.200": does the 50GB of disk space include the OS, or is it space allocated to the files we upload?
Sixth: Some of our applications are used to send push notifications to iOS & Android devices. I am not sure if they will work in the Azure environment, as they need certain ports to be open and also some sort of certificate to be installed on the server.
I have asked too many questions, as I didn't get clarity from MS chat; they just passed links, which can be confusing at times. I hope I get clarity here.
Q: First, my big concern is session state. I want to migrate to Azure without making any changes to code except changes to web.config; is this possible?
If one of your concerns is code refactoring, then the model you should choose is Infrastructure-as-a-Service. In this model, there is no need to change code, because the infrastructure on Azure can be similar to on-premises: you provision virtual machines to run Windows Server, SQL Server, and IIS. The software versions are all your choice with no limitation, as long as each version is still supported in the Microsoft product lifecycle when you procure new licenses.
If you'd like to modernize your web application, Azure App Service can be a good destination. Azure App Service can run code compiled against the .NET 4.0 framework. InProc session state is not guaranteed in Azure App Service, so you need to look into an alternative if you use Azure App Service, for example Azure Cache for Redis.
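If web.config-only changes are the goal, the ASP.NET Redis session state provider (the Microsoft.Web.RedisSessionStateProvider NuGet package) can usually replace InProc without code changes. A sketch of the relevant section — the host name and access key are placeholders for your Azure Cache for Redis instance:

```xml
<system.web>
  <sessionState mode="Custom" customProvider="MySessionStateStore">
    <providers>
      <!-- Requires the Microsoft.Web.RedisSessionStateProvider NuGet package -->
      <add name="MySessionStateStore"
           type="Microsoft.Web.Redis.RedisSessionStateProvider"
           host="your-cache.redis.cache.windows.net"
           port="6380"
           accessKey="your-access-key"
           ssl="true" />
    </providers>
  </sessionState>
</system.web>
```

Session objects must be serializable for any out-of-process provider, so InProc code that stores non-serializable objects in session would still need attention.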
Q: Second, we use MS SQL Server 2012 as the database, and on Azure we would use Azure SQL Database, which I believe is not the same as MS SQL Server. Can I use Azure SQL Database, or should I stick to MS SQL Server, since I have applications running on it and migrating to the Azure database may create problems?
Without an impact analysis and knowing how complex your data model is, it's hard to say whether Azure SQL Database is compatible with your database. Fortunately, Microsoft provides a tool named Data Migration Assistant (DMA) which helps you perform a database compatibility analysis for Azure SQL Database. This link gives you more details on DMA: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-cloud-migrate. Moving from SQL Server to Azure SQL Database gains you benefits in high availability, disaster recovery, and scalability, and the administration effort for server management and OS patching is significantly reduced. With SQL Server in an Azure VM, the migration cost is lower, as you only need to lift and shift (provision the VM, perform database detach/attach or other backup/restore methods).
Q: Third, let us say I choose Web + Mobile --> App Service Standard package (which comes with up to 10 instances): what are these instances, and will an individual session always connect to the same instance?
No, sessions are not guaranteed to be maintained. When you choose Azure App Service, your web application runs on virtualized servers running Windows Server and IIS. The term "instance" means a server instance. Azure App Service handles scaling by allocating compute resources across multiple instances to make sure your application does not crash from inadequate memory and resources. The default when you first provision your web app is 1, but the number of instances is configurable.
Q: Fourth: I have about 20 databases; one of them is about 6GB and the other databases are about 200MB-700MB. Which service should I use for the databases in case I use Azure SQL Database?
Single Database or Elastic?
Can I create multiple databases under elastic mode?
Let us say I choose "100 eDTUs: 10 GB included storage per pool, 200 DBs per pool, $0.20/hour": will I have a total of 10GB of space for all databases in the elastic pool, what is "per pool", and how many pools will I get in this option?
Or is MS SQL Server on a Virtual Machine the better option, as I can run SQL Server based sessions?
Choosing Single Database or Elastic depends on the performance and peak load of your databases. A single database is used for an independent database, where you can specify the DTUs (Database Transaction Units) for predictable performance. An elastic pool is best for managing a set of databases together, and is the choice for unpredictable performance and usage.
In your case, I'd recommend an elastic pool. An elastic pool allows you to set eDTUs for the whole pool no matter how many DTUs a specific database in the pool needs. The pool monitors and performs in-depth performance analysis to give you an insight into, and an overall picture of, each database's performance.
When it comes to pools, you don't allocate storage to each database individually, and you don't have to worry about the number of databases you can store in a pool. Say you have 20 databases in total; you need only one pool.
The eDTUs you need can be calculated via this website: http://dtucalculator.azurewebsites.net/. Just run one of the scripts given on the website on your SQL Server (where your on-premises databases are running) to capture performance metrics, then upload the resulting file to the website. It will give you a number; for example, the result might say that your 20 databases need 100 eDTUs in total. Then you just create an elastic pool and set 100 eDTUs for the pool. Storage also matters: with one database of about 6 GB and 19 databases of 200-700 MB each, you need roughly 6 GB + 19 × 0.7 GB ≈ 19 GB at most. If you use an Elastic Pool Basic, you are only given 10 GB per pool, which is not enough, so you would need Elastic Pool Standard at 100 eDTUs, which allows up to 750 GB. Note that you could instead choose a Basic plan of 1,200 eDTUs to reach a 156 GB maximum, but this is never recommended, because storage space is much cheaper than eDTUs.
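The storage estimate above can be written down as a quick check. This sketch uses the database sizes from the question and the per-pool storage figures quoted in this answer (historical numbers; current limits differ):

```python
# Estimate total storage for the 20 databases from the question:
# one ~6 GB database plus 19 databases of at most ~0.7 GB each.
db_sizes_gb = [6.0] + [0.7] * 19
total_gb = sum(db_sizes_gb)

# Per-pool included storage quoted in this answer (historical figures).
pool_limits_gb = {"Basic (100 eDTU)": 10, "Standard (100 eDTU)": 750}

for tier, limit in pool_limits_gb.items():
    fits = "fits" if total_gb <= limit else "does NOT fit"
    print(f"{tier}: {total_gb:.1f} GB {fits} in {limit} GB")
```

The worst-case total (~19 GB) exceeds the Basic pool's 10 GB but sits comfortably inside the Standard pool's limit, which is why Standard is the recommendation here.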
In a nutshell, with the rough info above, I'd recommend the Standard plan of an elastic pool with 100 eDTUs. You can increase the number of eDTUs if it does not satisfy the performance of all 20 databases; no database downtime is needed when adjusting the eDTU number.
That said, creating only one pool is not always the right call; it depends on your database workloads. For example, of the 20 databases, 5 might carry heavy workloads for an ERP or business-critical system while the rest are just normal databases. In that case, you'd need two elastic pools: one pool with a high eDTU setting and another pool with a low one.
Q: Fifth: Disk space. Let us say I choose App Service "S2: 2 Core(s), 3.5 GB RAM, 50 GB Storage, $0.200": does the 50GB of disk space include the OS, or is it space allocated to the files we upload?
With Azure App Service, the OS is not counted. The 50 GB of storage space is given directly to your application (to store images, compiled DLLs, videos, libraries, and so on).
Q: Sixth: Some of our applications are used to send push notifications to iOS & Android devices. I am not sure if they will work in the Azure environment, as they need certain ports to be open and also some sort of certificate to be installed on the server.
Azure Notification Hubs can help you achieve push notifications. Azure Notification Hubs allows you to use the certificate of each platform (e.g. iOS) to manage devices. This is a sample reference if you are familiar with iOS: https://learn.microsoft.com/en-us/azure/notification-hubs/notification-hubs-ios-apple-push-notification-apns-get-started. Azure Notification Hubs also supports token-based authentication for APNS if you need it.
For each case, please give more details (e.g. your mobile scenario) and specific questions if possible, so that I and others here can elaborate more.
We are working on an application that processes excel files and spits off output. Availability is not a big requirement.
Can we turn the VM sets off during night and turn them on again in the morning? Will this kind of setup work with service fabric? If so, is there a way to schedule it?
Thank you all for replying. I got a chance to talk to a Microsoft Azure rep and have documented the conversation here for the community's sake.
Response for initial question
A Service Fabric cluster must maintain a minimum number of Primary node types in order for the system services to maintain a quorum and ensure health of the cluster. You can see more about the reliability level and instance count at https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-cluster-capacity/. As such, stopping all of the VMs will cause the Service Fabric cluster to go into quorum loss. Frequently it is possible to bring the nodes back up and Service Fabric will automatically recover from this quorum loss, however this is not guaranteed and the cluster may never be able to recover.
However, if you do not need to save state in your cluster then it may be easier to just delete and recreate the entire cluster (the entire Azure resource group) every day. Creating a new cluster from scratch by deploying a new resource group generally takes less than a half hour, and this can be automated by using Powershell to deploy an ARM template. https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-via-arm/ shows how to setup the ARM template and deploy using Powershell. You can additionally use a fixed domain name or static IP address so that clients don’t have to be reconfigured to connect to the cluster. If you have need to maintain other resources such as the storage account then you could also configure the ARM template to only delete the VM Scale Set and the SF Cluster resource while keeping the network, load balancer, storage accounts, etc.
Q)Is there a better way to stop/start the VMs rather than directly from the scale set?
If you want to stop the VMs in order to save cost, then starting/stopping the VMs directly from the scale set is the only option.
Q) Can we do a primary set with cheapest VMs we can find and add a secondary set with powerful VMs that we can turn on and off?
Yes, it is definitely possible to create two node types – a Primary that is small/cheap, and a ‘Worker’ that is a larger size – and set placement constraints on your application to only deploy to those larger size VMs. However, if your Service Fabric service is storing state then you will still run into a similar problem that once you lose quorum (below 3 replicas/nodes) of your worker VM then there is no guarantee that your SF service itself will come back with all of the state maintained. In this case your cluster itself would still be fine since the Primary nodes are running, but your service’s state may be in an unknown replication state.
I think you have a few options:
Instead of storing state within Service Fabric’s reliable collections, instead store your state externally into something like Azure Storage or SQL Azure. You can optionally use something like Redis cache or Service Fabric’s reliable collections in order to maintain a faster read-cache, just make sure all writes are persisted to an external store. This way you can freely delete and recreate your cluster at any time you want.
Use the Service Fabric backup/restore in order to maintain your state, and delete the entire resource group or cluster overnight and then recreate it and restore state in the morning. The backup/restore duration will depend entirely on how much data you are storing and where you export the backup.
Utilize something such as Azure Batch. Service Fabric is not really designed to be a temporary high capacity compute platform that can be started and stopped regularly, so if this is your goal you may want to look at an HPC platform such as Azure Batch which offers native capabilities to quickly burst up compute capacity.
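The first option above — a fast read cache in front of a durable external store, with every write persisted externally — is the classic write-through pattern. A minimal sketch, using in-memory dicts as stand-ins for Redis (or a reliable collection) and Azure Storage/SQL:

```python
class WriteThroughCache:
    """Write-through cache: reads prefer the cache, but every write
    goes to the durable store first, so the cache can be lost
    (e.g. the cluster is deleted overnight) without losing state."""

    def __init__(self, durable_store: dict):
        self.store = durable_store   # stand-in for Azure Storage / SQL
        self.cache = {}              # stand-in for Redis / reliable collection

    def put(self, key, value):
        self.store[key] = value      # persist first
        self.cache[key] = value      # then update the read cache

    def get(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.store[key]      # cache miss: fall back to the store
        self.cache[key] = value      # repopulate the cache
        return value

# The durable store survives a cache wipe (cluster recreation).
durable = {}
cache = WriteThroughCache(durable)
cache.put("job:42", "processed")
cache.cache.clear()                  # simulate deleting the cluster
print(cache.get("job:42"))           # state recovered from the store
```

Because the store is the source of truth, recreating the Service Fabric cluster each morning costs you nothing but a cold cache.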
No. You would have to delete the cluster, then recreate it and deploy the application in the morning.
Turning off the cluster is, as Todd said, not an option. However, you can scale down the number of VMs in the cluster.
During the day you would run the number of VMs required. At night you can scale down to the minimum of 5. Check this page on how to scale VM scale sets: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-scale-up-down/
For development purposes, you can create a Dev/Test Lab Service Fabric cluster which you can start and stop at will.
I have also been able to start and stop SF clusters on Azure by starting and stopping the VM scale sets associated with these clusters. But upon restart all your applications (and with them their state) are gone and must be redeployed.
I recently got a trial version of Windows Azure and wanted to know if there is any way I can deploy an application using Cassandra.
I can't speak specifically to Cassandra working or not in Azure, unfortunately. That's likely a question for that product's development team.
But the challenge you'll face with this, MySQL, or any other role-hosted database is persistence. Azure roles are in and of themselves not persistent, so whatever back-end store Cassandra is using would need to be placed onto something like an Azure Drive (which is persisted to Azure Blob Storage). However, this would limit the scalability of the solution.
Basically, you run Cassandra as a worker role in Azure. Then, you can mount an Azure drive when a worker starts up and unmount when it shuts down.
This provides some insight re: how to use Cassandra on Azure: http://things.smarx.com/#Run Cassandra
Some help w/ Azure drives: http://azurescope.cloudapp.net/CodeSamples/cs/792ce345-256b-4230-a62f-903f79c63a67/
This should not limit your scalability at all. Just spin up another Cassandra instance whenever processing throughput or contiguous storage become an issue.
You might want to check out AppHarbor. AppHarbor is a .NET PaaS built on top of Amazon. It gives users the portability and infrastructure of Amazon, and provides a number of the rich services that Azure offers, such as background tasks and load balancing, plus some that it doesn't, like third-party add-ons, dead-simple deployment, and more. They already have add-ons for CouchDB, MongoDB, and Redis; if Cassandra got high enough on the requested-features list, I'm sure they could set it up.