Upgrade Azure Event Hub

How do you upgrade an existing Azure Event Hub from the Standard tier to the Premium or Dedicated plan?
Does it require deleting the existing cluster and recreating it with the upgraded plan, or can we upgrade without deleting the cluster?
If upgrading the existing cluster is possible, will the data in the partitions be rearranged when the partition count increases from 32 to 100?
I can't see an option in the Azure portal to upgrade the event hub to a higher pricing tier.

Unfortunately, you cannot upgrade from Standard to Premium according to the docs:
We currently don't support migrating from standard namespaces to premium namespace.
Nor can you move to a dedicated cluster according to the docs:
We don't currently support an automated migration process for migrating your event hubs data from a standard or premium namespace to a dedicated one.
You will have to create a new event hub, reroute current message producers to this new instance, drain the current instance and then remove it.
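Since there is no in-place upgrade path, the cutover ends up being client-side configuration: once the new Premium or Dedicated namespace exists, point the producers at it and let the consumers drain the old one. A minimal sketch using the azure-eventhub Python package; the environment variable names are just placeholders for wherever you keep your configuration:

```python
# pip install azure-eventhub
import os
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder setting names; point them at the NEW premium/dedicated namespace
# once it exists, and keep consumers draining the old namespace until it is empty.
conn_str = os.environ["EVENTHUB_CONNECTION_STRING"]   # new namespace connection string
hub_name = os.environ["EVENTHUB_NAME"]                # event hub in the new namespace

producer = EventHubProducerClient.from_connection_string(conn_str, eventhub_name=hub_name)
with producer:
    batch = producer.create_batch()
    batch.add(EventData("hello from the rerouted producer"))
    producer.send_batch(batch)
```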
Regarding partition changes: this is only allowed for the Premium and Dedicated tiers, and existing data won't be redistributed over the old and new partitions. One reason, I suppose, is that the event hub cannot know whether a publisher specifically targeted a partition or not, so redistribution could mess things up. Also, all checkpoints would have to be updated somehow.


Azure IoT Hub Free tier deployment limitation

Situation:
I hit a throttling limitation of the free tier IoT Hub where automatic deployments are limited to 10. This limitation is described here.
Failed deployments manifest themselves both when an Azure DevOps CI pipeline is used and in the Azure portal.
Problem(s):
The error is pretty clear, but what I'm struggling to understand is whether the limit of 10 gets reset at some point, or whether that's it: no more automatic deployments for this Edge device, ever? The documentation seems to suggest that paid IoT Hubs have a 100-deployment limit; what do developers do after that?
I have created a deployment manifest and attempted to deploy it manually through VS Code. This worked, but within a very short period of time (~5 min) the Hub reverts to the previous configuration. Is that because of this limit, or is it a property of the Hub to revert to the previous configuration if newly deployed modules fail for one reason or another?
You should not create a deployment for every device. Rather, target deployments to a set of devices: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-deploy-at-scale?view=iotedge-2020-11
To modify a single device, follow the steps outlined here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-deploy-modules-portal?view=iotedge-2020-11
This allows you to change every one of your max 1 million devices per IoT Hub individually. But is this really needed?
Best practice: Use layered deployments (https://learn.microsoft.com/en-us/azure/iot-edge/module-deployment-monitoring?view=iotedge-2020-11) and change settings via Module Twin, if needed.
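To make the "target a set of devices" approach concrete: at-scale and layered deployments select devices by a target condition on their device twin tags, so bringing a device under a deployment is mostly a matter of tagging it. A rough sketch with the azure-iot-hub Python package; the device id, tag name, and connection-string variable are assumptions for illustration:

```python
# pip install azure-iot-hub
import os
from azure.iot.hub import IoTHubRegistryManager

# Service (iothubowner-style) connection string, assumed to come from the environment.
registry = IoTHubRegistryManager.from_connection_string(os.environ["IOTHUB_CONNECTION_STRING"])

device_id = "my-edge-device"          # hypothetical device id
twin = registry.get_twin(device_id)

# Add a tag; a deployment whose target condition is "tags.environment='dev'"
# will then apply to this device automatically.
twin.tags = twin.tags or {}
twin.tags["environment"] = "dev"
registry.update_twin(device_id, twin, twin.etag)
```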

Info about the active client connections in an Azure Redis Cluster

I am new to Azure and was trying to understand whether I could use Azure Redis in my application.
Assuming the application runs at a decent scale (I currently don't have exact numbers), my main question is this: the Azure pricing tiers say Premium supports up to 40k client connections. Is this connection count per node of the cluster, or for the cluster as a whole?
It is per node. By default you get no cluster; to have more nodes, you enable clustering when you create your Premium Azure Redis cache instance. Please take a look here for a detailed breakdown per shard/node in a cluster in the Premium tier. If you are expecting load but don't have it yet, I would recommend starting with the Standard tier and upgrading to Premium when the need arises, but remember that you cannot scale back to Standard afterwards, and you only get clustering if you create the Azure Redis cache resource as Premium and enable clustering while creating it. You get built-in HA with both the Standard and Premium tiers, but Redis Enterprise features are only available with the Premium tier.
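If you want to check how close a node is to that per-node limit, the count is reported by the server itself. A quick sketch with the redis-py client against an Azure Cache for Redis instance; the host name and access key are placeholders:

```python
# pip install redis
import redis

# Azure Cache for Redis accepts TLS connections on port 6380.
r = redis.Redis(
    host="mycache.redis.cache.windows.net",  # placeholder host name
    port=6380,
    ssl=True,
    password="<access-key>",                 # placeholder access key
)

# INFO clients reports the connection count on the node you are connected to;
# on a clustered cache each shard/node has its own count.
clients = r.info("clients")
print("connected_clients on this node:", clients["connected_clients"])
```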

Turning off ServiceFabric clusters overnight

We are working on an application that processes Excel files and spits out output. Availability is not a big requirement.
Can we turn the VM sets off during night and turn them on again in the morning? Will this kind of setup work with service fabric? If so, is there a way to schedule it?
Thank you all for replying. I got a chance to talk to a Microsoft Azure rep and have documented the conversation here for the community's sake.
Response to the initial question
A Service Fabric cluster must maintain a minimum number of Primary node type instances in order for the system services to maintain a quorum and ensure the health of the cluster. You can see more about the reliability level and instance count at https://azure.microsoft.com/en-gb/documentation/articles/service-fabric-cluster-capacity/. As such, stopping all of the VMs will cause the Service Fabric cluster to go into quorum loss. Frequently it is possible to bring the nodes back up and Service Fabric will automatically recover from this quorum loss; however, this is not guaranteed and the cluster may never be able to recover.
However, if you do not need to save state in your cluster, then it may be easier to just delete and recreate the entire cluster (the entire Azure resource group) every day. Creating a new cluster from scratch by deploying a new resource group generally takes less than half an hour, and this can be automated by using PowerShell to deploy an ARM template. https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-creation-via-arm/ shows how to set up the ARM template and deploy it using PowerShell. You can additionally use a fixed domain name or static IP address so that clients don't have to be reconfigured to connect to the cluster. If you need to maintain other resources, such as the storage account, then you could also configure the ARM template to only delete the VM Scale Set and the SF Cluster resource while keeping the network, load balancer, storage accounts, etc.
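The same delete-and-redeploy routine the rep describes with PowerShell can also be scripted with the Azure management SDK if that fits your tooling better. A rough sketch, assuming placeholder subscription id, resource group name, and ARM template file:

```python
# pip install azure-identity azure-mgmt-resource
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

sub_id = "<subscription-id>"          # placeholder
group = "my-sf-cluster-rg"            # placeholder resource group

client = ResourceManagementClient(DefaultAzureCredential(), sub_id)

# Recreate the resource group and deploy the same ARM template every morning.
client.resource_groups.create_or_update(group, {"location": "westeurope"})

with open("servicefabric-cluster.json") as f:   # placeholder template file
    template = json.load(f)

client.deployments.begin_create_or_update(
    group,
    "morning-deployment",
    {
        "properties": {
            "template": template,
            "parameters": {},          # fill in cluster name, VM size, etc.
            "mode": "Incremental",
        }
    },
).result()
```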
Q) Is there a better way to stop/start the VMs than directly from the scale set?
If you want to stop the VMs in order to save cost, then starting/stopping the VMs directly from the scale set is the only option.
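For reference, stopping and starting the scale set can itself be automated on a schedule. A sketch with the azure-mgmt-compute Python package; the subscription id, resource group, and scale set names are placeholders, and deallocating the primary node type still carries the quorum-loss risk described above:

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

group, vmss = "my-sf-cluster-rg", "workerNodeType"   # placeholder names

# Evening: deallocate every VM in the scale set so compute is no longer billed.
compute.virtual_machine_scale_sets.begin_deallocate(group, vmss).result()

# Morning: start them again and let Service Fabric attempt to recover.
compute.virtual_machine_scale_sets.begin_start(group, vmss).result()
```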
Q) Can we create a primary node type with the cheapest VMs we can find and add a secondary set with powerful VMs that we can turn on and off?
Yes, it is definitely possible to create two node types: a Primary that is small/cheap, and a 'Worker' that is a larger size, with placement constraints on your application so it only deploys to those larger VMs. However, if your Service Fabric service is storing state, then you will still run into a similar problem: once your worker VMs lose quorum (fewer than 3 replicas/nodes), there is no guarantee that your SF service itself will come back with all of its state maintained. In this case your cluster itself would still be fine, since the Primary nodes are running, but your service's state may be in an unknown replication state.
I think you have a few options:
Instead of storing state within Service Fabric's reliable collections, store your state externally in something like Azure Storage or SQL Azure (see the sketch after this list). You can optionally use something like Redis cache or Service Fabric's reliable collections to maintain a faster read cache; just make sure all writes are persisted to an external store. This way you can freely delete and recreate your cluster at any time you want.
Use the Service Fabric backup/restore in order to maintain your state, and delete the entire resource group or cluster overnight and then recreate it and restore state in the morning. The backup/restore duration will depend entirely on how much data you are storing and where you export the backup.
Utilize something such as Azure Batch. Service Fabric is not really designed to be a temporary high capacity compute platform that can be started and stopped regularly, so if this is your goal you may want to look at an HPC platform such as Azure Batch which offers native capabilities to quickly burst up compute capacity.
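Relating to the first option above (externalizing state): the persistence layer can be as simple as a blob per unit of work. A minimal sketch with azure-storage-blob; the connection string, container, and blob names are placeholders:

```python
# pip install azure-storage-blob
import json
from azure.storage.blob import BlobServiceClient

# Placeholder connection string and container; any durable external store works.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
blob = service.get_blob_client(container="app-state", blob="state.json")

# Persist every write externally so the cluster itself holds no critical state...
blob.upload_blob(json.dumps({"last_processed_file": "report-42.xlsx"}), overwrite=True)

# ...and read it back after the cluster has been recreated in the morning.
state = json.loads(blob.download_blob().readall())
print(state["last_processed_file"])
```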
No. You would have to delete the cluster, then recreate it and deploy the application in the morning.
Turning off the cluster is, as Todd said, not an option. However, you can scale down the number of VMs in the cluster.
During the day you would run the number of VMs required. At night you can scale down to the minimum of 5. Check this page on how to scale VM scale sets: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-scale-up-down/
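Scheduled scale-down/scale-up amounts to changing the scale set's sku.capacity. A minimal sketch with the azure-mgmt-compute package; the names and counts are placeholders, and keep at least 5 nodes on the primary node type as noted above:

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")
group, vmss_name = "my-sf-cluster-rg", "workerNodeType"   # placeholder names

def set_capacity(count: int) -> None:
    """Set the number of VM instances in the scale set."""
    vmss = compute.virtual_machine_scale_sets.get(group, vmss_name)
    vmss.sku.capacity = count
    compute.virtual_machine_scale_sets.begin_create_or_update(group, vmss_name, vmss).result()

set_capacity(5)    # evening: shrink to the minimum supported node count
set_capacity(10)   # morning: grow back to daytime capacity
```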
For development purposes, you can create a Dev/Test Lab Service Fabric cluster which you can start and stop at will.
I have also been able to start and stop SF clusters on Azure by starting and stopping the VM scale sets associated with these clusters. But upon restart all your applications (and with them their state) are gone and must be redeployed.

Azure VM downgrade from A8

Is it possible to downgrade an Azure VM A8 (high compute) to a lower size like an A3? I keep getting the following error message when I try. I don't have an availability set set up. Thanks!
"Unable to upgrade the deployment. The requested VM size 'Large' may not be available in the resources supporting the existing deployment. Please try again later, try with a different VM size or smaller number of role instances, or create a deployment under an empty hosted service with a new affinity group or no affinity group binding. The long running operation tracking ID was: b2024fe9e93f6764bec3aa008756f0b7."
I recently discovered (via MS support tickets) that there are different "clusters" within Azure data centers, with different VM size compatibilities. In my case I had some cloud services in older clusters which didn't allow the newer "D-Series" VM sizes I wanted. The only solution was to create brand new cloud service instances from scratch and use Azure traffic manager to achieve a transition from the old servers to new ones.

Windows Azure and dynamic elasticity

Is there a way to do dynamic elasticity in Windows Azure? If my workers begin to get overloaded, or queues start to get too full, or too many workers have no work to do, is there a way to dynamically add or remove workers through code, or is that just done manually (requiring human intervention) right now? Does anyone know of any plans to add it if it's not currently available?
Microsoft shipped the Autoscaling Application Block (Wasabi) to provide dynamic scaling. Some of the supported scenarios:
Autoscaling both web and worker roles in Windows Azure by dynamically changing instance counts or performing application throttling.
Autoscaling Windows Azure roles based on timetables.
Autoscaling Windows Azure roles based on metrics collected from the application and/or Windows Azure but constrained by upper and lower bounds on the instance count per role.
Preventing fast oscillations in the number of role instances with the stabilizer. The stabilizer can also help to optimize costs by limiting scaling up operations to the beginning of the hour and scaling down operations to the end of the hour.
Monitoring and logging autoscaling activity.
Sending notifications to preview any scaling operations before they take place.
Encrypting the rules and other configuration in Windows Azure blob storage or in local file storage.
Managing the autoscaler configuration by using Windows PowerShell.
A comprehensive sample application (Tailspin Surveys) showcasing all these features is provided (installation instructions are available here). Also, check out the Developer's Guide and the Channel9 video walkthrough.
The block is available as a standalone download of binaries, source, or via NuGet.
Here are a couple of talks/demos showing Wasabi in action:
CloudCover Episode on autoscaling
p&p symposium talk "Windows Azure app scaling to need"
There's a Service Management API, and you can use that to scale your application (from code running in Windows Azure or from code running outside of Windows Azure).
http://msdn.microsoft.com/en-us/library/ee460799.aspx and http://code.msdn.microsoft.com/Release/ProjectReleases.aspx?ProjectName=windowsazuresamples&ReleaseId=3233.
Windows Azure has just added the autoscaling feature built into the platform. Now it's trivially easy to configure your autoscaling rules right in the management portal:
See the announcement and the demo. I've also written a post comparing Windows Azure Autoscale to Wasabi and outlining the path forward.
Create a queue named autoscale.[your_role_name].instance_count
In the Management Portal, set the autoscale to Queue.
Set the Target Count field to 1.
Now you can use standard enqueue and dequeue operations on that queue to control the number of worker role instances. You've got 7 days to process a message before it expires, so you might want to create a worker role that can ensure that the number of messages in the queue is tracking your target instance count.
If you're after dynamic elasticity, you've probably got a worker-role-based controller in mind already, so that's probably not a problem.
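For completeness, the enqueue/dequeue side of that trick looks roughly like this with the modern azure-storage-queue package (the original answer predates this SDK). The connection string is a placeholder, and the queue name is adapted from the convention above, since storage queue names only allow lowercase letters, numbers, and hyphens:

```python
# pip install azure-storage-queue
from azure.core.exceptions import ResourceExistsError
from azure.storage.queue import QueueClient

# Placeholder name adapted from the autoscale.[your_role_name].instance_count convention.
queue = QueueClient.from_connection_string(
    "<storage-connection-string>", "autoscale-workerrole-instance-count"
)
try:
    queue.create_queue()
except ResourceExistsError:
    pass  # queue already exists

def set_target_instances(target: int) -> None:
    """Keep the queue length equal to the desired worker role instance count."""
    current = queue.get_queue_properties().approximate_message_count or 0
    if target > current:
        for _ in range(target - current):
            queue.send_message("scale")                # grow: add messages
    elif target < current:
        for msg in queue.receive_messages(max_messages=current - target):
            queue.delete_message(msg)                  # shrink: remove messages

set_target_instances(4)
```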
The Lokad.Cloud open source project for Windows Azure contains a distributed executor framework. Among other things, it provides auto-scaling with a VM provisioning feature.
