How to migrate data between Apache Pulsar clusters?

How do folks migrate data between Pulsar environments, either for disaster recovery or blue-green deployment? Are they copying data to a new AWS region or Kubernetes namespace?

One of the easiest approaches is to rely on geo-replication to replicate the data across different clusters.
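For reference, a minimal sketch of enabling geo-replication on a namespace with pulsar-admin, assuming two clusters named cluster-a and cluster-b are already registered and can reach each other, and a tenant/namespace my-tenant/my-namespace (all names here are illustrative):

# Allow the tenant to use both clusters
bin/pulsar-admin tenants update my-tenant --allowed-clusters cluster-a,cluster-b
# Enable replication of the namespace across the two clusters
bin/pulsar-admin namespaces set-clusters my-tenant/my-namespace --clusters cluster-a,cluster-b

Once the namespace is replicated to both clusters, producers and consumers can be pointed at either cluster during the cutover.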

This PIP was recently created to address this exact scenario of blue-green deployments, and to handle cases where a cluster becomes non-functional or corrupted because an upgrade or update (perhaps an experimental setting change) goes wrong: https://github.com/pkumar-singh/pulsar/wiki/PIP-95:-Live-migration-of-producer-consumer-or-reader-from-one-Pulsar-cluster-to-another
Until then, aside from using geo-replication, another potential way around this is an active-active setup (with two live clusters) behind a DNS endpoint that producers and consumers use, which can be cut over from the old cluster to the new cluster just before taking the old cluster down. If the latency blip this approach would cause is not acceptable, you could replicate just the ingest topics from the old cluster to the new cluster, cut over your consumers, and then cut over your producers. Depending on how your flows are designed, you may need to replicate additional topics to make that happen. Keep in mind that on the cloud there can be different cost implications depending on how you do this. (Network traffic across regions is typically much more expensive than traffic within a single zone.)
If you want to keep the clusters completely separate (that is, avoid using Pulsar geo-replication), you can splice a Pulsar replicator function into each of your flows to manually produce your messages to the new cluster until the traffic is cut over. If you do that, just be mindful that the function must have its own subscription on each topic you deploy it to, to ensure that you're not losing messages.
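If you go the replicator-function route, here is a hedged sketch of deploying such a function with pulsar-admin. The jar, class name, topic, and subscription name are placeholders (you would implement the function yourself to re-produce messages to the new cluster), and flag availability can vary by Pulsar version:

# Deploy a hypothetical replicator function with its own, dedicated subscription
bin/pulsar-admin functions create \
  --jar replicator.jar \
  --classname com.example.ReplicatorFunction \
  --inputs persistent://my-tenant/my-namespace/ingest-topic \
  --name ingest-replicator \
  --subs-name replicator-sub

The dedicated subscription (--subs-name) is what guarantees the function tracks its own position on the topic and doesn't skip messages while your regular consumers keep working.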

Related

Is implementing an Elasticsearch service on the same server as a Node server, with auto scaling, a good idea?

Trying to deploy a project on a t3.large server with auto scaling.
I have my Elasticsearch service deployed on the same system as the Node and React projects (not using AWS Elasticsearch).
Will it face issues in the future, and will I need to segregate the Elasticsearch service to some other server?
It's always nice to have a separate, dedicated server for running Elasticsearch, but since you are using AWS, here are some of the things you can do to minimize the issues:
Elasticsearch is a stateful application, in contrast to your Node and React apps (unless you are storing state in them as well, which is not a good idea). Because of the stateless nature of those applications, autoscaling is very useful: you can scale instances up or down on demand based on CPU, memory, or other metrics.
But in the case of Elasticsearch or other stateful applications it becomes tricky: when you scale instances up or down, shards get relocated if a node is not reachable within a threshold, which can lead to an unbalanced Elasticsearch cluster.
Now in order to minimize these issues:
Make sure you are storing Elasticsearch indices on a network-attached disk so that there is no data loss when autoscaling brings up a new instance, and make sure the new instance reattaches the earlier network-attached EBS volume (where your data is stored).
Make sure you don't spawn a new Elasticsearch process when your autoscaling policy scales instances up or down; the set of Elasticsearch processes should stay fixed and only be scaled up or down with manual intervention.
If you have to scale the Elasticsearch cluster up or down, make sure you disable shard allocation first to avoid the issues mentioned earlier (see the example below).
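For reference, a minimal sketch of temporarily restricting shard allocation via the cluster settings API before touching instances, and re-enabling it afterwards (host and port are placeholders for one of your nodes):

# Restrict allocation to primaries before scaling / restarting instances
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{ "persistent": { "cluster.routing.allocation.enable": "primaries" } }'

# ... scale or restart the instances ...

# Reset the setting once the nodes have rejoined the cluster
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{ "persistent": { "cluster.routing.allocation.enable": null } }'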
These are some known issues you might face, and there could be even more based on your configuration. While writing this answer I realized how much easier it is to just have a dedicated instance for Elasticsearch and avoid these weird issues.
I would add the following to the other answers:
Elasticsearch performs best if it has enough RAM to keep the indexes in memory in their entirety. If Elasticsearch is competing with the Node application for RAM, its performance will suffer.
From a maintenance/performance perspective you should consider having at least a 3-node cluster, even if that means smaller machines. If AWS is upgrading infrastructure and you have one machine, when that 0.05% unavailability hits, your search is down. If you need to do maintenance on a node or do upgrades, having multiple machines will help with availability.
Depending on your use of Elasticsearch and how often you update/delete items in the indexes, and how fast your indexes will grow, adding more machines/nodes to the cluster will help with growth.
There are probably many more things to consider, but that totally depends on your application, budget, SLAs etc.

The right approach to Elasticsearch upgrades via Terraform

I would like to discuss the best practices/approaches engineers use when upgrading Elasticsearch clusters. I believe this post may serve as a good example of strategies and steps that guarantee no data loss, minimum downtime, and continued scalability and availability of the Elasticsearch services.
To start the initiative, we can break the upgrade into two subsections:
1) Performing upgrade on master nodes:
Since master nodes do not contain any data and are responsible for controlling the cluster, I believe we can safely do terraform apply to add all the upgraded master node VMs and then remove the old ones.
2) Performing upgrade on data nodes:
As many people already know, there are certain limitations on how data nodes can be updated. We cannot afford to completely deallocate a VM and replace it with another. A good practice, in my opinion, is to:
a) Stop index allocation to the old VM (see the sketch after these steps)
b) Then perform terraform apply to create the new, upgraded version of the data node VM (and manually modify the Terraform state so that the old VM is not destroyed)
c) Allow traffic (index creation) to the new VM and use the Elasticsearch APIs to transfer the data from the old VM to the new one
d) Manually change the Terraform state so that it deletes the old VM.
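A hedged sketch of steps (a) and (b), assuming the old data node is named old-data-node and its Terraform resource address is aws_instance.es_data_old (both placeholders):

# (a) Tell Elasticsearch to move shards off the old VM before replacing it
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{ "transient": { "cluster.routing.allocation.exclude._name": "old-data-node" } }'

# Watch until the old node holds no more shards
curl "localhost:9200/_cat/allocation?v"

# (b) Remove the old VM from the Terraform state so "terraform apply" won't destroy it yet
terraform state rm aws_instance.es_data_old
terraform apply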
These are just idealized steps; I would like to hear your opinions and strategies for performing safe Elasticsearch upgrades via Terraform.
The reference manual has guidelines regarding removing master-eligible nodes that you must respect in versions 7 and later. It's much trickier to get this right in earlier versions because of the discovery.zen.minimum_master_nodes setting.
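In versions 7 and later the relevant mechanism is the voting configuration exclusions API. A hedged sketch, with the node name as a placeholder (the exact query parameter name has varied across 7.x releases):

# Exclude the master-eligible node you are about to remove from the voting configuration
curl -X POST "localhost:9200/_cluster/voting_config_exclusions?node_names=old-master-1"

# ... shut down and replace the node ...

# Clear the exclusions once the replacement master-eligible node has joined
curl -X DELETE "localhost:9200/_cluster/voting_config_exclusions"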
Your strategy for data nodes sounds slow and expensive given that you might be moving many terabytes of data around for each node. It's normally better to restart/upgrade larger data nodes "in place", detaching and reattaching the underlying storage if needed.

'Unable to connect Net/http: TLS handshake timeout' — Why can't Kubectl connect to Azure Kubernetes server? (AKS)

My question (to MS and anyone else) is: Why is this issue occurring and what work around can be implemented by the users / customers themselves as opposed to by Microsoft Support?
There have obviously been 'a few' other question about this issue:
Managed Azure Kubernetes connection error
Can't contact our Azure-AKS kube - TLS handshake timeout
Azure Kubernetes: TLS handshake timeout (this one has some Microsoft feedback)
And multiple GitHub issues posted to the AKS repo:
https://github.com/Azure/AKS/issues/112
https://github.com/Azure/AKS/issues/124
https://github.com/Azure/AKS/issues/164
https://github.com/Azure/AKS/issues/177
https://github.com/Azure/AKS/issues/324
Plus a few twitter threads:
https://twitter.com/ternel/status/955871839305261057
TL;DR
Skip to workarounds in Answers below.
The current best solution is to post a help ticket and wait, or re-create your AKS cluster (maybe more than once, cross your fingers, see below...), but there should be something better. At the very least, please let AKS preview customers, regardless of support tier, upgrade their support request severity for THIS specific issue.
You can also try scaling your Cluster (assuming that doesn't break your app).
What about GitHub?
Many of the above GitHub issues have been closed as resolved but the issue persists. Previously there was an announcements document regarding the problem but no such status updates are currently available even though the problem continues to present itself:
https://github.com/Azure/AKS/tree/master/annoucements
I am posting this as I have a few new tidbits that I haven't seen elsewhere and I am wondering if anyone has ideas as far as other potential options for working around the issue.
Affected VM / Node Resource Usage
The first piece I haven't seen mentioned elsewhere is Resource usage on the nodes / vms / instances that are being impacted by the above Kubectl 'Unable to connect to the server: net/http: TLS handshake timeout' issue.
Production Node Utilization
The node(s) on my impacted cluster look like this:
The drop in utilization and network io correlates strongly with both the increase in disk utilization AND the time period we began experiencing the issue.
The overall Node / VM utilization is generally flat prior to this chart for the previous 30 days with a few bumps relating to production site traffic / update pushes etc.
Metrics After Issue Mitigation (Added Postmortem)
To the above point, here are the metrics for the same Node after scaling up and then back down (which happened to alleviate our issue, but does not always work, see answers at bottom):
Notice the 'Dip' in CPU and Network? That's where the Net/http: TLS issue impacted us — and when the AKS Server was un-reachable from Kubectl. Seems like it wasn't talking to the VM / Node in addition to not responding to our requests.
As soon as we were back (scaled the # of nodes up by one, and back down, see answers for the workaround), the Metrics (CPU etc.) went back to normal and we could connect from Kubectl. This means we can probably create an Alarm off of this behavior (and I have an issue asking about this on the Azure DevOps side: https://github.com/Azure/AKS/issues/416)
Node Size Potentially Impacts Issue Frequency
Zimmergren over on GitHub indicates that he has fewer issues with larger instances than he did running bare-bones smaller nodes. This makes sense to me and could indicate that the way the AKS servers divvy up the workload (see next section) could be based on the size of the instances.
"The size of the nodes (e.g. D2, A4, etc) :)
I've experienced that when running A4 and up, my cluster is healther than if running A2, for example. (And I've got more than a dozen similar experiences with size combinations and cluster failures, unfortunately)." (https://github.com/Azure/AKS/issues/268#issuecomment-375715435)
Other Cluster size impact references:
giorgited (https://github.com/Azure/AKS/issues/268#issuecomment-376390692)
Could an AKS server responsible for a larger number of smaller Clusters get hit more often?
Existence of Multiple AKS Management 'Servers' in one Az Region
The next thing I haven't seen mentioned elsewhere is the fact that you can have multiple Clusters running side by side in the same Region where one Cluster (production for us in this case) gets hit with 'net/http: TLS handshake timeout' and the other is working fine and can be connected to normally via Kubectl (for us this is our identical staging environment).
The fact that users (Zimmergren etc above) seem to feel that the Node size impacts the likelihood that this issue will impact you also seems to indicate that node size may relate to the way the sub-region responsibilities are assigned to the sub-regional AKS management servers.
That could mean that re-creating your cluster with a different Cluster size would be more likely to place you on a different management server — alleviating the issue and reducing the likelihood that multiple re-creations would be necessary.
Staging Cluster Utilization
Both of our AKS Clusters are in U.S. East. As a reference to the above 'Production' Cluster metrics our 'Staging' Cluster (also U.S. East) resource utilization does not have the massive drop in CPU / Network IO — AND does not have the increase in disk etc. over the same period:
Identical Environments are Impacted Differently
Both of our Clusters are running identical ingresses, services, pods, containers so it is also unlikely that anything a user is doing causes this problem to crop up.
Re-creation is only SOMETIMES successful
The above existence of multiple AKS management server sub-regional responsibilities makes sense with the behavior described by other users on github (https://github.com/Azure/AKS/issues/112) where some users are able to re-create a cluster (which can then be contacted) while others re-create and still have issues.
Emergency could = Multiple Re-Creations
In an emergency (ie your production site... like ours... needs to be managed) you can PROBABLY just re-create until you get a working cluster that happens to land on a different AKS management server instance (one that is not impacted) but be aware that this may not happen on your first attempt — AKS cluster re-creation is not exactly instant.
That said...
Resources on the Impacted Nodes Continue to Function
All of the containers / ingresses / resources on our impacted VM appear to be working well and I don't have any alarms going off for up-time / resource monitoring (other than the utilization weirdness listed above in the graphs)
I want to know why this issue is occurring and what work around can be implemented by the users themselves as opposed to by Microsoft Support (currently have a ticket in). If you have an idea let me know.
Potential Hints at the Cause
https://github.com/Azure/AKS/issues/164#issuecomment-363613110
https://github.com/Azure/AKS/issues/164#issuecomment-365389154
Why no GKE?
I understand that Azure AKS is in preview and that a lot of people have moved to GKE because of this problem. That said, my Azure experience has been nothing but positive thus far and I would prefer to contribute a solution if at all possible.
And also... GKE occasionally faces something similar:
TLS handshake timeout with kubernetes in GKE
I would be interested to see if scaling the nodes on GKE also solved the problem over there.
Workaround 1 (May Not Work for Everyone)
An interesting solution (worked for me) to test is scaling the number of nodes in your cluster up, and then back down...
Log into the Azure Console — Kubernetes Service blade.
Scale your cluster up by 1 node.
Wait for scale to complete and attempt to connect (you should be able to).
Scale your cluster back down to the normal size to avoid cost increases.
Alternately you can (maybe) do this from the command line:
az aks scale --name <name-of-cluster> --node-count <new-number-of-nodes> --resource-group <name-of-cluster-resource-group>
Since it is a finicky issue and I used the web interface I am uncertain if the above is identical or would work.
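For reference, a concrete hedged sketch of the same up-and-down cycle from the CLI, assuming a cluster named my-aks in resource group my-rg that normally runs 3 nodes (names and counts are illustrative):

# Scale up by one node
az aks scale --name my-aks --resource-group my-rg --node-count 4

# Check whether kubectl can reach the API server again
kubectl get nodes

# Scale back down to the normal size to avoid extra cost
az aks scale --name my-aks --resource-group my-rg --node-count 3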
Total time it took me ~2 minutes — for my situation that is MUCH better than re-creating / configuring a Cluster (potentially multiple times...)
That being Said....
Zimmergren brings up some good points that Scaling is not a true Solution:
"It worked sometimes, where the cluster self-healed a period after scaling. It failed sometimes with the same errors. I don't consider scaling a solution to this problem, as that causes other challenges depending on how things are set up. I wouldn't trust that routine for a GA workload, that's for sure. In the current preview, it's a bit wild west (and expected), and I'm happy to blow up the cluster and create a new one when this fails continuously." (https://github.com/Azure/AKS/issues/268#issuecomment-395299308)
Azure Support Feedback
Since I had a support ticket open at the time I ran into the above scaling solution, I was able to get feedback (or rather a guess) on why the above might have worked; here's a paraphrased response:
"I know that scaling the cluster can sometimes help if you get into a state where the number of nodes is mismatched between “az aks show” and “kubectl get nodes”. This may be similar."
Workaround References:
GitHub user Scaled nodes from console and fixed the problem: https://github.com/Azure/AKS/issues/268#issuecomment-375722317
Workaround Didn't Work?
If this DOES NOT work for you, please post a comment below as I am going to try to keep an up to date list of how often the issue crops up, whether it resolves itself, and whether this solution works across Azure AKS users (looks like it doesn't work for everyone).
Users Scaling Up / Down DID NOT work for:
omgsarge (https://github.com/Azure/AKS/issues/112#issuecomment-395231681)
Zimmergren (https://github.com/Azure/AKS/issues/268#issuecomment-395299308)
sercand — scale operation itself failed — not sure if it would have impacted connectability (https://github.com/Azure/AKS/issues/268#issuecomment-395301296)
Scaling Up / Down DID work for:
Me
LohithChanda (https://github.com/Azure/AKS/issues/268#issuecomment-395207716)
Zimmergren (https://github.com/Azure/AKS/issues/268#issuecomment-395299308)
Email Azure AKS Specific Support
If after all the diagnosis you still suffer from this issue, please don't hesitate to send email to aks-help#service.microsoft.com
Adding another answer since this is now the Azure Support official solution when the above attempts do not work. I haven't experienced the issue in a while so I can't verify this one but it seems like it would make sense to me (based on previous experience).
Credit on this one / full thread found here (https://github.com/Azure/AKS/issues/14#issuecomment-424828690)
Check for Tunneling Issues
ssh to the agent node which is running the tunnelfront pod
get the tunnelfront logs: "docker ps" -> "docker logs <tunnelfront-container-id>"
"nslookup <api-server-fqdn>" (the FQDN can be taken from the command above) -> if it resolves to an IP, DNS works, so go to the following step
"ssh -vv azureuser@<api-server-fqdn> -p 9000" -> if the port is reachable, go to the next step
"docker exec -it <tunnelfront-container-id> /bin/bash", then type "ping google.com"; if there is no response, the tunnelfront pod doesn't have external network access, so do the following step
restart kube-proxy using "kubectl delete po <kube-proxy-pod-name> -n kube-system", choosing the kube-proxy pod that is running on the same node as tunnelfront; you can find it with "kubectl get po -n kube-system -o wide" (a consolidated sketch of these commands follows)
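A consolidated, hedged sketch of those checks; the pod/container names and the API server FQDN are placeholders you would substitute from your own cluster:

# Find the node running tunnelfront and the kube-proxy pod on the same node
kubectl get po -n kube-system -o wide

# On that agent node (via ssh), inspect the tunnelfront container
docker ps | grep tunnelfront
docker logs <tunnelfront-container-id>

# Check DNS resolution and connectivity to the API server endpoint from the node
nslookup <api-server-fqdn>
ssh -vv azureuser@<api-server-fqdn> -p 9000

# Check outbound connectivity from inside the tunnelfront container
docker exec -it <tunnelfront-container-id> /bin/bash -c "ping -c 3 google.com"

# If connectivity is broken, restart the kube-proxy pod on the same node
kubectl delete po <kube-proxy-pod-name> -n kube-system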
I feel like this particular work-around could PROBABLY be automated (for sure on Azure side but probably on the community side).
Workaround 2 Re-Create Cluster (Somewhat Obvious)
I am adding this one because there are some details to keep in mind and even though I touched on it in my original Question, that thing got long, so I am adding specific details on re-creation here.
Cluster Re-Creation Doesn't Always Work
Per the above in my original question there are multiple AKS Server instances that divide up responsibilities for a given Azure region (we think). Some, or all, of these can be impacted by this bug resulting in your Cluster being un-reachable via Kubectl.
That means that if you re-create your Cluster and it somehow lands on the same AKS server, that new Cluster will probably ALSO not be reachable, requiring...
Additional Re-creation Attempts
Re-creating multiple times will probably result in your new Cluster eventually landing on one of the other AKS servers (one that is working fine).
As far as I can tell, I don't see any indication that ALL AKS servers get hit with this problem at once very often (if ever).
Different Cluster Node Size
If you are in a pinch and want the highest possible probability (we haven't confirmed this) that your re-creation lands on a different AKS management server, choose a different Node size when you create your new Cluster (see the Node Size section of the initial Question above).
I have opened this ticket asking Azure DevOps whether or not the Node Size is ACTUALLY related to deciding which Clusters are administered by which AKS management servers: https://github.com/Azure/AKS/issues/416
Support Ticket Fix vs. Self Healing
Since there are a lot of users who indicate that the problem occasionally solves itself and just goes away I think that it is reasonable to guess that Support actually fixes the offending AKS server (which may result in other users having their Clusters fixed — 'Self Heal') as opposed to fixing the individual user's Cluster.
Creating Support Tickets
To me the above would likely mean that creating a Ticket is probably a good thing since it would fix other user Clusters experiencing the same issue — it might also be an argument for allowing support issue severity escalation for this specific issue.
I think this is also a decent indicator that maybe Azure support hasn't figured out how to fully alarm for the problem yet, in which case creation of a support ticket serves that purpose as well.
I also asked Azure DevOps whether they Alarm for the issue (based on my experience easily visualizing the issue based on CPU and Network IO metric changes) on their side: https://github.com/Azure/AKS/issues/416
If NOT (haven't heard back) then it makes sense to create a ticket EVEN IF you plan to re-create your cluster since that ticket would then make Azure DevOps aware of the issue resulting in a fix for other users on that Cluster management server.
Things to make Cluster Re-Creation Easier
I will add to this (feedback / ideas are appreciated) but off the top of my head:
Be diligent (obvious) about how you store all YAML files used to create your Cluster (even if you don't re-deploy often for your app by design).
Script your DNS modifications in order to speed up pointing to the new instance — If you have a public facing app / service that utilizes DNS (Maybe something like this example for Google Domains?: https://gist.github.com/cyrusboadway/5a7b715665f33c237996, Full docs here: https://cloud.google.com/dns/api/v1/)
We just had this issue for one of our clusters. Sent a support ticket and got called back 5 minutes later by an engineer asking if it was OK for them to restart the API Server. 2 minutes later it was working again.
Reason was something about timeouts in their messaging queue.

How do I determine the number of Node Types, Number of nodes and VM size in Service Fabric cluster for a relatively simple but high throughput API?

I have an Asp.Net Core 2.0 Web API that has relatively simple logic (a simple select on a SQL Azure DB, returning about 1000-2000 records; no joins, aggregates, functions etc.). I have only 1 GET API, which is called from an Angular SPA. Both are deployed in Service Fabric as stateless services, hosted in Kestrel as self-hosting exes.
Considering the number of users and how often they refresh, I've determined there will be around 15,000 requests per minute, in other words 250 req/sec.
I'm trying to understand the different settings when creating my Service Fabric cluster.
I want to know:
How many Node Types? (I've determined as Front-End, and Back-End)
How many nodes per node type?
What is the VM size I need to select?
I have read the Azure documentation on cluster capacity planning. While I understand the concepts, I don't have a frame of reference to determine the actual values I need to provide for the above questions.
In most places where you read about planning a cluster, they will suggest that this subject is part science and part art, because there is no easy answer to the question. It's hard to answer because it depends a lot on the complexity of your application; without knowing the internals of how it works, we can only guess at a solution.
Based on your questions, the best guidance I can give you is: measure first, measure again, measure... plan later. Your application might be memory-intensive, network-intensive, CPU-bound, disk-bound and so on; the only way to find the best configuration is to understand it.
To understand your application before you make any decision on SF structure, you could simply deploy a cluster with multiple node types containing one node of each VM size and measure your application's behavior on each of them; then you would add more nodes, spawn multiple instances of your service on those nodes, and see which configuration is the best fit for each service.
1.How many Node Types?
I like to map node types 1:1 to the roles in your application, but this is not a law; it depends on how much resource each service consumes. If a service consumes enough resources to keep a single VM (node) busy (memory, CPU, disk, IO), it is a good candidate for its own node type. In other cases there are lightweight services for which provisioning an entire VM (node) would be a waste of resources, for example scheduled jobs, backups, and so on. In that case you could provision a set of machines to be shared by these services. One important thing to keep in mind when you share a node type among multiple services is that they will compete for resources (memory, CPU, network, disk), so the performance measurements you took for each service in isolation might not hold anymore and they may require more resources; the option is to test them together.
Another point is the number of replicas. Having a single instance of your service is not reliable, so you would have to create replicas of it (I describe the right number in the next answer). In that case you end up with the service load split across multiple nodes, which can leave the node type under-utilized; this is where you would consider co-locating services on the same node type.
2.How many nodes per node type?
As stated before, it will depend on your service resource consumption, but a very basic rule is a minimum of 3 per node type.
Why 3?
Because 3 is the lowest number where you could have a rolling update and guarantee a quorum of 51% of nodes\service\instances running.
1 Node: If you have a service running 1 instance in a node type of 1 node, when you deploy a new version of your service you have to bring down this instance before the new one comes up, so you would not have any instance to serve the load while upgrading.
2 Nodes: Similar to 1 node, but in this case only 1 node stays running during the upgrade. In case of failure, you wouldn't have a failover to handle the load until the new instance comes up. It is worse if you are running a stateful service, because you will have only one copy of your data during the upgrade, and in case of failure you might lose data.
3 Nodes: During an update you still have 2 nodes available; when the one being updated comes back, the next one is taken down and you still have 2 nodes running. In case of failure of one node, the other nodes can support the load until a new node is deployed.
3 nodes does not mean your cluster will be highly reliable; it means the chances of failure and data loss will be lower, and you might still be unlucky and lose 2 nodes at the same time. As suggested in the docs, in production it is better to always keep the number of nodes at 5 or more, and plan to have a quorum of 51% of nodes\services available. I would recommend 5, 7 or 9 nodes in cases where you really need higher uptime, 99.9999...%.
3.What is the VM size I need to select?
As said before, only measurements will give this answer.
Observations:
These recommendations do not take into account planning for primary node types. It is recommended to have at least 5 nodes in the primary Node Type; this is where the SF system services are placed. They are responsible for managing the cluster, so they must be highly reliable, otherwise you risk losing control of your cluster. If you plan to share these nodes with your application services, keep in mind that your services might impact them, so you have to monitor them constantly to check for any impact.

HDInsight vs. Virtualized Hadoop Cluster on Azure

I'm investigating two alternatives for using a Hadoop cluster. The first one is using HDInsight (with either Blob or HDFS storage); the second alternative is deploying a powerful Windows Server on Microsoft Azure and running HDP (Hortonworks Data Platform) on it (using virtualization). The second alternative gives me more flexibility; however, what I'm interested in is investigating the overhead of each alternative. Any ideas on that? In particular, what is the effect of Blob storage on efficiency?
This is a pretty broad question, so an answer of "it depends," is appropriate here. When I talk with customers, this is how I see them making the tradeoff. It's a spectrum of control at one end, and convenience on the other. Do you have specific requirements on which Linux distro or Hadoop distro you deploy? Then you will want to go with IaaS and simply deploy there. That's great, you get a lot of control, but patching and operations are still your responsibility.
We refer to HDInsight as a managed service, and what we mean by that is that we take care of running it for you (eg, there is an SLA we provide on the cluster itself, and the apps running on it, not just "can I ping the vm"). We operate that cluster, patch the OS, patch Hadoop, etc. So, lots of convenience there, but, we don't let you choose which Linux distro or allow you to have an arbitrary set of Hadoop bits there.
From a perf perspective, HDInsight can deploy on any Azure node size, similar to IaaS VM's (this is a new feature launched this week). On the question of Blob efficiency, you should try both out and see what you think. The nice part about Blob store is you get more economic flexibility, you can deploy a small cluster on a massive volume of data if that cluster only needs to run on a small chunk of data (as compared to putting it all in HDFS, where you need all of the nodes running all of the time to fit all of your data).
