Cloud Disk Triplicate in Alibaba Cloud ECS

What is Cloud Disk Triplicate, and how can I add it to my Alibaba Cloud ECS instance?
I appreciate any assistance with this.

Triplicate technology is the process of making and distributing three copies of data. It is the core concept of Alibaba Cloud's distributed file system, which provides stable and efficient data access and reliability for ECS.
All cloud disks and Shared Block Storage offered for Alibaba Cloud ECS instances use triplicate technology, so there is nothing to enable: you can just attach a cloud disk to your ECS instance.
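For example, here is a minimal sketch of creating and attaching a disk through the ECS API using the Alibaba Cloud Node.js SDK (@alicloud/pop-core); the region, zone, instance ID, and credentials are placeholders you would substitute:

```typescript
// Minimal sketch: create a cloud disk and attach it to an ECS instance.
// All IDs and credentials are placeholders; error handling is omitted.
import { RPCClient } from "@alicloud/pop-core";

const client = new RPCClient({
  accessKeyId: process.env.ALICLOUD_ACCESS_KEY_ID!,
  accessKeySecret: process.env.ALICLOUD_ACCESS_KEY_SECRET!,
  endpoint: "https://ecs.aliyuncs.com",
  apiVersion: "2014-05-26",
});

async function addCloudDisk(): Promise<void> {
  // Create a 100 GiB disk; triplicate replication is built into the
  // storage layer, so nothing extra has to be enabled here.
  const created = (await client.request(
    "CreateDisk",
    { RegionId: "cn-hangzhou", ZoneId: "cn-hangzhou-b", DiskCategory: "cloud_efficiency", Size: 100 },
    { method: "POST" }
  )) as { DiskId: string };

  // Attach the new disk to an ECS instance in the same zone.
  await client.request(
    "AttachDisk",
    { RegionId: "cn-hangzhou", InstanceId: "i-xxxxxxxxxxxxxxxx", DiskId: created.DiskId }, // placeholder instance ID
    { method: "POST" }
  );
  console.log(`Attached disk ${created.DiskId}`);
}

addCloudDisk().catch(console.error);
```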
Hope I answered your question.
More about Triplicate Technology

Related

Storage to cloud

We have a working NetApp with ESXi (VMware 5.5) setup, with multiple VMs running on 3 ESXi systems but residing entirely on NetApp storage.
We are thinking of moving this entire setup to a private cloud built on HP Nimble storage. This cloud is currently owned by one of our departments, which is ready to give us space (in terms of storage) and ESXi hosts (a VM cluster) to run our VMs on a rental basis. The immediate advantages for us are more space, more network speed, a DR setup, and no longer having to worry about the hardware.
Of course, this is still in the discussion phase, but I would like to ask you experts the following questions.
1. NetApp storage is all about data plus its configuration (snapshots, user quota policies, export rules, etc.). When we talk about storage space in the cloud, how are we going to control/administer those configuration parts? Or will that no longer be possible, with the cloud administrators taking control so that we depend on them for every configuration change? This is a very important factor.
2. Can VMs running on NetApp storage be migrated without much effort? Is there a documented method for this?
Your view on this will be really helpful.
Thanks in advance.
Regards,
Admin
On point #1, a common way to provide multi-tenant administrator access on NetApp is to create a separate SVM [1] (Storage Virtual Machine) that a tenant administrator can use to manage volume capacity, snapshots, quotas, etc.
For #2, a common migration path for moving VMware VMs is to use Storage vMotion [2]. The private cloud provider can remap the ESXi hosts in your environment to be managed under their vCenter Server first. Then from there, they will have the ability to (non-disruptively, in most cases) move the VMs from your old NetApp datastores to new datastores on their array. They can do the same for vMotioning these VMs over to their managed ESXi hosts.
[1] https://docs.netapp.com/us-en/ontap/concepts/storage-virtualization-concept.html
[2] https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vcenterhost.doc/GUID-AB266895-BAA4-4BF3-894E-47F99DC7B77F.html

Pricing Details for Azure Kubernetes Service and Service Fabric on free tier

I was able to sign up for a free Azure for Students account, which has the same limits as the Azure free tier. The "Free services" page lists entries for both AKS and Service Fabric.
While I understand that "you are only charged for the compute, storage, networking, etc. resources you use" (from Microsoft's website), what can we logically do with AKS and Service Fabric in the free tier? For example, with Virtual Machines, you can run Linux and Windows B1S VMs for a combined 1500 hours every month. This means I can provision two B1S VMs, one Linux and one Windows, and leave them running for up to 1500 hours per month.
To summarize, can we provision an AKS cluster (with some number of worker nodes) or a Service Fabric cluster without incurring any charges, in a similar way to how I can create VMs and use them within these limits for free? And if not, is there any free alternative?
Thank you in advance.
The AKS cluster itself (the managed control plane) is free, but you can't use B1s nodes for the system node pool (they are too small):
https://learn.microsoft.com/en-us/azure/aks/quotas-skus-regions#restricted-vm-sizes
So the answer would be no.
EDIT: sorry, I forgot about the Service Fabric part of the question. I don't see why that wouldn't work, apart from the fact that it would be next to impossible to deploy anything useful on top of a B1s VM that is already running all of the Service Fabric binaries.
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-capacity
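To make the size restriction concrete, here is a minimal sketch using the Azure SDK for JavaScript (@azure/arm-containerservice) that creates a cluster whose system node pool uses an allowed size such as Standard_B2s instead of B1s. The resource group, names, location, and subscription ID are placeholders; note that while the control plane is free, the node VMs themselves are billed, so this is not zero-cost:

```typescript
// Minimal sketch: create an AKS cluster whose system node pool uses an
// allowed VM size (B1s is on the restricted list). All names are placeholders.
import { DefaultAzureCredential } from "@azure/identity";
import { ContainerServiceClient } from "@azure/arm-containerservice";

const subscriptionId = process.env.AZURE_SUBSCRIPTION_ID!;
const client = new ContainerServiceClient(new DefaultAzureCredential(), subscriptionId);

async function createCluster(): Promise<void> {
  const cluster = await client.managedClusters.beginCreateOrUpdateAndWait(
    "my-resource-group", // placeholder resource group
    "my-aks-cluster",    // placeholder cluster name
    {
      location: "westeurope",
      dnsPrefix: "myaks",
      identity: { type: "SystemAssigned" },
      agentPoolProfiles: [
        {
          name: "system",
          mode: "System",
          count: 1,
          vmSize: "Standard_B2s", // Standard_B1s would be rejected as too small
          osType: "Linux",
        },
      ],
    }
  );
  console.log(`Provisioned: ${cluster.provisioningState}`);
}

createCluster().catch(console.error);
```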

Highly available, redundant Redis cluster on Kubernetes

The objective is to create a highly available Redis cluster on Kubernetes for a Node.js client. I have already created the architecture as below:
Created a Kubernetes cluster with one master (kmaster) and 3 worker nodes.
Created StatefulSets and persistent volumes (6 in total, one for each pod).
Created 2 Redis pods on each node (3 masters and 3 replicas, one per respective master).
I need to understand the role of Redis Sentinel from here on: how does it manage monitoring, scaling, and HA for the Redis pods across the nodes? I understand Sentinel should run on each node and do its job, but what is the right architecture here?
P.S. I have created a local setup for now, but ultimately this will run on Azure, so any Azure-specific suggestions are also welcome.
Thanks!
From an Azure perspective, you have two options. Even if you are set on option two but are looking for the Sentinel architecture piece, note that there are business continuity and high availability options in both IaaS (Linux VM scale sets) and PaaS services that go beyond the Sentinel component.
The first option is Azure Cache for Redis (PaaS), where you choose and deploy your desired service tier (Premium tier required for HA) and connect your client applications. Please see: Azure Cache for Redis FAQ and Caching Best Practice.
The second option is to deploy the solution you have detailed as IaaS, built from Azure VMs. There are a number of Redis Linux VM images to choose from in the Azure Marketplace, or you can create a Linux VM OS image from your on-premises solution and migrate it to Azure. The Sentinel component is enabled on each server (master, slave A, slave B, ...). There are networking and other considerations too. For building a system from scratch, please see: How to Setup Redis Replication (with Cluster-Mode Disabled) in CentOS 8 – Part 1 and How to Setup Redis For High Availability with Sentinel in CentOS 8 – Part 2
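Whichever option you pick, the Node.js client side looks the same: instead of pointing at one Redis host, the client asks the Sentinels for the current master and follows failovers automatically. A minimal sketch with the ioredis package follows; the Sentinel addresses and the master group name "mymaster" are placeholders for your setup:

```typescript
// Minimal sketch: connect a Node.js client to Redis through Sentinel.
// The client queries the Sentinels for the current master and reconnects
// automatically after a failover. Hosts and master name are placeholders.
import Redis from "ioredis";

const redis = new Redis({
  sentinels: [
    { host: "sentinel-0.redis.svc.cluster.local", port: 26379 },
    { host: "sentinel-1.redis.svc.cluster.local", port: 26379 },
    { host: "sentinel-2.redis.svc.cluster.local", port: 26379 },
  ],
  name: "mymaster", // the master group name configured in sentinel.conf
});

async function demo(): Promise<void> {
  await redis.set("greeting", "hello");
  console.log(await redis.get("greeting")); // -> "hello"
}

demo().catch(console.error).finally(() => redis.disconnect());
```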

Terraform Import All Cloud Infrastructure Services to Statefile

I'm using many services in Alibaba Cloud, like Container Service, VPC, RDS, DNS, OSS, and more.
Importing the Alibaba Cloud resources one by one would take a long time.
Is there an elegant and fast way to import all of the cloud infrastructure into a state file?
Yes, you can make a resource list and then run terraform import for each entry, but make sure you already have a matching resource block in your configuration for each one: terraform import only writes to the state file, it does not generate any configuration.
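As a minimal sketch of that approach (the resource addresses and cloud IDs below are hypothetical placeholders), a small Node.js script can loop over the resource list and shell out to terraform import for each entry:

```typescript
// bulk-import.ts: run `terraform import` for every entry in a resource map.
// Addresses and cloud resource IDs are hypothetical placeholders; each
// address still needs a matching `resource` block in your .tf files,
// because `terraform import` only writes state, not configuration.
import { execFileSync } from "node:child_process";

const resources: Record<string, string> = {
  "alicloud_vpc.main": "vpc-xxxxxxxxxxxx",
  "alicloud_db_instance.rds": "rm-xxxxxxxxxxxx",
  "alicloud_oss_bucket.assets": "my-bucket-name",
};

for (const [address, id] of Object.entries(resources)) {
  console.log(`Importing ${address} <- ${id}`);
  execFileSync("terraform", ["import", address, id], { stdio: "inherit" });
}
```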

Azure VMs high-availability setup for data disk or storage

I'm currently looking into a high-availability approach for a file server in Azure, for which I will need to deploy VMs. The data on the file server will be constantly changing. From what I have read so far, I will need at least 2 VMs grouped together into a shared availability set, along with creating a cloud service. Although this addresses the application and server aspect, what about the storage and the data on them?
I understand that I can't attach a single disk to multiple VMs, so I'm a bit lost on how to proceed. Any thoughts or ideas on how to move forward with this?
In short, I have a VM with a data disk directly attached to it, and I'm looking to provide high availability in case the VM goes offline, whether through an outage, host patching, hardware maintenance, etc.
Have a look into Azure Blob Storage: don't worry about disks etc., just let the Azure fabric handle the data redundancy and scalability for you!
Here's an "all you need" introduction to Windows Azure Storage.
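To give a feel for how little there is to manage, here is a minimal sketch using the @azure/storage-blob SDK (the connection string, container, and blob names are placeholders); durability and redundancy come from the storage account's replication setting (LRS/GRS), not from anything in your code:

```typescript
// Minimal sketch: write data to Azure Blob Storage. Redundancy and
// scalability are handled by the storage account configuration, not
// by the application. Connection string and names are placeholders.
import { BlobServiceClient } from "@azure/storage-blob";

async function uploadFile(): Promise<void> {
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING!
  );
  const container = service.getContainerClient("fileserver-data");
  await container.createIfNotExists();

  const content = "constantly changing file data";
  const blob = container.getBlockBlobClient("reports/2020-01.txt");
  await blob.upload(content, Buffer.byteLength(content));
  console.log(`Uploaded to ${blob.url}`);
}

uploadFile().catch(console.error);
```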
