How to connect to the slave instance of an Azure Redis Cache

The standard and premium pricing tiers of Azure Redis Cache provide master/slave replication:
Standard—A replicated cache in a two-node primary/secondary
configuration managed by Microsoft, with a high-availability SLA.
But the Azure portal provides connection details (hostname, port, key) for only a single Redis instance. Is there a way to connect to the slave process in a replica?

Since the Azure Redis service manages replication and automatic failover on your behalf, it is best not to make any assumptions about which node is the master, as that could change on a failover. Hence the service exposes only one endpoint and ensures that any requests to that endpoint hit the correct master. It is technically possible to connect to the master or the slave, but Azure doesn't expose this, and it would require checks on the client side to verify that the node really is the master or the slave.
If you turn on clustering, the Redis cluster protocol is used. Under this protocol, you can run the CLUSTER NODES command and it should return a list of master and slave nodes and the ports that each of them is listening on.
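As a rough illustration of that, here is a minimal Python sketch using the redis-py package to issue CLUSTER NODES against a clustered cache; the host name and access key are placeholders, and port 6380 is the SSL port Azure exposes by default.

```python
# Minimal sketch: list the master/replica nodes of a clustered Azure Redis Cache.
# Assumes the redis-py package and a cache with clustering enabled; the host
# name and access key below are placeholders.
import redis

r = redis.Redis(
    host="mycache.redis.cache.windows.net",  # placeholder cache host
    port=6380,                               # SSL port exposed by Azure Redis
    password="<access-key>",                 # access key from the portal
    ssl=True,
    decode_responses=True,
)

nodes = r.execute_command("CLUSTER NODES")
# Depending on the redis-py version this is either the raw text reply
# (one line per node: id, host:port, master/slave flags, ...) or a parsed mapping.
print(nodes)
```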

The Redis service manages replication and failover for high availability. This is not something exposed to you; that is, you cannot connect directly to the slave/secondary.

Related

How does failover work when the primary VM scale set gets restarted?

Above is a sample configuration for Azure Service Fabric.
I created it with the wizard and deployed an ASP.NET Core application, which I am able to access from outside.
Now, if you look at the image below, the Service Fabric cluster is accessed via sfclustertemp.westus2.cloudapp.azure.com, and I can reach the application at
sfclustertemp.westus2.cloudapp.azure.com/api/values.
If I restart the primary VM scale set, it should transfer the load to the secondary one. I thought this would happen automatically, but it does not, because the second load balancer has a different DNS name. (If I specify that other DNS name, the application is accessible.)
My understanding is that the cluster has one ID, so it is common to both load balancers.
Is such a configuration possible?
Maybe you could use Azure Traffic Manager with health probes.
However, instead of using multiple node types for fail-over options during reboot, have a look at 'Durability tiers'. Using Silver or Gold will have the effect that reboots are performed sequentially on machine groups (grouped by fault domain), instead of all at once.
The durability tier is used to indicate to the system the privileges
that your VMs have with the underlying Azure infrastructure. In the
primary node type, this privilege allows Service Fabric to pause any
VM level infrastructure request (such as a VM reboot, VM reimage, or
VM migration) that impact the quorum requirements for the system
services and your stateful services.
There is a misconception about what an SF cluster is.
In your diagram, the part described on the left as 'Service Fabric' does not belong there.
Service Fabric is nothing more than applications and services deployed on the cluster nodes. When you create a cluster, you define a primary node type; that is where Service Fabric will deploy the services used for managing the cluster.
A node type will be formed by:
A VM Scale Set: machines with OS and SF services installed
A load balancer with dns and IP, forwarding requests to the VM Scale Set
So what you describe there should be represented as:
NodeTypeA (Primary)
Load Balancer (cluster domain + IP)
VM Scale Set
SF management services (explorer, DNS)
Your applications
NodeTypeB
Load Balancer (other dns + IP)
VM Scale Set
Your applications
Given that:
First: if the primary node type goes down, you will lose your cluster, because the management services won't be available to manage your service instances.
Second: you shouldn't rely on node types for this kind of reliability; instead, increase the reliability of your cluster by adding more nodes to the node types.
Third: if the concern is a data center outage, you could (a minimal sketch follows this list):
Create a custom cluster that spans multiple regions
Add a reverse proxy or API gateway in front of your services to route each request to wherever your service is.
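The following Python snippet is a rough, hedged sketch of that last idea: a thin client/gateway that probes both load balancer endpoints and forwards to the first healthy one. The secondary DNS name and the probe path are made up for illustration; Azure Traffic Manager (as suggested in the other answer) does this kind of health-probe routing for you at the DNS level.

```python
# Sketch only: probe-and-failover across the two node types' load balancers.
# The second DNS name and the probe path are placeholders.
import requests

ENDPOINTS = [
    "http://sfclustertemp.westus2.cloudapp.azure.com",      # primary node type LB
    "http://sfclustertemp-nt2.westus2.cloudapp.azure.com",  # secondary node type LB (made-up name)
]

def get_with_failover(path, timeout=5.0):
    """Send a GET to the first endpoint that answers its health probe."""
    last_error = None
    for base in ENDPOINTS:
        try:
            requests.get(base + "/api/values", timeout=2).raise_for_status()  # cheap probe
            return requests.get(base + path, timeout=timeout)
        except requests.RequestException as err:
            last_error = err
    raise RuntimeError("no healthy endpoint found: %s" % last_error)

print(get_with_failover("/api/values").status_code)
```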

Accessing Mongo replicas in kubernetes cluster from AWS lambdas

Some of my data is in Mongo replicas that are hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda that is running in the same VPC and subnet (as the Kubernetes minions with the Mongo DB). The Lambda as well as the Kubernetes minions (hosting the Mongo containers) run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes services that enable access to the appropriate replicas. When I try to connect using this URL, it fails to resolve the Mongo replica URL (e.g. mongo-rs-2-svc). The same URL works fine for my web service that is running in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client that I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}. I tried replacing mongo-rs-x-svc with their internal IP addresses in the URL. In this case the name resolution error above disappeared, but I got another error: {"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to enable this access successfully?
I understand that I can use the web service to access this data as an intermediary, but since my Lambda is in a VPC, I would have to deploy NAT gateways, and that would increase the cost. Is there a way to access the web service using the internal endpoint instead of the public URL? Maybe that is another way to get the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results but neither had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the Nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the service names (e.g. mongo-rs-1-svc) will only work inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
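If that works for your setup, a hedged pymongo sketch of the resulting connection could look like this; the node IPs, the NodePort value, and the replica set name are placeholders that need to match your actual cluster.

```python
# Sketch: connect from a Lambda (or any host in the VPC) to the Mongo replica
# set via the Kubernetes NodePort opened on each node. IPs, port and replica
# set name below are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://10.0.1.11:32017,10.0.1.12:32017,10.0.1.13:32017/res"
    "?replicaSet=mongo_rs",
    serverSelectionTimeoutMS=5000,
)

# Note: if the replica set config advertises the in-cluster service names,
# the driver may still fail to resolve members after the initial seed connection.
print(client.admin.command("ping"))  # forces server selection so errors surface early
```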
Coreyphobrien's answer is correct. Subsequently you asked how to keep the exposure private. For that I want to add some information:
You need to make the Lambdas part of the VPC that your cluster is in. For this you use the --vpc-config parameter when creating or updating the Lambdas. This will create a virtual network interface in the VPC that allows the Lambda access. For details see this.
After that you should be able to set the AWS security group for your instances so that the NodePort will only be accessible from the security group that is used for your Lambda's network interface.
This blog discusses an example in more detail.
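As a rough sketch of that security group rule, here is what it could look like with boto3; both security group IDs and the NodePort range are placeholders.

```python
# Sketch: allow the Lambda's security group to reach the Kubernetes NodePort
# range on the nodes' security group. Group IDs and port range are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0nodes0000000000",            # security group of the k8s nodes (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 30000,                # default NodePort range
            "ToPort": 32767,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0lambda000000000"}  # security group on the Lambda ENI (placeholder)
            ],
        }
    ],
)
```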

Cassandra on Azure, how to configure security groups

I just installed the DataStax Cassandra cluster.
I have a question regarding the security groups and how to limit access.
Currently there are no security groups on the vnet or on any of the VMs, so everyone can connect to the cluster.
The problem starts when I try to set a security group on the subnet. This is because the HTTP communication of the Cassandra nodes is (I think) done over the public IP and not the internal IP; I get an error in OpsCenter that the HTTP connection is down.
The question is how I can restrict access to the cluster (to a specific IP) while still letting all the Cassandra nodes communicate.
It's good practice to exercise security when running inside any public cloud, whether it's Azure, GCE, or AWS etc. Enabling internode SSL is a very good idea because this will secure the internode gossip communications. Then you should also introduce internal authentication (at the very least) so you require a user/password to log in to cqlsh. I would also recommend using client-to-node SSL; 1-way should be sufficient for most cases.
I'm not so sure about Azure, but I know with AWS and GCE the instances will only have a local, internally routed IP (usually in the 10.0.0.0/8 private range) and the public IP will be via NAT. You would normally use the public IP as the broadcast_address, especially if you are running across different availability zones where the internal IP does not route. You may also be running a client application which might connect via the public IP, so you'd want to set the broadcast_rpc_address as public too. Both of these are found in cassandra.yaml. The listen_address and rpc_address are both IPs that the node will bind to, so they have to be locally available (i.e. you can't bind a process to an IP that's not configured on an interface on the node).
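To make the client-side pieces concrete (client-to-node SSL plus internal authentication), here is a hedged sketch using the DataStax Python driver; the contact point, credentials, and CA file path are placeholders, and a reasonably recent driver version is assumed for the ssl_context parameter.

```python
# Sketch: client-to-node SSL (1-way) plus internal authentication with the
# DataStax Python driver. Contact point, credentials and CA path are placeholders.
import ssl

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.load_verify_locations("rootca.pem")  # CA that signed the node certificates
ssl_context.check_hostname = False               # depends on what the certs' CN/SAN contain

cluster = Cluster(
    contact_points=["52.1.2.3"],                 # public IP / broadcast_rpc_address (placeholder)
    port=9042,
    ssl_context=ssl_context,
    auth_provider=PlainTextAuthProvider(username="cassandra", password="cassandra"),
)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
```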
Summary
Use internode SSL
Use client to node SSL
Use internal authentication at the very minimum (LDAP and Kerberos are also supported)
Useful docs
I highly recommend following the documentation here. Introducing security can be a bit tricky if you hit snags (whatever the application). I always start off making sure the cluster is running OK with no security in place, then introduce one thing at a time, then test and verify, then introduce the next thing. Don't configure everything at once!
Firewall ports
Client to node SSL - note require_client_auth: true should be false for 1-way.
Node to node SSL
Preparing SSL certificates
Unified authentication (internal, LDAP, Kerberos etc)
Note that when generating SSL keys and certs for node-to-node SSL, typically you'd just generate the one pair and use it across all the nodes. Otherwise, if you introduce a new node you'll have to import the new cert into all nodes, which isn't really scalable. In my experience working with organisations using large clusters, this is how they manage things. Client applications may well use the same key, or at least a separate one of their own.
Further info / reading
2-way SSL is supported, but it's not as common as 1-way. It is typically a bit more complex and is switched on with require_client_auth: true in cassandra.yaml.
If you're using OpsCenter with SSL, the docs (below) will cover things. Note that essentially it's in two places:
SSL between OpsCenter and the agents and the cluster (same as client-to-node SSL above)
SSL between OpsCenter and the Agents
OpsCenter SSL configuration
I hope this helps you towards achieving what you need to!

Scaling of Azure service fabric Stateless services

Can you please give me a better understanding of how we can scale the stateless services without partitioning?
Say we have 5 nodes in a cluster and we have 5 instances of the service. In simple testing, a node behaves as sticky: all the requests I send are served by only one node. When a high volume of requests comes in, can the other instances be used automatically to serve the traffic? How do we handle such scale-out situations in Service Fabric?
Thanks!
Usually there's no need to use partitioning for stateless SF services, so avoid that if you can:
more on SF partitioning, including why it's not normally used for stateless services
If you're using the ServiceProxy API, it will maintain sticky connections to a given physical node in the cluster. If you're (say) exposing HTTP endpoints, you'll have one for each physical instance in the cluster (meaning you'll end up talking to one at a time, unless you manually cycle thru them). You can avoid this by:
Creating a new proxy instance for each call, which tends to be expensive if you do it a lot (or manually cycle thru the list of instance endpoint URLs, which can be tedious and/or expensive)
Put a load balancer in front of your cluster and configure all traffic from your clients to SF nodes to be forwarded thru that. The load balancer can be configured for Round-Robin, etc. style semantics:
Azure Load Balancer
Azure Traffic Manager
Good luck!
You can route the request using the reverse proxy installed on each node: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reverseproxy
The reverse proxy then resolves the endpoint for you. If you have multiple instances of a stateless service, it will forward your request to a random one.
During heavy load you can increase the instance count of your service, and the proxy will then include the new instances automatically.
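For illustration, a hedged sketch of calling a stateless service through the reverse proxy on its default port 19081; the application and service names are made up.

```python
# Sketch: call a stateless service via the Service Fabric reverse proxy that
# runs on every node (default port 19081). Application/service names are placeholders.
import requests

# Format: http://localhost:19081/<ApplicationName>/<ServiceName>/<path to the API>
url = "http://localhost:19081/MyApp/MyStatelessService/api/values"

response = requests.get(url, timeout=5)
response.raise_for_status()
print(response.json())
```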
I will assume you are calling your services from outside your cluster. If so, your problem is not specific to Service Fabric; it is about Azure VMSS + LB.
Service Fabric runs on top of a Virtual Machine Scale Set, and these VMs are created behind a load balancer. When a client connects to your service, it opens a connection through the load balancer to your service. Whenever a connection is opened, the load balancer assigns one target VM to handle the request, and any request made from that client while using the same connection (keep-alive) will be handled by the same node. This is why your load goes to a single node.
The LB won't round-robin the requests because they use the same connection; it is a limitation (feature) of the LB. To work around this, you should open multiple connections or use multiple clients (instances).
This applies to the default distribution mode (hash-based). You also have to check the routing rules in the LB to see whether the distribution mode is hash-based (5-tuple: IP + port) or IP-affinity mode (IP only); otherwise, multiple connections from the same IP will still be linked to the same node.
Source: Azure Load Balancer Distribution Mode
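A small, hedged Python illustration of that behaviour follows; the URL is a placeholder and the actual spread depends on the distribution mode described above.

```python
# Sketch: with a keep-alive session all requests reuse one TCP connection and
# (under 5-tuple hashing) land on the same node; new connections per request
# can be spread across nodes. The URL is a placeholder.
import requests

URL = "http://sfclustertemp.westus2.cloudapp.azure.com/api/values"

# 1) Same connection reused (keep-alive) -> typically the same backend node.
with requests.Session() as session:
    for _ in range(5):
        session.get(URL, timeout=5)

# 2) A fresh connection per request -> new source port, so the 5-tuple hash
#    can pick a different backend node each time.
for _ in range(5):
    requests.get(URL, timeout=5, headers={"Connection": "close"})
```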

Access Cassandra Node on the Azure Cloud from outside

I have created a Linux VM with a single node Cassandra cluster installed.
Cassandra.yaml has the following:
seeds:
listen_address:
rpc_address:
A netstat -an check shows that all the required ports are up and listening (i.e. 9160, 9042).
I am trying to connect my application, which is outside of the Azure cloud, to the Cassandra cluster in the cloud. It looks like the connection between the outside host and the Azure Cassandra node is being blocked.
I wonder whether there is a real restriction on accessing an Azure VM from outside the network. Is there a way to access this Cassandra node from outside?
If someone could answer my question, that would be very nice.
Thank you!
You need to go to the "Endpoints" of your virtual machine:
At the bottom click on "Add", and add new endpoints for these ports.
Then you will need to manage ACL for each endpoint, defining the IP ranges of the allowed and blocked IP addresses.
Keep in mind that if the internal IP used by the virtual machine is different from the external (public) IP used by the client, then depending on the driver you may need to teach it how to do address translation. Otherwise the cluster will report only internal IPs upon the discovery request, which will obviously not be accessible from outside.
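For example, the DataStax Python driver lets you plug in an AddressTranslator that maps the internal IPs the cluster reports to externally reachable addresses; a hedged sketch with made-up addresses follows.

```python
# Sketch: teach the DataStax Python driver to translate the internal IPs that
# the cluster reports into the public IPs reachable from outside Azure.
# All addresses below are placeholders.
from cassandra.cluster import Cluster
from cassandra.policies import AddressTranslator

INTERNAL_TO_PUBLIC = {
    "10.0.0.4": "40.76.0.10",   # internal VM IP -> public endpoint (placeholders)
}

class AzureAddressTranslator(AddressTranslator):
    def translate(self, addr):
        # Fall back to the original address if no mapping is known.
        return INTERNAL_TO_PUBLIC.get(addr, addr)

cluster = Cluster(
    contact_points=["40.76.0.10"],          # public endpoint (placeholder)
    port=9042,
    address_translator=AzureAddressTranslator(),
)
session = cluster.connect()
```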
For this reason, and from the security perspective, I would recommend setting up the Cassandra cluster inside a virtual network and accessing it via VPN.
There is a comprehensive tutorial how to do it here: http://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-nodejs-running-cassandra/
