[Question posted by a user on YugabyteDB Community Slack]
I’m trying to set up async replication between universes running on CentOS VMs in different Azure regions behind an Azure load balancer. I'm getting "connection refused" or "unable to establish connection to leader master" errors. I probably need help understanding how and where to bind the correct IPs, and which IPs to provide in the replication setup.
The DB servers need to be able to talk to each other directly, not through load balancers. The IPs in the replication setup must be the DB servers' own addresses; load balancers have no role between the universes. Load balancers can still be used between your clients and the servers.
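For example, a minimal sketch of the flags involved (all IPs below are hypothetical DB server addresses, not load balancer addresses, and the placeholders in angle brackets are left for you to fill in):

    # On each DB node, bind and advertise the node's own IP:
    ./bin/yb-master \
      --rpc_bind_addresses=10.1.0.11:7100 \
      --server_broadcast_addresses=10.1.0.11:7100

    ./bin/yb-tserver \
      --rpc_bind_addresses=10.1.0.11:9100 \
      --server_broadcast_addresses=10.1.0.11:9100

    # When creating the replication link, point yb-admin at the DB
    # servers' master addresses on both sides, never at the LB:
    ./bin/yb-admin -master_addresses 10.2.0.11:7100,10.2.0.12:7100,10.2.0.13:7100 \
      setup_universe_replication <source_universe_uuid> \
      10.1.0.11:7100,10.1.0.12:7100,10.1.0.13:7100 <table_ids>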
Related
I have a web server (Web01) set up in a VM. Currently I'm facing a performance issue on this server: the bottleneck is too many requests, and the server doesn't have enough processing power to handle them. I have two options to resolve this problem:
Increase CPU and Memory
Set up Web02 in a VM (on the same VM host as Web01) and build an NLB.
I don't know which of the two options is best. In particular, I'm unsure about option 2: if I set up two web servers on the same VM host, will performance be better than with option 1?
I can share some thoughts with you on the pros and cons of NLB, but I can't directly help you make a choice.
Network load balancing has several potential advantages. By distributing network traffic among multiple servers or virtual machines, processing is faster than if all traffic flowed through a single server. If demand decreases, servers can be taken offline, and NLB will balance the traffic among the remaining hosts. NLB provides fault tolerance at the network layer, ensuring that connections are not directed to servers that are down. It also enables organizations to rapidly scale server applications by adding hosts and distributing the application's traffic among them.
But it still has some drawbacks. NLB works only at the IP level and cannot detect service interruptions: if a service on a particular server fails, WNLB cannot detect the failure and will still route requests to that server. It also cannot take each server's current CPU load and RAM utilization into account when distributing the client load. Note too that two VMs on the same physical host share the same underlying CPU and memory, so option 2 on a single host won't give you more total capacity than option 1.
[Question posted by a user on YugabyteDB Community Slack]
I am wondering whether, for the attached (image) setup, I should try to set up xCluster/asynchronous replication between Site A and Site B as described in https://docs.yugabyte.com/preview/deploy/multi-dc/async-replication/, or simply create a single swarm of containers across both sites. Is there any reason that xCluster/async won’t work?
Some notes:
Basically there are two physical sites that connect to a single telco company.
Via dynamic DNS I can get a stable URL that maps to whatever the link's variable IP address is at the time.
Within each site, the machines natively talk to each other only via their internal IP addresses, but they use the dynamic DNS name to communicate with machines at the other site via port forwarding.
The goal at this stage is a yb-master and yb-tserver on each node.
Perhaps it’s all handled by the magic of Docker networking and Yugabyte works seamlessly with all that?
No, this is not possible. The servers need to connect to each other directly, just as they do within a site: think of A2 sending and receiving changes directly to and from B2, with no port forwarding or NAT in between.
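The key requirement is direct reachability: from any node in Site A, the Site B nodes' RPC ports must be reachable on the exact addresses those servers advertise, and vice versa. A quick way to sanity-check this (the IP is hypothetical; ports are the YugabyteDB defaults):

    # From a Site A node, test each Site B node directly:
    nc -vz 192.168.2.12 7100   # yb-master RPC port
    nc -vz 192.168.2.12 9100   # yb-tserver RPC port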
I have a Node.js web application running on an Amazon EC2 server.
Now, from this Node app on EC2, I have to access a database system (SQL Server) that sits in the customer's in-house network and can be accessed only via VPN. What are the possible ways to do this?
Note:
- The in-house DB cannot be exposed to the public
There are three options:
1) Expose your database publicly and connect from your app using a secure protocol (e.g. SSL/TLS). This is probably a horrible idea, but it is possible.
2) Set up a VPN between AWS and the data center where the database lives. This is a quick, easy way to set up a hybrid architecture.
3) Set up Direct Connect between AWS and the data center. This can reduce latency, provide network sovereignty, and depending on the amount of traffic between the app and the db may actually be cheaper than option 2.
You can set up a VPN between the VPC and the customer's network.
ref : https://aws.amazon.com/premiumsupport/knowledge-center/create-connection-vpc/
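For the VPN route, a rough sketch of the AWS side using the CLI (every ID, IP, and CIDR below is hypothetical, and the customer's router still needs the matching IPsec configuration):

    # Represent the customer's VPN device:
    aws ec2 create-customer-gateway --type ipsec.1 \
      --public-ip 203.0.113.10 --bgp-asn 65000

    # Create a virtual private gateway and attach it to your VPC:
    aws ec2 create-vpn-gateway --type ipsec.1
    aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-aaaa1111 --vpc-id vpc-bbbb2222

    # Create the site-to-site VPN connection, with a static route
    # to the customer's network:
    aws ec2 create-vpn-connection --type ipsec.1 \
      --customer-gateway-id cgw-cccc3333 --vpn-gateway-id vgw-aaaa1111 \
      --options '{"StaticRoutesOnly":true}'
    aws ec2 create-vpn-connection-route --vpn-connection-id vpn-dddd4444 \
      --destination-cidr-block 192.168.0.0/16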
We're running MongoDB on Azure and are in the process of setting up a production replica set (no shards) and I'm looking at the recommendations here:
http://docs.mongodb.org/ecosystem/tutorial/install-mongodb-on-linux-in-azure/
And I see the replica set config is such that the members will talk to each other via external IP addresses. Isn't this going to 1) incur additional Azure costs, since the replication traffic goes through the external IPs, and 2) incur replication latency for the same reason?
At least one of our applications that will talk to Mongo will be running outside of Azure.
AWS has a feature where external DNS names, when looked up from the VMs, resolve to internal IPs, and when resolved from outside, to the external IP, which makes things significantly easier :) In my previous job, I ran a fairly large sharded MongoDB in AWS...
I'm curious what you folks recommend. I had two ideas...
1) configure each mongo host with an external IP (not entirely sure how to do this in Azure but I'm sure it's possible...) and configure DNS to point to those IPs externally. Then configure each VM to have an /etc/hosts file that points those same names to internal IP addresses. Run Mongo on port 27017 in all cases (or really whatever port). This means that the set does replication traffic over internal IPs but external clients can talk to it using the same DNS names.
2) Similar to #1, but run mongo on 3 different ports with only one external IP address, and point all three external DNS names to this external IP address. We achieve the same results, but it's cleaner I think.
Thanks!
Jerry
There is no best way, but let me clarify a few of the "objective" points:
There is no charge for any traffic moving between services / VMs / storage in the same region. Even if you connect from one VM to the other using servicename.cloudapp.net:port. No charge.
Your choice whether you make the mongod instances externally accessible. If you do create external endpoints, you'll need to worry about securing those endpoints (e.g. Access Control Lists). Since your app is running outside of Azure, this is an option you'll need to consider. You'll also need to think about how to encrypt the database traffic (mongodb Enterprise edition supports SSL; otherwise you need to build mongod yourself).
Again, if you expose your mongod instances externally, you need to consider whether to place them within the same cloud service (sharing an IP address, with a separate port per mongod instance) or in multiple cloud services (a unique IP address per cloud service). If the mongod instances are within the same cloud service, they can be clustered into an availability set, which reduces downtime by avoiding simultaneous host OS updates across all VMs and by splitting the VMs across multiple fault domains.
In the case where your app/web tier lives within Azure, you can use internal IP addresses, with both your app and MongoDB VMs within the same virtual network.
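For reference, a minimal sketch of the split-horizon idea from option 1 (all hostnames and IPs are hypothetical): each VM's /etc/hosts maps the public DNS names to internal IPs, so replication traffic stays on the internal network while external clients resolve the same names to the public IP:

    # /etc/hosts on each MongoDB VM:
    10.0.0.4  mongo1.example.com
    10.0.0.5  mongo2.example.com
    10.0.0.6  mongo3.example.com

    # Replica set config (mongo shell) uses the same names everywhere:
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "mongo1.example.com:27017" },
        { _id: 1, host: "mongo2.example.com:27017" },
        { _id: 2, host: "mongo3.example.com:27017" }
      ]
    })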
We are looking at deploying an application on Azure Web Sites and deploying Redis and Solr clusters on sets of Azure virtual machines. What is the best practice for restricting access so that just my Azure web site can access these boxes?
We store private information in the Redis and Solr clusters, so we cannot risk allowing other Azure websites access to them; allowing the complete IP range of the Azure data centres is a no-go.
Azure Web Sites do not have dedicated outbound IP addresses for each deployment. This precludes you from using ACLs or Virtual Networks to connect to your Redis / Solr virtual machines.
While you can filter IP traffic entering a Virtual Machine via ACL, this will only work with Cloud Services (web/worker roles) and Virtual Machines. Likewise, you can add Cloud Services and Virtual Machines to a Virtual Network, allowing you to directly access your Redis/Solr instances.
As @Itamar mentioned in his answer, you can use IP filtering on the Redis/Solr instances themselves, via the OS or within Redis/Solr as supported. You can also use an SSL connection.
Don't know about Solr, but if you want a secure connection to your Redis, you should consider using a secure proxy such as stunnel on both the website and the Redis servers (see for example http://bencane.com/2014/02/18/sending-redis-traffic-through-an-ssl-tunnel-with-stunnel)... or just use a Redis service provider that supports SSL (e.g. http://redislabs.com ;)).
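In the spirit of that stunnel approach, a minimal sketch (the certificate path, ports, and server IP are all hypothetical):

    ; stunnel.conf on the Redis VM: terminate TLS, forward to local Redis
    cert = /etc/stunnel/redis.pem
    [redis-server]
    accept = 6380
    connect = 127.0.0.1:6379

    ; stunnel.conf on the client side: plaintext in, TLS out
    client = yes
    [redis-client]
    accept = 127.0.0.1:6379
    connect = 203.0.113.20:6380

The app then connects to 127.0.0.1:6379 as if Redis were local, and stunnel encrypts everything in transit.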