Let's say I have set up continuous bidirectional replication in CouchDB:
HostA <-> HostB.
As far as I understand, this is the simplest master-master replication, so both HostA and HostB accept reads and writes. Is it possible for the client to be agnostic about which database it operates on? I use Ektorp as the Java driver for CouchDB. When I initialize the connection I have to provide an address, for example http://HostA.mydomain.com:5984. What happens when this host is actually down? I would like to achieve transparency, like in RavenDB's client library: when HostA is down, it tries to connect to the other replicated hosts (http://ravendb.net/docs/article-page/3.0/csharp/client-api/bundles/how-client-integrates-with-replication-bundle).
In other words - is CouchDB aware of its replication targets so that the client doesn't have to manually choose the currently-alive host?
One way to achieve this is to use a load balancer like HAProxy. HAProxy is commonly used with web application servers, but it can sit in front of any server that speaks HTTP. You can find an example configuration in the CouchDB repo.
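A minimal HAProxy configuration for this setup might look roughly like the following sketch (the hostnames, the /_up health-check endpoint, and the timeouts are assumptions to adapt to your environment):

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend couchdb_in
        bind *:5984
        default_backend couchdb_nodes

    backend couchdb_nodes
        option httpchk GET /_up      # CouchDB 2.x health endpoint; use GET / on 1.x
        server hosta HostA.mydomain.com:5984 check
        server hostb HostB.mydomain.com:5984 check

Your Ektorp client then points at the proxy rather than at either node directly; couch-lb.mydomain.com below is a hypothetical name for the HAProxy front end:

    import java.net.MalformedURLException;
    import org.ektorp.CouchDbInstance;
    import org.ektorp.http.HttpClient;
    import org.ektorp.http.StdHttpClient;
    import org.ektorp.impl.StdCouchDbInstance;

    public class CouchViaProxy {
        public static CouchDbInstance connect() throws MalformedURLException {
            HttpClient httpClient = new StdHttpClient.Builder()
                    .url("http://couch-lb.mydomain.com:5984") // the HAProxy address
                    .build();
            return new StdCouchDbInstance(httpClient);
        }
    }

HAProxy health-checks both nodes and routes each request to a live one, so failover is transparent to the client.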
I have a situation involving redis-cluster. We want to move to redis-cluster for high availability. Currently we have one transaction server, and we use Redis for managing mini-statements. We have a single instance of Redis running on the default port, bound to 0.0.0.0. In my transaction server I have a configuration file in which I put the Redis IP and port for the connection.
My Question:
1) Suppose I have two machines running a Redis server, and I want the transaction server to automatically switch to the second machine if the first one dies, with all keys still available. What IP and port should I configure in my transaction server's config file, and how should Redis be set up to achieve this?
A suggestion or a link would be helpful!
If you are looking for a high-availability solution for Redis, you should look into Redis Sentinel rather than Redis Cluster.
Redis Sentinel offers exactly what you need; see the official documentation for more information.
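The short answer to "what IP and port should I configure": don't point the transaction server at a single Redis IP at all. Run the second machine as a replica of the first, run Sentinel processes (default port 26379) alongside them, and give your client the Sentinel addresses plus a master name. A minimal sentinel.conf sketch (the master name mymaster, the IPs, and the quorum of 2 are assumptions):

    # Run a Sentinel with a file like this on at least three hosts
    sentinel monitor mymaster 192.168.1.10 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000

A Sentinel-aware client asks the Sentinels for the current master of mymaster and reconnects automatically after a failover, which is exactly the "switch to the second machine if the first dies" behavior you describe; master-to-replica replication keeps the keys available.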
I'm making a Node.js application that will act as a server for other sites in different countries. I would like to know how I can send this data safely and securely, since it is business-related.
I am currently using socket.io for my main (master) server; at the other sites there are (slave) servers that handle the data from the master server.
I have got this working in a local environment but want to deploy it to the other sites.
I have tried to Google this to see if anyone else has done it. I came across socket.io sessions, but I don't know if that fits (server -> server) connections.
Any help or shared experience would be appreciated.
For server-to-server communication where you control both ends, you can use WebSocket over HTTPS (WSS), TCP over an SSH tunnel, or any other encrypted tunnel. You can use a pub/sub service, a queue service, etc. There are a lot of ways to do it. Just make sure that the communication is encrypted, either natively by the protocols you use or with a VPN or tunnels that connect your servers in remote locations.
Socket.io is usually used as a replacement for WebSocket where there is no native support in the browser. It is rarely used for server to server communication. See this answer for more details:
Differences between socket.io and websockets
If you want a higher-level framework with a focus on real-time data, then see ActionHero:
https://www.actionherojs.com/
For other options for sending real-time data between servers you can use a shared resource like a Redis database, a pub/sub service like Faye or Kafka, or a queue like ZeroMQ or RabbitMQ. This is what is usually done to make things like that work across multiple instances of a server or multiple locations. You could also use a CouchDB changes feed, or the similar changefeeds feature of RethinkDB, to make sure that all of your instances get all the data as soon as it is posted by any one of them. See the links below and the pub/sub sketch after them:
http://docs.couchdb.org/en/2.0.0/api/database/changes.html
https://rethinkdb.com/docs/changefeeds/javascript/
https://redis.io/topics/pubsub
https://faye.jcoglan.com/
https://kafka.apache.org/
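To make the pub/sub option concrete, here is a minimal sketch using Redis with the Jedis Java client; the host redis.internal and the channel name business-data are assumptions, and the same pattern is available from Node.js with any Redis client:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class BusinessDataSubscriber {
        public static void main(String[] args) {
            try (Jedis jedis = new Jedis("redis.internal", 6379)) {
                // subscribe() blocks and calls onMessage for each message on the channel
                jedis.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println("Received on " + channel + ": " + message);
                    }
                }, "business-data");
            }
        }
    }

A publisher on any other server just calls jedis.publish("business-data", payload) and every subscribed site receives the message. Keep in mind that the Redis protocol itself is not encrypted, so protect it with one of the tunnels below.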
Everything that uses HTTP is easy to encrypt with HTTPS. Everything else can be encrypted with a tunnel or VPN.
Good tools that can add encryption to protocols that are not encrypted themselves (e.g. the Redis protocol) are listed below; a spiped example follows them:
http://www.tarsnap.com/spiped.html
https://www.stunnel.org/index.html
https://openvpn.net/
https://forwardhq.com/help/ssh-tunneling-how-to
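As an example, here is a rough spiped sketch for the unencrypted Redis protocol mentioned above; the key path, port numbers, and the server name redis.example.com are assumptions:

    # Generate a shared secret once and copy it to both machines
    dd if=/dev/urandom bs=32 count=1 of=/etc/spiped/redis.key

    # On the Redis server: decrypt traffic arriving on port 6380 and
    # forward it to the local Redis listening on 127.0.0.1:6379
    spiped -d -s '[0.0.0.0]:6380' -t '[127.0.0.1]:6379' -k /etc/spiped/redis.key

    # On each remote server: encrypt local connections to 127.0.0.1:6379
    # and send them to the Redis server on port 6380
    spiped -e -s '[127.0.0.1]:6379' -t 'redis.example.com:6380' -k /etc/spiped/redis.key

Applications on the remote servers then connect to 127.0.0.1:6379 as usual and never see the encryption.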
See also:
https://en.wikipedia.org/wiki/Tunneling_protocol
Note that some hosting providers may give you preconfigured tunnels or internal network interfaces that pass data between your servers encrypted, even across different data centers of that provider. Some providers also give you tools and tutorials to do that easily.
A beginner question, but I've looked through many questions on this site and haven't found a simple, straightforward answer:
I'm setting up a Linux server running Ubuntu to store a MySQL database.
It's important that this server is as secure as possible; as far as I'm aware, my main concerns should be incoming DoS/DDoS attacks and unauthorized access to the server itself.
The database server only receives incoming data from one specific IP (101.432.XX.XX), on port 3000. I only want this server to be able to receive incoming requests from this IP, as well as prevent the server from making any outgoing requests.
I'd like to know:
What is the best way to limit my database server to receiving incoming requests solely from 101.432.XX.XX, and to prevent it from making outgoing requests? Would closing all ports except 3000 help achieve this?
Are there any other additions to the Linux environment that can boost security?
I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
To access the database server requires the SSH key (which itself is password protected).
A famous man once said "security is a process, not a product."
So you have a db server that should ONLY listen to one other server for db connections, and you have the specific IP of that other server. There are several layers of restriction you can put in place to accomplish this:
1) Firewall
If your MySQL server is fortunate enough to be behind a firewall, you should be able to block out all connections by default and allow only certain connections on certain ports. I'm not sure how you've set up your db server, or whether the other server that wants to access it is on the same LAN or not or whether both machines are just virtual machines. It all depends on where your server is running and what kind of firewall you have, if any.
I often set up servers on Amazon Web Services. They offer security groups that allow you to block all ports by default and then allow access on specific ports from specific IP blocks using CIDR notation. I.e., you grant access in port/IP combination pairs. To let your one server get through, you might allow access on port 3000 to IP address 101.432.xx.xx.
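For example, with the AWS CLI a security group rule for that one port/IP pair might look like this; the group ID is hypothetical and 101.432.XX.XX is the placeholder from the question:

    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 3000 \
        --cidr 101.432.XX.XX/32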
The details will vary depending on your architecture and service provider.
2) iptables
Linux machines can run a local firewall (i.e., a process that runs on the server itself) called iptables. This is some powerful stuff, and it's easy to lock yourself out of your own server with it; there's a brief post here on SO, but you have to be careful. Keep in mind that you need to permit access on port 22 on all of your servers so that you can log in to them: if you can't connect on port 22, you'll never be able to log in using ssh again. I always try to take a snapshot of a machine before tinkering with iptables lest I permanently lock myself out.
There is a bit of info here about iptables and MySQL also.
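A rough rule-set sketch for the setup in the question; <your-admin-ip> is a hypothetical placeholder for wherever you SSH from, 101.432.XX.XX is the question's placeholder, and the ACCEPT rules come before the DROP policies so a live SSH session isn't cut off:

    # Allow loopback and already-established traffic
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

    # Allow SSH from your admin IP and MySQL (port 3000) from the app server
    iptables -A INPUT -p tcp --dport 22 -s <your-admin-ip>/32 -j ACCEPT
    iptables -A INPUT -p tcp --dport 3000 -s 101.432.XX.XX/32 -j ACCEPT

    # Drop everything else inbound
    iptables -P INPUT DROP

    # Optionally block new outbound connections as well
    iptables -A OUTPUT -o lo -j ACCEPT
    iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    iptables -P OUTPUT DROP

Note that rules added this way do not survive a reboot on their own; persist them with iptables-save or your distribution's iptables-persistent package.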
3) MySQL cnf file
MySQL has some configuration options that can limit any db connections to localhost only - i.e., you can prevent any remote machines from connecting. I don't know offhand if any of these options can limit the remote machines by IP address, but it's worth a look.
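For the localhost-only case the relevant option is bind-address, which controls the interface MySQL listens on (not which clients may connect); a sketch, with the file path varying by distribution and the port taken from the question:

    # /etc/mysql/my.cnf (location varies by distribution)
    [mysqld]
    bind-address = 127.0.0.1   # or the server's private LAN address
    port = 3000                # the question uses 3000 rather than the default 3306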
4) MySQL access control via GRANT, etc.
MySQL allows you very fine-grained control over who can access what in your system. Ideally, you would grant access to information or functions only on a need-to-know basis. In practice, this can be a hassle, but if security is what you want, you'll go the extra mile.
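Because MySQL accounts are user@host pairs, this layer can also restrict by client IP. A minimal sketch, with the user name, database name, and password as hypothetical placeholders:

    -- Create a user that may connect only from the app server's IP
    CREATE USER 'appuser'@'101.432.XX.XX' IDENTIFIED BY 'use-a-strong-password';

    -- Grant only what the application actually needs
    GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'101.432.XX.XX';
    FLUSH PRIVILEGES;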
To answer your questions:
1) YES, you should definitely try to limit access to your DB server's MySQL port 3000 -- and also port 22, which is what you use to connect via SSH.
2) Aside from the ones mentioned above, your limiting of phpMyAdmin to only your IP address sounds really smart -- but make sure you don't lock yourself out accidentally. I would also strongly suggest that you disable password access for SSH connections, forcing the use of key pairs instead. You can find lots of examples on Google.
What is the best way to limit my database server to receiving incoming requests solely from 101.432.XX.XX, and to prevent it from making outgoing requests? Would closing all ports except 3000 help achieve this?
If you don't have access to a separate firewall, I would use iptables; there are a number of front-ends available to help you manage it. So yes, closing everything else helps. If you are using iptables, make sure you have a way of accessing the server OOB (short for out of band), meaning that if you make a mistake in iptables you can still access it via console/remote hands/IPMI, etc.
Next up, when creating MySQL users, you should allow connections only from that subnet range, in addition to user/password authentication.
Are there any other additions to the Linux environment that can boost security? I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
Ubuntu ships with something called AppArmor. I would investigate that. That can be helpful to prevent some shenanigans. An alternative is SELinux.
Further, take more steps with phpmyadmin. That is your weakest link in the security tool chain we are building.
To access the database server requires the SSH key (which itself is password protected).
If security is a concern, I would NOT use SSH-key-style access. Instead, I would use MySQL's native support for SSL certificate authentication. Here is how to configure it with phpMyAdmin.
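On the server side that roughly means enabling TLS in my.cnf; the paths are assumptions, and require_secure_transport needs MySQL 5.7 or later:

    [mysqld]
    require_secure_transport = ON
    ssl-ca   = /etc/mysql/certs/ca.pem
    ssl-cert = /etc/mysql/certs/server-cert.pem
    ssl-key  = /etc/mysql/certs/server-key.pem

and then forcing the account to present a valid client certificate:

    ALTER USER 'appuser'@'101.432.XX.XX' REQUIRE X509;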
I was unable to find existing answers on this topic.
I am running a redis client that connects to a remote redis server (not on the same host).
I can connect either via the domain name or via the server's IP, i.e. I can launch the client either via redis-cli -h 123.123.123.123 or redis-cli -h my.domain.com. It is more convenient to use the domain name.
Speed is important for my use case, therefore I would like to know whether the "costly" DNS lookup happens only once at startup or multiple times throughout the life of the client.
Thanks!
The overhead will be paid only when a connection is established.
If you make sure that your application is keeping permanent connections to the Redis instances instead of systematically connecting/disconnecting, I would say the overhead is negligible.
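For instance, with the Jedis Java client (the key and value are arbitrary), a connection pool resolves my.domain.com when it opens a connection and then reuses that connection, so individual commands pay no DNS cost:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;

    public class PooledClient {
        public static void main(String[] args) {
            // The DNS lookup happens when the pool creates a connection,
            // not on every command
            try (JedisPool pool = new JedisPool("my.domain.com", 6379)) {
                try (Jedis jedis = pool.getResource()) {
                    jedis.set("greeting", "hello");
                    System.out.println(jedis.get("greeting")); // same TCP connection
                }
            }
        }
    }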
I have a number of services running on various machines which need to communicate over arbitrary ports. Right now port discovery happens by pushing a config file to each machine which contains mappings of a service-name to a hostname/port combo.
For all the same reasons that DNS works better than manually maintaining an /etc/hosts on each machine, I'd like to have a centralized system to register and lookup these hostname/port combos.
Yes, building a simple version of this system wouldn't take long at all (it's just a key-value store), but ideally the service would be fast, redundant, auto-updating and have fail-over, which would obviously take a bit more time to build from scratch.
I can't imagine I'm the first to need such a tool, but so far my Google-fu has failed me. Is there something out there built for this purpose? Or should I just set up Kyoto Tycoon or ZooKeeper and write a bit of caching/lookup/failover logic myself?
DNS supports SRV records, which are designed for exactly this (service location).
SRV records are of the following form (courtesy Wikipedia):
_service._proto.name TTL class SRV priority weight port target
service: the symbolic name of the desired service.
proto: the transport protocol of the desired service; this is usually either TCP or UDP.
name: the domain name for which this record is valid.
TTL: standard DNS time to live field.
class: standard DNS class field (this is always IN).
priority: the priority of the target host, lower value means more preferred.
weight: A relative weight for records with the same priority.
port: the TCP or UDP port on which the service is to be found.
target: the canonical hostname of the machine providing the service.
Most modern DNS servers support SRV records.
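As a sketch, two hypothetical instances of a service called myapp under example.com could be published and then queried like this:

    ; zone file entries: equal priority 10, weights 60/40, ports 5050/5051
    _myapp._tcp.example.com. 86400 IN SRV 10 60 5050 host1.example.com.
    _myapp._tcp.example.com. 86400 IN SRV 10 40 5051 host2.example.com.

    # clients can look the service up with, e.g.:
    dig +short SRV _myapp._tcp.example.com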
Avahi (an open implementation of Apple's Bonjour) advertises the services, by port, that each machine offers.
Not sure if it's exactly what you're looking for, but it's definitely in this vein.
The concept is that each machine would announce what services it is running on each port.
But this is limited to a LAN implementation, which I'm not sure fits your requirements.
To add a little more meat to this answer, here is an example Avahi service file advertising a web server:
    <?xml version="1.0" standalone='no'?><!--*-nxml-*-->
    <!DOCTYPE service-group SYSTEM "avahi-service.dtd">
    <service-group>
      <name replace-wildcards="yes">%h Web Server</name>
      <service>
        <type>_http._tcp</type>
        <port>80</port>
      </service>
    </service-group>
I personally think ZooKeeper is a great fit for this use case. Ephemeral nodes mean that registration cleanup is not a problem, freeing you to use dynamic port allocation on the server side, and watches will help with rebalancing client-to-server mappings. That said, using ZooKeeper for server-side registration and DNS SRV records for client-side lookup (via a ZooKeeper-to-DNS bridge) would work well for most use cases.
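To make the ephemeral-node idea concrete, here is a minimal registration sketch with the ZooKeeper Java client; the connection string, the path /services/myapp (assumed to already exist), and the advertised address are all assumptions:

    import java.nio.charset.StandardCharsets;
    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class ServiceRegistrar {
        public static void main(String[] args) throws Exception {
            ZooKeeper zk = new ZooKeeper("zk1.internal:2181", 15000, event -> {});

            // EPHEMERAL_SEQUENTIAL: the znode is deleted automatically when
            // this session dies, so a crashed server unregisters itself
            String path = zk.create(
                    "/services/myapp/instance-",
                    "10.0.0.5:5050".getBytes(StandardCharsets.UTF_8), // host:port payload
                    ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.EPHEMERAL_SEQUENTIAL);

            System.out.println("Registered as " + path);
        }
    }

Clients list the children of /services/myapp with a watch set, so they are notified whenever an instance appears or disappears.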