Redis connection via domain name instead of IP: speed?

I was unable to find existing answers on this topic.
I am running a redis client that connects to a remote redis server (not on the same host).
I can connect either via the domain name or via the server's IP, i.e. I can launch the client either via redis-cli -h 123.123.123.123 or redis-cli -h my.domain.com. It is more convenient to use the domain name.
Speed is important for my use case, therefore I would like to know whether the "costly" DNS lookup happens only once at startup or multiple times throughout the life of the client.
Thanks!

The overhead will be paid only when a connection is established.
If you make sure that your application keeps permanent connections to the Redis instances instead of systematically connecting and disconnecting, I would say the overhead is negligible.
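For example, with a Node.js client (a minimal sketch assuming the node-redis v4 package; the host name is just the one from the question), the name is resolved once when the connection is opened and the same socket is reused for every command:

```javascript
// Sketch only, assuming the node-redis (v4+) client. DNS is resolved when
// connect() opens the TCP socket; subsequent commands reuse that socket,
// so no further lookups occur until the client has to reconnect.
const { createClient } = require('redis');

const client = createClient({ url: 'redis://my.domain.com:6379' });

async function main() {
  await client.connect();                     // DNS lookup + TCP connect happen here
  await client.set('greeting', 'hello');
  console.log(await client.get('greeting'));  // reuses the same connection
}

main().catch(console.error);
```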

Related

What should be the IP and port for connecting to redis-cluster?

I have a situation to deal with regarding redis-cluster. We want to move to redis-cluster for high availability. Currently we have one transaction server, and we use Redis for managing mini-statements. We have a single instance of Redis running on the default port, bound to 0.0.0.0. In my transaction server I have a configuration file in which I put the Redis IP and port for the connection.
My Question:
1) Suppose I have two machines running Redis, and I want the transaction server to automatically switch to the second machine if the first one dies, with all keys still available. What IP and port should I configure in my transaction server's config file, and what Redis setup do I need to achieve this?
A suggestion or a link will be helpful!
If you are looking for a high availability solution for Redis, you should look into Redis Sentinel rather than Redis Cluster.
Redis Sentinel offers exactly what you need; see the official documentation for more information.
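As a client-side illustration (a sketch assuming the ioredis package; the Sentinel host names and the master group name "mymaster" are placeholders for your own setup), the transaction server only needs the Sentinel endpoints in its config and follows the current master automatically after a failover:

```javascript
// Sketch only: ioredis client pointed at Sentinel instead of a fixed Redis IP.
// The host names and "mymaster" are placeholders.
const Redis = require('ioredis');

const redis = new Redis({
  sentinels: [
    { host: 'sentinel-1.example.com', port: 26379 },
    { host: 'sentinel-2.example.com', port: 26379 },
  ],
  name: 'mymaster', // the master group name configured in sentinel.conf
});

// The client asks Sentinel for the current master's address, so the
// transaction server's config never hard-codes a single Redis IP.
redis.set('last-mini-statement', 'txn-123')
  .then(() => redis.get('last-mini-statement'))
  .then(console.log);
```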

Horizontal scaling with a node.js app & socket io

My team and I are working on a digital signage platform.
We have ~2000 Raspberry Pis around the world connected to a Node.js server using Socket.IO. The Raspberry Pis initiate the connection.
We would like to scale our application horizontally across multiple servers, but there is a problem we can't figure out.
Basically, the application stores the sockets of the connected Raspberry Pis in an array.
We have an external program that calls an API on the server; the server then works out which sockets are "impacted" by the API call and sends them the information.
After a lot of searching, we assume we have to store the sockets (or their IDs) elsewhere (Redis?) to make the application stateless. Then any server could respond to an API call and look up the sockets in a central place.
Unfortunately, we can't find any detailed example of how to do that.
Can you please help us?
Thanks
(You can't store sockets from multiple server instances in a shared datastore like redis: they only make sense in the context of the server where they were initiated).
You will need a cluster of node.js servers to handle this. There are various ways to make a cluster. They all involve directing incoming connections from your RPis to a "generic" hostname, for example server.example.com. Behind that server.example.com hostname will be multiple node.js servers.
Each incoming connection from each RPi connects to just one of those multiple servers. (You know this, I believe.) This means one node.js server in your cluster "owns" each individual RPi.
(Telling you how to rig up a cluster of node.js servers is beyond the scope of this answer. Hints: round-robin DNS or a reverse-proxy nginx front end.)
Then, you want to route -- to fan out -- the incoming data from each API call to each server in the cluster, so the server can route it to the RPis it owns.
Here's a good way to handle that:
Set up a redis cache or other shared data store. It can be very small.
When each node.js server starts, have it register itself as active. That is, have it place its own specific address for handling API calls into the shared data store. The specific address is probably of the form 12.34.56.78:3000: that is, an IP address and port.
Have each server update that address every so often, once a minute or so, to show it is still alive.
When an API call arrives at server.example.com, it will come to a more-or-less randomly chosen node.js server instance.
Get that server to read the list of server addresses from the redis cache.
Get that server to repeat the API call to all servers except itself. Add a parameter like repeated=yes to the repeated API calls.
Then, each server looks at its list of connected sockets and does what your application requires.
On server shutdown, have the server unregister itself -- remove its address from redis -- if possible.
In other words, build a way of fanning out the API calls to all active node.js servers in your cluster.
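Here is a rough sketch of that registration and fan-out pattern. It is only an illustration: it assumes Express, the node-redis (v4) client and Node 18+ (for the global fetch), and the names SELF_ADDRESS, redis.example.com, /api/update and notifyOwnedSockets() are placeholders, not part of the original setup.

```javascript
// Rough sketch of the register-and-fan-out idea described above.
const express = require('express');
const { createClient } = require('redis');

const SELF = process.env.SELF_ADDRESS;   // e.g. "12.34.56.78:3000" for this instance
const redis = createClient({ url: 'redis://redis.example.com:6379' });
const app = express();

async function register() {
  // Register this instance; the 60s TTL lets dead instances drop out automatically.
  await redis.set(`servers:${SELF}`, '1', { EX: 60 });
}

// Placeholder for your existing "find impacted sockets and emit" logic.
function notifyOwnedSockets(payload) {
  /* look through this instance's connected sockets and send the data */
}

app.post('/api/update', express.json(), async (req, res) => {
  if (!req.query.repeated) {
    // First receiver: fan the call out to every other registered instance.
    const keys = await redis.keys('servers:*');
    const others = keys
      .map(k => k.slice('servers:'.length))
      .filter(addr => addr !== SELF);
    await Promise.all(others.map(addr =>
      fetch(`http://${addr}/api/update?repeated=yes`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(req.body),
      })
    ));
  }
  // Every instance (original and repeated) notifies the sockets it owns.
  notifyOwnedSockets(req.body);
  res.sendStatus(204);
});

redis.connect().then(async () => {
  await register();
  setInterval(register, 30_000);  // refresh the registration well before the TTL expires
  app.listen(3000);
});
```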
If this must scale up to a very large number (more than a hundred or so) node.js servers, or to many hundreds of API calls a minute, you probably should investigate using message queuing software.
SECURE YOUR REDIS server from random cybercreeps on the internet.

Securing a simple Linux server that holds a MySQL database?

A beginner question, but I've looked through many questions on this site and haven't found a simple, straightforward answer:
I'm setting up a Linux server running Ubuntu to store a MySQL database.
It's important this server is as secure as possible; as far as I'm aware, my main concerns should be incoming DoS/DDoS attacks and unauthorized access to the server itself.
The database server only receives incoming data from one specific IP (101.432.XX.XX), on port 3000. I only want this server to be able to receive incoming requests from this IP, as well as prevent the server from making any outgoing requests.
I'd like to know:
What is the best way to prevent my database server from making outgoing requests and receiving incoming requests solely from 101.432.XX.XX? Would closing all ports ex. 3000 be helpful in achieving this?
Are there any other additions to the linux environment that can boost security?
I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
To access the database server requires the SSH key (which itself is password protected).
A famous man once said "security is a process, not a product."
So you have a db server that should ONLY listen to one other server for db connections, and you have the specific IP for that one other server. There are several layers of restriction you can put in place to accomplish this:
1) Firewall
If your MySQL server is fortunate enough to be behind a firewall, you should be able to block out all connections by default and allow only certain connections on certain ports. I'm not sure how you've set up your db server, or whether the other server that wants to access it is on the same LAN or not or whether both machines are just virtual machines. It all depends on where your server is running and what kind of firewall you have, if any.
I often set up servers on Amazon Web Services. They offer security groups that allow you to block all ports by default and then allow access on specific ports from specific IP blocks using CIDR notation. I.e., you grant access in port/IP combination pairs. To let your one server get through, you might allow access on port 3000 to IP address 101.432.xx.xx.
The details will vary depending on your architecture and service provider.
2) IPTables
Linux machines can run a local firewall (i.e., a process that runs on each of your servers itself) called iptables. This is powerful stuff, but it's easy to lock yourself out of your server using iptables. There's a brief post here on SO, but you have to be careful. Keep in mind that you need to permit access on port 22 on all of your servers so that you can log in to them; if you can't connect on port 22, you'll never be able to log in using ssh again. I always try to take a snapshot of a machine before tinkering with iptables lest I permanently lock myself out.
There is a bit of info here about iptables and MySQL also.
3) MySQL cnf file
MySQL has some configuration options that can limit any db connections to localhost only - i.e., you can prevent any remote machines from connecting. I don't know offhand if any of these options can limit the remote machines by IP address, but it's worth a look.
4) MySQL access control via GRANT, etc.
MySQL allows you very fine-grained control over who can access what in your system. Ideally, you would grant access to information or functions only on a need-to-know basis. In practice, this can be a hassle, but if security is what you want, you'll go the extra mile.
To answer your questions:
1) YES, you should definitely try and limit access to your DB server's MySQL port 3000 -- and also port 22 which is what you use to connect via SSH.
2) Aside from the ones mentioned above, your limiting of PHPMyAdmin to only your IP address sounds really smart -- but make sure you don't lock yourself out accidentally. I would also strongly suggest that you disable password access for ssh connections, forcing the use of key pairs instead. You can find lots of examples on Google.
What is the best way to prevent my database server from making outgoing requests and receiving incoming requests solely from 101.432.XX.XX? Would closing all ports ex. 3000 be helpful in achieving this?
If you don't have access to a separate firewall, I would use iptables; there are a number of front-end managers available for it. So yes, closing the unneeded ports will help. Remember that if you are using iptables, make sure you have a way of accessing the server out of band (OOB), so that if you make a mistake in iptables you can still reach the machine via console, remote hands, IPMI, etc.
Next up, when creating users, you should only allow that subnet range plus user/pass authentication.
Are there any other additions to the linux environment that can boost security? I've taken very basic steps to secure my phpmyadmin portal (linked to the MySQL database), such as restricting access to solely my personal IP address.
Ubuntu ships with something called AppArmor. I would investigate that. That can be helpful to prevent some shenanigans. An alternative is SELinux.
Further, take more steps with phpmyadmin. That is your weakest link in the security tool chain we are building.
To access the database server requires the SSH key (which itself is password protected).
If security is a concern, I would NOT use SSH key style access. Instead, I would use MySQL's native support for SSL certificate authentication. Here is how to configure it with phpmyadmin.

Bypassing socket connections in node.js

I'm working in a project where we need to connect clients to devices behind LAN networks.
Brief description: there are "devices" connected, in a home for example, on a LAN created by a router. These devices each run a full web server on Linux, using Node.js as the backend implementation language. They also have access to the Internet through the public IP of the router. On the other side, there are clients which can choose which device to connect to.
The goal is to connect the clients with the web server created by any device.
Up to now, my idea is to implement something similar to how TeamViewer works. As I understand it, TeamViewer has a central server which the agents connect to. When an agent connects to the central server, the server gets hold of the TCP connection and keeps it alive. When another client wants to access the first client, the server bypasses (bridges) the two TCP connections. That way the server acts like a proxy that additionally routes the TCP connections. This also allows connecting to clients behind LANs or firewalls (because the connections are always created from the clients).
If this is correct, what I would like to implement is a central server, in nodejs as well, which manages a pool of socket connections coming from the different active devices, and when a client wants to connect to one specific device, the server bypasses the incoming TCP connection of the client with the already existing connection of the device.
What I first would like to know is if this is possible in node.js. My idea is to keep the device connections alive, so clients can immediately connect to them, creating some sort of pool of device connections.
If implemented in C, I guess I could get hold of the socket descriptor, keep it alive, and hand it over to the incoming client request. But in node.js I can't seem to find any modules that manage TCP connections in this way.
Are there any high-level npm packages which do this? Or is it possible to use lower-level modules (like net) which have this functionality?
Ideally I would like to implement it with high level modules (express), but if it's not possible, I could always rewrite the server using low level modules.
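To make the idea concrete, this is the kind of low-level bridging I have in mind, written as a rough, untested sketch using only the core net module (the ports and the one-line "registration" message are invented for the example):

```javascript
// Rough sketch: devices dial in and stay connected; a client gets piped
// ("bypassed") onto the existing connection of the device it asks for.
const net = require('net');

const devices = new Map(); // deviceId -> idle socket from that device

// Devices connect here and announce themselves with a single line, e.g. "device-42\n".
net.createServer(socket => {
  socket.once('data', chunk => {
    const deviceId = chunk.toString().trim();
    devices.set(deviceId, socket);
    socket.on('close', () => devices.delete(deviceId));
  });
}).listen(9000);

// Clients connect here and send the target device id as the first line, e.g. "device-42\n".
net.createServer(clientSocket => {
  clientSocket.once('data', chunk => {
    const deviceSocket = devices.get(chunk.toString().trim());
    if (!deviceSocket) return clientSocket.end();
    // Splice the two TCP streams together in both directions.
    clientSocket.pipe(deviceSocket);
    deviceSocket.pipe(clientSocket);
  });
}).listen(9001);
```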
Thanks in advance

ZNC - getting around an IRC server's per-IP connection limit

I want to connect more than five bouncers to my favourite IRC network.
Unfortunately, the server accepts only up to five connections from one IP.
How can I do this, and is it actually possible?
I have only one server with one IP, but I have a domain with an unlimited number of subdomains.
You could use a proxy server.
http://en.wikipedia.org/wiki/Proxy_server
Either ask the network for a connection limit exemption (which a network should be able to give you if you explain why you need it), or you'll need a second IP or a second server - there's no way around this.
With a second machine, you could set up a bouncer on that machine (such as irssi with irssi-proxy), then connect ZNC to irssi. Alternatively, you could use SSH tunnelling to route your IRC connection through another machine.
Neither method is particularly good; multiple ZNC instances on multiple machines, or an exemption, is probably the best way. Talk to the network staff about it and see what they can do.
