Creating a Redis cluster: cannot connect to the server, what's wrong? - linux

I have 3 different servers deployed on Aliyun, each running two Redis instances on ports 6379 and 6380.
I was trying to build a Redis cluster with these 6 nodes (Redis version 3.2.0), but it failed with "Sorry, cannot connect to the node 10.161.94.215:6379" (10.161.94.215 is the LAN IP address of my first server).
Yet the servers were clearly running fine, and I could reach them with redis-cli.
The required gem is installed.
requirepass is disabled, so no auth is needed.
No IP bind is set.
No protected-mode either.
[screenshot of the error]
All the cluster-related configuration options are set correctly.
What's wrong with this?

I think I know why now: use the loopback IP for the local node when running redis-trib from that host.
src/redis-trib.rb create 127.0.0.1:6379 127.0.0.1:6380 h2:p1 h2:p2 h3:p1 h3:p2
(h2:p1 and so on stand for the host:port pairs of the remaining nodes.)
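For instance, a hypothetical concrete invocation, assuming the other two servers' LAN addresses are 10.161.94.216 and 10.161.94.217 (substitute your own):
src/redis-trib.rb create 127.0.0.1:6379 127.0.0.1:6380 10.161.94.216:6379 10.161.94.216:6380 10.161.94.217:6379 10.161.94.217:6380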

I think you are creating the cluster from a different subnet; that might be the problem.

Protected mode is a new security feature in Redis 3.2. The short version: if you don't explicitly bind to an IP address, Redis will only allow access from localhost.
If you only wish to create a cluster on a single host, this may be OK. If you're using multiple hosts to create a cluster, you'll need to either turn off protected mode or explicitly bind to an IP address.
From redis.conf file:
# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
# "bind" directive.
# 2) No password is configured.
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes
Redis prints instructions on how to correct this when you attempt to connect to it from something other than the loopback interface:
DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
The output of redis-trib.rb is fairly terse (probably appropriately so).
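As a minimal sketch of option 1 from that message (run on each Redis host, against each instance; CONFIG REWRITE persists the change to the config file):
redis-cli -p 6379 CONFIG SET protected-mode no
redis-cli -p 6379 CONFIG REWRITE
redis-cli -p 6380 CONFIG SET protected-mode no
redis-cli -p 6380 CONFIG REWRITE
Only do this if the instances are not reachable from the public internet.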

sudo nano /etc/redis/6379.conf
Replace #bind 127.0.0.1 or bind 127.0.0.1 with bind 0.0.0.0
sudo service redis_6379 restart
This allows access to Redis from anywhere, so make sure the port is protected by a firewall or a password.
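To verify from another machine (10.161.94.215 is the server address from the question; substitute your own), a PONG reply means the bind change took effect:
redis-cli -h 10.161.94.215 -p 6379 ping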

Related

Target specific Azure Web App Instance with Header instead of a Cookie

I have an architecture with multiple instances, and I want to maximize cache hits.
Users are defined in groups, and I want all users that belong to the same group to hit the same server as much as possible.
The application is fully stateless, but having users from the same group hit the same server will dramatically improve performance and reduce the memory load across instances.
When loading the main page, I already know which server I would like to send this user to on the XHR call.
Using the ARRAffinity cookie is almost impossible in this scenario (cross-domain, have to make a server call first, etc.), and I would strongly prefer to send a hint myself through a custom header.
I'm manually trying some workarounds, deleting and reassigning the cookies, but it feels very hacky, I haven't got it fully working yet, and it doesn't work for XHR calls.
Question:
Is it possible to direct to a specific instance through a header, url or domain instead of a cookie?
Notes
A distributed cache does not work for me in this case. I need the performance of an in-memory cache, without extra network hops and serialization/deserialization.
This seems to be possible with Application Gateway, but it seems to need a lot of extra infrastructure and moving parts, while all my problems would be fixed by sending the "right" header.
I could fix this by duplicating the web app in its entirety and assigning a different hostname, but that also adds a lot of extra moving parts that can break. Maintenance would be harder and more confusing, I lose autoscale, etc.
Maybe this could be fixed easily by a Kubernetes/Docker Swarm type of architecture (no experience), but as this is a large legacy project and I have a pretty strict deadline, I am very cautious about making such a dramatic switch at the last minute.
If I understand correctly, you want to set a custom header in a client application and, based on that, proxy the connection over to a particular backend server.
I like to use HAProxy for that; you can also look into Nginx.
You can install HAProxy on Linux from your distribution's package manager, or use the available HAProxy Docker container.
An example of installing it on ArchLinux:
sudo pacman -S haproxy
sudo systemctl start haproxy
Once it's installed, find where your haproxy.cfg config file is located, then replace the existing default config with the haproxy.cfg snippet I posted below.
In my case haproxy.cfg was in /etc/haproxy/haproxy.cfg
To achieve what you want in HAProxy, you would point all clients at this main HAProxy server, which then forwards each connection to one of your backend servers based on the value of a custom header that you set client-side, for example "x-mycustom-header: Server-one". As a bonus, you can also enable sticky sessions on HAProxy if needed, but that is not mandatory for what you are looking for.
Here is a simple example setup for the HAProxy config file (haproxy.cfg) with only 2 backend servers, but you can add more.
The logic here is that all clients make HTTP requests to the HAProxy server listening on port 80. HAProxy checks the value of the custom header called 'x-mycustom-header' that the clients added and, based on that value, forwards the client to either backend_server_one or backend_server_two.
For testing purposes both HAProxy and the two backends are on the same box but listening on different ports. HAProxy on port 80, server1 is on 127.0.0.1:3000 and server2 is on 127.0.0.1:4000.
cat haproxy.cfg
#---------------------------------------------------------------------
# Example configuration. See the full configuration manual online.
#
# http://www.haproxy.org/download/1.7/doc/configuration.txt
#
#---------------------------------------------------------------------
global
    maxconn 20000
    log 127.0.0.1 local0
    user haproxy
    chroot /usr/share/haproxy
    pidfile /run/haproxy.pid
    daemon

frontend main
    bind :80
    mode http
    log global
    option httplog
    option dontlognull
    option http_proxy
    option forwardfor except 127.0.0.0/8
    maxconn 8000
    timeout client 30s
    # -i makes the header match case-insensitive, so "Server-one" and "server-one" both work
    use_backend backend_server_one if { req.hdr(x-mycustom-header) -i server-one }
    use_backend backend_server_two if { req.hdr(x-mycustom-header) -i server-two }
    default_backend backend_server_one # when the header is anything else, default to the first backend

backend backend_server_one
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 5s
    server static 127.0.0.1:3000 # change this ip to your 1st backend ip address

backend backend_server_two
    mode http
    balance roundrobin
    timeout connect 5s
    timeout server 30s
    server static 127.0.0.1:4000 # change this ip to your 2nd backend ip address
To test that this works you can open two netcat listeners, one on port 3000 and the other on port 4000; run them in different screens or different ssh sessions.
nc -l 3000 # in the first screen
nc -l 4000 # in a second screen
Then, after a sudo systemctl reload haproxy to make sure HAProxy has picked up the new config file, make an HTTP GET request on port 80 and provide the "x-mycustom-header: Server-one" header.
You will see the request in the output of the netcat instance listening on port 3000.
Now change the header to "x-mycustom-header: Server-two" and make a second GET request; this time the request reaches the second netcat instance, listening on port 4000, which shows that the routing works.
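For example, with curl (assuming HAProxy is running on localhost):
curl -H "x-mycustom-header: Server-one" http://localhost/ # shows up on the port-3000 listener
curl -H "x-mycustom-header: Server-two" http://localhost/ # shows up on the port-4000 listener
Since netcat never sends an HTTP response, curl will hang until it times out; the request still appears in the listener's output, which is all this test needs.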
Tested on ArchLinux
The Microsoft team has responded and confirmed that this is not possible at the moment.

How do I access RabbitMQ natively without IP address using amqplib?

Where I work, we have a Cloud Foundry server that provides RabbitMQ as a service. When I configure this service and try to connect using amqplib via localhost, 127.0.0.1, etc., it doesn't connect. When I look at the Java project, it never configures an IP and seems to connect natively through a driver or something (using Spring).
How would I connect using amqplib without an IP? Should I use another Node library instead?
You can make a connection without setting the hostname, but then the hostname defaults to "localhost", as described in the documentation.
If your RabbitMQ is on a remote server you must provide:
a remote IP address
the port (if it is different from the default 5672)
the username and password of a non-default user, as mentioned here
You may also fail to connect because the port on the remote server is closed; check it via telnet.
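A quick sketch of that check (198.51.100.7 is a placeholder; use your server's address and port):
telnet 198.51.100.7 5672
# "Connected to 198.51.100.7" means the port is open; "Connection refused" or a timeout means it is not reachable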

Cannot connect from windows to redis linux server

I cannot connect to a Redis server (Ubuntu Server 16.04 LTS 64-bit, on a separate PC) from Windows 8.1 64-bit. Redis is well documented, but I found very little information on how to connect to a Redis server from a separate machine.
I have installed the latest version of Redis on Linux, and locally everything works fine: I start the server via redis-server, then start redis-cli, and after that I am able to add data to the server and retrieve it. The same is true on Windows - everything works locally.
In order to connect from Windows to the Linux Redis server, I made these changes.
On Linux I set a static local IP via sudo nano /etc/network/interfaces
address 192.168.xxx.xxx
netmask 255.255.255.0
network 192.168.xxx.xxx
broadcast 192.168.xxx.xxx
gateway 192.168.xxx.xxx
dns-nameservers 8.8.8.8
In the redis.conf file I bind my Windows PC's IP, the one given by my internet service provider. I also opened TCP port 6379 in my router GUI. On Windows I modified the redis.windows-service.conf and redis.windows.conf files, binding in both of them the IP address given by my internet service provider. After this I cannot start redis-cli properly (only an empty black cmd window is visible).
What am I doing wrong? I would be very grateful for any help.
You should modify the Redis conf; mine is located at /etc/redis/6379.conf.
Comment out the line "bind 127.0.0.1", or change it to bind 0.0.0.0.
The bind directive specifies which network interfaces the Redis server listens on; the default is localhost.
Also change protected-mode to no:
Protected mode is a layer of security protection, in order to avoid that
Redis instances left open on the internet are accessed and exploited.
When protected mode is on and if:
1) The server is not binding explicitly to a set of addresses using the
"bind" directive.
2) No password is configured.
The server only accepts connections from clients connecting from the
IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
sockets.
By default protected mode is enabled. You should disable it only if
you are sure you want clients from other hosts to connect to Redis
even if no authentication is configured, nor a specific set of interfaces
are explicitly listed using the "bind" directive.
protected-mode yes
If you don't disable protected-mode, your Redis server will not listen on a public IP interface; see the details above.
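As a minimal sketch, the relevant lines in /etc/redis/6379.conf would look like this (the path and service name may differ on your install):
bind 0.0.0.0
protected-mode no
Then restart the server, e.g. sudo service redis_6379 restart, and retry the connection from the Windows machine.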
If you can reach the remote server from your machine, your problem is most probably with the Redis security config; read the Securing Redis section in this document.
I've found that most of the time people haven't changed the "bind" directive value in the Redis config. You can test that by setting bind 0.0.0.0 and restarting the Redis server; if that was the issue, you can then allow whichever subnets need to access the server.
I have also experienced the same issue trying to connect to Redis (MSOpenTech 3.0.5 and 3.2.1). By default, if no binding is stated, then Redis (according to the comments in the conf file) will listen on all available interfaces. That said, v3.2.1 does have 'bind 127.0.0.1' already set; in 3.0.5, setting 'bind 127.0.0.1' still allows the redis-cli to be used. Binding to 192.168.1.2 renders the redis-cli unusable with both versions - there is no IP and port prompt, simply a caret, and the cli does not accept keyboard input. Binding to an external IP, the MSOpenTech fork service will not restart and throws an error (nice). Clearing all bindings and reverting to the original state, the redis-cli becomes usable again. Also, in the MSOpenTech fork there is no 'ProtectedMode' setting in either config file, so I'm not sure whether this can actually be set.
I have raised this as an issue on the MSOpenTech fork via GitHub, but I expect silence to be the only reply...
I'm not sure this helps you in any way other than knowing that you are not alone. I am trying to pub from PHP to AS3 subscribers - it works great in the Flash IDE, but from the localhost browser Redis appears to go decidedly deaf.

AbortError: Ready check failed: Redis connection lost and command aborted. It might have been processed

What does this error message mean, and what are the possible causes for it? I'm using Node 6.10.0 and redis 2.7.1. I run the Redis store in a separate Docker container, and the container builds successfully. After that I prime the store with access tokens that I need in my application. I do it with a script, and at that moment I get the error message.
The error appears as the result of a broken connection (your software somehow lost the connection to the Redis server).
It can be one of two scenarios (or both): the connection has timed out, or the reconnect attempts have exceeded the maximum number specified in the config.
For me, the issue was a missing "bind" directive in the Redis config and, as a result, Redis running in protected mode. The Node.js client didn't show the full response, so I only found the reason for the issue when connecting to Redis from the standard redis-cli:
DENIED Redis is running in protected mode because protected mode is
enabled, no bind address was specified, no authentication password is
requested to clients. In this mode connections are only accepted from
the loopback interface. If you want to connect from external computers
to Redis you may adopt one of the following solutions: 1) Just disable
protected mode sending the command 'CONFIG SET protected-mode no' from
the loopback interface by connecting to Redis from the same host the
server is running, however MAKE SURE Redis is not publicly accessible
from internet if you do so. Use CONFIG REWRITE to make this change
permanent. 2) Alternatively you can just disable the protected mode by
editing the Redis configuration file, and setting the protected mode
option to 'no', and then restarting the server. 3) If you started the
server manually just for testing, restart it with the
'--protected-mode no' option. 4) Setup a bind address or an
authentication password. NOTE: You only need to do one of the above
things in order for the server to start accepting connections from the
outside.
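A quick way to surface the full error (10.0.0.5 is a placeholder for the Redis host): connect from a machine other than the server itself and issue any command; in protected mode the server replies with the DENIED message above instead of PONG.
redis-cli -h 10.0.0.5 -p 6379 ping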

How to access CouchDB installed on another machine?

We have CouchDB installed on a separate machine.
When it was installed on my machine, it was accessible through Fauxton via the link http://localhost:5984/_utils/index.html.
I am also using Divan, a C# library, to interact with CouchDB. It uses localhost as the host and 5984 as the port - the defaults for connecting to the database.
But now that CouchDB is installed on another machine, how can I access it?
Please suggest.
Thank you.
You need to allow exterior access on the new machine (which I'll just call the server); your computer is the client. First, make sure the server is accessible from your network, and get its IP address using ipconfig or ifconfig on the command line.
Then, on the server, open the CouchDB configuration file, which is
/usr/local/etc/couchdb/local.ini in Linux
or
C:\Program Files\CouchDB\etc\couchdb\local.ini in Windows
and change
[httpd]
bind_address = 127.0.0.1
to
[httpd]
bind_address = 0.0.0.0
If there is no bind_address already in the file, just add it.
Then save the file.
Now, from the client, you can access Fauxton from your machine using {SERVER_IP}:5984/_utils.
In Divan, set the host to {SERVER_IP}. Unless you configure it otherwise, the port remains 5984.
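As a quick sanity check from the client (CouchDB answers its root URL with a small welcome JSON; {SERVER_IP} is your server's address):
curl http://{SERVER_IP}:5984/
# e.g. {"couchdb":"Welcome",...}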
Reference:
http://docs.couchdb.org/en/2.0.0/config/intro.html *
http://docs.couchdb.org/en/2.0.0/config/http.html *
(*) I'm assuming you're using CouchDB 2.0, but in my experience these instructions also work with 1.6.1.
Alternatively, connect to your server locally:
localhost:5984/_utils
then simply change the bind_address to 0.0.0.0 from the settings section.
