I have two nodes: one in Azure (node1) and one on my own computer (node2).
ip1 = VM's public IP
ip2 = 192.168.1.33
In the YAML file:
Node1:
listen_address = ip1
rpc_address = ip1
seed = ip1
Node2:
listen_address = ip2
rpc_address = ip2
seed = ip1
When I check whether port 7000 is open on ip1, it shows as open.
Node1 starts successfully, but on node2 I'm getting an "Unable to gossip with any peers" error.
You need to set broadcast_address for node2 to your public IP, since Azure cannot route traffic into your private network, and then set up your router so it forwards incoming requests to node2.
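As a rough sketch (not from the original answer), node2's cassandra.yaml could then look like this, assuming the home router forwards port 7000 to 192.168.1.33 and with ip2_public standing in for node2's public IP:
listen_address: 192.168.1.33          # local interface Cassandra binds to
broadcast_address: ip2_public         # address advertised to other nodes via gossip
rpc_address: 192.168.1.33
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "ip1"              # the Azure node's public IP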
I have a Postgres database currently running on my PC. I am deploying a Flask app that uses this database onto a Linux server, and I need to connect to the database remotely from the Linux machine. The command I am using on the Linux machine to do this is
psql -h 12.345.678.901 -p 5432 -U postgres
where 12.345.678.901 is my local PC's IP address. When I do this, I get the error
psql: error: connection to server at "12.345.678.901", port 5432 failed: Connection timed out
Is the server running on that host and accepting TCP/IP connections?
I would like to emphasize that the connection is not being 'refused'; it is just timing out (unlike many of the questions related to this topic). I'm not sure whether that helps identify the underlying issue. I understand that this is an extremely common issue, but no solutions have worked for me. Among these solutions are updating pg_hba.conf, postgresql.conf, the firewall configuration, and many others. I have done all of this. My pg_hba.conf file looks like this:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all scram-sha-256
# IPv4 local connections:
host all all 127.0.0.1/32 scram-sha-256
host all all 0.0.0.0/0 trust
# IPv6 local connections:
host all all ::1/128 scram-sha-256
host all all ::0/0 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all scram-sha-256
host replication all 127.0.0.1/32 scram-sha-256
host replication all ::1/128 scram-sha-256
host all all 0.0.0.0/0 md5
and my postgresql.conf looks like this
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
These files are located in C:\Program Files\PostgreSQL\14\data. I have verified with the psql shell that these changes are saved and in effect, and I restarted Postgres after every change to these files.
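For reference, one way to restart the server on Windows is with pg_ctl against the same data directory; restarting via the Windows service also works, though the service name shown here is only the default installer's name and is an assumption:
"C:\Program Files\PostgreSQL\14\bin\pg_ctl.exe" restart -D "C:\Program Files\PostgreSQL\14\data"
REM or via the Windows service (service name assumed from the default installer):
net stop postgresql-x64-14 && net start postgresql-x64-14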
Other fixes I have implemented:
Set firewall rules on local PC to open port 5432 to inbound and outbound TCP/IP connections with Windows Defender Firewall
Set remote linux PC firewall to allow connections through port 5432 with the lines
sudo ufw allow 5432/tcp
sudo ufw allow postgres/tcp
Tried both local PC IPv4 address and default gateway address (I am not sure which one to use to be honest)
Set a rule for my physical router to allow connections to port 5432
I cannot figure this out to save my life. Any help would be greatly appreciated.
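One extra check worth noting here (not part of the original post): raw TCP reachability can be tested from the Linux machine before involving psql at all, assuming netcat and the PostgreSQL client tools are installed:
nc -vz -w 5 12.345.678.901 5432        # does anything answer on TCP 5432 at all?
pg_isready -h 12.345.678.901 -p 5432   # PostgreSQL's own connectivity check
A timeout from nc as well would point at routing or NAT rather than at the Postgres configuration.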
To anyone else struggling with this question: after many weeks of digging I finally found the solution. The IP address you must use when connecting to Postgres is the IP address of your ROUTER. I tried every single IP address on my computer and none worked. Only when I opened my internet provider's app and found my router's IP was I finally able to connect. The biggest tell was that when I would SSH into my remote server, it would say 'connection from {router ip address}'. Hope this helps.
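Put concretely (a sketch based on that answer; the address is a placeholder), the connection from the Linux machine then targets the router's public IP, with the router forwarding TCP 5432 to the PC's LAN address:
psql -h <router_public_ip> -p 5432 -U postgres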
I am using DNSMasq as a service on my network. The machine that has DNSMasq installed has two network cards (the IP addresses are 192.168.1.5 and 192.168.1.6).
The issue I have run into is that I have a pod container running PiHole on the same machine. When I reboot, PiHole fails to start because DNSMasq is using the ports it requires. PiHole is set to use 192.168.1.6 specifically (ports 80, 443, 52, 67). When I run lsof -i :67, I see that DNSMasq is listening on port 67 on both IP addresses, even though I have it specifically set to listen only on 192.168.1.5.
Is there a way to restrict DNSMasq to a single IP address (not loopback or localhost) and make it ignore a specific IP address (in this case, so that 192.168.1.6 is not touched by DNSMasq)?
Here is my /etc/dnsmasq.d/default.conf
no-resolv
no-poll
server=1.1.1.1
server=8.8.8.8
listen-address=192.168.1.5
interface=eno2
bind-interfaces
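For reference (a diagnostic sketch, not from the original post), the actual bindings can be inspected after a reboot to see exactly which addresses dnsmasq holds on port 67:
sudo ss -ulpn | grep ':67'    # UDP listeners on the DHCP port, with owning process
sudo lsof -i UDP:67           # the same information via lsof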
I am a total beginner with HAProxy, so any advice would be very helpful.
I have two virtual machines on Microsoft Azure.
They are in a virtual network, and they have the private IP addresses 10.0.9.4 and 10.0.9.5.
I created a new network interface on Microsoft Azure in the same virtual network with the IP address 10.0.9.7.
Of course, this interface is not attached to either virtual machine.
The name of the interface is lb.oozie.local, private IP address 10.0.9.7.
On .4 and .5 I added the following to /etc/hosts:
10.0.9.7 lb.oozie.local
I installed HAProxy on both machines, .4 and .5.
The haproxy.cfg file is the following:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    #user haproxy
    #group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend localnodes
    bind lb.oozie.local:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server oozie1 10.0.9.4:11000 check
    server oozie2 10.0.9.5:11000 check

listen stats lb.oozie.local:1936
    stats enable
    stats uri /haproxy?stats
I also ran:
sudo service haproxy restart
Redirecting to /bin/systemctl restart haproxy.service
Validation returns the following:
haproxy -f /etc/haproxy/haproxy.cfg -c
[WARNING] 284/134546 (22658) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it as a backend if this was intended.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[WARNING] 284/134547 (22658) : Server nodes/oozie2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 284/134547 (22658) : sendto logger #1 failed: No such file or directory (errno=2)
[ALERT] 284/134547 (22658) : sendto logger #2 failed: No such file or directory (errno=2)
As I understand it, my servers should reach the load balancer at its IP address (10.0.9.7).
From both 10.0.9.4 and 10.0.9.5 I tried pinging 10.0.9.7,
but on both servers the address is unreachable.
ping 10.0.9.7
PING 10.0.9.7 (10.0.9.7) 56(84) bytes of data.
From 10.0.9.4 icmp_seq=1 Destination Host Unreachable
From 10.0.9.4 icmp_seq=2 Destination Host Unreachable
Also, if it is relevant:
I installed the keepalived mechanism.
I did not set a public IP address for the load balancer; it has only the private IP 10.0.9.7, because the service is invoked directly from servers 10.0.9.4 and 10.0.9.5.
Please help.
Thank you in advance.
If you want to put a Load Balancer in front of VMs running HAProxy to create a fault-tolerant pair of HAProxies, you need to create an internal Load Balancer with a frontend IP of 10.0.9.7 (rather than assigning 10.0.9.7 to a NIC). It is not possible to ICMP-ping the frontend IP of a Load Balancer; you need to use a TCP ping instead. Make sure health probes are configured and see a signal from your HAProxy VMs directly rather than probing the port HAProxy offers up to clients (the result of the latter is probably not what you want). Familiarize yourself with Standard Load Balancer at https://aka.ms/lbstandard and take note that an NSG must whitelist the ports used with a Standard LB.
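A rough Azure CLI sketch of that layout follows; the resource group, VNet, subnet, and probe port are placeholders, not taken from the question:
# internal Standard Load Balancer with frontend IP 10.0.9.7
az network lb create \
  --resource-group myRG \
  --name oozie-ilb \
  --sku Standard \
  --vnet-name myVnet \
  --subnet mySubnet \
  --frontend-ip-name lb-frontend \
  --private-ip-address 10.0.9.7 \
  --backend-pool-name haproxy-pool

# health probe -- point it at something that reflects HAProxy health on the VMs,
# not merely at the client-facing port, per the note above
az network lb probe create \
  --resource-group myRG \
  --lb-name oozie-ilb \
  --name haproxy-probe \
  --protocol tcp \
  --port 80

# rule forwarding frontend port 80 to the backend pool
az network lb rule create \
  --resource-group myRG \
  --lb-name oozie-ilb \
  --name http-rule \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name lb-frontend \
  --backend-pool-name haproxy-pool \
  --probe-name haproxy-probe

# the frontend will not answer ICMP; use a TCP "ping" from the backends instead
nc -vz -w 3 10.0.9.7 80
The VMs' NICs would also need to be added to the backend pool (for example with az network nic ip-config address-pool add), which is omitted from this sketch.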
I have created a Kubernetes v1.2 cluster running in the Azure cloud with one master (Master) and two nodes (Node1 and Node2). I have deployed an Nginx and a Tomcat application. Both containers are deployed in individual pods with an RC, and each has a Service.
The Nginx pod is deployed on Node1 and the Tomcat pod on Node2. Nginx on Node1 tries to access Tomcat via Tomcat's service IP (clusterIP) on Node2, but it is unreachable.
Nginx serviceIP: 10.16.0.2 (Node1)
Tomcat serviceIP: 10.16.0.4 (Node2)
I tried curl 10.16.0.4:8080 from Node2 and it works, but the same from Node1 fails with curl: (52) Empty reply from server.
So communication to a service IP across nodes fails. Is this a problem with kube v1.2?
Note: the clusterIP for the Service is specified at the time the service is created.
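For illustration, a Service with an explicitly chosen cluster IP looks roughly like this (the name and selector are hypothetical, not taken from the post):
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
spec:
  clusterIP: 10.16.0.4     # explicitly requested cluster IP
  selector:
    app: tomcat            # must match the labels on the Tomcat RC's pods
  ports:
  - port: 8080             # port served on the cluster IP
    targetPort: 8080       # container port in the Tomcat pod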
Since you are able to reach the cluster IP from Node2, it looks like the service selector is properly defined.
kube-proxy is the component that watches services and creates iptables rules for their endpoints. I would check whether kube-proxy is running properly on Node1, and then check whether the iptables rules are set properly for the cluster IP you are trying to reach.
You can see these with iptables -L -t nat | grep namespace/servicename
Here is an example:
bash-4.3# iptables -L -t nat | grep kube-system/heapster
KUBE-MARK-MASQ all -- 172.168.16.182 anywhere /* kube-system/heapster: */
DNAT tcp -- anywhere anywhere /* kube-system/heapster: */ tcp to:172.168.16.182:8082
KUBE-SVC-BJM46V3U5RZHCFRZ tcp -- anywhere 192.168.172.66 /* kube-system/heapster: cluster IP */ tcp dpt:http
KUBE-SEP-KNJP5BBKUOCH7NDB all -- anywhere anywhere /* kube-system/heapster: */
In this example I looked up heapster, running in the kube-system namespace. It shows that the cluster IP 192.168.172.66 DNATs to the endpoint 172.168.16.182, which is the pod's IP (you should cross-check this with the endpoints listed by kubectl describe service).
If it is not there, restarting kube-proxy might help.
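Concretely, the checks described above might look like this on Node1 (the service name tomcat-svc and the default namespace are stand-ins):
# is kube-proxy running on this node?
ps aux | grep [k]ube-proxy

# do the service's endpoints match the Tomcat pod IP?
kubectl describe service tomcat-svc
kubectl get endpoints tomcat-svc

# are the DNAT rules for the cluster IP present on this node?
sudo iptables -L -t nat | grep default/tomcat-svc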
I have a Spark master running in a Docker container, which in turn is executed on a remote server. Next to the Spark master there are containers running Spark slaves on the same Docker host.
Server <---> Docker Host <---> Docker Container
In order to let the slaves find the master, I set a master hostname, SPARKMASTER, in Docker, which the slaves use to connect to the master. So far, so good.
I use the SPARK_MASTER_IP environment variable to let the master bind to that name.
I also exposed the Spark port 7077 to the Docker host and forwarded this port on the physical server host. The port is open and available.
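The setup described so far might look roughly like this (the image name, Spark paths, and application file are placeholders, not from the post):
# on the Docker host: run the master, advertising the hostname SPARKMASTER
docker run -d --name spark-master \
  -h SPARKMASTER \
  -e SPARK_MASTER_IP=SPARKMASTER \
  -p 7077:7077 \
  my-spark-image \
  /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master

# from the client machine: connect the driver via the server's IP
spark-submit --master spark://192.168.1.100:7077 my_app.py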
Now, on my machine, I can connect to the server using its IP, say 192.168.1.100. When my Spark program connects to the server on port 7077 I get a connection, which is then disassociated by the master:
15/10/09 17:13:47 INFO AppClient$ClientEndpoint: Connecting to master spark://192.168.1.100:7077...
15/10/09 17:13:47 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster#192.168.1.100:7077] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
I already learned that the reason for this disconnection is that the host IP 192.168.1.100 doesn't match the hostname SPARKMASTER.
I could add a host entry to my /etc/hosts file, which would probably work, but I don't want to do that. Is there a way I can completely disable this check for hostname equality?