We are running services on a cluster as Docker containers.
Since we cannot use Eureka or multicast, we are trying to use Hazelcast TCP discovery. Currently, the configuration looks like this (example):
cluster:
  enabled: true
  hazelcast:
    useSiteLocalInterfaces: true
    discovery:
      tcp:
        enabled: true
        members:
          - 10.10.10.1
          - 10.10.10.2
          - 10.10.10.3
      azure:
        enabled: false
      multicast:
        enabled: false
      kubernetesDns:
        enabled: false
During service start, we get the following log message:
Members configured for TCP Hazelcast Discovery after removing local addresses: [10.10.10.1, 10.10.10.2, 10.10.10.3]
That means the service didn't detect its local IP correctly.
Later in the log, the following message appears: [LOCAL] [hazelcast-test-service:hz-profile] [3.12.2] Picked [172.10.0.1]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Obviously, the service determines its local IP to be 172.10.0.1. We have no idea where this IP comes from; it doesn't exist on the cluster.
Is there a way to give Hazelcast a hint about how to discover its local IP?
The address 172.10.0.1 must be one of the network interfaces inside your container. You can open a shell in your Docker container and check the network interfaces (e.g. with ip addr or ifconfig).
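For example, assuming the container is named hazelcast-test-service (a hypothetical name), you could list its interfaces with:

docker exec -it hazelcast-test-service sh -c "ip addr || ifconfig"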
If you want to use another network interface, you can configure it with the environment variable HZ_NETWORK_PUBLICADDRESS. For example, in your case, one of the members can be started with the following Docker command:
docker run -e HZ_NETWORK_PUBLICADDRESS=10.10.10.1:5701 -p 5701:5701 hazelcast/hazelcast
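If you prefer to tell Hazelcast which interface to bind to instead of (or in addition to) setting a public address, Hazelcast's network configuration also supports an interfaces section. A minimal sketch in Hazelcast YAML (supported since 3.12); the 10.10.10.* wildcard is an assumption based on your member addresses, and whether this applies directly depends on how your wrapper exposes the underlying Hazelcast configuration:

hazelcast:
  network:
    interfaces:
      enabled: true
      interfaces:
        - 10.10.10.*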
Please read more at Hazelcast Docker Image: Hazelcast Hello World
Related
Error at master node trying to connect to remote JMeter slave node in same network
You need to ensure that at least port 1099 is open; check out the How to open ports to a virtual machine with the Azure portal article for more details.
Apart from port 1099, you need to open the following (see the example properties after this list):
The port you specify as server.rmi.localport on the slaves
The port you specify as client.rmi.localport on the master
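For example, a minimal sketch of the relevant properties; the port numbers 50000 and 60000 are arbitrary choices for illustration, not defaults:

# jmeter.properties on each slave
server_port=1099
server.rmi.localport=50000

# jmeter.properties on the master
client.rmi.localport=60000

With these values you would open ports 1099 and 50000 towards the slaves and port 60000 towards the master.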
More information:
Remote hosts and RMI configuration
JMeter Distributed Testing with Docker
JMeter Remote Testing: Using a different port
We are facing issues with port 15001 in Istio deployed in Azure AKS.
Currently, we have deployed Istio in AKS and are trying to connect to an Azure Cache for Redis instance in cluster mode. Our Azure Redis instance has more than two shards with SSL enabled, and one of the master nodes is assigned port 15001. We were able to connect to Azure Redis from AKS pods over ports 6380, 15000, 15002, 15003, 15004 and 15005. However, when we try to connect over 15001, we see issues. When we try to connect to Redis over port 15001 from a namespace without Istio sidecar injection in the same AKS cluster, the connection works fine.
Below are the logs from the redis-cli pod deployed in our AKS cluster.
Success case:
redis-cli -h our-redis-host.redis.cache.windows.net -p 6380 -a our-account-key --cacert "BaltimoreCyberTrustRoot.pem" --tls ping
OUTPUT:
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
PONG
We are able to connect to Redis over ports 6380, 15000, 15002, 15003, 15004 and 15005. However, when we try to connect using 15001, we get the error below.
Failure case:
redis-cli -h our-redis-host.redis.cache.windows.net -p 15001 -a our-account-key --cacert "BaltimoreCyberTrustRoot.pem" --tls ping
OUTPUT:
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at our-redis-host.redis.cache.windows.net:15001: SSL_connect failed: Success
I could not see any entry in the istio-proxy logs when trying port 15001. However, when trying other ports, we can see log entries like the one below:
[2021-05-05T00:59:18.677Z] "- - -" 0 - - - "-" 600 3982 10 - "-" "-" "-" "-" "172.XX.XX.XX:6380" PassthroughCluster 172.XX.XX.XX:45478 172.22.XX.XX:6380 172.XX.XX.XX:45476 - -
Is this because port 15001 blocks outbound requests or manipulates certs for requests on that port? If so, is there any configuration to change the proxy_port to a port other than 15001?
Note: I posted this on the Istio forum; posting here for better reach.
Istio versions:
> istioctl version
client version: 1.8.2
control plane version: 1.8.3
data plane version: 1.8.3
Port 15001 is used by Envoy in Istio. Applications should not use ports reserved by Istio to avoid conflicts.
You can read more here
We used the Istio excludeOutboundPorts annotation to bypass the Envoy sidecar's interception of traffic on the outbound ports for which we saw the problem due to Istio's port requirements.
Using annotations provided by Istio, we can exclude either IP ranges or ports from interception. Below is an example with ports:
template:
  metadata:
    labels:
      app: 'APP-NAME'
    annotations:
      traffic.sidecar.istio.io/excludeOutboundPorts: "15001"
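For completeness, the same mechanism works with IP ranges; a sketch using the hypothetical address 203.0.113.10 in place of the actual Redis endpoint IP:

    annotations:
      traffic.sidecar.istio.io/excludeOutboundIPRanges: "203.0.113.10/32"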
References:
Istio Annotations
Istio traffic capture limitations
Istio Port Requirement
I'm trying to assign a public IP to a container using the routed nictype in LXD.
Essentially, I initialized a fresh container and ran lxc config device add c1 eth0 nic nictype=routed parent=eth0 ipv4.address=my.public.ip
Then I started the container. It shows the correct IP in the IPV4 column for a split second, and running lxc list again shows it disappearing into a blank. So it is being set properly, at least according to lxc, but a few seconds after startup it goes away.
My guess is that there's some DHCP-style process going on inside the container trying to get an IP from the LXD host? Any ideas are useful; I don't have much networking knowledge.
For routed to work, you need some configuration in LXD and some configuration in the container. It is easiest to create an LXD profile that contains both parts of the configuration.
Here is an example LXD profile. The upper part (user.network-config) is the container's network configuration, and the lower part (devices) is what LXD needs to know to configure routed for the container.
config:
  user.network-config: |
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.1.200/32
        nameservers:
          addresses:
            - 8.8.8.8
          search: []
        routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
description: Default LXD profile
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: routed
    parent: enp6s0
    type: nic
name: routed_192.168.1.200
used_by:
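Assuming you saved the profile above to a file named routed.yaml (the file name is arbitrary), you can create and populate the profile with:

lxc profile create routed_192.168.1.200
cat routed.yaml | lxc profile edit routed_192.168.1.200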
To create a container with this profile, you would then run
lxc launch ubuntu: mycontainer --profile default --profile routed_192.168.1.200
References: https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/
We have deployed a K8S cluster using ACS engine in an Azure public cloud.
We are able to create deployments and services, but when we exec into a pod using "kubectl exec -ti (pod name) (command)", we receive the error below:
Error from server: error dialing backend: dial tcp: lookup (node hostname) on 168.63.129.16:53: no such host
I looked all over the internet and tried everything I could to fix this issue, but no luck so far.
The OS is Ubuntu, and 168.63.129.16 is a public IP from Azure used for DNS (see the link below).
https://blogs.msdn.microsoft.com/mast/2015/05/18/what-is-the-ip-address-168-63-129-16/
I've already added host entries to /etc/hosts and entries to resolv.conf on the master/node servers, and nslookup resolves them. I've also tested adding the --resolv-conf flag to the kubelet, but it still fails. I'm hoping that someone from this community can help us fix this issue.
Verify that the node on which your pod is running can be resolved and reached from inside the API server container. If you added entries to /etc/resolv.conf on the master node, verify they are visible in the API server container; if they are not, restarting the API server pod might help.
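A sketch of how to check this on the master node; finding the container through docker is an assumption here (the container runtime and the tooling available inside the API server image may differ in your setup):

# find the API server container on the master
docker ps | grep apiserver
# check the resolv.conf the container actually sees
docker exec <apiserver-container-id> cat /etc/resolv.conf
# try resolving the node hostname from inside the container
docker exec <apiserver-container-id> getent hosts <node-hostname>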
The problem was in the VirtualBox layer:
sudo ifconfig vboxnet0 up
The solution is taken from here: https://github.com/kubernetes/minikube/issues/1224#issuecomment-316411907
I have a Spark master running in a Docker container, which in turn is executed on a remote server. Next to the Spark master, there are containers running Spark slaves on the same Docker host.
Server <---> Docker Host <---> Docker Container
In order to let the slaves find the master, I set a master hostname in Docker, SPARKMASTER, which the slaves use to connect to the master. So far, so good.
I use the SPARK_MASTER_IP environment variable to let the master bind to that name.
I also exposed the Spark port 7077 to the Docker host and forwarded this port on the physical server host. The port is open and available.
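For reference, a rough sketch of how such a master container might be started; the image name and start script are placeholders for illustration, not our exact setup:

docker run -d -h SPARKMASTER -e SPARK_MASTER_IP=SPARKMASTER -p 7077:7077 my-spark-image start-master.sh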
Now, from my machine, I can connect to the server using its IP, say 192.168.1.100. When my Spark program connects to the server on port 7077, I get a connection, which is disassociated by the master:
15/10/09 17:13:47 INFO AppClient$ClientEndpoint: Connecting to master spark://192.168.1.100:7077...
15/10/09 17:13:47 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkMaster@192.168.1.100:7077] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
I already learned that the reason for this disconnection is that the host IP 192.168.1.100 doesn't match the hostname SPARKMASTER.
I could add a host entry to my /etc/hosts file, which would probably work, but I don't want to do that. Is there a way I can completely disable this check for hostname equality?