Trouble connecting to AMQP adapter in OpenShift - eclipse-hono

I'm trying to create an MQTT Protocol Gateway for our Hono Cluster running in OpenShift
using this template but I am having trouble connecting to the AMQP adapter.
I can connect to the Sandbox AMQP adapter using the CLI (version 2.1.0), but when I try to connect to the instance running in our cluster I get "503 - Temporarily Unavailable".
I have tried many variants of this command, and I get the exact same error seemingly no matter what host I call in OpenShift, including hosts not even running in our Hono cluster, and I don't know what that means.
I have verified the cluster installation insofar as all the pods look healthy, I have been able to create tenants and devices, and I can send telemetry to the HTTP adapter.
SERVICES
ROUTES
Values overridden in initial helm install:
platform: openshift
kafka:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false
  zookeeper:
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
useLoadBalancer: false
deviceRegistryExample:
  type: "mongodb"
  externalAccess:
    enabled: true
  mongoDBBasedDeviceRegistry:
    externalAccess:
      enabled: true
    mongodb:
      createInstance: true
      persistence:
        enabled: false
kafka:
  externalAccess:
    autoDiscovery:
      enabled: false
    service:
      type: "NodePort"
      # length of the array must match replicaCount
      nodePorts:
        - "32094"
  serviceAccount:
    create: false
  rbac:
    create: false
adapters:
  amqp:
    enabled: true
  coap:
    enabled: false
  http:
    enabled: true
  mqtt:
    enabled: true
  lora:
    enabled: false

You should omit the https:// prefix from the host name, and you will also need to provide a username and password for authenticating to the AMQP adapter, using the -u and -p options:
java -jar hono-cli-*-exec.jar amqp -H hono-poc-adapter... -P 5672 -u sensor1#DEFAULT_TENANT -p hono-secret

I have still not been able to connect using the CLI from my local computer, but since the protocol gateway should run in the cluster anyway, I deployed a container with the Hono CLI installed and ran the CLI from the pod's terminal, which worked.
java -jar hono-cli-*-exec.jar amqp -H hono-adapter-amqp -P 5672 -u TEST_DEVICE#TEST_TENANT -p pwd
"hono-adapter-amqp" in this case is the service name of the AMQP adapter in OpenShift; the IP also worked, but the route did not.
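Regarding the route not working: OpenShift routes only carry HTTP(S) traffic, or raw TLS with SNI when passthrough termination is used, so a plain-TCP AMQP connection on port 5672 cannot go through a route. If the adapter needs to be reachable from outside the cluster, one option is to expose the adapter's TLS port through a passthrough route. This is only a sketch under assumptions — the route/service names and the 5671 AMQPS port are taken from common Hono defaults and must be adjusted to your installation:

```yaml
# Hypothetical sketch: a passthrough route forwards the raw TLS stream
# (routed by SNI) to the AMQP adapter's TLS port, instead of terminating HTTP.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hono-adapter-amqp
spec:
  to:
    kind: Service
    name: hono-adapter-amqp   # assumed service name, as used above
  port:
    targetPort: 5671          # AMQPS (TLS) port; plain 5672 cannot be routed
  tls:
    termination: passthrough
```

The client would then have to connect with TLS enabled and the route's host name as SNI.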

Related

Istio in Azure AKS - Connection issues over 15001 port while connecting to Azure Redis Cache

We are facing issues on port 15001 with Istio deployed in Azure AKS.
We have deployed Istio in AKS and are trying to connect to an Azure Cache for Redis instance in cluster mode. Our Azure Redis instance has more than two shards with SSL enabled, and one of the master nodes is assigned port 15001. We were able to connect to Azure Redis from AKS pods over ports 6380, 15000, 15002, 15003, 15004 and 15005. However, when we try to connect over 15001 we see issues. When we connect to Redis over port 15001 from a namespace without Istio sidecar injection in the same AKS cluster, the connection works fine.
Below are the logs from rediscli pod deployed in our AKS cluster.
Success case:
redis-cli -h our-redis-host.redis.cache.windows.net -p 6380 -a our-account-key --cacert "BaltimoreCyberTrustRoot.pem" --tls ping
OUTPUT:
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
PONG
We are able to connect to Redis over all the other ports - 6380, 15000, 15002, 15003, 15004 and 15005. However, when we try to connect using 15001, we get the error below.
Failure case:
redis-cli -h our-redis-host.redis.cache.windows.net -p 15001 -a our-account-key --cacert "BaltimoreCyberTrustRoot.pem" --tls ping
OUTPUT:
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Could not connect to Redis at our-redis-host.redis.cache.windows.net :15001: SSL_connect failed: Success
I could not see any entry in the istio-proxy logs when trying port 15001. However, for the other ports we can see log entries like the one below:
[2021-05-05T00:59:18.677Z] "- - -" 0 - - - "-" 600 3982 10 - "-" "-" "-" "-" "172.XX.XX.XX:6380" PassthroughCluster 172.XX.XX.XX:45478 172.22.XX.XX:6380 172.XX.XX.XX:45476 - -
Is this because Istio blocks outbound requests on port 15001, or manipulates certificates for requests on that port? If so, is there any configuration to change the proxy port to something other than 15001?
Note: Posted this on the Istio forum. Posting here for better reach.
Istio versions:
> istioctl version
client version: 1.8.2
control plane version: 1.8.3
data plane version: 1.8.3
Port 15001 is used by Envoy in Istio. Applications should not use ports reserved by Istio to avoid conflicts.
You can read more here
We used Istio's excludeOutboundPorts annotation to bypass the Envoy proxy's interception of traffic on the outbound port that clashes with Istio's port requirements.
Using the annotations provided by Istio, we can exclude interception either by IP range or by port. Below is an example using ports:
template:
  metadata:
    labels:
      app: 'APP-NAME'
    annotations:
      traffic.sidecar.istio.io/excludeOutboundPorts: "15001"
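As noted above, the exclusion can alternatively be expressed by IP range rather than by port, which keeps all other traffic to that port intercepted. A sketch, where the CIDR is a placeholder you would replace with your Azure Redis instance's address range:

```yaml
template:
  metadata:
    labels:
      app: 'APP-NAME'
    annotations:
      # Hypothetical CIDR - substitute the address range of your Redis instance.
      traffic.sidecar.istio.io/excludeOutboundIPRanges: "10.0.0.0/8"
```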
References:
Istio Annotations
Istio traffic capture limitations
Istio Port Requirement

How to set hazelcast tcp strategy?

We are running services on a cluster as docker containers.
Since we cannot use Eureka or multicast, we are trying to use Hazelcast TCP discovery. Currently, the configuration looks like this (example):
cluster:
  enabled: true
  hazelcast:
    useSiteLocalInterfaces: true
    discovery:
      tcp:
        enabled: true
        members:
          - 10.10.10.1
          - 10.10.10.2
          - 10.10.10.3
      azure:
        enabled: false
      multicast:
        enabled: false
      kubernetesDns:
        enabled: false
During service start, we get the following log message:
Members configured for TCP Hazelcast Discovery after removing local addresses: [10.10.10.1, 10.10.10.2, 10.10.10.3]
That means the service didn't determine its local IP correctly.
Later in the log, the following message appears: [LOCAL] [hazelcast-test-service:hz-profile] [3.12.2] Picked [172.10.0.1]:5701, using socket ServerSocket[addr=/0.0.0.0,localport=5701], bind any local is true
Obviously, the service determines its local IP to be 172.10.0.1. We have no idea where this IP comes from; it doesn't exist on the cluster.
Is there a way to give hazelcast a hint how to discover its local ip?
The address 172.10.0.1 must be one of the network interfaces inside your container. You can ssh into your Docker container and check the network interfaces (e.g. with ifconfig).
If you want to use another network interface, you can configure it with the environment variable HZ_NETWORK_PUBLICADDRESS. For example, in your case, one of the members can be started with the following docker command.
docker run -e HZ_NETWORK_PUBLICADDRESS=10.10.10.1:5701 -p 5701:5701 hazelcast/hazelcast
Please read more at Hazelcast Docker Image: Hazelcast Hello World
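If you'd rather not pass the address on the docker command line, the same hint can be given declaratively. A minimal sketch of a hazelcast.yaml, assuming (hypothetically) that 10.10.10.1 is the address this member should advertise to the others:

```yaml
hazelcast:
  network:
    # Advertise this address to other members instead of the
    # container-internal interface (e.g. 172.10.0.1).
    public-address: 10.10.10.1:5701
    interfaces:
      # Restrict binding to interfaces matching the cluster subnet.
      enabled: true
      interfaces:
        - 10.10.10.*
```

The interfaces restriction is optional; the public-address alone is usually enough to fix what other members see.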

How to connect jhipster on aws to elk cloud

I need to connect my JHIPSTER app that is in AWS / elasticbeanstalk to ELK Cloud.
It does not work with this:
jhipster:
  logging:
    logstash:
      enabled: true
      host: localhost # If using a Virtual Machine on Mac OS X or Windows with docker-machine, use the Docker host's IP here
      port: 5000
      queueSize: 512
Reference: https://www.jhipster.tech/monitoring/
You have to connect to Logstash. JHipster emits logs in a format suitable for a Logstash service, which in turn forwards them to Elasticsearch -> Kibana.
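Concretely, that means pointing the host at the Logstash endpoint of your ELK Cloud deployment instead of localhost. A sketch, where the host name and port are placeholders for your deployment's actual Logstash endpoint:

```yaml
jhipster:
  logging:
    logstash:
      enabled: true
      # Hypothetical endpoint - use the Logstash host/port of your ELK Cloud deployment.
      host: my-deployment.logstash.example-elk-cloud.io
      port: 5044
      queueSize: 512
```

Make sure the Elastic Beanstalk security group allows outbound traffic to that host and port.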

Can't Access Ajenti Web Panel Port in VirtualBox Debian 8

I'm trying to install the Ajenti server admin panel on Debian 8 with NGINX. I have removed Apache.
The website connects at 127.0.0.1:8888, but I cannot access Ajenti.
I used the Ajenti Automatic Installation. It completed with:
But it does not connect in the browser:
Unable to connect
Firefox can’t establish a connection to the server at 127.0.0.1:8000.
VirtualBox Server
Port forwarding
Address in use
sudo netstat -tlnp | grep 8000
Config
Digital Ocean answer says to disable SSL in the config. It is already disabled.
config.yml
auth:
  allow_sudo: true
  emails: {}
  provider: os
bind:
  host: 0.0.0.0
  mode: tcp
  port: 8000
color: default
max_sessions: 9
name: debian
ssl:
  certificate:
  client_auth:
    certificates: []
    enable: false
    force: false
  enable: false
I changed the Host to 127.0.0.1 and Port to 7000. It says Binding to [127.0.0.1]:7000.
I get the same connection error:
Unable to connect
Firefox can’t establish a connection to the server at 127.0.0.1:7000.
I tried adding it to port forwarding. It tries to connect, but the loading icon just keeps spinning.
You may want to allow the port in iptables:
root@debian:/# iptables -A INPUT -p tcp --dport 8888 --jump ACCEPT
root@debian:/# iptables-save

FileBeat not load balancing to multiple logstash (indexer) servers

I tried load balancing with 2 different Logstash indexer servers, but when I add, say, 1000 lines to my log, Filebeat sends the logs exclusively to one server (I enabled stdout and can visually check the output to see which Logstash server is receiving the log events).
My filebeats conf:
filebeat:
  prospectors:
    -
      paths:
        - "D:/ApacheLogs/Test/access.*.log"
      input_type: log
      document_type: my_test_log
      scan_frequency: 1s
  registry_file: "C:/ProgramData/filebeat/registry"
output:
  logstash:
    hosts: ["10.231.2.223:5044","10.231.4.143:5044"]
    loadbalance: true
shipper:
logging:
  files:
Will support be added to disable the persistent TCP connection in Filebeat? I currently cannot use AWS ELB, since due to the sticky connection it always sends to one Logstash server until the connection is reset. Is this not the right architecture for it? Should I be sending to a Redis queue instead? I could not find any documentation on how to send to a Redis queue from Filebeat.
Something like the config below did not work, and I can't even find a way to debug it because Filebeat leaves no logs.
filebeat:
  prospectors:
    -
      paths:
        - "D:/ApacheLogs/Test/access.*.log"
      input_type: log
      document_type: my_test_log
      scan_frequency: 1s
  registry_file: "C:/ProgramData/filebeat/registry"
output:
  redis:
    # Set the host and port where to find Redis.
    host: "logstash-redis.abcde.0001.usw2.cache.amazonaws.com"
    port: 6379
shipper:
logging:
  level: warning
  # enable file rotation with default configuration
  to_files: true
  files:
    path: C:\temp\filebeat.log
Version:
On windows server: FileBeat (Windows - filebeat version 1.2.2 (386))
On logstash indexer server: logstash 2.3.2
Operating System:
Windows server: Microsoft Windows NT 6.0.6002 Service Pack 2
Logstash indexer server: RHEL Linux 4.1.13-19.30.amzn1.x86_64
Filebeat should really solve this, but since they advertise it as being as lightweight as possible, don't hold your breath.
I don't know how easy it is to get HAProxy running on Windows, but it should solve your problem if you can get it installed:
https://serverfault.com/questions/95427/windows-replacement-for-haproxy
Use layer-4 round-robin load balancing. You'll probably want to install an HAProxy on every machine running Filebeat: one HAProxy frontend listens on localhost:5044 and maps to multiple Logstash backends.
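With a local HAProxy in front, the Filebeat output then points at the single local frontend instead of listing the Logstash servers itself. A sketch, assuming HAProxy is listening on localhost:5044 and round-robins to the two indexers:

```yaml
output:
  logstash:
    # Single local HAProxy frontend; HAProxy distributes new connections
    # across the Logstash backends (10.231.2.223:5044, 10.231.4.143:5044).
    hosts: ["localhost:5044"]
```

Since HAProxy balances per TCP connection, Filebeat's single persistent connection still sticks to one backend until it is re-established; the gain is that failover and rebalancing happen in HAProxy rather than in Filebeat.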
You can send your Filebeat output to Redis via the config below:
output:
  redis:
    host: "host"
    port: <port>
    save_topology: true
    index: "index-name"
    db: 0
    db_topology: 1
    timeout: 5
    reconnect_interval: 1
