I have a Strimzi Kafka cluster set up successfully on OpenShift. I can see the following services:
kafka-brokers
kafka-bootstrap
zookeeper-client
zookeeper-nodes
This is actually different from what is called out here, so I am not sure if this is a Strimzi installation issue. I followed the installation steps from here.
I created routes for kafka-bootstrap and kafka-brokers on port 9092 (non-TLS clients). In both cases, I get an ECONNREFUSED error when I provide the route value (e.g. my-cluster-myproject.192.168.99.100.nip.io:9092) in the example from kafkajs.
How do I get the kafkajs package connected to the Strimzi cluster?
The Strimzi services that you are seeing are correct. In order to access the Kafka brokers, it's better to use the bootstrap service: it lets you specify only one "broker" in your client's bootstrap servers list, the client connects to it the first time to fetch metadata, and it saves you from connecting through the headless service, where the pod IPs can change on restart.
That said, if you need to access the brokers from outside of OpenShift, you should not create a route for the bootstrap service manually; instead, configure external listeners (https://strimzi.io/docs/latest/#assembly-configuring-kafka-broker-listeners-deployment-configuration-kafka) with type route.
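For example, a sketch of the listeners section of the Kafka custom resource (field names follow the Strimzi API of that era; check the linked docs for the exact syntax of your version):

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... other broker settings ...
    listeners:
      plain: {}
      tls: {}
      external:
        type: route

Strimzi then creates a bootstrap route plus one route per broker, and external clients connect over TLS on port 443.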
As already mentioned above, OpenShift routes carry TCP traffic only when it is TLS-encrypted, so the plain 9092 listener cannot be exposed through a route.
In order to provide your clients the right certificate to use for TLS, you can follow this part of the documentation: https://strimzi.io/docs/latest/#kafka_client_connections
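As a sketch of the client side (the bootstrap route host and the ca.crt path are placeholders; the CA certificate can be extracted from the cluster CA secret that Strimzi generates), kafkajs would then connect over TLS on port 443:

const fs = require('fs')
const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'my-app',
  // placeholder: the bootstrap route host created by Strimzi
  brokers: ['my-cluster-kafka-bootstrap-myproject.192.168.99.100.nip.io:443'],
  ssl: {
    // placeholder path: CA certificate extracted from the cluster CA secret
    ca: [fs.readFileSync('ca.crt', 'utf-8')],
  },
})

async function send() {
  const producer = kafka.producer()
  await producer.connect()
  await producer.send({ topic: 'test', messages: [{ value: 'hello' }] })
  await producer.disconnect()
}

send().catch(console.error)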
Have you checked out external listeners?
BTW, if you need to expose Strimzi through the router, TLS encryption is necessary. The OpenShift router does not support plain TCP, but it does support TLS.
I'm trying to develop a web application in Node.js. I'm using an npm package called "simple-peer", but I don't think this issue is related to that. I was able to use this package and get it working when integrating it with a Laravel application using an Apache server as the back end: I could access the host machine through its IP:PORT on the network and connect a separate client to the host successfully with a peer-to-peer connection.
However, I'm now trying to develop this specifically in Node without an Apache back end. I have my Express server up and running on port 3000, and I can access the index page from a remote client on the same network through IP:3000. But when I try to connect through WebRTC, I get a "Connection failed" error. If I connect two different browser instances on the same localhost device, the connection succeeds.
For reference: I'm just using the copy/pasted code from this usage demo. I have "simplepeer.min.js" included and referenced in the correct directory.
So my main questions are: Is there a setting or some WebRTC protocol that could be blocking the remote clients from connecting? What would I need to change to meet this requirement? Why would it work in a Laravel/webpack app with Apache and not with Express?
If your remote clients cannot get ICE candidates, you need a TURN server.
When a WebRTC peer is behind a NAT or firewall, or is using a cellular network (like a smartphone), the direct P2P connection will fail.
In that case, as a fallback, the TURN server acts as a relay server.
I recommend coTURN.
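As a sketch (the STUN/TURN URLs and credentials are placeholders for your own coTURN deployment), simple-peer accepts ICE servers through its config option, which is passed to the underlying RTCPeerConnection:

const Peer = require('simple-peer')

const peer = new Peer({
  initiator: true,
  config: {
    iceServers: [
      // placeholders: point these at your own coTURN server
      { urls: 'stun:turn.example.com:3478' },
      {
        urls: 'turn:turn.example.com:3478',
        username: 'webrtc',
        credential: 'secret',
      },
    ],
  },
})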
Here is a simple implementation of simple-peer with a Node.js backend for multi-user video/audio chat. You can find the client code in /public/js/main.js. GitHub Project and the Demo.
And just like @JinhoJang said, you do need a TURN server to relay the traffic. Here is a list of public STUN/TURN servers.
Where I work we have a Cloud Foundry server that provides RabbitMQ as a service. When I configure this service and try to connect using amqplib via localhost, 127.0.0.1, etc., it doesn't connect. When I look at the Java project, it never configures an IP and seems to connect natively through a driver or something (using Spring).
How would I connect using amqplib without an IP? Should I use another node lib instead?
You can make a connection without setting the hostname, but then the hostname defaults to "localhost", as described in the documentation.
If your RabbitMQ is on a remote server, you must provide:
a remote IP address or hostname
the port (if it is different from the default 5672)
the username and password of a non-default user, as mentioned here
You may also be unable to connect because the port on the remote server is closed; check it via telnet.
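A minimal sketch using amqplib's promise API (the host, credentials, and vhost are placeholders; with Cloud Foundry you would normally take the full AMQP URI from the service binding in VCAP_SERVICES):

const amqp = require('amqplib')

async function main() {
  // with no URL argument, amqplib defaults to amqp://localhost
  const conn = await amqp.connect('amqp://user:pass@rabbit.example.com:5672/vhost')
  const channel = await conn.createChannel()
  await channel.assertQueue('tasks')
  channel.sendToQueue('tasks', Buffer.from('hello'))
}

main().catch(console.error)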
From the development environment, developers need to access the Redis cache.
The connection to the Azure Redis Cache is made over TLS on port 6380.
The issue is that external access to the internet in our company goes through a proxy.
For HTTP(S) access in Node.js, for example, we use the npm package 'dotenv' to supply the HTTP(S) proxy settings (for example for the package ms-rest-azure).
But here we cannot find any solution for using the proxy for that kind of socket access.
We use the npm package 'redis' in this case.
Does anyone have a solution for the proxy usage?
Thanks in advance, Mathieu
It seems to be impossible to connect directly to Azure Redis Cache from a client behind a proxy. The reasons are as follows:
Redis only supports raw TCP connections via its own protocol (similar to telnet), so it's infeasible if your proxy does not support SOCKS.
I searched the two recommended Node.js Redis clients, ioredis & node_redis, and neither supports building a connection via a proxy.
So here are two possible solutions for your current scenario.
If your proxy supports SOCKS, you can try to create a new Redis client by changing some code in an existing Redis client so that it supports a SOCKS proxy.
Recommended for the current case: I suggest that you create an HTTP service on Azure to handle the requests from your client behind your proxy; it can pass the parameters of the HTTP requests on to Azure Redis Cache and wrap the results into the HTTP responses. It's Redis over HTTP, like solutious/bone.
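A minimal sketch of that second idea using express and the node_redis client (the cache hostname and the access-key variable are placeholders; the HTTP service itself would run on Azure, outside your proxy):

const express = require('express')
const redis = require('redis')

// Azure Redis Cache requires TLS on port 6380 and the access key as password
const client = redis.createClient(6380, 'mycache.redis.cache.windows.net', {
  auth_pass: process.env.REDIS_ACCESS_KEY,
  tls: { servername: 'mycache.redis.cache.windows.net' },
})

const app = express()

// GET /cache/:key -> forwards to Redis GET and wraps the value in JSON
app.get('/cache/:key', (req, res) => {
  client.get(req.params.key, (err, value) => {
    if (err) return res.status(500).send(err.message)
    res.json({ key: req.params.key, value: value })
  })
})

app.listen(3000)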
Hope it helps.
I am trying to configure Celery on my Django web server securely, and I can see two alternatives for achieving this: either securing the broker connection or signing the messages.
Celery needs a message broker, which in this case is RabbitMQ.
I am using a "RabbitMQ as a service" implementation, which means that the RabbitMQ server is reached through the internet using the amqp protocol.
The service provider distributes an amqp URI and also supports amqps:
The "amqps" URI scheme is used to instruct a client to make a secured connection to the server.
Apparently, this is what I need, otherwise all my messages will be circulating around the net, naked on the wire.
In order to use amqps, Celery needs the following configuration:
import ssl

BROKER_USE_SSL = {
    'keyfile': '/var/ssl/private/worker-key.pem',
    'certfile': '/var/ssl/amqp-server-cert.pem',
    'ca_certs': '/var/ssl/myca.pem',
    'cert_reqs': ssl.CERT_REQUIRED
}
Question:
Where can I find those .pem files?
According to the RabbitMQ docs, I have to create them myself and configure the RabbitMQ server to use them.
However, I am not running the server. As stated above, I have a "RabbitMQ as a service" provider that supports amqps. Should I ask them to provide me with those .pem files?
Celery can also sign messages.
(Trying this approach, I get a "No encoder installed for auth" error, which I reported.)
Question: Does this mean that I can use my certificates to secure the connection as an alternative configuration to BROKER_USE_SSL?
There is also a note regarding message signing:
auth serializer won’t encrypt the contents of a message, so if needed
this will have to be enabled separately.
Subquestion: Does encrypting the contents of a message protect me from the "current" RabbitMQ server administrator while "message signing" only protects me while on the wire towards that server?
Apparently I am somehow confused but I would not like to create any kind of insecure traffic over the internet for any reason. I would appreciate your help.
When configuring for CloudAMQP, you need to set BROKER_USE_SSL to True and the BROKER_URL as shown below:
BROKER_USE_SSL = True
BROKER_URL = 'amqp://user:pass@hostname:5671/vhost'

Note the port number 5671 (the TLS port), and keep the 'amqp' scheme.
If you are running your own RabbitMQ setup, check out this to make it secure:
https://www.rabbitmq.com/ssl.html
I am using a RabbitMQ cluster and connecting to it using a Node.js client (node-amqp: https://github.com/postwait/node-amqp).
The RabbitMQ docs state that a failover scenario (a cluster node failure) should be handled by the client, meaning the client should detect the failure and connect to another node in the cluster.
What is the simplest way to support this failover behaviour? Does the node-amqp client support this?
Any example or solution will be appreciated.
Thanks.
node-amqp supports multiple server hosts in the connection object, so you can pass host: as an array of hosts (unfortunately only the host part accepts an array, so other parameters like port and authentication have to match across your RabbitMQ servers).
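A minimal sketch (hostnames and credentials are placeholders):

const amqp = require('amqp')

const connection = amqp.createConnection({
  // only host accepts an array; port, login, password must match on all nodes
  host: ['rabbit1.example.com', 'rabbit2.example.com', 'rabbit3.example.com'],
  port: 5672,
  login: 'guest',
  password: 'guest',
  vhost: '/',
})

connection.on('ready', function () {
  console.log('connected to one of the cluster nodes')
})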