We have a Kafka cluster and various producers and consumers running across at least 5 different servers. I have been asked to secure the Kafka environment using SSL. I have configured SSL on the brokers; now it's time to secure the clients. The documentation says the following config needs to be placed in the producer/consumer config:
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234
Now the problem is that I have distributed the producer and consumer application jars to different teams, and they have deployed them on their own application servers. How can I create a truststore for all of them? Can anyone explain this concept to me? If clients run on hundreds of servers, how can one create a truststore on each server? Please help.
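For what it's worth, the approach in the Kafka security documentation is to build one client truststore that contains only the CA certificate used to sign the broker certificates, and then distribute that single file; the ca-cert file name below is an assumption for whatever your CA certificate is called:

keytool -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass test1234 -noprompt

Because every client trusts the CA rather than individual broker certificates, the same truststore file works unchanged on every server; each team just copies it to whatever path they point ssl.truststore.location at.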
Related
I would like to know more about how to connect to an Amazon MSK (Kafka) cluster.
You need a Kafka client.
Since you are using Node.js, start with kafkajs or node-rdkafka. I'd recommend kafkajs, since it supports SASL consumption with AWS IAM.
Beyond that, MSK is an implementation detail, and the same steps will work for any Kafka cluster (depending on other security implementations). However, cloud providers will require that you allow your client IP address space within the VPC, and you can follow the official getting started guide to verify that a basic client works.
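A minimal sketch of a kafkajs consumer against MSK (the broker endpoint, port, and SASL/SCRAM credentials below are placeholder assumptions; IAM-based auth needs an additional SASL mechanism package on top of kafkajs):

const { Kafka } = require('kafkajs')

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['b-1.mycluster.abc123.kafka.us-east-1.amazonaws.com:9096'], // placeholder MSK endpoint
  ssl: true, // MSK listeners are TLS
  sasl: { mechanism: 'scram-sha-512', username: 'user', password: 'secret' }, // placeholder credentials
})

const consumer = kafka.consumer({ groupId: 'my-group' })

async function run() {
  await consumer.connect()
  await consumer.subscribe({ topic: 'my-topic', fromBeginning: true })
  await consumer.run({
    eachMessage: async ({ message }) => console.log(message.value.toString()),
  })
}

run().catch(console.error)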
I want to build a secure Hyperledger Fabric infrastructure to manage all the nodes, which run on physical devices.
The front-end user application writes to Hyperledger: it asks for a random node, and if that node answers, the application sends the request and payload.
What is the best way to guarantee private communication between off-chain frontend app and hyperledger?
I have already created a private domain secured by an SSL certificate for every node, but this method doesn't sound scalable: what if we have 10k nodes? Is there a better approach?
If your intent is to communicate directly with the Peer, the endpoint can already be secured with TLS.
However, in an ideal setup your web app would communicate with your back-end server (let's say a Node.js Express server). Your Express server would be TLS secured, and your web app would communicate with it via HTTPS. Your Express server would then use the Fabric Node SDK to communicate with your network, and that communication is also TLS secured. You're not configuring anything more extensive than you would while building a TLS-secured web server in the first place.
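A rough sketch of that middle tier, assuming the fabric-network Node SDK, a file-system wallet holding an enrolled identity named appUser, and a channel and chaincode named mychannel and mycc (all of these names, and the connection profile path, are placeholder assumptions):

const express = require('express')
const { Gateway, Wallets } = require('fabric-network')
const ccp = require('./connection-profile.json') // lists your org's TLS-secured peer endpoints

const app = express()
app.use(express.json())

app.post('/invoke', async (req, res) => {
  const gateway = new Gateway()
  try {
    const wallet = await Wallets.newFileSystemWallet('./wallet')
    await gateway.connect(ccp, {
      wallet,
      identity: 'appUser',
      discovery: { enabled: true, asLocalhost: false },
    })
    const network = await gateway.getNetwork('mychannel')
    const contract = network.getContract('mycc')
    const result = await contract.submitTransaction(req.body.fn, ...req.body.args)
    res.json({ result: result.toString() })
  } catch (err) {
    res.status(500).json({ error: err.message })
  } finally {
    gateway.disconnect()
  }
})

// In production this server would itself sit behind TLS (https.createServer or a reverse proxy).
app.listen(3000)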
To your last point: who owns the 10k nodes? An organization would only be expected to own a few nodes, and those few nodes would be handling your transactions; you wouldn't be submitting to other organizations' peers. Owning that many peers in a network would defeat the purpose of Fabric's consensus, since it would let you compromise the network by always being able to provide a policy quorum.
We are currently running two Kafka clusters. One is inside our corporate network and the other is in AWS. We need to stream data from the internal Kafka cluster to the cluster in AWS. MirrorMaker is currently our first choice based on its simplicity. I've been investigating how to do this in a secure fashion. I would prefer not to place Kafka on a public subnet, as that does not seem secure, but some limitations of Kafka have made this difficult. Namely, Kafka producers and consumers need to be able to reach every broker of the target cluster on the network, which means that all Kafka instances in AWS would need a public IP address.
The alternative is to place MirrorMaker in AWS, but then how do we expose our internal servers to the web in a secure manner? This seems like a common use case, but I cannot find anything related on the web. Can someone provide recommendations, or correct me if any of my assumptions are incorrect?
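For reference, MirrorMaker is just a consumer/producer pair, so it only needs outbound reachability from wherever it runs to every broker it talks to. A sketch of running it inside the corporate network and pushing to the AWS cluster over SSL (hostnames, ports, and paths are placeholder assumptions):

# consumer.properties - reads from the internal cluster
bootstrap.servers=internal-kafka-1:9092
group.id=mirror-maker

# producer.properties - writes to the AWS cluster over TLS
bootstrap.servers=aws-kafka-1.example.com:9093
security.protocol=SSL
ssl.truststore.location=/var/private/ssl/kafka.client.truststore.jks
ssl.truststore.password=test1234

kafka-mirror-maker.sh --consumer.config consumer.properties --producer.config producer.properties --whitelist '.*'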
I have been using Spring Integration. I want to connect to multiple FTP servers to retrieve files from remote locations. Could anyone give me a good example of how to connect to multiple FTP servers using Spring Integration?
Thank you in advance,
Udeshika
The Dynamic FTP Sample explores a technique for creating multiple parameterized contexts on the outbound side. On the inbound side you have to make the context a child of the main context, so that it has access to the channel to which to send the files. This is discussed in this Spring Forum Thread and other threads linked from there.
What I am trying to do: create multiple TLS servers that listen on the same port. Each TLS server has a different set of certificates and should only allow a certain set of clients. For example, the first TLS server should allow Client X and not Client Y; the second TLS server should allow Client Y and not Client X.
The issue I am having is that Client X and Client Y both connect only to the first TLS server. The TLS certificates are different for each client and are signed by different TLS servers, yet both clients end up connecting only to the first TLS server.
Would appreciate any thoughts on this issue.
That setup just isn't going to work. The cluster API lets multiple workers share a port, but there is no intelligence about which worker gets allocated which connections. If there isn't a lot of load, it's entirely possible that only one worker will receive all the requests.
I'm not sure what you're trying to do, but if you think about it, this kind of setup doesn't make sense: if the workers present different certificates, there is no way for a TLS session to be set up predictably. It would be like trying to bind multiple SSL certificates to the same IP address and port.
The only way I could see this working is if each of those different certificates corresponds to a different hostname. In that case, you could try using SNI as documented at http://nodejs.org/docs/latest/api/tls.html#tls.connect. However, each worker process would still need access to the same pool of certificates.
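A minimal sketch of that SNI approach in a single process, assuming each "server" is really a hostname with its own certificate and its own client CA (all file and host names are placeholder assumptions):

const tls = require('tls')
const fs = require('fs')

// One secure context per hostname: its own key/cert plus the CA of the clients it accepts.
const contexts = {
  'server-a.example.com': tls.createSecureContext({
    key: fs.readFileSync('a-key.pem'),
    cert: fs.readFileSync('a-cert.pem'),
    ca: fs.readFileSync('client-x-ca.pem'), // only Client X's CA is trusted here
  }),
  'server-b.example.com': tls.createSecureContext({
    key: fs.readFileSync('b-key.pem'),
    cert: fs.readFileSync('b-cert.pem'),
    ca: fs.readFileSync('client-y-ca.pem'), // only Client Y's CA is trusted here
  }),
}

const server = tls.createServer({
  requestCert: true,        // demand a client certificate
  rejectUnauthorized: true, // refuse clients not signed by the selected CA
  SNICallback: (servername, cb) => {
    const ctx = contexts[servername]
    cb(ctx ? null : new Error('unknown host'), ctx)
  },
}, (socket) => {
  socket.end('hello from ' + socket.servername + '\n')
})

server.listen(8443)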