How to connect to a Vert.x event bus using Hazelcast in Node.js

I'm creating a cluster of applications that run on my server.
I use a Hazelcast cluster in combination with Vert.x in Java.
Now I would like to extend the Vert.x event bus into a Node.js application running on the same server.
Hazelcast is running in Node and connecting correctly to the Hazelcast members running on the JVM:
var HazelcastClient = require('hazelcast-client').Client;
var Config = require('hazelcast-client').Config;

var config = new Config.ClientConfig();
config.networkConfig.addresses = [{host: '127.0.0.1', port: '5701'}];

var map = {};
HazelcastClient.newHazelcastClient(config)
    .then(function (hazelcastClient) {
        map = hazelcastClient.getMap("persons");
    });
Can someone help me with the event bus part?
Thanks

After some long searching I found the answer to my problem: I had to let go of Node and run my JavaScript application in a JVM provided by Vert.x itself.
Now I can cluster my JS application with the Java applications and use the native event bus (without a bridge).
For anyone who runs into the same situation, here is my test code:
vertx-server.js:
var Vertx = require("vertx-js/vertx");
var options = {};
Vertx.clusteredVertx(options, function (res, res_err) {
    if (res_err == null) {
        var vertx = res;
        var eventBus = vertx.eventBus();
        console.log("We now have a clustered event bus: " + eventBus);
        eventBus.consumer("system", function (message) {
            console.log("I have received a system message: " + JSON.stringify(message.body()));
            message.reply("ok from javascript");
        });
        eventBus.publish("system", "hello from javascript-node");
    } else {
        console.log("Failed: " + res_err);
    }
});
To install the required packages:
npm install vertx3-full
To run the application:
./node_modules/.bin/vertx run vertx-server.js -cluster

You can't use the Hazelcast Node client to connect a Node application to a Vert.x cluster. You need to set up an event bus bridge and use the bridge client in your Node app.
See the EventBus Bridge - Node.js loader in the examples repo.
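Roughly, the Node side of such a bridge looks like this. This is only a minimal sketch using the vertx3-eventbus-client package; it assumes the Java side publishes a SockJS event bus bridge at http://localhost:8080/eventbus and permits the "system" address on its inbound/outbound lists.

var EventBus = require('vertx3-eventbus-client');

// Connect to the SockJS bridge exposed by a Java verticle.
var eb = new EventBus('http://localhost:8080/eventbus');

eb.onopen = function () {
    // Receive messages published on the bridged "system" address.
    eb.registerHandler('system', function (error, message) {
        console.log('received: ' + JSON.stringify(message.body));
    });
    // Send a message over the bridge.
    eb.publish('system', 'hello from node');
};

Without a SockJSHandler bridge on the Java side with the address permitted, the messages above will never cross into the cluster.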

Related

How can we set proxy settings for provisioning of an Azure IoT device

We are using this repo: https://github.com/Azure/azure-iot-sdk-node
We are trying to set up a DPS service for Azure IoT Hub, and we want to set up a proxy for provisioning through X509. In the sample code "register_x509.js"
we are using the "var Transport = require('azure-iot-provisioning-device-mqtt').MqttWs;" library. It has a function called "setTransportOptions", and we are sending our proxy agent as a parameter there:
var HttpsProxyAgent = require('https-proxy-agent'); // older https-proxy-agent versions export the constructor directly
var X509Security = require('azure-iot-security-x509').X509Security;
var ProvisioningDeviceClient = require('azure-iot-provisioning-device').ProvisioningDeviceClient;

var transport = new Transport();
transport.setTransportOptions({webSocketAgent: new HttpsProxyAgent(process.env.HTTP_PROXY)});

var securityClient = new X509Security(registrationId, deviceCert);
var deviceClient = ProvisioningDeviceClient.create(
    provisioningHost,
    idScope,
    transport,
    securityClient
);
// Register the device. Do not force a re-registration.
deviceClient.register(function (err, result) {
    if (err) {
        console.log("error registering device: " + err);
    } else {
        console.log("registration succeeded");
        console.log("assigned hub=" + result.assignedHub);
        console.log("deviceId=" + result.deviceId);
    }
});
The initial tunneling is not happening, so the connection is failing. We also saw in the documentation that the Azure SDK has a proxy filter which automatically takes the proxy variable from the environment; we tried that as well, but still the same issue. Can anyone please suggest a way to handle this use case?
The error we received: UnhandledPromiseRejectionWarning: Error: socket hang up

Cannot connect to Apache Ignite on Azure Kubernetes from a .NET Core app

I am new to Ignite and Kubernetes. I have a .NET Core 3.1 web application which is hosted on an Azure Linux App Service.
I followed the instructions (Apache Ignite official site) and got Apache Ignite running on Azure Kubernetes. I could create a sample table, and read-write actions worked successfully in my tests from PowerShell.
Now I am trying to connect to Apache Ignite from my .NET Core web app, but I can't make it work.
My code is below. I tried to connect with both IgniteConfiguration and SpringCfgXml, but both give an error.
private void Initialize()
{
    var cfg = GetIgniteConfiguration();
    _ignite = Ignition.Start(cfg);
    InitializeCaches();
}

public IgniteConfiguration GetIgniteConfiguration()
{
    var appSettingsJson = AppSettingsJson.GetAppSettings();
    var igniteNodes = appSettingsJson["AppSettings:IgniteNodes"];
    var nodeList = igniteNodes.Split(",");
    var config = new IgniteConfiguration
    {
        Logger = new IgniteLogger(),
        DiscoverySpi = new TcpDiscoverySpi
        {
            IpFinder = new TcpDiscoveryStaticIpFinder
            {
                Endpoints = nodeList
            },
            SocketTimeout = TimeSpan.FromSeconds(5)
        },
        IncludedEventTypes = EventType.CacheAll,
        CacheConfiguration = GetCacheConfiguration()
    };
    return config;
}
The first error I get:
Apache.Ignite.Core.Common.IgniteException HResult=0x80131500
Message=Java class is not found (did you set IGNITE_HOME environment variable?): org/apache/ignite/internal/processors/platform/PlatformIgnition
Source=Apache.Ignite.Core
Also, I have no idea what I should set IGNITE_HOME to, nor which username and secret to use for authentication.
Solution:
I finally connected to Ignite on Azure Kubernetes.
Here is my connection method.
public void TestConnection()
{
    var cfg = new IgniteClientConfiguration
    {
        Host = "MyHost",
        Port = 10800,
        UserName = "user",
        Password = "password"
    };
    using (IIgniteClient client = Ignition.StartClient(cfg))
    {
        var employeeCache1 = client.GetOrCreateCache<int, Employee>(
            new CacheClientConfiguration(EmployeeCacheName, typeof(Employee)));
        employeeCache1.Put(1, new Employee("Bilge Wilson", 12500, 1));
    }
}
To find the host IP, user name and client secret, please check the images below.
Client Id and Secret
IP Addresses
Note: I didn't need to set any IGNITE_HOME or JAVA_HOME variables.
The simplest way is to download the Apache Ignite binary distribution (the same version as the one you use), unzip it to a directory, and point the IGNITE_HOME environment variable or the IgniteConfiguration.IgniteHome configuration property to the absolute path of the unzipped apache-ignite-n.n.n-bin/ directory.
We support doing that automatically for Windows-hosted apps, but not for Linux-based deployments.

Cannot connect to Azure Redis after changing Minimum TLS version to 1.2

In my .NET Framework 4.6.1 application I am using StackExchange.Redis.StrongName 1.2.6 to connect to Azure Redis.
This is the code:
public RedisContext(string connectionString = null)
{
    if (connectionString == null) return;

    Lazy<ConfigurationOptions> lazyConfiguration
        = new Lazy<ConfigurationOptions>(() => ConfigurationOptions.Parse(connectionString));
    var configuration = lazyConfiguration.Value;
    configuration.SslProtocols = SslProtocols.Tls12; // just added
    configuration.AbortOnConnectFail = false;

    Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(configuration));
    _connectionMultiplexer = lazyConnection.Value;

    LogProvider.IsDisabled = true;
    var connectionEndpoints = _connectionMultiplexer.GetEndPoints();
    _lockFactory = new RedisLockFactory(connectionEndpoints.Select(endpoint => new RedisLockEndPoint
    {
        EndPoint = endpoint,
        Password = configuration.Password,
        Ssl = configuration.Ssl
    }));
}
In Azure, I have changed the Redis resource to use TLS 1.2, and in code I have added this line:
configuration.SslProtocols = SslProtocols.Tls12; // just added
And now nothing works anymore. This is the error I get in Application Insights:
Error connecting to Redis. It was not possible to connect to the redis server(s); ConnectTimeout
I have also tried adding ",ssl=True,sslprotocols=tls12" to the Redis connection string, but with the same result.
Try referencing StackExchange.Redis instead of StackExchange.Redis.StrongName. I have done that in a few of my projects and now it works; however, some third-party packages still use StrongName rather than the normal Redis package. StackExchange.Redis.StrongName is now deprecated: https://github.com/Azure/aspnet-redis-providers/issues/107. I assume you are trying to connect to Azure Redis in relation to them stopping TLS 1.0 and 1.1 support?

How to create a MySQL adapter for socket.io in NodeJS?

I have a NodeJS project on 2 Google Cloud instances behind a load balancer. I'm using socket.io and I want to share the sessions between the instances.
Usually developers do this with socket.io-redis, but I don't want Redis just for that. I have Cloud SQL (aka MySQL), and I want to use MySQL for sharing the sessions.
I understand the whole index.js of the redis adapter file, except this function:
https://github.com/socketio/socket.io-redis/blob/master/index.js#L93
Redis.prototype.onmessage = function (channel, msg) {
    var args = msgpack.decode(msg);
    var packet;

    // Drop messages that this instance published itself.
    if (uid == args.shift()) return debug('ignore same uid');

    packet = args[0];
    if (packet && packet.nsp === undefined) {
        packet.nsp = '/';
    }
    if (!packet || packet.nsp != this.nsp.name) {
        return debug('ignore different namespace');
    }

    // Re-broadcast the packet locally; the extra flag marks it as remote
    // so it is not published back to Redis again.
    args.push(true);
    this.broadcast.apply(this, args);
};
If I need to get events from MySQL (subscribe), I think it is not possible. Am I right?
Do you know another solution for sharing socket.io between two machines without using Redis?
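For context, this is how the Redis adapter is normally wired in; just a sketch with assumed localhost ports, not part of your code. Whatever replaces Redis has to supply the same publish/subscribe channel that onmessage listens on.

var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

// Each instance publishes outgoing packets to a Redis channel and
// receives the other instances' packets in onmessage (the function above).
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

Plain MySQL has no equivalent push channel, so a MySQL-backed adapter would have to emulate one (for example by polling a table), which is why these adapter libraries are built on Redis pub/sub.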

Embedded Elastic with NodeJS

I have used embedded Elasticsearch as part of a Spring application in Java like this:
Node node;

@SuppressWarnings("unused")
@Bean
public Client es() {
    node = nodeBuilder().local(true).node();
    Client client = node.client();
    boolean indexExists = client.admin().indices().prepareExists(INDEX).execute().actionGet().isExists();
    if (!indexExists) {
        client.admin().indices().prepareCreate(INDEX).execute().actionGet();
    }
    return client;
}
I'm trying to do something similar with NodeJS so I don't have to create an Elasticsearch instance separately (super low traffic). In the Spring case, I just set .local(true) and it's good to go. I can't find any option like that in Node.
This is what I'm doing now:
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
    // log: 'trace',
    host: 'localhost:9200'
});
and it works fine for an external server.
You can't have an embedded Elasticsearch node client in NodeJS; that only works on the JVM. The second method (connecting to an external Elasticsearch server) is the way to go.
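If it helps, the index-exists/create part of the Spring snippet maps onto the external-server approach roughly like this. A sketch using the same legacy elasticsearch client as above; INDEX is a placeholder for your index name.

var elasticsearch = require('elasticsearch');

var INDEX = 'my-index'; // placeholder index name
var client = new elasticsearch.Client({ host: 'localhost:9200' });

// Mirror the Java bean: create the index only if it does not already exist.
client.indices.exists({ index: INDEX })
    .then(function (exists) {
        if (!exists) {
            return client.indices.create({ index: INDEX });
        }
    })
    .then(function () {
        console.log('index "' + INDEX + '" is ready');
    })
    .catch(function (err) {
        console.error('elasticsearch error: ' + err);
    });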
