Vert.x and Redis: I cannot make them work together - Groovy

I have a simple Vert.x script in Groovy that should send a request to Redis to get a value back:
def eb = vertx.eventBus
def config = [:]
def address = 'vertx.mod-redis-io'
config.address = address
config.host = 'localhost'
config.port = 6379

container.deployModule("io.vertx~mod-redis~1.1.4", config)

eb.send(address, [command: 'get', args: ['mykey']]) { reply ->
    if (reply.body.status.equals('ok')) {
        println 'ok'
        // do something with reply.body.value
    } else {
        println("Error ${reply.body.message}")
    }
}
The value for 'mykey' is definitely stored in my Redis instance (localhost:6379):
127.0.0.1:6379> get mykey
"Hello"
The script starts correctly, but no value is ever returned (no reply).
Am I missing something?

The issue is that you call deployModule and send to the EventBus sequentially, even though the deployment call is asynchronous.
So, when you call deployModule the module deployment gets triggered, but it is not guaranteed to have completed before eb.send is called. You are sending the right command, but it never gets processed because the module is not there yet.
Try the following: move your test command into the async result handler of deployModule:
container.deployModule("io.vertx~mod-redis~1.1.4", config) { asyncResult ->
    if (asyncResult.succeeded) {
        eb.send(address, [command: 'get', args: ['mykey']]) { reply ->
            if (reply.body.status.equals('ok')) {
                println 'ok'
                // do something with reply.body.value
            } else {
                println("Error ${reply.body.message}")
            }
        }
    } else {
        println 'Deployment broken!'
    }
}
The example from https://github.com/vert-x/mod-redis is maybe not ideal, because it is just a snippet to point you in the right direction.
This works because the request is only sent to the bus once the module is deployed, and therefore someone is listening for it. I tested it locally on a Vagrant setup with Redis.
Overall, development in Vert.x is almost always asynchronous; that is its key concept. It takes some time to get used to, but it has its benefits :)
Hope this helps.
Best

Related

Alternative suggestions for this Ping function?

In my game app, I call this:
Runtime.getRuntime().exec("/system/bin/ping -c 1 -w 1 $serverip")
It gives an accurate reading of the ping to my server, but in some exceptional cases the ping doesn't go through (for example, when the player is using mobile data, the command returns nothing in 25% of the cases for no apparent reason).
I am aware there must be other ping commands/functions/methods/protocols to get a ping reading (I am not sure what game companies use to get constant ping readings inside their games). Any suggestions? Thanks in advance.
You could also use the Socket class provided in the java.net package.
Using the provided connect(SocketAddress endpoint) method, you can connect the socket to the server.
For example, you can use something like this:
public static boolean ping(String address, int port) {
    Socket socket = new Socket();
    try {
        socket.connect(new InetSocketAddress(address, port));
    } catch (IOException e) {
        return false;
    } finally {
        try {
            socket.close();
        } catch (IOException ignored) { }
    }
    return true;
}
You can invoke it like this: ping("www.google.com", 443)
Finally, you could use the java.net.URL class to wrap your URL string.
For instance,
URL url = new URL("https://www.google.com:443/");
ping(url.getHost(), url.getPort());
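If you also need an actual latency number and a bound on how long the check can hang (useful on flaky mobile connections), connect() also accepts a timeout as a second argument. Here is a minimal sketch, assuming a TCP connect round trip is an acceptable approximation of "ping"; the PingUtil class name and the 1-second timeout are just illustrative:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class PingUtil {
    // Returns how long the TCP connect took in milliseconds, or -1 if it failed or timed out.
    public static long pingMillis(String address, int port, int timeoutMillis) {
        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            // connect(endpoint, timeout) throws SocketTimeoutException when the timeout elapses
            socket.connect(new InetSocketAddress(address, port), timeoutMillis);
            return (System.nanoTime() - start) / 1_000_000;
        } catch (IOException e) {
            return -1;
        }
    }
}

Usage would be something like: long rtt = PingUtil.pingMillis("www.google.com", 443, 1000);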

Async base-local with MQTT

I need to synchronize a base and a local client with MQTT. If one client publishes, the other one will get the message.
If my MQTT broker is down, I need to stop sending messages, save the messages somewhere, wait for a connection, then continue sending.
If my local or base client is down for a second, I need to save the message which I didn't send, then send it when I turn on my base/local.
I'm working with Node.js and can't figure out how to implement this.
These are my handlers for when I connect to or disconnect from my MQTT server:
client.on('connect', () => {
    store.state = true;
    run(store).then((value) => console.log('stop run'));
});

client.on('offline', () => {
    store.state = false;
    console.log('offline');
});
This is my run function. I use store.state to decide whether I should stop the interval, but this code does not seem like a good way to implement my concept.
function run(store) {
    return new Promise((resolve, reject) => {
        let interval = setInterval(() => {
            if (!store.state) {
                clearInterval(interval);
                resolve(true);
            } else if (store.queue.length > 0) {
                let data = store.queue.pop();
                let res = client.publish('push', JSON.stringify(data), {qos: 2});
            }
        }, 300);
    });
}
What should I do to implement a function which always sends, stops on 'disconnect', then continues sending when reconnected?
I don't think setInterval with 300 ms is a good approach.
If you want something that "always runs" at set intervals, and in spite of any errors inside the loop, setInterval() makes sense. You are right, though, that queued messages could be cleared faster than once every 300 ms.
Since MQTT.js has a built-in queue, you could simplify a lot by using it. However, your messages are published to a topic called "push", so I guess you want them delivered in the order of the queue. This answer keeps the queue and focuses on sending the next message as soon as the last one is confirmed.
What if res = client.publish(..) is false?
Good point! If you want to make sure the message arrives, it is better to remove it from the queue only once the publish has succeeded. For this, you need to peek at the value without removing it, and use the callback argument to find out what happened (publish() is asynchronous). If that were the only change, it might look like:
let data = store.queue[store.queue.length - 1];
client.publish('push', JSON.stringify(data), {qos: 2}, (err) => {
    if (!err) {
        store.queue.pop();
    }
    // Ready for next publish; call this function again
});
Extending that into a Promise-based run:
function publishFromQueue(data) {
    return new Promise((resolve, reject) => {
        client.publish('push', JSON.stringify(data), {qos: 2}, (err) => {
            resolve(!err);
        });
    });
}

async function run(store) {
    while (store.queue.length > 0 && store.state) {
        let data = store.queue[store.queue.length - 1];
        let res = await publishFromQueue(data);
        if (res) {
            store.queue.pop();
        }
    }
}
This should deliver all the queued messages in order as soon as possible, without blocking. The only drawback is that it does not run constantly. You have two options:
Recur at set intervals, as you have already done. Slower, though you could set a shorter interval.
Only run() when needed, like:
let isRunning = false; // Global for tracking whether run() is in progress

function queueMessage(data) {
    store.queue.push(data);
    if (!isRunning) {
        isRunning = true;
        // Reset the flag only once run() has drained the queue (or stopped),
        // otherwise a second loop could start while the first is still running.
        run(store).then(() => {
            isRunning = false;
        });
    }
}
As long as you can call this instead of pushing to the queue directly, it comes out at a similar length, and is more immediate and efficient.
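For completeness, if you do not need manual control of the queue, a minimal sketch of leaning on MQTT.js's own offline handling might look like the following (the broker URL and option values are assumptions, not taken from your setup); outgoing QoS 1/2 publishes made while offline are kept by the client and flushed after it reconnects:

const mqtt = require('mqtt');

// reconnectPeriod makes the client keep retrying the broker on its own.
const client = mqtt.connect('mqtt://localhost:1883', {
    reconnectPeriod: 1000,
    queueQoSZero: false // assumption: drop QoS 0 messages instead of queueing them while offline
});

function queueMessage(data) {
    // With qos: 2 the client stores the packet until the broker confirms it,
    // even across a temporary disconnect, so no separate store.queue is needed.
    client.publish('push', JSON.stringify(data), { qos: 2 });
}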

Akka.net Ask timeout when used in Azure WebJob

At work we have some code in an Azure WebJob where we use RabbitMQ.
The basic workflow is this:
A message arrives on RabbitMQ Queue
We have a message handler for the incoming message
Within the message handler we start a top-level (user) supervisor actor and "ask" it to handle the message
The supervisor actor hierarchy is like this
And the relevant top-level code is something like this (this is the WebJob code):
static void Main(string[] args)
{
    try
    {
        //Bootstrap akka IoC resolver well ahead of any actor usages
        new AutoFacDependencyResolver(ContainerOperations.Instance.Container, ContainerOperations.Instance.Container.Resolve<ActorSystem>());

        var system = ContainerOperations.Instance.Container.Resolve<ActorSystem>();
        var busQueueReader = ContainerOperations.Instance.Container.Resolve<IBusQueueReader>();
        var dateTime = ContainerOperations.Instance.Container.Resolve<IDateTime>();

        busQueueReader.AddHandler<ProgramCalculationMessage>("RabbitQueue", x =>
        {
            //This is code that gets called whenever we have a RabbitMQ message arrive
            try
            {
                //SupervisorActor is a singleton
                var supervisorActor = ContainerOperations.Instance.Container.ResolveNamed<IActorRef>("SupervisorActor");
                var actorMessage = new SomeActorMessage();
                var supervisorRunTask = supervisorActor.Ask(actorMessage, TimeSpan.FromMinutes(25));

                //we want to wait this guy out
                var supervisorRunResult = supervisorRunTask.GetAwaiter().GetResult();
                switch (supervisorRunResult)
                {
                    case CompletedEvent completed:
                    {
                        break;
                    }
                    case FailedEvent failed:
                    {
                        throw failed.Exception;
                    }
                }
            }
            catch (Exception ex)
            {
                _log.Error(ex, "Error found in Webjob");
                //throw it for the actual RabbitMqQueueReader Handler so message gets NACK
                throw;
            }
        });

        Thread.Sleep(Timeout.Infinite);
    }
    catch (Exception ex)
    {
        _log.Error(ex, "Error found");
        throw;
    }
}
And this is the relevant IoC code (we are using Autofac + Akka.NET DI for Autofac):
builder.RegisterType<SupervisorActor>();

_actorSystem = new Lazy<ActorSystem>(() =>
{
    var akkaconf = ActorUtil.LoadConfig(_akkaConfigPath).WithFallback(ConfigurationFactory.Default());
    return ActorSystem.Create("WebJobSystem", akkaconf);
});

builder.Register<ActorSystem>(cont => _actorSystem.Value);

builder.Register(cont =>
{
    var system = cont.Resolve<ActorSystem>();
    return system.ActorOf(system.DI().Props<SupervisorActor>(), "SupervisorActor");
})
.SingleInstance()
.Named<IActorRef>("SupervisorActor");
The problem
So the code is working fine and doing what we want it to, apart from the Akka.NET "ask" timeout shown above in the WebJob code.
Annoyingly, this seems to work fine when I run the WebJob locally, where I can simulate an "ask" timeout by providing a SupervisorActor that simply never responds with a message back to the sender.
That works perfectly on my machine, but when we run this code in Azure, we DO NOT see a timeout for the "ask", even though one of our workflow runs exceeded the "ask" timeout by a mile.
I just don't know what could be causing this behavior. Does anyone have any ideas?
Could there be some Azure-specific config value for the WebJob that I need to set?
The answer to this was to use the async Rabbit handlers, which apparently came out in v5.0 of the C# RabbitMQ client. The official docs still show the sync usage (sadly).
This article is quite good: https://gigi.nullneuron.net/gigilabs/asynchronous-rabbitmq-consumers-in-net/
Once we did this, all was good.
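For reference, a minimal sketch of what an async consumer looks like with the RabbitMQ .NET client (v5.0+); the host name and queue name are assumptions, and the "ask" call just mirrors the snippet above rather than our exact production code:

using System;
using System.Threading;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory
{
    HostName = "localhost",          // assumption: local broker
    DispatchConsumersAsync = true    // required so async handlers are dispatched properly
};

using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    var supervisorActor = ContainerOperations.Instance.Container.ResolveNamed<IActorRef>("SupervisorActor");

    var consumer = new AsyncEventingBasicConsumer(channel);
    consumer.Received += async (sender, ea) =>
    {
        // Await the actor "ask" instead of blocking the handler thread with GetResult()
        var result = await supervisorActor.Ask(new SomeActorMessage(), TimeSpan.FromMinutes(25));
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };

    channel.BasicConsume(queue: "RabbitQueue", autoAck: false, consumer: consumer);
    Thread.Sleep(Timeout.Infinite);
}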

Worker stuck in a Sandbox?

Trying to figure out why I can log in with my REST API just fine on the main thread but not in a worker. All communication channels are operating fine and I am able to load it up, no problem. However, when it tries to send some data it just hangs.
[Embed(source="../bin/BGThread.swf", mimeType="application/octet-stream")]
private static var BackgroundWorker_ByteClass:Class;

public static function get BackgroundWorker():ByteArray
{
    return new BackgroundWorker_ByteClass();
}
On a test script:
public function Main()
{
    fBCore.init("secrets", "my-firebase-id");
    trace("Init");

    //fBCore.auth.addEventListener(FBAuthEvent.LOGIN_SUCCES, hanldeFBSuccess);
    fBCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, hanldeFBSuccess);
    fBCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleIOError);
    fBCore.auth.email_login("admin#admin.admin", "password");
}

private function handleIOError(e:IOErrorEvent):void
{
    trace("IO error");
    trace(e.text); //Nothing here
}

private function hanldeFBSuccess(e:AuthEvent):void
{
    trace("Main login success.");
    trace(e.message); //Complete success.
}
When triggered by a class via an internal worker channel passed from Main on init:
Primordial:
private function handleLoginClick(e:MouseEvent):void
{
    login_mc.buttonMode = false;
    login_mc.play();
    login_mc.removeEventListener(MouseEvent.CLICK, handleLoginClick);
    log("Logging in as " + email_mc.text_txt.text);
    commandChannel.send([BGThreadCommand.LOGIN, email_mc.text_txt.text, password_mc.text_txt.text]);
}
Worker:
...
case BGThreadCommand.LOGIN:
    log("Logging in with " + message[1] + "::" + message[2]); //Log goes to a progress channel and reaches the main thread successfully.
    fbCore.auth.email_login(message[1], message[2]);
    fbCore.auth.addEventListener(AuthEvent.LOGIN_SUCCES, loginSuccess); //Nothing
    fbCore.auth.addEventListener(IOErrorEvent.IO_ERROR, handleLoginIOError); //Fires
    break;
Auth Rest Class: https://github.com/sfxworks/FirebaseREST/blob/master/src/net/sfxworks/firebaseREST/Auth.as
Is this a worker limitation or a security sandbox issue? I have a deep feeling it is the latter of the two. If that's the case, how would I load the worker in a way that also gives it the proper permissions to act?
Completely ignored the giveAppPrivileges parameter of the createWorker function. Sorry, Stack Overflow. Sometimes I write bad questions when I get little (or, in this case, no) sleep the night before.
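For anyone hitting the same wall, a minimal sketch of the fix (AIR only; BackgroundWorker is the embedded ByteArray getter from the question, and the second argument of createWorker() is giveAppPrivileges):

import flash.system.Worker;
import flash.system.WorkerDomain;

// Passing true as the second argument gives the worker application-sandbox
// privileges, so its network calls behave like the main thread's.
var worker:Worker = WorkerDomain.current.createWorker(BackgroundWorker, true);
worker.start();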

Testing events that fire multiple times

So, I'm using Vows.js for testing Node apps.
I have some code which emits the same event multiple times.
Vows (0.7.0) seems fine when testing events that fire once, but if your code emits the same event multiple times, vows complains.
A pull request which I believe might solve this problem was submitted over a year ago but nothing seems to have happened with it...
Does anybody know of a test framework which will allow me to test an object which emits the same event n times?
Here's what I mean (in vows):
vows.describe("Vows test").addBatch({
    "A test ": {
        topic: function () {
            var topic = new(events.EventEmitter);
            for (var i = 0; i < 10; i++) {
                topic.emit('woot', 'woot');
            }
            return topic;
        },
        on: {
            "woot": {
                "will catch the woot event": function (ret) {
                    assert.strictEqual(ret, 'woot');
                }
            }
        }
    }
})
Cheers...
