I have an Elastix server, but I have a big problem: I don't know how to find out who hung up the call (the agent or the client).
How do I determine whether the client or the agent hung up?
There is no way to do it via the web interface without writing a custom context.
The only thing you can do is set up a failover destination for the queue.
There may be some info in the CEL log, but that would need to be checked.
I'm thinking about making a worker script to handle async tasks on my server, using a framework such as ReactPHP, Amp or Swoole that would be running permanently as a service (I haven't made my choice between these frameworks yet, so solutions involving any of these are helpful).
My web endpoints would still be managed by Apache + PHP-FPM as normal, and I want them to be able to send messages to the permanently running script to make it aware that an async job is ready to be processed ASAP.
Pseudo-code from a web endpoint:
$pdo->exec('INSERT INTO Jobs VALUES (...)');
$jobId = $pdo->lastInsertId();
notify_new_job_to_worker($jobId); // how?
How do you typically handle communication from PHP-FPM to the permanently running script in any of these frameworks? Do you set up a TCP / Unix Socket server and implement your own messaging protocol, or are there ready-made solutions to tackle this problem?
Note: in case you're wondering, I'm not planning to use third-party message queue software, as I want async jobs to be stored as part of the database transaction (either the whole transaction succeeds, including committing the pending job, or the whole transaction is discarded). This is my guarantee that no jobs will be lost. If, in the worst case, the message cannot be sent to the running service, missed jobs can still be retrieved from the database at a later time.
If your worker "runs permanently" as a service, it should provide some API to interact with. I use AmPHP in my project for async services, and my services implement HTTP/WebSocket servers (using Amp libraries) as an API transport.
Hey, ReactPHP core team member here. It totally depends on what your ReactPHP/Amp/Swoole process does. Looking at your example, my suggestion would be to use a message broker/queue like RabbitMQ. That way the process can pick a job up when it's ready for it and ack it when it's done. If anything happens to your process in the meantime and it dies, the message will be redelivered as long as it hasn't been acked. You can also do a small HTTP API, but that doesn't guarantee reprocessing of messages on fatal failures. Ultimately it all depends on your design; all three projects are toolsets for building your own architectures and systems, so it's all up to you.
I currently have a VOLTTRON agent that periodically downloads some data from the web in the form of a CSV. I would like to use the DataPublisher example to take that CSV data and push it to pubsub. However, from looking at the code, it seems like the DataPublisher is designed to run once, automatically, as soon as the agent starts up.
So my question then becomes: is there a way to start up the DataPublisher from the original agent (which would itself have some sort of timer or loop)? I would then also need to stop it afterwards.
If I can't do this, my alternatives seem to be modifying the datapublisher to work on a schedule, or altering my other agent to publish the data from the csvs to pubsub.
Any assistance would be greatly appreciated.
The way I would do this is to expose an RPC method on the data publisher that accepts a filename to publish. When called, it would change the file to publish and "start" the publishing of the data. The data publisher agent would always be running, so there isn't a true restart of the publisher.
The other agent (the one that downloaded the data?) would then just need to "kick off" the publishing through the RPC call.
This sounds like a very good feature that could be committed back to the VOLTTRON repository if you saw fit.
I recently created an error manager to take logged errors from clients on our network and put them into an MSMQ for processing. I have a separate Windows Service running on the server to pick items off the queue and push them into a database.
When I wrote and tested it, everything worked great; however, I neglected to consider that at deploy time, having 100 clients all sending to a public queue might not perform well in the best case, and in the worst case there could be all kinds of collisions, it seems to me.
My thought right now is to front the MSMQ with a WCF service and make everyone go through that. The logic is that at that point I could employ some locking, etc. If I went with a service, I think I could also use a private queue instead of a public one, which would be much faster.
What I'm not sure about is: am I overthinking it? MSMQ is pretty robust, and I think the methods are thread-safe. Should I just leave it alone and see what happens? If I do put in the service, how much management would I need to have in place?
I recently created an error manager to take logged errors from clients on our network and put them into an MSMQ for processing
I assume you're using System.Messaging for this? If so there is nothing at all wrong with your approach.
having 100 clients all sending to a public queue might not be performant
MSMQ was designed from the bottom up to handle high load. Depending on the size of the individual messages and the storage threshold of the machine, a queue can hold tens of thousands of messages without any noticeable performance impact.
Because a "send" in MSMQ involves the queue manager on each machine writing messages locally before transmission (a store-and-forward messaging pattern), there is almost no chance of "collisions" or any other form of contention. If the sender is unable to transmit a message, it simply "sends" it to a temporary local queue, and the actual transmission happens in the background, mediated by the fault-tolerant and very reliable MSMQ protocol.
My thought right now is to front the MSMQ with a WCF service and make everyone go through that
This would be a valid choice if you were starting from nothing. As another poster has stated, WCF does shield you from some of the MSMQ voodoo by removing the need to use System.Messaging directly. However, you've already written the code, so I see little benefit in exposing a netMsmqBinding endpoint.
If I went with a service I think I could employ a private queue instead of a public one
As far as I understand it from your description, there's nothing to stop you using a private queue in your current scenario. In fact I'd recommend always using private queues as they're much simpler.
If I do put in the service, how much management would I need to have in place?
You will have more management overhead with a WCF service. Because you're wrapping each end of a send-receive with the WCF stack, there is more code to spin up, and therefore more that can potentially fail. WCF stack exceptions are famously difficult to troubleshoot without full service logging enabled.
EDIT - in response to comments
I think for a private queue you have to actually be writing FROM the machine the queue sits on, which would not work in a networked environment
Untrue. MSMQ supports transactional reads from and writes to any private queue, regardless of whether the queue is local or remote.
This is because any time a message is sent from one machine to another in MSMQ, regardless of the queue address, the following happens:
1. The queue manager on the sending machine writes the message to a temporary local "outbound" queue.
2. The queue manager on the sending machine contacts the queue manager on the receiving machine and transmits the message.
3. The queue manager on the receiving machine places the message into the destination queue.
If you are using transactions, the above steps will comprise 3 distinct transactions.
Something to remember: the safest paradigm in exchanging messages between queues on different machines is send remote, read local.
So this means when you send a message, you're instructing MSMQ to send to a remote queue address. However, when someone sends something to you, they must do the same. So you end up reading only from local queues, and sending only to remote queues.
This way you get the most reliable messaging setup, because when reading, a local queue will always be available.
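As a rough sketch of the "read local" half, a transactional receive from a local private queue looks something like this; the queue path is a placeholder and ProcessError is a hypothetical processing method:

using System;
using System.Messaging;

using (var queue = new MessageQueue(@".\private$\errorQueue"))
{
    // Tell the formatter how to deserialize the message body.
    queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

    using (var tx = new MessageQueueTransaction())
    {
        tx.Begin();
        // Receive and process inside one transaction: if processing throws and
        // the transaction aborts, the message goes back onto the queue.
        // Receive throws MessageQueueException if the timeout expires.
        Message message = queue.Receive(TimeSpan.FromSeconds(30), tx);
        ProcessError((string)message.Body);
        tx.Commit();
    }
}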
Try it! I've been using MSMQ for cross-machine communication for nearly 10 years and I've never used a public queue. I don't even know what they're for!
I would expose a WCF "IsOneWay" method.
And then host your WCF in IIS.
The IsOneWay will wire up to MSMQ.
This way...you have the robustness of IIS hosting. You can expose any endpoint you want.
But eventually the request makes it to MSMQ.
One of the reasons is the ease of using MSMQ with WCF. Having written and used MSMQ "pre-WCF", I found the code (pulling messages off the queue and error handling) difficult and problematic. That alone would push me to WCF hosting.
And as you mention, the security around a local queue is much easier to deal with.
Bottom line, let WCF handle the MSMQ voodoo for you.
Simple example below.
[ServiceContract]
public interface IMyControllerController
{
    [OperationContract(IsOneWay = true)]
    void SubmitRequest(MyObject obj);
}
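A hypothetical client-side call might look like the following, assuming the service listens on a private queue named MyControllerQueue; the address and security mode are placeholders for your environment:

using System.ServiceModel;

// One-way over MSMQ: the call returns as soon as the message is queued locally,
// and MSMQ delivers it to the IIS-hosted service when it becomes available.
var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
var address = new EndpointAddress("net.msmq://localhost/private/MyControllerQueue");
var factory = new ChannelFactory<IMyControllerController>(binding, address);

IMyControllerController proxy = factory.CreateChannel();
proxy.SubmitRequest(new MyObject());

((ICommunicationObject)proxy).Close();
factory.Close();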
http://msdn.microsoft.com/en-us/library/ms733035%28v=vs.110%29.aspx
http://msdn.microsoft.com/en-us/library/system.servicemodel.operationcontractattribute.isoneway%28v=vs.110%29.aspx
What happens in WCF to methods with IsOneWay=true at application termination
http://blogs.msdn.com/b/tomholl/archive/2008/07/12/msmq-wcf-and-iis-getting-them-to-play-nice-part-1.aspx
I have a state machine working as a workflow service, with receive/send-reply activities as triggers for transitions.
Before sending replies back, I have to do some work.
Problems come when exceptions happen in the process before the reply is sent. In that case, if I don't handle the exception, the whole workflow is suspended; in any case, I shouldn't move to the next state if the request wasn't properly handled.
Would it be enough to wrap the whole state machine in a Try/Catch? Would the state machine recover from the last persisted state (I'm using SQL persistence)?
Are there other solutions?
Remark: workflows are hosted in IIS.
Thanks
I have an NServiceBus configuration that is working great on developers machines and in my Development Environment.
However, when I move it to my Test Environment my messages just start getting tossed.
Here is the system:
An app gets a TCP message from a Mainframe system and sends it to an MSMQ (call it FromMainframe).
An application hosted in IIS has a "Handle" method for that MSMQ and processes the messages from the mainframe.
In my Test Environment, step two only halfway happens. The message is popped off the MSMQ, but not processed by my application.
Effectively my data is LOST! NServiceBus removes the messages from the queue, but I never get to process them. They are not even in the error queue!
These are the things I have tried in an attempt to figure out what is happening:
Check the Config files
Attach a remote debugger to the process to see what the Handle method is doing
The Handle method is never called (but when I attach to the Development Environment my breakpoint in my Handle method is hit and it all works flawlessly).
Redeploy my Dev version to the Test Environment and try step 2 again (just in case the versions were not exactly the same).
Check the Config files again
Check that the Error queue is not filling up
The error queue stays empty (I wish it would fill up, then my data would not be LOST).
Check for any other process that may be pulling stuff from my MSMQs
I turned off my IIS website and the messages in the FromMainframe queue started to back up.
When I turn it back on, the messages disappear fairly fast (but still not all at once). The speed that they disappear is too fast for them to be processed by my Handle method.
Check Config files yet again.
Run the NServiceBusTools\MsmqUtils\Runner.exe \i
I ran it, rebooted, ran it again and again for good measure!
Check the Configs again (I must have missed SOMETHING right?)
Check the Development Environment Configs are not pointing to the Test Environment
I don't think it is possible to use another computer's MSMQ as your input queue, but it does not hurt to check.
Look for any catch blocks that could be silently killing my message.
One last check of the Config files.
Recreate my Test Environment on another machine (it worked flawlessly)
Run my stuff outside of IIS.
When I host outside of IIS (using NServiceBus.Host.exe), it all works fine. So it has to be an IIS thing, right?
Go crazy and hope that stack overflow can offer any kind of insight.
So I know enough about what happened to throw out an "Answer".
When I set up my NServiceBus self-hosting, I had a call that loaded the message handlers.
NServiceBus.Configure.With().LoadMessageHandlers()
(There are more configurations, but I omitted them for brevity)
When you call this, NServiceBus scans the assemblies for classes that implement IHandleMessages<T>.
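For reference, the kind of class the scan looks for is just a plain handler; MyMessageType here is the same placeholder message type used later in this answer:

using NServiceBus;

// A handler the assembly scan should discover and wire up automatically.
public class MyMessageHandler : IHandleMessages<MyMessageType>
{
    public void Handle(MyMessageType message)
    {
        // process the message from the mainframe here
    }
}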
So, somehow, on my Test Environment machine, the NServiceBus scan of the directory for a class that implements IHandleMessages was failing to find my class (even though the assembly was absolutely there).
Turns out that if NServiceBus does not find something that handles a message it will THROW IT AWAY!!!
This is a total design bug in my opinion. The whole idea of NServiceBus is to not lose your data, but in this case it does just that!
Now, once you know about this pitfall, there are several ways around it.
Expressly state what your handler(s) should be:
NServiceBus.Configure.With().LoadMessageHandlers<First<MyMessageType>>()
Even further protection is to add another handler that will handle "everything else". IMessage is the base for all message payloads, so if you put a handler on it, it will pick up everything.
If you order the IMessage handler to run after your specific handlers, it will handle everything that NServiceBus can't find a handler for. If you throw an exception in that Handle method, that will cause NServiceBus to move the message to the error queue. (Which is what I think should be the default behavior.)
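A minimal sketch of that catch-all handler; the exception type and message are just examples:

using System;
using NServiceBus;

// Ordered after the specific handlers, this is intended to catch messages that
// nothing else picked up. Throwing here makes NServiceBus retry and eventually
// move the message to the error queue instead of silently discarding it.
public class CatchAllHandler : IHandleMessages<IMessage>
{
    public void Handle(IMessage message)
    {
        throw new InvalidOperationException(
            "No specific handler found for " + message.GetType().FullName);
    }
}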