Can MessageChannel overflow - multithreading

I am working on an AS3 project in FDT 6. I am using the latest Flex 4.6 and AIR 3.7.
I have a worker.swf file that is embedded into the main application to do threading work with.
I am using the MessageChannel class to pass information between the two.
In my main class I have defined
private var mainToWorker:MessageChannel;
private var workerToMain:MessageChannel;
mainToWorker = Worker.current.createMessageChannel(worker);
workerToMain = worker.createMessageChannel(Worker.current);
On the mainToWorker channel I only ever send messages. In these messages I send a byte array of information. The information is an object that contains a 'command' property and a 'props' property, basically acting like a function call: the command is a function name and the props is an object that contains data for that function.
mainToWorkerMutex.lock();
mainToWorker.send(ByteArrayUtils.ObjectToByteArray({command:"DoSomething", props:{propA:1,propB:7}}));
mainToWorkerMutex.unlock();
The same occurs for the workerToMain var except I only send byte data that contains the 'message' and 'props' parameters.
workerToMainMutex.lock();
workerToMain.send(ByteArrayUtils.ObjectToByteArray({command:"complete", props:{return:"result"}}));
workerToMainMutex.unlock();
As a sanity check I make sure that the message channels are getting what they should.
It works fine when I build it in FDT; however, when it is built using an Ant script through Flash Builder, I sometimes get the 'command' messages coming back through the workerToMain channel.
I am sending quite a lot of data through the message channel. Is it possible that I am overloading it and causing a buffer overflow into the other message channel somehow? How could that only be happening in FB?
I have checked my code many times and I am sure there is nothing in my own code that is sending that message back.

I had a similar issue. When sending many byte arrays through channels, sometimes the things I received were not the things I had actually sent. I had 4 channels (message channel to worker, message channel to main, data channel to worker, data channel to main).
I noticed that the data channel to main was affecting the message channel to worker. When I turned off the data channel to main, the message channel to worker started working just fine :D...
It seems they have a big issue there with sending byte arrays.
What helped me was using a shareable byte array (at first it was not shareable) for communication via the channels, but only for communication: as soon as I receive such a byte array I copy it to another byte array and parse the copy.
This removed the problem (and I ran quite hard stress tests there)...
Cheers
P.S. I'm also using static functions (like your ByteArrayUtils) to create the byte arrays used for communication, but that seems fine; I even ran tests using non-static functions.

So, it looks like I have found the issue. Looks like it's the ByteArray that is doing it.
ByteArray.toString() sometimes mangles your data, meaning you can't really trust it.
http://www.actionscript.org/forums/showthread.php3?t=155067
If you read the comment by "Jim Freer" he mentions how strings sometimes do this.
My solution was to switch to using a JSON-encoded string instead of ByteArray data in the message channel. The reason I was using ByteArray data to begin with was that I wanted to preserve class definition information, which JSON doesn't do.


An aggregator that can release when all records are processed, even with errors

I am building a system with Spring Integration that processes all lines in a file as records. Because some of the String records are malformed, I have multiple paths through the application via a Splitter and Aggregator combination (I'm building the Aggregator as we speak).
Further, some of the records are so malformed that they are effectively errors. However, I have a requirement that all records must be processed, so I must identify and log gross malformation errors separately and finish processing the file. In other words, I cannot fail to process the file but instead must only log errors.
Aggregator
I intend to achieve the goal of processing grossly malformed records by modifying the headers on the incoming message and passing the message onward to the Aggregator, which can check for the existence of such a header. I'll effectively be hand-coding some error-handling cases into my processors and aggregator.
My Release Strategy for the Aggregator will be when all messages are processed.
Code Extract
This code comes from a blog entry by Matt Vickery. He constructs an entirely new message (using MessageBuilder and transferring headers), whereas I will just add something to the Message headers (a small sketch of that variant follows the extract). He includes this code in a gateway which subsequently transfers the Message on to the Aggregator.
public Message<AvsResponse> service(Message<AvsRequest> message) {
    Assert.notNull(message, MISSING_MANDATORY_ARG);
    Assert.notNull(message.getPayload(), MISSING_MANDATORY_ARG);

    MessageHeaders requestMessageHeaders = message.getHeaders();
    Message<AvsResponse> responseMessage = null;
    try {
        logger.debug("Entering AVS Gateway");
        responseMessage = avsGateway.send(message);
        if (responseMessage == null)
            responseMessage = buildNewResponse(requestMessageHeaders,
                    AvsResponseType.NULL_RESULT);
        logger.debug("Exited AVS Gateway");
        return responseMessage;
    }
    catch (Exception e) {
        return buildNewResponse(responseMessage, requestMessageHeaders,
                AvsResponseType.EXCEPTION_RESULT, e);
    }
}
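By contrast, the header-only variant described above (flagging the bad record rather than building a whole new response) could be as small as the following sketch; the class, method, and header names are hypothetical, and the package names assume Spring Integration 2.x/3.x:

import org.springframework.integration.Message;
import org.springframework.integration.support.MessageBuilder;

public class MalformationFlagger {

    // Copies the inbound message (payload and headers) and adds a marker
    // header that the Aggregator can later check for.
    public static Message<String> flagGrossMalformation(Message<String> message) {
        return MessageBuilder.fromMessage(message)
                .setHeader("grossMalformation", true)
                .build();
    }
}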
Confusion (...at least, that which I know about)
My questions are as follows:
When I have such a release strategy (all messages processed), is that the best way to ensure all messages get through to the Aggregator?
When using an Aggregator it seems like, in practical cases, it would be very common to need access to the Message in some previous step, as opposed to just passing and processing simple POJOs. Is that true, or is there something I should be doing to simplify my design so I can avoid handling the Message directly?
I came across a blog entry by Matt Vickery showing how he achieves what seems to be similar with an Aggregator. I'm using his work as a guide.
P.S. Per Artem Bilan's advice, I'm avoiding creating my own messages and letting SI turn them into Messages
It makes no difference to the Aggregator whether a payload is valid or not. Its general purpose is to build a List (by default) of payloads into one Message, and it does this via the sequence details in the MessageHeaders. That is the first point.
If you use a Splitter, it is responsible for enriching each produced Message with the default sequence details. So, if you have this configuration:
<splitter/>
<aggregator/>
And if your inbound payload is a List, you end up with a List after the aggregator as well.
I assume that your Splitter just produces String payloads from the file's lines.
Then you pass each Message to some service/transformer.
The result of that you may pass to the Aggregator.
But, as you say, some of the payloads are not valid and your processor fails with an Exception.
So, how about just using try...catch within that POJO method and returning some payload with an error indicator, e.g. the simple String "Oops!"?
As I described before: the result of the POJO method will be pushed into the payload of the Message by the Framework. And the magic is that the sequence details will be there in the MessageHeaders too.
I don't see a reason to write a custom ReleaseStrategy for this task, or any other Aggregator strategies...
Let me know what you don't understand.
UPDATE
To add an error indicator to the message headers without throwing an Exception, it really will be simpler to build a new Message in code, rather than via some error-channel flow:
try {
    return [GOOD_RESULT];
}
catch (Exception e) {
    return MessageBuilder.withPayload(payload)
            .setHeader("ERROR", e.getMessage())
            .build();
}
But in this case you should use <service-activator> instead of <transformer>, because the latter doesn't copy headers from the inbound Message. And you really need them - including that setHeader - for the aggregator.
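To make the service-activator approach concrete, here is a minimal sketch of such a POJO; the class, method, and header names are hypothetical, the parse call stands in for the real record processing, and the package names assume Spring Integration 2.x/3.x:

import org.springframework.integration.Message;
import org.springframework.integration.support.MessageBuilder;

public class RecordProcessor {

    // Invoked via <service-activator>; returning a full Message keeps the
    // splitter's sequence headers available to the downstream aggregator.
    public Message<String> process(Message<String> message) {
        String record = message.getPayload();
        try {
            String result = parse(record); // stand-in for the real record processing
            return MessageBuilder.withPayload(result)
                    .copyHeaders(message.getHeaders())
                    .build();
        }
        catch (Exception e) {
            // Don't break the flow: mark the failure in a header so the
            // aggregator (or a later logging step) can pick it out.
            return MessageBuilder.withPayload(record)
                    .copyHeaders(message.getHeaders())
                    .setHeader("ERROR", e.getMessage())
                    .build();
        }
    }

    private String parse(String record) {
        // Hypothetical placeholder for the real parsing logic.
        return record.trim();
    }
}

With that in place, the aggregator's default correlation and release behaviour (based on the sequence details) applies unchanged, as described above.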

Java exception: "Can't get a Writer while an OutputStream is already in use" when running xAgent

I am trying to implement Paul Calhoun's Apache FOP solution for creating PDFs from XPages (from Notes In 9 #102). I am getting the following Java exception when trying to run the xAgent that does the processing --> Can't get a Writer while an OutputStream is already in use
The only change I have made to Paul's code was the package name. I have isolated the exception to the SSJS line: var jce:DominoXMLFO2PDF = new DominoXMLFO2PDF(); All that line does is instantiate the class; there is no custom constructor. I don't believe it is the code itself but some configuration issue. The SSJS code is in the beforeRenderResponse event where it should be, and I haven't changed anything in the xAgent.
I have copied the jar files from Paul's sample database to mine and verified that the build paths are the same between the two databases. Everything compiles fine (after I did all this). This exception appears to be an XPages-only exception.
Here's what's really going on with this error:
XPages are essentially servlets... everything that happens in an XPage is just layers on top of a servlet engine. There are basically two types of data that a servlet can send back to whatever is initiating the connection (e.g. a browser): text and binary.
An ordinary XPage sends text -- specifically, HTML. Some xAgents also send text, such as JSON or XML. In any of these scenarios, however, Domino uses a Java Writer to send the response content, because Writers are optimized for sending Character data.
When we need to send binary content, we use an OutputStream instead, because streams are optimized for sending generic byte data. So if we're sending PDF, DOC/XLS/PPT, images, etc., we need to use a stream, because we're sending binary data, not text.
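The same distinction exists in the plain Servlet API that XPages ultimately sits on. As a rough, non-Domino illustration (buildPdf() is a hypothetical stand-in for whatever produces the binary document):

import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PdfServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        byte[] pdfBytes = buildPdf(); // hypothetical: produce the binary document
        resp.setContentType("application/pdf");
        resp.setContentLength(pdfBytes.length);
        // Binary content goes through the OutputStream...
        OutputStream out = resp.getOutputStream();
        out.write(pdfBytes);
        out.flush();
        // ...and calling resp.getWriter() after this point would throw an
        // IllegalStateException -- the same one-or-the-other rule explained below.
    }

    private byte[] buildPdf() {
        return new byte[0]; // placeholder
    }
}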
The catch (as you'll soon see, that's a pun) is that we can only use one per response.
Once any HTTP client is told what the content type of a response is, it makes assumptions about how to process that content. So if you tell it to expect application/pdf, it's expecting to only receive binary data. Conversely, if you tell it to expect application/json, it's expecting to only receive character data. If the response includes any data that doesn't match the promised content type, that nearly always invalidates the entire response.
So Domino in its infinite wisdom protects us from making this mistake by only allowing us to send one or the other in a single request, and throws an exception if we disobey that rule.
Unfortunately... if there's any exception in our code when we're trying to send binary content, Domino wants to report that to the consumer... which tries to invoke the output writer to send HTML reporting that something went wrong. Except we already got a handle on the output stream, so Domino isn't allowed to get a handle on the output writer, because that would violate its own rule against only using one per response. This, in turn, throws the exception you reported, masking the exception that actually caused the problem (in your case, probably a ClassNotFoundException).
So how do we make sure that we see the real problem, and not this misdirection? We try:
try {
    /*
     * Move all your existing code here...
     */
} catch (e) {
    print("Error generating dynamic PDF: " + e.toString());
} finally {
    facesContext.responseComplete();
}
There are two reasons this is a preferred approach:
If something goes wrong with our code, we don't let Domino throw an exception about it. Instead, we log it (instead of using print to send it to the console and log, you could also toss it to OpenLog, or whatever your preferred logging mechanism happens to be). This means that Domino doesn't try to report the error to the user, because we've promised that we already reported it to ourselves.
By moving the crucial facesContext.responseComplete() call (which is what ultimately tells Domino not to send any content of its own) to the finally block, this ensures it will get executed. If we left it inside the try block, it would get skipped if an exception occurs, because we'd skip straight to the catch... so even though Domino isn't reporting our exception because we caught it, it still tries to invoke the response writer because we didn't tell it not to.
If you follow the above pattern, and something's wrong with your code, then the browser will receive an incomplete or corrupt file, but the log will tell you what went wrong, rather than reporting an error that has nothing to do with the root cause of the problem.
I almost deleted this question, but decided to answer it myself since there is very little out on Google when you search for the exception.
The issue was in the xAgent: there was an importPackage line that was incorrect. Fixing this made everything work. The exception verbiage, "Can't get a Writer while an OutputStream is already in use", is quite misleading. I don't know what else triggers this exception, but an alternative description would be "Java class ??yourClass?? not found".
If you found this question, then you likely have the same issue. I would ignore what the exception actually says and check your package statements throughout your application. The Java code will error on its own, but the SSJS that references the Java will not error until runtime, so focus on that code.
Updating the response headers after writing the body can solve this kind of problem. Example:
HttpServletResponse response = (HttpServletResponse) facesContext.getExternalContext().getResponse();
response.getWriter().write("<html><body>...</body></html>");
response.setContentType("text/html");
response.setHeader("Cache-Control", "no-cache");
response.setCharacterEncoding("UTF-8");

Websockets with Streaming Archives

So this is the setup I'm working with:
I am on an express server which must stream an archived binary payload to a browser (does not matter if it is zip, tar or tar.gz - although zip would be nice).
On this server, I have a websocket open that connects to another server which is sending me binary payloads of individual files in a directory. I get these payloads streamed, piece-by-piece, as buffers, and I'm doing this serially (that is - file-by-file - there aren't multiple websockets open at one time, and there is one websocket per file). This is the websocket library I'm using: https://github.com/einaros/ws
I would like to go through each file, open a websocket, and then append the buffers to an archiver as they come through the websockets. When data is appended to the archiver, it would be nice if I could stream the output of the archiver to the browser (via the response object with response.write). So, basically, as I'm getting the payload from the websocket, I would like that payload streamed through an archiver and then to the response. :-)
Some things I have looked into:
node-zipstream - This is nice because it gives me an output stream I can pipe directly to response.write. However, it doesn't appear to support nested files/folders, and, more importantly, it only accepts an input stream. I have looked at the source code (which is quite terse and readable), and it seems as though, if I were able to have access to the update function within ZipStream.prototype.addFile, I could just call that each time on the message event when I get a binary buffer from the websocket. This is quite messy/hacky though, and, given that this library already doesn't seem to support nested files/folders, I'm not sure I will be going with it.
node-archiver - This suffers from the same issue as node-zipstream (probably because it was inspired by it) where it allows me to pipe the output, but I cannot append multiple buffers for the same file within the archive (and then manually signal when the last buffer has been appended for a given file). However, it does allow me to have nested folders, which is a clear win over node-zipstream.
Is there something I'm not aware of, or is this just a really crazy thing that I want to do?
The only alternative I see at this point is to wait for the entire payload to be streamed through a websocket and then append with node-archiver, but I really would like to reap the benefit of true streaming/archiving on-the-fly.
I've also thought about the possibility of creating a read stream of sorts just to serve as a proxy object that I can pass into node-archiver, and then just appending the buffers I get from the websocket to this read stream. Looking at various read streams, I'm not sure how to do this though. The only way I could think of was creating a write stream, piping buffers to it, and having a read stream read from that write stream. Am I on the right track here?
As always, thanks for any help/direction you can offer, SO community.
EDIT:
Since I just opened this question, and I'm new to node, there may be a better answer than the one I provided. I will keep this question open and accept a better answer if one presents itself within a few days. As always, I will upvote any other answers, even if they're ridiculous, as long as they're correct and allow me to stream on-the-fly as mine does.
I figured out a way to get this working with node-archiver. :-)
It was based off my hunch of creating a temporary "proxy stream" of sorts, inspired by this SO question: How to create streams from string in Node.Js?
The basic gist is (coffeescript syntax):
archive = archiver 'zip'
archive.pipe response # where response is the http response

# and then for each file...
fileName = ... # known file name
fileSize = ... # known file size
ws = ... # create websocket
proxyStream = new Stream()
numBytesStreamed = 0

archive.append proxyStream, name: fileName

ws.on 'message', (dataBuffer) ->
  numBytesStreamed += dataBuffer.length
  proxyStream.emit 'data', dataBuffer
  if numBytesStreamed is fileSize
    proxyStream.emit 'end'
    # function/indicator to do this for the next file in the folder

# and then when you're completely done...
archive.finalize (err, bytesOfArchive) ->
  if err?
    # do whatever
  else
    # unless you somehow knew this ahead of time
    res.addTrailers
      'Content-Length': bytesOfArchive
    res.end()
Note that this is not the complete solution I implemented. There is still a lot of logic dealing with getting the files, their paths, etc. Not to mention error-handling.

[Asterisk]Attended transfer using hook flash on a SIP channel

Within our organisation we use quite a few different models of telephone sets. The only thing they have in common, apart from the dialpad, is the ability to "send" hook flash. I prefer using this type of signaling for attended transfers over combinations of the usual dialpad keys, to prevent the other end from receiving DTMF tones (and thereby avoid unwanted interactions with IVRs or bothering people on the other end).
2 questions:
How is a hook flash represented in features.conf? According to RFC 2833 section 3.10 (DTMF Events) and this article (which is about a ZAP configuration instead of a SIP configuration, hence my doubt... see the next question, too), it should be just "flash".
From my Google quest I've learned that hook flash gets ignored by the PBX when using the SIP protocol in Asterisk... I do get an error message when sending it: "WARNING[26159]: chan_sip.c:6487 sip_indicate: Don't know how to indicate condition 9". Is there a way to fix it or work around it?
Asterisk version: 1.8.3.2
Using "info" for dtmfmode
Tnx in advance!
In most cases your adapter settings let you choose what to do with hook flash. If yours does, you can change that to the transfer code.
Update: after a code review I can say that DTMF event 16 is received and sent fine in 1.8.x, BUT features.c has no action for flash (event 16).
So it is possible to create an audiohook application for Asterisk that changes DTMF event 16 into two DTMF values or invokes the transfer. That will work for the SIP INFO DTMF method, and the complexity of such a patch is below average (5-6 hours for an expert).

Netty: Pipe-ing output of one Channel to the input of an other

Netty-Gurus,
I've been wondering if there is a shortcut/Netty utility/smart trick for connecting the input of one Channel to the output of another channel. In more detail, consider the following:
1. Set up a Netty (HTTP) server.
2. For an incoming MessageEvent, get its ChannelBuffer.
3. Pipe its input to a NettyClient ChannelBuffer (which is to be set up along the lines of the NettyServer).
I'm interested in how to achieve point 3, since my first thoughts, along the lines of
// mock messageReceived(ChannelHandlerContext ctx, MessageEvent e):
ChannelBuffer bufIn = (ChannelBuffer) e.getMessage();
ChannelBuffer bufOut = getClientChannelBuffer(); // set up somewhere else
bufOut.writeBytes(bufIn);
seem awkward to me because:
A. I have to determine the target ChannelBuffer for each and every messageReceived event.
B. There is too much low-level tinkering.
My wish/vision would be to connect the input of one Channel to the output of another channel and let them do their I/O without any additional coding.
Many thanks in advance!,
Traude
P.S.: This issue has arisen as I'm trying to dispatch the various HTTP requests hitting the server (one entry point) to several other servers, depending on the input content (mapping based on the first HTTP request line). Obviously, I also need to do the inverse trick -- piping the client back to the server -- but I guess it will be similar to the solution of the question before.
Looks like you need to use a multiplexer in your business handler. The business handler could have a map, with the "first HTTP request line" as key and the output channel for the target server as value. Once you do a lookup you just do a channel.write(channelBuffer); a sketch of that idea follows below.
Also take a look at Bruno de Carvalho's TCP tunnel, which may give you more ideas on how to deal with these kinds of requirements.
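Here is a rough sketch of the map-based lookup, assuming the Netty 3.x API and hypothetical names (the outbound client channels are assumed to be connected and registered elsewhere):

import java.nio.charset.Charset;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class MultiplexingHandler extends SimpleChannelUpstreamHandler {

    // Hypothetical routing table: first HTTP request line -> outbound client channel.
    private final Map<String, Channel> routes = new ConcurrentHashMap<String, Channel>();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) throws Exception {
        ChannelBuffer buf = (ChannelBuffer) e.getMessage();
        Channel outbound = routes.get(firstLineOf(buf));
        if (outbound != null && outbound.isWritable()) {
            // Forward the inbound bytes straight to the other server's channel.
            outbound.write(buf);
        } else {
            // No route (or the outbound channel is saturated): close, buffer, or
            // respond with an error as appropriate for your application.
            e.getChannel().close();
        }
    }

    private String firstLineOf(ChannelBuffer buf) {
        // Simplified: decodes the whole buffer; a real handler would parse
        // only up to the first CRLF.
        return buf.toString(Charset.forName("US-ASCII")).split("\r\n", 2)[0];
    }
}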
