I realise outq is used to see the last 100 or so responses for processed messages. However, the objects stored in outq only seem to have the response body, not the originating request, so it can be quite difficult to debug issues.
Is there an easy way to automatically include a copy of the originating inq message as well?
I've found a solution that works. I'm not sure if it's optimal, but it seems to do the job. When defining the handler I just create a new response object and insert the original request into it:
mqService.RegisterHandler<MyRequest>(
    m => {
        var response = ObjectFactory.GetInstance<MyService>().Post((MyRequest) m.Body);
        // Return both the service response and the original request body
        return new { result = response, request = m.Body };
    });
I am trying to stream a response, but I want to be able to read the response (and work with the data) while it is still being sent. I basically want to send multiple messages in one response.
It works internally in Node.js, but when I tried to do the same thing in TypeScript it doesn't work anymore.
My attempt was to make the request via fetch in TypeScript, while the response comes from a Node.js server that writes parts of the response to the response stream:
fetch('...', {
    ...
}).then(response => {
    const reader = response.body.getReader();
    return reader.read().then(({done, value}) => {
        if (done) {
            return response;
        }
        console.log(String.fromCharCode.apply(null, value)); // just for testing purposes
    });
}).then(...)...
On the Node.js side it basically looks like this:
// doing stuff with the request
response.write(first_message)
// do some more stuff
response.write(second_message)
// do even more stuff
response.end(last_message)
In Node.js, like I said, I can just read every message as soon as it is sent, via res.on('data', ...), but reader.read() in TypeScript only seems to trigger once, and only after the whole response has been sent.
Is there a way to make it work the way I want, or do I have to look for another approach?
I hope it is somewhat understandable what I want to do; I noticed while writing this how much I struggled to explain it :D
I found the problem, and as usual it was sitting in front of the PC.
I forgot to write a header first, before writing the response.
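In case it helps anyone else, here is a minimal sketch of what the fixed server side could look like, assuming a plain Node.js http server (the port, messages and timings are placeholders): the headers are sent explicitly with res.writeHead() before the first chunk is written, and on the client side reader.read() then has to be called repeatedly to pick up each chunk as it arrives.
const http = require('http');

http.createServer((req, res) => {
    // Send the status line and headers before writing any body chunks.
    res.writeHead(200, { 'Content-Type': 'text/plain' });

    res.write('first_message');                             // arrives immediately
    setTimeout(() => res.write('second_message'), 1000);    // arrives ~1s later
    setTimeout(() => res.end('last_message'), 2000);        // closes the response
}).listen(3000);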
I have this problem only when I refresh the page, and I cannot solve it; I have tried everything but the same thing keeps happening. It began when I added socket.io to the project. The project runs on several servers which are connected to each other through sockets.
TEST CASES: When I render the page the first time, everything goes well, but if I refresh the same page, I get this error:
ERROR: "Error: Can't set headers after they are sent. at ServerResponse.OutgoingMessage.setHeader (_http_outgoing.js:344:11)"
ATTENTION: when the code goes into the if() branch and sends "return res.end('The Activation Code is INVALID!');", it DOESN'T HAPPEN! I can refresh again and again and everything goes well. My problem is with the RENDER.
MY CODE BELOW:
activationUser = function(req, res, next){
    var data = {
        activationCode : req.params.activationCode,
        now : new Date().valueOf(),
        ip : req.connection.remoteAddress,
        fId : frontalId
    };
    socketCore.emit('activationUser', data);
    socketCore.on(frontalId + 'activationUserResp', function(data){
        if(data.msg == "CHECKED!"){
            next();
        }else{
            return res.end(data.msg);
        }
    });
};

router.get('/activationUser/:activationCode', activationUser, function(req, res){
    var data = {
        activationCode : req.params.activationCode,
        fId : frontalId
    };
    socketCore.emit('step2', data);
    socketCore.on(frontalId + 'step2Resp', function(data){
        if(data.msg == 'err'){
            return res.end('The Activation Code is INVALID!');
        }else{
            return res.render('registro2', {title: 'title | ' + data.name + ' ' + data.lastname, user: data});
        }
    });
});
Thank you!
The particular error you are getting happens when you try to send anything on the res object after the complete response has already been sent. This often occurs because of errors in asynchronous logic. In your particular case, it appears to be because you are assigning a new event handler with socketCore.on() every single time the route is hit. Those event handlers accumulate, and after the first time the route is hit they will execute multiple times, triggering the sending of multiple responses on the same response object and thus triggering that error.
The main ways to fix your particular problem are:
Use .once() instead of .on() so the event handler automatically removes itself after being triggered (see the sketch after this list).
Manually remove the .on() event handler after you get the response.
Move the event handler outside of the route so it's only ever installed once.
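As a rough illustration of the first option only (reusing socketCore, frontalId and the route from the question; this is a sketch, not a drop-in fix for the race condition described below):
router.get('/activationUser/:activationCode', activationUser, function(req, res){
    socketCore.emit('step2', {
        activationCode: req.params.activationCode,
        fId: frontalId
    });
    // .once() removes the listener after it fires once, so handlers
    // no longer accumulate across repeated requests/refreshes.
    socketCore.once(frontalId + 'step2Resp', function(data){
        if (data.msg == 'err') {
            return res.end('The Activation Code is INVALID!');
        }
        res.render('registro2', {title: 'title | ' + data.name + ' ' + data.lastname, user: data});
    });
});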
In your particular case, since socketCore is a shared object available to all requests, it appears that you also have a race condition. If multiple users trigger the '/activationUser/:activationCode' route in the same general time frame, you will register two event handlers with socketCore.on() (one for each request) and you will call socketCore.emit('step2', data) twice. But you have no way of associating which response belongs with which request, and the two responses could easily get mixed up, going to the wrong request.
This highlights how socket.io connections are not request/response. They are message/answer, but unless you manually code a correspondence between a specific message request and a specific answer, there is no way to correlate which goes with which. So, without assigning some particular responseID that lets you know which response belongs to which message, you can't use a socket.io connection like this in a multi-user environment. It will just cause race conditions. It's actually simpler to use an HTTP request/response for this type of data fetching because each response goes only with the request that made it in the HTTP architecture.
You can change your architecture for making the socketCore request, but you will have to manually assign an ID to each request and make sure the server is sending back that ID with the response that belongs to that request. Then, you can write a few lines of code on the receiving side of things that will make sure the right response gets fed to the code with the matching request.
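To make that concrete, here is one possible shape for such a correlation, as a sketch only (the requestId field, the pending map and the use of crypto.randomBytes() are illustrative additions, not part of your code, and the server would have to echo requestId back in its reply):
var crypto = require('crypto');
var pending = {};   // requestId -> the Express res object waiting for it

// Registered once, outside the route, so handlers never accumulate.
socketCore.on(frontalId + 'step2Resp', function(data){
    var res = pending[data.requestId];
    if (!res) return;                 // unknown or already-answered request
    delete pending[data.requestId];
    if (data.msg == 'err') {
        return res.end('The Activation Code is INVALID!');
    }
    res.render('registro2', {title: 'title | ' + data.name + ' ' + data.lastname, user: data});
});

router.get('/activationUser/:activationCode', activationUser, function(req, res){
    var requestId = crypto.randomBytes(8).toString('hex');
    pending[requestId] = res;
    socketCore.emit('step2', {
        activationCode: req.params.activationCode,
        fId: frontalId,
        requestId: requestId
    });
});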
I am building a system with Spring Integration that processes all lines in a file as records. Because some of the String records are malformed I have multiple paths through the application via a Splitter and Aggregator combination (I'm building the Aggregator as we speak).
Further, some of the records are so malformed that they are effectively errors. However, I have a requirement that all records must be processed, so I must identify and log gross malformation errors separately and still finish processing the file. In other words, I cannot fail to process the file; I must only log errors.
Aggregator
I intend to achieve the goal of processing grossly malformed records by modifying the headers on the incoming message and passing the message onward to the Aggregator, which can check for the existence of such a header. I'll effectively be hand-coding some error-handling cases into my processors and aggregator.
My Release Strategy for the Aggregator will be when all messages are processed.
Code Extract
This code comes from a blog entry by Matt Vickery. He constructs an entirely new message (using MessageBuilder and transferring the headers), whereas I will just add something to the Message headers. He includes this code in a gateway which subsequently passes the Message on to the Aggregator.
public Message<AvsResponse> service(Message<AvsRequest> message) {
    Assert.notNull(message, MISSING_MANDATORY_ARG);
    Assert.notNull(message.getPayload(), MISSING_MANDATORY_ARG);

    MessageHeaders requestMessageHeaders = message.getHeaders();
    Message<AvsResponse> responseMessage = null;
    try {
        logger.debug("Entering AVS Gateway");
        responseMessage = avsGateway.send(message);
        if (responseMessage == null)
            responseMessage = buildNewResponse(requestMessageHeaders,
                    AvsResponseType.NULL_RESULT);
        logger.debug("Exited AVS Gateway");
        return responseMessage;
    }
    catch (Exception e) {
        return buildNewResponse(responseMessage, requestMessageHeaders,
                AvsResponseType.EXCEPTION_RESULT, e);
    }
}
Confusion (...at least, that which I know about)
My questions are as follows:
When I have such a release strategy (all messages processed), is that the best way to ensure all messages get through to the Aggregator?
When using an Aggregator it seems like, in practical cases, it would be very common to need access to the Message in some previous step, as opposed to just passing and processing simple POJOs. Would that be true, or is there something I should be doing to simplify my design so I can avoid dealing with the Message directly?
I came across a blog entry by Matt Vickery showing how he achieves what seems to be similar with an Aggregator. I'm using his work as a guide.
P.S. Per Artem Bilan's advice, I'm avoiding creating my own messages and letting SI turn them into Messages
First: it makes no difference to the Aggregator whether the payload is valid or not. Its general purpose is to collect payloads into a List (by default) in a single Message, and it does that via the sequence details from the MessageHeaders.
If you use a Splitter, it is responsible for enriching each produced Message with the default sequence details. So, if you have this configuration:
<splitter/>
<aggregator/>
And if your inbound payload is a List, you end up with a List after the aggregator as well.
I assume that your Splitter just produces String payloads from the file's lines.
Then you pass each Message to some service/transformer.
The result of that can be passed to the Aggregator.
But, as you say, some of the payloads are not valid and your processor fails with an Exception.
So, how about just using try...catch within that POJO method and returning some payload with an error indicator, e.g. a simple String "Oops!"?
As I described before: the result of the POJO method will be pushed into the payload of a Message by the Framework, and the nice part is that the sequence details will still be there in the MessageHeaders too.
I don't see a reason to write a custom ReleaseStrategy for this task, or any other custom Aggregator strategies...
Let me know if anything is unclear.
UPDATE
To add an error indicator to the message headers without throwing an Exception, it really is simpler to build a new Message in code rather than going through some error-channel flow:
try {
    return [GOOD_RESULT];
}
catch (Exception e) {
    return MessageBuilder.withPayload(payload).setHeader("ERROR", e.getMessage()).build();
}
But in this case you should use <service-activator> instead of <transformer>, because the latter doesn't copy headers from the inbound Message, and you really need them (that setHeader value included) for the aggregator.
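For orientation only, a rough sketch of the flow being discussed, with illustrative channel and bean names (your actual splitter, processor and channel wiring will differ):
<!-- Illustrative wiring: split file lines, process each record, re-aggregate. -->
<int:splitter input-channel="fileLines" output-channel="records"
              ref="lineSplitter" method="split"/>

<!-- service-activator rather than transformer, so the inbound headers
     (including the sequence details) are carried onto the output Message -->
<int:service-activator input-channel="records" output-channel="processedRecords"
                       ref="recordProcessor" method="process"/>

<int:aggregator input-channel="processedRecords" output-channel="aggregatedRecords"/>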
Related question: Web API action parameter is intermittently null and http://social.msdn.microsoft.com/Forums/vstudio/en-US/25753b53-95b3-4252-b034-7e086341ad20/web-api-action-parameter-is-intermittently-null
Hi!
I'm creating an ActionFilterAttribute in ASP.NET MVC Web API 4 so I can apply the attribute to the controller action methods that need a token validated before they execute, as in the following code:
public class TokenValidationAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext filterContext)
    {
        //Tried this way
        var result = string.Empty;
        filterContext.Request.Content.ReadAsStringAsync().ContinueWith((r) => content = r.Result);

        //And this
        var result = filterContext.Request.Content.ReadAsStringAsync().Result;

        //And this
        var bytes = await request.Content.ReadAsByteArrayAsync().Result;
        var str = System.Text.Encoding.UTF8.GetString(bytes);

        //omit the other code that use this string below here for simplicity
    }
}
I'm trying to read the content as a string. I tried the three approaches shown in this code, and all of them return an empty string. I know that in Web API the body content of the request can only be read once, so I'm commenting out everything else in the code and just trying to get one of them to return a result. The point is, the client, and even Fiddler, reports a content length of 315 for the request. The same size shows up in the content headers on the server as well, but when we try to read the content, it is empty.
If I remove the attribute and make the same request, the controller is called correctly and the JSON deserialization works flawlessly. If I apply the attribute, all I get is an empty string from the content. It happens ALWAYS, not intermittently as the related questions describe.
What am I doing wrong? Keep in mind that I'm using an ActionFilter instead of a DelegatingHandler because only selected actions require the token validation prior to execution.
Thanks for help! I really appreciate it.
Regards...
Gutemberg
By default, the buffer policy for web-host (IIS) scenarios is that the incoming request's stream is always buffered. You can take a look at System.Web.Http.WebHost.WebHostBufferPolicySelector. Now, as you have figured out, Web API's formatters will consume the stream and will not try to rewind it. This is on purpose, because one could change the buffer policy to make the incoming request's stream non-buffered, in which case the rewinding would fail.
So in your case, since you know that the request is always going to be buffered, you could get hold of the incoming stream as below and rewind it.
Stream reqStream = await request.Content.ReadAsStreamAsync();
if (reqStream.CanSeek)
{
    reqStream.Position = 0;
}

// now try to read the content as string
string data = await request.Content.ReadAsStringAsync();
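Putting that together with the filter from the question, a sketch of how it could look inside OnActionExecuting (blocking with .Result, as in the question's second attempt, since that override is synchronous; this is an illustration, not a hardened implementation):
using System.IO;
using System.Web.Http.Controllers;
using System.Web.Http.Filters;

public class TokenValidationAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(HttpActionContext filterContext)
    {
        var request = filterContext.Request;

        // The formatters have already consumed the buffered stream during
        // parameter binding, so rewind it before reading it again.
        Stream reqStream = request.Content.ReadAsStreamAsync().Result;
        if (reqStream.CanSeek)
        {
            reqStream.Position = 0;
        }

        string data = request.Content.ReadAsStringAsync().Result;

        // ... token validation using 'data' would go here ...
    }
}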
I know writing business logic in getters and setters is a very bad programming practice, but is there any way to handle exceptions if the response is already committed?
What exactly is the meaning of "Response already committed" and "Headers are already sent to the client"?
There's no nice way to handle exceptions once the response is already committed. The HTTP response basically consists of headers and a body. The headers instruct the client (the web browser) how exactly it should deal with the response, e.g. the content type, the content length, the character encoding, the body encoding, the cache instructions, etcetera.
You can see the headers in the HTTP traffic monitor of the web browser's developer tools. Press F12 in Chrome/IE9+/Firefox 23+ and check the "Network" tab; the "Response" tab there shows the response body.
The response body is the actual content, usually in the form of a bunch of HTML code. The server usually has a fixed-size buffer to write the response to. The buffer size depends on the server make/version and configuration and is usually in the range of 2KB~10KB. If this buffer overflows, it is flushed to the other end of the connection, the client. This is the commit of the response. The client has then already received the first part of the response, usually the whole set of headers and maybe a part of the body.
The commit of a response is a point of no return. The server cannot take the already sent bytes back. It's too late to change the response headers (for example, a redirect is basically instructed by a Location header containing the new URL), let alone the response body. The best you can do is append the error information to the already written response body. But this may end up as some weird-looking HTML, as it's not known which HTML tags need to be closed at that point, and the browser may fail to present it in a proper manner.
Apart from avoiding business logic in getters, so that exceptions are not thrown while rendering the response, another way to avoid a committed response is to configure the response buffer size to be as large as the largest page your webapp can serve. How to do that depends on the server make/version. In Tomcat, for example, you can configure it via the bufferSize attribute of the <Connector> element. Note that this won't prevent flushing if your own code is (implicitly) calling flush() on the response output stream.
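For illustration, in Tomcat's server.xml that would look roughly like the following; the 2 MB value is purely an example, and the other attributes are just a typical HTTP connector definition:
<!-- server.xml: let the connector buffer up to ~2 MB before the response commits -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           bufferSize="2097152" />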
Good explanation BalusC, and I would add that PrimeFaces has an issue in their exception handler: they try to redirect to the error page after the response has already been committed. As you said, the only solution I found is to add some extra content to the response body. I override the handler and add this code:
if ( extContext.isResponseCommitted() ) {
    PartialResponseWriter writer = context.getPartialViewContext().getPartialResponseWriter();
    writer.startElement( "script", null );
    writer.write( "window.location.href = '" + errorPageUrl + "';" );
    writer.endElement( "script" );
    writer.getWrapped().endCDATA();
    writer.endElement( "update" );
    writer.getWrapped().endDocument();
}
else {
    extContext.redirect( errorPageUrl );
    context.responseComplete();
}