I am calling a web service that returns a JSON message.
In my Spring Integration application I want generic processing of this message (without having to write a domain object) by converting it into pipe-delimited key-value pairs and building the downstream flow based on tokens. How would I approach this in Spring Integration?
Thanks!
You could use a json-to-object-transformer to create, e.g., a LinkedHashMap from the JSON, then use a custom transformer to transform the map into your pipe-delimited format.
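For instance, a minimal sketch of the map-to-pipe step in plain Java (the "key=value" format, the "|" delimiter, and the class name are my assumptions, not anything mandated by Spring Integration; in a flow this could be wired in after the json-to-object-transformer as a POJO-backed transformer):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class MapToPipeTransformer {

    // Flattens a map (e.g. the LinkedHashMap produced by a
    // json-to-object-transformer) into "key=value|key=value" form.
    public static String transform(Map<String, Object> payload) {
        return payload.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("|"));
    }

    public static void main(String[] args) {
        Map<String, Object> json = new LinkedHashMap<>();
        json.put("id", 42);
        json.put("status", "OK");
        System.out.println(transform(json)); // id=42|status=OK
    }
}
```

Note this only handles a flat map; nested JSON objects would need a flattening step first, and that is where your token scheme would come in.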
Related
I have two applications - the first produces messages using spring-cloud-stream/function with the AWS Kinesis Binder, the second is an application that builds off of spring integration to consume messages. Communicating between the two is not a problem - I can send a message from "stream" and handle it easily in "integration".
When I want to send a custom header, though, there is an issue. The header arrives at the consumer as an embedded header using the "new" format (it has a 0xff at the beginning, etc.) - see AbstractMessageChannelBinder#serializeAndEmbedHeadersIfApplicable in spring-cloud-stream.
However, the KinesisMessageDrivenChannelAdapter (spring-integration-aws) does not seem to understand the "new" embedded header form. It uses EmbeddedJsonHeadersMessageMapper (see #toMessage), which cannot "decode" the message. It throws a com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'ÿ': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false') because of the additional information included in the embedded header (0xff and so on).
I need to send the header across the wire (the header is used to route on the other side), so it's not an option to "turn off" headers on the producer. I don't see a way to use the "old" embedded headers.
I'd like to use spring-cloud-stream/function on the producer side - it's awesome. I wish I could redo the consumer, but...
I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter.
Given the close relationship between spring-cloud-stream and spring-integration, I must be doing something wrong. Does Spring Integration have an OutboundMessageMapper that understands the new embedded form?
Or is there a way to coerce spring-cloud-stream to use a different embedding strategy?
I could use Spring Integration on the producer side. (sad face).
Any thoughts? Thanks in advance.
understands the new format
It's not a "new" format, it's a format that Spring Cloud Stream created, originally for Kafka, which only added header support in 0.11.
I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter.
I suggest you do that, and consider contributing it to the core Spring Integration Project alongside the EmbeddedJsonHeadersMessageMapper so that it can be used with all technologies that don't support headers natively.
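If you do go that route, the layout written by the embedding code is, roughly: a 0xff marker byte, a 1-byte header count, then for each header a length-prefixed (1 byte) name and a length-prefixed (4 bytes, big-endian) JSON value, followed by the original payload. Below is a rough decoder sketch under that assumed layout; a real mapper would implement InboundMessageMapper<byte[]> and should probably just delegate to EmbeddedHeaderUtils rather than hand-parse, and the encoder here exists only to exercise the decoder:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class EmbeddedHeaderCodec {

    // Assumed layout: 0xff | count(1) | { nameLen(1) name | valueLen(4) jsonValue }* | payload
    // Header values are kept as raw JSON strings for simplicity.
    public static Map<String, String> decodeHeaders(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        if ((buf.get() & 0xFF) != 0xFF) {
            throw new IllegalArgumentException("No embedded-header marker");
        }
        int count = buf.get() & 0xFF;
        Map<String, String> headers = new LinkedHashMap<>();
        for (int i = 0; i < count; i++) {
            byte[] name = new byte[buf.get() & 0xFF];
            buf.get(name);
            byte[] value = new byte[buf.getInt()];
            buf.get(value);
            headers.put(new String(name, StandardCharsets.UTF_8),
                        new String(value, StandardCharsets.UTF_8));
        }
        return headers; // remaining bytes in buf are the original payload
    }

    // Counterpart encoder, used here only to round-trip the decoder.
    public static byte[] encode(Map<String, String> headers, byte[] payload) {
        int size = 2 + payload.length;
        for (Map.Entry<String, String> e : headers.entrySet()) {
            size += 1 + e.getKey().getBytes(StandardCharsets.UTF_8).length
                  + 4 + e.getValue().getBytes(StandardCharsets.UTF_8).length;
        }
        ByteBuffer buf = ByteBuffer.allocate(size);
        buf.put((byte) 0xFF).put((byte) headers.size());
        for (Map.Entry<String, String> e : headers.entrySet()) {
            byte[] n = e.getKey().getBytes(StandardCharsets.UTF_8);
            byte[] v = e.getValue().getBytes(StandardCharsets.UTF_8);
            buf.put((byte) n.length).put(n).putInt(v.length).put(v);
        }
        buf.put(payload);
        return buf.array();
    }
}
```

Verify the exact byte layout against EmbeddedHeaderUtils in your spring-cloud-stream version before relying on this.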
I know this is possible with the REST API, but I would like to know if there is any way to upload documents that are already valid JSON without the need to deserialize them and provide an object.
This is purely a performance optimization. Our SQL query returns the object as JSON, and we would prefer not to deserialize it in .NET, consuming resources, just to be able to upload it to Azure Search.
After thinking through a custom JsonConverter scenario where I store a string and retrieve it, I have dismissed that option as probably not worthwhile.
No, the SDK does not support this scenario. You'd be better off writing a simple client to send your JSON to the REST API directly.
Edited: I had left in the name of a custom type instead of the direct Service Fabric interface.
I am trying to write an interceptor capable of interrogating the parameters being passed to a remoting service. I can intercept the IServiceRemotingRequestMessage once it gets to the service and am able to extract the parameters, but ONLY if I know the position and name of the parameter at the time.
[Pseudo]
var someParam = IServiceRemotingRequestMessageBody.GetParameter(0, "request", serviceRequestInfo.RequestMessage.GetBody().GetType());
What I need is a way to simply iterate the parameters and work with them directly (currently just serialize them to a string so I can log some of the info being passed). However, the IServiceRemotingRequestMessageBody only exposes a GetParameter method that must be passed the index and the name...
I can maybe do some reflection work given the method name and the service contract but I'm hoping there is a much more straightforward way to get this directly.
Thanks for any tips,
Will
There may be an easier way using the default serialization, but the way I solved it, currently, is to replace the Service Fabric serialization providers with JSON Serialization. Then, my interceptors can work with the JSON data as necessary.
I'll assume there is a way to do something similar with the default serialization but, if so, it's not clearly documented how to work with it. If someone proposes an option I would gladly give it a try.
We're trying to write a ServiceStack REST method to receive data from the NLog WebService target.
https://github.com/NLog/NLog/wiki/WebService-target
It appears that NLog will send a WCF-formatted JSON POST based on the class NLogEvents:
http://sourcebrowser.io/Browse/nlog/nlog/src/NLog/LogReceiverService/NLogEvents.cs
We can resolve this object as an argument to a POST method. But how do we specify the route, as we can't decorate the class with a Route attribute?
Also, it appears that this object already has several attributes that were added for the WCF support. Is there another way to specify the POCO receive object?
Also, the NLog web service has flags to format the data as RFC 3986 or RFC 2396, but I'm not sure whether that does anything for us.
Any suggestions would be appreciated.
Have a look at ServiceStack's routing docs, you can register routes on DTOs you don't own using the Fluent API, or dynamically attach attributes to Types.
You don't need to use NLog's exact Types in Services, i.e. you can just use a copy of the DTOs for your Service contract and annotate them freely. If needed you can use Auto Mapping to easily copy data from DTOs to NLog Types.
Going over Microsoft's development notes for the PHP Azure Service Bus library, I see the PHP library has $message->getBody(), whereas the C# library has message.GetBody<T>() and expects a type.
How would we send messages (as simple classes) between PHP/C# in a flexible way that doesn't break should a newer message version be received?
My guess is that the default data type for PHP-based brokered messages is string. Depending on what you are sending, here are a few options:
Send data using the key-value pairs collection on the BrokeredMessage. PHP: $message->setProperty("Key", "Value"); or C#: brokeredMessage.Properties.Add("Key", "Value");
Serialize all of your objects to JSON, then insert the result into the BrokeredMessage body. If retrieving in C#, use BrokeredMessage.GetBody<string>().
Another tip is that in C# you can only call BrokeredMessage.GetBody() once.