Going over Microsoft's development notes for the PHP Azure Service Bus library, I see the PHP library has $message->getBody(), whereas the C# library has message.GetBody<T>(); and expects a type.
How would we send messages (as simple classes) between PHP/C# in a flexible way that doesn't break should a newer message version be received?
My guess is that the default data type for PHP-based brokered messages is string. Depending on what you are sending, here are a few options:
Send data using the key/value pairs collection on the BrokeredMessage. PHP: $message->setProperty("Key", "Value"); or C#: brokeredMessage.Properties.Add("Key", "Value");
Serialize all of your objects to JSON, then insert them into the BrokeredMessage body. If retrieving in C#, use BrokeredMessage.GetBody<string>().
Another tip is that in C# you can only call BrokeredMessage.GetBody() once.
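If you go the JSON route, a minimal sketch of the C# consumer side could look like the following - assuming the PHP producer writes a JSON string into the message body, and using the classic Microsoft.ServiceBus.Messaging client with Json.NET. The OrderMessage contract is purely illustrative:

```csharp
// Sketch of the C# consumer for a JSON string body sent from PHP.
// Assumes the classic Microsoft.ServiceBus.Messaging client and Json.NET.
using System.IO;
using Microsoft.ServiceBus.Messaging;
using Newtonsoft.Json;

public class OrderMessage   // hypothetical message contract
{
    public string OrderId { get; set; }
    public decimal Amount { get; set; }
    // Properties added in newer message versions that this class doesn't
    // declare are simply ignored by Json.NET, so older consumers keep working.
}

public static class MessageHandler
{
    public static OrderMessage Handle(BrokeredMessage message)
    {
        // Remember: GetBody<T>() may only be called once per message.
        string json = message.GetBody<string>();

        // If the PHP client writes the body as a raw stream rather than a
        // DataContract-serialized string, read it this way instead:
        // string json = new StreamReader(message.GetBody<Stream>()).ReadToEnd();

        return JsonConvert.DeserializeObject<OrderMessage>(json);
    }
}
```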
I have two applications - the first produces messages using spring-cloud-stream/function with the AWS Kinesis Binder, the second is an application built on Spring Integration to consume messages. Communicating between the two is not a problem - I can send a message from "stream" and handle it easily in "integration".
When I want to send a custom header, however, there is an issue. The header arrives at the consumer as an embedded header using the "new" format (it has an 0xff at the beginning, etc.) - see AbstractMessageChannelBinder#serializeAndEmbedHeadersIfApplicable in spring-cloud-stream.
However, the KinesisMessageDrivenChannelAdapter (spring-integration-aws) does not seem to understand the "new" embedded header form. It uses EmbeddedJsonHeadersMessageMapper (see #toMessage) which cannot "decode" the message. It throws a com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'ÿ': was expecting (JSON String, Number, Array, Object or token 'null', 'true' or 'false') because of the additional information included in the embedded header (0xff and so on).
I need to send the header across the wire (the header is used to route on the other side), so it's not an option to "turn off" headers on the producer. I don't see a way to use the "old" embedded headers.
I'd like to use spring-cloud-stream/function on the producer side - it's awesome. I wish I could redo the consumer, but...
I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter.
Given the close relationship between spring-cloud-stream and spring-integration, I must be doing something wrong. Does Spring Integration have an OutboundMessageMapper that understands the new embedded form?
Or is there a way to coerce Spring Cloud Stream to use a different embedding strategy?
I could use Spring Integration on the producer side. (sad face).
Any thoughts? Thanks in advance.
understands the new format
It's not a "new" format; it's a format that Spring Cloud Stream created, originally for Kafka, which only added header support in 0.11.
I could write my own embedded header mapper that understands the new format (use EmbeddedHeaderUtils), and wire it into the KinesisMessageDrivenChannelAdapter.
I suggest you do that, and consider contributing it to the core Spring Integration Project alongside the EmbeddedJsonHeadersMessageMapper so that it can be used with all technologies that don't support headers natively.
We are integrating a new system with DocuSign. The system is built using C# objects.
My question is: what is the best practice for interfacing with DocuSign - calling the DocuSign Web API methods directly, or including the DocuSign C# client library as a reference in our code and calling that?
Thanks!
I recommend using the C# client library. It will save you time and make it very easy to use.
The code for it is also public on GitHub, so if for some reason you want to fork it and use it that way, you can do that too.
The DocuSign C# SDK saves you the bother of:
serializing the request objects into a JSON structure
sending the HTTPS request
deserializing the JSON responses into C# objects.
It also includes helper methods for implementing the OAuth JWT Grant flow.
These are all good reasons to use the SDK.
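To give a sense of what that looks like, here is a minimal, hedged sketch of creating and sending an envelope with the DocuSign.eSign NuGet package - the base path, access token, account id, file path and recipient details are placeholders, and the exact auth wiring varies slightly between SDK versions:

```csharp
// Minimal sketch: create and send an envelope with the DocuSign C# SDK.
// All identifiers below (paths, token, recipient) are placeholders.
using System;
using System.Collections.Generic;
using System.IO;
using DocuSign.eSign.Api;
using DocuSign.eSign.Client;
using DocuSign.eSign.Model;

public static class EnvelopeSender
{
    public static string SendForSignature(string basePath, string accessToken,
                                          string accountId, string pdfPath)
    {
        var apiClient = new ApiClient(basePath);
        apiClient.Configuration.DefaultHeader.Add("Authorization", "Bearer " + accessToken);

        // Note: the SDK BASE64-encodes the document, which adds roughly 33%
        // overhead - hence the ~20 MB effective limit discussed below.
        var document = new Document
        {
            DocumentBase64 = Convert.ToBase64String(File.ReadAllBytes(pdfPath)),
            Name = "Agreement",
            FileExtension = "pdf",
            DocumentId = "1"
        };

        var signer = new Signer
        {
            Email = "signer@example.com",
            Name = "Example Signer",
            RecipientId = "1"
        };

        var envelope = new EnvelopeDefinition
        {
            EmailSubject = "Please sign this agreement",
            Documents = new List<Document> { document },
            Recipients = new Recipients { Signers = new List<Signer> { signer } },
            Status = "sent"
        };

        EnvelopeSummary result = new EnvelopesApi(apiClient).CreateEnvelope(accountId, envelope);
        return result.EnvelopeId;
    }
}
```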
If you expect that you will be regularly sending documents that are above 20MB in size then you may want to implement the Envelopes::create call yourself. Why? Because the current version of the SDK BASE64 encodes the documents you upload to DocuSign.
If you implement the Envelopes::create call yourself, you can send the documents in binary. This isn't so easy to do but is important if you have very large source documents. An example of how to send in binary mode.
Added
Size limits: 25MB per API call. But documents in an API call that are BASE64 encoded have a lot of overhead, so in this case the effective max document size is around 20 MB.
You can have multiple documents in an envelope. To have multiple large documents, create the envelope as a draft, then upload the additional documents as separate API calls.
See the API Limits document
I need to set up an application in Azure and have two functions communicate (one written in C# and one written in JavaScript).
The C# function parses an XML feed, extracts the data into objects, and then sends them to the JavaScript function as parameters.
I read that the two functions could communicate using HTTP calls, but is it possible to pass parameters that way?
If not, would you have any suggestions for achieving something like this properly? I'm getting started with Azure and don't have enough visibility to know what is recommended in such a situation.
Thank you for your advice.
Yes, this is absolutely possible. How you do this is up to you. If you look at the default HTTP trigger templates, you can see that they take parameters (for example, as query string parameters). You can find more examples in the HTTP and webhook recipes documentation.
You can use other trigger types for cross-function communication as well. Take a look at this documentation for related best practices: https://learn.microsoft.com/en-us/azure/azure-functions/functions-best-practices#cross-function-communication
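As a rough sketch, an HTTP-triggered C# function (in-process model) can accept a parameter on the query string and then forward its result to the JavaScript function over HTTP. The function names, route and target URL below are placeholders for illustration:

```csharp
// Sketch: HTTP-triggered C# function that reads a query-string parameter,
// then POSTs its output to the JavaScript function's HTTP endpoint.
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class ParseFeedFunction
{
    private static readonly HttpClient httpClient = new HttpClient();

    [FunctionName("ParseFeed")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = null)] HttpRequest req)
    {
        // Read a parameter from the query string (?feedUrl=...).
        string feedUrl = req.Query["feedUrl"];

        // ... parse the XML feed and build a JSON payload here ...
        var payload = new StringContent("{\"items\":[]}", Encoding.UTF8, "application/json");

        // Pass the data to the JavaScript function as an HTTP POST body.
        var response = await httpClient.PostAsync(
            "https://<your-app>.azurewebsites.net/api/ProcessItems?code=<function-key>", payload);

        return new OkObjectResult(await response.Content.ReadAsStringAsync());
    }
}
```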
We're trying to write a ServiceStack REST method to receive data from the NLog WebService target.
https://github.com/NLog/NLog/wiki/WebService-target
It appears that NLog will send a WCF-formatted JSON POST based on the class NLogEvents
http://sourcebrowser.io/Browse/nlog/nlog/src/NLog/LogReceiverService/NLogEvents.cs
We can resolve this object as an argument to a POST method, but how do we specify the route, since we can't decorate the class with a Route attribute?
Also, it appears that this object already has several attributes that were added for the WCF support. Is there another way to specify the POCO receive object?
Also, the NLog web service has flags to format the data as Rfc3986 or Rfc2396, but I'm not sure if that does anything for us.
Any suggestions would be appreciated.
Have a look at ServiceStack's routing docs: you can register routes on DTOs you don't own using the Fluent API, or dynamically attach attributes to Types.
You don't need to use NLog's exact Types in Services, i.e. you can just use a copy of the DTOs for your Service contract and annotate them freely. If needed you can use Auto Mapping to easily copy data from DTOs to NLog Types.
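As a rough sketch (the route and names are illustrative, with NLogEvents standing in for either NLog's DTO or your own annotated copy), the Fluent registration in your AppHost could look like this:

```csharp
// Sketch: registering a route for a DTO you don't own in a ServiceStack AppHost.
using ServiceStack;

// Stand-in for NLog's LogReceiverService NLogEvents DTO; in practice,
// reference NLog's type directly or keep your own annotated copy of it.
public class NLogEvents { /* fields mirroring NLog's DTO */ }

public class NLogEventsService : Service
{
    public void Post(NLogEvents request)
    {
        // ... persist or process the received log events ...
    }
}

public class AppHost : AppHostBase
{
    public AppHost() : base("Log Receiver", typeof(NLogEventsService).Assembly) { }

    public override void Configure(Funq.Container container)
    {
        // Fluent route registration - no [Route] attribute on the DTO needed.
        Routes.Add<NLogEvents>("/nlog-events", "POST");

        // Alternatively, attributes can be attached to the type dynamically:
        // typeof(NLogEvents).AddAttributes(new RouteAttribute("/nlog-events", "POST"));
    }
}
```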
I am calling a web service that returns a JSON message.
In my Spring Integration application I want generic processing of this message (without having to write a domain object) by converting it into pipe-delimited key/value pairs and building the downstream flow based on tokens. How would I approach this in Spring Integration?
Thanks!
You could use a json-to-object-transformer to create, e.g., a LinkedHashMap from the JSON, then use a custom transformer to transform the map to your format.