Is there any way to use Pulsar with the schema registry via the WebSocket or REST API? - apache-pulsar

I have a Java client consumer that is receiving Pulsar (v2.10.0) AVRO messages (Employees), like this:
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;
import org.apache.pulsar.client.api.Schema;
import example.Employee;
public class TestConsumer {
    public static void main(String[] args) throws PulsarClientException, InterruptedException {
        final String broker = "pulsar://localhost:6650";
        final String topic = "persistent://public/default/avrotopic";
        PulsarClient client = PulsarClient.builder().serviceUrl(broker).build();
        Consumer<Employee> consumer = client.newConsumer(Schema.AVRO(Employee.class)).topic(topic)
                .subscriptionName("first-subscription")
                .subscribe();
        Message<Employee> message = consumer.receive();
        Employee employeeObj = message.getValue();
        System.out.println("Received Employee: " + employeeObj.getName());
        consumer.acknowledge(message);
        consumer.close();
        client.close();
    }
}
The topic's AVRO schema is:
{
    "version": 0,
    "type": "AVRO",
    "timestamp": 0,
    "data": "{\"type\":\"record\",\"name\":\"Employee\",\"namespace\":\"example\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}",
    "properties": {
        "__jsr310ConversionEnabled": "false",
        "__alwaysAllowNull": "true"
    }
}
When producing messages via a corresponding Java client producer, everything works fine: messages get deserialized into Employee objects.
Now I'm trying to get the same result when producing messages via the WebSocket API or REST API instead.
For a WebSocket API producer, I have tried:
ws://localhost:8080/ws/v2/producer/persistent/public/default/avrotopic
with message:
{
    "payload": "CEpvaG4="
}
"CEpvaG4=" is the base64 encoded AVRO binary data (name is "John").
The message is accepted and received, but the consumer throws an exception:
Exception in thread "main" org.apache.pulsar.shade.com.google.common.util.concurrent.UncheckedExecutionException: org.apache.pulsar.shade.org.apache.commons.lang3.SerializationException: Failed at fetching schema info for EMPTY
at org.apache.pulsar.shade.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
at org.apache.pulsar.shade.com.google.common.cache.LocalCache.get(LocalCache.java:3951)
at org.apache.pulsar.shade.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3973)
at org.apache.pulsar.shade.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4957)
at org.apache.pulsar.client.impl.schema.StructSchema.decode(StructSchema.java:107)
at org.apache.pulsar.client.impl.MessageImpl.getValue(MessageImpl.java:301)
at com.delti.esb.example.example_consumer.TestConsumer.main(TestConsumer.java:23)
Caused by: org.apache.pulsar.shade.org.apache.commons.lang3.SerializationException: Failed at fetching schema info for EMPTY
at org.apache.pulsar.client.impl.schema.StructSchema.getSchemaInfoByVersion(StructSchema.java:220)
at org.apache.pulsar.client.impl.schema.AvroSchema.loadReader(AvroSchema.java:93)
at org.apache.pulsar.client.impl.schema.StructSchema$1.load(StructSchema.java:75)
at org.apache.pulsar.client.impl.schema.StructSchema$1.load(StructSchema.java:72)
at org.apache.pulsar.shade.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527)
at org.apache.pulsar.shade.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2276)
at org.apache.pulsar.shade.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2154)
at org.apache.pulsar.shade.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2044)
... 6 more
Since the WebSocket API does not support the AVRO schema registry, according to the feature list, I guess this is not surprising, though.
For a REST API producer, I have tried:
curl --location --request POST 'http://localhost:8080/topics/persistent/public/default/avrotopic' \
--header 'Content-Type: application/json' \
--data-raw '{
"valueSchema":"{\"schema\":\"eyJuYW1lc3BhY2UiOiJleGFtcGxlIiwiZmllbGRzIjpbeyJuYW1lIjoibmFtZSIsInR5cGUiOiJzdHJpbmcifV0sInR5cGUiOiJyZWNvcmQiLCJuYW1lIjoiRW1wbG95ZWUifQ==\",\"properties\":{\"__jsr310ConversionEnabled\":\"false\",\"__alwaysAllowNull\":\"true\"},\"schemaDefinition\":\"{\\\"namespace\\\":\\\"example\\\",\\\"fields\\\":[{\\\"name\\\":\\\"name\\\",\\\"type\\\":\\\"string\\\"}],\\\"type\\\":\\\"record\\\",\\\"name\\\":\\\"Employee\\\"}\",\"name\":\"avrotopic\",\"type\":\"AVRO\"}",
"messages":[
{"payload":"CEpvaG4="}
]
}'
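(For reference, the "schema" field inside valueSchema above is just the base64-encoded Avro schema definition, which can be verified with a couple of stdlib calls:)

```python
import base64
import json

encoded = "eyJuYW1lc3BhY2UiOiJleGFtcGxlIiwiZmllbGRzIjpbeyJuYW1lIjoibmFtZSIsInR5cGUiOiJzdHJpbmcifV0sInR5cGUiOiJyZWNvcmQiLCJuYW1lIjoiRW1wbG95ZWUifQ=="
schema = json.loads(base64.b64decode(encoded))
print(schema["name"])  # Employee
```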
Response:
{
"messagePublishResults": [
{
"messageId": "10:2:-1",
"errorCode": 0,
"schemaVersion": 0
}
],
"schemaVersion": 0
}
So the message is accepted and also received by the consumer, but the payload always seems to be empty when consumed. I tried to model the request on the JSON example documented at https://pulsar.apache.org/docs/client-libraries-rest/ but I'm clearly missing something.
Is there any way to get this working?
If not, I guess I have to send the base64-encoded AVRO without using the schema registry and do the deserialization in the application.

Currently, there isn't a way to specify the schema when creating a WS producer/consumer.
The best option is to specify the AVRO schema on the topic itself and then set the schema compatibility setting for the topic as ALWAYS_COMPATIBLE.
This will allow the WS producer to publish the raw bytes (which are really in Avro format) to the topic. Then the Java Avro consumer will be able to deserialize Avro messages as expected.
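Under that setup the WS producer just has to publish bytes that are valid Avro for the topic's schema. For the one-field Employee record above, the encoding is simple enough to build without an Avro library; a minimal sketch (assuming the single string field shown in the schema, since a record is encoded as the concatenation of its fields):

```python
import base64

def avro_string(s: str) -> bytes:
    """Avro binary string encoding: zig-zag varint byte length, then UTF-8 bytes."""
    data = s.encode("utf-8")
    n = len(data) << 1  # zig-zag encoding of a non-negative int doubles it
    out = bytearray()
    while n >= 0x80:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)
    return bytes(out) + data

# A one-field record is encoded as just its field, so for Employee{name}:
payload = base64.b64encode(avro_string("John")).decode("ascii")
print(payload)  # CEpvaG4= -- the same value used by hand above
```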

Related

Getting message in ibmmq Node.js

I'm using the ibmmq module https://github.com/ibm-messaging/mq-mqi-nodejs
I need to get an XML message from a queue and then apply an XSL transformation.
I put messages on the queue with JMeter, and if I browse them in rfhutil I can see them as-is on the Data tab.
But when I get it in the code
function getCB(err, hObj, gmo, md, buf, hConn) {
    // If there is an error, prepare to exit by setting the ok flag to false.
    if (err) {
        // ...
    } else {
        if (md.Format == "MQSTR") {
            console.log("message <%s>", decoder.write(buf));
        } else {
            console.log("binary message: " + buf);
        }
    }
}
I get my message with some service information:
buf=RFH �"�MQSTR � <mcd><Msd>jms_text</Msd></mcd> X<jms><Dst>queue://MY_QM/MY_QUEUE</Dst><Tms>1657791724648</Tms><Dlv>2</Dlv></jms> ...My_message...
How can I get only my message, like I do in rfhutil?
I can extract it with string methods, but that feels like a hack.
That message has the headers created by a JMS application. There are various ways of dealing with it. You can:
Have the sending app disable sending that structure (by setting the targClient property)
Use GMO options to ignore the properties (MQGMO_NO_PROPERTIES)
Have your application deal with the RFH2 structure. See for example the amqsget.js sample in the Node.js repo, which includes this fragment:
switch (format) {
case MQC.MQFMT_RF_HEADER_2:
    hdr = mq.MQRFH2.getHeader(buf);
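If you would rather slice the buffer yourself instead of using the sample's helper, the fixed part of the MQRFH2 carries the total header length (StrucLength) at offset 8, so the body can be recovered by skipping that many bytes. A rough Python sketch (the synthetic test header and the endianness fallback are illustrative assumptions, not the sample's code):

```python
import struct

def strip_rfh2(buf: bytes) -> bytes:
    # No "RFH " eye-catcher: the buffer is already just the body.
    if buf[:4] != b"RFH ":
        return buf
    # StrucLength (total header size incl. folders) sits at offset 8; its
    # byte order follows the message's Encoding, so try both and sanity-check.
    for fmt in ("<i", ">i"):
        (struc_len,) = struct.unpack_from(fmt, buf, 8)
        if 36 <= struc_len <= len(buf):  # fixed part of MQRFH2 is 36 bytes
            return buf[struc_len:]
    return buf

# Synthetic little-endian header (36-byte fixed part, no folders) for testing:
hdr = (b"RFH " + struct.pack("<iii", 2, 36, 546)   # Version, StrucLength, Encoding
       + struct.pack("<i", 1208) + b"MQSTR   "     # CodedCharSetId, Format
       + struct.pack("<ii", 0, 1208))              # Flags, NameValueCCSID
msg = hdr + b"My_message"
print(strip_rfh2(msg))  # b'My_message'
```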

Producing event to kafka topic in a Fire and Forget way in .NET Web API

Scenario:
My web API has to produce a message to a Kafka topic, and in the same request context I have to send the response back to the client.
My request processing has two functions:
SendToKafka()
ReturnResponse()
I have to return the response even if the SendToKafka method fails (perhaps because the Kafka server is not available).
My approach was to run the ProduceMessage method in a Task with a timeout of 100 ms, so that the thread won't wait for the task longer than that.
Main()
{
    SendToKafka();
    return ReturnResponse();
}

void SendToKafka()
{
    try
    {
        var task = Task.Run(() => { ProduceMessage(); });
        if (task.Wait(100))
        {
            // log message as successful
        }
        else
        {
            // timed out
        }
    }
    catch
    {
        // log exception
    }
}
I am using a custom wrapper around Confluent Kafka that does not provide an option to set a timeout or cancellation token when producing a message to a topic.
Is there a better way to solve this issue?

Is there a specific payload format for <https://authserver.mojang.com/authenticate>?

I am currently trying to write code to find out what has changed in the authentication of migrated Minecraft accounts. This is my code:
import requests
from uuid import uuid4

uuid = uuid4().hex  # used as the client token
payload = {
    "agent": {
        "name": "Minecraft",
        "version": 1
    },
    "username": "AGmailAccount@gmail.com",
    "password": "APasswordToTheAccount",
    "clientToken": uuid,
    "requestUser": True
}
print(requests.post("https://authserver.mojang.com/authenticate", headers={"content-type": "application/json"}, data=payload))
Every time I run it I get a 400 error code; I should be getting either an appropriate non-200 HTTP status code or a JSON-encoded dictionary.
My resources are:
<https://wiki.vg/Mojang_API> ,
<https://wiki.vg/Authentication> ,
and the mojangapi library <https://pypi.org/project/mojang-api/>
Try this line instead:
print(requests.post("https://authserver.mojang.com/authenticate", json=payload))
You weren't sending the JSON the right way.
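The difference matters because requests form-encodes a dict passed via data=, which mangles the nested agent object, while json= serializes the whole structure and sets the Content-Type header. A small illustration (the credentials here are dummies):

```python
import json
from urllib.parse import urlencode

payload = {
    "agent": {"name": "Minecraft", "version": 1},
    "username": "user@example.com",   # dummy credentials
    "password": "secret",
}

# data=payload: requests form-encodes the dict, so the nested "agent" dict
# is stringified into something the server cannot parse as JSON (hence 400)
form_body = urlencode(payload)

# json=payload: the body is real JSON and Content-Type is set automatically
json_body = json.dumps(payload)
```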

Spring Cloud Data Flow httpclient

I have the following stream.
Context of the problem
1.
rabbit --password='******' --queues=springdataflow-q --virtual-host=springdataflow --host=172.24.172.184 --username=springdataflow | transform | httpclient --url-expression='http://172.20.24.47:8080/push' --http-method=POST --headers-expression={'Content-Type':'application/x-www-form-urlencoded'} --body-expression={arg1:payload} | log
2.
I have spring boot running locally.
@RestController
public class HelloController {
    @RequestMapping(value = "/push", method = RequestMethod.POST, produces = {MediaType.TEXT_PLAIN})
    public String pushMessage(@RequestParam(value = "arg1") String payload) {
        System.out.println(payload);
        return payload;
    }
}
I would like the rabbit message to come into httpclient as the value of the 'arg1' parameter of the POST request. The intent is that a message published on the rabbit queue is consumed by a REST POST endpoint, with the message captured by the SpEL payload.
For this I am using body-expression={arg1:payload}, but it is not working; maybe it is syntactically wrong.
Any suggestions ?
The @RequestParam(value="arg1") is really about a request parameter, the part of the URL after the ?, which is called the query string: https://en.wikipedia.org/wiki/Query_string.
So, if you really would like to have an arg1=payload pair in the query string, you need to use a proper url-expression:
--url-expression='http://172.20.24.47:8080/push?arg1='+payload
This seems to work for passing strings as payloads. It seems that by default the payload becomes the request body.
So on the rest service I made a change:
@RequestMapping(value = "/pushbody", method = RequestMethod.POST, consumes = {MediaType.TEXT_PLAIN})
public String pushBody(@RequestBody String payload) {
    System.out.println(payload);
    return payload;
}
And the stream that seems to work now is:
rabbit --password='******' --queues=springdataflow-q1 --host=172.24.172.184 --virtual-host=springdataflow --username=springdataflow | httpclient --http-method=POST --headers-expression={'Content-Type':'text/plain'} --url=http://172.20.24.47:8080/pushbody | log
I did try the inputType=text/plain suggestion on both httpclient and the log sink, and removing consumes and produces on the REST service POST method, but no luck there.

How can I get GWT RequestFactory to work in a Gadget?

How can I get GWT RequestFactory to work in a Gadget?
Getting GWT-RPC to work with Gadgets is explained here.
I'm looking for an analogous solution for RequestFactory.
I tried using the GadgetsRequestBuilder; so far I've managed to get the request to the server using:
requestFactory.initialize(eventBus, new DefaultRequestTransport() {
    @Override
    protected RequestBuilder createRequestBuilder() {
        return new GadgetsRequestBuilder(RequestBuilder.POST, getRequestUrl());
    }

    @Override
    public String getRequestUrl() {
        return "http://....com/gadgetRequest";
    }
});
But I end up with the following error:
java.lang.StringIndexOutOfBoundsException: String index out of range: 0
at java.lang.String.charAt(String.java:694)
at com.google.gwt.autobean.server.impl.JsonSplittable.create(JsonSplittable.java:35)
at com.google.gwt.autobean.shared.impl.StringQuoter.split(StringQuoter.java:35)
at com.google.gwt.autobean.shared.AutoBeanCodex.decode(AutoBeanCodex.java:520)
at com.google.gwt.requestfactory.server.SimpleRequestProcessor.process(SimpleRequestProcessor.java:121)
The general approach for sending a RequestFactory payload should be the same as RPC. You can see the payload that's being received by the server by running it with the JVM flag -Dgwt.rpc.dumpPayload=true. My guess here is that the server is receiving a request with a zero-length payload. What happens if you set up a simple test involving a GadgetsRequestBuilder sending a POST request to your server? Do you still get the same zero-length payload behavior?