protobuf: string content is overlapped when two different instances of the same message are used in multiple threads

Recently I encountered a protobuf problem where string content was overlapped. The scenario is:
1/ I defined a message like this in a proto file:
message fund_inout_rsp {
  uint64 seq_no = 1;
  uint32 err_id = 2;
  string err_msg = 3;
}
and then converted this message to C++ code using protoc --cpp_out.
So I got a class named fund_inout_rsp.
2/ Now there are two threads, A and B.
In thread A:
fund_inout_rsp instance1;
instance1.set_err_msg(u8("123456789"));
instance1.SerializeToArray(/* buffer, size */);
// send out by TCP
In thread B:
fund_inout_rsp instance2;
instance2.set_err_msg(u8("abcd"));
instance2.SerializeToArray(/* buffer, size */);
// send out by TCP
3/ The TCP receiver got the message:
"abcd56789"
It seems that instance1's err_msg was overwritten by instance2's err_msg.
The following is the code auto-generated by protoc:
void fund_inout_rsp::set_err_msg(const char* value) {
    err_msg_.SetNoArena(
        &google::protobuf::internal::GetEmptyStringAlreadyInited(),
        std::string(value));
}
Can anyone give me a hint on how to solve this problem?
Thanks in advance.

Related

How to display unpacked UDP data properly

I'm trying to write a small code which displays data received from a game (F1 2019) over UDP.
The F1 2019 game sends out data via UDP. I have been able to receive the packets, separate the header from the data, and unpack the data according to the structure in which it is sent, using the rawutil module.
The struct in which the packets are sent can be found here:
https://forums.codemasters.com/topic/38920-f1-2019-udp-specification/
I'm only interested in the telemetry packet.
import socket
import cdp
import struct
import array
import rawutil
from pprint import pprint
# settings
ip = '0.0.0.0'
port = 20777
# listen for packets
listen_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listen_socket.bind((ip, port))
while True:
    # Receiving data
    data, address = listen_socket.recvfrom(65536)
    header = data[:20]
    telemetry = data[20:]

    # decode the header
    packetFormat, = rawutil.unpack('<H', header[:2])
    gameMajorVersion, = rawutil.unpack('<B', header[2:3])
    gameMinorVersion, = rawutil.unpack('<B', header[3:4])
    packetVersion, = rawutil.unpack('<B', header[4:5])
    packetId, = rawutil.unpack('<B', header[5:6])
    sessionUID, = rawutil.unpack('<Q', header[6:14])
    sessionTime, = rawutil.unpack('<f', header[14:18])
    frameIdentifier, = rawutil.unpack('<B', header[18:19])
    playerCarIndex, = rawutil.unpack('<B', header[19:20])

    # print all info (just for now)
    ## print('Packet Format : ', packetFormat)
    ## print('Game Major Version : ', gameMajorVersion)
    ## print('Game Minor Version : ', gameMinorVersion)
    ## print('Packet Version : ', packetVersion)
    ## print('Packet ID : ', packetId)
    ## print('Unique Session ID : ', sessionUID)
    ## print('Session Time : ', sessionTime)
    ## print('Frame Number : ', frameIdentifier)
    ## print('Player Car Index : ', playerCarIndex)
    ## print('\n\n')

    # start getting the packet data for each packet, starting with telemetry data
    if packetId == 6:
        speed, = rawutil.unpack('<H', telemetry[2:4])
        throttle, = rawutil.unpack('<f', telemetry[4:8])
        steer, = rawutil.unpack('<f', telemetry[8:12])
        brake, = rawutil.unpack('<f', telemetry[12:16])
        gear, = rawutil.unpack('<b', telemetry[17:18])
        rpm, = rawutil.unpack('<H', telemetry[18:20])
        print(speed)
The UDP specification states that the speed of the car is sent in km/h. However, when I unpack the packet, the speed is a multiple of 256, so 10 km/h shows up as 2560, for example.
I want to know if I'm unpacking the data in the wrong way, or if something else is causing this.
The same problem occurs with the steering, for example: the spec says it should be between -1.0 and 1.0, but the actual values are either very large or very small.
screengrab here: https://imgur.com/a/PHgdNrx
Appreciate any help with this.
Thanks.
I recommend you don't use the unpack method, as with big structures (e.g. MotionPacket has 1343 bytes) your code will immediately get very messy.
However, if you desperately want to use it, call unpack only once, such as:
fmt = "<HBBBBQfBB"
size = struct.calcsize(fmt)
arr = struct.unpack("<HBBBBQfBB", header[:size])
Alternatively, have a look at the ctypes library, especially ctypes.LittleEndianStructure, where you can set the _fields_ attribute to a sequence of ctypes types (such as c_uint8, etc.) without having to translate them to format symbols as with unpack.
https://docs.python.org/3.8/library/ctypes.html#ctypes.LittleEndianStructure
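For instance, a minimal sketch of that header as a ctypes structure (this assumes the PacketHeader layout quoted further down, and that data is the datagram returned by recvfrom in your loop) could look like:
import ctypes

class PacketHeader(ctypes.LittleEndianStructure):
    _pack_ = 1  # the packets are byte-packed, no padding
    _fields_ = [
        ("m_packetFormat",     ctypes.c_uint16),
        ("m_gameMajorVersion", ctypes.c_uint8),
        ("m_gameMinorVersion", ctypes.c_uint8),
        ("m_packetVersion",    ctypes.c_uint8),
        ("m_packetId",         ctypes.c_uint8),
        ("m_sessionUID",       ctypes.c_uint64),
        ("m_sessionTime",      ctypes.c_float),
        ("m_frameIdentifier",  ctypes.c_uint32),
        ("m_playerCarIndex",   ctypes.c_uint8),
    ]

header = PacketHeader.from_buffer_copy(data)   # any buffer at least sizeof(PacketHeader) bytes works
print(header.m_packetId, header.m_frameIdentifier)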
Alternatively alternatively, have a look at namedtuples.
Alternatively alternatively alternatively, there's a bunch of python binary IO libs, such as binio where you can declare a structure of ctypes, as this is a thin wrapper anyway.
To fully answer your question, the structure seems to be:
struct PacketHeader
{
    uint16    m_packetFormat;      // 2019
    uint8     m_gameMajorVersion;  // Game major version - "X.00"
    uint8     m_gameMinorVersion;  // Game minor version - "1.XX"
    uint8     m_packetVersion;     // Version of this packet type, all start from 1
    uint8     m_packetId;          // Identifier for the packet type, see below
    uint64    m_sessionUID;        // Unique identifier for the session
    float     m_sessionTime;       // Session timestamp
    uint      m_frameIdentifier;   // Identifier for the frame the data was retrieved on
    uint8     m_playerCarIndex;    // Index of player's car in the array
};
Meaning that the sequence of symbols for unpack should be <H4BQfLB, because uint (used for m_frameIdentifier) is actually a uint32, where you had a uint8.
I also replaced BBBB with 4B. Hope this helps.
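Put together, a sketch of the single-call header unpack with the corrected format (reusing your variable names; data is the datagram from recvfrom) would be:
import struct

HEADER_FMT = "<H4BQfLB"                    # uint16, 4x uint8, uint64, float, uint32, uint8
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 23 bytes, not 20

(packetFormat, gameMajorVersion, gameMinorVersion, packetVersion,
 packetId, sessionUID, sessionTime, frameIdentifier,
 playerCarIndex) = struct.unpack(HEADER_FMT, data[:HEADER_SIZE])

telemetry = data[HEADER_SIZE:]             # the packet payload now starts 3 bytes later than before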
Haider, I wanted to read the car speed from Formula 1 2019 too. I found your question, picked up some tips from it, and solved my issue, so now I think I should pay it back. The reason you get the speed multiplied by 256 is that you start from the wrong byte, and this data is little-endian. The code you shared starts reading the speed from the 22nd byte; if you start from the 23rd byte instead, you will get the correct speed data.
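In other words, a sketch of that fix (assuming the corrected 23-byte header above, and that the car speed is the first uint16 of the telemetry block):
import struct

speed, = struct.unpack('<H', data[23:25])  # little-endian uint16 starting at byte 23 instead of 22
print(speed)                               # should now be plain km/h rather than km/h * 256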

Stuck in the API XAxiDma_BdRingFromHw: why doesn't the S2MM block descriptor's Completed bit get set?

I am working on a Zynq 7z030 and I am trying to receive data into DDR from the PL side. I am using the AXI DMA SG poll code provided as an example by Xilinx in the SDK (xaxidma_example_sg_poll.c).
After configuring the DMA -> setting up the RX channel -> starting the DMA, I enter the function CheckDmaResult.
Here I call the XAxiDma_BdRingFromHw API:
while ((ProcessedBdCount = XAxiDma_BdRingFromHw(RxRingPtr,
                                                XAXIDMA_ALL_BDS,
                                                &BdPtr)) == 0) {
}
This API calls Xil_DCacheInvalidateRange, which returns, but the block descriptor status always remains 0, so XAxiDma_BdRingFromHw loops forever. The Completed bit is never set.
This happens even though I see the TREADY of S2MM go high and data being received in the ILA (integrated logic analyzer on the FPGA/PL end).
main
    ....
    Status1 = CheckDmaResult(&AxiDma);
    .....

-> static int CheckDmaResult(XAxiDma * AxiDmaInstPtr)
    ....
    while ((ProcessedBdCount =
            XAxiDma_BdRingFromHw(RxRingPtr,
                                 XAXIDMA_ALL_BDS,
                                 &BdPtr)) == 0) {
    }
    ....

-> XAxiDma_BdRingFromHw(XAxiDma_BdRing * RingPtr, int BdLimit,
                        XAxiDma_Bd ** BdSetPtr)
    ....
    while (BdCount < BdLimit) {
        /* Read the status */
        XAXIDMA_CACHE_INVALIDATE(CurBdPtr);
        BdSts = XAxiDma_BdRead(CurBdPtr, XAXIDMA_BD_STS_OFFSET);
        BdCr = XAxiDma_BdRead(CurBdPtr, XAXIDMA_BD_CTRL_LEN_OFFSET);

        /* If the hardware still hasn't processed this BD then we are
         * done
         */
        if (!(BdSts & XAXIDMA_BD_STS_COMPLETE_MASK)) {
            break;
        }
    .....
Could someone please suggest possible reasons or directions I should consider to solve this problem? Any and every suggestion would be a great help.
Thanks in advance!
The problem was with the board (ESD damage).
The DDR started receiving data as soon as the board was changed.
In addition, in the debug configuration settings the following needed to be ticked:
Under Target Setup:
Reset entire system
Program FPGA
Under the Application tab:
Download Application
Stop at 'main'
and the correct corresponding .elf file needed to be specified in the 'Application' field.

Spark stream data from IBM MQ

I want to stream data from IBM MQ. I have tried out this code I found on Github.
I am able to stream data from the queue, but each time it streams, it takes all the data from it. I just want to take the current data that is pushed onto the queue. I looked on many sites but didn't find the correct solution.
In Kafka we had something like KafkaStreamUtils for streaming the near-real-time data. Is there anything similar to that in IBM MQ so that it streams only the latest data?
The sample in the link you provided shows that it calls the following method to receive from IBM MQ:
CustomMQReciever(String host , int port, String qm, String channel, String qn)
If you review CustomMQReciever here, you can see that it is only browsing the messages on the queue. This means the messages will still be on the queue, and the next time you connect you will receive the same messages:
MQQueueBrowser browser = (MQQueueBrowser) qSession.createBrowser(queue);
If you want to remove the messages from the queue, you need to call a method that consumes them instead of just browsing them. Below is an example of changes to CustomMQReciever.java that should accomplish what you want:
In initConnection(), change the above code to the following so that the messages are removed from the queue:
MQMessageConsumer consumer = (MQMessageConsumer) qSession.createConsumer(queue);
Get rid of:
enumeration= browser.getEnumeration();
Under receive() change the following:
while (!isStopped() && enumeration.hasMoreElements())
{
    receivedMessage = (JMSMessage) enumeration.nextElement();
    String userInput = convertStreamToString(receivedMessage);
    //System.out.println("Received data :'" + userInput + "'");
    store(userInput);
}
To something like this:
while (!isStopped() && (receivedMessage = consumer.receiveNoWait()) != null)
{
    String userInput = convertStreamToString(receivedMessage);
    //System.out.println("Received data :'" + userInput + "'");
    store(userInput);
}

Storing Value in Arduino Sent From Python via Serial

I have been trying to send a value from a Python program via serial to an Arduino, but I have been unable to get the Arduino to store the value and echo it back to Python. My code seems to match what I've found in examples online, but for whatever reason, it's not working.
I am using Python 3.5 on Windows 10 with an Arduino Uno. Any help would be appreciated.
Arduino code:
void readFromPython() {
  if (Serial.available() > 0) {
    incomingIntegrationTime = Serial.parseInt();
    // Collect the incoming integer from PySerial
    integration_time = incomingIntegrationTime;
    Serial.print("X");
    Serial.print("The integration time is now ");
    // read the incoming integer from Python:
    // Set the integration time to what you just collected IF it is not a zero
    Serial.println(integration_time);
    Serial.print("\n");
    integration_time = min(incomingIntegrationTime, 2147483648);
    // Ensure the integration time isn't outside the range of integers
    integration_time = max(incomingIntegrationTime, 1);
    // Ensure the integration time isn't outside the range of integers
  }
}

void loop() {
  readFromPython();
  // Check for incoming data from PySerial
  delay(1);
  // Pause the program for 1 millisecond
}
Python code:
(Note: this is used with a PyQt button, but any value could be typed in instead of self.IntegrationTimeInputTextbox.text(); the value is still not received and echoed back by the Arduino.)
def SetIntegrationTime(self):
    def main():
        # global startMarker, endMarker

        # This sets the com port in PySerial to the port with the Genuino as the variable arduino_ports
        arduino_ports = [
            p.device
            for p in serial.tools.list_ports.comports()
            if 'Genuino' in p.description
        ]

        # Set the proper baud rate for your spectrometer
        baud = 115200

        # This prints out the port that was found with the Genuino on it
        ports = list(serial.tools.list_ports.comports())
        for p in ports:
            print('Device is connected to: ', p)

        # --------------------------- Error Handling ---------------------------
        # Tell the user if no Genuino was found
        if not arduino_ports:
            raise IOError("No Arduino found")

        # Tell the user if multiple Genuinos were found
        if len(arduino_ports) > 1:
            warnings.warn('Multiple Arduinos found - using the first')
        # ---------------------------- Error Handling ---------------------------

        # =====================================
        spectrometer = serial.Serial(arduino_ports[0], baud)

        integrationTimeSend = self.IntegrationTimeInputTextbox.text()
        print("test value is", integrationTimeSend.encode())
        spectrometer.write(integrationTimeSend.encode())

        for i in range(10):  # Repeat the following 10 times
            startMarker = "X"
            xDecoded = "qq"
            xEncoded = "qq"
            while xDecoded != startMarker:  # Wait for the start marker (X)
                xEncoded = spectrometer.read()  # Read the spectrometer until 'startMarker' is found so the right amount of data is read every time
                xDecoded = xEncoded.decode("UTF-8")
                print(xDecoded)
            line = spectrometer.readline()
            lineDecoded = line.decode("UTF-8")
            print(lineDecoded)
        # =====================================

        spectrometer.close()
        # ===========================================
        # WaitForArduinoData()

    main()
First, this is a problem:
incomingValue = Serial.read();
Because read() returns only the first byte of incoming serial data (see the reference). On the Arduino, an int is a signed 16-bit integer, so reading only one byte of it with Serial.read() is going to give you unintended results.
Also, don't put writes in between checking if data is available and actual reading:
if (Serial.available() > 0) {
  Serial.print("X");                 // Print your start marker
  Serial.print("The value is now ");
  incomingValue = Serial.read();     // Collect the incoming value from
That is bad. Instead do your read immediately as this example shows:
if (Serial.available() > 0) {
  // read the incoming byte:
  incomingByte = Serial.read();
That's two big issues there. Take care of those and let's take a look at it after those fundamental issues are corrected.
PART 2
Once those are corrected, the next thing to do is determine which side of the serial communication is faulty. Generally what I like to do is confirm that one side is sending properly by having its output show up in a terminal emulator. I like TeraTerm for this.
Set your Python code to send only and see if your sent values show up properly in a terminal emulator. Once that is working and you have confidence in it, you can attend to the Arduino side.
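For example, a minimal send-only sketch (the port name below is hypothetical; use whatever your board enumerates as) could be:
import time
import serial

spectrometer = serial.Serial("COM3", 115200, timeout=1)  # hypothetical port name
time.sleep(2)                    # opening the port resets the Uno; give it a moment to boot
spectrometer.write(b"12345\n")   # Serial.parseInt() on the Arduino stops at the newline
spectrometer.close()
If those bytes show up as expected in the terminal emulator, the Python side is fine and you can focus on the Arduino sketch.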

Multithreading: Storm bolt parallelism

The components for testing are:
A Kafka producer that reads a file from the machine; the file is composed of 1000 lines.
String sCurrentLine;
br = new BufferedReader(new FileReader("D:\\jsonLogTest.txt"));
while ((sCurrentLine = br.readLine()) != null) {
    //System.out.println(sCurrentLine);
    KeyedMessage<String, String> message = new KeyedMessage<String, String>(TOPIC, sCurrentLine);
    producer.send(message);
}
A Storm consumer with three bolts. BoltOne is supposed to receive the stream and divide it into two different streams (stream1 & stream2). BoltTwo and BoltThree should subscribe to these streams.
(In simple words, I am looking to process the tuples from BoltOne in parallel, e.g. BoltTwo processes the first 500 lines and BoltThree the last 500 lines.)
Topology
builder.setSpout("line-reader-spout",kafkaSpout,1);
builder.setBolt("bolt-one", new BoltOne(),1).shuffleGrouping("line-reader-spout");
builder.setBolt("bolt-two", new BoltTwo(),1).shuffleGrouping("bolt-one","stream1");
builder.setBolt("bolt-three", new BoltThree(),1).shuffleGrouping("bolt-one","stream2");
BoltOne
collector.emit("stream1", new Values(input.getString(0)));
collector.emit("stream2", new Values(input.getString(0)));
x++;System.out.println("" + x);
collector.ack(input);
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
// TODO Auto-generated method stub
outputFieldsDeclarer.declareStream("stream1", new Fields("field1"));
outputFieldsDeclarer.declareStream("stream2", new Fields("field2"));
}
BoltTwo & BoltThree
public void execute(Tuple input) {
    String sentence = input.getString(0);
    System.out.println("*********B2*************");
}
Console output
*********B2*************
1
*********B3*************
2
*********B2*************
*********B3*************
3
*********B3*************
*********B2*************
4
*********B3*************
*********B2*************
5
*********B2*************
*********B3*************
6
*********B2*************
*********B3*************
7
*********B3*************
*********B2*************
I am totally confused about splitting streams and parallelism. An example would be helpful.
Update: here is the solution I came up with for now:
public void execute(Tuple input) {
    @SuppressWarnings("unused")
    String sentence = input.getString(0);
    if (x % 2 == 0) {
        collector.emit("stream1", new Values(input.getString(0)));
    } else {
        collector.emit("stream2", new Values(input.getString(0)));
    }
    x++;
    collector.ack(input);
}
I just divided the stream on an even/odd basis, and the processing time is halved: while BoltTwo processes one tuple, another is processed by BoltThree.
I guess you run everything using LocalCluster. As there are multiple threads running, the output via println(...) is not synchronized, and internal buffering can mess up the order of the output... Thus, the output you see is not reliable -- the order is only preserved within a single spout/bolt.
Furthermore, what is the behavior you want to get?
Right now, you have
Spout => Bolt1 =+=> Bolt2
                +=> Bolt3
I.e., the output of Bolt1 is duplicated, and Bolt2 and Bolt3 both receive all output tuples from Bolt1. Thus, Bolt1 counts from 1 to 7, and each output tuple of Bolt1 triggers execute() of both Bolt2 and Bolt3.
As Bolt2 and Bolt3 do the same thing, I guess you want to have two copies of the same Bolt and partition the input to both. For this, you only add a single bolt and set the parallelism to 2:
builder.setSpout("line-reader-spout",kafkaSpout,1);
builder.setBolt("bolt-one", new BoltOne(),1).shuffleGrouping("line-reader-spout");
builder.setBolt("bolt-two", new BoltTwo(),2).shuffleGrouping("bolt-one","stream1");
Furthermore, Bolt1 only needs to declare a single output stream (and not two). If you declare multiple output streams and write to both, you replicate the data...
