Is it possible to implement an RTP-based protocol using Google's FlatBuffers library?

I'm especially interested in an implementation of the RTP-MIDI protocol. The major difficulties, as far as I can expect, would be in implementing the bit-fields and the non-ordinary MIDI-like timestamps. And if somebody knows of already existing open-source C++ implementations, please give me a reference.

I don't think so. As far as I can tell, RTP-MIDI is a very specific encoding. FlatBuffers cannot emulate an existing binary encoding/layout; it has its own encoding, which typically does not match other binary encodings even when the same information is stored.
FlatBuffers can generally be used with any protocol that can transport an opaque payload of bytes. RTP itself (not RTP-MIDI) could potentially carry FlatBuffer data (after the RTP header).

Related

OpenBSD Encode and Decode Base64

According to what I read, OpenBSD seems to have its own system for encoding and decoding using Base64. However, I can't find any literature whatsoever that describes it mathematically. My question is: what is the difference between the usual Base64 encode/decode and the OpenBSD version? How do we calculate it mathematically?
I think you're jumping to (unwarranted) conclusions here: the link you provide is a Java implementation of bcrypt, and indeed, bcrypt uses a modified Base64 definition. See more about the differences on the Security Stack Exchange.
What you should remember is that bcrypt indeed uses a nonstandard Base64 dialect (forcing it to ship its own Base64 implementation), but bcrypt implementations can be expected to be interoperable, as they all need to implement the same nonstandard encoding scheme.
The link to the Hacker Noon article mentioned on SSE is interesting because it digs deeper into the peculiarities of bcrypt, including its use of a nonstandard encoding scheme. Definitely worth a visit.
I am not sure that I answer your full question here, but at least the provided pointers should allow you to dig deeper into the subject.

Determine endianness in NodeJS

I'm writing a small parser, and after trial and error it seems the file byte order is big-endian (which I was told isn't common, but there it is).
I don't think the original devs included anything about endianness, since the byte order may depend only on the hardware that wrote the file. Please correct me if I'm wrong here (is it possible for the developers to specify the endianness in the C code?).
So I don't really see how I would parse those files when there is no actual way to determine the byte order of, say, an Int32 number. I've read this similar post, but that's for a system that both writes and reads the binary files, so you can just use a system-endianness reader.
In my case, the code parses instrument output gathered and binary-written by potentially any type of computer with any OS (but I guess, again, endianness depends on the system architecture and not the OS).
Do you have any ideas/pointers on how to deal with this problem?
Wikipedia was very informative, but as far as I read, it's just general information.

Reading utf-8 files to std::string in C++

Finally! We're starting to require that all our input files be encoded in UTF-8! This is something we've been wanting to do for years. Unfortunately, we suck at it, since none of us has ever tried it; most of us are Windows programmers, or are used to operating systems where UTF-8 is the only real option anyway; neither group knows anything about reading UTF-8 strings in a platform-agnostic way.
So we started to look at how to deal with UTF-8 in a platform-agnostic way and found it pretty confusing (because Windows), and the other questions I've found here on Stack Overflow don't really seem to cover our scenario, or they are confusing. I found a reference to https://www.codeproject.com/Articles/38242/Reading-UTF-with-C-streams which, I find, is a bit confusing and contains a great deal of fluff.
So a few assumptions (that must be true, or we're in a state of GIGO):
All files are in UTF-8 (yay!)
The std::strings must contain UTF-8; no conversion allowed.
The solution must be locale-agnostic and work on macOS (10.13+), Windows (10+), Android, and iOS (10+).
Stream support is not required; we're dealing with local files only (for now), but support for streams is appreciated.
We're trying to avoid std::wstring if we can, and I see no reason to use it anyway. We're also trying to avoid any third-party libraries which do not use UTF-8-encoded std::string; a custom string with functions that overload and convert all std::string arguments to the custom string is acceptable.
Is there any way to do this using just the standard C++ library? Preferably just by imbuing the global locale with a facet that tells the stream library to just dump the content of files into strings (using custom delimiters as usual); no conversion allowed.
This question is only about reading utf-8 files into std::strings and storing the content as utf-8 encoded strings. Dealing with Windows APIs and such is a separate concern.
C++17 is available.
UTF-8 is just a sequence of bytes that follows a specific encoding. If you read a sequence of bytes that is legitimate UTF-8 data into a std::string, then the string contains UTF-8 data.
There's nothing special you have to do to make this happen; it works like any other C or C++ file loading. Just don't mess around with iostream locales and you'll be fine.

ZeroMQ + Protocol Buffers

The ZeroMQ FAQ page suggests using Google's protobuf as a way to serialise message content.
Has anyone seen a good usage example?
I also need an answer to "What is the biggest advantage of serialising messages?", i.e. whether it may be something I can live without, taking advantage of a slimmer pipeline.
I quite like the idea of .proto files and the protoc compiler.
Also, it seems that another great tool to throw into the playground would be libev; any comments are welcome :)
If you are 100% certain that the programs that are going to communicate over ZMQ will at all times be capable of understanding each other's binary format (e.g. because they are always distributed together and were all compiled with the same compiler options anyway), I see no benefit to the overhead added by serialization.
As soon as that condition cannot be satisfied (like partner programs running on different host types, programs written in different languages, or even partner programs that can evolve independently over time, which may cause incompatibilities in their raw binary structures), serialization quite probably becomes a must.
It seems that nowadays everybody and their brother is creating serialization solutions, which may be an indication that there's no one size fits all solution. This page contains a pretty thorough benchmarking of serialization time, deserialization time and sizes for 27 (!!) different serialization systems. Don't skip the first paragraph of that page, it says "Warning, benchmarks can be misleading". Your application, your data are what counts for you, but the data presented there may help you narrow down the choices you want to study in detail.
Here is a sample which sends and receives messages between Java and C++:
Serializing in Java:
Person person = Person.newBuilder().setName("chand")
        .setEmail("chand@test.com").setId(55555).build();
socket.send(person.toByteArray(), 0);
De-serializing in Java:
byte[] reply = socket.recv(0);
Person person2 = Person.parseFrom(reply);
Serializing in C++:
Person p;
std::string str;
p.SerializeToString(&str);
zmq::message_t query(str.size());
memcpy(query.data(), str.data(), str.size());
socket->send(query);
De-serializing in C++:
zmq::message_t resultset;
socket->recv(&resultset);
Person p;
p.ParseFromArray(resultset.data(), resultset.size());
printf("\n Server : %s", p.name().c_str());
I am not sure PUB/SUB in 0MQ will work with a bare protobuf message, because a SUB socket filters on a string topic at the head of the message, while protobuf puts a field descriptor first.
Actually, here is a link with a solution:
http://www.dotkam.com/2011/09/09/zeromq-and-google-protocol-buffers/
cheers
You always need to serialize when communicating: structures are random access, while communication layers like ZeroMQ are serial.
You can use the "default serialization" that comes with your language.
For example, in C++ a structure with no pointers has a certain binary layout that can be directly turned into a byte array. This binary layout is, indirectly, your serialization layer, and it is both language- and compiler-specific.
As long as you limit yourself to structures that have no pointers, and are using the same compiler and language on both ends of the pipe, feel free to avoid a library that adds serialization on top of the default layout.

What are the advantages of strings for serialization?

What are the advantages of strings for serialization?
What's wrong with binary files for serialization?
String data is human-readable, which is wonderful for troubleshooting. Binary data is not.
String data is easily parsed by systems on any platform; binary data is not always, so if you need to pass data between a Windows and a Linux box, or maybe an IBM mainframe, string data is simpler.
String formats are also general enough to include XML, which brings even better features to the table.
It's easier to encrypt and decrypt reliably, particularly if you're going cross-platform.
However, there are disadvantages to using string data as well.
It usually results in larger amounts of raw bits: larger files, more traffic on the network, etc.
It's human-readable, which is not so wonderful if you need to protect it from being used in other systems or from prying eyes. (Although encryption helps with the prying-eyes part.)
Strings are simply more portable, and more forward- and backward-compatible. Binary formats rely on offsets, known sizes, and expected fields; they're difficult to write parsers for, because you basically need to support every known "version" of the binary format. With text, however (especially something flexible like XML), it's easy to find the fields you're looking for, and it's even easier to debug when something goes wrong (human-readable makes everything better).