Protocol Definition Language - protocols

What protocol definition do you recommend?
I evaluated Google's Protocol Buffers, but it does not allow me to control the placement of fields in the packet being built. I assume the same is true for Thrift. My requirements are:
- specify the location of fields in the packet
- allow for bit fields
- conditionals: a flag (bit field) = true means that data can appear at a later location in the packet
- ability to define a packet structure by referring to another packet definition
Thank you.
("Flavor" on SourceForge, used for defining MPEG-4 might be a candidate, but I am looking for something that seems to have more of a community and preferably works in a .NET environment.)

Have a look at ASN.1: http://es.wikipedia.org/wiki/ASN.1
FooProtocol DEFINITIONS ::= BEGIN
    FooQuestion ::= SEQUENCE {
        trackingNumber INTEGER,
        question IA5String
    }
    FooAnswer ::= SEQUENCE {
        questionNumber INTEGER,
        answer BOOLEAN
    }
END
It seems to cover your main requirements:
- bit-level detail
- ordered content
- type references
- not sure about conditionals
It is widely used, and you can find implementations in Java and Python.

I'd be interested in the reasons for your requirements. Why do you need to control the position of the fields? Why are bitfields important? Conditionals?
It sounds like you have a (more or less) fixed wire format that you need to write a parser for, and in that case none of the existing popular protocol/serialization formats (Protobufs, Thrift, JSON, YAML, etc.) will work for you.
A somewhat unorthodox approach is to use Erlang or Haskell, both of which have good support for parsing binary protocols.
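That said, if the format is fixed and simple enough, rolling the parser by hand is manageable in most languages. As a rough illustration (the layout here is invented: a 1-byte version, a 1-byte flags field where bit 0 announces an optional trailing field, and a 4-byte big-endian id), here is a Python sketch using the stdlib struct module:

```python
import struct

def parse_packet(data):
    """Parse an invented wire format: version(1), flags(1), id(4, big-endian),
    plus an optional 2-byte 'extra' field whose presence is signalled by
    bit 0 of the flags byte (the 'conditional field' requirement)."""
    version, flags = data[0], data[1]
    (packet_id,) = struct.unpack_from('>I', data, 2)
    packet = {'version': version, 'id': packet_id}
    if flags & 0x01:  # a bit flag controls whether later data is present
        (packet['extra'],) = struct.unpack_from('>H', data, 6)
    return packet

print(parse_packet(bytes([2, 0x01, 0, 0, 0, 42, 0x12, 0x34])))
# {'version': 2, 'id': 42, 'extra': 4660}
```

The same approach translates directly to C# with BinaryReader or byte-array indexing; the point is that explicit field placement, bit fields, and conditional fields are all easy once you give up on a generic serialization framework.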

How about C# itself?
e.g.
class MySimplePDLData {
    // format: name (or blank if padding), bit length, value (or blank if data),
    // name of presence flag field (or blank if no presence flag), C# type
    // one packet type per string, fields separated by pipes (|)
    string[] pdl = {
        // MY-SIMPLE-PDL-START
        ",8,0xf8,|version,8,,Int32|type,8,,Int32|id1,64,,Int64",
        ...
        // MY-SIMPLE-PDL-END
    };
}
If the data is already in memory, you don't need to do I/O on a file format. From here you can either dynamically interpret packets or generate the necessary C# source code for packet recognition/pack/unpack, again using C# itself.
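To make the "dynamically interpret" option concrete, here is a sketch of an interpreter for that string format (shown in Python for brevity; the field layout follows the comment in the C# snippet above, and the presence-flag and type columns are ignored):

```python
def parse_pdl(pdl_line):
    """Split one pipe-separated packet definition into field descriptors.
    Each field is: name, bit length, fixed value, presence flag, type."""
    fields = []
    for field in pdl_line.split('|'):
        # Pad so short entries still unpack into five columns.
        name, bits, value, flag, ftype = (field.split(',') + [''] * 5)[:5]
        fields.append({
            'name': name or None,    # blank name means padding
            'bits': int(bits),
            'value': value or None,  # blank value means payload data
        })
    return fields

fields = parse_pdl(",8,0xf8,|version,8,,Int32|type,8,,Int32|id1,64,,Int64")
# fields[0] == {'name': None, 'bits': 8, 'value': '0xf8'}
```

From such a descriptor list, a pack/unpack routine (or generated source code) follows mechanically.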


How can I escape fields in an f-string?

JavaScript's version of f-strings (template literals) allows for string escaping through use of a somewhat funny API, e.g.
function escape(str) {
    var div = document.createElement('div');
    div.appendChild(document.createTextNode(str));
    return div.innerHTML;
}
function escapes(template, ...expressions) {
    return template.reduce((accumulator, part, i) => {
        return accumulator + escape(expressions[i - 1]) + part;
    });
}
var name = "Bobby <img src=x onerr=alert(1)></img> Arson";
element.innerHTML = escapes`Hi, ${name}`; // "Hi, Bobby &lt;img src=x onerr=alert(1)&gt;&lt;/img&gt; Arson"
Do Python f-strings allow for a similar mechanism, or do you need to bring your own string.Formatter? Would a more Pythonic implementation wrap results into a class with an overridden __str__() method before interpolation?
When you're dealing with text that is going to be interpreted as code (e.g., text that the browser will parse as HTML or text that a database executes as SQL), you don't want to solve security issues by implementing your own escaping mechanism. You want to use the standard, widely tested tools to prevent them. This gives you much greater safety from attacks for several reasons:
The wide adoption means the tools are well tested and much less likely to contain bugs.
You know they have the best available approach to solving the problem.
They will help you avoid the common mistakes associated with generating the strings yourself.
HTML escaping
The standard tools for HTML escaping are templating engines, such as Jinja. The major advantage is that these are designed to escape text by default, rather than requiring you to remember to explicitly convert unsafe strings. (You do need to be cautious about bypassing or disabling, even temporarily, the escaping, though. I have seen my share of insecure attempts to insecurely construct JSON in templates, but the risk in templates is still lower than a system that requires explicit escaping everywhere.) Your example is pretty easy to implement with Jinja:
import jinja2
template_str = 'Hi, {{name}}'
name = "Bobby <img src=x onerr=alert(1)></img> Arson"
jinjaenv = jinja2.Environment(autoescape=jinja2.select_autoescape(['html', 'xml']))
template = jinjaenv.from_string(template_str)
print(template.render(name=name))
# Hi, Bobby &lt;img src=x onerr=alert(1)&gt;&lt;/img&gt; Arson
If you're generating HTML, though, chances are you're using a web framework such as Flask or Django. These frameworks include a templating engine and will require less set up than the above example.
MarkupSafe is a useful tool if you're trying to create your own template engine (some Python templating engines, such as Jinja, use it internally), and you could potentially integrate it with a Formatter. But there's no reason to reinvent the wheel. Using a popular engine will result in much simpler, easier to follow, more recognizable code.
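For illustration, such a Formatter integration is only a few lines. The sketch below uses the stdlib's html.escape as a stand-in for MarkupSafe, so treat it as a toy rather than production escaping:

```python
import html
import string

class EscapingFormatter(string.Formatter):
    """A str.format-style formatter that HTML-escapes every interpolated field."""
    def format_field(self, value, format_spec):
        # Format the field normally, then escape the result.
        return html.escape(super().format_field(value, format_spec))

fmt = EscapingFormatter()
name = "Bobby <img src=x onerr=alert(1)></img> Arson"
print(fmt.format("Hi, {name}", name=name))
# Hi, Bobby &lt;img src=x onerr=alert(1)&gt;&lt;/img&gt; Arson
```

Note that only the interpolated fields are escaped; the template text itself is trusted, which mirrors how template engines treat their templates.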
SQL injection
SQL injection is not solved through escaping. PHP has a nasty history that everyone has learned from. The lesson is to use parameterized queries instead of trying to escape input. This prevents untrusted user data from ever being parsed as SQL code.
How you do this depends on exactly what libraries you're using for executing your queries, but for an example, doing so with SQLAlchemy's execute method looks like this:
session.execute(text('SELECT * FROM thing WHERE id = :thingid'), {'thingid': id})
Note that SQLAlchemy is not just escaping the text of id to ensure it does not contain attack code. It is actually differentiating between the SQL and the value for the database server. The database will parse the query text as a query, and then it will include the value separately after the query has been parsed. This makes it impossible for the value of id to trigger unintended side effects.
Note also that quoting issues are precluded by parameterized queries:
name = 'blah blah blah'
session.execute(text('SELECT * FROM thing WHERE name = :thingname'), {'thingname': name})
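The same guarantee is available from the standard library's sqlite3 module, which makes for a self-contained demonstration (the table and the hostile value below are invented):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE thing (id INTEGER, name TEXT)')
hostile = "blah'); DROP TABLE thing; --"
conn.execute('INSERT INTO thing VALUES (?, ?)', (1, hostile))

# The ? placeholder keeps the hostile value out of the parsed SQL entirely;
# the value is handed to the database after the statement has been parsed.
rows = conn.execute('SELECT id FROM thing WHERE name = ?', (hostile,)).fetchall()
print(rows)  # [(1,)]
```

The quotes, parenthesis, and comment marker in the value never reach the SQL parser, so no quoting or escaping is needed at all.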
If you can't parameterize, whitelist in memory
Sometimes, it's not possible to parameterize something. Maybe you're trying to dynamically select a table name based on the input. In these cases, one thing you can do is have a collection of known valid and safe values. By validating that the input is one of these values and retrieving a known safe representation of it, you avoid sending user input into your query:
# This could also be loaded dynamically if needed.
valid_tables = {
    # Keys are uppercased for look-up
    'TABLE1': 'table1',
    'TABLE2': 'Table2',
    'TABLE3': 'TaBlE3',
    ...
}
def get_table_name(table_num):
    table_name = 'TABLE' + str(table_num)
    try:
        return valid_tables[table_name]
    except KeyError:
        raise ValueError('Unknown table number: ' + str(table_num))
def query_for_thing(session, table_num):
    return session.execute(text('SELECT * FROM "{}"'.format(get_table_name(table_num))))
The point is you never want to allow user input to go into your query as something other than a parameter.
Make sure that this whitelisting occurs in application memory. Do not perform the whitelisting in SQL itself. Whitelisting in the SQL is too late; by that time, the input has already been parsed as SQL, which would allow the attacks to be invoked before the whitelisting could take effect.
Make sure you understand your library
In the comments, you mentioned PySpark. Are you sure you're doing this right? If you create a data frame using just a simple SELECT * FROM thing and then use PySpark filtering functions, are you sure it doesn't properly push those filters down to the query, precluding the need to format values into it unparameterized?
Make sure you understand how data is normally filtered and manipulated with your library, and check if that mechanism will use parameterized queries or otherwise be efficient enough under the hood.
With small data, just filter in memory
If your data isn't at least in the tens of thousands of records, then consider just loading it into memory and then filtering:
filter_name = 'blah blah blah'
results = session.execute(text('SELECT * FROM thing'))
filtered_results = [r for r in results if r.name == filter_name]
If this is fast enough and parameterizing the query is hard, then this approach avoids all the security headaches of trying to make the input safe. Test its performance with somewhat more data than you expect to see in prod. I would use at least double the maximum you expect; an order of magnitude would be even safer if you can make it perform.
If you're stuck without parameterized query support, the last resort is very strict limits on inputs
If you're stuck with a client that doesn't support parameterized queries, first check if you can use a better client. SQL without parameterized queries is absurd, and it's an indication that the client you're using is very low quality and probably not well maintained; it may not even be widely used.
Doing the following is NOT recommended. I include it only as an absolute last resort. Don't do this if you have any other choice, and spend as much time as you can (even a couple of weeks of research, I dare say) trying to avoid resorting to this. It requires a very high level of diligence on the part of every team member involved, and most developers do not have that level of diligence.
If none of the above is a possibility, then the following approach may be all you can do:
Do not query on text strings coming from the user. There is no way to make this safe. No amount of quoting, escaping, or restricting is guaranteed. I don't know all the details, but I've read of the existence of Unicode abuses that can allow bypassing character restrictions and the like. It's just not worth it to try. The only text strings allowed should be whitelisted in application memory (as opposed to whitelisted via some SQL or database function). Note that even leveraging database level quoting functions (like PostgreSQL's quote_literal) or stored procedures can't help you here because the text has to be parsed as SQL to even reach those functions, which would allow the attacks to be invoked before the whitelisting could take effect.
For all other data types, parse them first and then have the language render them into an appropriate string. Doing so again means avoiding having user input parsed as SQL. This requires you to know the data type of the input, but that's reasonable since you'll need to know that to construct the query. In particular, the available operations with a particular column will be determined by that column's data types, and the operation and column type will determine what data types are valid for the input.
Here's an example for a date:
from datetime import datetime

def fetch_data(start_date, end_date):
    # Check data types to prevent injections
    if not isinstance(start_date, datetime):
        raise ValueError('start_date must be a datetime')
    if not isinstance(end_date, datetime):
        raise ValueError('end_date must be a datetime')
    # WARNING: Using format with SQL queries is bad practice, but we don't
    # have a choice because [client lib] doesn't support parameterized queries.
    # To mitigate this risk, we do not allow arbitrary strings as input.
    # We tightly control the input's data type (to something other than text
    # or binary) and the format used in the query.
    session.execute(text(
        "SELECT * FROM thing WHERE timestamp BETWEEN CAST('{start}' AS TIMESTAMP) AND CAST('{end}' AS TIMESTAMP)"
        .format(
            # Make the format used explicit
            start=start_date.strftime('%Y-%m-%dT%H:%MZ'),
            end=end_date.strftime('%Y-%m-%dT%H:%MZ')
        )
    ))
user_input_start_date = '2019-05-01T00:00'
user_input_end_date = '2019-06-01T00:00'
parsed_start_date = datetime.strptime(user_input_start_date, "%Y-%m-%dT%H:%M")
parsed_end_date = datetime.strptime(user_input_end_date, "%Y-%m-%dT%H:%M")
data = fetch_data(parsed_start_date, parsed_end_date)
There are several details that you need to be aware of.
Notice that in the same function as the query, we're validating the data type. This is one of the rare exceptions in Python where you don't want to trust duck typing. This is a safety feature that ensures insecure data won't be passed into your function accidentally.
The format of the input when it's rendered into the SQL string is explicit. Again, this is about control and whitelisting. Don't leave it to any other library to decide what format the input will be rendered to; make sure you know exactly what the format is so that you can be certain that injections are impossible. I'm fairly certain that there's no injection possibility with the ISO 8601 date/time format, but I haven't confirmed that explicitly. You should confirm that.
The quoting of the values is manual. That's okay. And the reason it's okay is because you know what data types you're dealing with and you know exactly what the string will look like after it's formatted. This is by design: you're maintaining very strict, very tight control over the input's format to prevent injections. You know whether quotes need to be added or not based on that format.
Don't skip the comment about how bad this practice is. You have no idea who will read this code later and what knowledge or abilities they have. Competent developers who understand the security risks here will appreciate the warning; developers who weren't aware will be warned to use parameterized queries whenever available and to avoid carelessly including new conditions. If at all feasible, require that changes to these areas of code be reviewed by additional developers to further mitigate the risks.
This function should have full control over generating the query. It should not delegate its construction out to other functions. This is because the data type checking needs to be kept very, very close to the construction of the query to avoid mistakes.
The effect of this is a sort of looser whitelisting technique. You can't whitelist specific values, but you can whitelist the kinds of values you're working with and control the format they're delivered in. Forcing callers to parse the values into a known data type reduces the possibility of an attack getting through.
I'll also note that calling code is free to accept the user input in whatever format is convenient and to parse it using whatever tools you wish. That's one of the advantages of requiring a dedicated data type instead of strings for input: you don't lock callers into a particular string format, just the data type. For date/times in particular, you might consider some third party libraries.
Here's another example with a Decimal value instead:
from decimal import Decimal

def fetch_data(min_value, max_value):
    # Check data types to prevent injections
    if not isinstance(min_value, Decimal):
        raise ValueError('min_value must be a Decimal')
    if not isinstance(max_value, Decimal):
        raise ValueError('max_value must be a Decimal')
    # WARNING: Using format with SQL queries is bad practice, but we don't
    # have a choice because [client lib] doesn't support parameterized queries.
    # To mitigate this risk, we do not allow arbitrary strings as input.
    # We tightly control the input's data type (to something other than text
    # or binary) and the format used in the query.
    session.execute(text(
        "SELECT * FROM thing WHERE thing_value BETWEEN CAST('{minv}' AS NUMERIC(26, 16)) AND CAST('{maxv}' AS NUMERIC(26, 16))"
        .format(
            # Make the format used explicit
            # Up to 16 decimal places. Maybe validate that at start of function?
            minv='{:.16f}'.format(min_value),
            maxv='{:.16f}'.format(max_value)
        )
    ))
user_input_min = '78.887'
user_input_max = '89789.78878989'
parsed_min = Decimal(user_input_min)
parsed_max = Decimal(user_input_max)
data = fetch_data(parsed_min, parsed_max)
Everything is basically the same. Just a slightly different data type and format. You're free to use whatever data types your database supports, of course. For example, if your DB does not require specifying a scale and precision on the numeric type or would auto-cast a string or can handle the value unquoted, you can structure your query accordingly.
You do not need to bring your own formatter if you're using Python 3.6 or newer. Python 3.6 introduced formatted string literals; see PEP 498: Formatted string literals.
Your example in Python 3.6 or newer would look like this:
name = "Bobby <img src=x onerr=alert(1)></img> Arson"
print(f"Hi, {name}") # Hi, Bobby <img src=x onerr=alert(1)></img> Arson
The format specification that can be used with str.format() can also be used with formatted string literals.
This example,
my_dict = {'A': 21.3, 'B': 242.12, 'C': 3200.53}
for key, value in my_dict.items():
    print(f"{key}{value:.>15.2f}")
will print the following:
A..........21.30
B.........242.12
C........3200.53
Additionally, since the string is evaluated at runtime, any valid python expression can be used, for example,
name = "Abby"
print(f"Hello, {name.upper()}!")
will print
Hello, ABBY!

why does this string not contain the correct characters?

Written in Delphi XE3, my software is communicating with an instrument that occasionally sends binary data. I had expected I should use AnsiString since this data will never be Unicode. I couldn't believe that the following code doesn't work as I had expected. I'm supposing that the characters I'm exposing it to are considered illegitimate...
var
  s: AnsiString;
begin
  s := 'test' + chr(128);
  // had expected that since the string was set above to end in #128,
  // it should end in #128...it does not.
  if ord(s[5]) <> 128 then
    ShowMessage('String ending is not as expected!');
end;
Naturally, I could use a pointer to accomplish this, but I would think I should probably be using a different kind of string. Of course, I could use a byte array, but a string would be more convenient.
Really, I'd like to know "why" and have some good alternatives.
Thanks!
The behaviour you observe stems from the fact that Chr(128) is a UTF-16 WideChar representing U+0080.
When translated to your ANSI locale this does not map to ordinal 128. I would expect U+0080 to have no equivalent in your ANSI locale and therefore map to ? to indicate a failed translation.
Indeed the compiler will even warn you that this can happen. Your code, when compiled with default compiler options, yields these warnings:
W1058 Implicit string cast with potential data loss from 'string' to 'AnsiString'
W1062 Narrowing given wide string constant lost information
Personally, I would configure the warnings to treat both of those as errors.
The fundamental issue is revealed here:
My software is communicating with an instrument that occasionally sends binary data.
The correct data type for byte oriented binary data is an array of byte. In Delphi that would be TBytes.
It is wrong to use AnsiString since that exposes you to codepage translations. You want to be able to specify ordinal values and you categorically do not want text encodings to play a part. You do not want for your program's behaviour to be determined by the prevailing ANSI locale.
Strings are for text. For binary use byte arrays.

Public fixed-length Strings

I am just summarizing info about implementing a digital tree (Trie) in VBA. I am not asking how to do that so please do not post your solutions - my specific question regarding fixed-length Strings in class modules comes at the end of this post.
A Trie is all about efficiency and performance, therefore most other programming languages use a Char data type to represent members of TrieNodes. Since VBA does not have a Char datatype, I was thinking about faking it and using a fixed-length String with 1 character.
Note: I can come up with a work-around to this, i.e. use Byte and a simple function to convert between Chr() and Asc(), or an Enum, or declare a private str As String * 1 and take advantage of Get/Let properties, but that's not the point. Stay tuned though because...
According to Public Statement on Microsoft Help Page you can't declare a fixed-length String variable in class modules.
I can't find any reasonable explanation for this constraint.
Can anyone give some insight why such a restriction applies to fixed-length Strings in class modules in VBA?
The VBA/VB6 runtime is heavily reliant on the COM system (oleaut32 et al) and this enforces some rules.
You can export a class file between VB "stuff", but if you publish (or could theoretically publish) it as a COM object, it must be able to describe a "fixed-length string" in its interface description/type library so that, say, a C++ client can consume it.
A fixed-length string is "special" because it has active behaviour, i.e. it's not a dumb datatype; it behaves somewhat like a class. For example, it's always padded: if you assign to it, it will have trailing spaces, and in VBA the compiler adds generated code to get that behaviour. A C++ consumer would be unaware of the fixed-length nature of the string because the interface can't describe it/does not support a corresponding type (a String is a BSTR), which could lead to problems.
Strings are of type BSTR, and like a byte array, you would still lose the padding semantics if you used one of those instead.

Given ShortString is deprecated, what is the best method for limiting string length

It's not uncommon to have a record like this:
TAddress = record
  Address: string[50];
  City   : string[20];
  State  : string[2];
  ZIP    : string[5];
end;
Where it was nice to have hard-coded string sizes to ensure that the size of the string wouldn't exceed the database field size allotted for the data.
Given, however, that the ShortString type has been deprecated, what are Delphi developers doing to "solve" this problem? Declaring the record fields as string gets the job done, but doesn't protect the data from exceeding the proper length.
What is the best solution here?
If I had to keep data from exceeding the right length, I'd let the database code handle it as much as possible. Put size limits on the fields, and display data to the user in data-bound controls. A TDBEdit bound to a string field will enforce the length limit correctly. Set it up so the record gets populated directly from the dataset, and it will always have the right length.
Then all you need to worry about is data coming into the record from some outside source that is not part of your UI. For that, use the same process. Have the import code insert the data into the dataset, and let its length constraints do the validation for you. If it raises an exception, reject the import. If not, then you've got a valid dataset row that you can use to populate a record from.
The short string types in your question don't really protect the strings from exceeding the proper length. When you assign a longer value to these short strings, the value is silently truncated.
I'm not sure what database access method you are using, but I rather imagine that it will do the same thing, namely truncate any over-length strings to the maximum length. In which case there is nothing to do.
If your database access method throws an error when you give it an over-long string, then you would need to truncate before passing the value to the database.
If you have to truncate explicitly, then there are lots of places where you might choose to do so. My philosophy would be to truncate at the last possible moment. That's the point at which you are subject to the limit. Truncating anywhere else seems wrong. It means that a database limitation is spreading to parts of the code that are not obviously related to the database.
Of course, all this is based on the assumption that you want to carry on silently truncating. If you want to provide user feedback in the event of truncation, then you will need to decide just where the right points are to action that feedback.
From my understanding, my answer should be "do not mix layers".
I suspect that the string length is specified at the database layer level (a column width), or at the business application layer (e.g. to validate a card number).
From the "pure Delphi code" point of view, you should not know that your string variable has a maximum length, unless you reach the persistence layer or even the business layer.
Using attributes could be an idea. But it may "pollute" the source code for the very same reason that it is mixing layers.
So what I recommend is to use a dedicated Data Modeling, in which you specify your data expectations. Then, at the Delphi var level, you just define a plain string. This is exactly how our mORMot framework implements data filtering and validation: at Model level, with some dedicated classes - convenient, extendable and clean.
If you're just porting from Delphi 7 to XE3, leave it be. Also, although ShortString may be deprecated, I'll eat my hat if they ever remove it completely, because there are a lot of bits of code that will never be able to be rebuilt without it. ShortString + records is still the only practical way to specify byte-oriented file-of-record data storage. Delphi will NEVER remove ShortString nor change its behaviour; it would be devastating to existing Delphi code. So if you really must define records and limit their length, and you really don't want those records to support Unicode, then there is zero reason to stop using or writing ShortString code. That being said, I detest short strings and file-of-record, wish they would go away, and am glad they are marked deprecated.
That being said, I agree with mason and David entirely. I would say length checking and validation are presentation/validation concerns, and Delphi's strong typing is NOT the right place or the right way to deal with them. If you need to put validation constraints on your classes, write helper classes that implement constraint storage (EmployeeName is a string field, and EmployeeName has the following length limit). In edit controls, for example, this is already a property. It seems to me that mapping DB fields to visual fields using the new binding system would be much preferable to trying to express constraints statically in the code.
User input validation and storage are different and length limits should be set in your GUI controls not in your data structures.
You could, for example, use an array of UnicodeChar if you wanted a Unicode-wide but length-limited string. You could even write your own LimitedString class using the new class helper methods in Delphi. But such approaches are not a maintainable and stable design.
If your SQL database has a field declared with VARCHAR(100) type, and you want to limit your user's input to 100 characters, you should do so at the GUI layer and forget about imposing truncation (data corruption, in fact) silently behind the scenes.
I had this problem - severely - when upgrading from Delphi 6 to 2009. For what one program was/is doing, it was imperative to be able to treat the old ASCII strings as individual ASCII characters.
The program outputs ASCII files (not even ANSI) and has concepts such as an over-punch on the last numeric digit to indicate a negative value. So the file format goes back a bit, one could say!
After the first build in 2009 (10-year-old code, well, you do, don't you!), after sorting unit names etc., there were literally hundreds of reported errors/illegal assignments and data loss/conversion warnings...
No matter how good Delphi's back-room manipulation/magic with strings and chars, I did not trust it enough. In the end, to make sure everything was back as it was, I re-declared them all as arrays of byte and then changed the code accordingly.
You haven't specified the Delphi version; here's what works for me in Delphi 2010:
Version1:
TTestRecordProp = record
private
  FField20: string;
  ...
  FFieldN: string;
  procedure SetField20(const Value: string);
public
  property Field20: string read FField20 write SetField20;
  ...
  property FieldN: string ...
end;
...
procedure TTestRecordProp.SetField20(const Value: string);
begin
  if Length(Value) > 20 then
    // maybe raise an exception?
    FField20 := Copy(Value, 1, 20)
  else
    FField20 := Value;
end;
Version2:
TTestRecordEnsureLengths = record
  Field20: string;
  procedure EnsureLengths;
end;
...
procedure TTestRecordEnsureLengths.EnsureLengths;
begin
  // for each string field, test its length and truncate or raise an exception
  if Length(Field20) > 20 then
    Field20 := Copy(Field20, 1, 20); // or raise exception...
end;
// You have to call .EnsureLengths before pushing data to the DB...
Personally, I'd recommend replacing records with objects, then you can do more tricks.

Type-Length-Value vs. defined/structured Length-Value

There's no doubt that a length-value representation of data is useful, but what advantages are there to type-length-value over it?
Of course, using LV requires the representation to be predefined or structured, but that's hardly ever a problem. Actually, I can't think of a good enough case where it wouldn't be defined enough that TLV would be required.
In my case, this is about data interchange/protocols. In every situation, the representation must be known to both parties to be processed, which eliminates the need for the type to be explicitly inserted in the data. Any thoughts on when the type would be useful or necessary?
Edit
I should mention that a generic parser/processor would certainly benefit from the type information, but that's not my case.
The only decent reason I could come up with is for a generic processor of the data, mainly for debugging or direct user presentation. Having the type embedded within the data would allow the processor to handle the data correctly without having to predefine all possible structures.
The point below is mentioned on Wikipedia:
New message elements which are received at an older node can be safely skipped and the rest of the message can be parsed. This is similar to the way that unknown XML tags can be safely skipped;
Example:
Imagine a message to make a telephone call. In a first version of a system this might use two message elements, a "command" and a "phoneNumberToCall":
command_c/4/makeCall_c/phoneNumberToCall_c/8/"722-4246"
Here command_c, makeCall_c and phoneNumberToCall_c are integer constants and 4 and 8 are the lengths of the "value" fields, respectively.
Later (in version 2) a new field containing the calling number could be added:
command_c/4/makeCall_c/callingNumber_c/8/"715-9719"/phoneNumberToCall_c/8/"722-4246"
A version 1 system which received a message from a version 2 system would first read the command_c element and then read an element of type callingNumber_c. The version 1 system does not understand callingNumber_c, so the length field is read (i.e. the 8) and the system skips forward 8 bytes to read phoneNumberToCall_c, which it understands, and message parsing carries on.
Without the type field, the version 1 parser would not know to skip callingNumber_c; it would instead call the wrong number and maybe throw an error on the rest of the message. So the type field allows for forward compatibility in a way that omitting it does not.
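The skip-unknown behaviour takes only a few lines of code. Here is a Python sketch assuming a 1-byte type and 1-byte length (the type constants and numbers are invented for the example):

```python
def parse_tlv(data, handlers):
    """Parse type/length/value records: 1-byte type, 1-byte length, value bytes.
    Unknown types are skipped, which is what gives forward compatibility."""
    out, i = {}, 0
    while i < len(data):
        t, length = data[i], data[i + 1]
        value = data[i + 2:i + 2 + length]
        if t in handlers:
            out[handlers[t]] = value
        i += 2 + length  # advance past this record whether or not we knew it
    return out

# A version 1 parser only knows 'command' and 'phoneNumberToCall'.
V1_HANDLERS = {1: 'command', 3: 'phoneNumberToCall'}
msg = (bytes([1, 1, 7])                # command = makeCall (7, invented)
       + bytes([2, 8]) + b'715-9719'   # callingNumber: unknown to version 1
       + bytes([3, 8]) + b'722-4246')  # phoneNumberToCall
print(parse_tlv(msg, V1_HANDLERS))
# {'command': b'\x07', 'phoneNumberToCall': b'722-4246'}
```

With a plain length-value format, the same parser would have no way to tell that the second record was one it should ignore.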
