PE - DOS signature (MZ) and Little Endian

This is not a serious problem, but I am curious why the DOS signature is 0x5A4D and not 0x4D5A.
The DOS header's first member is the MZ signature, named after Mark Zbikowski.
The code below is part of the DOS header:
typedef struct _IMAGE_DOS_HEADER {
    WORD e_magic;   // DOS signature : 4D5A ("MZ")
    // ... remaining members omitted ...
} IMAGE_DOS_HEADER, *PIMAGE_DOS_HEADER;
If Mark Zbikowski wanted to write his name at the start of the PE file, why is the signature defined as 0x5A4D rather than 0x4D5A? Following the principle of little endian, that is "ZM", not "MZ".
Do I have the wrong concept?
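For reference, here is a tiny C sketch (independent of the Windows headers) showing how those two bytes on disk read back as the little-endian WORD 0x5A4D:
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* The first two bytes of a DOS/PE file on disk are 'M' (0x4D) then 'Z' (0x5A). */
    unsigned char file_start[2] = { 'M', 'Z' };

    /* Reading them as a little-endian 16-bit WORD, the byte that comes first
       in the file becomes the low byte of the value. */
    uint16_t e_magic;
    memcpy(&e_magic, file_start, sizeof e_magic);

    /* Prints 0x5A4D on a little-endian machine (such as x86). */
    printf("e_magic = 0x%04X\n", (unsigned)e_magic);
    return 0;
}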

Related

Reading .dbf file with Rust throws invalid character error

I am new to Rust and am creating a POC to convert a DBF file to CSV. I am reading a .dbf file using the Rust library dbase.
The issue is: when I create a sample .dbf file using dbfview, the code works fine. But when I use the .dbf file which I will be using in real time, I get the following error:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: InvalidFieldType('M')', src/libcore/result.rs:999:5
Here is the code I am using, from the given link:
use dbase::FieldValue;

fn main() {
    let records = dbase::read("tests/data/line.dbf").unwrap();
    for record in records {
        for (name, value) in record {
            println!("{} -> {:?}", name, value);
            match value {
                FieldValue::Character(string) => println!("Got string: {}", string),
                FieldValue::Numeric(value) => println!("Got numeric value of {}", value),
                _ => {}
            }
        }
    }
}
I think the ^M is the character appended by Windows.
What can I do to handle this error and read the file successfully?
Any help will be much appreciated.
The short answer to your question is no, you will not be able to read this file with dbase-rs (or any current library) and you'll most likely have to rework this file to not contain a memo field.
A deep dive into the DBF file format
The InvalidFieldType error points at a structural feature of the file that your library cannot handle - a Memo field. We're going to deep-dive into the file to figure out why that is, and whether there is anything we can do to fix it.
This is the header definition:
Of particular importance is byte 28 (0x1C; in the hex dump, row 0000010, column 0C), which is a bitmask indicating whether the table contains a bunch of possible things, most notably:
0x01 if the file comes with an associated .cdx file
0x02 if it contains a memo
0x04 if the file is actually a .dbc file (a database)
With a value of 0x03, your file both comes with an associated .cdx file and contains a memo. As we know (ahead of time) that dbase-rs does not handle memos, that is looking increasingly likely to be the problem.
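If you want to verify this on your own file, here is a quick sketch using only the Rust standard library (the path is taken from your snippet; byte 28 is the table-flags byte described above):
use std::fs::File;
use std::io::Read;

fn main() -> std::io::Result<()> {
    // Read the fixed 32-byte DBF header and inspect the table-flags byte (byte 28).
    let mut header = [0u8; 32];
    File::open("tests/data/line.dbf")?.read_exact(&mut header)?;

    let flags = header[28];
    println!("has .cdx index : {}", (flags & 0x01) != 0);
    println!("has memo field : {}", (flags & 0x02) != 0);
    println!("is a .dbc file : {}", (flags & 0x04) != 0);
    Ok(())
}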
Let's keep looking. From here on, each field descriptor is 32 bytes long.
Here are your fields:
Bytes 0-10 contain the field name, and byte 11 is the type. Since the library you want to use can only parse certain field types, we only really care about byte 11.
In order of appearance, marked by whether the library can parse them:
[x] CALL_ID (integer)
[x] CONTACT_ID (integer)
[x] CALL_DATE (DateTime)
[x] SUBJECT (char[])
[ ] NOTES (memo)
The last field is the problematic one. Looking into the library itself, this field type is not supported and will therefore yield an Error, which you are trying to unwrap(). This is the source of your error.
There are three ways around it:
The "long" way is to patch the library to handle memo fields. This sounds easy, but in practice it really isn't. As the memos are stored in another file (typically a dbt file in the same folder), you're going to have to make that library read both files and reference both. The point of the memo type itself is to store more than 255 bytes of data in a field. You are the only one able to evaluate whether this work is worth the effort.
If your data is less than 255 bytes in size, you can replace that memo field with a char field, and dbfview should allow you to do this.
If your field is longer than 255 bytes and you are able to run sub-processes (i.e. std::process::Command), you can sneak-convert it using a library in another language that can process memo fields; this nodeJS library can, for example, but read-only (a sketch of that follows below).
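A minimal sketch of that last route (the node script and its arguments are purely hypothetical; the point is just spawning the converter and checking its exit status):
use std::process::Command;

fn main() {
    // Hypothetical external converter, e.g. a Node.js script that can read memo fields.
    let status = Command::new("node")
        .args(["convert_dbf.js", "input.dbf", "output.csv"])
        .status()
        .expect("failed to launch the converter");

    if !status.success() {
        eprintln!("conversion failed with {}", status);
    }
}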

ANTLR visitor on a grammar with compiler directives

I am trying to handle compiler directives in a Verilog parser so that I can recover the true file name/path and the true current line in the non-preprocessed file.
Verilog needs a preprocessing pass, which I have, but during the visit I need to know the current file name (which can change because of the `include directive) and thus the true current line in the non-preprocessed file.
The preprocessing pass adds the Verilog `line directive, which indicates the current file and line.
Then I send the preprocessed buffer to the ANTLR lexer, parse it, and extract all the Verilog information with a visitor. I keep the Verilog `line compiler directive in the Verilog grammar description:
Preprocessing_line
    : '`line ' Decimal_number String Decimal_number '\n' -> channel(2)
    ;
Now, how can I get at this dedicated channel's information at any point in the visitor? The target language for this parser is Python3.
Given that the Preprocessing_line tokens may not have a reliable relation to the parse-tree tokens (different Verilog compilers can be a bit loose about where they inject the reference lines), the easiest solution is to create a temporary index prior to the visitor walk.
That is, after parsing the pre-processed Verilog source, do a quick pass over the entire token stream (BufferedTokenStream#getTokens), picking out the Preprocessing_line tokens, and building a current_line -> original_line index.
Then, in any visited context, examine the underlying token(s) (ParserRuleContext#getStart, #getStop, #getSourceInterval) to find their current_line (Token#getLine), as in the sketch below.
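A rough Python3 sketch of that index (the generated module/class name VerilogLexer is an assumption; ANTLR generates a Preprocessing_line token-type constant from your lexer rule, and the `line directive is pulled apart with a simple regex):
import re
from VerilogLexer import VerilogLexer   # assumed name of the generated lexer module

def build_line_index(token_stream):
    """Map line-in-preprocessed-buffer -> (original file, original line)."""
    token_stream.fill()                  # make sure every token is buffered
    index = {}
    for tok in token_stream.tokens:
        if tok.type == VerilogLexer.Preprocessing_line:
            # token text looks like: `line 42 "top.v" 0
            m = re.match(r'`line\s+(\d+)\s+"([^"]*)"', tok.text)
            if m:
                index[tok.line] = (m.group(2), int(m.group(1)))
    return index
In a visitor method, ctx.start.line gives the line in the preprocessed buffer; look up the closest directive line at or before it in this index and offset from there to recover the original file and line.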

VC6 \r\n and Write works; Visual Studio 2013 does not work

The following code
if (!cfile.Open(fileName, CFile::modeCreate | CFile::modeReadWrite)) {
    return;
}
ggg.Format(_T("0 \r\n"));
cfile.Write(ggg, ggg.GetLength());
ggg.Format(_T("SECTION \r\n"));
cfile.Write(ggg, ggg.GetLength());
produces the following:
0 SECTI
Clearly this is wrong: (a) the \r\n is ignored, and (b) the word SECTION is cut off.
Can someone please tell me what I am doing wrong?
The same code without _T() in VC6 produces the correct results.
Thank you
a.
Apparently, you are building a Unicode build; CString (presumably that's what ggg is) holds a sequence of wchar_t characters, each two bytes large. ggg.GetLength() is the length of the string in characters.
However, CFile::Write takes the length in bytes, not in characters. You are passing half the number of bytes actually taken by the string, so only half the number of characters gets written.
Have you considered changing lines like:
cfile.Write(ggg, ggg.GetLength());
to:
cfile.Write(ggg, ggg.GetLength() * sizeof(TCHAR));
Write needs the number of bytes (not characters). Since a Unicode (UTF-16) character is 2 bytes wide, you need to account for that. sizeof(TCHAR) is the number of bytes each character takes for a given build: 1 for an ANSI build, 2 for a Unicode build. Multiply that by the string length and the number of bytes will be correct.
Information on TCHAR can be found on MSDN documentation here. In particular it is defined as:
The _TCHAR data type is defined conditionally in Tchar.h. If the symbol _UNICODE is defined for your build, _TCHAR is defined as wchar_t; otherwise, for single-byte and MBCS builds, it is defined as char. (wchar_t, the basic Unicode wide-character data type, is the 16-bit counterpart to an 8-bit signed char.)
TCHAR and _TCHAR in your usage should be synonymous. However I believe these days Microsoft recommends including <tchar.h> and using _TCHAR. What I can't tell you is if _TCHAR existed on VC 6.0.
If you use the method above and build for Unicode, your output files will be in Unicode; if you build for ANSI, the output will be 8-bit ASCII.
Want CFile::Write to output ASCII no matter what? Read on...
If you want all text written to the file as 8-bit ASCII, you are going to have to use one of the conversion macros, in particular CT2A. More on the macros can be found in this MSDN article. Each macro's name can be decoded: CT2A means convert the generic character string (equivalent to W when _UNICODE is defined, equivalent to A otherwise) to ASCII, per the chart at the link. So whether the build is Unicode or ANSI, it will output ASCII. Your code would look something like:
ggg.Format(_T("0 \r\n"));
cfile.Write(CT2A(ggg), ggg.GetLength());
ggg.Format(_T("SECTION \r\n"));
cfile.Write(CT2A(ggg), ggg.GetLength());
Since the macro converts everything to ASCII, CString's GetLength() will suffice.

Newest Delphi compiler versions and String type compatibility

I'm trying to make some string-processing routines compatible with the newest Delphi versions. I'm using Delphi 2005 and 2007, but I'm not totally sure about the compatibility.
Here are a few samples; are they compatible with both the old and the new string type?
(I'll use an imaginary UNICODE_STRING directive.)
A type definition:
{$IFNDEF UNICODE_STRING}
TextBuffer = Array[0..13] Of Char;
{$ELSE}
TextBuffer = Array[0..13] Of WideChar;
{$ENDIF}
Useless or not? Does the Char type now become what WideChar was before the Unicode string, or is there still a difference?
A function:
Function RemoveBlanks(Text: String): String;
Var
  i: Integer;
Begin
  Result := '';
  For i := 0 To Length(Text) Do
  Begin
    {$IFNDEF UNICODE_STRING}
    If Byte(Text[i]) < 21 Then Continue;
    {$ELSE}
    If Word(Text[i]) < 21 Then Continue;
    {$ENDIF}
    If Text[i] = ' ' Then Continue;
    Result := Result + Text[i];
  End;
End;
Is the Word() cast OK?
Here there is also the ' ' problem. How is the space handled in the Unicode version? Should I also use the directive to differentiate ' ' and ' ', or will the ' ' automatically be handled as a 2-byte blank?
A line break:
NewLineBegin := CanReadText( aPTextBuffer, #13#10 );
How is the second argument (#13#10) interpreted in the Unicode version? Is it compatible? Will it be translated to the byte block 00130010? If not, should the directive be used instead with the constant #0013#0010?
The first thing to do is read Marco Cantù's paper on Unicode: http://edn.embarcadero.com/article/38980
Question 1
Just use Char all the time with no conditional code and it will work in old and new.
Char is a special type that is an 8 bit type in old versions of Delphi and a 16 bit type in new Unicode versions.
Question 2
Char is an ordinal type so you can write if s[i]<#21.
You also need to start loops at 1 for strings since they use 1-based indexing.
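For illustration, here is a sketch of the RemoveBlanks routine from the question with both of those points applied (ordinal comparison, loop starting at 1), needing no conditional compilation:
Function RemoveBlanks(Const Text: String): String;
Var
  i: Integer;
Begin
  Result := '';
  For i := 1 To Length(Text) Do        // Delphi strings are 1-based
  Begin
    // Char is an ordinal type in every Delphi version, so this works
    // whether Char is AnsiChar or WideChar
    If Text[i] < #21 Then Continue;
    If Text[i] = ' ' Then Continue;
    Result := Result + Text[i];
  End;
End;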
Question 3
Writing #0013 is not needed, #13 is fine.
In short almost all well written code will need no changes.
Compiler Directives
In general, I'd advise you to be very wary of compiler directives. They serve their purpose, but for general use, they should probably be avoided altogether.
The first problem is that you have to compile and test your app twice, because it is fundamentally and/or subtly different with the directive on or off.
This situation gets worse for each additional directive, because you usually have to permute the combinations:
D1 On, D2 On
D1 On, D2 Off
D1 Off, D2 On
D1 Off, D2 Off
3 directives is 8 permutations... etc.
Unicode Strings
Please see: Get ready for Delphi 2009 and up when developing with Delphi 7?
It has some nice answers for you to consider.
Question 1
As said, I advise against it. I also advise against it for other reasons in my answer to the above-mentioned question.
More specifically:
In Delphi <2009, both lines are different.
In Delphi >=2009 both lines are effectively the same.
Question 2
Not only is this ill advised for the same reasons as Question 1, but it actually has some subtle problems.
The more precise type of Text (String) is determined by your Delphi version. So:
In Delphi <2009, the else part of your conditional casts a single character to a Word. (Probably with no ill effect.)
In Delphi >=2009, the if part of your conditional casts a double-byte character to a Byte. (With loss of information.)
Also, there are some special considerations, and new support classes for 'special' characters. You'll want to look into those. Refer to: How to identify unicode keys on key press?
Question 3
I'm pretty sure that #13 will be treated as a single character, so in Delphi >=2009 where Char == WideChar, that character will take up 2 bytes.
However, again, look for the line-break constants in Delphi. System.sLineBreak was probably introduced back in the Kylix days.
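For example, assuming CanReadText takes a plain String, the call from the question becomes:
NewLineBegin := CanReadText( aPTextBuffer, sLineBreak );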
The generic type Char becomes either the fundamental type AnsiChar or the fundamental type WideChar (read up on generic vs. fundamental types). By the way, there is a UNICODE symbol $DEFINEd for you already; however, there is no need to branch at all until a specific byte size is required.
The second part smells; scratch it completely. It is an abuse of typecasts and artificially creates a need for conditional compilation. To get the unsigned integer character code of a given Char, use the Ord() function instead (or, as said in the other answer, use the ordinal traits of the Char type).
For the third part, character constants are of the generic type Char already. Again, there is nothing to worry about: #13 becomes either the byte-sized $0D or the word-sized $000D, stored as the bytes 0D 00 (remember little-endianness).

Protection from Format String Vulnerability

What exactly is a "Format String Vulnerability" in a Windows System, how does it work, and how can I protect against it?
A format string attack, at its simplest, is this:
char buffer[128];
gets(buffer);
printf(buffer);
There's a buffer overflow vulnerability in there as well, but the point is this: you're passing untrusted data (from the user) to printf (or one of its cousins) that uses that argument as a format string.
That is: if the user types in "%s", you've got an information-disclosure vulnerability, because printf will treat the user input as a format string, and will attempt to print the next thing on the stack as a string. It's as if your code said printf("%s");. Since you didn't pass any other arguments to printf, it'll display something arbitrary.
If the user types in "%n", you've got a potential elevation of privilege attack (at least a denial of service attack), because the %n format string causes printf to write the number of characters printed so far to the next location on the stack. Since you didn't give it a place to put this value, it'll write to somewhere arbitrary.
This is all bad, and is one reason why you should be extremely careful when using printf and cousins.
What you should do is this:
printf("%s", buffer);
This means that the user's input is never treated as a format string, so you're safe from that particular attack vector.
In Visual C++, you can use the __Format_string annotation to tell it to validate the arguments to printf; %n is disallowed by default. In GCC, you can use __attribute__((format(printf, ...))) for the same thing.
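For example, a sketch of the GCC annotation on a hypothetical printf-style wrapper (the function name and the argument positions are illustrative):
#include <stdarg.h>
#include <stdio.h>

/* Argument 1 is a printf-style format string; the variadic arguments start
   at position 2. GCC/Clang will then warn when callers pass a non-literal
   or mismatched format string. */
void log_message(const char *fmt, ...) __attribute__((format(printf, 1, 2)));

void log_message(const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    vprintf(fmt, args);
    va_end(args);
}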
In this pseudo-code the user enters some characters to be printed, like "hello":
string s=getUserInput();
write(s)
That works as intended. But write can also format strings, for example:
int i=getUnits();
write("%02d units",i);
outputs: "03 units". What about if the user in the first place wrote "%02d"... since there is no parameters on the stack, something else will be fetched. What that is, and if that is a problem or not depends on the program.
An easy fix is to tell the program to output a string:
write("%s",s);
or use another method that doesn't try to format the string:
output(s);
A link to Wikipedia with more info.
