I wrote my own terminal program that reads data from a microcontroller over the serial port. The data is presented as follows:
>0C82949>0D23949>0A75249> etc...
The data is ASCII. Every element starts with a header of the form >0xx, where xx is a pair of characters such as C8 or D2; the header tells me what the rest of the element means. For example, if >0C8 is the speed of the car, then 2949 holds the actual speed. The microcontroller writes the data really fast, so at any one time I can see about 40 elements. I want to quickly search this for a ">0C8" entry and only print out ">0C82949" out of the bunch:
An example, if I only want 0D2:
Read from Serial Port: >0C82949>0D23949>0A75249>
Output: 0D23949
Would anyone know how to do this? I am aware that since the data comes in so fast I would have to create threads, which I can do; I am just not sure how to approach the parsing itself. Any ideas would be greatly appreciated.
I am using Visual C++.
You can parse the data by splitting it on each > character to create separate strings, then search each string for the desired substring. You may use strstr, CString::Find, or std::string::find.
There is no need to create a separate thread; the search operation is quite trivial and won't take much CPU.
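For illustration, a minimal sketch of that splitting/searching idea in C++ (FindElement is a hypothetical helper; the sample data and the ">0D2" header come from the question):

#include <iostream>
#include <string>

// Return the first element that starts with the given header,
// e.g. header ">0D2" in ">0C82949>0D23949>0A75249>" yields ">0D23949".
std::string FindElement(const std::string& data, const std::string& header)
{
    std::size_t start = data.find(header);
    if (start == std::string::npos)
        return "";                                  // header not present
    std::size_t end = data.find('>', start + 1);    // next element begins at the next '>'
    if (end == std::string::npos)
        end = data.size();
    return data.substr(start, end - start);
}

int main()
{
    const std::string fromSerial = ">0C82949>0D23949>0A75249>";
    std::cout << FindElement(fromSerial, ">0D2") << '\n';   // prints ">0D23949"
}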
Let's say I have a ton of data flowing through stdout over a long period of time, maybe an hour, and I want to capture a 30 second window of that data based on a trigger that occurs in the middle of that window. For instance, maybe something like
$ program-that-outputs-lots-of-data | program-that-captures-a-window-of-data
At some point, a line that contains "A-unique-string" will be output by the program, and at that point I want to save the 15 seconds worth of data before and after that string, discarding everything before that. Immediately afterward, I want to start monitoring again for the same string and capture another window when it comes in and save it to a new file. Any idea how I can do something like this with Linux tools?
The fact that you are trying to use time as the unit for buffering makes your problem quite unusual. On the Unix command line, everything tends to be designed around the concept of a line of text.
For example, if instead of 15 seconds of data you would like to capture 15 lines of text (before and after the special token), you could simply do:
$ program-that-outputs-lots-of-data | grep -C 15 A-unique-string
In your case, even if you are developing your own tailored filtering tool, deciding how much input to save and how much to discard is a fairly complex problem. I'd say that multimedia streaming is the area where there might be some ready-to-use tools.
I don't think anything exists that approaches these goals. Aside from the fact that your requirements are fairly specific, you also ask that the window be time-based, whereas most Unix-style text filters are line-oriented (e.g. grep -C 100 to get 100 lines of context on each side of a match).
It should be fairly straightforward to do this in Python or Perl or Ruby or a similar scripting language.
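For illustration, a rough sketch of that buffering idea (shown here in C++; the same structure carries over to a scripting language): keep a rolling buffer of timestamped lines, drop anything older than 15 seconds, and when the trigger appears, dump the buffer plus the next 15 seconds to a numbered file. The trigger string comes from the question; the file naming and the choice to ignore triggers that arrive mid-capture are simplifying assumptions.

#include <chrono>
#include <deque>
#include <fstream>
#include <iostream>
#include <string>
#include <utility>

int main()
{
    using Clock = std::chrono::steady_clock;
    const auto window = std::chrono::seconds(15);          // half-window size from the question
    const std::string trigger = "A-unique-string";          // trigger string from the question

    std::deque<std::pair<Clock::time_point, std::string>> before;  // rolling pre-trigger buffer
    std::ofstream out;                                       // open while capturing the post-trigger half
    Clock::time_point captureEnd{};
    int fileIndex = 0;
    std::string line;

    while (std::getline(std::cin, line)) {
        const auto now = Clock::now();

        if (out.is_open()) {                                 // currently capturing the 15 s after a trigger
            out << line << '\n';
            if (now >= captureEnd)
                out.close();                                 // window done, go back to monitoring
            continue;
        }

        before.emplace_back(now, line);
        while (!before.empty() && now - before.front().first > window)
            before.pop_front();                              // discard lines older than 15 s

        if (line.find(trigger) != std::string::npos) {       // trigger seen: flush buffer, start capture
            out.open("capture-" + std::to_string(fileIndex++) + ".txt");
            for (const auto& entry : before)
                out << entry.second << '\n';
            before.clear();
            captureEnd = now + window;
        }
    }
}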
I'm trying (for learning purposes) to take advantage of a gets() vulnerability using the return-oriented programming (ROP) technique. The target program is a Windows console application that at some point asks for input and then uses gets() to store it in a local 80-character array.
I created a file that contains 80 'a' characters at the beginning + some extra characters + the 0x5da06c48 address for overwriting the saved EIP.
I'm opening the file in a text editor and copy-pasting the content into the console as input. I've used IDA Pro (or OllyDbg) to set a breakpoint right after the return from gets() and noticed that the address was corrupted; it was set to 0x3fa03f48 (two 3f substitutions).
I've tried other addresses as well; some of them work, but most of the time the address gets corrupted (sometimes characters are missing or substituted, sometimes it is truncated).
How to get over this problem? Any suggestion will be highly appreciated!
Copy-pasting binary data is hit-and-miss. Have you tried feeding the input into your test program directly from the file using input redirection?
First of all, keep track of the endianness of your platform. If you think your bits are in the right order but you are still getting malformed input, it may be that your shell or text editor isn't binary-safe. You are better off writing an exploit for this flaw in a scripting language such as Python, using the subprocess library, which allows you to write data directly to an arbitrary process's stdin pipe.
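To make the endianness and redirection points concrete, a small sketch that writes the payload to a binary file, which can then be fed to the target via input redirection (target.exe < payload.bin). The 80-byte filler and the 0x5da06c48 address are taken from the question; the amount of extra padding is an assumption that depends on the real stack layout:

#include <cstdio>
#include <cstdint>

int main()
{
    std::FILE* f = std::fopen("payload.bin", "wb");   // binary mode: no newline translation
    if (!f)
        return 1;

    for (int i = 0; i < 80; ++i)                      // fill the 80-character buffer
        std::fputc('a', f);
    for (int i = 0; i < 4; ++i)                       // "some extra characters" up to the saved EIP
        std::fputc('b', f);                           // (assumed size; depends on the real stack layout)

    const std::uint32_t target = 0x5da06c48;          // address from the question
    std::fwrite(&target, sizeof(target), 1, f);       // written in host byte order: little-endian on x86

    std::fputc('\n', f);                              // terminate the line that gets() is reading
    std::fclose(f);
}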
What I want to do: let's suppose I have a TStringStream that has just read a string with 100 characters. If I call .ReadString(50), I will get the first 50 characters of the stream and its cursor will be placed at position 51.
My question is: how do I discard characters 1 to 50 from this stream in a fast and clean way? I want to read the rest (51 to 100) later.
Thanks in advance.
You cannot do what you are hoping to do. The string stream's data is a Delphi string which is stored as a single memory block. Memory blocks are atomic, they cannot be split. You cannot free some part of a memory block.
If you really need to return memory to the memory manager then you should create a new string with the already processed data removed. You can then re-create your string stream with this new input and destroy the previous string stream.
Having said that, it's hard to see this doing much other than increasing your memory fragmentation. If the sizes of memory involved are large enough, and if the string stream persists for long enough, then this might just be a sensible approach. Otherwise it sounds like an attempted optimisation that would actually hinder performance.
Perhaps some class other than string stream could be more appropriate but it's very hard to advise without knowing more details.
You can't do this. If you really need to do this, you should write your own class that implements the stream-interface and which would let you process some data a little bit at a time and free whatever you want to free. Note that you would only be able to go through the data once, since you've now deleted your data. That is, seeking to the beginning again would become impossible, and your current stream "position" would be a lie.
In short, sounds like you're confused.
If I understand correctly, you wish to skip forward in the stream?
You can do:
Str.Position := Str.Position + 50;
Or like this:
Str.Seek(50, TSeekOrigin.soCurrent);
The only subroutine I know of that is capable of reading a user's alphabetical input is read_char, but I want to be able to read the user's whole input of chars, no matter how long it is.
I have a vague notion that I have to set aside memory to store the whole input, or something like that. I'm really lost, as I'm not certain whether assembly has an equivalent of reading strings the way C++ does.
Thanks in advance.
Well, you should have a limit when reading input from the user, otherwise your program might not work properly anymore (see buffer overflow for more information), so making room for the input and ensuring the input won't exceed the buffer is very important.
Now, to get a string you have to call a DOS interrupt, giving it a pointer to your buffer and some other information. It will read until a carriage return is met.
But I think your prof wants you to read using his read_char, so (since this is homework) I'll give you a small hint: you have to loop and read chars until..
I'm trying to read in a 24 GB XML file in C, but it won't work. I'm printing out the current position using ftell() as I read it in, but once it gets to a big enough number, it goes back to a small number and starts over, never even getting 20% through the file. I assume this is a problem with the range of the variable that's used to store the position (long), which only goes up to about 2,147,483,647 according to http://msdn.microsoft.com/en-us/library/s3f49ktz(VS.80).aspx, while my file is 25,000,000,000 bytes in size. A long long should work, but how would I change what my compiler (Cygwin/MinGW32) uses, or get it to have fopen64?
The ftell() function returns a long, which on a 32-bit system only goes up to 2^31 - 1 bytes (about 2 GB). So you can't get the file offset within a 24 GB file to fit into a 32-bit long.
You may have the ftell64() function available, or the standard fgetpos() function may return a larger offset to you.
You might try using the OS-provided file functions CreateFile and ReadFile. According to the File Pointers topic, the position is stored as a 64-bit value.
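A minimal sketch of that approach, using CreateFile/ReadFile together with SetFilePointerEx and a LARGE_INTEGER to hold the 64-bit offset (the file name and seek offset are placeholders):

#include <windows.h>
#include <iostream>

int main()
{
    HANDLE file = CreateFileA("huge.xml", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return 1;

    LARGE_INTEGER offset, newPos;
    offset.QuadPart = 20000000000LL;                  // seek well past the 4 GB mark
    if (!SetFilePointerEx(file, offset, &newPos, FILE_BEGIN))
        return 1;
    std::cout << "Now at byte " << newPos.QuadPart << '\n';   // 64-bit position

    char buffer[4096];
    DWORD bytesRead = 0;
    if (ReadFile(file, buffer, sizeof(buffer), &bytesRead, NULL))
        std::cout << "Read " << bytesRead << " bytes\n";

    CloseHandle(file);
    return 0;
}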
Unless you can use a 64-bit method as suggested by Loadmaster, I think you will have to break the file up.
This resource seems to suggest it is possible using _telli64(). I can't test this, though, as I don't use MinGW.
I don't know of any way to do this in one file. It's a bit of a hack, but if splitting the file up properly isn't a real option, you could write a few functions that temporarily split the file: one that uses ftell() to move through the file and switches to a new file when it reaches the split point, and another that stitches the files back together before exiting. An absolutely botched-up approach, but if no better solution comes to light, it could be a way to get the job done.
I found the answer. Instead of using fopen, fseek, fread, fwrite... I'm using _open, _lseeki64, read, write. And I am able to write and seek in > 4 GB files.
Edit: It seems the latter functions are about 6x slower than the former ones. I'll give the bounty to anyone who can explain that.
Edit: Oh, I learned here that read() and friends are unbuffered. What is the difference between read() and fread()?
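On the 6x slowdown noted in the edit: fread() buffers in user space while read() issues a system call for every request, so many small reads get much more expensive. A hedged sketch, assuming that is the cause, of reading through _open/_read in large chunks to amortise the per-call overhead (the file name and chunk size are placeholders):

#include <fcntl.h>
#include <io.h>
#include <iostream>
#include <vector>

int main()
{
    int fd = _open("huge.xml", _O_RDONLY | _O_BINARY);    // low-level, unbuffered handle
    if (fd == -1)
        return 1;

    std::vector<char> buffer(1 << 20);                    // 1 MB chunks reduce the number of syscalls
    long long total = 0;
    int got;
    while ((got = _read(fd, buffer.data(), static_cast<unsigned>(buffer.size()))) > 0)
        total += got;                                     // process buffer[0..got) here

    std::cout << "Read " << total << " bytes\n";
    _close(fd);
    return 0;
}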
Even if the ftell() in the Microsoft C library returns a 32-bit value and thus obviously will return bogus values once you reach 2 GB, just reading the file should still work fine. Or do you need to seek around in the file, too? For that you need _ftelli64() and _fseeki64().
Note that unlike some Unix systems, you don't need any special flag when opening the file to indicate that it is in some "64-bit mode". The underlying Win32 API handles large files just fine.
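A short sketch of the _fseeki64()/_ftelli64() route, keeping the buffered stdio calls but using 64-bit offsets (the file name and offset are placeholders):

#include <cstdio>
#include <iostream>

int main()
{
    std::FILE* f = std::fopen("huge.xml", "rb");      // placeholder file name
    if (!f)
        return 1;

    if (_fseeki64(f, 20000000000LL, SEEK_SET) != 0)   // 64-bit seek, well past the 4 GB mark
        return 1;

    char buffer[4096];
    std::size_t got = std::fread(buffer, 1, sizeof(buffer), f);
    std::cout << "Read " << got << " bytes, now at byte " << _ftelli64(f) << '\n';

    std::fclose(f);
    return 0;
}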