How to launch an app in memory on a Linux system

I read an encrypted file and decrypt it into a buffer. How can I run the decrypted code?
Where should I jump to? In DOS, I know: jump to buffer offset 0x100, that's the code entry point. How about on Linux?
thank you
Xian

Try using tail -c (output last K bytes).
Full answer:
First convert the offset from hex to dec (remove the "0x" before converting).
Then find your input file size and deduct 0x100 from it:
hex="100"
# convert hex to dec
dec=$(echo "obase=10; ibase=16; ${hex}" | bc)
# input_file size in bytes
file_size=$(stat --printf="%s" input_file)
truncated_file_size=$(($file_size - $dec))
tail -c $truncated_file_size input_file > new_file
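A shorter route, assuming GNU tail: tail -c +K starts output at the Kth byte (1-based), and bash arithmetic understands hex literals directly, so the bc step can be dropped:
# skip the first 0x100 bytes; output starts at byte 0x100+1
tail -c +$((0x100 + 1)) input_file > new_file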


Replace parts of file with 0xFF?

I want to modify a file so every byte from location 0x3000 to 0xDC000 is replaced with 0xFF, everything else should be unmodified.
How to accomplish this with standard Linux tools?
This is jhnc's answer with small improvements (explained at the end of this answer).
#! /bin/bash
overwrite() {
    file="$1"; from="$2"; to="$3"; with="$4"
    yes '' | tr \\n "\\$(printf %o "$with")" |
        dd conv=notrunc bs=1 seek="$((from))" count="$((to-from))" of="$file"
}
In your case you would use the function from above like
overwrite yourFile 0x3000 0xDC000 0xFF
The start and end bytes are both 0-based; the start is inclusive and the end is exclusive. Example:
$ printf 00000 > file
$ overwrite file 1 3 0x57
$ hexdump -C file
00000000 30 57 57 30 30 |0WW00|
00000005
Improvements made:
Fixed wrong count=... and explained interpretation of start and end.
Allow filling with null bytes.
If you want to write null bytes 0x00 you cannot use yes $'\x00'. The null byte would represent the end of yes's argument string making the call equivalent to yes ''. Since yes '' | tr -d \\n produces no output dd will wait indefinitely.
The command presented in this answer allows you to fill the region with any byte (choose one from 0x00 to 0xFF).
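If bs=1 is too slow for a large region, here is a minimal alternative sketch, assuming GNU coreutils (oflag=seek_bytes requires dd from coreutils 8.20 or newer); it generates the fill bytes with head and tr and writes them in one larger-blocked pass:
# generate 0xDC000-0x3000 bytes of 0xFF, then overwrite in place at byte offset 0x3000
head -c $((0xDC000 - 0x3000)) /dev/zero | LC_ALL=C tr '\0' '\377' |
    dd conv=notrunc oflag=seek_bytes bs=64K seek=$((0x3000)) of=yourFile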
If Perl is an option, please try the following:
perl -e '
$start = 0x3000;                     # start position to overwrite
$end = 0xDC000;                      # end position to overwrite
$file = "file";                      # name of the file to modify (replace with your filename)
open(FH, "+< $file") or die "$file"; # open the file "$file" for both read and write with the filehandle "FH"
seek(FH, $start, 0);                 # jump to the start position
for ($i = $start; $i < $end; $i++) { # loop over the overwrite area
    print FH "\xFF";                 # replace the byte with 0xFF
}
close(FH);
'
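A note on speed: the loop issues one print call per byte. Since the region here is under 1 MiB, the loop body can be collapsed into a single write (a sketch along the same lines as the script above, not a tested replacement):
seek(FH, $start, 0);                   # jump to the start position
print FH "\xFF" x ($end - $start);     # build the fill string once and write it in one call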

Hex dump to binary data conversion

I need to convert a hex dump file to binary data using the xxd command (or any other suitable method that works). The raw hex dump was not produced with xxd.
I tried two variants with different options:
xxd -r input.log binout.bin
xxd -r -p input.log binout.bin
Both methods produce wrong results: the first command creates a 2.2 GB binary file, the second an 82382-byte one. Both sizes are wrong; the expected binary size is 65536 bytes.
part of hex file:
807e0000: 4562 537f e0b1 6477 84bb 6bae 1cfe 81a0 | EbS...dw..k.....
807e0010: 94f9 082b 5870 4868 198f 45fd 8794 de6c | ...+XpHh..E....l
807e0020: b752 7bf8 23ab 73d3 e272 4b02 57e3 1f8f | .R{.#.s..rK.W...
807e0030: 2a66 55ab 07b2 eb28 032f b5c2 9a86 c57b | *fU....(./.....{
807e0040: a5d3 3708 f230 2887 b223 bfa5 ba02 036a | ..7..0(..#.....j
807e0050: 5ced 1682 2b8a cf1c 92a7 79b4 f0f3 07f2 | \...+.....y.....
807e0060: a14e 69e2 cd65 daf4 d506 05be 1fd1 3462 | .Ni..e........4b
What can be the issue here, and how do I convert the data correctly?
Before you can convert the dump back, you need to remove the first and last parts of each line (the address column and the ASCII column).
$ sed -i 's/^.\{9\} //' binary.txt
$ sed -i 's/ | .\{16\}$//' binary.txt
binary.txt is the name of your hex dump file. (The second command removes the " | " separator together with the 16 ASCII characters; a stray "|" left behind would break the conversion step below.)
After that you can convert it to binary again. Each whitespace-separated group is four hex digits, i.e. two bytes (bash's printf consumes at most two hex digits per \x escape, so each group must be split):
$ for i in $(cat binary.txt) ; do printf "\x${i:0:2}\x${i:2:2}" ; done > mybinary
After this, if you have the original .bin file you can compare md5sums of the two files. If they have the same value, the transformation completed successfully.
$ md5sum originbinary
$ md5sum mybinary
More details are covered in the first part of this link: https://acassis.wordpress.com/2012/10/21/how-to-transfer-files-to-a-linux-embedded-system-over-serial/
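If xxd is available anyway, there is a shorter path, assuming the column widths shown in the sample above: once the address and ASCII columns are stripped, xxd -r -p accepts the remaining plain hex directly:
$ sed 's/^.\{9\} //; s/ | .\{16\}$//' input.log | xxd -r -p > mybinary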

Add line feed every 2391 byte

I am using Redhat Linux 6.
I have a file which comes from a mainframe (MVS) through an EBCDIC-to-ASCII conversion.
(But I suspect some of the conversion may be wrong.)
Anyway, I know that the record length is 2391 bytes. There are 10 records and the file size is 23910 bytes.
In each 2391-byte record there are many 0x0A and 0x0D characters (not CRLF pairs). I want to replace them with, say, # and #.
Also, I want to add an LF (i.e. 0x0A) every 2391 bytes so the file becomes a normal Unix text file for further processing.
I have tried to use
dd ibs=2391 obs=2391 if=emyfile of=myfile.new
But this does not work; both files are the same.
I also tried
dd ibs=2391 obs=2391 if=myfile | awk '{print $0}'
But this also does not work.
Can anyone help on this ?
Something like this:
#!/bin/bash
for i in {0..9}; do
    dd if=emyfile bs=2391 count=1 skip=$i | LC_CTYPE=C tr '\r\n' '##'
    echo
done > newfile
If your files are longer, you will need more than 10 iterations. I would handle that by running an infinite loop and exiting the loop on error, like this:
#!/bin/bash
i=0
while :; do
    dd if=emyfile bs=2391 count=1 skip=$i | LC_CTYPE=C tr '\r\n' '##'
    [ ${PIPESTATUS[0]} -ne 0 ] && break
    echo
    ((i++))
done > newfile
However, on my iMac under OSX, dd doesn't seem to exit with an error when you go past end of file - maybe try your luck on your OS.
You could try
$ dd bs=2391 cbs=2391 conv=ascii,unblock if=emyfile of=myfile.new
conv=ascii converts from EBCDIC to ASCII. conv=unblock inserts a newline at the end of each cbs-sized block (after removing trailing spaces).
If you already have a file in ASCII and just want to replace some characters in it before splitting the blocks, you could use tr(1). For example, the following will replace each carriage return with '#' and each newline (linefeed) with '#':
$ tr '\r\n' '##' < emyfile | dd bs=2391 cbs=2391 conv=unblock of=myfile.new
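If the data is already ASCII and you do not need the trailing-space stripping that conv=unblock performs, fold (from GNU coreutils) can insert the newlines instead, since the tr pass has already removed every newline from the stream; note the last record may end without a trailing LF:
$ tr '\r\n' '##' < emyfile | fold -b -w 2391 > myfile.new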

How to insert an offset to hexdump with xxd?

Is there an easy way to add an offset to the hex dump generated by xxd ?
i.e. instead of
0000: <data>
0004: <data>
0008: <data>
I should get
Offset+0000: <data>
Offset+0004: <data>
Offset+0008: <data>
xxd now appears to come with offset support, using -o [offset]
for example: xxd -o 0x07d20000 file.bin
My version of xxd on Gentoo Linux has it, but I dug deeper to help folks on other distros:
Do not go by the version string (xxd V1.10 27oct98 by Juergen Weigert) -- I have found source code reporting this same version without the offset support! So I tracked down where my binary comes from: app-editors/vim-core-7.4.769. Apparently, as long as you have a modern Vim installed, you can reap the benefits of the added offset support; at least on Gentoo, but this should steer you in the right direction.
If you find that your distro still ships an older xxd, consider manually compiling a newer Vim that you have confirmed has offset support.
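If you are unsure what your installed binary supports, a quick feature test is safer than the version string. This sketch is my own convention, not from the xxd docs; it assumes an xxd without -o rejects the flag with a non-zero exit status:
if xxd -o 0 /dev/null >/dev/null 2>&1; then
    xxd -o 0x07d20000 file.bin          # native offset support
else
    echo "this xxd lacks -o; use the awk filter shown later in this thread" >&2
fi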
This is what I am doing now. It works perfectly, but it's kind of a lame approach for just adding an offset :)
xxd file.bin | xxd -r -s 0x2e00000 | xxd -s 0x2e00000 > file.hex
Reading your comment below:
I want the first byte of the binary file to be present at the offset, i.e. just add an offset without seeking.
makes me believe the only way to do this is to parse the output and modify it to add the desired offset.
I didn't find anything in the docs that would allow this to be done more easily, sorry. :(
If you can live with AWK here's a proof of concept:
$ xxd random.bin | gawk --non-decimal-data ' # <-- treat 0x123 like numbers
> {
> offset = 0x1000 # <-- your offset, may be hex or dec
>
> colon = index($0, ":") - 1
> x = "0x" substr($0, 1, colon) # <-- add 0x prefix to original offset ...
> sub(/^[^:]*/, "") # <-- ... and remove it from line
>
> new = x + offset # <-- possible thanks to --non-decimal-data
> printf("%0"colon"x", new) # <-- print updated offset ...
> print # <-- ... and the rest of line
> }'
0001000: ac48 df8c 2dbe a80c cd03 06c9 7c9d fe06 .H..-.......|...
0001010: bd9b 02a1 cf00 a5ae ba0c 8942 0c9e 580d ...........B..X.
0001020: 6f4b 25a6 6c72 1010 8d5e ffe0 17b5 8f39 oK%.lr...^.....9
0001030: 34a3 6aef b5c9 5be0 ef44 aa41 ae98 44b1 4.j...[..D.A..D.
^^^^
updated offsets (+0x1000)
I bet it would be shorter in Perl or Python, but AWK just feels more "script-ish" :-)

Read characters from a text file using bash

Does anyone know how I can read the first two characters from a file, using a bash script? The file in question is actually an I/O driver; it has no newline characters in it, and is in effect infinitely long.
The read builtin supports the -n parameter:
$ echo "Two chars" | while read -n 2 i; do echo $i; done
Tw
o
ch
ar
s
$ cat /proc/your_driver | (read -n 2 i; echo $i;)
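The cat is not needed, by the way; a plain redirection feeds read the same data (this assumes bash, where -n counts characters):
$ read -n 2 i < /proc/your_driver; echo "$i"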
I think
dd if=your_file ibs=2 count=1 will do the trick
Looking at it with strace shows it is effectively doing a two-byte read from the file.
Here is an example reading from /dev/zero and piping into hd to display the zero bytes:
dd if=/dev/zero bs=2 count=1 | hd
1+0 records in
1+0 records out
2 bytes (2 B) copied, 2.8497e-05 s, 70.2 kB/s
00000000 00 00 |..|
00000002
echo "Two chars" | sed 's/../&\n/g'
G'day,
Why not use od to get the slice that you need?
od --read-bytes=2 my_driver
Edit: You can't use head for this as the head command prints to stdout. If the first two chars are not printable, you don't see anything.
The od command has several options to format the bytes as you want.
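For example, -c prints the bytes as characters (with escapes for non-printable ones), -N is the short form of --read-bytes, and -An drops the offset column, so only the two bytes appear:
$ od -An -c -N 2 my_driver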
