Debug unicode msfvenom payload - shellcode

I'd like to debug an msfvenom-generated payload.
I'm perfectly able to do it when I generate the payload with "standard" encoding in this way:
msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp -e x86/alpha_mixed BufferRegister=ECX
In this case, I can take the resulting payload, load it with something like Blobrunner, and I'm ready to go.
If I generate the shellcode with unicode encoding:
msfvenom -a x86 --platform windows -p windows/meterpreter/reverse_tcp -e x86/unicode_mixed BufferRegister=ECX
The same is not working: I get a lot of duplicated/invalid instructions when I process the shellcode. This makes sense, because I guess I'm treating it like a normally encoded payload. But how can I debug this shellcode? Is there any tool to convert it to a "valid" sequence by removing the "unicode padding"?
Thanks a lot!

I think I got it: it's just a matter of interleaving a 00 byte between the payload bytes... easy... don't know why I missed it before.
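For anyone who lands here: one way to do that interleave is to expand every byte of the raw encoded payload to byte + 0x00 before loading it into Blobrunner. A minimal sketch with xxd and sed, assuming the payload was saved raw (e.g. -f raw) to a file; the file names are just placeholders:
xxd -p shellcode.bin | sed 's/../&00/g' | xxd -r -p > shellcode_padded.bin
This mimics what the target process does when it converts the ASCII buffer to UTF-16LE, which is what makes the unicode-encoded bytes decode to valid instructions.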

Related

What's the default encoding in bash standard input? [duplicate]

I am using Gina Trapani's excellent todo.sh to organize my todo list.
However, being a Dane, it would be nice if the script accepted special Danish characters like ø and æ.
I am an absolute UNIX-n00b, so it would be a great help if anybody could tell me how to fix this! :)
Slowly, the Unix world is moving from ASCII and other regional encodings to UTF-8. You need to be running a UTF-8-capable terminal, such as a modern xterm or PuTTY.
In your ~/.bash_profile, set your language to one of the UTF-8 variants:
export LANG=C.UTF-8
or
export LANG=en_AU.UTF-8
etc.
You should then be able to write UTF-8 characters in the terminal, and include them in bash scripts.
#!/bin/bash
echo "UTF-8 is græat ☺"
See also: https://serverfault.com/questions/11015/utf-8-and-shell-scripts
What does this command show?
locale
It should show something like this for you:
LC_CTYPE="da_DK.UTF-8"
LC_NUMERIC="da_DK.UTF-8"
LC_TIME="da_DK.UTF-8"
LC_COLLATE="da_DK.UTF-8"
LC_MONETARY="da_DK.UTF-8"
LC_MESSAGES="da_DK.UTF-8"
LC_PAPER="da_DK.UTF-8"
LC_NAME="da_DK.UTF-8"
LC_ADDRESS="da_DK.UTF-8"
LC_TELEPHONE="da_DK.UTF-8"
LC_MEASUREMENT="da_DK.UTF-8"
LC_IDENTIFICATION="da_DK.UTF-8"
LC_ALL=
If not, you might try doing this before you run your script:
LANG=da_DK.UTF-8
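For a single run that would look something like this (assuming todo.sh's add command; "købe mælk" is just sample Danish text):
LANG=da_DK.UTF-8 ./todo.sh add "købe mælk"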
You don't say what happens when you run the script and it encounters these characters. Are they in the todo file? Are they entered at a prompt? Is there an error message? Is something output in place of the expected output?
Try this and see what you get:
read -p "Enter some characters" string
echo "$string"

programmatically access IME

Is there a way to access a Japanese or Chinese IME either from the command line or from Python? I have Linux/OSX/Win8 boxes, so whichever system exposes the most easily accessible API is fine.
I'm experimenting with building a Japanese kana-kanji conversion algorithm and would like to establish a baseline using existing tools. I also have some collections of kana I would like to process.
Preferably I would like something along the lines of
$ ime JP "きしゃのきしゃがきしゃできしゃした"
貴社の記者が汽車で帰社した
I've looked at anthy, mozc and dbus on Linux but can't find any way to interact with them via the terminal or scripting (such as Python).
Anthy provides a CLI tool.
Personally, I prefer Google's IME / mozc for better results, but perhaps this helps.
The source for anthy (SourceForge, file anthy-9100h.tar.gz) includes a simple CLI program for testing. Download the source file, extract it, and run:
./configure && make
Enter the test directory, which contains the anthy binary. By default, it reads from test.txt and uses EUC-JP encoding.
Simple test:
Input file test.txt
*にほんごにゅうりょく
*もももすももももものうち。
Run (using iconv to convert the output to UTF-8):
./anthy --all | iconv -f EUC-JP -t UTF-8
Output:
1:(にほんごにゅうりょく)
|にほんご|にゅうりょく
にほんご(日本語:(1,1000,N,72089)2500,001 ,にほんご:(N,0,-)2 ,ニホンゴ:(N,0,-)1 ,):
にゅうりょく(入力:(1,1000,N,62394)2500,001 ,にゅうりょく:(N,0,-)2 ,ニュウリョク:(N,0,-)1 ,):
2:(もももすももももものうち。)
|ももも|すももも|もものうち|。
ももも(桃も:(,1000,Ny,72089)225,279 ,ももも:(N,1000,Ny,72089)220,773 ,モモも:(,1000,Ny,72089)205,004 ,腿も:(,1000,Ny,72089)204,722 ,股も:(,1000,Ny,72089)146,431 ,モモモ:(N,0,-)1 ,):
すももも(すももも:(N,1000,Ny,72089)202,751 ,スモモも:(,1000,Ny,72089)168,959 ,李も:(,1000,Ny,72089)168,677 ,スモモモ:(N,0,-)1 ,):
もものうち(桃のうち:(,1000,N,655)2,047 ,もものうち:(N,1000,N,655)2,006 ,モモのうち:(,1000,N,655)1,863 ,腿のうち:(,1000,N,655)1,861 ,股のうち:(,1000,N,655)1,331 ,モモノウチ:(N,0,-)1 ,):
。(。:(1N,100,N,70203)57,040 ,.:(1,100,N,70203)52,653 ,.:(1,100,N,70203)3,840 ,):
You can uncomment some printf statements in the source files test/main.c and src-main/context.c to make the output more readable/parsable, e.g.:
1 にほんごにゅうりょく
にほんご 日本語
にゅうりょく 入力
2 もももすももももものうち。
ももも 桃も
すももも すももも
もものうち 桃のうち
。 。
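If your own kana collections are stored as UTF-8, convert them to EUC-JP first, since that is what the test binary reads; a sketch assuming your list is in a file named kana.txt (keep the leading * on each line, as in the test.txt example above):
iconv -f UTF-8 -t EUC-JP kana.txt > test.txt
./anthy --all | iconv -f EUC-JP -t UTF-8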

Handling command line options with multiple arguments for some flags

I'm writing a program where the command line usage should be something like:
mkblueprint FILE FILE FILE -o <output name> -s <string> -r <number> -p pOPT1 pOPT2 pOPT3
I'm currently using CmdLib and I can't figure out a way to handle this; a flag is required for each input (so I can't just have FILEs sitting alone), and there doesn't appear to be a way to pass multiple arguments to a flag, as with -p. These are extremely common in command-line programs, so I figure I'm just misunderstanding the documentation, but it's not mentioned in any command-line library I look at for Haskell.
After some more work with CmdLib I was able to handle the bare FILE inputs via the Extra tag, then check that each string is a valid file, which seems to be the standard way to handle it despite the name. -p pOPT1 pOPT2 pOPT3 is apparently not allowed under the POSIX standard, which is why I'm not finding libraries that will do it.
You might consider the GetOpt bindings that come with base. They're not as sexy as some of the more modern alternatives, but they support bare arguments and final options well.

How can I get perf to find symbols in my program

When using perf report, I don't see any symbols for my program, instead I get output like this:
$ perf record /path/to/racket ints.rkt 10000
$ perf report --stdio
# Overhead Command Shared Object Symbol
# ........ ........ ................. ......
#
70.06% ints.rkt [unknown] [.] 0x5f99b8
26.28% ints.rkt [kernel.kallsyms] [k] 0xffffffff8103d0ca
3.66% ints.rkt perf-32046.map [.] 0x7f1d9be46650
Which is fairly uninformative.
The relevant program is built with debugging symbols, and the sysprof tool shows the appropriate symbols, as does Zoom, which I think is using perf under the hood.
Note that this is on x86-64, so the binary is compiled with -fomit-frame-pointer, but that's the case when running under the other tools as well.
This post is already over a year old, but since it came out at the top of my Google search results when I had the same problem, I thought I'd answer it here. After some more searching around, I found the answer given in this related StackOverflow question very helpful. On my Ubuntu Raring system, I then ended up doing the following:
Compile my C++ sources with -g (fairly obvious, you need debug symbols)
Run perf as
perf record -g dwarf -F 97 /path/to/my/program
This way perf is able to handle the DWARF 2 debug format, which is the standard format gcc uses on Linux. The -F 97 parameter reduces the sampling rate to 97 Hz. The default sampling rate was apparently too large for my system and resulted in messages like this:
Warning:
Processed 172390 events and lost 126 chunks!
Check IO/CPU overload!
and the perf report call afterwards would fail with a segmentation fault. With the reduced sampling rate everything worked out fine.
Once the perf.data file has been generated without any errors in the previous step, you can run perf report etc. I personally like the FlameGraph tools to generate SVG visualizations.
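For reference, the usual FlameGraph pipeline looks roughly like this (the ./FlameGraph/ path is just where I assume you cloned the scripts):
perf script | ./FlameGraph/stackcollapse-perf.pl | ./FlameGraph/flamegraph.pl > flame.svg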
Other people reported that running
echo 0 > /proc/sys/kernel/kptr_restrict
as root can help as well, if kernel symbols are required.
In my case the solution was to delete the ELF files which contained cached symbols from previous builds and were messing things up.
They are in the ~/.debug/ folder.
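In other words, something like this (perf simply repopulates the cache on the next run):
rm -rf ~/.debug/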
You can always use the nm command.
Here is some sample output:
Ethans-MacBook-Pro:~ phyrrus9$ nm a.out
0000000100000000 T __mh_execute_header
0000000100000f30 T _main
U _printf
0000000100000f00 T _sigint
U _signal
U dyld_stub_binder
I had this problem too: I couldn't see any userspace symbols, but I saw some kernel symbols. I thought this was a symbol-loading issue. After trying all the possible solutions I could find, I still couldn't get it to work.
Then I faintly remembered that
ulimit -u unlimited
is needed. I tried it and it magically worked.
I found from this wiki that this command is needed when you use too many file descriptors.
https://perf.wiki.kernel.org/index.php/Tutorial#Troubleshooting_and_Tips
My final command was
perf record -F 999 -g ./my_program
I didn't need --call-graph.
Make sure that you compile the program using the -g option with gcc (cc) so that debugging information is produced in the operating system's native format.
Try the following and check whether there are debug symbols present in the symbol table:
$ objdump -t your-elf
$ readelf -a your-elf
$ nm -a your-elf
What about your dev host machine? Is it also running an x86_64 OS?
If not, please make sure that perf is cross-compiled, because perf depends on objdump and other tools in the toolchain.
I got the same problem with perf after overriding the name of my program via prctl(PR_SET_NAME).
As far as I can see, your case is pretty similar:
70.06% ints.rkt [unknown]
The command you executed (racket) is different from the one perf has seen.
You can check the effect of kptr_restrict with cat /proc/kallsyms. If the addresses of the symbols in the result are all 0x000000, you can fix it with echo 0 > /proc/sys/kernel/kptr_restrict. After this, you should get the expected result from perf report.
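As a concrete sequence (tee is only used so the write happens with root privileges):
head /proc/kallsyms
echo 0 | sudo tee /proc/sys/kernel/kptr_restrict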

Parsing multithreaded make's output (-j N)

I have a lot of source directories in a common directory. When I start make by issuing the command:
make -j 4
I receive a lot of interleaved output from make's parallel jobs along with the invoked gcc compiler instances.
To parse the errors, I have to run make twice, the second time with a single job:
make -j 1
so I can correctly parse make's output.
Is there a way to run multi-job make once and still correctly decide which error relates to which project (source directory)?
Thank you!
If you are using recursive make (naughty boy) together with -j, then you can wrap Make with a shell script which prefixes each line of output with a unique per-make-invocation string.
$ cat M
#!/bin/bash
PREFIX=$$:
exec -a $0 make "$@" 2>&1 | sed "s/^/$PREFIX/"
Now, assuming your makefiles correctly use ${MAKE} to indicate recursion, we can use ./M instead of make.
$ ./M -j --no-print-directory target
28720:/home/user/M -fa.mak
28720:/home/user/M -fb.mak
28720:/home/user/M -fc.mak
28720:/home/user/M -fd.mak
28720:/home/user/M -fe.mak
28720:32484:gcc blah...
28720:31936:/home/user/M -fanother.mak
28720:32484:gcc blah...
28720:31936:gcc blah...
28720:31936:gcc blah...
28720:31936:56781:echo blah...
∶
In this case, each line is prefixed with a list of process IDs (good for debugging recursive make). For your use case, you may prefer M to mangle source file names so that they become absolute pathnames in error messages.
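A simpler variant along those lines, shown only as a sketch: instead of rewriting file names, prefix every line with the directory the (sub-)make was started in, so relative paths in gcc errors can be resolved:
#!/bin/bash
# like M above, but tag each line with the invoking directory instead of the PID
exec -a $0 make "$@" 2>&1 | sed "s|^|$PWD: |"
The exec -a $0 trick is kept so that recursive $(MAKE) invocations still go through the wrapper.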
Maybe it is possible to switch to pmake?
PMake is set up to handle the output from multiple jobs in a graceful fashion (source)
If you're using some kind of meta-build system (e.g. CMake), try using Ninja to actually run the build.
It solves this problem as well as being quite a lot faster.
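With CMake, for example, that is just a different generator (standard CMake/Ninja usage; adjust paths to taste):
mkdir build && cd build
cmake -G Ninja ..
ninja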
