Best Way to Hexdump Shellcode

I'm trying to get shellcode from some programs I wrote. Besides copying the hex out of objdump -D shellcode, is there a better way to get a pure hex dump of the machine code? I've run the binary through hexdump as well, but that spits out far too many lines, and as it stands I don't know how to tell it to give me only the section of machine code after _start.
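Edit: one approach that seems to sidestep both problems is to extract just the .text section with objcopy and hex-dump only that; a sketch, with the output file names as placeholders:

> objcopy -O binary -j .text shellcode shellcode.bin
> od -An -v -tx1 shellcode.bin
> xxd -i shellcode.bin

(xxd -i emits a ready-made C array, if that is the end goal.)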

Related

sha256sum and hashalot produce different values on Linux

I've suddenly discovered that two different SHA-256 calculators produce different values. Here is a real example: having downloaded a Neovim image, at first I didn't understand what was going on:
> cat nvim.appimage | sha256sum
ef9056e05ef6a4c1d0cdb8b21f79261703122c0fd31f23f782158d326fdadbf5 -
> cat nvim.appimage | hashalot -x sha256
ced1af6d51438341a0335cc00e1c2867fb718a537c1173cf210070a6b1cdf40a
The correct result is the one sha256sum gives; it matches the value on the official page. Did I do something wrong? And how can I avoid such unexpected effects in the future?
The operating system is Linux Mint 19 Cinnamon.
Thanks to user17732522 (https://stackoverflow.com/users/17732522/user17732522)
I forgot the name of the proper program, and after I typed the wrong name, the shell suggested installing hashalot to calculate the sum. I did, and I read the man page, but only the first lines. If I had looked deeper, I wouldn't have been puzzled and wouldn't have asked the question: hashalot hashes a passphrase, not a whole file. The answer turned out to be that simple. Thanks a lot!
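(For the record: since hashalot is a passphrase hasher, its output is not comparable with a file checksum at all. To check a download against a published value, sha256sum's check mode is the usual tool; a sketch reusing the hash from above:

> echo "ef9056e05ef6a4c1d0cdb8b21f79261703122c0fd31f23f782158d326fdadbf5  nvim.appimage" | sha256sum -c
nvim.appimage: OK

Note the two spaces between the hash and the file name; that is the format sha256sum -c expects.)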

POP3 buffer gets translated in a strange way. Characters are bad when they shouldn't be

I've been trying to write a script for a buffer overflow attack on the SLMail service (within a legal setting, over a VPN; I am following a penetration testing course).
The issue I'm having came up while trying to determine which characters were bad. As it turns out, anything above 0x7F is getting mangled. I followed the exact example my textbook gave me and googled for similar cases, but not a single one I found ever mentioned this issue.
In theory, the only bad characters for this particular buffer should be \x00, \x0a and \x0d.
Here, everything above 0x7F is a mess: C2s and C3s appear every other byte, while the remaining bytes are somehow translated (FF turns into BF, for example). This leaves me completely unable to get my shellcode through. I've tried removing some characters and changing their order, but no matter what order I put them in, anything above 0x7F comes out translated, with C2/C3 every other byte.
Link to both my script code and the memory dump resulting from it.
(The for loop is weird, I know.)
I figured it out.
I was using Python 3, which requires strings to be encoded to bytes before they are sent.
By translating the script into Python 2.7, where strings are already byte strings, I no longer needed to encode them, and they went through without any mangling.
https://imgur.com/a/OOct5Z9
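For anyone hitting the same wall: the stray C2/C3 bytes are UTF-8 lead bytes. In Python 3, building the payload as a str and calling .encode() turns every character above 0x7F into a two-byte UTF-8 sequence (0xFF becomes C3 BF, which is exactly the "FF turns into BF" effect). A minimal sketch of the bug and of the Python 3 fix, which is to build bytes objects and never a str (the socket line is a hypothetical placeholder):

    # The bug: a str payload encoded as UTF-8 mangles bytes above 0x7F.
    badchars = "".join(chr(i) for i in range(0x100))
    print(badchars.encode("utf-8")[0x7E:0x84].hex())   # -> 7e7fc280c281

    # The fix, without dropping back to Python 2.7: build bytes directly.
    payload = bytes(range(0x01, 0x100))                # raw bytes, no \x00
    # sock.send(b"PASS " + payload + b"\r\n")          # hypothetical send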

How to output an IBM-1027 code page binary file?

The output (CSV/JSON) from my newly created program (using .NET Framework 4.6) needs to be converted to an IBM-1027 code page binary file (to be imported into a Japanese client's IBM mainframe).
I've searched the internet and found that Microsoft doesn't ship an equivalent of the IBM-1027 code page.
So how can I output an IBM-1027 code page binary file when what I have is a UTF-8 CSV/JSON file?
I'm asking around for other solutions, but for now I think I'm going to have to suggest you do the conversion manually; I assume whichever language you're using allows you to do a hex conversion, at worst. On mainframes, the code page is usually implicit in the dataset; it isn't something included in a file header.
So what you can do is build a conversion table from IBM's published chart: https://www.ibm.com/support/knowledgecenter/en/SSEQ5Y_5.9.0/com.ibm.pcomm.doc/reference/html/hcp_reference26.htm. Grab a character from your JSON/CSV file, look up the corresponding CP1027 code point, and write that byte to the output file. Repeat until EOF. (Note: write the actual byte values, not the ASCII representation of the hex digits.) Make sure that when the client transfers the file to their system, they perform a binary transfer.
If you want to get more sophisticated than that, you could look at enhancing/overriding part of the converter for CP500, which does exist on Microsoft Windows. One of the design points of EBCDIC was to make character conversion as simple as possible, so many CP500 characters have the same hex representation as their CP1027 counterparts, with the exception of the Kanji characters.
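A rough sketch of that table-driven approach, in Python for brevity (the same shape works in C# on .NET 4.6). The table entries below are illustrative placeholders, to be filled in from IBM's chart linked above:

    # Hypothetical UTF-8 -> CP1027 conversion via a hand-built table.
    CP1027 = {
        " ": 0x40,   # EBCDIC space; verify against the CP1027 chart
        "A": 0xC1,   # placeholder entry, likewise to be verified
    }
    # Optional shortcut per the CP500 remark above: seed the table from
    # CP500 and patch only the characters where CP1027 differs, e.g.
    # CP1027 = {chr(c): chr(c).encode("cp500")[0] for c in range(0x20, 0x7F)}

    def to_cp1027(text):
        # A KeyError here means an unmapped character: decide on a policy.
        return bytes(CP1027[ch] for ch in text)

    with open("data.csv", encoding="utf-8") as src, open("data.bin", "wb") as dst:
        dst.write(to_cp1027(src.read()))   # raw bytes, not hex text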
This is a separate answer, from a colleague; I don't have the ability to validate it, I'm afraid:
Transfer the file to the host in raw mode and just tag it as CCSID 1208 (UTF-8).
For USS, export _BPXK_AUTOCVT=ALL.
oedit/obrowse then handles it automatically.
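In commands, that suggestion looks roughly like this (equally unvalidated; the file name is a placeholder):

> chtag -tc 1208 /u/myuser/data.json   # tag the uploaded file as CCSID 1208 (UTF-8)
> export _BPXK_AUTOCVT=ALL             # enable automatic conversion in the USS shell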

Linux terminal output in a file

I am using a program on Linux (in this case a predictor for protein localization). The output/results are printed in the terminal, one after another. However, when I want to copy them into a simple text file, the terminal's scrollback is not long enough to hold all the results.
Instead of using smaller separate input files, I would like to write the output to a file directly. I tried to google this, but it didn't really help me further.
1. dmesg seems to be for system output stuff?
2. /var/log/syslog doesn't show the stuff I need, just other technical kernel messages.
3. I saw something with printf(), but didn't quite understand the mechanics or whether it was usable for my problem.
Could someone explain how to do this, or where to look for the right info?
I think I found out how to do it: add > fileToBeNamed.txt at the end of the command. Sorry :(
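That is indeed the standard answer. The common forms, with the program name as a placeholder:

> ./predictor input.txt > results.txt          # stdout to a file
> ./predictor input.txt > results.txt 2>&1     # stdout and stderr
> ./predictor input.txt | tee results.txt      # watch the output and save it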

Linux console output gets corrupted with ASCII characters

I am implementing a software project in C++ on Debian. When I execute the standalone binary on a Debian box, the program runs fine for at least 15-20 minutes, but after a while the console output becomes corrupted: most characters are replaced by strange symbols, while some display fine, so the output becomes almost unreadable. If I press CTRL+C and stop the execution, whatever I type on the command line is also displayed as weird symbols. If I reboot the box and start over, everything works fine for another 15-20 minutes, then the same thing happens. Does anybody have any idea what might be going on here? The Debian box has command-line support only, no GUI.
It sounds like you are printing some unwanted characters at some point. I think you may have a problem with the memory management of your strings. Try running your program under valgrind. You can follow this tutorial. You should expect warnings about reading from uninitialized memory.
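A typical invocation, assuming the binary is called ./myprog:

> valgrind --track-origins=yes ./myprog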
I don't think you're using "ASCII" properly here. Considering that ASCII is the range 0-127, there isn't much "weird" stuff in that range. I've seen this happen before; it's usually caused by raw bytes being interpreted as terminal control codes. I'm a bit fuzzy on this (I haven't done console work in a long while), but I'm pretty sure it's related to raw output of bytes that are actually outside the ASCII range.
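If that is what happened (a classic culprit is a stray 0x0E shift-out byte, which switches the terminal to its alternate character set), the terminal itself is what is scrambled, and either of these usually recovers it without a reboot:

> reset        # reinitialize the terminal
> stty sane    # restore sane line settings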
