Shell-spawning shellcode that is smaller than 295 bytes - shellcode

I want a shell-spawning shellcode that is smaller than 295 bytes. Metasploit usually produces shell-spawning shellcodes that are (far) larger than 300 bytes.
Or is there any way I can compress the shell-spawning shellcode?
Edit: I need this shellcode for Windows XP/2003.

Try using aleph1's shellcode. It should be small enough.
http://www.phrack.org/issues.html?issue=49&id=14#article

Did you try Shell-Storm?
http://www.shell-storm.org/shellcode/shellcode-windows.php
There are a few shell-spawning shellcodes, and other payloads as well. For example:
http://www.shell-storm.org/shellcode/files/shellcode-710.php
http://www.shell-storm.org/shellcode/files/shellcode-705.php
Or you could try a download-and-exec shellcode and put the Metasploit payload in an EXE file.

Related

Is base64 encoded file smaller than straight hexdump?

I was wondering if base64 provides any compression compared to a straight hex dump, that is, turning every byte into two characters from the range [0-9a-f].
Yes, it does, because base64 has more characters to work with: 64 instead of the 16 of a hex dump. This is one of the purposes of base64.
The Wikipedia article shows you the gain: If the binary data is n bytes, the base64 data is 4*ceil(n/3) bytes. (Compared to 2*n bytes for the hexdump.)
So, instead of a 100% overhead, you get roughly a 33% overhead.
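A quick way to check the numbers yourself, as a minimal Python sketch (not from the thread), using an arbitrary 295-byte blob:

import base64, binascii, math, os

payload = os.urandom(295)                 # stand-in for a 295-byte shellcode blob
hex_len = len(binascii.hexlify(payload))  # 2 * n
b64_len = len(base64.b64encode(payload))  # 4 * ceil(n / 3), '=' padding included
print(len(payload), hex_len, b64_len)     # 295 590 396
assert hex_len == 2 * len(payload)
assert b64_len == 4 * math.ceil(len(payload) / 3)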

Limit to the number of characters read on sys_read

I was playing around with some assembly programming and wrote some code to read 4096 bytes from stdin using the sys_read syscall. However, it reads only around 120 bytes from stdin.
Why does this happen? Is there any system level setting that I can change in order to read more bytes in one go? Is there any other way I can get around this limitation and force the program or sys_read to read in more bytes?
stdin may be line-buffered; do you happen to have a line feed at that position?
In general, however, read is allowed to return less than what you ask for. The solution is to read in a loop until you have all the bytes you need.
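The usual pattern looks like the Python sketch below (os.read is a thin wrapper over the read syscall); in assembly you would do the same thing by re-issuing sys_read with the buffer pointer and remaining count adjusted. The helper name read_exact is just made up for illustration:

import os, sys

def read_exact(fd, count):
    # Keep calling read() until we have `count` bytes or hit EOF;
    # a single read() is allowed to return fewer bytes than requested.
    chunks = []
    remaining = count
    while remaining > 0:
        chunk = os.read(fd, remaining)
        if not chunk:          # EOF
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

data = read_exact(sys.stdin.fileno(), 4096)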

VimL: Get extra KB from function that outputs file size

Right now I'm creating a plugin of sorts for Vim. It's meant to simply hold all kinds of utility functions to put in your statusline; here's the link: https://github.com/Greduan/vim-usefulstatusline
Right now I have this function: https://github.com/Greduan/vim-usefulstatusline/blob/master/autoload/usefulstatusline_filesize.vim
It simply outputs the file size, from bytes up to megabytes. Currently, if the file size reaches 1MB, for example, it outputs 1MB. This is fine, but I would also like it to output the extra bytes or KB beyond that.
For example, instead of outputting 1MB it would output 1MB-367KB, see what I mean? It would output the largest unit, and then the remainder in the next unit down. It's hard to explain.
So how would I modify the current function(s) to output the size this way?
Thanks for your help! Any of it is appreciated. :)
Who needs this? I doubt it would be convenient for anyone (especially with small remainders like 1MB + 3KB); 1.367MB is much better. I see in your code that you have neither MB (1000*1000 B) nor MiB (1024*1024 B); 1000*1024 bytes is very strange. Also, don't use getfsize: it is wrong for any non-file buffer, which you constantly see in plugins. Use line2byte(line('$')+1)-1 instead.
For 1.367MB you can just rewrite the humanize_bytes function in VimL if you are fine with depending on the +float feature.
Using integer arithmetic, you can get the remainder with
let kbytes_remainder = kbytes % 1000
And do switch to either MiB/KiB or MB/KB (M/K without the B is the common shorthand used by ls).
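The same integer arithmetic, sketched in Python for clarity; the function name and the MB-KB output format are made up for illustration and are not part of the plugin:

def humanize(size_in_bytes):
    # Decimal units for illustration: 1 KB = 1000 B, 1 MB = 1000 KB.
    kbytes = size_in_bytes // 1000
    mbytes = kbytes // 1000
    kbytes_remainder = kbytes % 1000
    if mbytes > 0:
        return '%dMB-%dKB' % (mbytes, kbytes_remainder)
    return '%dKB' % kbytes

print(humanize(1367000))   # 1MB-367KB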

How many bytes of memory is a tweet?

140 characters. How much memory would it take up?
I'm trying to calculate how many tweets my EC2 Large instance Mongo DB can hold.
Twitter uses UTF-8 encoded messages.
UTF-8 code points can be up to four octets long, making the maximum message size 140 x 4 = 560 8-bit bytes.
This is, of course, just for the raw messages, excluding storage overhead, indexing and other storage-related padding.
Edit: Twitter successfully let me post the message:
™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™™
Yes, that's 140 trademark symbols, which are three octets each in UTF-8.
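A worked check of that arithmetic, as a small Python sketch (not from the original answer):

tweet = "\u2122" * 140                # 140 TRADE MARK SIGN characters
print(len(tweet))                     # 140 characters
print(len(tweet.encode("utf-8")))     # 420 bytes: 3 octets each in UTF-8
print(140 * 4)                        # 560 bytes: worst case at 4 octets per character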
Back in September, an engineer at Twitter gave a presentation that suggested it's about 200 bytes per tweet.
Of course you still have to account for overhead for your own metadata and the database itself, but 200 bytes/record is probably a good place to start.
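If you take the ~200 bytes/tweet figure at face value, the capacity estimate is just division. A sketch, where the 7.5 GB is only an assumed amount of usable memory; plug in whatever your instance actually gives you:

bytes_per_tweet = 200                        # figure from the presentation above
usable_bytes = 7.5 * 1024**3                 # assumed usable memory, replace with your own number
print(int(usable_bytes // bytes_per_tweet))  # roughly 40 million tweets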
Typically it's two bytes per character if you're storing Unicode as UTF-16 (UTF-8 uses one to four bytes per character), so that would mean 280 bytes max per tweet.
Probably 284 bytes in memory (4-byte length prefix + length*2). Inside the DB I cannot say, but probably 280 if the DB is UTF-8; you could add some bytes of overhead for metadata, etc.
Potentially of interest:
http://mehack.com/map-of-a-twitter-status-object
Anatomy of a Twitter Status Object
Also more about twitter character encoding:
http://dev.twitter.com/pages/counting_characters
It's technically stored as UTF-8, and in reality the slide deck from a Twitter engineer here http://www.slideshare.net/raffikrikorian/twitter-by-the-numbers gives the real stat about it:
140 characters, ~200 bytes

Is there any good point to using a 500-byte buffer for a simple upper-case converter?

I am reading Programming from the Ground Up.
In chapter 5,
the program uses a 500-byte buffer to convert characters that are one byte long.
Shouldn't it have to use a double loop?
loop 1: read from the file, 500 bytes at a time.
loop 2: process the 500 bytes, maybe a byte at a time.
I think this makes the program a little more complicated.
If I use a one-byte buffer for the conversion,
nothing is needed but a single loop:
loop 1: read 1 byte and process it.
Is there any good point to using a 500-byte buffer for a simple upper-case converter?
My development environment is x86, Linux, assembly, AT&T syntax.
The only reason to consider doing it 500 (or more) bytes at a time is that it may reduce the number of calls into the library and/or operating-system services you're using for I/O. I suggest you try it both ways and measure the performance difference for yourself. Say your two versions are compiled to executables named along the lines of uppercase.version; you can get a report on the CPU and elapsed time of each run by typing the following at the shell prompt:
time uppercase.byte_by_byte < input > output
time uppercase.500_byte_blocks < input > output
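For comparison, here is the two-loop shape as a Python sketch (the book's program is in assembly, but the structure is the same): the outer loop pays the read/write syscall cost once per 500-byte block, and the inner loop converts each byte:

import os, sys

BUF_SIZE = 500                           # buffer size from the book's example
in_fd = sys.stdin.fileno()
out_fd = sys.stdout.fileno()

while True:                              # loop 1: one read syscall per block
    chunk = os.read(in_fd, BUF_SIZE)
    if not chunk:                        # EOF
        break
    upper = bytes(                       # loop 2: one pass over the block
        b - 32 if 97 <= b <= 122 else b  # 'a'..'z' -> 'A'..'Z'
        for b in chunk
    )
    os.write(out_fd, upper)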
