So I was following a tutorial about buffer overflow with the following code:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    volatile int modified;
    char buffer[64];

    modified = 0;
    gets(buffer);

    if (modified != 0) {
        printf("you have changed the 'modified' variable\n");
    } else {
        printf("Try again?\n");
    }
}
I then compile it with gcc, and beforehand I run sudo sysctl -w kernel.randomize_va_space=0 to disable address space layout randomization and make the stack smashing (buffer overflow) exploit reproducible:
gcc protostar.c -g -z execstack -fno-stack-protector -o protostar
-g enables debugging in gdb ('list main')
-z execstack makes the stack executable and -fno-stack-protector removes the stack canary, i.e. the stack protections
and then execute it:
python -c 'print "A"*76' | ./protostar
Try again?
python -c 'print "A"*77' | ./protostar
you have changed the 'modified' variable
So I do not understand why the buffer overflow occurs at 77 characters when it should have been 65; that is a 12-byte difference. Can anyone give a clear explanation?
Also it remains this way from 77 to 87:
python -c 'print "A"*87' | ./protostar
you have changed the 'modified' variable
And from 88 it adds a segfault:
python -c 'print "A"*88' | ./protostar
you have changed the 'modified' variable
Segmentation fault (core dumped)
Regards
To fully understand what's happening, it's first important to make note of how your program is laying out memory.
From your comment, you have that for this particular run, memory for buffer starts at 0x7fffffffdf10 and modified starts at 0x7fffffffdf5c (with randomize_va_space set to 0 these addresses should stay consistent across runs, though I'm not completely sure).
So you have something like this:
0x7fffffffdf10            0x7fffffffdf50      0x7fffffffdf5c
↓                         ↓                   ↓
(64 byte buffer)..........(some 12 bytes).....(modified)....
Essentially, you have the 64 character buffer, then when that ends, there's 12 bytes that are used for some other stack variable (likely 4 bytes argc and 8 bytes for argv), and then modified comes after, precisely starting 64+12 = 76 bytes after the buffer starts.
Therefore, when you write between 65 and 76 characters into the 64 byte buffer, it goes past and starts writing into those 12 bytes that are in-between the buffer and modified. When you start writing the 77th character, it starts overwriting what's in modified which causes you to see the "you have changed the 'modified' variable" message.
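If you want to double-check the offset on your own build without gdb, a small sketch along these lines (the tutorial program with the gets call replaced by a few printf calls; the exact distance will vary with compiler, flags, and architecture) prints where the two variables live:
#include <stdio.h>

int main(void)
{
    volatile int modified = 0;
    char buffer[64];

    /* The difference between these two addresses is how many bytes you must
       write into buffer before the overflow reaches 'modified'. Subtracting
       pointers to different objects is technically not well-defined, but it
       is fine for a quick experiment like this. */
    printf("buffer   starts at %p\n", (void *)buffer);
    printf("modified starts at %p\n", (void *)&modified);
    printf("distance: %ld bytes\n", (long)((char *)&modified - buffer));
    return 0;
}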
You also asked: "why does it work if I go up to 87 and then at 88 there's a segfault?" The answer is that this is undefined behavior: as soon as you start writing into invalid memory and the kernel notices it, it will immediately kill your process, because you are trying to read/write memory you don't have access to.
Note that you should almost never use gets in practice, and this is a big reason why: you don't know exactly how many bytes you will be reading, so there is always a chance of overwriting adjacent memory. Also note that the behavior you're seeing is not the same behavior I see on my machine when I run it. This is normal, because it's undefined behavior: there are no guarantees about what will happen when you run it. On my machine, modified actually comes before buffer in memory, so I never see the modified variable get overwritten. I think this is a good learning example of why undefined behavior like this is so unpredictable.
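As an aside, if you ever need to read a line safely in real code, a bounded read with fgets is the usual replacement for gets; a minimal sketch:
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buffer[64];

    /* fgets writes at most sizeof(buffer) bytes, including the terminating
       '\0', so it can never run past the end of the array. */
    if (fgets(buffer, sizeof(buffer), stdin) != NULL) {
        buffer[strcspn(buffer, "\n")] = '\0';  /* strip the trailing newline, if any */
        printf("read: %s\n", buffer);
    }
    return 0;
}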
In a bash script, I try to read lines from standard input, using the built-in read command after setting IFS=$'\n'. The lines are truncated at a 4095-character limit if I paste input into the read. This limitation seems to come from reading from the terminal, because this worked perfectly fine:
fill=
for i in $(seq 1 94); do fill="${fill}x"; done
for i in $(seq 1 100); do printf "%04d00$fill" $i; done | (read line; echo $line)
I experience the same behavior with a Python script (it did not accept input longer than 4095 characters from the terminal, but it did from a pipe):
#!/usr/bin/python
from sys import stdin
line = stdin.readline()
print('%s' % line)
Even a C program behaves the same, using read(2):
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    char buf[32768];
    int sz = read(0, buf, sizeof(buf) - 1);
    buf[sz] = '\0';
    printf("READ LINE: [%s]\n", buf);
    return 0;
}
In all cases, I cannot enter more than about 4095 characters. The input prompt simply stops accepting characters.
Question-1: Is there a way to interactively read more than 4095 characters from the terminal on Linux systems (at least Ubuntu 10.04 and 13.04)?
Question-2: Where does this limitation come from?
Systems affected: I noticed this limitation in Ubuntu 10.04/x86 and 13.04/x86, but Cygwin (a recent version, at least) does not truncate even at over 10000 characters (I did not test further since I need to get this script working in Ubuntu). Terminals used: virtual console, KDE Konsole (Ubuntu 13.04), and gnome-terminal (Ubuntu 10.04).
Please refer to the termios(3) manual page, under the section "Canonical and noncanonical mode".
Typically the terminal (standard input) is in canonical mode; in this mode the kernel buffers the input line before returning it to the application. The hard-coded limit for Linux (N_TTY_BUF_SIZE, defined in ${linux_source_path}/include/linux/tty.h) is set to 4096, allowing input of 4095 characters not counting the ending newline. You can also have a look at the file ${linux_source_path}/drivers/tty/n_tty.c, function n_tty_receive_buf_common(), and the comment above it.
In noncanonical mode there is by default no buffering by the kernel, and the read(2) system call returns as soon as a single character of input is available (a key is pressed). You can manipulate the terminal settings to read a specified number of characters or to set a time-out for noncanonical mode, but even then the hard-coded limit is 4095 characters per the termios(3) manual page (and the comment above the above-mentioned n_tty_receive_buf_common()).
The bash read builtin still works in non-canonical mode, as can be demonstrated by the following:
IFS=$'\n' # Allow spaces and other white spaces.
stty -icanon # Disable canonical mode.
read line # Now we can read without inhibitions set by terminal.
stty icanon # Re-enable canonical mode (assuming it was enabled to begin with).
After adding stty -icanon, you can paste a string longer than 4096 characters and read it successfully using the bash built-in read command (I successfully tried more than 10000 characters).
If you put this in a file, i.e. make it a script, you can use strace to see the system calls it makes; you will see read(2) called multiple times, each time returning a single character as you type input to it.
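If you prefer to make the switch from inside a C program rather than with stty, the usual termios(3) pattern looks roughly like the following (a minimal sketch with error handling omitted, mirroring the read(2) test program from the question):
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    struct termios saved, raw;

    /* Save the current terminal settings and disable canonical mode. */
    tcgetattr(STDIN_FILENO, &saved);
    raw = saved;
    raw.c_lflag &= ~ICANON;   /* no line assembly/buffering in the kernel */
    raw.c_cc[VMIN] = 1;       /* read() may return after a single byte */
    raw.c_cc[VTIME] = 0;      /* no inter-byte timeout */
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);

    char buf[32768];
    ssize_t n = read(STDIN_FILENO, buf, sizeof(buf) - 1);
    if (n >= 0) {
        buf[n] = '\0';
        printf("READ %zd BYTES\n", n);
    }

    tcsetattr(STDIN_FILENO, TCSANOW, &saved);   /* restore the terminal */
    return 0;
}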
I do not have a workaround for you, but I can answer question 2.
In Linux, PIPE_BUF is set to 4096 (in limits.h). If you do a write of more than 4096 bytes to a pipe, it will be truncated.
From /usr/include/linux/limits.h:
#ifndef _LINUX_LIMITS_H
#define _LINUX_LIMITS_H
#define NR_OPEN 1024
#define NGROUPS_MAX 65536 /* supplemental group IDs are available */
#define ARG_MAX 131072 /* # bytes of args + environ for exec() */
#define LINK_MAX 127 /* # links a file may have */
#define MAX_CANON 255 /* size of the canonical input queue */
#define MAX_INPUT 255 /* size of the type-ahead buffer */
#define NAME_MAX 255 /* # chars in a file name */
#define PATH_MAX 4096 /* # chars in a path name including nul */
#define PIPE_BUF 4096 /* # bytes in atomic write to a pipe */
#define XATTR_NAME_MAX 255 /* # chars in an extended attribute name */
#define XATTR_SIZE_MAX 65536 /* size of an extended attribute value (64k) */
#define XATTR_LIST_MAX 65536 /* size of extended attribute namelist (64k) */
#define RTSIG_MAX 32
#endif
The problem is definitely not read(), as it can read up to any valid size you pass it. The problem comes from the heap memory or the pipe size, as they are the only possible limiting factors here.
Environment:
Ubuntu 10.04 LTS
Gnome Desktop v2.30.2
gcc/g++ 4.4.3
libreadline 6.1
I was building an application that reads multiple lines of input and processes them, and I found that if the input is large, readline skips several bytes of characters. To make sure, I made a simple program like this:
#include <stdio.h>
#include <readline/readline.h>
int main() {
    while (1) {
        char *p = readline("> ");
        if (!p) break;
        fprintf(stderr, "%s\n", p);
    }
    return 0;
}
and generated 20000 lines of input, which consists of 120000 bytes.
seq -f "%05g" 1 20000 >gen.txt
and ran the test program on gnome terminal and performed copy-and-paste of the content of gen.txt:
g++ test.cpp -lreadline
./a.out 2>out.txt
[copy-and-paste the content of gen.txt]
I could see that out.txt was smaller than gen.txt, and that many bytes were omitted.
wc -c out.txt
119966 out.txt
I want to know which component is flawed, gnome terminal or readline, and how many bytes of clipboard content readline and gnome terminal guarantee can be copy-and-pasted without problems.
If I use a command like this one:
./program >> a.txt &
, and the program is a long-running one, then I can only see the output once the program has ended. That means I have no way of knowing whether the computation is going well until it actually stops computing. I want to be able to read the redirected output in the file while the program is running.
What I want is similar to opening a file, appending to it, then closing it again after every write. If the file is only closed at the end of the program, then no data can be read from it until the program ends. The only redirection I know of behaves like closing the file at the end of the program.
You can test it with this little python script. The language doesn't matter. Any program that writes to standard output has the same problem.
l = range(0, 100000)
for i in l:
    if i % 1000 == 0:
        print i
    for j in l:
        s = i + j
One can run this with:
python program.py >> a.txt &
Then cat a.txt; you will only see results once the script is done computing.
From the stdout manual page:
The stream stderr is unbuffered. The stream stdout is line-buffered when it
points to a terminal. Partial lines will not appear until fflush(3) or
exit(3) is called, or a newline is printed.
Bottom line: Unless the output is a terminal, your program will have its standard output in fully buffered mode by default. This essentially means that it will output data in large-ish blocks, rather than line-by-line, let alone character-by-character.
Ways to work around this:
Fix your program: If you need real-time output, you need to fix your program. In C you can use fflush(stdout) after each output statement, or setvbuf() to change the buffering mode of the standard output (see the C sketch after this list). For Python there is sys.stdout.flush(), or you can run the interpreter with the -u option.
Use a utility that can record from a PTY, rather than outright stdout redirections. GNU Screen can do this for you:
screen -d -m -L python test.py
would be a start. This will log the output of your program to a file called screenlog.0 (or similar) in your current directory with a default delay of 10 seconds, and you can use screen to connect to the session where your command is running to provide input or terminate it. The delay and the name of the logfile can be changed in a configuration file or manually once you connect to the background session.
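The sketch referred to under the first workaround: in C, forcing line-buffered standard output looks roughly like this. It is a minimal example mirroring the Python loop from the question, and setvbuf must be called before the first write to stdout:
#include <stdio.h>

int main(void)
{
    /* Keep stdout line-buffered even when it is redirected to a file or a
       pipe; alternatively, call fflush(stdout) after each printf. */
    setvbuf(stdout, NULL, _IOLBF, BUFSIZ);

    for (long i = 0; i < 100000; i++) {
        if (i % 1000 == 0)
            printf("%ld\n", i);   /* now appears in the output file immediately */
    }
    return 0;
}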
EDIT:
On most Linux systems there is a third workaround: you can use the LD_PRELOAD variable and a preloaded library to override selected functions of the C library and use them to set the stdout buffering mode when those functions are called by your program. This method may work, but it has a number of disadvantages:
It won't work at all on static executables
It's fragile and rather ugly.
It won't work at all with SUID executables - the dynamic loader will refuse to read the LD_PRELOAD variable when loading such executables for security reasons.
It's fragile and rather ugly.
It requires that you find and override a library function that is called by your program after it initially sets the stdout buffering mode and preferably before any output. getenv() is a good choice for many programs, but not all. You may have to override common I/O functions such as printf() or fwrite() - if push comes to shove you may just have to override all functions that control the buffering mode and introduce a special condition for stdout.
It's fragile and rather ugly.
It's hard to ensure that there are no unwelcome side-effects. To do this right you'd have to ensure that only stdout is affected and that your overrides will not crash the rest of the program if e.g. stdout is closed.
Did I mention that it's fragile and rather ugly?
That said, the process is relatively simple. You put the replacement functions in a C file, e.g. linebufferedstdout.c:
#define _GNU_SOURCE
#include <stdlib.h>
#include <stdio.h>
#include <dlfcn.h>

/* Override getenv(): the first time the program calls it, look up the real
   getenv with dlsym() and switch stdout to line-buffered mode. */
char *getenv(const char *s) {
    static char *(*getenv_real)(const char *s) = NULL;
    if (getenv_real == NULL) {
        getenv_real = dlsym(RTLD_NEXT, "getenv");
        setlinebuf(stdout);
    }
    return getenv_real(s);
}
Then you compile that file as a shared object:
gcc -O2 -o linebufferedstdout.so -fpic -shared linebufferedstdout.c -ldl -lc
Then you set the LD_PRELOAD variable to load it along with your program:
$ LD_PRELOAD=./linebufferedstdout.so python test.py | tee -a test.out
0
1000
2000
3000
4000
If you are lucky, your problem will be solved with no unfortunate side-effects.
You can set the LD_PRELOAD library in the shell, if necessary, or even specify that library system-wide (definitely NOT recommended) in /etc/ld.so.preload.
If you're trying to modify the behavior of an existing program try stdbuf (part of coreutils starting with version 7.5 apparently).
This buffers stdout up to a line:
stdbuf -oL command > output
This disables stdout buffering altogether:
stdbuf -o0 command > output
Have you considered piping to tee?
./program | tee a.txt
However, even tee won't work if "program" doesn't write anything to stdout until it is done. So, the effectiveness depends a lot on how your program behaves.
If the program writes to a file, you can read it while it is being written using tail -f a.txt.
Your problem is that most programs check whether their output is a terminal or not. If the output is a terminal, then output is buffered one line at a time (so each line appears as it is generated), but if the output is not a terminal, then the output is buffered in larger chunks (4096 bytes at a time is typical). This behaviour is normal in the C library (when using printf, for example) and in the C++ library (when using cout, for example), so any program written in C or C++ will do this.
Most other scripting languages (like perl, python, etc.) are written in C or C++ and so they have exactly the same buffering behaviour.
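The terminal check itself is essentially an isatty(3) call on the output descriptor; a simplified sketch of the decision (not the actual C library code) looks like this:
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Roughly what the C library decides the first time stdout is used:
       line-buffered for a terminal, fully buffered (block-sized) otherwise.
       The report goes to stderr so it shows up immediately either way. */
    if (isatty(STDOUT_FILENO))
        fprintf(stderr, "stdout is a terminal: line-buffered\n");
    else
        fprintf(stderr, "stdout is redirected: fully buffered\n");
    return 0;
}
Run it once directly and once piped through cat to see both branches.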
The answer above (using LD_PRELOAD) can be made to work on perl or python scripts, since the interpreters are themselves written in C.
The unbuffer command from the expect package does exactly what you are looking for.
$ sudo apt-get install expect
$ unbuffer python program.py | cat -
<watch output immediately show up here>
A process is considered to have completed correctly in Linux if its exit status was 0.
I've seen that segmentation faults often result in an exit status of 11, though I don't know if this is simply the convention where I work (the applications that failed like that have all been internal) or a standard.
Are there standard exit codes for processes in Linux?
Part 1: Advanced Bash Scripting Guide
As always, the Advanced Bash Scripting Guide has great information:
(This was linked in another answer, but to a non-canonical URL.)
1: Catchall for general errors
2: Misuse of shell builtins (according to Bash documentation)
126: Command invoked cannot execute
127: "command not found"
128: Invalid argument to exit
128+n: Fatal error signal "n"
255: Exit status out of range (exit takes only integer args in the range 0 - 255)
Part 2: sysexits.h
The ABSG references sysexits.h.
On Linux:
$ find /usr -name sysexits.h
/usr/include/sysexits.h
$ cat /usr/include/sysexits.h
/*
* Copyright (c) 1987, 1993
* The Regents of the University of California. All rights reserved.
(A whole bunch of text left out.)
#define EX_OK 0 /* successful termination */
#define EX__BASE 64 /* base value for error messages */
#define EX_USAGE 64 /* command line usage error */
#define EX_DATAERR 65 /* data format error */
#define EX_NOINPUT 66 /* cannot open input */
#define EX_NOUSER 67 /* addressee unknown */
#define EX_NOHOST 68 /* host name unknown */
#define EX_UNAVAILABLE 69 /* service unavailable */
#define EX_SOFTWARE 70 /* internal software error */
#define EX_OSERR 71 /* system error (e.g., can't fork) */
#define EX_OSFILE 72 /* critical OS file missing */
#define EX_CANTCREAT 73 /* can't create (user) output file */
#define EX_IOERR 74 /* input/output error */
#define EX_TEMPFAIL 75 /* temp failure; user is invited to retry */
#define EX_PROTOCOL 76 /* remote error in protocol */
#define EX_NOPERM 77 /* permission denied */
#define EX_CONFIG 78 /* configuration error */
#define EX__MAX 78 /* maximum listed value */
8 bits of the return code and 8 bits of the number of the killing signal are mixed into a single value on the return from wait(2) & co.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <signal.h>
int main() {
    int status;
    pid_t child = fork();

    if (child <= 0)
        exit(42);
    waitpid(child, &status, 0);
    if (WIFEXITED(status))
        printf("first child exited with %u\n", WEXITSTATUS(status));
    /* prints: "first child exited with 42" */

    child = fork();
    if (child <= 0)
        kill(getpid(), SIGSEGV);
    waitpid(child, &status, 0);
    if (WIFSIGNALED(status))
        printf("second child died with %u\n", WTERMSIG(status));
    /* prints: "second child died with 11" */
}
How are you determining the exit status? Traditionally, the shell only stores an 8-bit return code, but sets the high bit if the process was abnormally terminated.
$ sh -c 'exit 42'; echo $?
42
$ sh -c 'kill -SEGV $$'; echo $?
Segmentation fault
139
$ expr 139 - 128
11
If you're seeing anything other than this, then the program probably has a SIGSEGV signal handler which then calls exit normally, so it isn't actually getting killed by the signal. (Programs can choose to handle any signals aside from SIGKILL and SIGSTOP.)
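For illustration, a hypothetical program like the following shows that pattern: the handler turns the crash into a normal exit, so the shell reports a plain status of 11 rather than a death by signal 11 (a minimal sketch):
#include <signal.h>
#include <unistd.h>

/* Turn a segmentation fault into an ordinary exit with status 11. */
static void on_segv(int sig)
{
    _exit(sig);   /* async-signal-safe; the process is not killed by the signal */
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    volatile int *p = NULL;
    *p = 1;       /* deliberate fault: the handler runs and exits with status 11 */
    return 0;
}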
None of the older answers describe exit status 2 correctly. Contrary to what they claim, status 2 is what your command line utilities actually return when called improperly. (Yes, an answer can be nine years old, have hundreds of upvotes, and still be wrong.)
Here is the real, long-standing exit status convention for normal termination, i.e. not by signal:
Exit status 0: success
Exit status 1: "failure", as defined by the program
Exit status 2: command line usage error
For example, diff returns 0 if the files it compares are identical, and 1 if they differ. By long-standing convention, unix programs return exit status 2 when called incorrectly (unknown options, wrong number of arguments, etc.) For example, diff -N, grep -Y or diff a b c will all result in $? being set to 2. This is and has been the practice since the early days of Unix in the 1970s.
The accepted answer explains what happens when a command is terminated by a signal. In brief, termination due to an uncaught signal results in exit status 128+<signal number>. E.g., termination by SIGINT (signal 2) results in exit status 130.
Notes
Several answers define exit status 2 as "Misuse of bash builtins". This applies only when bash (or a bash script) exits with status 2. Consider it a special case of incorrect usage error.
In sysexits.h, mentioned in the most popular answer, exit status EX_USAGE ("command line usage error") is defined to be 64. But this does not reflect reality: I am not aware of any common Unix utility that returns 64 on incorrect invocation (examples welcome). Careful reading of the source code reveals that sysexits.h is aspirational, rather than a reflection of true usage:
* This include file attempts to categorize possible error
* exit statuses for system programs, notably delivermail
* and the Berkeley network.
* Error numbers begin at EX__BASE [64] to reduce the possibility of
* clashing with other exit statuses that random programs may
* already return.
In other words, these definitions do not reflect the common practice at the time (1993) but were intentionally incompatible with it. More's the pity.
'1': Catch-all for general errors
'2': Misuse of shell builtins (according to Bash documentation)
'126': Command invoked cannot execute
'127': "command not found"
'128': Invalid argument to exit
'128+n': Fatal error signal "n"
'130': Script terminated by Ctrl + C
'255': Exit status out of range
This is for Bash. However, for other applications, there are different exit codes.
There are no standard exit codes, aside from 0 meaning success. Non-zero doesn't necessarily mean failure either.
Header file stdlib.h does define EXIT_FAILURE as 1 and EXIT_SUCCESS as 0, but that's about it.
The 11 on segmentation fault is interesting, as 11 is the signal number that the kernel uses to kill the process in the event of a segmentation fault. There is likely some mechanism, either in the kernel or in the shell, that translates that into the exit code.
Header file sysexits.h has a list of standard exit codes. It seems to date back to at least 1993 and some big projects like Postfix use it, so I imagine it's the way to go.
From the OpenBSD man page:
According to style(9), it is not good practice to call exit(3) with arbitrary values to indicate a failure condition when ending a program. Instead, the predefined exit codes from sysexits should be used, so the caller of the process can get a rough estimation about the failure class without looking up the source code.
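Using these codes from C is just a matter of including the header; a minimal sketch of a hypothetical utility:
#include <stdio.h>
#include <sysexits.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return EX_USAGE;    /* 64: command line usage error */
    }
    /* ... do something with argv[1] ... */
    return EX_OK;           /* 0: successful termination */
}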
To a first approximation, 0 is success, non-zero is failure, with 1 being general failure, and anything larger than one being a specific failure. Aside from the trivial exceptions of false and test, which are both designed to give 1 for success, there's a few other exceptions I found.
More realistically, 0 means success or maybe failure, 1 means general failure or maybe success, 2 means general failure if 1 and 0 are both used for success, but maybe success as well.
The diff command gives 0 if files compared are identical, 1 if they differ, and 2 if binaries are different. 2 also means failure. The less command gives 1 for failure unless you fail to supply an argument, in which case, it exits 0 despite failing.
The more command and the spell command give 1 for failure, unless the failure is a result of permission denied, nonexistent file, or attempt to read a directory. In any of these cases, they exit 0 despite failing.
Then the expr command gives 1 for success unless the output is the empty string or zero, in which case, 0 is success. 2 and 3 are failure.
Then there's cases where success or failure is ambiguous. When grep fails to find a pattern, it exits 1, but it exits 2 for a genuine failure (like permission denied). klist also exits 1 when it fails to find a ticket, although this isn't really any more of a failure than when grep doesn't find a pattern, or when you ls an empty directory.
So, unfortunately, the Unix powers that be don't seem to enforce any logical set of rules, even on very commonly used executables.
Programs report a 16-bit status word to their parent via wait(2). If the program was killed by a signal, the low-order byte contains the signal number (plus a core-dump flag); otherwise the high-order byte contains the exit status returned by the programmer.
How that status word is mapped to the $? variable is then up to the shell. Bash reports the program's exit status for a normal exit, and uses 128 + (signal number) to indicate death by a signal.
The only "standard" convention for programs is 0 for success, non-zero for error. Another convention used is to return errno on error.
Standard Unix exit codes are defined by sysexits.h, as David mentioned.
The same exit codes are used by portable libraries such as Poco - here is a list of them:
Class Poco::Util::Application, ExitCode
A signal 11 is a SIGSEGV (segmentation violation) signal, which is different from a return code. This signal is generated by the kernel in response to a bad page access, and it causes the program to terminate. A list of signals can be found in the signal man page (run "man signal").
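If you want to map a signal number to its description programmatically, strsignal(3) does it (a minimal sketch; on glibc it is declared in <string.h>):
#include <stdio.h>
#include <string.h>
#include <signal.h>

int main(void)
{
    /* Print the human-readable description of a few common signals. */
    printf("signal %d: %s\n", SIGSEGV, strsignal(SIGSEGV));   /* 11 */
    printf("signal %d: %s\n", SIGINT,  strsignal(SIGINT));    /*  2 */
    printf("signal %d: %s\n", SIGKILL, strsignal(SIGKILL));   /*  9 */
    return 0;
}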
When a Linux program returns 0, it means success. Anything else means failure. Each program has its own exit codes, so it would be quite long to list them all...!
About the 11 error code, it's indeed the segmentation fault number, mostly meaning that the program accessed a memory location that was not assigned to it.
Some are convention, but some other reserved ones are part of POSIX standard.
126 -- A file to be executed was found, but it was not an executable utility.
127 -- A utility to be executed was not found.
>128 -- A command was interrupted by a signal.
See the section RATIONALE of man 1p exit.
For my Java apps with very long classpaths, I cannot see the main class specified near the end of the arg list when using ps. I think this stems from my Ubuntu system's size limit on /proc/pid/cmdline. How can I increase this limit?
For looking at Java processes jps is very useful.
This will give you the main class and jvm args:
jps -vl | grep <pid>
You can't change this dynamically; the limit is hard-coded in the kernel to PAGE_SIZE in fs/proc/base.c:
int res = 0;
unsigned int len;
struct mm_struct *mm = get_task_mm(task);
if (!mm)
        goto out;
if (!mm->arg_end)
        goto out_mm;    /* Shh! No looking before we're done */

len = mm->arg_end - mm->arg_start;

if (len > PAGE_SIZE)
        len = PAGE_SIZE;

res = access_process_vm(task, mm->arg_start, buffer, len, 0);
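For reference, /proc/<pid>/cmdline stores the arguments separated by NUL bytes; a small hypothetical reader for the calling process (a minimal sketch) looks like this:
#include <stdio.h>

int main(void)
{
    /* /proc/self/cmdline stores argv[0], argv[1], ... separated by '\0';
       on older kernels at most PAGE_SIZE (4096) bytes are exposed. */
    FILE *f = fopen("/proc/self/cmdline", "r");
    if (!f)
        return 1;

    char buf[4096];
    size_t n = fread(buf, 1, sizeof(buf), f);
    fclose(f);

    for (size_t i = 0; i < n; i++)
        putchar(buf[i] ? buf[i] : ' ');   /* replace NULs with spaces */
    putchar('\n');
    return 0;
}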
The way I temporarily get around the 4096-character command line argument limitation of ps (or rather /proc/PID/cmdline) is by using a small script to replace the java command.
During development, I always use an unpacked JDK version from SUN and never use the installed JRE or JDK of the OS no matter if Linux or Windows (eg. download the bin versus the rpm.bin).
I do not recommend changing the script for your default Java installation (e.g. because it might break updates or get overwritten or create problems or ...)
So assuming the java command is in /x/jdks/jdk1.6.0_16_x32/bin/java
first move the actual binary away:
mv /x/jdks/jdk1.6.0_16_x32/bin/java /x/jdks/jdk1.6.0_16_x32/bin/java.orig
then create a script /x/jdks/jdk1.6.0_16_x32/bin/java like e.g.:
#!/bin/bash
# Log the full command line for later inspection, then run the real binary.
echo "$@" > /tmp/java.$$.cmdline
/x/jdks/jdk1.6.0_16_x32/bin/java.orig "$@"
and then make the script runnable
chmod a+x /x/jdks/jdk1.6.0_16_x32/bin/java
In case of copy and pasting the above, you should make sure that there are no extra spaces in /x/jdks/jdk1.6.0_16_x32/bin/java and that #!/bin/bash is the first line.
The complete command line ends up in e.g. /tmp/java.26835.cmdline where 26835 is the PID of the shell script.
I think there is also some shell limit on the total length of the command line arguments; I cannot remember exactly, but it was possibly 64K characters.
You can change the script to remove the command line text from /tmp/java.PROCESS_ID.cmdline at the end.
Once I have the command line, I always move the script to something like "java.script" and copy (cp -a) the actual binary java.orig back to java. I only use the script when I hit the 4K limit.
There might be problems with escaped characters and maybe even spaces in paths or such, but it works fine for me.
You can use jconsole to get access to the original command line without all the length limits.
It is possible to use newer Linux distributions where this limit has been removed, for example RHEL 6.8 or later:
"The /proc/pid/cmdline file length limit for the ps command was previously hard-coded in the kernel to 4096 characters. This update makes sure the length of /proc/pid/cmdline is unlimited, which is especially useful for listing processes with long command line arguments. (BZ#1100069)"
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.8_Release_Notes/new_features_kernel.html
For Java based programs where you are just interested in inspecting the command line args your main class got, you can run:
jps -m
I'm pretty sure that if you're seeing the arguments truncated in /proc/$pid/cmdline, then you're actually exceeding the maximum argument length supported by the OS. As far as I can tell, in Linux the size is limited to the memory page size. See "ps ww" length restriction for reference.
The only way to get around that would be to recompile the kernel. If you're interested in going that far to resolve this then you may find this post useful: "Argument list too long": Beyond Arguments and Limitations
Additional reference:
ARG_MAX, maximum length of arguments for a new process
Perhaps the 'w' option to ps is what you want. Add two 'w's for greater output; it tells ps to ignore the line width of the terminal.