This question may not be specific to Emacs, but to any development environment that uses a console for its debugging process. Here is the problem. I use Eshell to run the application we are developing. It's a J2ME application, and for debugging we just use System.out.println(). Now, suppose I want only text that starts with Eko: to be displayed in the console (interactively). Is that possible?
I installed Cygwin in my Windows environment and tried to grep the output like this:
run | grep Eko:
It does filter the output down to lines beginning with Eko:, but it isn't interactive; the output is suppressed until the application quits. Well, that's useless anyway.
Is it possible to do this without touching the application code itself?
I'm tagging this linux as well, because maybe some folks on Linux know the answer.
Many thanks!
The short: Try adding --line-buffered to your grep command.
The long: I assume that your application flushes its output stream with every System.out.println(), and that grep has the lines available to read immediately, but is choosing to buffer its output until it has 'enough' saved up to make a write worthwhile. (This is typically 4k or 8k of data, which could be several hundred lines, depending upon your line length.)
This buffering makes great sense when the output is going to another program in the pipeline; reducing needless context switches is a great way to improve program throughput.
But if your program prints slowly enough that the buffer doesn't fill fast enough for 'real time' output, then switching to line-buffered output should fix it.
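A minimal sketch, assuming run launches the app from Eshell and a GNU grep (such as Cygwin's) is on the PATH; the ^ anchor keeps grep from matching Eko: in the middle of a line:
run | grep --line-buffered '^Eko:'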
I know that when I type time ./myProgram <input.txt >output.txt on the terminal, I can conveniently see the runtime once my program finishes running. There must be a similar command to see the memory usage of the program, right? Please inform me concisely. Thanks!
One way to tell how much memory the terminal is using at any one time is to run top and:
find the line that corresponds to the terminal's process.
find the column that corresponds to memory usage (man top can likely tell you which one; I'm on a Windows machine right now and can't easily look it up.)
Perhaps try this command:
top | grep terminal
('terminal' can be replaced with 'xterm' or whatever terminal program you use)
Again, this will tell you the memory usage of a program at a particular time.
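For a one-shot, non-interactive reading, top's batch mode can be combined with grep; a sketch, assuming the process you care about is named myProgram:
top -b -n 1 | grep myProgram
The %MEM and RES columns of the matching line show the memory usage at that instant.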
This may be a stupid question, but is there a way to write to the linux console from within a driver without using printk (i.e. syslog)?
For instance, working in a Linux driver, I need to output a single character as an event happens. I'd like to output 'w' as a write event starts, and 'W' when it finishes. This happens frequently, so sending it through syslog isn't ideal.
Ideally, it would be great if I could just do the equivalent of printf("W") or putc('W') and have that simply go out the default console.
TIA
Mike
Writing to the console isn't something you want to do frequently. If printk is too expensive for you, you shouldn't try the console anyway.
But if you insist:
Within printk, printing to the console is handled by call_console_drivers. This function finds the console (registered via register_console) and calls it to print the data. The actual driver depends on what console you're using. The VGA screen is one option, the serial port is another (depending on boot parameters).
You can try to use the functions in console.h to interact with the console directly. I don't know how hard it would be to make it work.
Unfortunately no, as there is no concept of a "console" in the kernel (the console is a userspace process). You may want to try other kernel debugging options.
So I was looking into why a program was getting rid of my background, and the author of the program asked people to post their .xsession-errors, which many did. Then my next question was: what is .xsession-errors? A Google search reveals many results but nothing explaining what it is.
What I know so far:
It's some kind of error log, but I can't figure out what it's related to (Ubuntu itself? individual programs?)
I have one, and it seems like all Ubuntu systems have it, though I cannot verify that.
Linux graphical interfaces (such as GNOME) provide a way to run applications by clicking on icons instead of running them manually on the command line. However, when applications are launched this way, the output that would normally go to the command line is lost, especially the error output (STDERR).
To deal with this, some display managers (such as GDM) pipe the error output to ~/.xsession-errors, which can then be used for debugging purposes. Note that since all applications launched this way dump to the same log, it can get quite large and difficult to find specific messages.
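To watch the file while reproducing a problem, tail works well; a simple sketch:
tail -f ~/.xsession-errors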
Update: Per the documentation:
The ~/.xsession-errors X session log file has been deprecated and is no longer used.
It has been replaced by the systemd journal (journalctl command).
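On such systems, the equivalent place to look is the user session's journal; a sketch, assuming a systemd-based desktop:
journalctl --user -b
The --user flag selects the per-user journal, and -b restricts it to the current boot.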
It's the error log produced by your X Window System (which the Ubuntu GUI is built on top of).
Basically, it's quite a low-level error log for X11.
I am developing under Linux with pretty tight constraints on disk usage. I'd like to be able to point logging to a fixed-size file. For example, if my application outputs all logs to stdout:
~/bin/myApp > /dev/debug1
and then, to see the most recent output:
cat /dev/debug1
would write out however many bytes debug1 was set up to save (if at least that many had been written to it).
This post suggests using expect or its library, but I was wondering if anyone has seen a "pseudo-tty" device driver-type implementation as I would prefer to not bind any more libraries to my executable.
I realize there are other mechanisms like logrotate, but I'd prefer to have a non-cron solution.
Pointers, suggestions, questions welcome!
Perhaps you could achieve what you want using mkfifo and something that reads the pipe with a suitable buffer. I haven't tried it, but less --buffers=XXXXXX might work for this.
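A rough sketch of the named-pipe idea, keeping a bounded rolling log; the paths, the 1000-line budget, and the 2000-line trim threshold are all hypothetical placeholders:
mkfifo /tmp/debug1
~/bin/myApp > /tmp/debug1 &
while IFS= read -r line; do
  printf '%s\n' "$line" >> /tmp/debug1.log
  # trim back to the newest 1000 lines once the file doubles that
  if [ "$(wc -l < /tmp/debug1.log)" -gt 2000 ]; then
    tail -n 1000 /tmp/debug1.log > /tmp/debug1.trim &&
      mv /tmp/debug1.trim /tmp/debug1.log
  fi
done < /tmp/debug1
The most recent output is then always available via cat /tmp/debug1.log, much like the /dev/debug1 device described above.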
I am running a huge task (automated translation scripted with Perl plus a database, etc.) that will run for about two weeks non-stop. While thinking about how to speed it up, I noticed that the translator prints everything (all translated sentences, all progress information) to STDOUT all the time. This makes it visibly slower when the output lands on the console.
I obviously piped the output to /dev/null, but then I thought: could there be something even faster? It's so much output that it would really make a difference.
And that's the question I'm asking you, because as far as I know there is nothing faster... (But I'm far from being a guru, having used Linux daily for only the last 3 years.)
Output to /dev/null is implemented in the kernel, which is pretty bloody fast. The output pipe isn't your problem; it's the time it takes to build the strings that are being sent to /dev/null. I would recommend going through the program and commenting out (or guarding with if $be_verbose) all the useless print statements. I'm pretty sure that will give you a noticeable speedup.
I'm able (via dd) to dump 20 gigabytes of data per second down /dev/null. This is not your bottleneck :-p
Pretty much the only way to make it faster is to not generate the data in the first place: remove the logging statements entirely. The cost of producing all the log messages likely exceeds the cost of throwing them away by quite a bit.
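An easy way to check this on your own machine (throughput numbers will vary with hardware and block size):
dd if=/dev/zero of=/dev/null bs=1M count=20000
This streams about 20 GB of zeroes into /dev/null and reports the achieved rate when it finishes.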
Unrelated to Perl and standard output, but there is the null_blk block device, which can be even faster than /dev/null. Basically, it is bounded by syscall performance, and with large blocks it can saturate the memory bus.
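A sketch of trying it out, assuming your kernel ships the null_blk module (the device node it creates is /dev/nullb0):
sudo modprobe null_blk
dd if=/dev/zero of=/dev/nullb0 bs=1M count=1024 oflag=direct
oflag=direct bypasses the page cache, so the measurement reflects the device itself rather than memory buffering.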