36 forks of evince vs. 36 instances of evince at once? Pros and cons? - linux

For a research project in human learning and cognition we're writing a wrapper program/shell script with these objectives:
(a) it should launch 36 different PDFs at once;
(b) place them at appropriate locations on the screen;
(c) auto-flip to the next page at a rate of 1 page per second; when it reaches the end of a PDF, it should load the next PDF in the queue.
Almost everything has been sorted out except which of these two approaches would be better and why?
(A) If we use the DBUS protocol to query evince about whether it's on the last page, we can't have multiple instances of evince running 36 different PDFs at once. So we make 35 (= 36 − 1) forks of evince, change the identifiers in the code, and name the newly compiled programs evince1, evince2, … up to evince35. The script would handle the 36 evinces (evince, evince1, … evince35) as different programs.
(B) We do away with the DBUS protocol, use some other method to auto-flip pages, and launch 36 instances of evince all at once.
What would be the pros and cons of the two approaches?
P.S.- I have a gut feeling that opening 36 instances of a reader program and making each of them flip to the next page is just too much work for the program in question.
===============================
PS- Please be kind to me, I'm not a programmer and forgive my ignorance. In a perfect world I'd not even need to know what the DBUS protocol is. :)
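For what it's worth, objectives (a) and (b) need neither forks nor DBUS: plain instances can be launched and placed from a wrapper. Here is a minimal Python sketch, assuming evince and the wmctrl utility are installed, a 1920×1080 screen, and that each window's title contains the PDF's file name; all of those are assumptions to verify on the target machine:

```python
import subprocess
import time
from pathlib import Path

def grid_positions(n, screen_w=1920, screen_h=1080, cols=6):
    """Return (x, y, w, h) tiles laying n windows out row-major on a cols-wide grid."""
    rows = (n + cols - 1) // cols
    w, h = screen_w // cols, screen_h // rows
    return [((i % cols) * w, (i // cols) * h, w, h) for i in range(n)]

def launch_tiled(pdfs):
    """Launch one evince per PDF, then tile the windows with wmctrl."""
    positions = grid_positions(len(pdfs))
    for pdf in pdfs:
        subprocess.Popen(["evince", pdf])
    time.sleep(2)  # crude: wait for the windows to map before moving them
    for pdf, (x, y, w, h) in zip(pdfs, positions):
        # wmctrl -r matches a window by (partial) title; -e is gravity,x,y,width,height
        subprocess.call(["wmctrl", "-r", Path(pdf).name, "-e", f"0,{x},{y},{w},{h}"])
```

The auto-flip part, objective (c), remains the genuinely hard bit under either approach; wmctrl only solves placement.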

Related

Print to PDF by browser take a lot of threads?

I tried Brave on Mac with the command in the answer by K J in the following question. But after running many such conversions, I may end up with the message -bash: fork: retry: Resource temporarily unavailable in a terminal. It seems that too many threads are used and not cleaned up afterward. What is going wrong here?
How to use brave to automate printing html to pdf?
OK, I guess this may not be a normal type of answer, as it's kind of a "works for me".
In cases such as this, where the "programming" is simply one cross-platform command that through its dependencies uses system and application resources, there are times when one user has problems and others do not. Debugging can therefore be highly system-dependent.
By way of explanation of the potential issues (it's too long for simple comments), here are my experiences on Windows.
Pre-run (why so many Windows processes!!)
Fresh boot:
Apps = 3
(one is the system folder explorer, one is this Notepad, one is the Task Manager monitor)
Background processes = 82, including Edge (inactive = 5 !!)
Processes = 107, including console = 5 !!
Start command terminal:
Apps = +1, with 3 sub-processes?
Background = +1 (command prompt)
Processes = +1 (console now = 6)
Start Brave portable:
Apps = +1, Brave with no page requested, only the welcome page, but with 8 sub-processes !#?
Background = +1 (brave portable)
Processes = same (console still = 6)
Navigate to this page:
Apps = same, Brave with this page requested = 9, then drops back to 8 sub-processes!
Background = same (1 brave portable)
Processes = same (console still = 6)
Run 20 similar commands, with and without --enable-logging.
Mea culpa (idiot): the batch of 20 failed several times, because I did not verify whether it would run without Brave already running, nor check for a bad cut-and-paste;
but it looks like there was no residual change to processes??
Try again with Brave closed:
Apps and background processes return to their values from before Brave was active.
for /l %a in (1,1,20) do brave-portable --headless --print-to-pdf="C:\Users\K\Downloads\brave-portable\test2-%a.pdf" --disable-extensions --print-to-pdf-no-header --disable-popup-blocking --run-all-compositor-stages-before-draw --disable-checker-imaging "https://stackoverflow.com/questions/74788259/how-to-use-brave-to-automate-printing-html-to-pdf"
Hmmm, without error checking, there is some noticeable difference from an earlier run:
the call completes in a few seconds, thus it's much too quick to see the tasks listed in the manager.
Background processes ramped up to 194!! before dropping back to about 78,
and after about 20 seconds there are 19 same-size files (almost as might be expected).
Now what is odd about that is that, from experience, they should usually all be different sizes, as each call should show different in-page adverts over time;
but then again I had logged in and accepted cookies earlier, so there should be no ads to make a difference in later runs.
EXCEPT ONE rogue file out of 20 has an advert. Arghhhhhhhh!!
So the inconsistencies saga continues.
However, there is no residual use of task processes in my Windows portable Brave with that command sequence!
On its own, Brave is using only a few percent of CPU and memory before and after, with no hint of tying up disk or other resources.

How do I "dump" the contents of an X terminal programmatically a la /dev/vcs{,a} in the Linux console?

Linux's kernel-level console/non-X terminal emulator contains a very cool feature (if compiled in): each /dev/ttyN device corresponds with /dev/vcsaN and /dev/vcsN devices which represent the in-memory (displayed) state of that tty, with and without attributes (color, flashing, etc) respectively. This allows you to very easily cat /dev/vcs7 and see a dump of /dev/tty7 wherever cat was launched. I used this incredibly practical capability the other day to log in to a system via SSH and remotely watch a dd process I'd forgotten to put inside a screen (or similar) session - it was running off a text console, so I took a few moments to fine-tune the character ranges that I wanted to grab, and presently I was watching dd's transfer status over SSH (once every second, incidentally).
To reiterate and clarify, /dev/vcs{,a}* are character devices that retrieve the current in-memory representation of the kernel console VT100 emulator, represented as a single "line" of text (there are no "newlines" at the end of each "line" of the screen). Just to remove confusion, I want to note that I can't tail -f this device: it's not a character stream like the TTY itself is. (But I've never needed that kind of behavior, for what it's worth.)
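To make the "single line of text" point concrete: a /dev/vcsN dump is rows × columns character cells with no newline bytes, so displaying it means reflowing it yourself. A minimal Python sketch, assuming an 80-column console (the sibling /dev/vcsaN device actually prefixes its dump with the geometry, so real code could read the size from there):

```python
def reflow(dump: bytes, cols: int = 80) -> str:
    """Reinsert newlines into a flat /dev/vcsN dump (one cell per byte, no '\n')."""
    text = dump.decode("latin-1")
    return "\n".join(text[i:i + cols] for i in range(0, len(text), cols))

# Reading the device needs access rights to /dev/vcsN (e.g. group tty, or root):
# with open("/dev/vcs7", "rb") as f:
#     print(reflow(f.read()))
```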
I've kept my ears perked for many years for a method to dump the character-cell memory state of X terminal emulators - or indeed any arbitrary process that needs to work with ttys, in some similar manner as I can with the Linux console. And... I am rather surprised that there is no practical solution to this problem - since it has, arguably, existed for approximately 30 years - X was introduced in 1984 - or, to be pedantic, at least 19 years - /dev/vcs{,a}* was introduced in kernel 1.1.94; the newest file in that release is dated 22 Feb 1995. (The oldest is from 1st Dec 1993 :P)
I would like to say that I do understand and realize that the tty itself is not a "screen buffer" as such but a character stream, and that the nonstandard feature I essentially exploited above is a quirky capability specific to the Linux VT102 emulator. However, this feature is cool enough (why else would it be in the mainline tree? :D) that, in my opinion, there should be a counterpart to it for things that work with /dev/pts*.
This afternoon, I needed to screen-scrape the output of an interactive ncurses application so I could extract metadata from the information it presented in my terminal. (There was no other practical way to achieve the goal I was aiming for.) Linux's kernel VT100 driver would permit such a task to be completed very easily, and I made the mistake of thinking that, in light of this, it couldn't truly be that hard to do the same under X11.
By 9AM, I'd decided that the easiest way to experimentally request a dump of a remote screen would be to run it in dtach (think "screen -x" without any other options) and hack the dtach code to request a screen update and quit.
Around 11AM-12PM, I was requesting screen updates and dumping them to stdout.
Around 3:30PM, I accepted that using dtach would be impossible:
First of all, it relies on the application itself to send the screen redraws on request, by design, to keep the code simple. This is great, but, as luck would have it, the application I was using didn't support whole-screen repaints - it would only redraw on screen-size change (and only if the screen size was truly different!).
Running the program inside a screen session (because screen is a true terminal emulator and has an internal 2D character-cell buffer), then running screen -x inside dtach, also mysteriously failed to produce character cell updates.
I have previously examined screen and found the code sufficiently insane to remove any inclination I might otherwise have had to hack on it; all I can say is that said insanity may be one of the reasons screen does not already have the capabilities I have presented here (which would arguably be very easy to implement).
Other questions similar to this one frequently get answers suggesting typescript, or script; I just want to clarify that script saves the raw stream of the tty itself to a file, which I would then need to push through a VT100 emulator to obtain a screen image of the current state of the tty in question. In other words, script would be a very insane solution to my problem.
I'm not marking this as accepted since it doesn't solve the actual core issue (which is many years old), but I was able to achieve the specific goal I set out to do.
My specific requirement was that I wanted to screen-scrape the output of the ncdu interactive disk usage browser, so I could simply press Enter in another terminal (or perform some similar, easy sequence) to add the directory currently highlighted/selected in ncdu to a list of files I wanted to work with. My goal was not to have to distract myself with endless copy+paste and/or retyping of directory names (probably with not a few inaccuracies to boot), so I could focus on the directories I wanted to select.
screen has a refresh feature, accessed by pressing (by default) CTRL+A, CTRL+L. I extended my copy of dtach to be capable of sending keystrokes in addition to dumping remote screens to stdout, and wrapped dtach in a script that transmitted the refresh sequence (\001\014) to screen -x running inside dtach. This worked perfectly, retrieving complete screen updates without any flicker.
I will warn anyone interested in trying this technique, however, that you will need to perfect the art of dodging VT100 escape sequences. I used regular expressions for this so I wasn't writing thousands of lines of code; here's the specific part of the script that extracted out the two pieces of information I needed:
sh -c "(sleep 0.1; dtach -k qq $'\001\014') &"; path="$(dtach -d qq -t 130000 | sed -n $'/^\033\[7m.*\/\.\./q;/---.*$/{s/.*--- //;s/ -\+.*//;h};/^\033\[7m/{s/.\033.*//g;s/\r.*//g;s/ *$//g;s/^\033\[7m *[^ ]\+ \[[# ]*\] *\(\/*\)\(.*\)$/\/\\2\\1/;p;g;p;q}' | sed 'N;s/\(.*\)\n\(.*\)/\2\1/')"
Since screenshots are cool and help people visualize things, here's a look at how it works when it's running:
The file shown inverted at the bottom of the ncdu-scrape window is being screen-scraped from the ncdu window itself; the four files in the list are there because I selected them using the arrow keys in ncdu, moved my mouse over to the ncdu-scrape window (I use focus-follows-mouse), and pressed Enter. That added the file to the list (a simple text file itself).
Having said this, I would like to clarify that the regular expression above is not a code sample to run with; it is, rather, a warning: for anything beyond incredibly trivial (!!) content extractions such as the one presented here, you're basically getting into the same territory as the large corporations/interests who want to convert from VT100-based systems to something more modern, and who have to spend tens of thousands commissioning large translation frameworks that perform the kind of conversion outlined above on an especially large scale.
Saner solutions appreciated.

Why is my Clojure project slow on Raspberry Pi?

I've been writing a simple Clojure framework for playing music (and later some other stuff) for my Raspberry Pi. The program parses a given music directory for songs and then starts listening for control commands (such as start, stop, next song) via a TCP interface.
The code is available via GitHub:
https://github.com/jvnn/raspi-framework
The current version works just fine on my laptop, it starts playing music (using the JLayer library) when instructed to, changes songs, and stops just as it should. The uberjar takes a few seconds to start on the laptop as well, but when I try to run it on the Raspberry Pi, things get insanely slow.
Just starting up the program so that all classes are loaded and the actual program code starts executing takes way over a minute. I tried to run it with the -verbose:class switch, and it seems the jvm spends the whole time just loading tons of classes (for Clojure and everything else).
When the program finally starts, it does react to the commands given, but the playback is very laggy. There is a short sub-second sound, then a pause for almost a second, then another sound, another pause etc... So the program is trying to play something but it just can't do it fast enough. CPU usage is somewhere close to 98%.
Now, having an Android phone and all, I'm sure Java can be executed on such hardware well enough to play some MP3 files without any trouble. And I know that JLayer (or parts of it) is used in the libGDX game development framework (which also runs on Android), so it shouldn't be the problem either.
So everything points to me being the problem. Is there something I can do, either with Leiningen (AOT is already enabled for all files), the Raspberry Pi, or my code, that could make things faster?
Thanks for your time!
UPDATE:
I made a tiny test case to rule out some possibilities and the problems still exist with the following Clojure code:
(ns test.core
  (:import [javazoom.jl.player.advanced AdvancedPlayer])
  (:gen-class))

(defn -main
  []
  (let [filename "/path/to/a/music/file.mp3"
        fis (java.io.FileInputStream. filename)
        bis (java.io.BufferedInputStream. fis)
        player (AdvancedPlayer. bis)]
    (doto player (.play) (.close))))
The project.clj:
(defproject test "0.0.1-SNAPSHOT"
  :description "FIXME: write description"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [javazoom/jlayer "1.0.1"]]
  :javac-options ["-target" "1.6" "-source" "1.6" "-Xlint:-options"]
  :aot :all
  :main test.core)
So, no core.async and no threading. The playback did get a bit smoother, but it's still about 200ms music and 200ms pause.
The most obvious problem to me is that you have a lot of un-hinted interop code, leading to very expensive runtime reflection. Try running lein check (I think that's built in, but maybe you need a plugin) and fixing the reflection warnings it points out; evaluating (set! *warn-on-reflection* true) at the top of a namespace surfaces them as well.

Linux capture window of a large stream of data

Let's say I have a ton of data flowing through stdout over a long period of time, maybe an hour, and I want to capture a 30 second window of that data based on a trigger that occurs in the middle of that window. For instance, maybe something like
$ program-that-outputs-lots-of-data | program-that-captures-a-window-of-data
At some point, a line that contains "A-unique-string" will be output by the program, and at that point I want to save the 15 seconds worth of data before and after that string, discarding everything before that. Immediately afterward, I want to start monitoring again for the same string and capture another window when it comes in and save it to a new file. Any idea how I can do something like this with Linux tools?
The fact that you are trying to use time as the unit for buffering makes your problem quite unusual. On the Unix command line, everything tends to be designed around the concept of a text line.
For example, if instead of 15 seconds of data you would like to capture 15 lines of text (before and after the special token), you could simply do:
$ program-that-outputs-lots-of-data | grep -C 15 A-unique-string
In your case, even if you develop your own tailored filtering tool, deciding how much input to save and how much to discard is a pretty complex problem. I'd say that multimedia streaming is the area where there might be some ready-to-use tools.
I don't think anything exists that approaches these goals. Aside from the fact that your requirements are fairly specific, you also ask that the window be time-based, whereas most Unix-style text filters are line-oriented (e.g. grep -C 100 to get the hundred lines surrounding a match).
It should be fairly straightforward to do this in Python or Perl or Ruby or a similar scripting language.
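A sketch of such a filter in Python, using a deque as the pre-trigger ring buffer. The ±15-second window and the trigger string come from the question; stamping each line with the wall-clock time at which it was read is an assumption, since the stream itself carries no timestamps:

```python
import sys
import time
from collections import deque

def capture(events, trigger="A-unique-string", window=15.0):
    """events: iterable of (timestamp, line) pairs in time order.
    Yields one list of lines per trigger occurrence, covering roughly
    `window` seconds before and after the line containing `trigger`."""
    buf = deque()      # (timestamp, line) pairs from the last `window` seconds
    pending = []       # open captures, each [deadline, lines]
    for ts, line in events:
        buf.append((ts, line))
        while buf and buf[0][0] < ts - window:
            buf.popleft()                 # discard pre-trigger data that aged out
        while pending and ts > pending[0][0]:
            yield pending.pop(0)[1]       # post-trigger window elapsed: emit it
        for cap in pending:
            cap[1].append(line)           # still inside an open window
        if trigger in line:
            pending.append([ts + window, [l for _, l in buf]])
    for cap in pending:                   # flush windows still open at EOF
        yield cap[1]

# Usage from a pipe, stamping lines at read time:
# for i, chunk in enumerate(capture((time.time(), l) for l in sys.stdin)):
#     with open(f"window-{i}.log", "w") as f:
#         f.writelines(chunk)
```

Note that overlapping triggers simply open overlapping windows, matching the "start monitoring again immediately" requirement.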

Linux - Program Design for Debug - Print STDOUT streams from several programs

Let's say I have 10 programs (in terminals) working in tandem: {p1,p2,p3,...,p10}.
It's hard to keep track of all STDOUT debug statements in their respective terminal. I plan to create a GUI to keep track of each STDOUT such that, if I do:
-- Click on p1 would "tail" program 1's output.
-- Click on p3 would "tail" program 3's output.
It's a decent approach, but there may be better ideas out there? It's just overwhelming to have 10 terminals; I'd rather have one super-terminal that keeps track of them all.
And unfortunately, Linux "screen" is not an option. RESTRICTION: I only have the ability to redirect STDOUT to a file (or to read directly from STDOUT).
If you are looking for a creative alternative, I would suggest that you look at sockets.
If each program writes to the socket (rather than STDOUT), then your master terminal can act as a server and organize the output.
Now from what you described, it seems as though you are relatively constrained to STDOUT, however it could be possible to do something like this:
# (use netcat (or nc on some systems) to write to a socket on the provided port)
./prog1 | netcat localhost 12312
I'm not sure if this fits in the requirements of what you are doing (and it might be more effort than it is worth!), but it could provide a very stable solution.
EDIT: As was pointed out in the comments, netcat does exactly what you would need to make this work.
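The server half of that idea is small. Here is a Python sketch of the "master terminal": it listens on the same port 12312 as the netcat example, tags each connected program (the p1, p2, … naming is an assumption), and interleaves their lines on one screen:

```python
import selectors
import socket

def tag(name, line):
    """Prefix one program's output line with its identity."""
    return f"[{name}] {line}"

def serve(host="localhost", port=12312):
    sel = selectors.DefaultSelector()
    srv = socket.create_server((host, port))
    srv.setblocking(False)
    sel.register(srv, selectors.EVENT_READ, data=None)
    names = {}
    while True:
        for key, _events in sel.select():
            if key.data is None:                     # new program connecting
                conn, _addr = key.fileobj.accept()
                conn.setblocking(False)
                names[conn] = f"p{len(names) + 1}"
                sel.register(conn, selectors.EVENT_READ, data=b"")
            else:
                chunk = key.fileobj.recv(4096)
                if not chunk:                        # program exited
                    sel.unregister(key.fileobj)
                    key.fileobj.close()
                    continue
                # keep any trailing partial line buffered for the next read
                *lines, rest = (key.data + chunk).split(b"\n")
                sel.modify(key.fileobj, selectors.EVENT_READ, data=rest)
                for line in lines:
                    print(tag(names[key.fileobj], line.decode(errors="replace")))
```

Each producer then runs as in the answer above: ./prog1 | netcat localhost 12312.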
