GNU sort -R deterministic or not?

I was under the impression that GNU sort -R was a pseudo-random shuffler of lines, i.e. deterministic. However, every time I run it, the lines come out in a different order! Am I misunderstanding sort -R? Is there another, deterministic shuffler?

For deterministic behavior, you need to provide the --random-source argument,
e.g. sort -R --random-source=/dev/zero ~/tmp/test1.txt
--random-source acts like a seed and can point to any file.
The default sort -R is not deterministic; I've tested this on Ubuntu 14.04.
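A quick way to check this yourself (a minimal sketch; the file paths and contents are arbitrary):

```shell
# Create a small input file, shuffle it twice with the same fixed
# random source, and compare: identical output means deterministic.
printf 'a\nb\nc\nd\ne\n' > /tmp/lines.txt
sort -R --random-source=/dev/zero /tmp/lines.txt > /tmp/run1.txt
sort -R --random-source=/dev/zero /tmp/lines.txt > /tmp/run2.txt
cmp -s /tmp/run1.txt /tmp/run2.txt && echo "deterministic"
```

Running the same two commands without --random-source will generally produce differing files.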

Run a program when a new line appears in dmesg

sudo dmesg -w|grep "Manufacturer: Keychron K1"| xargs -I{} xset r rate 250 70
It does not work, why?
I am trying to reset keyboard settings when the keyboard is reconnected, but I cannot get the dmesg -w | xargs ... combination to work. It is supposed to work; for example, if I do this:
while :; do echo tick; sleep 1s; done|xargs -I{} date
I get a new time printed every second. The idea was to use dmesg -w, then grep, then xargs, but it does not work until xargs is killed from another terminal.
Why and how to fix?
You're asking an XY(Z) problem here, see "What is the XY problem?".
Your problem X is "how do I automatically have input devices configured in Xorg when hotplugged?"
And for that you simply want to write a xorg.conf.d InputClass rule that will make the appropriate settings for your keyboard:
https://www.x.org/releases/current/doc/man/man5/xorg.conf.5.xhtml#heading9
However, you misidentified this problem as problem Y, "how do I automatically execute a program upon hotplug?" – for that we have udev.
For that problem, look at udev rules: https://wiki.archlinux.org/title/Udev
And then you misidentified problem Y as "problem" Z, "how can I execute the last part of a chain of pipes when a certain string appears?" Tackling that does not solve your actual problem.
This
sudo dmesg -w | grep … | xargs …
pipes three programs together, which are all executing at the same time. xargs waits for the end of input and then executes whatever was passed to it as parameters. Of course dmesg -w will never produce an end-of-file.
Why
Buffering.
How to fix?
Set line buffering with stdbuf, or use grep --line-buffered.
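The difference is easy to see with a toy pipeline (a minimal sketch; the input strings are arbitrary). With --line-buffered, grep flushes each matching line into the pipe immediately instead of waiting for its output buffer to fill, and xargs -I{} runs its command once per incoming line:

```shell
# Each matching line is flushed through the pipe immediately,
# and xargs -I{} executes the command once per input line:
printf 'match one\nskip this\nmatch two\n' |
  grep --line-buffered 'match' |
  xargs -I{} echo "got: {}"
```

With a finite input like printf the end result looks the same either way (EOF flushes the buffer); the buffering only becomes visible with a never-ending producer like dmesg -w, where without --line-buffered nothing reaches xargs for a long time.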

Redirect output from subshell and processing through pipe at the same time

tl;dr: I need a way to process (with grep) output inside a subshell AND redirect all of the original output to the main stdout/stderr at the same time. I am looking for a shell-independent (!) way.
In detail
There is a proprietary binary whose output I want to grep for some value.
The proprietary binary might, from time to time, be interactive and ask for a password (depending on its internal logic).
I want to grep the output of the binary AND be able to enter the password if it is required to proceed further.
So the script which is supposed to achieve my task might look like:
#!/bin/sh
user_id=... # some calculated value
list_actions_cmd="./proprietary-binary --actions ${user_id}"
action_item=$(${list_actions_cmd} | grep '^Main:')
Here proprietary-binary might ask for a password on stdin. Since the command substitution $() captures all the output, an end user won't realize that list_actions_cmd is waiting for input. What I want is either to show all output of list_actions_cmd while grepping at the same time, or at least to catch the keyword signalling that the user is about to be asked for a password, and let them know.
What I have figured out so far is to tee the output and grep there:
#!/bin/sh
user_id=... # some calculated value
list_actions_cmd="./proprietary-binary --actions ${user_id}"
$list_actions_cmd 2>&1 | tee /tmp/.proprietary-binary.log
action_item=$(grep "^Main" /tmp/.proprietary-binary.log)
But I wonder: is there an elegant, shell-independent (not limited to bash, which is quite powerful) solution without an intermediate temporary file? Thanks.
What about duplicating output to stderr if executed in a terminal:
item=$(your_command | tee /dev/stderr | grep 'regexp')
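A runnable sketch of the same idea, with printf standing in for the proprietary binary (the sample lines are made up for illustration). The user sees the full output on stderr, while the command substitution still captures only the grepped line:

```shell
# tee duplicates the stream: one copy goes to stderr (visible in the
# terminal), the other continues down the pipe into grep.
item=$(printf 'Info: starting\nMain: action-42\n' | tee /dev/stderr | grep '^Main:')
echo "$item"
```

Here echo "$item" prints Main: action-42; everything, including the Info: line, also appears on stderr. Note that /dev/stderr is not required by POSIX, though it exists on Linux and most modern Unix systems.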

Bash shell argument passing...?

You can use a semicolon in the bash shell to run multiple commands.
Sometimes one of those commands asks a question, requiring user input (typically 'y'/'n', or whatever really).
If I know in advance what I want to answer, is there a way to pass it to the commands somehow, like an argument, or some weird magical pipe stuff?
You don't need any "weird magical pipe stuff", just a pipe.
./foo ; echo "y" | ./bar ; ./baz
Or magical herestring syntax if you prefer:
./foo ; ./bar <<<"y" ; ./baz
You can use the yes command to feed a stream of 'y' answers into a pipe.
For example, if you want to remove all your text files, you can use
yes | rm -r *.txt
causing every question asked by rm to be answered with a y.
If you want another default answer, you can give it as an argument to yes:
yes n | rm -r *.txt
This will answer 'n' instead.
For more information, see http://en.wikipedia.org/wiki/Yes_(Unix)
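yes simply repeats its argument (y by default) forever; piping it through head shows the stream it produces:

```shell
# Print the first three lines of the endless stream "n\nn\nn\n...":
yes n | head -n 3
```

This prints n on three lines. When head exits, yes receives SIGPIPE and terminates, so the pipeline ends cleanly.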
For the simple "yes" answer there is a command yes, available on most Unix and Linux platforms:
$ yes | /bin/rm -i *
For an advanced protocol you may want to check the famous Expect, also widely available. It needs basic knowledge of Tcl.
First, it's not bash popping these questions; it is the particular program being called (for instance, cp -i asks before overwriting files). Frequently those commands also have switches to answer the questions, like -y for fsck, or how -f overrides -i in rm. Many programs can be answered through a pipe, which is why we have the command yes, but some go to extra lengths to ensure they cannot; for instance, ssh when asking for passwords. Also, there's nothing magical about pipes. If the program only sometimes asks a question, and it matters which question, there are tools designed for that, such as expect.
In a typical shell script, when you do know exactly what you want to feed in and the program accepts input on stdin, you could handle it using a plain pipe:
echo -e '2+2\n5*3' | bc
If it's a longer piece then a here document might be helpful:
bc <<EOF
2+2
3*5
EOF
Sometimes a command provides an option to set the default answer to a question. One notable example is apt-get, the package manager for Debian/Ubuntu/Mint. It provides the options -y, --yes, --assume-yes for use in non-interactive scripts.

Preserving color of text piped through "less" or "more"

Certain commands produce text in color for readability.
I'm using Linux. For example when I'm using rak or hg diff
the output is in color for better readability.
However when I pipe the output through less
hg diff | less
the colors are lost.
How do I preserve the color?
Thanks!
I believe some commands are smart enough NOT to output color if they detect that they are writing to a pipe or a file instead of to the console, since that could ruin the parsing of their output by the next program in the pipeline.
You can try forcing the programs to output color with their respective flags (e.g. --color or whatever), but it's ultimately implementation-dependent whether they'll honor your request.
GNU grep 2.27, for example, will not output color into less even when passing --color to it. But if you pass --color=always and pipe it into less, you'll be able to see the color escape codes through less. And then, using the -R flag will have less interpret the color escape codes.
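You can see this with grep itself; cat -v makes the otherwise invisible escape codes printable (the ESC byte shows up as ^[):

```shell
# With --color=always, grep emits ANSI escape codes even into a pipe;
# cat -v renders them visibly instead of coloring the text.
echo "hello world" | grep --color=always hello | cat -v
```

The output contains sequences like ^[[...m wrapped around the match — exactly the codes that less -R would interpret as color.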
Try less -r or (safer) less -R. See the manual.
Since you probably don't want to specify that all the time:
export LESS=-R # Put that in a startup script like .bashrc.local
hg diff | less
For Mercurial, you can also use the pager extension.
Note: Some commands automatically turn off color output when they detect that the output goes to a pipe instead of the terminal. To fix this, force color output.
For example ls -al will show color output but ls -al | less -R will not. ls -al --color | less -R will work as expected.

Read stdout from a process (linux embedded)

Before flagging the question as duplicate, please read my various issues I encountered.
A bit of background: we are developing a C++ application running on an embedded ARM SBC using a lite variant of Debian Linux. The application starts at boot, launched by the boot script, and prints various information to stdout. What we would like is the ability to connect over SSH/Telnet and read the application's output, without having to kill the process and restart it in the current shell session. I want to create a simple .sh script for non-tech-savvy people to use.
The first solution for the similar question posted here is to use gdb. First, it's not user-friendly (you need to type multiple commands manually), and, for whatever reason, it doesn't seem to output anything into the file.
The second solution strace -ewrite -p PID works perfectly, that's what I want. Problem is, there's a lot more information than just the stdout, and it's badly formatted.
I managed to get an "acceptable" result with strace -e write=1 -s 1024 -p 20049 2>&1 | grep "write(1," but it still has the superfluous write(1, "...", 19) = 19 text. Up to this point it's simply a bit of string formatting, and I've found on multiple other pages this line, said to achieve good formatting: strace -ff -e write=1,2 -s 1024 -p PID 2>&1 | grep "^ |" | cut -c11-60 | sed -e 's/ //g' | xxd -r -p
There are some things I find strange in this command (why -ff? why grep "^ |"? why xxd?), and it just doesn't output anything when I try it.
Unfortunately, we use an old, buggy version of BusyBox (1.7.1) that has problems with multiple pipes, and that bug gives me bad results. For example, if I only do the grep it works, and if I only do the cut it also works, but grep "write(1," | cut -c11-60 returns nothing.
I know the real solution would simply be to update BusyBox and use these multiple pipes to format the string, but we can't update it, since the OS distribution is already installed on thousands of boards shipped to our clients worldwide.
Does anyone have a miraculous solution? Thanks
Screen can be connected to an existing process using reptyr (http://blog.nelhage.com/2011/01/reptyr-attach-a-running-process-to-a-new-terminal/), or you can use neercs (http://caca.zoy.org/wiki/neercs) which I haven't used but apparently is like screen but supports attaching to an existing process all by itself.
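Since the BusyBox bug only bites with multiple pipe stages, one workaround is to do all the post-processing in a single sed invocation. This is a sketch against a fabricated sample of strace-style output (the write(1, ...) lines below are made up for illustration); in practice you would feed it strace -e trace=write -s 1024 -p PID 2>&1 instead of printf:

```shell
# Keep only the quoted payload of write(1, ...) lines, stripping the
# write(1, " prefix and the ", N) = N suffix in one sed stage,
# so only a single pipe is needed:
printf 'write(1, "hello\\n", 6) = 6\nwrite(2, "err\\n", 4) = 4\n' |
  sed -n 's/^write(1, "\(.*\)", [0-9]*) *= *[0-9]*$/\1/p'
```

This prints hello\n (the fd-1 payload, with strace's literal \n escape still in place); the fd-2 line is dropped. The regex is a best-effort assumption about strace's line format and may need adjusting for your strace version, e.g. for unfinished/resumed lines.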
