Documentation for uinput - linux

I am trying very hard to find documentation for uinput, but the only thing I have found is linux/uinput.h. I have also found some tutorials on the internet, but no documentation at all!
For example I would like to know what UI_SET_MSCBIT does but I can't find anything about it.
How do people know how to use uinput?

Well, such subtle things take some investigative effort. From the
drivers/input/misc/uinput.c and include/uapi/linux/uinput.h files you can see the bits behind the UI_SET_* definitions, like:
MSC
REL
LED
etc.
Run the following command in the kernel source directory:
$ git grep --all-match -e 'MSC' -e 'REL' -e 'LED' -- Documentation/*
or use regular grep if your kernel tree doesn't have a .git directory:
$ grep -rl MSC Documentation/* | xargs grep -l REL | xargs grep -l LED
You'll get this file: Documentation/input/event-codes.txt, from which you can see:
EV_MSC: Used to describe miscellaneous input data that do not fit into other types.
EV_MSC events are used for input and output events that do not fall under other categories.
A few EV_MSC codes have special meaning:
MSC_TIMESTAMP: Used to report the number of microseconds since the last reset. This event should be coded as an uint32 value, which is allowed to wrap around with no special consequence. It is assumed that the time difference between two consecutive events is reliable on a reasonable time scale (hours). A reset to zero can happen, in which case the time since the last event is unknown. If the device does not provide this information, the driver must not provide it to user space.
I'm afraid this is the best you can find out there for UI_SET_MSCBIT.

Related

How can I use the parallel command to exploit multi-core parallelism on my MacBook?

I often use the find command on Linux and macOS. I just discovered the parallel command, and I would like to combine it with find if possible, because find takes a long time when searching for a specific file in large directories.
I have searched for this information but the results are not accurate enough. There appear to be a lot of possible syntaxes, but I can't tell which one is relevant.
How do I combine the parallel command with the find command (or any other command) in order to benefit from all 16 cores that I have on my MacBook?
Update
From @OleTange's answer, I think I have found the kind of command that interests me.
So, to know more about it, I would like to know the purpose of the {} and ::: tokens in the following command:
parallel -j8 find {} ::: *
1) Are these tokens mandatory?
2) How can I insert the classic find options such as -type f or -name '*.txt'?
3) For the moment I have defined the following function in my .zshrc:
ff () {
    find "$1" -type f -iname "$2" 2> /dev/null
}
How could I do the equivalent with a fixed number of jobs (which I could also pass as a shell argument)?
Parallel processing makes sense when your work is CPU-bound (the CPU does the work, and the peripherals are mostly idle), but here you are trying to improve the performance of a task which is I/O-bound (the CPU is mostly idle, waiting for a busy peripheral). In this situation, adding parallelism will only add congestion, as multiple tasks end up fighting over the already-scarce I/O bandwidth.
On macOS, the system already indexes all your data anyway (including the contents of word-processing documents, PDFs, email messages, etc); there's a friendly magnifying glass on the menu bar at the upper right where you can access a much faster and more versatile search, called Spotlight. (Though I agree that some of the more sophisticated controls of find are missing; and the "user friendly" design gets in the way for me when it guesses what I want, and guesses wrong.)
Some Linux distros offer a similar facility; I would expect that to be the norm for anything with a GUI these days, though the details will differ between systems.
A more traditional solution on any Unix-like system is the locate command, which performs a similar but more limited task; it will create a (very snappy) index on file names, so you can say
locate fnord
to very quickly obtain every file whose name matches fnord. The index is simply a copy of the results of a find run from last night (or however you schedule the backend to run). The command is already installed on macOS, though you have to enable the back end if you want to use it. (Just run locate locate to get further instructions.)
You could build something similar yourself if you find yourself often looking for files with a particular set of permissions and a particular owner, for example (these are not features which locate records); just run a nightly (or hourly etc) find which collects these features into a database -- or even just a text file -- which you can then search nearly instantly.
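For instance, a minimal sketch of such a home-grown index (the index path, schedule, and recorded fields are arbitrary choices here; note that -printf is GNU find, so stock macOS find would need stat instead):

```shell
#!/bin/sh
# Nightly (e.g. from cron) rebuild of a flat-text index recording the
# mode, owner, and path of every file; searching it later is a plain grep.
INDEX="$HOME/.fileindex"
find "$HOME" -type f -printf '%m %u %p\n' 2>/dev/null > "$INDEX.tmp" &&
    mv "$INDEX.tmp" "$INDEX"
```

A crontab entry like `0 3 * * * /path/to/reindex.sh` refreshes it nightly, and something like `grep '^600 alice ' ~/.fileindex` then answers "alice's mode-600 files" nearly instantly.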
For running jobs in parallel, you don't really need GNU parallel, though it does offer a number of conveniences and enhancements for many use cases; you already have xargs -P. (The xargs on macOS which originates from BSD is more limited than GNU xargs which is what you'll find on many Linuxes; but it does have the -P option.)
For example, here's how to run eight parallel find instances with xargs -P:
printf '%s\n' */ | xargs -I {} -P 8 find {} -name '*.ogg'
(This assumes the wildcard doesn't match directories which contain single quotes or newlines or other shenanigans; GNU xargs has the -0 option to fix a large number of corner cases like that; then you'd use '%s\0' as the format string for printf.)
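Under those GNU assumptions, the NUL-safe variant sketched in that parenthesis would look something like:

```shell
# NUL-terminate each directory name so spaces, quotes, and even newlines
# in names are harmless; -0 makes xargs split its input on NUL instead.
printf '%s\0' */ | xargs -0 -I {} -P 8 find {} -name '*.ogg'
```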
As the parallel documentation readily explains, its general syntax is
parallel -options command ...
where {} will be replaced with the current input line (if it is missing, it will be implicitly added at the end of command ...) and the (obviously optional) ::: special token allows you to specify an input source on the command line instead of as standard input.
Anything outside of those special tokens is passed on verbatim, so you can add find options to your heart's content just by specifying them literally.
parallel -j8 find {} -type f -name '*.ogg' ::: */
I don't speak zsh but refactored for regular POSIX sh your function could be something like
ff () {
parallel -j8 find {} -type f -iname "$2" ::: "$1"
}
though I would perhaps switch the arguments so you can specify a name pattern and a list of files to search, à la grep.
ff () {
    # "local" is not POSIX but works in many sh versions
    local pat=$1
    shift
    parallel -j8 find {} -type f -iname "$pat" ::: "$@"
}
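If GNU parallel isn't available, the same argument-swapped function can be sketched with xargs -P instead (assuming an xargs with the -0 and -P options, as GNU xargs has):

```shell
ff () {
    # plain assignment instead of "local" keeps this POSIX,
    # at the cost of pat leaking into the caller's scope
    pat=$1
    shift
    # feed each start directory NUL-terminated to 8 parallel finds
    printf '%s\0' "$@" | xargs -0 -I {} -P 8 find {} -type f -iname "$pat"
}
```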
But again, spinning your disk to find things which are already indexed is probably something you should stop doing, rather than facilitate.
Just run a background job for each first-level path separately.
The example below spawns one find analysis per subdirectory (12 of them here):
$ for i in [A-Z]*/ ; do find "$i" -name "*.ogg" >> logfile & done
[1] 16945
[2] 16946
[3] 16947
# many lines
[1] Done find "$i" -name "*.ogg"
[2] Done find "$i" -name "*.ogg"
#many lines
[11] Done find "$i" -name "*.ogg"
[12] Done find "$i" -name "*.ogg"
$
Doing so creates many find processes, which the system will dispatch across the different cores like any others.
Note 1: it looks like a crude way to do it, but it just works.
Note 2: the find command itself is not hard on the CPUs/cores; in 99% of use cases this is simply useless, because each find process spends its time waiting for I/O from the disks. In that case, using parallel or similar commands won't help.
As others have written, find is I/O-heavy and most likely not limited by your CPUs.
But depending on your disks it can be better to run the jobs in parallel.
NVMe disks are known for performing best if there are 4-8 accesses running in parallel. Some network file systems also work faster with multiple processes.
So some level of parallelization can make sense, but you really have to measure to be sure.
To parallelize find with 8 jobs running in parallel:
parallel -j8 find {} ::: *
This works best if you are in a dir that has many subdirs: Each subdir will then be searched in parallel. Otherwise this may work better:
parallel -j8 find {} ::: */*
Basically the same idea, but now using subdirs of dirs.
If you want the results printed as soon as they are found (and not after the find is finished) use --line-buffer (or --lb):
parallel --lb -j8 find {} ::: */*
To learn about GNU Parallel spend 20 minutes reading chapter 1+2 of https://doi.org/10.5281/zenodo.1146014 and print the cheat sheet: https://www.gnu.org/software/parallel/parallel_cheat.pdf
Your command line will thank you for it.
You appear to want to be able to locate files quickly in large directories under macOS. I think the correct tool for that job is mdfind.
I made a hierarchy with 10,000,000 files under my home directory, all with unique names that resemble UUIDs, e.g. 80104d18-74c9-4803-af51-9162856bf90d. I then tried to find one with:
mdfind -onlyin ~ -name 80104d18-74c9-4803-af51-9162856bf90d
The result was instantaneous and too fast to measure, so I did 100 lookups; they took under 20 s, so a lookup takes 0.2 s on average.
If you actually wanted to locate 100 files, you can group them into a single search like this:
mdfind -onlyin ~ 'kMDItemDisplayName==ffff4bbd-897d-4768-99c9-d8434d873bd8 || kMDItemDisplayName==800e8b37-1f22-4c7b-ba5c-f1d1040ac736 || kMDItemDisplayName==800e8b37-1f22-4c7b-ba5c-f1d1040ac736'
and it executes even faster.
If you only know a partial filename, you can use:
mdfind -onlyin ~ "kMDItemDisplayName = '*cdd90b5ef351*'"
/Users/mark/StackOverflow/MassiveDirectory/800f0058-4021-4f2d-8f5c-cdd90b5ef351
You can also use creation dates, file types, author, video duration, or tags in your search. For example, you can find all PNG images whose name contains "25DD954D73AF" like this:
mdfind -onlyin ~ "kMDItemKind = 'PNG image' && kMDItemDisplayName = '*25DD954D73AF*'"
/Users/mark/StackOverflow/MassiveDirectory/9A91A1C4-C8BF-467E-954E-25DD954D73AF.png
If you want to know what fields you can search on, take a file of the type you want to look for and run mdls on it; you will see all the fields that macOS knows about:
mdls SomeMusic.m4a
mdls SomeVideo.avi
mdls SomeMS-WordDocument.doc
Also, unlike with locate, there is no need to update a database frequently.

Create file with process names, with a filter

For a UNIX class we are supposed to create a file with the name deamons that contains all the processes whose names end with d.
My approach was to use something like:
ps | find PID *d
or
find *d > ps
but none of these approaches is anywhere close to achieving the results I want.
We have used ls, ps, find, grep and some other basic UNIX commands so far. The terminal runs in CentOS 5 and we don't have rights to install any packages.
Well ... you're using the wrong tool: find is for finding files. You want to use grep. A bare ps probably also isn't what you want.
ps -e | grep 'd$' > daemons
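The anchor works because each ps -e line ends with the command name; to match only the command column explicitly, a variant along these lines should also work (assuming a ps that supports -o, as CentOS's does):

```shell
# print just the command name, one per line, and keep names ending in d
ps -eo comm= | grep 'd$' > daemons
```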
I highly recommend that you familiarise yourself with the tools at your disposal before stringing them together.
Btw, what were the outputs/results of your own invocations? ;)
That's a good learning exercise, trying to understand what you actually did.

How do I "dump" the contents of an X terminal programmatically a la /dev/vcs{,a} in the Linux console?

Linux's kernel-level console/non-X terminal emulator contains a very cool feature (if compiled in): each /dev/ttyN device corresponds with /dev/vcsaN and /dev/vcsN devices which represent the in-memory (displayed) state of that tty, with and without attributes (color, flashing, etc) respectively. This allows you to very easily cat /dev/vcs7 and see a dump of /dev/tty7 wherever cat was launched. I used this incredibly practical capability the other day to login to a system via SSH and remotely watch a dd process I'd forgotten to put inside a screen (or similar) session - it was running off a text console, so I took a few moments to finetune the character ranges that I wanted to grab, and presently I was watching dd's transfer status over SSH (once every second, incidentally).
To reiterate and clarify, /dev/vcs{,a}* are character devices that retrieve the current in-memory representation of the kernel console's VT100 emulator, represented as a single "line" of text (there are no "newlines" at the end of each "line" of the screen). Just to remove confusion, I want to note that I can't tail -f this device: it's not a character stream like the TTY itself is. (But I've never needed this kind of behavior, for what it's worth.)
I've kept my ears perked for many years for a method to dump the character-cell memory state of X terminal emulators - or indeed any arbitrary process that needs to work with ttys, in some similar manner as I can with the Linux console. And... I am rather surprised that there is no practical solution to this problem - since it has, arguably, existed for approximately 30 years - X was introduced in 1984 - or, to be pedantic, at least 19 years - /dev/vcs{,a}* was introduced in kernel 1.1.94; the newest file in that release is dated 22 Feb 1995. (The oldest is from 1st Dec 1993 :P)
I would like to say that I do understand and realize that the tty itself is not a "screen buffer" as such but a character stream, and that the nonstandard feature I essentially exploited above is a quirky capability specific to the Linux VT102 emulator. However, this feature is cool enough (why else would it be in the mainline tree? :D) that, in my opinion, there should be a counterpart to it for things that work with /dev/pts*.
This afternoon, I needed to screen-scrape the output of an interactive ncurses application so I could extract metadata from the information it presented in my terminal. (There was no other practical way to achieve the goal I was aiming for.) Linux's kernel VT100 driver would permit such a task to be completed very easily, and I made the mistake of thinking that, in light of this, it couldn't truly be that hard to do the same under X11.
By 9AM, I'd decided that the easiest way to experimentally request a dump of a remote screen would be to run it in dtach (think "screen -x" without any other options) and hack the dtach code to request a screen update and quit.
Around 11AM-12PM, I was requesting screen updates and dumping them to stdout.
Around 3:30PM, I accepted that using dtach would be impossible:
First of all, it relies on the application itself to send the screen redraws on request, by design, to keep the code simple. This is great, but, as luck would have it, the application I was using didn't support whole-screen repaints - it would only redraw on screen-size change (and only if the screen size was truly different!).
Running the program inside a screen session (because screen is a true terminal emulator and has an internal 2D character-cell buffer), then running screen -x inside dtach, also mysteriously failed to produce character cell updates.
I have previously examined screen and found the code sufficiently insane to remove any inclinations I might otherwise have to hack on it; all I can say is that said insanity may be one of the reasons screen does not already have the capabilities I have presented here (which would arguably be very easy to implement).
Other questions similar to this one frequently get answers to use typescript, or script; I just want to clarify that script saves the stream of the tty itself to a file, which I would need to push through a VT100 emulator to obtain a screen image of the current state of the tty in question. In other words, script would be a very insane solution to my problem.
I'm not marking this as accepted since it doesn't solve the actual core issue (which is many years old), but I was able to achieve the specific goal I set out to do.
My specific requirement was that I wanted to screen-scrape the output of the ncdu interactive disk usage browser, so I could simply press Enter in another terminal (or perform some similarly easy sequence) to add the directory currently highlighted/selected in ncdu to a file-list of files I wanted to work with. My goal was not to have to distract myself with endless copy+paste and/or retyping of directory names (probably with not a few inaccuracies to boot), so I could focus on the directories I wanted to select.
screen has a refresh feature, accessed by pressing (by default) CTRL+A, CTRL+L. I extended my copy of dtach to be capable of sending keystrokes in addition to dumping remote screens to stdout, and wrapped dtach in a script that transmitted the refresh sequence (\001\014) to screen -x running inside dtach. This worked perfectly, retrieving complete screen updates without any flicker.
I will warn anyone interested in trying this technique, however, that you will need to perfect the art of dodging VT100 escape sequences. I used regular expressions for this so I wasn't writing thousands of lines of code; here's the specific part of the script that extracted out the two pieces of information I needed:
sh -c "(sleep 0.1; dtach -k qq $'\001\014') &"; path="$(dtach -d qq -t 130000 | sed -n $'/^\033\[7m.*\/\.\./q;/---.*$/{s/.*--- //;s/ -\+.*//;h};/^\033\[7m/{s/.\033.*//g;s/\r.*//g;s/ *$//g;s/^\033\[7m *[^ ]\+ \[[# ]*\] *\(\/*\)\(.*\)$/\/\\2\\1/;p;g;p;q}' | sed 'N;s/\(.*\)\n\(.*\)/\2\1/')"
Since screenshots are cool and help people visualize things, here's a look at how it works when it's running:
The file shown inverted at the bottom of the ncdu-scrape window is being screen-scraped from the ncdu window itself; the four files in the list are there because I selected them using the arrow keys in ncdu, moved my mouse over to the ncdu-scrape window (I use focus-follows-mouse), and pressed Enter. That added the file to the list (a simple text file itself).
Having said this, I would like to clarify that the regular expression above is not a code sample to run with; it is, rather, a warning: for anything beyond incredibly trivial (!!) content extractions such as the one presented here, you're basically getting into the same territory as the large corporations/interests who want to convert from VT100-based systems to something more modern, and who have to spend tens of thousands commissioning large translation frameworks that perform the kind of conversion outlined above on an especially large scale.
Saner solutions appreciated.

Finding non-similar lines on Solaris (or Linux) in two files

I'm trying to compare 2 files on a Solaris box and only see the lines that are not similar. I know that I can use the command given below to find lines that are not exact matches, but that isn't good enough for what I'm trying to do.
comm -3 <(sort FILE1.txt | uniq) <(sort FILE2.txt | uniq) > diff.txt
For the purposes of this question I would define similar as having the same characters ~80% of the time, while completely ignoring the locations that differ (since the sections that differ may also differ in length). The locations that differ can be assumed to occur at roughly the same point in the line. In other words, once we find a location that differs, we have to figure out where to start comparing again.
I know this is a hard problem to solve and will appreciate any help/ideas.
EDIT:
Example input 1:
Abend for SP EAOJH with account s03284fjw and client pewaj39023eipofja,level.error
Exception Invalid account type requested: 134029830198,level.fatal
Only in file 1
Example input 2:
Exception Invalid account type requested: 1307230,level.fatal
Abend for SP EREOIWS with account 32192038409aoewj and client eowaji30948209,level.error
Example output:
Only in file 1
I am also realizing that it would be ideal if the files were not read into memory all at once since they can be nearly 100 gigs. Perhaps perl would be better than bash because of this need.
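As a crude, stream-friendly sketch in that direction (not a real ~80% similarity measure: it merely masks variable-looking tokens before comparing; the 5-character threshold is an arbitrary fit for the sample lines above, and Solaris' stock sed lacks -E, so GNU sed would be needed there):

```shell
# Mask every alphanumeric run of 5+ characters so lines that differ only
# in IDs/account numbers normalize to the same string, then print the
# lines whose normalized form appears only in FILE1.txt.
norm() { sed -E 's/[[:alnum:]]{5,}/<ID>/g' "$1" | LC_ALL=C sort -u; }
norm FILE1.txt > norm1.txt
norm FILE2.txt > norm2.txt
comm -23 norm1.txt norm2.txt   # normalized lines present only in FILE1
```

This prints the normalized form of the unmatched lines (a join on the normalized key would be needed to recover the originals), but since sort spills to temporary files, the ~100 GB inputs never need to fit in memory at once.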

grep limited characters - one line

I want to look up a word in multiple files, and return only a single line per result, or a limited number of characters (40 to 80 characters, for example), and not the entire line, as by default. This is what I would like to get:
grep -sR 'wp-content' .
file_1.sql:3309:blog/wp-content
file_1.sql:3509:blog/wp-content
file_2.sql:309:blog/wp-content
Currently I see the following:
grep -sR 'wp-content' .
file_1.sql:3309:blog/wp-content-Progressively predominate impactful systems without resource-leveling best practices. Uniquely maximize virtual channels and inexpensive results. Uniquely procrastinate multifunctional leadership skills without visionary systems. Continually redefine prospective deliverables without.
file_1.sql:3509:blog/wp-content-Progressively predominate impactful systems without resource-leveling best practices. Uniquely maximize virtual channels and inexpensive results. Uniquely procrastinate multifunctional leadership skills without visionary systems. Continually redefine prospective deliverables without.
file_2.sql:309:blog/wp-content-Progressively predominate impactful systems without resource-leveling best practices. Uniquely maximize virtual channels and inexpensive results. Uniquely procrastinate multifunctional leadership skills without visionary systems. Continually redefine prospective deliverables without.
You could use a combination of grep and cut.
Using your example I would use:
grep -sRn 'wp-content' .|cut -c -40
grep -sRn 'wp-content' .|cut -c -80
That would give you the first 40 or 80 characters respectively.
Edit:
Also, there's a flag in grep that you could use:
-m NUM, --max-count=NUM
Stop reading a file after NUM matching lines.
Combining this with what I previously wrote:
grep -sRnm 1 'wp-content' .|cut -c -40
grep -sRnm 1 'wp-content' .|cut -c -80
That should give you the first time it appears per file, and only the first 40 or 80 chars.
egrep -Rso '.{0,40}wp-content.{0,40}' *.sh
This will not call the Radio-Symphonie-Orchestra, but -o(nly matching).
A maximum of 40 characters before and behind your pattern. Note: *e*grep.
If you change the regex to '^.*wp-content' you can use egrep -o. For example,
egrep -sRo '^.*wp-content' .
The -o flag makes egrep print only the portion of the line that matches. So matching from the start of the line to wp-content should yield the sample output in your first code block.
