Conky drives info: changing total bar colours under certain conditions

I have a conky script displaying drive info: percentage used,
current amount used, and total size.
I have a total bar under each drive name, and I want to change its colour
to red if usage is greater than 90%.
For the highest CPU/mem sections I run the top command for the top 4 processes and set top1 to red.
Do I need some sort of if statement to do this with the total bar for drives? I'm not sure how to do if statements in conky.
Here is my current script:
use_xft yes
xftfont 123:size=8
xftalpha 0.1
update_interval 1
total_run_times 0
own_window yes
own_window_type normal
own_window_transparent yes
own_window_hints undecorated,below,sticky,skip_taskbar,skip_pager
own_window_colour 000000
own_window_argb_visual yes
own_window_argb_value 0
double_buffer yes
#minimum_size 250 5
#maximum_width 500
draw_shades no
draw_outline yes
draw_borders no
draw_graph_borders no
default_color 000000
default_shade_color 000000
default_outline_color 99ddff
alignment top_left
gap_x 0
gap_y 320
no_buffers yes
uppercase no
cpu_avg_samples 2
net_avg_samples 1
override_utf8_locale yes
use_spacer yes
minimum_size 0 0
TEXT
${font GE Inspira:pixelsize=25}
${color }CPU: ${color }$cpu%
${color 99ddff}${cpubar 5,150}
${color }MEM: ${color }$memperc% $mem/$memmax
${color 99ddff}${membar 5,150}
${color }SWAP: ${color }$swapperc% $swap/$swapmax
${color 99ddff}${swapbar 5,150}
${color }ROOT: ${color }${fs_used_perc /}% ${fs_free /}/${fs_size /}
${color 99ddff}${fs_bar 5,150 /}
${color }HOME: ${color }${fs_used_perc /home/brian/}% ${fs_free /home/brian/}/${fs_size /home/brian/}
${color 99ddff}${fs_bar 5,150 /home/brian/}
${color }MOVIES: ${color }${fs_used_perc /media/brian/Movies/}% ${fs_free /media/brian/Movies/}/${fs_size /media/brian/Movies/}
${color 99ddff}${fs_bar 5,150 /media/brian/Movies}
${color }ANIME: ${color }${fs_used_perc /media/brian/Anime/}% ${fs_free /media/brian/Anime/}/${fs_size /media/brian/Anime/}
${color 99ddff}${fs_bar 5,150 /media/brian/Anime}
${color }TV SHOWS: ${color }${fs_used_perc /media/brian/Tv Shows/}% ${fs_free /media/brian/Tv Shows/}/${fs_size /media/brian/Tv Shows/}
${color 99ddff}${fs_bar 5,150 /media/brian/Tv Shows}

The Conky object if_match is the key. For example:
${if_match ${fs_used_perc /media/brian/Tv Shows/} > 90}${color red}${else}${color 99ddff}${endif}${fs_bar 5,150 /media/brian/Tv Shows}
See man conky for more info.
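Applied to the ROOT bar from the script above, the two lines would become (a sketch based on the answer; repeat the same ${if_match} pattern for each mount point):

```
${color }ROOT: ${color }${fs_used_perc /}% ${fs_free /}/${fs_size /}
${if_match ${fs_used_perc /} > 90}${color red}${else}${color 99ddff}${endif}${fs_bar 5,150 /}
```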


How to log just commands typed by a user using auditd and ignore system calls?

Using Auditd, I performed the following configuration:
# /etc/audit/rules.d/audit.rules
[...]
-a always,exit -F arch=b64 -S execve
-a always,exit -F arch=b32 -S execve
It works; however, it ends up generating too many events for a single command executed by the user.
I just need the SYSCALL, EXECVE, and CWD records for the typed command, but all the commands executed behind the scenes are also being logged.
For example:
$ hostnamectl
# /var/log/auditd/auditd.log
[...]
# Logs I want:
type=SYSCALL msg=audit(1676405948.076:1109891): arch=c000003e syscall=59 success=yes exit=0 a0=55751f25f240 a1=55751f2807c0 a2=55751f12a150 a3=8 items=2 ppid=8200 pid=8528 auid=1002 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts3 ses=45102 comm="hostnamectl" exe="/usr/bin/hostnamectl" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=(null)ARCH=x86_64 SYSCALL=execve AUID="myUser" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root"
type=EXECVE msg=audit(1676405948.076:1109891): argc=1 a0="hostnamectl"
type=CWD msg=audit(1676405948.076:1109891): cwd="/home/myUser"
type=SYSCALL msg=audit(1676405948.381:1109892): arch=c000003e syscall=59 success=yes exit=0 a0=5622b4642810 a1=5622b467fb70 a2=5622b4798820 a3=5622b4766810 items=2 ppid=1 pid=8529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-hostnam" exe="/usr/lib/systemd/systemd-hostnamed" subj=system_u:system_r:systemd_hostnamed_t:s0 key=(null)ARCH=x86_64 SYSCALL=execve AUID="unset" UID="root" GID="root" EUID="root" SUID="root" FSUID="root" EGID="root" SGID="root" FSGID="root"
# Logs I want to discard
type=BPRM_FCAPS msg=audit(1676405948.381:1109892): fver=0 fp=0000000000000000 fi=0000000000000000 fe=0 old_pp=0000003fffffffff old_pi=0000000000000000 old_pe=0000003fffffffff old_pa=0000000000000000 pp=0000000000200000 pi=0000000000000000 pe=0000000000200000 pa=0000000000000000
type=EXECVE msg=audit(1676405948.381:1109892): argc=1 a0="/usr/lib/systemd/systemd-hostnamed"
type=CWD msg=audit(1676405948.381:1109892): cwd="/"
type=SERVICE_START msg=audit(1676405948.388:1109893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'UID="root" AUID="unset"
[...]
Is there some filter I can apply to log only information about the typed command?
My current audit.rules configuration:
## First rule - delete all
-D
## Increase the buffers to survive stress events.
## Make this bigger for busy systems
-b 8192
## This determines how long to wait during a burst of events
--backlog_wait_time 60000
## Set failure mode to syslog
-f 1
## Ignore PATH and PROCTITLE records
-a always,exclude -F msgtype=PATH
-a always,exclude -F msgtype=PROCTITLE
## Cron jobs fill the logs with stuff we normally don't want (works with SELinux)
-a never,user -F subj_type=crond_t
-a exit,never -F subj_type=crond_t
-a exit,always -F arch=b64 -S execve
-a exit,always -F arch=b32 -S execve
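One common approach (a sketch, not verified on your distribution): restrict the execve rules themselves so they only match real login sessions, by filtering on the audit UID. Processes spawned by daemons carry an unset auid (4294967295), as the systemd-hostnamed records in your sample do, so they would not be logged at all:

```
## Log execve only for commands run in a real login session
-a always,exit -F arch=b64 -S execve -F auid!=4294967295
-a always,exit -F arch=b32 -S execve -F auid!=4294967295
```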

How to forcefully copy a file from HDFS to the Linux file system?

For the -copyFromLocal command there is a -f option which forcefully copies the data from the local file system to HDFS. I tried the -f option with -copyToLocal as well, but it didn't work. Can anyone please guide me on that?
Thanks,
Karthik
There is no such -f option for -copyToLocal:
$ hadoop fs -help
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] <path> ...]
[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-expunge]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
Please refer to the Hadoop HDFS commands documentation for more info.

How can I sort the content of a file by date?

I have a file with the following content:
linux-4.4.1.tar.gz 31-Jan-2016 19:34 127M
linux-4.4.2.tar.gz 17-Feb-2016 20:35 127M
linux-4.4.3.tar.gz 25-Feb-2016 20:13 127M
linux-4.4.4.tar.gz 03-Mar-2016 23:16 127M
linux-4.4.5.tar.gz 09-Mar-2016 23:44 127M
linux-4.4.6.tar.gz 16-Mar-2016 16:28 127M
linux-4.4.7.tar.gz 12-Apr-2016 16:13 127M
linux-4.4.8.tar.gz 20-Apr-2016 07:00 127M
linux-4.4.tar.gz 10-Jan-2016 23:12 127M
linux-4.5.1.tar.gz 12-Apr-2016 16:08 128M
linux-4.5.2.tar.gz 20-Apr-2016 07:00 128M
linux-4.5.tar.gz 14-Mar-2016 04:38 128M
and I would like to get this content sorted by date, but I'm not sure how to do that. So far I only have the following code to convert the dates for a comparison, but I'm not sure how to use it in a bash script to sort the file:
date -d 20-Apr-2016 +"%Y%m%d"
Schwartzian transform:
while read -r line; do
d=$(date -d "${line:24:11}" +"%Y%m%d")
echo "$d $line"
done < file | sort -k1,1n | cut -d " " -f 2-
Output:
linux-4.4.tar.gz 10-Jan-2016 23:12 127M
linux-4.4.1.tar.gz 31-Jan-2016 19:34 127M
linux-4.4.2.tar.gz 17-Feb-2016 20:35 127M
linux-4.4.3.tar.gz 25-Feb-2016 20:13 127M
linux-4.4.4.tar.gz 03-Mar-2016 23:16 127M
linux-4.4.5.tar.gz 09-Mar-2016 23:44 127M
linux-4.5.tar.gz 14-Mar-2016 04:38 128M
linux-4.4.6.tar.gz 16-Mar-2016 16:28 127M
linux-4.4.7.tar.gz 12-Apr-2016 16:13 127M
linux-4.5.1.tar.gz 12-Apr-2016 16:08 128M
linux-4.4.8.tar.gz 20-Apr-2016 07:00 127M
linux-4.5.2.tar.gz 20-Apr-2016 07:00 128M
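A variant of the same transform that splits on whitespace fields instead of fixed string offsets, so it keeps working when file names change length (a sketch assuming GNU date and the hypothetical helper name sort_by_date):

```shell
# field-based version of the loop above; input lines are "name date time size"
sort_by_date() {
    while read -r name d t size; do
        # prefix each line with an epoch timestamp, sort on it, then strip it
        printf '%s\t%s %s %s %s\n' "$(date -d "$d $t" +%s)" "$name" "$d" "$t" "$size"
    done | sort -n | cut -f2-
}
```

Use it as `sort_by_date < file`. Since it converts the time of day too, it also orders same-day entries by time, which the %Y%m%d key alone does not.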
If you are open to Perl, the Schwartzian transform is best implemented there. This uses a core module, so there is no need to install anything from CPAN.
perl -MTime::Piece -lane'
push @rows, [ $_, join (" ", $F[1], $F[2]) ];
}{
print for
map { $_->[0] }
sort {
Time::Piece->strptime($a->[1], "%d-%b-%Y %R") <=>
Time::Piece->strptime($b->[1], "%d-%b-%Y %R")
}
map { [ $_->[0], $_->[1] ] } @rows;
' file
linux-4.4.tar.gz 10-Jan-2016 23:12 127M
linux-4.4.1.tar.gz 31-Jan-2016 19:34 127M
linux-4.4.2.tar.gz 17-Feb-2016 20:35 127M
linux-4.4.3.tar.gz 25-Feb-2016 20:13 127M
linux-4.4.4.tar.gz 03-Mar-2016 23:16 127M
linux-4.4.5.tar.gz 09-Mar-2016 23:44 127M
linux-4.5.tar.gz 14-Mar-2016 04:38 128M
linux-4.4.6.tar.gz 16-Mar-2016 16:28 127M
linux-4.5.1.tar.gz 12-Apr-2016 16:08 128M
linux-4.4.7.tar.gz 12-Apr-2016 16:13 127M
linux-4.4.8.tar.gz 20-Apr-2016 07:00 127M
linux-4.5.2.tar.gz 20-Apr-2016 07:00 128M
If you are comfortable using GNU AWK, then a script like this would work:
conv_date.awk
BEGIN { # sort array numerically
PROCINFO["sorted_in"] = "@ind_num_asc"
# prepare a mapping month name to month-number:
split("JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC", tmp," ")
for( ind in tmp ) { monthMap [ tmp[ ind ] ] = ind }
}
{ split( $2, dt, /[-]/)
ts = mktime( dt[3] " " monthMap[ toupper( dt[2]) ] " " dt[1] " 0 0 0" )
if (ts in lines) lines[ts] = lines[ts] "\n" $0
else lines[ts] = $0
}
END { # output in chronological order
for( l in lines ) print lines[ l ]
}
Use it like this: awk -f conv_date.awk your_file
Save the script
#!/bin/bash
reorder()
{
awk '{printf "%s %s %s %s\n",$2,$3,$1,$4}' "$1" \
| sort -t'-' -k2 -M \
| awk '{printf "%s %s %s %s\n",$3,$1,$2,$4}' # you can omit this pipe
}
reorder "$1"
as sortcontent.sh and run it like
./sortcontent.sh your_file_name
Note that sort -M keys on the month name only (falling back to a whole-line comparison), so this assumes all entries share the same year.

How to find a Linux module's path

In Linux, lsmod lists a lot of modules, but how can we find where those modules were loaded from?
For some modules, the Linux command "modprobe -l" shows a path, but for some it doesn't.
Edit: I also tried "find" and "locate", but both of them list all kinds of versions:
locate fake
/svf/SVDrv/kernel/linux/.fake.ko.cmd
/svf/SVDrv/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv/kernel/linux/.fake.o.cmd
/svf/SVDrv/kernel/linux/fake.ko
/svf/SVDrv/kernel/linux/fake.mod.o
/svf/SVDrv/kernel/linux/fake.o
/svf/SVDrv.03.11.2014.16.00/kernel/linux/.fake.ko.cmd
/svf/SVDrv.03.11.2014.16.00/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.03.11.2014.16.00/kernel/linux/.fake.o.cmd
/svf/SVDrv.03.11.2014.16.00/kernel/linux/fake.ko
/svf/SVDrv.03.11.2014.16.00/kernel/linux/fake.mod.o
/svf/SVDrv.03.11.2014.16.00/kernel/linux/fake.o
/svf/SVDrv.04.29.2014.17.39/kernel/linux/.fake.ko.cmd
/svf/SVDrv.04.29.2014.17.39/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.04.29.2014.17.39/kernel/linux/.fake.o.cmd
/svf/SVDrv.04.29.2014.17.39/kernel/linux/fake.ko
/svf/SVDrv.04.29.2014.17.39/kernel/linux/fake.mod.o
/svf/SVDrv.04.29.2014.17.39/kernel/linux/fake.o
/svf/SVDrv.05.05.2014.11.25/kernel/linux/.fake.ko.cmd
/svf/SVDrv.05.05.2014.11.25/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.05.05.2014.11.25/kernel/linux/.fake.o.cmd
/svf/SVDrv.05.05.2014.11.25/kernel/linux/fake.ko
/svf/SVDrv.05.05.2014.11.25/kernel/linux/fake.mod.o
/svf/SVDrv.05.05.2014.11.25/kernel/linux/fake.o
/svf/SVDrv.05.05.2014.17.43/kernel/linux/.fake.ko.cmd
/svf/SVDrv.05.05.2014.17.43/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.05.05.2014.17.43/kernel/linux/.fake.o.cmd
/svf/SVDrv.05.05.2014.17.43/kernel/linux/fake.ko
/svf/SVDrv.05.05.2014.17.43/kernel/linux/fake.mod.o
/svf/SVDrv.05.05.2014.17.43/kernel/linux/fake.o
/svf/SVDrv.05.07.2014.14.59/kernel/linux/.fake.ko.cmd
/svf/SVDrv.05.07.2014.14.59/kernel/linux/.fake.mod.o.cmd
/svf/SVDrv.05.07.2014.14.59/kernel/linux/.fake.o.cmd
/svf/SVDrv.05.07.2014.14.59/kernel/linux/fake.ko
/svf/SVDrv.05.07.2014.14.59/kernel/linux/fake.mod.o
/svf/SVDrv.05.07.2014.14.59/kernel/linux/fake.o
Sorry if the answer comes a bit late, but I just stumbled across this particular question myself today...
To minimize manual labor, here is my listing of the paths the currently loaded modules were loaded from:
awk '{ print $1 }' /proc/modules | xargs modinfo -n | sort
I needed this to create a minimal kernel image containing only the modules I really need.
Unfortunately, lsmod only displays the name field, which does not always match the module's file name (e.g. phy-am335x-control.ko and phy_am335x_control).
I hope this helps.
You can use the "locate" or "find" command on these modules to find where they are. For example:
[root@localhost core_src]# lsmod
Module Size Used by
iptable_filter 2793 0
ipt_MASQUERADE 2466 1
iptable_nat 6158 1
vmware_balloon 7199 0
i2c_piix4 12608 0
i2c_core 31276 1 i2c_piix4
shpchp 33482 0
ext4 371331 2
mbcache 8144 1 ext4
jbd2 93312 1 ext4
sd_mod 39488 4
crc_t10dif 1541 1 sd_mod
sr_mod 16228 0
cdrom 39803 1 sr_mod
mptspi 17051 3
mptscsih 36828 1 mptspi
mptbase 94005 2 mptspi,mptscsih
scsi_transport_spi 26151 1 mptspi
pata_acpi 3701 0
ata_generic 3837 0
ata_piix 22846 0
dm_mirror 14101 0
dm_region_hash 12170 1 dm_mirror
dm_log 10122 2 dm_mirror,dm_region_hash
dm_mod 81692 2 dm_mirror,dm_log
[root@localhost core_src]# locate vmware_balloon
/lib/modules/2.6.32-279.el6.x86_64/kernel/drivers/misc/vmware_balloon.ko
Get the paths from the list of loaded modules, without the need for awk:
while IFS= read -r line;
do modinfo -n "${line%% *}"
done < /proc/modules | sort

Best way to divide in bash using pipes?

I'm just looking for an easy way to divide a number (or provide other math functions). Let's say I have the following command:
find . -name '*.mp4' | wc -l
How can I take the result of wc -l and divide it by 3?
The examples I've seen don't deal with redirected output/input.
Using bc:
$ bc -l <<< "scale=2;$(find . -name '*.mp4' | wc -l)/3"
2.33
In contrast, the bash shell only performs integer arithmetic.
Awk is also very powerful:
$ find . -name '*.mp4' | wc -l | awk '{print $1/3}'
2.33333
You don't even need wc if using awk:
$ find . -name '*.mp4' | awk 'END {print NR/3}'
2.33333
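If you need the division in several places, the awk call can be wrapped in a tiny reusable function (the helper name div is my own choice, not from the question):

```shell
# divide two numbers using awk's floating point, rounded to 2 decimals
div() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.2f\n", a / b }'; }

div "$(find . -name '*.mp4' | wc -l)" 3
```

For example, `div 7 3` prints 2.33.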
Edit 2018-02-22: Adding shell connector
There is more than one way, depending on the precision required and the number of calculations to be done (see the shell connector further on).
Using bc (an arbitrary-precision calculator)
find . -type f -name '*.mp4' -printf \\n | wc -l | xargs printf "%d/3\n" | bc -l
6243.33333333333333333333
or
echo $(find . -name '*.mp4' -printf \\n | wc -l)/3 | bc -l
6243.33333333333333333333
or using bash, with an integer-only result:
echo $(($(find . -name '*.mp4' -printf \\n| wc -l)/3))
6243
Using bash's builtin integer math processor
res=000$((($(find . -type f -name '*.mp4' -printf "1+")0)*1000/3))
printf -v res "%.2f" ${res:0:${#res}-3}.${res:${#res}-3}
echo $res
6243.33
Pure bash
With a recent 64-bit bash, you could even use @glennjackman's idea of using globstar, and pseudo floating point could be computed by:
shopt -s globstar
files=(**/*.mp4)
shopt -u globstar
res=$((${#files[*]}000/3))
printf -v res "%.2f" ${res:0:${#res}-3}.${res:${#res}-3}
echo $res
6243.33
There is no fork, and $res contains a floating value rounded to two digits.
Note: watch out for symlinks when using globstar and **!
Introducing the shell connector
If you plan to do a lot of calculations, require high precision, and use bash, you could use a long-running bc subprocess:
mkfifo /tmp/mybcfifo
exec 5> >(exec bc -l >/tmp/mybcfifo)
exec 6</tmp/mybcfifo
rm /tmp/mybcfifo
Then:
echo >&5 '12/34'
read -u 6 result
echo $result
.35294117647058823529
This subprocess stays open and usable:
ps --sid $(ps ho sid $$) fw
PID TTY STAT TIME COMMAND
18027 pts/9 Ss 0:00 bash
18258 pts/9 S 0:00 \_ bc -l
18789 pts/9 R+ 0:00 \_ ps --sid 18027 fw
Computing $PI:
echo >&5 '4*a(1)'
read -u 6 PI
echo $PI
3.14159265358979323844
To terminate the subprocess:
exec 6<&-
exec 5>&-
A little demo about the best way to divide in bash using pipes!
Computing the range {1..157} / 42 (I will let you google the answer to the ultimate question of life, the universe, and everything ;)
... and printing 13 results per line to reduce the output:
printf -v form "%s" "%5.3f "{,}{,}{,,};form+="%5.3f\n";
The regular way:
testBc(){
for ((i=1; i<157; i++)) ;do
echo $(bc -l <<<"$i/42");
done
}
Using the long-running bc subprocess:
testLongBc(){
mkfifo /tmp/mybcfifo;
exec 5> >(exec bc -l >/tmp/mybcfifo);
exec 6< /tmp/mybcfifo;
rm /tmp/mybcfifo;
for ((i=1; i<157; i++)) ;do
echo "$i/42" 1>&5;
read -u 6 result;
echo $result;
done;
exec 6>&-;
exec 5>&-
}
First, without the subprocess:
time printf "$form" $(testBc)
0.024 0.048 0.071 0.095 0.119 0.143 0.167 0.190 0.214 0.238 0.262 0.286 0.310
0.333 0.357 0.381 0.405 0.429 0.452 0.476 0.500 0.524 0.548 0.571 0.595 0.619
0.643 0.667 0.690 0.714 0.738 0.762 0.786 0.810 0.833 0.857 0.881 0.905 0.929
0.952 0.976 1.000 1.024 1.048 1.071 1.095 1.119 1.143 1.167 1.190 1.214 1.238
1.262 1.286 1.310 1.333 1.357 1.381 1.405 1.429 1.452 1.476 1.500 1.524 1.548
1.571 1.595 1.619 1.643 1.667 1.690 1.714 1.738 1.762 1.786 1.810 1.833 1.857
1.881 1.905 1.929 1.952 1.976 2.000 2.024 2.048 2.071 2.095 2.119 2.143 2.167
2.190 2.214 2.238 2.262 2.286 2.310 2.333 2.357 2.381 2.405 2.429 2.452 2.476
2.500 2.524 2.548 2.571 2.595 2.619 2.643 2.667 2.690 2.714 2.738 2.762 2.786
2.810 2.833 2.857 2.881 2.905 2.929 2.952 2.976 3.000 3.024 3.048 3.071 3.095
3.119 3.143 3.167 3.190 3.214 3.238 3.262 3.286 3.310 3.333 3.357 3.381 3.405
3.429 3.452 3.476 3.500 3.524 3.548 3.571 3.595 3.619 3.643 3.667 3.690 3.714
real 0m10.113s
user 0m0.900s
sys 0m1.290s
Wow! Ten seconds on my raspberry-pi!!
Then, with it:
time printf "$form" $(testLongBc)
0.024 0.048 0.071 0.095 0.119 0.143 0.167 0.190 0.214 0.238 0.262 0.286 0.310
0.333 0.357 0.381 0.405 0.429 0.452 0.476 0.500 0.524 0.548 0.571 0.595 0.619
0.643 0.667 0.690 0.714 0.738 0.762 0.786 0.810 0.833 0.857 0.881 0.905 0.929
0.952 0.976 1.000 1.024 1.048 1.071 1.095 1.119 1.143 1.167 1.190 1.214 1.238
1.262 1.286 1.310 1.333 1.357 1.381 1.405 1.429 1.452 1.476 1.500 1.524 1.548
1.571 1.595 1.619 1.643 1.667 1.690 1.714 1.738 1.762 1.786 1.810 1.833 1.857
1.881 1.905 1.929 1.952 1.976 2.000 2.024 2.048 2.071 2.095 2.119 2.143 2.167
2.190 2.214 2.238 2.262 2.286 2.310 2.333 2.357 2.381 2.405 2.429 2.452 2.476
2.500 2.524 2.548 2.571 2.595 2.619 2.643 2.667 2.690 2.714 2.738 2.762 2.786
2.810 2.833 2.857 2.881 2.905 2.929 2.952 2.976 3.000 3.024 3.048 3.071 3.095
3.119 3.143 3.167 3.190 3.214 3.238 3.262 3.286 3.310 3.333 3.357 3.381 3.405
3.429 3.452 3.476 3.500 3.524 3.548 3.571 3.595 3.619 3.643 3.667 3.690 3.714
real 0m0.670s
user 0m0.190s
sys 0m0.070s
Less than one second!!
The results are the same, but the execution times are very different!
My shell connector
I've published a connector function: Connector-bash on GitHub.com
and shell_connector.sh on my own site.
source shell_connector.sh
newConnector /usr/bin/bc -l 0 0
myBc 1764/42 result
echo $result
42.00000000000000000000
find . -name '*.mp4' | wc -l | xargs -I{} expr {} / 2
This is best used if you have multiple outputs you'd like to pipe through xargs. Use {} as a placeholder for the expression term. Note that expr performs integer division only.
Depending on your bash version, you don't even need find for this simple task:
shopt -s nullglob globstar
files=( **/*.mp4 )
dc -e "3 k ${#files[@]} 3 / p"
This method also correctly handles the bizarre edge case of file names containing newlines.
