Periodic function generator for Linux

In my work I need samples of mathematical functions in the form of text streams. For example, I need a program which generates values of the sine function at discrete time points and prints them to stdout. Then I need to combine these samples in some way, for example sum two samples shifted by some phase. So I can split my question in two:
Is there a pretty standard way to generate samples of a mathematical function, such as sine, with given parameters – frequency, phase, amplitude, time step – in the form of a simple text stream with two columns: time and function value? I know that a simple script in Perl/Tcl can do this work, but I'd like to know the generic solution.
What programs can manipulate these streams? I know about awk, but what can I do with awk when I have several streams as input? For example, how can I make a sum or product of two or three sine samples?
I'm using Debian Linux and I prefer The Unix Way, where each program does only a simple task and does it perfectly, and the results of separate programs can be combined by another program.
Thanks.

You can do simple numeric calculations with bc. See the man page. More complicated calculations can be done with octave, which is a free Matlab clone.
For example this calculates the values of an interval:
$ octave -q --eval 'printf ("%f\n", [0:0.1:pi/2])'|nl|tee x.txt
1 0.000000
2 0.100000
3 0.200000
4 0.300000
5 0.400000
6 0.500000
7 0.600000
8 0.700000
9 0.800000
10 0.900000
11 1.000000
12 1.100000
13 1.200000
14 1.300000
15 1.400000
16 1.500000
This calculates the sin values:
$ octave -q --eval 'printf ("%f\n", sin([0:0.1:pi/2]))'|nl|tee y.txt
1 0.000000
2 0.099833
3 0.198669
4 0.295520
5 0.389418
6 0.479426
7 0.564642
8 0.644218
9 0.717356
10 0.783327
11 0.841471
12 0.891207
13 0.932039
14 0.963558
15 0.985450
16 0.997495
And the join command can be used to join the two files:
$ join -1 1 -2 1 -o 1.2,2.2 x.txt y.txt
0.000000 0.000000
0.100000 0.099833
0.200000 0.198669
0.300000 0.295520
0.400000 0.389418
0.500000 0.479426
0.600000 0.564642
0.700000 0.644218
0.800000 0.717356
0.900000 0.783327
1.000000 0.841471
1.100000 0.891207
1.200000 0.932039
1.300000 0.963558
1.400000 0.985450
1.500000 0.997495
But it is probably better to stay in Octave for the whole computation:
$ octave -q --eval 'for x = .1:0.1:pi/2 ; printf ("%f %f\n", x, sin(x)); end'
0.100000 0.099833
0.200000 0.198669
0.300000 0.295520
0.400000 0.389418
0.500000 0.479426
0.600000 0.564642
0.700000 0.644218
0.800000 0.717356
0.900000 0.783327
1.000000 0.841471
1.100000 0.891207
1.200000 0.932039
1.300000 0.963558
1.400000 0.985450
1.500000 0.997495

General text manipulation programs that would be useful:
paste or join (Merging two files together)
combine (Perform set-like operations on lines in files)
colrm (Remove columns)
sort (General sorting)
sed (Search and replace, and other ed commands)
grep (Searching)
awk (General text manipulation)
tee (A T-junction. Though if you need this you're probably doing something too complex and should break it down.)
I see no problem with using a Perl script to generate the values. Using a bc script would of course also be an option.
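For example, a minimal awk sketch (the parameter names f, a, ph, dt and n here are just illustrative, not any standard interface): the first two commands each write a two-column time/value stream, and paste plus a second awk sums them:
$ awk -v f=1 -v a=1 -v ph=0 -v dt=0.1 -v n=16 'BEGIN { pi=atan2(0,-1); for (i=0; i<n; i++) { t=i*dt; printf "%f %f\n", t, a*sin(2*pi*f*t+ph) } }' > s1.txt
$ awk -v f=2 -v a=0.5 -v ph=1.570796 -v dt=0.1 -v n=16 'BEGIN { pi=atan2(0,-1); for (i=0; i<n; i++) { t=i*dt; printf "%f %f\n", t, a*sin(2*pi*f*t+ph) } }' > s2.txt
$ paste s1.txt s2.txt | awk '{ printf "%f %f\n", $1, $2 + $4 }'
paste puts the two streams side by side (t v1 t v2), and the last awk prints t and v1+v2; use $2 * $4 for a product, or paste three files and print $2 + $4 + $6.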

Did you have a look at bc?
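It is worth a look: bc -l loads the math library, which provides s() (sine) and a() (arctangent), so a two-column stream can be produced directly. A minimal sketch, assuming GNU bc (the print statement and ++ are GNU extensions):
$ echo 'scale=6; pi=4*a(1); for (i=0; i<16; i++) { t=i/10; print t, " ", s(pi*t), "\n" }' | bc -l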

Related

Bokeh Dodge Chart using Different Pandas DataFrame

Hi everyone! So I have 2 dataframes extracted from Pro-Football-Reference as CSV and read into Pandas with the aid of StringIO.
I'm pasting only the header and a row of the info right below:
data_1999 = StringIO("""Tm,W,L,W-L%,PF,PA,PD,MoV,SoS,SRS,OSRS,DSRS
Indianapolis Colts,13,3,.813,423,333,90,5.6,0.5,6.1,6.6,-0.5""")
data = StringIO("""Tm,W,L,T,WL%,PF,PA,PD,MoV,SoS,SRS,OSRS,DSRS
Indianapolis Colts,10,6,0,.625,433,344,89,5.6,-2.2,3.4,3.9,-0.6""")
And then interpreted normally using pandas.read_csv, creating 2 different dataframes called df_nfl_1999 and df_nfl respectively.
So I was trying to use Bokeh and do something like here, except instead of 'apples' and 'pears' it would be the team names doing the main grouping. I tried to emulate it using only the Pandas DataFrame info:
p9=figure(title='Comparison 1999 x 2018',background_fill_color='#efefef',x_range=df_nfl_1999['Tm'])
p9.xaxis.axis_label = 'Team'
p9.yaxis.axis_label = 'Variable'
p9.vbar(x=dodge(df_nfl_1999['Tm'],0.0,range=p9.x_range),top=df_nfl_1999['PF'],legend='PF in 1999', width=0.3)
p9.vbar(x=dodge(df_nfl_1999['Tm'],0.25,range=p9.x_range),top=df_nfl['PF'],legend='PF in 2018', width=0.3, color='#A6CEE3')
show(p9)
And the error I got was:
ValueError: expected an element of either String, Dict(Enum('expr',
'field', 'value', 'transform'), Either(String, Instance(Transform),
Instance(Expression), Float)) or Float, got {'field': 0
Washington Redskins
My initial idea was to group by team name (df_nfl['Tm']), analyzing the points in favor in each year (so df_nfl['PF'] for 2018 and df_nfl_1999['PF'] for 1999). A simple offset of the columns could solve it, but I can't seem to find a way to do this other than the dodge chart, and it's not really working (I'm a newbie).
By the way, the error is reported as happening on this line:
p9.vbar(x=dodge(df_nfl_1999['Tm'],0.0,range=p9.x_range),top=df_nfl_1999['PF'],legend='PF in 1999', width=0.3)
I could use a scatter plot, for example, and both charts would coexist, and in some cases overlap (if the data is the same), but I was really aiming at plotting them side by side. The other answers related to this subject usually rely on older versions of Bokeh with deprecated functions.
Any way I can solve this? Thanks!
Edit:
Here is the output of the .head() method. The other dataframe has exactly the same categories, columns and rows, except that obviously the data changes since it's from a different season.
Tm W L W-L% PF PA PD MoV SoS SRS OSRS \
0 Washington Redskins 10 6 0.625 443 377 66 4.1 -1.3 2.9 6.8
1 Dallas Cowboys 8 8 0.500 352 276 76 4.8 -1.6 3.1 -0.3
2 New York Giants 7 9 0.438 299 358 -59 -3.7 0.7 -3.0 -1.8
3 Arizona Cardinals 6 10 0.375 245 382 -137 -8.6 -0.2 -8.8 -5.5
4 Philadelphia Eagles 5 11 0.313 272 357 -85 -5.3 1.1 -4.2 -3.3
DSRS
0 -3.9
1 3.4
2 -1.2
3 -3.2
4 -0.9
And executing dodge without the value argument returns:
dodge() missing 1 required positional argument: 'value'
By adding the argument value=0.0 or value=0.2, the error returned is the same as in the original post.
The first argument to dodge should be the name of a single column in a ColumnDataSource. The effect is then that any values from that column are dodged by the specified amount when used as coordinates.
You are trying to pass the contents of a column, which is not what is expected. It's hard to say for sure without complete code to test, but you most likely want
x=dodge('Tm', ...)
However, you will also need to actually use an explicit Bokeh ColumnDataSource and pass it as source to vbar, as is done in the example you link. You can construct one explicitly, but often you can also just pass the dataframe directly with source=df, and it will be adapted.
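A minimal sketch of that pattern (the column names Tm and PF come from your frames; PF_1999 and PF_2018 are made-up names used only inside the source; legend_label is the keyword in recent Bokeh releases, older ones use legend):
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, show
from bokeh.transform import dodge

# One source holding both seasons; this assumes the two frames list the
# teams in the same order (otherwise merge them on 'Tm' first).
source = ColumnDataSource(data=dict(
    Tm=df_nfl_1999['Tm'],
    PF_1999=df_nfl_1999['PF'],
    PF_2018=df_nfl['PF'],
))

p = figure(x_range=list(df_nfl_1999['Tm']), title='Comparison 1999 x 2018',
           background_fill_color='#efefef')

# dodge() takes the *name* of a column in the source, not the column itself.
p.vbar(x=dodge('Tm', -0.15, range=p.x_range), top='PF_1999', width=0.3,
       source=source, legend_label='PF in 1999')
p.vbar(x=dodge('Tm', 0.15, range=p.x_range), top='PF_2018', width=0.3,
       source=source, legend_label='PF in 2018', color='#A6CEE3')
show(p)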

Sort range Linux

Hi everyone. I have some questions about sorting in bash. I am working with Ubuntu 14.04.
The first question is: why, if I have a file some.txt with this content:
b 8
b 9
a 8
a 9
and I type this:
sort -n -k 2 some.txt
the result will be:
a 8
b 8
a 9
b 9
which means that the file is sorted first by the second field and then by the first field, but I thought it would stay stable, i.e.
b 8
a 8
...
...
Maybe when two rows have equal keys a lexicographic sort is applied, or what?
The second question is: why doesn't the following work:
sort -n -k 1,2 try.txt
The file try.txt is like this:
8 2
8 11
8 0
8 5
9 2
9 0
The third question is not actually about sorting, but it comes up when I try to do this:
sort blank.txt > blank.txt
After this, the blank.txt file is empty. Why is that?
Apparently GNU sort is not stable by default: add the -s option
Finally, as a last resort when all keys compare equal, sort compares entire lines as if no ordering options other than --reverse (-r) were specified. The --stable (-s) option disables this last-resort comparison so that lines in which all fields compare equal are left in their original relative order.
(https://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html)
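For example, with the some.txt from the question, this should give the expected order:
$ sort -s -n -k 2 some.txt
b 8
a 8
b 9
a 9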
There's no way to answer your question if you don't show the text file
Redirections are handled by the shell before it hands control to the program. The > redirection truncates the file if it already exists. After that, you are giving an empty file to sort.
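If the goal was to sort a file in place, sort's own -o option is the usual way, since sort is documented to read all of its input before opening the file named by -o; otherwise go through a temporary file:
$ sort -o blank.txt blank.txt
$ sort blank.txt > blank.sorted && mv blank.sorted blank.txt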
For #2, you don't actually explain what's not working. Expanding your sample data, this happens:
$ cat try.txt
8 2
8 11
9 2
9 0
11 11
11 2
$ sort -n -k 1,2 try.txt
8 11
8 2
9 0
9 2
11 11
11 2
I assume you want to know why the 2nd column is not sorted numerically. Let's go back to the sort manual:
‘-n’
‘--numeric-sort’
‘--sort=numeric’
Sort numerically. The number begins each line and consists of ...
Looks like using -n only sorts the first column numerically. After some trial and error, I found this combination that sorts each column numerically:
$ sort -k1,1n -k2,2n try.txt
8 2
8 11
9 0
9 2
11 2
11 11

sort multiple column file

I have a file a.dat as following.
1 0.246102 21 1 0.0408359 0.00357267
2 0.234548 21 2 0.0401056 0.00264361
3 0.295771 21 3 0.0388905 0.00305116
4 0.190543 21 4 0.0371858 0.00427217
5 0.160047 21 5 0.0349674 0.00713894
I want to sort the file according to the values in the second column, i.e. the output should look like:
5 0.160047 21 5 0.0349674 0.00713894
4 0.190543 21 4 0.0371858 0.00427217
2 0.234548 21 2 0.0401056 0.00264361
1 0.246102 21 1 0.0408359 0.00357267
3 0.295771 21 3 0.0388905 0.00305116
How can I do this from the command line? I read that the sort command can be used for this purpose, but I could not figure out how to use it.
Use sort -k to indicate the column you want to use:
$ sort -k2 file
5 0.160047 21 5 0.0349674 0.00713894
4 0.190543 21 4 0.0371858 0.00427217
2 0.234548 21 2 0.0401056 0.00264361
1 0.246102 21 1 0.0408359 0.00357267
3 0.295771 21 3 0.0388905 0.00305116
That does it in this case.
For future reference, note (as indicated by 1_CR) that you can also indicate the range of columns to be used with sort -k2,2 (just use column 2) or sort -k2,5 (from column 2 to 5), etc.
Note that you need to specify the start and end fields for sorting (2 and 2 in this case), and if you need numeric sorting, add n.
sort -k2,2n file.txt

replacing specific lines below the line containing a certain string using sed inplace editing in linux

I am trying to script the automatic editing of an input file, which is as follows:
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
$# cid title
$# ssid msid sstyp mstyp sboxid mboxid spr mpr
1 2 3 3 0 0 0 0
$# fs fd dc vc vdc penchk bt dt
0.0100 0.000 0.000 0.000 0.000 0 0.000 1.0000E+7
$# sfs sfm sst mst sfst sfmt fsf vsf
1.000000 1.000000 0.000 0.000 1.000000 1.000000 1.000000 1.000000
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
$# cid title
$# ssid msid sstyp mstyp sboxid mboxid spr mpr
1 3 3 3 0 0 0 0
$# fs fd dc vc vdc penchk bt dt
0.0100 0.000 0.000 0.000 0.000 0 0.000 1.0000E+7
$# sfs sfm sst mst sfst sfmt fsf vsf
1.000000 1.000000 0.000 0.000 1.000000 1.000000 1.000000 1.000000
I want to change the fifth line after the string
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
with a line from another file, frictionValues.txt.
What I am using is as follows:
sed -i -e '/^\*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE/{n;n;n;n;n;R frictionValues.txt' -e 'd}' input.txt
but this changes all five lines after the string, and it reads the values from frictionValues.txt two times. I want it to read only the first line and then copy it at every instance where it finds the string. Can anybody tell me how to do this with sed in-place editing like the command above?
This might work for you (I might be well off the mark as to what you want!):
sed '1s|.*|1{x;s/^/&/;x};/^\*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE/{n;n;n;n;n;G;s/.*\\n//}|;q' frictionValues.txt |
sed -i -f - input.txt
Explanation:
Build a sed script from the first line of frictionValues.txt that stuffs that first line into the hold space (HS). The remaining script is as before, but instead of R frictionValues.txt it appends the HS to the pattern space using G.
Run the above sed script against the input.txt file using the -f - switch; the sed script is passed via stdin from the previous pipeline.
Try with this:
Content of frictionValues.txt:
monday
tuesday
Content of input.txt will be the same that you pasted in the question.
Content of script.sed:
## Match literal string.
/^\*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE/ {
## Append next five lines.
N
N
N
N
N
## Delete the last one.
s/\(^.*\)\n.*$/\1/
## Print the rest of lines.
p
## Queue a line from external file.
R frictionValues.txt
## Read next line (it will be the external one).
b
}
## Print line.
p
Run it like:
sed -nf script.sed input.txt
With following result:
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
$# cid title
$# ssid msid sstyp mstyp sboxid mboxid spr mpr
1 2 3 3 0 0 0 0
$# fs fd dc vc vdc penchk bt dt
monday
$# sfs sfm sst mst sfst sfmt fsf vsf
1.000000 1.000000 0.000 0.000 1.000000 1.000000 1.000000 1.000000
*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE
$# cid title
$# ssid msid sstyp mstyp sboxid mboxid spr mpr
1 3 3 3 0 0 0 0
$# fs fd dc vc vdc penchk bt dt
tuesday
$# sfs sfm sst mst sfst sfmt fsf vsf
1.000000 1.000000 0.000 0.000 1.000000 1.000000 1.000000 1.000000
Here is a two-step approach.
First, find out the line number of the matching text:
linenum=`grep -n -m 1 '\*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE' input.txt | cut -d: -f1`
Now, combine sed commands to replace based on line number.
Change the data at linenum+5 to the value from "frictionValues.txt", and also delete the data at linenum+5:
sed -e "$((linenum+5)) c `cat frictionValues.txt`" -e "$((linenum+5)) d" input.txt
Assumptions
frictionValues.txt - has only one line
You are using one of the modern Linux OSs
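As an awk alternative (a sketch, not the sed approach above), you can read only the first line of frictionValues.txt and substitute it at the fifth line after every occurrence of the string, which is what the question asks for, then move the result back over input.txt:
$ awk 'NR==FNR { if (FNR==1) repl=$0; next }
       /^\*CONTACT_FORMING_ONE_WAY_SURFACE_TO_SURFACE/ { skip=FNR+5 }
       FNR==skip { print repl; next }
       { print }' frictionValues.txt input.txt > input.new && mv input.new input.txt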

linux + ksh + Round down or Round up - float numbers

In my ksh script I need to work with integer numbers only.
Sometimes I get float numbers such as 3.49 or 4.8, etc.,
so I need to convert the float numbers to integers according to the following rules (examples):
3.49 will be 3
2.9 will be 3
4.1 will be 4
23.51 will be 24
982.4999 will be 982
10.5 will be 11 (in this example, if the fractional part is .5 then it is rounded up)
Please advise how to do this in ksh or awk or perl,
or
any other language that can be run from my ksh script.
After a brief Google session, I found that printf seems to be able to do the job, at least in bash (I couldn't find an online interpreter that does ksh):
printf "%0.f\n" 4.51
5
printf "%0.f\n" 4.49
4
Code at: http://ideone.com/nEFYF
Note: perl might be overkill, like Marius says, but here's a perl way:
The perl module Math::Round seems to handle the job.
One-liner:
perl -MMath::Round -we 'print round $ARGV[0]' 12.49
Script:
use v5.10;
use Math::Round;
my @list = (3.49, 2.9, 4.1, 23.51, 982.4999);
say round $_ for @list;
Script output:
3
3
4
24
982
In awk you can use the int() function to truncate the value of a floating point number to make it an integer.
[jaypal:~/Temp] cat f
3.49 will be 3
2.9 will be 3
4.1 will be 4
23.51 will be 24
982.4999 will be 982
[jaypal:~/Temp] awk '{x=int($1); print $0,x}' f
3.49 will be 3 3
2.9 will be 3 2
4.1 will be 4 4
23.51 will be 24 23
982.4999 will be 982 982
To round off, you can do something like this:
[jaypal:~/Temp] awk '{x=$1+0.5; y=int(x); print $0,y}' f
3.49 will be 3 3
2.9 will be 3 3
4.1 will be 4 4
23.51 will be 24 24
982.4999 will be 982 982
Note: I am not sure how you would like to handle numbers like 2.5. The above method will return 3 for 2.5.
Versions of ksh that do non-integer math probably have floor(), trunc(), and round() functions. I can't check them all, but at least on my Mac (Lion), I get this:
$ y=3.49
$ print $(( round(y) ))
3
$ y=3.51
$ print $(( round(y) ))
4
$ (( p = round(y) ))
$ print $p
4
$
In perl: my $i = int($f+0.5);. It should be similar in the others, assuming they have a convert-to-integer or floor function. Or, if like JavaScript they have a Math.round function, it could be used directly.
