plotting energy generation rate in gnuplot

I'm trying to make a graph for my astronomy homework in which I plot the energy generation rate of the proton-proton chain divided by density vs. the temperature. I have the equation:
q/rho = ((2.4 * 10**(4))* X**(2))/T9**(2/3) * exp(-3.38/T9**(1/3))
where: X**2 = the mass fraction squared = 0.5041
T9 = temperature/10**9
The only thing that changes is T9; everything else is a constant.
I've tried to plot this in gnuplot, but I always get just a straight line. I've adjusted the ranges and used logscale, but it still shows a straight line.
Any ideas please?

gnuplot requires a decimal point to indicate where a number is to be treated as a floating point quantity rather than an integer. So 2/3 is zero but 2./3. is 0.6666... and so on.
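For example, a minimal corrected sketch (the T9 range here is an assumption, chosen to make the curvature visible):
X2 = 0.5041                                        # mass fraction squared
q(T9) = (2.4e4 * X2 / T9**(2./3.)) * exp(-3.38 / T9**(1./3.))
set logscale y
plot [0.005:0.05] q(x) title "pp-chain q/rho"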

Related

Why does my fit of a logarithm function look so wrong

I'm plotting this dataset and fitting a logarithmic function to it, but for some reason the fit comes out badly wrong. At some point I got a good enough fit, but then I re-plotted and got the bad fit again. The very first data point was 0.0 0.0076, but I changed it to 0.001 0.0076 to avoid the asymptote.
I'm using this for the fit (not exactly the one used for the image above, but I'm now testing with this one and get the bad fit as well):
f(x) = a*log(k*x + b)
fit f(x) 'R_B/R_B.txt' via a, k, b
And the output is this
Also, sometimes it says 7 iterations were needed, as in the screenshot above, other times only 1; and when it produced the "correct" fit it ran about 35 iterations and got a = 32, if I remember correctly.
Edit: here is the good one again; this is the plot I got. And again, after re-plotting I got that weird fit. Curiously, when the 0.0 0.0076 point is present and the good fit is about to be produced, gnuplot says "Undefined value during function evaluation", but that message is not shown when I get the bad one.
Do you know why I keep getting this inconsistency? Thanks for your help.
As I already mentioned in the comments, the method of fitting antiderivatives is much better than fitting derivatives, because numerically computed derivatives are strongly scattered even when the data is only slightly scattered.
The principle of the method of fitting an integral equation (obtained from the original equation to be fitted) is explained in https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales . The application to the case of y = a*log(k*x + b) is shown below.
Numerical calculation:
In order to get an even better result (according to some specified fitting criterion), one can use the above parameter values as initial values for an iterative nonlinear-regression method implemented in convenient software.
NOTE: The integral equation used in the present case is shown in the figure above.
NOTE: In the figure above one can compare the result of fitting the integral equation with the result of fitting derivatives.
Acknowledgements: Alex Sveshnikov did very good work in applying the method of regression with derivatives. This allows an interesting and enlightening comparison. If the goal is only to compute approximate parameter values to be used as starting points for nonlinear-regression software, both methods are quite equivalent. Nevertheless, the method with the integral equation appears preferable in case of scattered data.
UPDATE (after Alex Sveshnikov updated his answer)
The figure below was drawn using Alex Sveshnikov's result followed by the iterative fitting method.
The two curves are almost indistinguishable. This shows that (in the present case) fitting the integral equation is almost sufficient on its own, without further treatment.
Of course it is not always so satisfying; here this is due to the low scatter of the data.
In ADDITION, an answer to a question raised in the comments by CosmeticMichu:
The problem here is that the fit algorithm starts with "wrong" approximations for the parameters a, k, and b, so during the minimization it finds a local minimum, not the global one. You can improve the result if you provide the algorithm with starting values close to the optimal ones. For example, let's start with the following parameters:
gnuplot> a=47.5087
gnuplot> k=0.226
gnuplot> b=1.0016
gnuplot> f(x)=a*log(k*x+b)
gnuplot> fit f(x) 'R_B.txt' via a,k,b
....
....
....
After 40 iterations the fit converged.
final sum of squares of residuals : 16.2185
rel. change during last iteration : -7.6943e-06
degrees of freedom (FIT_NDF) : 18
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.949225
variance of residuals (reduced chisquare) = WSSR/ndf : 0.901027
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = 35.0415 +/- 2.302 (6.57%)
k = 0.372381 +/- 0.0461 (12.38%)
b = 1.07012 +/- 0.02016 (1.884%)
correlation matrix of the fit parameters:
a k b
a 1.000
k -0.994 1.000
b 0.467 -0.531 1.000
The resulting plot is
Now the question is: how can you find "good" initial approximations for your parameters? Well, you start with
y = a*log(k*x + b)
If you differentiate this equation you get
dy/dx = a*k/(k*x + b)
or
1/(a*k) = (1/(dy/dx)) / (k*x + b)
The left-hand side of this equation is some constant 'C', so the expression on the right-hand side should be equal to this constant as well:
1/(dy/dx) = C*(k*x + b) = C*k*x + C*b
In other words, the reciprocal of the derivative of your data should be well approximated by a linear function. So, from your data x[i], y[i] you can construct the reciprocal derivatives x[i], (x[i+1]-x[i])/(y[i+1]-y[i]) and fit a straight line to them:
The fit gives the following values:
C*k = 0.0236179
C*b = 0.106268
Now we need to find the values of a and C. Let's say that we want the resulting graph to pass close to the starting and ending points of our dataset. That means that we want
a*log(k*x1 + b) = y1
a*log(k*xn + b) = yn
Thus,
a*log((C*k*x1 + C*b)/C) = a*log(C*k*x1 + C*b) - a*log(C) = y1
a*log((C*k*xn + C*b)/C) = a*log(C*k*xn + C*b) - a*log(C) = yn
By subtracting the equations we get the value for a:
a = (yn-y1)/log((C*k*xn + C*b)/(C*k*x1 + C*b)) = 47.51
Then,
log(k*x1+b) = y1/a
k*x1+b = exp(y1/a)
C*k*x1+C*b = C*exp(y1/a)
From this we can calculate C:
C = (C*k*x1+C*b)/exp(y1/a)
and finally find the k and b:
k=0.226
b=1.0016
These are the values used above for finding the better fit.
UPDATE
You can automate the process described above with the following script:
# Name of the file with the data
data='R_B.txt'
# The coordinates of the last data point
xn=NaN
yn=NaN
# The temporary coordinates of a data point used to calculate a derivative
x0=NaN
y0=NaN
linearFit(x)=Ck*x+Cb
fit linearFit(x) data using (xn=$1,dx=$1-x0,x0=$1,$1):(yn=$2,dy=$2-y0,y0=$2,dx/dy) via Ck, Cb
# The coordinates of the first data point
x1=NaN
y1=NaN
plot data using (x1=$1):(y1=$2) every ::0::0
a=(yn-y1)/log((Ck*xn+Cb)/(Ck*x1+Cb))
C=(Ck*x1+Cb)/exp(y1/a)
k=Ck/C
b=Cb/C
f(x)=a*log(k*x+b)
fit f(x) data via a,k,b
plot data, f(x)
pause -1

fit function in gnuplot at x-log(y) scale

My data has two columns: a date (in Month/Year format) and a corresponding value. I plotted this data on an x-log(y) scale using gnuplot, and it looks very close to a straight line. I would like to draw a straight line through it using curve fitting. I tried a few fit functions but had no success.
I tried the following fit functions:
f(x) = a * x + b (not a straight line on the x-log(y) scale)
f(x) = a*10**x + b (overflow error)
Any help in this regard would be appreciated.
The overflow error is most likely due to at least one large value of x. If you can rescale the x data so that there is no overflow when calculating 10**x, the fit might work. As a test, try something like:
f(x) = a * 10**(x / 1000.0) + b
Inspecting the maximum value of x will give you an idea of the scaling value, shown as 1000.0 in my example.
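An alternative worth noting (not from the answer above, but standard practice): since the data looks straight on an x-log(y) scale, you can fit a straight line to log(y) directly, which avoids evaluating 10**x entirely. A minimal sketch, assuming the x column is already numeric and the file is named 'data.txt' (hypothetical name):
g(x) = a*x + b
fit g(x) 'data.txt' using 1:(log10($2)) via a, b
set logscale y
plot 'data.txt' using 1:2 title "data", 10**g(x) title "fit"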

finding value of a point between measured points on a 2D plane

I'm trying to find the best way to calculate this. On a 2D plane I have fixed points, each with an instantaneous measurement value. The coordinates of these points are known. I want to predict the value at a movable point between these fixed points. The movable point's coordinates will be known, so the distances between the points are known as well.
This is comparable to temperature readings or elevation in topography. In this case I want to predict the ionospheric TEC at the mobile point from the fixed-point measurements. The fixed-point measurements are smoothed over time; however, I do not want to have to store previous values of the mobile-point estimate in RAM.
Would some sort of gradient function be the way to go here?
This is the same algorithm as interpolating the height of a point within a triangle.
In your case you don't have z values for heights, but some other float value at each triangle vertex; it's the same concept, still 3D points.
Given 3D triangle points p, q, r and test point pt, pseudo code from the math above is something like this:
Vector3 v1 = q - p;
Vector3 v2 = r - p;
Vector3 n = v1.CrossProduct(v2);
if n.z is not zero
return ((n.x * (pt.x - p.x) + n.y * (pt.y - p.y)) / -n.z) + p.z
As you indicate in your comment to #Phpdevpad, you do have 3 fixed points so this will work.
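For reference, here is the same computation as a runnable gnuplot sketch (the coordinates and values are made-up examples): the measured value plays the role of z, and the plane through the three fixed points is evaluated at the movable point.
# three fixed points: coordinates (x,y) and measured values (hypothetical)
px = 0.0; py = 0.0; pv = 10.0
qx = 1.0; qy = 0.0; qv = 12.0
rx = 0.0; ry = 1.0; rv = 11.0
# edge vectors and plane normal n = v1 x v2 (the value acts as the z coordinate)
v1x = qx-px; v1y = qy-py; v1z = qv-pv
v2x = rx-px; v2y = ry-py; v2z = rv-pv
nx = v1y*v2z - v1z*v2y
ny = v1z*v2x - v1x*v2z
nz = v1x*v2y - v1y*v2x
# interpolated value at a movable point (x,y); requires nz != 0
value(x,y) = (nx*(x-px) + ny*(y-py)) / (-nz) + pv
print value(0.5, 0.5)    # 11.5 for these example points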
You can try contour plots, especially contour lines. Simply use a Delaunay triangulation of the points and linear interpolation along the edges. You can try my PHP implementation at https://contourplot.codeplex.com for geographic maps. Another option is the CONREC algorithm by Paul Bourke.

Linear Fit does not adjust b independently from a

I'm using the following gnuplot script to plot a linear fit:
#!/usr/bin/gnuplot
set term cairolatex
set output "linear_fit.tex"
c = 299792458.
x(x) = c / x
y(x) = x
h(x) = a * x + b
fit h(x) "linear_fit.dat" u (x($1)):(y($2)) via a,b
plot "linear_fit.dat" u (x($1)):(y($2)) w points title "", \
(h(x)) with lines linecolor rgb "black" title "Linear Fit"
However, after the iterations converge, b is always 1.0: https://dpaste.de/ozReq/
How can I get gnuplot to adjust b as well as a?
Update: Repeating the fit command a few hundred times with alternating via a/via b does give pretty good results, but that just can't be how it's supposed to be done.
Update 2: Here's the data in linear_fit.dat:
# lambda, V
360e-9 1.119
360e-9 1.148
360e-9 1.145
400e-9 0.949
400e-9 0.993
400e-9 0.971
440e-9 0.883
440e-9 0.875
440e-9 0.863
490e-9 0.737
490e-9 0.728
490e-9 0.755
540e-9 0.575
540e-9 0.571
540e-9 0.592
590e-9 0.457
590e-9 0.455
590e-9 0.482
I think your troubles stem from the fact that your x-values are very large (on the order of 10^14 to 10^15 after the c/lambda transformation).
If you do not provide gnuplot with an initial guess for a and b, it will assume a=1 and b=1 as starting points for the fit. However, this is a poor initial guess:
Please note the log scale on both the x- and y-axis.
From the gnuplot documentation:
fit may, and often will get "lost" if started far from a solution, where SSR is large and changing slowly as the parameters are varied, or it may reach a numerically unstable region (e.g., too large a number causing a floating point overflow) which results in an "undefined value" message or gnuplot halting.
To improve the chances of finding the global optimum, you should set the starting values at least roughly in the vicinity of the solution, e.g., within an order of magnitude, if possible. The closer your starting values are to the solution, the less chance of stopping at another minimum. One way to find starting values is to plot data and the fitting function on the same graph and change parameter values and replot until reasonable similarity is reached. The same plot is also useful to check whether the fit stopped at a minimum with a poor fit.
In your case, such starting values could be:
a = 1e-15
b = -0.5
I obtained these values by eye-balling your range of values.
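In gnuplot terms, that means assigning the starting values before calling fit, reusing the definitions from the question's script:
a = 1e-15
b = -0.5
fit h(x) "linear_fit.dat" u (x($1)):(y($2)) via a,b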
With those starting values, the linear fit results in:
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = 1.97355e-015 +/- 6.237e-017 (3.161%)
b = -0.5 +/- 0.04153 (8.306%)
Which looks like this:
You can play with the control setting of fit (such as setting FIT_LIMIT = 1.e-35) or the starting values to achieve a better fit than this.
EDIT
While I still have not been able to coax gnuplot into modifying both parameters a, b at the same time, I found an alternate approach using R. I am aware that there are many other (scripting) languages that can perform a linear fit and this question was about gnuplot. However, the required effort with R appeared to be minimal.
Here's an example, which, when saved as linear_fit.R and called with
R CMD BATCH linear_fit.R
will provide the two coefficients of the linear fit, that gnuplot failed to provide.
y <- c(1.119, 1.148, 1.145, 0.949, 0.993, 0.971, 0.883, 0.875, 0.863,
0.737, 0.728, 0.755, 0.575, 0.571, 0.592, 0.457, 0.455, 0.482)
x <- c(3.60E-007, 3.60E-007, 3.60E-007, 4.00E-007, 4.00E-007,
4.00E-007, 4.40E-007, 4.40E-007, 4.40E-007, 4.90E-007,
4.90E-007, 4.90E-007, 5.40E-007, 5.40E-007, 5.40E-007,
5.90E-007, 5.90E-007, 5.90E-007)
c = 299792458.
x <- c/x
lm.out <- lm(y ~ x)
svg("linear_fit.svg")
plot(x,y)
abline(lm.out,col="red")
summary(lm.out)
You will end up with an svg-file that contains the plot and a linear_fit.Rout text file. In there you'll find the following coefficients:
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -5.429e-01 4.012e-02 -13.53 3.55e-10 ***
x 2.037e-15 6.026e-17 33.80 2.61e-16 ***
So, in the terminology of the original question, we obtain:
a = 2.037e-15
b = -5.429e-01
These values are very close to the values you quoted from alternating the fit.
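To draw R's result back in gnuplot, you can plug the coefficients in directly; a sketch reusing the definitions from the question's script (coefficient values taken from the R output above):
a = 2.037e-15
b = -5.429e-01
plot "linear_fit.dat" u (x($1)):(y($2)) w points title "", \
     h(x) with lines linecolor rgb "black" title "Linear Fit"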
In case the comments get purged, these questions were identified as related:
What is gnuplot's internal representation of floating point numbers?
Gnuplot behaves oddly in polynomial fit. Why is that?

Histogram using gnuplot?

I know how to create a histogram (just use "with boxes") in gnuplot if my .dat file already has properly binned data. Is there a way to take a list of numbers and have gnuplot provide a histogram based on ranges and bin sizes the user provides?
Yes, and it's quick and simple, though very hidden:
binwidth=5
bin(x,width)=width*floor(x/width)
plot 'datafile' using (bin($1,binwidth)):(1.0) smooth freq with boxes
Check out help smooth freq to see why the above makes a histogram.
To deal with ranges, just set the xrange variable.
I have a couple of corrections/additions to Born2Smile's very useful answer:
Empty bins caused the box for an adjacent bin to incorrectly extend into their space; avoid this using set boxwidth binwidth.
In Born2Smile's version, bins are rendered as centered on their lower bound. Strictly they ought to extend from the lower bound to the upper bound. This can be corrected by modifying the bin function: bin(x,width)=width*floor(x/width) + width/2.0 (both corrections are combined in the sketch below).
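A minimal version of Born2Smile's recipe with both fixes applied, assuming the same 'datafile' as above:
binwidth=5
bin(x,width)=width*floor(x/width) + width/2.0
set boxwidth binwidth
plot 'datafile' using (bin($1,binwidth)):(1.0) smooth freq with boxes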
Be very careful: all of the answers on this page implicitly take the decision of where the binning starts, the left-hand edge of the left-most bin if you like, out of the user's hands. If the user combines any of these binning functions with their own decision about where binning starts (as is done on the blog linked above), the functions above are all incorrect. With an arbitrary starting point for binning 'Min', the correct function is:
bin(x) = width*(floor((x-Min)/width)+0.5) + Min
You can see why this is correct sequentially (it helps to draw a few bins and a point somewhere in one of them). Subtract Min from your data point to see how far into the binning range it is. Then divide by binwidth so that you're effectively working in units of 'bins'. Then 'floor' the result to go to the left-hand edge of that bin, add 0.5 to go to the middle of the bin, multiply by the width so that you're no longer working in units of bins but in an absolute scale again, then finally add back on the Min offset you subtracted at the start.
Consider this function in action:
Min = 0.25 # where binning starts
Max = 2.25 # where binning ends
n = 2 # the number of bins
width = (Max-Min)/n # binwidth; evaluates to 1.0
bin(x) = width*(floor((x-Min)/width)+0.5) + Min
e.g. the value 1.1 truly falls in the left bin:
this function correctly maps it to the centre of the left bin (0.75);
Born2Smile's answer, bin(x)=width*floor(x/width), incorrectly maps it to 1;
mas90's answer, bin(x)=width*floor(x/width) + binwidth/2.0, incorrectly maps it to 1.5.
Born2Smile's answer is only correct if the bin boundaries occur at (n+0.5)*binwidth (where n runs over integers). mas90's answer is only correct if the bin boundaries occur at n*binwidth.
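A quick sanity check of the example above:
Min = 0.25; Max = 2.25; n = 2
width = (Max-Min)/n                               # 1.0
bin(x) = width*(floor((x-Min)/width)+0.5) + Min
print bin(1.1)                                    # 0.75, the centre of the left bin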
Do you want to plot a graph like this one?
Yes? Then have a look at my blog article: http://gnuplot-surprising.blogspot.com/2011/09/statistic-analysis-and-histogram.html
Key lines from the code:
n=100 #number of intervals
max=3. #max value
min=-3. #min value
width=(max-min)/n #interval width
#function used to map a value to the intervals
hist(x,width)=width*floor(x/width)+width/2.0
set boxwidth width*0.9
set style fill solid 0.5 # fill style
#count and plot
plot "data.dat" u (hist($1,width)):(1.0) smooth freq w boxes lc rgb"green" notitle
As usual, Gnuplot is a fantastic tool for plotting sweet-looking graphs, and it can be made to perform all sorts of calculations. However, it is intended to plot data rather than to serve as a calculator, and it is often easier to use an external programme (e.g. Octave) for the more "complicated" calculations, save the result to a file, and then use Gnuplot to produce the graph. For the above problem, check out the hist function in Octave using [freq,bins]=hist(data), then plot this in Gnuplot using
set style histogram rowstacked gap 0
set style fill solid 0.5 border lt -1
plot "./data.dat" smooth freq with boxes
I have found this discussion extremely useful, but I have experienced some "rounding off" problems.
More precisely, using a binwidth of 0.05, I have noticed that, with the techniques presented here above, data points which read 0.1 and 0.15 fall in the same bin. This (obviously unwanted behaviour) is most likely due to the "floor" function.
Hereafter is my small contribution to try to circumvent this.
bin(x,width,n) = x<=n*width ? width*(n-1) + 0.5*width : bin(x,width,n+1)
binwidth = 0.05
set boxwidth binwidth
plot "data.dat" u (bin($1,binwidth,1)):(1.0) smooth freq with boxes
This recursive method is for x >= 0; one could extend it with more conditional statements to handle negative values as well.
We do not need to use a recursive method; it may be slow. My solution is to use a user-defined function rint instead of the intrinsic function int or floor.
rint(x)=(x-int(x)>0.9999)?int(x)+1:int(x)
This function will give rint(0.0003/0.0001)=3, while int(0.0003/0.0001)=floor(0.0003/0.0001)=2.
Why? Please look at Perl int function and padding zeros
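To see the difference, a quick check (printed values assume standard double-precision arithmetic; the definition is repeated for completeness):
rint(x)=(x-int(x)>0.9999)?int(x)+1:int(x)
print 0.0003/0.0001         # slightly below 3 due to floating-point rounding
print int(0.0003/0.0001)    # 2
print rint(0.0003/0.0001)   # 3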
I have a little modification to Born2Smile's solution.
I know this doesn't make much sense, but you may want it just in case: if your data is integer-valued and you need a float bin size (maybe for comparison with another set of data, or to plot density on a finer grid), you will need to add a random number between 0 and 1 inside floor. Otherwise there will be spikes due to rounding errors. floor(x/width+0.5) will not do, because it creates a pattern that is not true to the original data.
binwidth=0.3
bin(x,width)=width*floor(x/width+rand(0))
With respect to binning functions, the results of the functions offered so far did not match my expectations. Namely, with a binwidth of 0.001 these functions were centering the bins on 0.0005 points, whereas I feel it's more intuitive to have the bins centered on 0.001 boundaries.
In other words, I'd like to have
Bin 0.001 contain data from 0.0005 to 0.0014
Bin 0.002 contain data from 0.0015 to 0.0024
...
The binning function I came up with is
my_bin(x,width) = width*(floor(x/width+0.5))
Here's a script to compare some of the offered bin functions to this one:
rint(x) = (x-int(x)>0.9999)?int(x)+1:int(x)
bin(x,width) = width*rint(x/width) + width/2.0
binc(x,width) = width*(int(x/width)+0.5)
mitar_bin(x,width) = width*floor(x/width) + width/2.0
my_bin(x,width) = width*(floor(x/width+0.5))
binwidth = 0.001
data_list = "-0.1386 -0.1383 -0.1375 -0.0015 -0.0005 0.0005 0.0015 0.1375 0.1383 0.1386"
my_line = sprintf("%7s %7s %7s %7s %7s","data","bin()","binc()","mitar()","my_bin()")
print my_line
do for [i in data_list] {
iN = i + 0   # convert the string token from data_list to a number
my_line = sprintf("%+.4f %+.4f %+.4f %+.4f %+.4f",iN,bin(iN,binwidth),binc(iN,binwidth),mitar_bin(iN,binwidth),my_bin(iN,binwidth))
print my_line
}
and here's the output
data bin() binc() mitar() my_bin()
-0.1386 -0.1375 -0.1375 -0.1385 -0.1390
-0.1383 -0.1375 -0.1375 -0.1385 -0.1380
-0.1375 -0.1365 -0.1365 -0.1375 -0.1380
-0.0015 -0.0005 -0.0005 -0.0015 -0.0010
-0.0005 +0.0005 +0.0005 -0.0005 +0.0000
+0.0005 +0.0005 +0.0005 +0.0005 +0.0010
+0.0015 +0.0015 +0.0015 +0.0015 +0.0020
+0.1375 +0.1375 +0.1375 +0.1375 +0.1380
+0.1383 +0.1385 +0.1385 +0.1385 +0.1380
+0.1386 +0.1385 +0.1385 +0.1385 +0.1390
Different numbers of bins on the same dataset can reveal different features of the data.
Unfortunately, there is no universal best method for determining the number of bins.
Among many alternatives, one powerful method is the Freedman–Diaconis rule, which automatically determines the bin width from statistics of the given dataset.
Accordingly, the following can be used to utilise the Freedman–Diaconis rule in a gnuplot script:
Say you have a file containing a single column of samples, samplesFile:
# samples
0.12345
1.23232
...
The following (which is based on ChrisW's answer) may be embedded into an existing gnuplot script:
...
## preceding gnuplot commands
...
#
samples = "samplesFile"
stats samples nooutput
N = floor(STATS_records)
samplesMin = STATS_min
samplesMax = STATS_max
# Freedman–Diaconis formula for bin-width size estimation
lowQuartile = STATS_lo_quartile
upQuartile = STATS_up_quartile
IQR = upQuartile - lowQuartile
width = 2*IQR/(N**(1.0/3.0))
bin(x) = width*(floor((x-samplesMin)/width)+0.5) + samplesMin
plot \
    samples u (bin($1)):(1.0/(N*width)) t "Output" w l lw 1 smooth freq
